In [1]:
import diplib as dip
In [2]:
a = dip.Image((10, 20), 1)   # a 10x20 image (width 10, height 20) with one sample per pixel
a.Fill(3)                    # set all pixel values to 3
Indexing into an Image object works as it does for other array types in Python. The first index is x (horizontal), the second one is y (vertical). Note that b is a view of a and shares the same data segment.
In [3]:
b = a[0:4, 4:-1]
b.Fill(55)
a[:3, :10] = 100
a[5:7, 10:15] = 200
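To make the x-first dimension order explicit, we can query the image sizes (a quick check, not part of the original notebook; Size() is used the same way further down):
In [ ]:
print(a.Size(0))   # width (x): 10
print(a.Size(1))   # height (y): 20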
Images can be displayed using the Show method. 'normal' sets the display range, 0-255 in this case. By default, the image is linearly stretched between its minimum and maximum values.
In [4]:
a.Show('normal')
Operators are generally applied on a per-pixel basis.
In [5]:
m = a >= 100
m.Show()
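Other per-pixel operators work the same way; for example, simple arithmetic (a small sketch, not in the original notebook):
In [ ]:
(a / 2).Show('normal')   # per-pixel division by a constant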
Images can be indexed by binary images, resulting in a 1D image containing only the selected pixels. Note that the order depends on the internal linear pixel storage order, which is not consistent with any specific (row-major or column-major) order due to operations such as Rotation90 and Mirror.
In [6]:
a[m].Show('normal')
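We can verify that indexing with a mask indeed yields a 1D image, with one pixel for each set pixel in m (a quick check, not in the original notebook):
In [ ]:
print(a[m])   # prints the image info: a 1D scalar image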
Binary image indexing can also be used for assignment.
In [7]:
a[m] = 176
a.Show('normal')
An Image object uses the NumPy buffer interface. This means that you can use an Image object anywhere you can use a NumPy array, and you can use a NumPy array anywhere you would use an Image object. Here we create a NumPy array and display it as if it were an Image (remember that NumPy uses height as the first dimension and width as the second, the reverse of how PyDIP does it):
In [8]:
import numpy as np
b = np.random.rand(a.Size(1), a.Size(0))
dip.Show(b)
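As a quick check of this dimension reversal (a sketch added here, not in the original notebook), converting the Image to a NumPy array swaps the order of the sizes:
In [ ]:
print(a.Size(0), a.Size(1))    # PyDIP order: width (x) first, then height (y)
print(np.asarray(a).shape)     # NumPy view of the same data: (height, width)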
Then we add the NumPy array to our image:
In [9]:
a += 30 * b
a.Show()
Then we call a NumPy function with an Image as input, and a PyDIP function with a NumPy array as input:
In [10]:
print(np.mean(a))
print(dip.Mean(b)[0][0])
Note that the Image object can hold images of any number of dimensions, including 0 (a single pixel). The Show method only displays 1D or 2D images; for images with more dimensions, it displays a projection. Read this method's help to learn how to control the projection mode, etc. Alternatively, use dip.viewer.Show for an interactive image display that can show images with any number of dimensions.
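For example (a minimal sketch, not in the original notebook), Show applied to a 3D image displays a projection:
In [ ]:
c = dip.Image((30, 40, 10), 1)   # a 3D scalar image
c.Fill(0)
c[10:20, 15:25, 3:6] = 255       # a bright block somewhere in the volume
c.Show()                         # displays a 2D projection; see help(dip.Image.Show) for the projection options
# dip.viewer.Show(c)             # interactive viewer, works for any number of dimensions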
In [11]:
import matplotlib.pyplot as pp
a = dip.ImageRead('../cermet.ics')          # read an ICS file with DIPlib's own file reader
a.Show()
b = dip.Image(pp.imread('../erika.tif'))    # read a TIFF with matplotlib and wrap the array in an Image
b.Show()
Color images are supported too, though they work a bit differently from what you may be used to. A 2D color image is still seen as 2D; each pixel simply has three values. This affects indexing! Here we give a little demo; we recommend that you look through the tensor image demo notebook to learn more about images with more than one value per pixel.
In [12]:
a = dip.ImageRead('../DIP.tif')
print(a)
print(a[50:100, :]) # spatial indexing is the same as for other 2D images!
print(a(0)) # this is the red channel
print(a[128, 45]) # this returns a Python list with all the values for the pixel
print(a[128, 45][0]) # this returns the red value of one pixel
print(a(0)[128, 45]) # this also, but as a Python list
PyDIP knows a lot of color spaces. The Show method automatically converts to RGB for display.
In [13]:
print(a.ColorSpace())
b = dip.ColorSpaceManager.Convert(a, 'Lab')
print(b.ColorSpace())
b.Show()
a(2).Show()
b(2).Show()
There are many image filtering functions available. Because PyDIP does not yet have its own documentation, you will need to explore the DIPlib documentation. Function names and parameters in Python are identical to those in the DIPlib library, except that the output image is returned by the function rather than passed as an argument. Here we do some simple color filtering.
In [14]:
dip.Gauss(a, 5).Show()
In [15]:
b = dip.BilateralFilter(a, spatialSigmas=5, tonalSigma=30)
b.Show()
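Filter parameters can typically be given per dimension as well; for example, an anisotropic Gaussian (a sketch with arbitrary sigma values, not part of the original notebook):
In [ ]:
dip.Gauss(a, [10, 1]).Show()   # sigma 10 along x, sigma 1 along y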
Some filters are specific to gray-value images (these are called "scalar images" throughout the documentation). For example, all morphological filters currently require a scalar image.
In [16]:
b = dip.ColorSpaceManager.Convert(b, 'gray')
dip.Canny(b, upper=0.99).Show()
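As an example of a morphological filter (a sketch, not in the original notebook; the structuring element is given here simply as a size of 7 pixels, using the default elliptic shape):
In [ ]:
dip.Dilation(b, 7).Show()   # gray-value dilation with a disk of diameter 7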
dip.MeasurementTool can measure quite a lot of features for objects in an image. Here is a simple example of its use.
We first read the 'cermet' image and record its pixel size (this is an old example image; the actual pixel size has been lost over the years). If an image file contains the pixel size in its metadata, it is automatically recorded in the Image object and used by the measurement functions and some other functions. Note that pixels do not need to be isotropic; it is possible to give a different pixel size for each dimension.
In [17]:
a = dip.ImageReadICS('../cermet')
a.SetPixelSize(dip.PixelSize(dip.PhysicalQuantity(1, "um")))
print(a.PixelSize())
a.Show()
Next, we threshold and label the image, then measure some basic features. Because 'Solidity' depends on the 'ConvexArea' measurement, we get that one too in the output.
In [18]:
b = a < 120                       # threshold: the objects are dark
b = dip.EdgeObjectsRemove(b)      # remove objects touching the image edge
b = dip.Label(b, minSize=30)      # label connected components, discarding very small ones
m = dip.MeasurementTool.Measure(b, a, ['Size', 'Solidity', 'Statistics'])
print(m)
The dip.Measurement object m can be indexed in three levels: the measurement name ('Statistics'), the object number (30), and the measurement value within the selected measurement (2):
In [19]:
print(m['Statistics'][30])
print(m['Statistics'][30][2])
print(m[30]['Statistics'][2])
Leaving out one of the indices returns the full row or column:
In [20]:
print(m[30])
print(m['Solidity'])
These objects can be indexed further as above, or be converted to a NumPy array:
In [21]:
import numpy as np
print(np.array(m[30]))
print(np.array(m['Solidity']).transpose())
We can paint the objects with one of the measurements, which can be useful for display:
In [22]:
c = dip.ObjectToMeasurement(b, m['Solidity'])
c.Show(colormap='viridis')