In this part of the tutorial, we illustrate an image processing task frequently encountered in natural or material science: extracting and labeling the pixels that belong to objects of interest. Such an operation is called image segmentation.
Image segmentation typically requires performing a succession of different operations on the image of interest; this second part of the tutorial is therefore an opportunity to use concepts introduced in the first part, such as the manipulation of NumPy arrays or the filtering of images.
As an example, we will use a scanning electron microscopy image of a multiphase glass. Let us start by opening the image.
In [ ]:
from __future__ import division, print_function
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt, cm
In [ ]:
from skimage import io
from skimage import img_as_float
In [ ]:
im = io.imread('../images/phase_separation.png')
In [ ]:
plt.imshow(im, cmap='gray')
In [ ]:
im.dtype, im.shape
For convenience, we first remove the information bar at the bottom, in order to retain only the region of the image with the blobs of interest. This amounts to slicing away the last rows of the array, using NumPy's convenient slicing syntax.
To determine how many rows to remove, we can either rely on visual inspection, or use a more robust approach based on NumPy to find the first completely dark row.
In [ ]:
phase_separation = im[:947]
plt.imshow(phase_separation, cmap='gray')
In [ ]:
np.nonzero(np.all(im < 0.1 * im.max(), axis=1))[0][0]
In [ ]:
from skimage import exposure
In [ ]:
histogram = exposure.histogram(phase_separation)
plt.plot(histogram[1], histogram[0])
plt.xlabel('gray value')
plt.ylabel('number of pixels')
plt.title('Histogram of gray values')
Two peaks are clearly visible in the histogram, but they overlap strongly. What happens if we threshold the image at a value that separates the two peaks?
To compute the threshold automatically, we use Otsu's method, which chooses the threshold so as to best separate the gray values of background and foreground.
In [ ]:
from skimage import filters
In [ ]:
threshold = filters.threshold_otsu(phase_separation)
print(threshold)
In [ ]:
fig, ax = plt.subplots(ncols=2, figsize=(12, 8))
ax[0].imshow(phase_separation, cmap='gray')
ax[0].contour(phase_separation, [threshold])
ax[1].imshow(phase_separation < threshold, cmap='gray')
To improve the thresholding, we first filter the image so that gray values are more uniform inside each of the two phases, and better separated between them. Filters used for this purpose are called denoising filters, since their action amounts to reducing the intensity of the noise in the image.
Zooming in on a part of the image that should be uniform illustrates the concept of noise: the image shows random variations of gray levels that originate from the imaging process. Noise can be due to low photon counts or to electronic noise in the sensor, among other possible sources.
In [ ]:
plt.imshow(phase_separation[390:410, 820:840], cmap='gray',
           interpolation='nearest')
plt.colorbar()
print(phase_separation[390:410, 820:840].std())
Several denoising filters average together pixels that are close to each other. If the noise is not spatially correlated, random noise fluctuations will be strongly attenuated by this averaging.
One of the most common denoising filters is called the median filter: it replaces the value of a pixel by the median gray value inside a neighbourhood of the pixel. Taking the median gray value preserves edges much better than taking the mean gray value.
Here we use a square neighbourhood of size 7x7: the larger the window, the stronger the attenuation of the noise, but this may come at the expense of precision in locating boundaries. Choosing a window size is therefore a trade-off between denoising and accuracy.
In [ ]:
from skimage import restoration
from skimage import filters
In [ ]:
median_filtered = filters.median(phase_separation, np.ones((7, 7)))
In [ ]:
plt.imshow(median_filtered, cmap='gray')
In [ ]:
plt.imshow(median_filtered[390:410, 820:840], cmap='gray',
           interpolation='nearest')
plt.colorbar()
print(median_filtered[390:410, 820:840].std())
Variations of gray levels inside zones that should be uniform are now smaller in range, and also spatially smoother.
Plotting the histogram of the denoised image shows that the gray levels of the two phases are now better separated.
In [ ]:
histo_median = exposure.histogram(median_filtered)
plt.plot(histo_median[1], histo_median[0])
As a consequence, Otsu thresholding now results in a much better segmentation.
In [ ]:
plt.imshow(phase_separation[:300, :300], cmap='gray')
plt.contour(median_filtered[:300, :300],
            [filters.threshold_otsu(median_filtered)])
Going further: Otsu thresholding with an adaptive threshold. For images with non-uniform illumination, it is possible to extend Otsu's method by computing different thresholds in different regions of the image.
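As an illustration, here is a minimal sketch that computes a separate Otsu threshold inside each tile of the image; the 256-pixel tile size is an arbitrary choice, and recent versions of scikit-image also provide filters.threshold_local (formerly threshold_adaptive) for local thresholding based on a local mean.
In [ ]:
# Sketch of block-wise Otsu thresholding (the 256-pixel tile size is an
# arbitrary choice; each tile is assumed to contain both phases).
def blockwise_otsu(image, block_size=256):
    result = np.zeros(image.shape, dtype=bool)
    for i in range(0, image.shape[0], block_size):
        for j in range(0, image.shape[1], block_size):
            tile = image[i:i + block_size, j:j + block_size]
            result[i:i + block_size, j:j + block_size] = \
                tile < filters.threshold_otsu(tile)
    return result

plt.imshow(blockwise_otsu(median_filtered), cmap='gray')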
In [ ]:
binary_image = median_filtered < filters.threshold_otsu(median_filtered)
In [ ]:
plt.imshow(binary_image, cmap='gray')
Exercise: try other denoising filters
Several other denoising filters are available in scikit-image.
The bilateral filter relies on ideas similar to those of the median or average filter: it averages a pixel with other pixels of a neighbourhood, but gives more weight to pixels whose gray value is close to that of the central pixel. The bilateral filter is therefore very efficient at preserving edges.
The total variation filter produces images that are piecewise constant. This filter optimizes a trade-off between closeness to the original image and the (L1) norm of the gradient, the latter term favouring piecewise-constant regions.
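As a possible starting point for this exercise, here is a minimal sketch calling both filters with default or arbitrary parameters; the parameter values (and, in older scikit-image releases, some parameter names) may need adjusting.
In [ ]:
# Possible starting point for the exercise: default parameters for the
# bilateral filter, and an arbitrary weight for the total variation filter.
float_im = img_as_float(phase_separation)
bilateral = restoration.denoise_bilateral(float_im)
tv = restoration.denoise_tv_chambolle(float_im, weight=0.1)

fig, ax = plt.subplots(ncols=2, figsize=(12, 5))
ax[0].imshow(bilateral, cmap='gray')
ax[0].set_title('bilateral')
ax[1].imshow(tv, cmap='gray')
ax[1].set_title('total variation')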
Going further: in addition to trying different denoising filters on the phase separation image, do the same on a synthetic image of a square, corrupted by artificial noise.
Further reading on denoising with scikit-image: see the Gallery example on denoising
Our approach above consisted in filtering the image so that it was as binary as possible, and then thresholding it. Other methods exist that do not threshold the image according to gray values only, but also use spatial information: they tend to attribute the same label to neighbouring pixels. A famous algorithm for such binary segmentations is the graph cuts algorithm. Although graph cuts are not yet available in scikit-image, other algorithms using spatial information are, such as the watershed algorithm or the random walker algorithm.
In [ ]:
blob_markers = median_filtered < 110   # pixels confidently inside the blobs
bg_markers = median_filtered > 160     # pixels confidently in the background
markers = np.zeros_like(phase_separation)
markers[blob_markers] = 2
markers[bg_markers] = 1
from skimage import morphology
# Flood the gradient (Sobel) image from the markers
watershed = morphology.watershed(filters.sobel(median_filtered), markers)
plt.imshow(watershed, cmap='gray')
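For comparison, the random walker algorithm from skimage.segmentation can reuse the same markers; since it can be slow and memory-hungry on the full image, the sketch below (with the default beta parameter, which may need tuning) is applied to a crop only.
In [ ]:
# Random walker segmentation on a crop, reusing the markers defined above.
from skimage import segmentation
rw_crop = segmentation.random_walker(img_as_float(median_filtered[:300, :300]),
                                     markers[:300, :300].astype(int))
plt.imshow(rw_crop, cmap='gray')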
If we use the denoising + thresholding approach, the result of the thresholding is not completely what we want: small spurious objects are detected, and small holes remain inside the objects. Such defects of the segmentation can be corrected, using the knowledge that no small holes should exist and that blobs have a minimal size.
Utility functions to modify binary images are found in the morphology submodule. Although mathematical morphology encompasses a large set of possible operations, we will only see here how to remove small objects. To learn more about mathematical morphology within scikit-image, please take a look at the dedicated tutorial.
In [ ]:
from skimage import morphology
In [ ]:
only_large_blobs = morphology.remove_small_objects(binary_image,
                                                   min_size=300)
plt.imshow(only_large_blobs, cmap='gray')
In [ ]:
only_large = np.logical_not(morphology.remove_small_objects(
                            np.logical_not(only_large_blobs),
                            min_size=300))
plt.imshow(only_large, cmap='gray')
The segmentation of foreground (objects) and background results in a binary image. In order to measure the properties of the different blobs, one must first attribute a different label to each blob (identified as a connected component of the foreground phase). Then, the utility function measure.regionprops can be used to compute several properties of the labeled regions.
Properties of the regions can be used to classify the objects, for example with scikit-learn (a small clustering sketch follows the measurements below).
In [ ]:
from skimage import measure
In [ ]:
labels = measure.label(only_large)
plt.imshow(labels, cmap='spectral')
In [ ]:
props = measure.regionprops(labels, phase_separation)
In [ ]:
areas = np.array([prop.area for prop in props])
perimeters = np.array([prop.perimeter for prop in props])
In [ ]:
plt.plot(np.sort(perimeters**2./areas), 'o')
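To hint at how region properties can feed into scikit-learn, here is a minimal sketch (assuming scikit-learn is installed) that clusters the blobs into two groups from their area and their perimeter²/area shape factor; the number of clusters is an arbitrary choice.
In [ ]:
# Sketch of clustering blobs from their measured properties (assumes
# scikit-learn is installed; two clusters is an arbitrary choice).
from sklearn.cluster import KMeans

features = np.column_stack([areas, perimeters**2. / areas])
groups = KMeans(n_clusters=2).fit_predict(features)
print(groups)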
Other examples
Exercise: visualize an image where the color of a blob encodes its size (blobs of similar size have a similar color).
Exercise: visualize an image where only the most circular blobs are represented. Hint: this involves some manipulations of NumPy arrays.
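One possible approach for the first exercise is sketched below: build a look-up table that maps each label to the area of its blob, then index it with the label image (other solutions exist).
In [ ]:
# Sketch for the first exercise: a look-up table mapping each label to its
# area (label 0 is the background), indexed by the label image.
area_lut = np.concatenate(([0], areas))
size_image = area_lut[labels]
plt.imshow(size_image, cmap='gray')
plt.colorbar()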
If one wishes to process a single image, a lot of trial and error is possible, using interactive sessions and intermediate visualizations. Such a workflow typically allows one to tune parameter values, such as the size of the filtering window used for denoising, or the area of small spurious objects to be removed.
In the age of cheap CCD sensors, it is also frequent to deal with collections of images, for which one cannot afford to process each image individually. In such a case, the workflow has to be adapted:
- function parameters need to be set in a more robust manner, using statistical information such as the typical noise level of the image, or the typical size of objects in the image;
- it is good practice to divide the different array manipulations into several functions. Outside an IPython notebook, such functions would typically live in a dedicated module that could be imported from a script;
- applying the same operations to a collection of (independent) images is a typical example of an embarrassingly parallel workflow, which calls for multiprocessing. The joblib module provides a simple helper for running embarrassingly parallel for loops with multiprocessing (see the sketch after the processing loop below).
Let us first define two functions with a more robust handling of parameters.
In [ ]:
def remove_information_bar(image, value=0.1):
    """Crop the image above the first row entirely darker than value * image.max()."""
    value *= image.max()
    row_index = np.nonzero(np.all(image < value, axis=1))[0][0]
    return image[:row_index]
In [ ]:
from scipy import stats

def clean_image(binary_image):
    """Remove small spurious objects and small holes from a binary image."""
    labels = measure.label(binary_image)
    props = measure.regionprops(labels)
    areas = np.array([prop.area for prop in props])
    # Use the 90th percentile of blob areas as a reference for a "large" blob
    large_area = stats.scoreatpercentile(areas, 90)
    remove_small = morphology.remove_small_objects(binary_image,
                                                   large_area / 20)
    remove_holes = np.logical_not(morphology.remove_small_objects(
                                  np.logical_not(remove_small),
                                  large_area / 20))
    return remove_holes
In [ ]:
def process_blob_image(image):
    """Full pipeline: crop the information bar, denoise, threshold and clean."""
    image = remove_information_bar(image)
    image = filters.median(image, np.ones((7, 7)))
    binary_im = image < filters.threshold_otsu(image)
    binary_im = clean_image(binary_im)
    return binary_im
The glob module is very handy for retrieving lists of image file names using wildcard patterns.
In [ ]:
from glob import glob
filelist = glob('../images/phase_separation*.png')
filelist.sort()
print(filelist)
In [ ]:
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 8))
for index, filename in enumerate(filelist[1:]):
    print(filename)
    im = io.imread(filename)
    binary_im = process_blob_image(im)
    i, j = np.unravel_index(index, (2, 2))
    ax[i, j].imshow(binary_im, cmap='gray')
    ax[i, j].axis('off')
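As mentioned above, this loop is embarrassingly parallel and can be run with joblib; a minimal sketch follows (assuming joblib is installed, with an arbitrary choice of two worker processes). Note that, depending on the joblib version and platform, functions defined interactively may need to live in an importable module, as advocated above.
In [ ]:
# Parallel version of the loop above with joblib (assumes joblib is installed;
# n_jobs=2 is an arbitrary choice).
from joblib import Parallel, delayed

binary_images = Parallel(n_jobs=2)(
    delayed(process_blob_image)(io.imread(filename))
    for filename in filelist[1:])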
It is quite uncommon to achieve a successful segmentation in only one or two operations: typical images require some pre- and post-processing. However, a large number of image processing steps, each relying on hand-tuned parameters, can easily produce a pipeline that does not work as well on a different image.
The order in which the operations are performed also matters.
In [ ]:
crude_segmentation = phase_separation < filters.threshold_otsu(phase_separation)
In [ ]:
clean_crude = morphology.remove_small_objects(crude_segmentation, 300)
clean_crude = np.logical_not(morphology.remove_small_objects(
                             np.logical_not(clean_crude), 300))
plt.imshow(clean_crude[:200, :200], cmap='gray')
It would be possible to filter the image to smooth the boundaries, and then threshold again. However, it is more satisfying to first filter the image so that it is as binary as possible (which corresponds better to our prior information about the material), and then to threshold it.
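For completeness, here is a sketch of the alternative mentioned above: smooth the crude binary segmentation with the same 7x7 median filter, then threshold it again.
In [ ]:
# Sketch of the "smooth the binary image, then threshold again" alternative
# (the 7x7 window is the same arbitrary choice as before).
smoothed = filters.median(clean_crude.astype(np.uint8), np.ones((7, 7)))
plt.imshow(smoothed[:200, :200] > 0, cmap='gray')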
Going further: want to know more about image segmentation with scikit-image, or see different examples?