Segmentation: Thresholding and Edge Detection

In this notebook our goal is to estimate the location and radius of spherical markers visible in a Cone-Beam CT volume.

We will use two approaches:

  1. Segment the fiducial using a thresholding approach and derive the sphere's radius from the segmentation. This approach is based solely on SimpleITK.
  2. Localize the fiducial's edges using the Canny edge detector and then fit a sphere to these edges using a least squares approach. This approach combines SimpleITK with scipy/numpy.

Note that all of the operations, both filtering and computations, are performed natively in 3D. This is the "magic" of ITK and SimpleITK at work.

The practical need for localizing spherical fiducials in CBCT images and additional algorithmic details are described in: Z. Yaniv, "Localizing spherical fiducials in C-arm based cone-beam CT", Med. Phys., Vol. 36(11), pp. 4957-4966.


In [ ]:
import SimpleITK as sitk

%run update_path_to_download_script
from downloaddata import fetch_data as fdata

%matplotlib notebook
import gui
import matplotlib.pyplot as plt

import numpy as np
from scipy import linalg

from ipywidgets import interact, fixed

Load the volume; it contains two spheres. You can either identify the regions of interest (ROIs) yourself or use the predefined rectangular ROIs specified below ((min_x, max_x), (min_y, max_y), (min_z, max_z)).

To evaluate the sensitivity of the algorithms to the image content (varying size and shape of the ROI) you should identify the ROIs yourself.


In [ ]:
spherical_fiducials_image = sitk.ReadImage(fdata("spherical_fiducials.mha"))
roi_list = [((280,320), (65,90), (8, 30)),
            ((200,240), (65,100), (15, 40))]

We use a GUI to specify a region of interest. The GUI below allows you to specify a box-shaped ROI. Draw a rectangle on the image (move and resize it) and specify the z range of the box using the range slider. You can then view the ROI overlaid onto the slices using the slice slider. The toolbar at the bottom of the figure allows you to zoom and pan. In zoom/pan mode the rectangle interaction is disabled. Once you exit zoom/pan mode (click the button again) you can specify a rectangle and interact with it.

We have already specified two ROIs containing the two spheres found in the data (second row below).

To evaluate the sensitivity of the two approaches used in this notebook, you should select the ROIs on your own and see how the different ROI sizes affect the results.


In [ ]:
roi_acquisition_interface = gui.ROIDataAquisition(spherical_fiducials_image)
roi_acquisition_interface.set_rois(roi_list)

Get the user-specified ROIs and select one of them.


In [ ]:
specified_rois = roi_acquisition_interface.get_rois()
# select the one ROI we will work on
ROI_INDEX = 0

roi = specified_rois[ROI_INDEX]
mask_value = 255

mask = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt8)
mask.CopyInformation(spherical_fiducials_image)
for x in range(roi[0][0], roi[0][1]+1):
    for y in range(roi[1][0], roi[1][1]+1):        
        for z in range(roi[2][0], roi[2][1]+1):
            mask[x,y,z] = mask_value
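
The triple loop above is written for clarity, but an equivalent and much faster way to construct the same mask is with numpy slicing (a sketch; note that numpy indexes in z,y,x order, the reverse of SimpleITK's x,y,z):


In [ ]:
mask_arr = np.zeros(sitk.GetArrayViewFromImage(spherical_fiducials_image).shape, dtype=np.uint8)
mask_arr[roi[2][0]:roi[2][1]+1, roi[1][0]:roi[1][1]+1, roi[0][0]:roi[0][1]+1] = mask_value
mask = sitk.GetImageFromArray(mask_arr)
mask.CopyInformation(spherical_fiducials_image)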

Thresholding based approach

To see whether this approach is appropriate we look at the histogram of intensity values inside the ROI. We know that the spheres have higher intensity values. Ideally we would have a bimodal distribution with clear separation between the sphere and background.


In [ ]:
intensity_values = sitk.GetArrayViewFromImage(spherical_fiducials_image)
# Note: the ROI bounds are inclusive and numpy indexing is in z,y,x order.
roi_intensity_values = intensity_values[roi[2][0]:roi[2][1]+1,
                                        roi[1][0]:roi[1][1]+1,
                                        roi[0][0]:roi[0][1]+1].flatten()
plt.figure()
plt.hist(roi_intensity_values, bins=100)
plt.title("Intensity Values in ROI")
plt.show()

Can you identify the region of the histogram associated with the sphere?

In our case it looks like we can automatically select a threshold separating the sphere from the background. We will use Otsu's method for threshold selection to segment the sphere and estimate its radius.


In [ ]:
# Set pixels that are in [min_intensity, otsu_threshold] to inside_value; values above otsu_threshold are
# set to outside_value. The spheres have higher intensity values than the background, so they are "outside".

inside_value = 0
outside_value = 255
number_of_histogram_bins = 100
mask_output = True

labeled_result = sitk.OtsuThreshold(spherical_fiducials_image, mask, inside_value, outside_value, 
                                   number_of_histogram_bins, mask_output, mask_value)

# Estimate the sphere radius from the segmented image using the LabelShapeStatisticsImageFilter.
label_shape_analysis = sitk.LabelShapeStatisticsImageFilter()
label_shape_analysis.SetBackgroundValue(inside_value)
label_shape_analysis.Execute(labeled_result)
print("The sphere's location is: {0:.2f}, {1:.2f}, {2:.2f}".format(*(label_shape_analysis.GetCentroid(outside_value))))
print("The sphere's radius is: {0:.2f}mm".format(label_shape_analysis.GetEquivalentSphericalRadius(outside_value)))

In [ ]:
# Visually evaluate the results of segmentation, just to make sure. Use the zoom tool, second from the right, to 
# inspect the segmentation.
gui.MultiImageDisplay(image_list = [sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, windowMaximum=-29611),
                                      sitk.sitkUInt8), labeled_result, opacity=0.5)],                   
                      title_list = ['thresholding result'])

Based on your visual inspection, did the automatic threshold correctly segment the sphere or did it over/under segment it?

If automatic thresholding did not provide the desired result, you can correct it by allowing the user to modify the threshold under visual inspection. Implement this approach below.


In [ ]:
# Your code here:
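
# One possible sketch (not the notebook's reference solution): use an
# ipywidgets slider to vary the threshold and visually inspect a single
# slice of the resulting segmentation. The slice range and the threshold
# limits below are assumptions, adjust them to your ROI and data.
def show_thresholded_slice(image, slice_index, threshold):
    thresholded = image > threshold
    plt.figure()
    plt.imshow(sitk.GetArrayFromImage(thresholded)[slice_index, :, :], cmap="gray")
    plt.title("threshold = {0}".format(threshold))
    plt.axis("off")
    plt.show()

interact(
    show_thresholded_slice,
    image=fixed(spherical_fiducials_image),
    slice_index=(roi[2][0], roi[2][1]),
    threshold=(-32768, 32767, 10),
)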

Edge detection based approach

In this approach we will localize the sphere's edges in 3D using SimpleITK. We then compute a least squares sphere that optimally fits the 3D points using scipy/numpy. The mathematical formulation we use is as follows:

Given $m$ points in $\mathbb{R}^n$, $m>n+1$, we want to fit them to a sphere such that the sum of the squared algebraic distances is minimized. The algebraic distance is: $$ \delta_i = \mathbf{p_i}^T\mathbf{p_i} - 2\mathbf{p_i}^T\mathbf{c} + \mathbf{c}^T\mathbf{c}-r^2 $$

The optimal sphere parameters are computed as: $$ [\mathbf{c}^*,r^*] = \arg\min_{\mathbf{c},r} \sum_{i=1}^m \delta_i^2 $$

Setting $k=\mathbf{c}^T\mathbf{c}-r^2$ we obtain the following linear equation system ($Ax=b$): $$ \left[\begin{array}{cc} -2\mathbf{p_1}^T & 1\\ \vdots & \vdots \\ -2\mathbf{p_m}^T & 1 \end{array} \right] \left[\begin{array}{c} \mathbf{c}\\ k \end{array} \right] = \left[\begin{array}{c} -\mathbf{p_1}^T\mathbf{p_1}\\ \vdots\\ -\mathbf{p_m}^T\mathbf{p_m} \end{array} \right] $$

The solution of this equation system minimizes $\sum_{i=1}^m \delta_i^2 = \|Ax-b\|^2$.

Note that the equation system admits solutions where $k \geq \mathbf{c}^T\mathbf{c}$. That is, we have a solution that does not represent a valid sphere, as $r^2 \leq 0$. This situation can arise in the presence of outliers.

Note that the algebraic distance is not the geometric distance, which is what we actually want to minimize, and that we assume there are no outliers. Both issues are addressed in the original work ("Localizing spherical fiducials in C-arm based cone-beam CT").
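
Before applying this to the image, it can be reassuring to sanity check the formulation on synthetic data. The following sketch (the sphere parameters, point sampling, and noise level are arbitrary choices, not part of the original notebook) generates noisy points on a known sphere, solves the linear system above, and recovers the center and radius:


In [ ]:
# Sanity check of the algebraic least squares sphere fit on synthetic data (a sketch).
rng = np.random.default_rng(0)
true_center = np.array([10.0, -5.0, 3.0])
true_radius = 3.0

# Sample points on the sphere surface and perturb them with a small amount of noise.
directions = rng.normal(size=(500, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
points = true_center + true_radius * directions + rng.normal(scale=0.01, size=(500, 3))

# Build the linear system Ax=b described above and solve it.
A = np.hstack((-2 * points, np.ones((len(points), 1))))
b = -np.sum(points * points, axis=1)
x, _, _, _ = linalg.lstsq(A, b)

print("Estimated center: {0}".format(x[0:3]))
print("Estimated radius: {0:.3f}".format(np.sqrt(np.linalg.norm(x[0:3]) ** 2 - x[3])))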


In [ ]:
# Create a cropped version of the original image.
# The ROI bounds are inclusive, hence the +1 in the slicing.
sub_image = spherical_fiducials_image[roi[0][0]:roi[0][1]+1,
                                      roi[1][0]:roi[1][1]+1,
                                      roi[2][0]:roi[2][1]+1]

# Edge detection on the sub_image with appropriate thresholds and smoothing.
edges = sitk.CannyEdgeDetection(sitk.Cast(sub_image, sitk.sitkFloat32), lowerThreshold=0.0, 
                                upperThreshold=200.0, variance = (5.0,5.0,5.0))

Get the 3D location of the edge points and fit a sphere to them.


In [ ]:
edge_indexes = np.where(sitk.GetArrayViewFromImage(edges) == 1.0)

# Note the reversed order of access between SimpleITK and numpy (z,y,x)
physical_points = [edges.TransformIndexToPhysicalPoint([int(x), int(y), int(z)]) \
                   for z,y,x in zip(edge_indexes[0], edge_indexes[1], edge_indexes[2])]

# Setup and solve linear equation system.
A = np.ones((len(physical_points),4))
b = np.zeros(len(physical_points))

for row, point in enumerate(physical_points):
    A[row,0:3] = -2*np.array(point)
    b[row] = -linalg.norm(point)**2

res,_,_,_ = linalg.lstsq(A,b)

print("The sphere's location is: {0:.2f}, {1:.2f}, {2:.2f}".format(*res[0:3]))
print("The sphere's radius is: {0:.2f}mm".format(np.sqrt(linalg.norm(res[0:3])**2 - res[3])))

In [ ]:
# Visually evaluate the results of edge detection, just to make sure. Note that because SimpleITK is working in the
# physical world (not pixels, but mm) we can easily transfer the edges localized in the cropped image to the original.
# Use the zoom tool, second from the right, for close inspection of the edge locations.

edge_label = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt16)
edge_label.CopyInformation(spherical_fiducials_image)
e_label = 255
for point in physical_points:
    edge_label[edge_label.TransformPhysicalPointToIndex(point)] = e_label

gui.MultiImageDisplay(image_list = [sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, windowMaximum=-29611),
                                                                sitk.sitkUInt8), edge_label, opacity=0.5)],                   
                      title_list = ['edge detection result'])

You've made it to the end of the notebook; you deserve to know the correct answer.

The sphere's radius is 3mm. With regard to the sphere's location, we don't have the ground truth for that, so your estimate is as good as ours.