Registration Initialization: We Have to Start Somewhere

Initialization is a critical aspect of most registration algorithms: because these algorithms are formulated as iterative, local optimization problems, a poor starting point can cause them to converge to an incorrect local optimum.

In many cases we perform initialization automatically, by making assumptions about the image contents and the imaging protocol. For instance, if we expect that the images were acquired with the patient in a known orientation, we can align the geometric centers of the two volumes, or the centers of mass of the image contents if the anatomy is not centered in the image (this is the approach we have used previously).
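These two options correspond to the GEOMETRY and MOMENTS variants of SimpleITK's CenteredTransformInitializer. A minimal sketch, assuming fixed_image and moving_image are already loaded:

import SimpleITK as sitk

# Align the geometric centers of the two volumes.
initial_transform_geometry = sitk.CenteredTransformInitializer(
    fixed_image, moving_image, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Align the centers of mass of the image intensities (useful when the anatomy
# is not centered in the volume).
initial_transform_moments = sitk.CenteredTransformInitializer(
    fixed_image, moving_image, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.MOMENTS)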

When the orientation is not known, or is known but incorrect, this approach will not yield a reasonable initial estimate for the registration.

When working with clinical images, the DICOM tags define the orientation and position of the anatomy in the volume. The tags of interest are:

  • (0020|0032) Image Position (Patient): coordinates of the first transmitted voxel.
  • (0020|0037) Image Orientation (Patient): directions of first row and column in 3D space.
  • (0018|5100) Patient Position: patient placement on the table, one of:
    • Head First Prone (HFP)
    • Head First Supine (HFS)
    • Head First Decubitus Right (HFDR)
    • Head First Decubitus Left (HFDL)
    • Feet First Prone (FFP)
    • Feet First Supine (FFS)
    • Feet First Decubitus Right (FFDR)
    • Feet First Decubitus Left (FFDL)

The patient position is entered manually by the CT/MR operator and can therefore be erroneous (e.g. HFP instead of FFP will result in a $180^\circ$ orientation error).
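As a minimal sketch, these tags can be inspected directly with SimpleITK before deciding on an initialization strategy (the file name below is a placeholder for a single slice of the series):

import SimpleITK as sitk

reader = sitk.ImageFileReader()
reader.SetFileName('slice_0.dcm')  # placeholder: any single file from the DICOM series
reader.ReadImageInformation()

for tag, description in [('0020|0032', 'Image Position (Patient)'),
                         ('0020|0037', 'Image Orientation (Patient)'),
                         ('0018|5100', 'Patient Position')]:
    if reader.HasMetaDataKey(tag):
        print('{0} {1}: {2}'.format(tag, description, reader.GetMetaData(tag)))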

A heuristic, yet effective, solution is to sample the parameter space. Note that this strategy is primarily useful in low dimensional parameter spaces (rigid or possibly affine transformations).

In this notebook we illustrate how to sample the parameter space in a fixed pattern. We then initialize the registration with the parameters that correspond to the best similarity metric value obtained by our sampling.


In [ ]:
import SimpleITK as sitk
import os
import numpy as np

from ipywidgets import interact, fixed
from downloaddata import fetch_data as fdata

import registration_callbacks as rc
import registration_utilities as ru

# Always write output to a separate directory, we don't want to pollute the source directory. 
OUTPUT_DIR = 'Output'

%matplotlib inline

# This is the registration configuration which we use in all cases. The only parameter that we vary 
# is the initial_transform. 
def multires_registration(fixed_image, moving_image, initial_transform):
    registration_method = sitk.ImageRegistrationMethod()
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
    registration_method.SetMetricSamplingPercentage(0.01)
    registration_method.SetInterpolator(sitk.sitkLinear)
    registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, estimateLearningRate=registration_method.Once)
    registration_method.SetOptimizerScalesFromPhysicalShift() 
    registration_method.SetInitialTransform(initial_transform)
    registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
    registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])
    registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

    registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_start_plot)
    registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_end_plot)
    registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, rc.metric_update_multires_iterations) 
    registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_plot_values(registration_method))

    final_transform = registration_method.Execute(fixed_image, moving_image)
    print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
    print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
    return final_transform

Loading Data


In [ ]:
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))

fixed_series_ID = "1.2.840.113619.2.290.3.3233817346.783.1399004564.515"
moving_series_ID = "1.3.12.2.1107.5.2.18.41548.30000014030519285935000000933"

reader = sitk.ImageSeriesReader()
fixed_image = sitk.ReadImage(reader.GetGDCMSeriesFileNames(data_directory, fixed_series_ID), sitk.sitkFloat32)
moving_image = sitk.ReadImage(reader.GetGDCMSeriesFileNames(data_directory, moving_series_ID), sitk.sitkFloat32)

# To provide a reasonable display we need to window/level the images. We could have used the intensity
# ranges found in the images (e.g. via SimpleITK's StatisticsImageFilter), but these are not the best values for viewing.
# Using an external viewer we identified the following settings.
fixed_intensity_range = (-1183,544)
moving_intensity_range = (0,355)

interact(lambda image1_z, image2_z, image1, image2: ru.display_scalar_images(image1_z, image2_z, image1, image2, 
                                                                             fixed_intensity_range,
                                                                             moving_intensity_range,
                                                                             'fixed image',
                                                                             'moving image'), 
         image1_z=(0,fixed_image.GetSize()[2]-1), 
         image2_z=(0,moving_image.GetSize()[2]-1), 
         image1 = fixed(fixed_image), 
         image2=fixed(moving_image));

Arbitrarily rotate the moving image, simulating an unknown or erroneous patient orientation.


In [ ]:
rotation_x = 0.0
rotation_z = 0.0

def modify_rotation(rx_in_degrees, rz_in_degrees):
    global rotation_x, rotation_z
    
    rotation_x = np.radians(rx_in_degrees)
    rotation_z = np.radians(rz_in_degrees)
    
interact(modify_rotation, rx_in_degrees=(0.0,180.0,5.0), rz_in_degrees=(-90.0,180.0,5.0));

In [ ]:
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(moving_image)
resample.SetInterpolator(sitk.sitkLinear)
# Rotate around the physical center of the image. 
rotation_center = moving_image.TransformContinuousIndexToPhysicalPoint([(index-1)/2.0 for index in moving_image.GetSize()])
transform = sitk.Euler3DTransform(rotation_center, rotation_x, 0, rotation_z, (0,0,0))
resample.SetTransform(transform)
modified_moving_image = resample.Execute(moving_image)

interact(lambda image1_z, image2_z, image1, image2: ru.display_scalar_images(image1_z, image2_z, image1, image2, 
                                                                             moving_intensity_range,
                                                                             moving_intensity_range, 'original', 'rotated'), 
         image1_z=(0,moving_image.GetSize()[2]-1), 
         image2_z=(0,modified_moving_image.GetSize()[2]-1), 
         image1 = fixed(moving_image), 
         image2=fixed(modified_moving_image));

Register using standard initialization (assumes orientation is similar)


In [ ]:
initial_transform = sitk.CenteredTransformInitializer(fixed_image, 
                                                      modified_moving_image, 
                                                      sitk.Euler3DTransform(), 
                                                      sitk.CenteredTransformInitializerFilter.GEOMETRY)

final_transform = multires_registration(fixed_image, modified_moving_image, initial_transform)

Visually evaluate our results:


In [ ]:
moving_resampled = sitk.Resample(modified_moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())

interact(ru.display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]-1), alpha=(0.0,1.0,0.05), 
         image1 = fixed(sitk.IntensityWindowing(fixed_image, fixed_intensity_range[0], fixed_intensity_range[1])), 
         image2=fixed(sitk.IntensityWindowing(moving_resampled, moving_intensity_range[0], moving_intensity_range[1])));

Register using heuristic initialization approach (using multiple orientations)

As we want to account for significant orientation differences due to an erroneous patient position entry (e.g. HFS instead of FFS), we evaluate the similarity measure at parameter values corresponding to the various possible orientation differences. This can be done in two ways, both of which are illustrated below:

  • Use the ImageRegistrationMethod.MetricEvaluate() method.
  • Use the Exhaustive optimizer.

The former approach is more computationally intensive as it constructs and configures a metric object each time it is invoked. It is therefore more appropriate for use if the set of parameter values we want to evaluate are not on a rectilinear grid in the parameter space. The latter approach is appropriate if the set of parameter values are on a rectilinear grid, in which case the approach is more computationally efficient.

In both cases we use the CenteredTransformInitializer to obtain the initial translation.

MetricEvaluate

To use the MetricEvaluate method we create an ImageRegistrationMethod and set its metric and interpolator. We then iterate over all parameter settings, set the initial transform, and evaluate the metric. The minimal similarity measure value corresponds to the best parameter settings.


In [ ]:
# Dictionary with all the orientations we will try. We omit the identity (x=0, y=0, z=0) as we always use it. This
# set of rotations is arbitrary. For a complete grid coverage we would have 64 entries (0,pi/2,pi,1.5pi for each angle).
all_orientations = {'x=0, y=0, z=90': (0.0,0.0,np.pi/2.0),
                    'x=0, y=0, z=-90': (0.0,0.0,-np.pi/2.0),
                    'x=0, y=0, z=180': (0.0,0.0,np.pi),
                    'x=180, y=0, z=0': (np.pi,0.0,0.0),
                    'x=180, y=0, z=90': (np.pi,0.0,np.pi/2.0),
                    'x=180, y=0, z=-90': (np.pi,0.0,-np.pi/2.0),
                    'x=180, y=0, z=180': (np.pi,0.0,np.pi)}    

# Registration framework setup.
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)

# Evaluate the similarity metric for the eight candidate orientations (the identity and the seven defined above); the translation remains the same for all.
initial_transform = sitk.Euler3DTransform(sitk.CenteredTransformInitializer(fixed_image, 
                                                                            modified_moving_image, 
                                                                            sitk.Euler3DTransform(), 
                                                                            sitk.CenteredTransformInitializerFilter.GEOMETRY))
registration_method.SetInitialTransform(initial_transform, inPlace=False)
best_orientation = (0.0,0.0,0.0)
best_similarity_value = registration_method.MetricEvaluate(fixed_image, modified_moving_image)

# Iterate over all other rotation parameter settings. 
for key, orientation in all_orientations.items():
    initial_transform.SetRotation(*orientation)
    registration_method.SetInitialTransform(initial_transform)
    current_similarity_value = registration_method.MetricEvaluate(fixed_image, modified_moving_image)
    if current_similarity_value < best_similarity_value:
        best_similarity_value = current_similarity_value
        best_orientation = orientation

initial_transform.SetRotation(*best_orientation)

final_transform = multires_registration(fixed_image, modified_moving_image, initial_transform)

Visually evaluate our results:


In [ ]:
moving_resampled = sitk.Resample(modified_moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())

interact(ru.display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]-1), alpha=(0.0,1.0,0.05), 
         image1 = fixed(sitk.IntensityWindowing(fixed_image, fixed_intensity_range[0], fixed_intensity_range[1])), 
         image2=fixed(sitk.IntensityWindowing(moving_resampled, moving_intensity_range[0], moving_intensity_range[1])));

Exhaustive optimizer

The exhaustive optimizer evaluates the similarity measure on a grid overlaid on the parameter space. The grid is centered on the parameter values set by SetInitialTransform, and the locations of its vertices are determined by the numberOfSteps, stepLength and optimizer scales. To quote the documentation of this class: "a side of the region is stepLength*(2*numberOfSteps[d]+1)*scaling[d]".

Using this approach we have some superfluous evaluations: 15 evaluations, corresponding to three values for rotation around the x axis and five values for rotation around the z axis, as compared to the 8 evaluations used by the MetricEvaluate approach.
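As a sketch of how these parameters map to the sampled values along a single parameter dimension (exhaustive_samples is a hypothetical helper, not part of SimpleITK; the grid is assumed to be centered on an initial value of zero):

import numpy as np

# Hypothetical helper: the values the exhaustive optimizer visits along one
# parameter dimension, centered on an initial value of zero.
def exhaustive_samples(number_of_steps, step_length, scale):
    return [i * step_length * scale for i in range(-number_of_steps, number_of_steps + 1)]

print(exhaustive_samples(1, np.pi, 1.0))  # angle_x: -pi, 0, pi
print(exhaustive_samples(2, np.pi, 0.5))  # angle_z: -pi, -pi/2, 0, pi/2, pi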


In [ ]:
initial_transform = sitk.CenteredTransformInitializer(fixed_image, 
                                                      modified_moving_image, 
                                                      sitk.Euler3DTransform(), 
                                                      sitk.CenteredTransformInitializerFilter.GEOMETRY)
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
# The order of parameters for the Euler3DTransform is [angle_x, angle_y, angle_z, t_x, t_y, t_z]. The parameter 
# sampling grid is centered on the initial_transform parameter values, that are all zero for the rotations. Given
# the number of steps and their length and optimizer scales we have:
# angle_x = -pi, 0, pi
# angle_y = 0
# angle_z = -pi, -pi/2, 0, pi/2, pi
registration_method.SetOptimizerAsExhaustive(numberOfSteps=[1,0,2,0,0,0], stepLength = np.pi)
registration_method.SetOptimizerScales([1,1,0.5,1,1,1])

# Run the exhaustive search in-place so that the initial_transform is updated with the best parameters found.
registration_method.SetInitialTransform(initial_transform, inPlace=True)
registration_method.Execute(fixed_image, modified_moving_image)

final_transform = multires_registration(fixed_image, modified_moving_image, initial_transform)

Visually evaluate our results:


In [ ]:
moving_resampled = sitk.Resample(modified_moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())

interact(ru.display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]-1), alpha=(0.0,1.0,0.05), 
         image1 = fixed(sitk.IntensityWindowing(fixed_image, fixed_intensity_range[0], fixed_intensity_range[1])), 
         image2=fixed(sitk.IntensityWindowing(moving_resampled, moving_intensity_range[0], moving_intensity_range[1])));
