In [1]:
%matplotlib inline

Volumetric Registration and Analysis

In this tutorial we align the CLARITY brain Control258 to the Allen Reference Atlas (ARA). The ARA is our reference image: it consists of image data stored in the default "average" channel and corresponding annotations stored in a separate "annotation" channel. We begin by importing the relevant modules.


In [2]:
from ndreg import *
import matplotlib
import ndio.remote.neurodata as neurodata

Next we'll download the image data


In [3]:
refToken = "ara_ccf2"
refImg = imgDownload(refToken)

When given an image volume, imgShow displays it as a set of evenly spaced z-slices (1st column), y-slices (2nd column) and x-slices (3rd column).


In [4]:
imgShow(refImg)


The displayed slices above are a little too dark, so we set the visualization maximum vmax manually.


In [5]:
imgShow(refImg, vmax=500)


Now we download the atlas annotations.


In [6]:
refAnnoImg = imgDownload(refToken, channel="annotation")
imgShow(refAnnoImg, vmax=1000)


It's usually better to visualize a set of annotations in color. Let's display the atlas annotations using a random colormap. We begin by creating a 1000x3 array of random values.


In [7]:
randValues = np.random.rand(1000,3)

Since we always want the background (label 0) to be black, we make the 0th row [0,0,0].


In [8]:
randValues = np.concatenate(([[0,0,0]],randValues))
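As a quick sanity check (using matplotlib's ListedColormap directly, as the next cell does), label 0 now maps to opaque black:

```python
import numpy as np
import matplotlib.colors

# Row 0 of the lookup table is black, so integer label 0 -> (0, 0, 0, 1).
randValues = np.concatenate(([[0, 0, 0]], np.random.rand(1000, 3)))
cmap = matplotlib.colors.ListedColormap(randValues)
print(cmap(0))   # (0.0, 0.0, 0.0, 1.0)
```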

Now we can display the annotations.


In [9]:
randCmap = matplotlib.colors.ListedColormap (randValues)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap)


Let's overlay these annotations on the atlas image using alpha transparency. To do this we must set newFig to False so that matplotlib appends the annotation data to the current figure instead of creating a new one.


In [10]:
imgShow(refImg, vmax=500, newFig=False)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap, alpha=0.2, newFig=False)
plt.show()


Downloading CLARITY brain

Our input image will be CLARITY brain Control258. At full resolution, CLARITY brains can be 1 terabyte in size, far too large to be downloaded to a personal computer. Thus ndstore stores the brains at multiple resolutions. Resolution 0 is always the highest resolution; each subsequent resolution takes up 1/4th (under slice scaling) or 1/8th (under isotropic scaling) as many bytes as the previous resolution. We can use ndio's get_metadata method to see a list of available resolutions.
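The scaling rule can be illustrated with a toy calculation (the full-resolution shape below is made up; this is not ndio code):

```python
# Under slice scaling only x and y are halved per level (1/4 the voxels);
# under isotropic scaling x, y and z are all halved (1/8 the voxels).
def voxels_at_level(shape, level, isotropic=False):
    x, y, z = shape
    x, y = x >> level, y >> level
    if isotropic:
        z = z >> level
    return x * y * z

base = (4096, 4096, 1024)  # hypothetical full-resolution voxel counts
for level in range(3):
    print(level,
          voxels_at_level(base, level),
          voxels_at_level(base, level, isotropic=True))
```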


In [11]:
inToken = "Control258"

nd = neurodata()
print(nd.get_metadata(inToken)['dataset']['voxelres'].keys())


[u'1', u'0', u'3', u'2', u'5', u'4']

Clearly resolution 5 is the lowest available resolution, so we'll download the CLARITY image at that resolution. Depending on your internet connection, downloading may take several minutes.


In [12]:
inImg = imgDownload(inToken, resolution=5)
imgShow(inImg, vmax=500)


Let's make a copy of the downloaded image; we'll need it later.


In [13]:
inImg_download = imgCopy(inImg)

Resampling CLARITY image

Notice how the CLARITY brain's resolution is higher than the ARA image that we want to align it to.


In [14]:
print(inImg.GetSpacing())


(0.01872, 0.01872, 0.005)

In [15]:
print(refImg.GetSpacing())


(0.024999999999999998, 0.024999999999999998, 0.024999999999999998)

Since we are aligning the CLARITY image to a lower resolution atlas, we can save memory by downsampling it to the atlas resolution.


In [16]:
inImg = imgResample(inImg, spacing=refImg.GetSpacing())
imgShow(inImg, vmax=500)


Reorienting CLARITY image

You may have noticed that the CLARITY brain is not oriented in the same way as the atlas. Let's look at the atlas.


In [17]:
imgShow(refImg, vmax=500)


By examining the z-slices in the first column it's clear that the x-axis goes from the Right to the Left side of the brain, the y-axis from Superior to Inferior, and the z-axis from Anterior to Posterior. Thus it is in RSA orientation. Looking at the CLARITY brain...


In [18]:
imgShow(inImg, vmax=500)


...we see that the x-axis goes from Left to Right, the y-axis from Anterior to Posterior, and the z-axis from Inferior to Superior. Thus it's in LAI orientation. Therefore we reorient the CLARITY image from LAI to RSA.


In [19]:
inImg = imgReorient(inImg, "LAI", "RSA")
imgShow(inImg, vmax=500)


Compare the above to the Atlas. The slices should now correspond with the atlas. Let's make a copy of this image as well because we'll need it later.


In [20]:
inImg_reorient = imgCopy(inImg)

Thresholding CLARITY image

Let's plot the histogram of the CLARITY image.


In [21]:
(values, bins) = np.histogram(sitk.GetArrayFromImage(inImg), bins=100, range=(0,500))
plt.plot(bins[:-1], values)


Out[21]:
[<matplotlib.lines.Line2D at 0x7f482a154a10>]

Notice the huge spike at 100. This occurs because the intensity of the background is about 100. Ideally we want a black background with an intensity of 0, so we'll threshold the image below 100 and then subtract 100 from it.


In [22]:
lowerThreshold = 100
upperThreshold = sitk.GetArrayFromImage(inImg).max()+1

inImg = sitk.Threshold(inImg,lowerThreshold,upperThreshold,lowerThreshold) - lowerThreshold
imgShow(inImg, vmax = 500)
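The clamp-and-shift above can be sketched on a toy array in plain numpy:

```python
import numpy as np

# Values below the background level are clamped to it, then the whole
# image is shifted so the background maps to 0.
arr = np.array([50.0, 100.0, 150.0, 300.0])
background = 100.0
shifted = np.maximum(arr, background) - background   # 0, 0, 50, 200
```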


Here's a histogram of the result.


In [23]:
(values, bins) = np.histogram(sitk.GetArrayFromImage(inImg), bins=100, range=(0,500))
plt.plot(bins[:-1], values)


Out[23]:
[<matplotlib.lines.Line2D at 0x7f4829f4ea10>]

Generating CLARITY mask

CLARITY brains contain very bright fluorescent spots in the cerebral cortex and thalamus which can interfere with the registration.


In [24]:
imgShow(inImg, vmax = 500)


To avoid this problem we create a mask which will be used to exclude the top 5% brightest voxels from registration. To make the mask we first compute the normalized cumulative histogram.


In [25]:
(values, bins) = np.histogram(sitk.GetArrayFromImage(inImg), bins=1000)
cumValues = np.cumsum(values).astype(float)
cumValues = (cumValues - cumValues.min()) / cumValues.ptp()

We then find the first value greater than 0.95 (which is 100% - 5%)


In [26]:
maxIndex = np.argmax(cumValues>0.95)-1
threshold = bins[maxIndex]
print(threshold)


218.813
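This cutoff is essentially the 95th percentile of the intensity distribution. A plain-numpy sketch of the same idea on synthetic data (not the CLARITY values):

```python
import numpy as np

# The top 5% of values lie above the 95th percentile, so np.percentile
# gives a comparable threshold directly.
rng = np.random.default_rng(0)
data = rng.normal(100.0, 20.0, size=100000)
threshold = np.percentile(data, 95)
keep = data <= threshold          # ~95% of voxels kept, brightest 5% masked
print(threshold, keep.mean())
```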

We then threshold the image using this value to obtain a mask


In [27]:
inMask = sitk.BinaryThreshold(inImg, 0, threshold, 1, 0)
imgShow(inMask)


The masked areas will be excluded from the registration.


In [28]:
imgShow(imgMask(inImg,inMask))


Affine Registration

We can finally begin the registration. Ideally we would do registration at the full atlas scale of 0.025 mm x 0.025 mm x 0.025 mm, but this would be far too computationally expensive for the purposes of this tutorial. Therefore, to save time, we downsample the images to 0.25 mm x 0.25 mm x 0.25 mm.


In [29]:
spacing=[0.25,0.25,0.25]
refImg_ds = imgResample(refImg, spacing=spacing)
imgShow(refImg_ds, vmax=500)



In [30]:
inImg_ds = imgResample(inImg, spacing=spacing)
imgShow(inImg_ds, vmax=500)


Notice how we use nearest-neighbor interpolation when downsampling the mask; linearly interpolating binary label values would produce meaningless intermediate values.


In [31]:
inMask_ds = imgResample(inMask, spacing=spacing, useNearest=True)
imgShow(inMask_ds)


Now we compute the affine transform. Unlike in the basic registration tutorial, we'll use imgAffineComposite instead of imgAffine. imgAffine simply computes the affine transform between the input and reference images. imgAffineComposite computes a translation, then a rigid, then an affine transformation; its output is the composition of those three transforms. We use Mutual Information since the CLARITY and ARA images have very different intensity profiles. We also enable the verbose option so that each iteration is printed.
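The Mutual Information cost can be illustrated with a minimal histogram-based sketch (illustrative only; not the implementation ndreg uses):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # joint histogram -> joint and marginal probabilities -> MI in nats
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px * py)[nz])))

rng = np.random.default_rng(0)
a = rng.random(10000)
noise = rng.random(10000)
# an image is far more informative about itself than about random noise
print(mutual_information(a, a), mutual_information(a, noise))
```

Because MI depends only on the statistical relationship between intensity values, not on the values themselves, it works even when the two modalities have very different intensity profiles.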


In [32]:
affine = imgAffineComposite(inImg_ds, refImg_ds, inMask=inMask_ds, iterations=100, useMI=True, verbose=True)


Step translation:
0.	 -0.275686100442
1.	 -0.291574537119
2.	 -0.312210822481
3.	 -0.324350742638
4.	 -0.33517648148
5.	 -0.353540176408
6.	 -0.36026389546
7.	 -0.357676501254
8.	 -0.353962595501
9.	 -0.357357098762
10.	 -0.35456975475
11.	 -0.355968251971
12.	 -0.357432906463
13.	 -0.35622748796
14.	 -0.35740137251
15.	 -0.357406585039
Step rigid:
0.	 -0.370280002532
1.	 -0.337820206762
2.	 -0.406673900993
3.	 -0.398354914099
4.	 -0.420797981414
5.	 -0.419753067398
6.	 -0.428502075873
7.	 -0.427738664736
8.	 -0.432637993122
9.	 -0.428463468689
10.	 -0.432543825382
11.	 -0.433180376789
12.	 -0.434242966059
13.	 -0.434670876878
14.	 -0.435772840368
15.	 -0.436181934034
16.	 -0.436846806531
17.	 -0.437145371698
18.	 -0.436470988958
19.	 -0.437685185069
20.	 -0.438047772742
21.	 -0.438175622121
22.	 -0.438647547272
23.	 -0.438875295903
24.	 -0.439330887992
25.	 -0.438612923045
26.	 -0.439766456186
27.	 -0.438741003415
28.	 -0.439867826646
29.	 -0.439375530737
30.	 -0.43951914017
31.	 -0.439633938685
32.	 -0.439515892381
33.	 -0.439473355104
34.	 -0.439573700654
35.	 -0.439821361035
36.	 -0.439852523712
37.	 -0.440110635793
38.	 -0.440141335356
39.	 -0.440377317228
40.	 -0.440506672645
41.	 -0.440599792891
42.	 -0.44075915475
43.	 -0.440819414562
44.	 -0.441015682915
45.	 -0.440996749321
46.	 -0.441491340854
47.	 -0.441148920347
48.	 -0.441914333733
49.	 -0.441489823942
50.	 -0.442042506539
51.	 -0.442075544983
52.	 -0.4423007025
53.	 -0.442268377054
Step affine:
0.	 -0.436539894081
1.	 -0.432843609628
2.	 -0.462420687245
3.	 -0.40655184468
4.	 -0.462770980337
5.	 -0.466047109726
6.	 -0.469402042365
7.	 -0.470436862892
8.	 -0.470872339754
9.	 -0.472023314811
10.	 -0.471341884964
11.	 -0.47211136652
12.	 -0.472979129267
13.	 -0.473024301497
14.	 -0.473468369171
15.	 -0.473567022839
16.	 -0.473982732186
17.	 -0.474361448272
18.	 -0.474633599862
19.	 -0.474983028945
20.	 -0.475242747494
21.	 -0.474893113432
22.	 -0.475315426399
23.	 -0.475429971657
24.	 -0.475918529675
25.	 -0.475882413077
26.	 -0.4763997246
27.	 -0.476487868732
28.	 -0.476576168233
29.	 -0.476562231896
30.	 -0.477220360479

Now we apply the affine transform to the input CLARITY image and mask.


In [33]:
inImg_affine = imgApplyAffine(inImg, affine, size=refImg.GetSize())
imgShow(inImg_affine, vmax=500)

inMask_affine = imgApplyAffine(inMask, affine, size=refImg.GetSize(), useNearest=True)
imgShow(inMask_affine)


LDDMM registration

Now we run LDDMM registration. Here we use imgMetamorphosisComposite. Unlike imgMetamorphosis, introduced in the basic registration notebook, this function runs LDDMM in multiple steps using the alpha values specified in alphaList. The field and invField outputs are the composition of all steps. Once again we use a Mutual Information cost because the input CLARITY brain and reference ARA average have very different intensity profiles.


In [34]:
inImg_ds = imgResample(inImg_affine, spacing=spacing)
inMask_ds = imgResample(inMask_affine, spacing=spacing, useNearest=True)
(field, invField) = imgMetamorphosisComposite(inImg_ds, refImg_ds, inMask=inMask_ds, alphaList=[0.05, 0.02, 0.01], useMI=True, iterations=100, verbose=True)
inImg_lddmm = imgApplyField(inImg_affine, field, size=refImg.GetSize())
inMask_lddmm = imgApplyField(inMask_affine, field, size=refImg.GetSize(), useNearest=True)
imgShow(inImg_lddmm, vmax = 500)
imgShow(inMask_lddmm)


Step 0: alpha=0.05, beta=0.05, scale=1.0
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
0.	-1.89004e+10, 132.856, 0, -1.89004e+10 (97.1977%), 1.100000e-03
1.	-1.95189e+10, 239.149, 0, -1.95189e+10 (96.5138%), 1.210000e-03
2.	-2.13263e+10, 443.537, 0, -2.13263e+10 (94.5153%), 1.331000e-03
3.	-2.17861e+10, 780.352, 0, -2.17861e+10 (94.0069%), 1.464100e-03
4.	-2.30273e+10, 952.936, 0, -2.30274e+10 (92.6345%), 8.052550e-04
5.	-2.37502e+10, 1016.57, 0, -2.37502e+10 (91.8353%), 4.428903e-04
6.	-2.39865e+10, 1117.63, 0, -2.39865e+10 (91.574%), 4.871793e-04
7.	-2.42662e+10, 1213.39, 0, -2.42662e+10 (91.2648%), 5.358972e-04
8.	-2.44106e+10, 1326.26, 0, -2.44106e+10 (91.1051%), 5.894869e-04
9.	-2.46172e+10, 1431.81, 0, -2.46172e+10 (90.8767%), 6.484356e-04
10.	-2.50184e+10, 1539.45, 0, -2.50184e+10 (90.4331%), 7.132792e-04
11.	-2.51054e+10, 1670.05, 0, -2.51054e+10 (90.3369%), 7.846071e-04
12.	-2.52719e+10, 1744.4, 0, -2.52719e+10 (90.1528%), 4.315339e-04
13.	-2.53369e+10, 1819.91, 0, -2.53369e+10 (90.0809%), 4.746873e-04
14.	-2.56168e+10, 1910.02, 0, -2.56168e+10 (89.7714%), 5.221560e-04
15.	-2.56498e+10, 1956.45, 0, -2.56498e+10 (89.7349%), 2.871858e-04
16.	-2.5754e+10, 2003.8, 0, -2.5754e+10 (89.6198%), 3.159044e-04
17.	-2.58224e+10, 2032.44, 0, -2.58224e+10 (89.5441%), 1.737474e-04
18.	-2.58232e+10, 2033.39, 0, -2.58232e+10 (89.5432%), 5.972567e-06
19.	-2.58247e+10, 2033.4, 0, -2.58247e+10 (89.5415%), 2.566338e-08
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
20.	-2.58247e+10, 2033.4, 0, -2.58247e+10 (89.5415%), 1.411486e-08
21.	-2.58247e+10, 2033.4, 0, -2.58247e+10 (89.5415%), 7.763171e-09
E = -2.56361e+10 (89.75%)
Length = 36.5331
Time = 36.6467s (0.610778m)

Step 1: alpha=0.02, beta=0.05, scale=1.0
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
0.	-2.65043e+10, 52.6633, 0, -2.65043e+10 (98.7445%), 1.100000e-03
1.	-2.69088e+10, 62.2, 0, -2.69088e+10 (98.247%), 3.025000e-04
2.	-2.7317e+10, 84.8591, 0, -2.7317e+10 (97.7451%), 3.327500e-04
3.	-2.7396e+10, 89.8809, 0, -2.7396e+10 (97.648%), 9.150625e-05
4.	-2.75106e+10, 97.0993, 0, -2.75106e+10 (97.5071%), 1.006569e-04
5.	-2.76227e+10, 105.825, 0, -2.76227e+10 (97.3693%), 1.107226e-04
6.	-2.77082e+10, 116.399, 0, -2.77082e+10 (97.2642%), 1.217948e-04
7.	-2.78263e+10, 128.777, 0, -2.78263e+10 (97.1188%), 1.339743e-04
8.	-2.79311e+10, 143.317, 0, -2.79311e+10 (96.99%), 1.473717e-04
9.	-2.80248e+10, 151.646, 0, -2.80248e+10 (96.8748%), 8.105445e-05
10.	-2.81029e+10, 161.465, 0, -2.81029e+10 (96.7788%), 8.915990e-05
11.	-2.82157e+10, 172.632, 0, -2.82157e+10 (96.6401%), 9.807589e-05
12.	-2.82493e+10, 179.102, 0, -2.82493e+10 (96.5987%), 5.394174e-05
13.	-2.8254e+10, 179.992, 0, -2.8254e+10 (96.593%), 7.416989e-06
14.	-2.82548e+10, 180.114, 0, -2.82548e+10 (96.592%), 1.019836e-06
15.	-2.82552e+10, 180.181, 0, -2.82552e+10 (96.5915%), 5.609098e-07
16.	-2.82552e+10, 180.186, 0, -2.82552e+10 (96.5914%), 3.856255e-08
17.	-2.82553e+10, 180.189, 0, -2.82553e+10 (96.5914%), 2.120940e-08
18.	-2.82553e+10, 180.19, 0, -2.82553e+10 (96.5914%), 1.166517e-08
19.	-2.82553e+10, 180.19, 0, -2.82553e+10 (96.5914%), 4.009902e-10
E = -2.82553e+10 (96.5914%)
Length = 11.8081
Time = 32.8657s (0.547762m)

Step 2: alpha=0.01, beta=0.05, scale=1.0
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
0.	-2.90622e+10, 100.262, 0, -2.90622e+10 (98.0618%), 1.100000e-03
1.	-3.00128e+10, 100.174, 0, -3.00128e+10 (96.8628%), 1.512500e-04
2.	-3.02247e+10, 103.9, 0, -3.02247e+10 (96.5954%), 8.318750e-05
3.	-3.04082e+10, 109.162, 0, -3.04082e+10 (96.364%), 9.150625e-05
4.	-3.04089e+10, 112.701, 0, -3.04089e+10 (96.3632%), 5.032844e-05
5.	-3.05042e+10, 117.149, 0, -3.05042e+10 (96.2429%), 5.536128e-05
6.	-3.05292e+10, 122.253, 0, -3.05292e+10 (96.2114%), 6.089741e-05
7.	-3.06209e+10, 128.46, 0, -3.06209e+10 (96.0957%), 6.698715e-05
8.	-3.06465e+10, 135.741, 0, -3.06465e+10 (96.0634%), 7.368587e-05
9.	-3.07572e+10, 144.424, 0, -3.07572e+10 (95.9238%), 8.105445e-05
10.	-3.07695e+10, 146.645, 0, -3.07695e+10 (95.9084%), 2.228997e-05
11.	-3.07777e+10, 149.431, 0, -3.07777e+10 (95.898%), 2.451897e-05
12.	-3.08066e+10, 152.472, 0, -3.08066e+10 (95.8616%), 2.697087e-05
13.	-3.08265e+10, 155.846, 0, -3.08265e+10 (95.8364%), 2.966796e-05
14.	-3.08553e+10, 159.59, 0, -3.08553e+10 (95.8%), 3.263475e-05
15.	-3.0908e+10, 164.038, 0, -3.0908e+10 (95.7336%), 3.589823e-05
16.	-3.09423e+10, 168.968, 0, -3.09423e+10 (95.6903%), 3.948805e-05
17.	-3.09793e+10, 174.736, 0, -3.09793e+10 (95.6437%), 4.343685e-05
18.	-3.10127e+10, 181.154, 0, -3.10127e+10 (95.6016%), 4.778054e-05
19.	-3.1085e+10, 188.512, 0, -3.1085e+10 (95.5104%), 5.255859e-05
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
20.	-3.10958e+10, 192.595, 0, -3.10958e+10 (95.4967%), 2.890723e-05
21.	-3.11418e+10, 197.147, 0, -3.11418e+10 (95.4387%), 3.179795e-05
E = -3.11376e+10 (95.4441%)
Length = 14.5525
Time = 35.2983s (0.588305m)

We can now overlay the ARA annotations on the CLARITY image.


In [35]:
imgShow(inImg_lddmm, vmax=500, newFig=False, numSlices=1)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap, alpha=0.2, newFig=False, numSlices=1)


Evaluating the registration

Evaluation using checker board

We can evaluate the registration by generating a checkerboard of the deformed CLARITY and ARA. In this method the input image is placed on the black squares of the metaphorical checkerboard while the reference image is placed on the red squares. Ideally, anatomical structures should line up across squares.
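A 2-D numpy sketch of the checkerboard idea (imgChecker's actual behavior may differ in details):

```python
import numpy as np

def checkerboard(img_a, img_b, tiles=4):
    # alternate square tiles between the two images
    h, w = img_a.shape
    ys, xs = np.indices((h, w))
    on_a = ((ys // (h // tiles)) + (xs // (w // tiles))) % 2 == 0
    return np.where(on_a, img_a, img_b)

a = np.zeros((8, 8))
b = np.ones((8, 8))
board = checkerboard(a, b, tiles=4)
```

If the two images are well aligned, edges and structures continue smoothly across tile boundaries.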


In [36]:
imgShow(imgChecker(inImg_lddmm, refImg, useHM=False), vmax=500, numSlices=1)


Evaluation using cost functions

Another way to evaluate the deformable alignment is with the Mutual Information (MI) and Mean Squared Error (MSE) cost functions. Before LDDMM the MI and MSE were


In [37]:
miAffine = imgMI(inImg_affine, refImg)
print(miAffine)
mseAffine = imgMSE(inImg_affine, refImg)
print(mseAffine)


0.0716097800639
30921.9799549

After LDDMM these cost functions were


In [38]:
miLddmm = imgMI(inImg_lddmm, refImg)
print(miLddmm)
mseLddmm = imgMSE(inImg_lddmm, refImg)
print(mseLddmm)


0.182893681931
27390.4591556

Notice how the MI has increased and the MSE has decreased, indicating that LDDMM improved the alignment. We can normalize these improvements to a range of [0,1], where 0 means LDDMM gave no improvement over the affine alignment and 1 means a perfect result (deformed input and reference images are identical).


In [39]:
miNormalized = (miLddmm - miAffine)/(imgMI(refImg, refImg) - miAffine)
print(miNormalized)
mseNormalized = (mseLddmm - mseAffine)/(imgMSE(refImg, refImg) - mseAffine)
print(mseNormalized)


0.0776689700281
0.114207460341

Measuring Volume Change

We can also evaluate the registration using the negative log Jacobian determinant of the displacement field. The Jacobian determinant is a measure of the local volume change. Wherever it's negative the input CLARITY image contracted during registration; wherever it's positive the CLARITY image expanded.
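As a sanity check of the algebra: for a uniform scaling map phi(x) = s*x in 3-D the Jacobian matrix is s times the identity, so |Dphi| = s^3 and -log|Dphi| = -3*log(s). (Which sign corresponds to expansion vs. contraction of the input image depends on the direction convention of the displacement field.)

```python
import numpy as np

# Uniform scaling phi(x) = s*x: Jacobian matrix is s*I, determinant s**3.
def neg_log_jdet(s):
    return -np.log(s ** 3)        # == -3*log(s)

grow = neg_log_jdet(1.2)          # volume scaled by 1.2**3
shrink = neg_log_jdet(0.8)        # volume scaled by 0.8**3
print(grow, shrink)
```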


In [40]:
logJDet = -sitk.Log(sitk.DisplacementFieldJacobianDeterminant(field))

Since the field was computed on a downsampled grid we resample logJDet to the size of the deformed image.


In [41]:
logJDet = imgResample(logJDet, spacing=inImg_lddmm.GetSpacing(), size=inImg_lddmm.GetSize())

To overlay it on the deformed CLARITY image, we must set newFig to False so that matplotlib adds the logJDet data to the old figure instead of creating a new figure.


In [42]:
imgShow(inImg_lddmm, vmax=500, newFig=False)
imgShow(logJDet, newFig=False, alpha=0.5, cmap=plt.cm.jet, vmin=-2, vmax=2)
plt.show()


Let's add a colorbar so that it's clear what the colors mean. This is a little tricky, so we'll do it one step at a time. First we get the current figure.


In [43]:
imgShow(inImg_lddmm, vmax=500, newFig=False)
imgShow(logJDet, newFig=False, alpha=0.5, cmap=plt.cm.jet, vmin=-2, vmax=2)
fig = plt.gcf()


There are 9 subplots in the figure.


In [44]:
fig.axes


Out[44]:
[<matplotlib.axes.AxesSubplot at 0x7f4829ff7a10>,
 <matplotlib.axes.AxesSubplot at 0x7f4829fa5950>,
 <matplotlib.axes.AxesSubplot at 0x7f482a445490>,
 <matplotlib.axes.AxesSubplot at 0x7f482985f050>,
 <matplotlib.axes.AxesSubplot at 0x7f48294b6c50>,
 <matplotlib.axes.AxesSubplot at 0x7f482a373a90>,
 <matplotlib.axes.AxesSubplot at 0x7f4829d3f8d0>,
 <matplotlib.axes.AxesSubplot at 0x7f482a0898d0>,
 <matplotlib.axes.AxesSubplot at 0x7f482a357b50>]

Let's get the axis of the 0th subplot from the figure.


In [45]:
ax = fig.axes[0]

Each subplot contains two images: the 0th is a slice from inImg_lddmm and the 1st is a slice from logJDet. Since we want the colorbar for logJDet we get the 1st image.


In [46]:
img_ax = ax.get_images()[1]

Now we make room for the colorbar, add an axis for it, and then add it to the figure.


In [47]:
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.2, 0.05, 0.6])
cbar = fig.colorbar(img_ax, cax=cbar_ax)

Putting it all together.


In [48]:
imgShow(inImg_lddmm, vmax=500, newFig=False)
imgShow(logJDet, newFig=False, alpha=0.5, cmap=plt.cm.jet, vmin=-2, vmax=2)
fig = plt.gcf()
ax = fig.axes[0]
img_ax = ax.get_images()[1]
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.2, 0.05, 0.6])
cbar = fig.colorbar(img_ax, cax=cbar_ax) 
plt.suptitle("Log volume change")
plt.show()



Since we see more reds, oranges and yellows than blues, it's clear that the CLARITY brain expanded in most places during registration. We can also plot a histogram of the Jacobian values.


In [49]:
(values, bins) = np.histogram(sitk.GetArrayFromImage(logJDet), bins=100, range=(-2,2))
numVoxels = float(np.prod(logJDet.GetSize()))
plt.plot(bins[:-1], values / numVoxels)
plt.xlabel("$log |D \phi_{10} |$")
plt.ylabel("Probability")
plt.title("Histogram of log volume change")


Out[49]:
<matplotlib.text.Text at 0x7f482a029ad0>

Measuring Distance

Similarly, we can plot the magnitude of the displacement field.


In [50]:
magField = sitk.VectorMagnitude(field)

Like before, since the field was computed on a downsampled grid, we upsample the field magnitude to the size of the deformed input image.


In [51]:
magField = imgResample(magField, spacing=inImg_lddmm.GetSpacing(), size=inImg_lddmm.GetSize())

Now we can overlay the displacement magnitude on the deformed image.


In [52]:
imgShow(inImg_lddmm, vmax=500, newFig=False)
imgShow(magField, newFig=False, alpha=0.5, cmap=plt.cm.jet)
fig = plt.gcf()
ax = fig.axes[0]
img_ax = ax.get_images()[1]
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.2, 0.05, 0.6])
cbar = fig.colorbar(img_ax, cax=cbar_ax) 
plt.suptitle("Distance $|\phi_{10} - id|$ in mm")
plt.show()


Clearly most displacement occurred in the olfactory bulbs. Likewise, let's display a histogram of the distances.


In [53]:
(values, bins) = np.histogram(sitk.GetArrayFromImage(magField), bins=100)
numVoxels = float(np.prod(magField.GetSize()))
plt.plot(bins[:-1], values/numVoxels)
plt.xlabel("$|\phi_{10} - id|$ in mm")
plt.ylabel("Probability")
plt.title("Histogram of distances")


Out[53]:
<matplotlib.text.Text at 0x7f48299f1e50>

Uploading Results

Uploading deformed ARA annotations

Recall that we made a copy of the downloaded CLARITY image.


In [54]:
imgShow(inImg_download, vmax=500)


Recall that we also have the ARA annotations.


In [55]:
imgShow(refAnnoImg, vmax=1000, cmap=randCmap)


Before we can overlay the ARA annotations on the downloaded image we must transform them into its space. Fortunately this is possible, since all spatial transforms in this tutorial are invertible. First we construct an inverse displacement field which transforms the annotations from ARA space to the pre-registration space.
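The affine inversion step can be sketched with numpy, assuming the affine is represented as a matrix A plus a translation t (ndreg's affineInverse may store the parameters differently):

```python
import numpy as np

# The inverse of x -> A @ x + t is y -> Ai @ y + ti with Ai = inv(A),
# ti = -inv(A) @ t.
def affine_inverse(A, t):
    Ai = np.linalg.inv(A)
    return Ai, -Ai @ t

A = np.array([[1.1, 0.0, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])    # hypothetical affine matrix
t = np.array([2.0, -1.0, 0.5])     # hypothetical translation
Ai, ti = affine_inverse(A, t)

x = np.array([3.0, 4.0, 5.0])
roundtrip = Ai @ (A @ x + t) + ti  # should recover x
```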


In [56]:
invAffine = affineInverse(affine)
invAffineField = affineToField(invAffine, refImg.GetSize(), refImg.GetSpacing())
invField = fieldApplyField(invAffineField, invField)
inAnnoImg = imgApplyField(refAnnoImg, invField,useNearest=True, size=inImg_reorient.GetSize())
imgShow(inAnnoImg, vmax=1000, cmap=randCmap)


We're not done yet: we still need to reorient these annotations from RSA to LAI.


In [57]:
inAnnoImg = imgReorient(inAnnoImg, "RSA", "LAI")
imgShow(inAnnoImg, vmax=1000, cmap=randCmap)


Finally we resample the annotations to the spacing and size of the downloaded CLARITY image. Let's look at the overlay.


In [58]:
inAnnoImg = imgResample(inAnnoImg, spacing=inImg_download.GetSpacing(), size=inImg_download.GetSize(), useNearest=True)
imgShow(inImg_download, vmax=500, numSlices=1, newFig=False)
imgShow(inAnnoImg, vmax=1000, cmap=randCmap, alpha=0.2, numSlices=1, newFig=False)


We can upload these annotations at the lowest possible resolution.


In [ ]:
channel = "myAnnotationChannel"
imgUpload(inAnnoImg, inToken, channel, resolution=5)

Uploading deformed CLARITY image

We can also upload the ARA-aligned CLARITY brain back into ndstore.


In [ ]:
token = "myToken"
channel = "myChannel"
imgUpload(inImg_lddmm, token, channel)