In [1]:
%matplotlib inline

Volumetric Registration and Analysis

In this tutorial we align a partial mouse brain to the Allen Reference Atlas (ARA). The ARA is our reference image; it consists of image data stored in the default "average" channel and corresponding annotations stored in a separate "annotation" channel. We begin by importing the relevant modules.


In [2]:
from ndreg import *
import matplotlib
import ndio.remote.neurodata as neurodata

We define the server and our user token


In [3]:
server = "dev.neurodata.io"
userToken = txtRead("userToken.pem").strip()

First we'll download the atlas image


In [4]:
refToken = "ara3"
refImg = imgDownload(refToken, channel="average", server=server, userToken=userToken)

Next we'll visualize the image. To ensure that the visualization has good contrast we'll only show intensity values below the 99th percentile.


In [5]:
refThreshold = imgPercentile(refImg, 0.99)
print(refThreshold)


269.129411765

When given an image volume, imgShow displays it as a set of evenly spaced z-slices (1st column), y-slices (2nd column), and x-slices (3rd column).


In [6]:
imgShow(refImg, vmax=refThreshold)


Now we download the corresponding annotations


In [7]:
refAnnoImg = imgDownload(refToken, channel="annotation", server=server, userToken=userToken)
refAnnoImgOrig = refAnnoImg[:,:,:]
imgShow(refAnnoImg, vmax=1000)


It's usually better to visualize a set of annotations in color. Let's display the atlas annotations using a random colormap. We begin by creating a 1000x3 array of random values. Since we always want the background (label 0) to be dark, we make the 0th row [0,0,0].


In [8]:
randValues = np.random.rand(1000,3)
randValues = np.concatenate(([[0,0,0]],randValues))

Now we can display the annotations.


In [9]:
randCmap = matplotlib.colors.ListedColormap(randValues)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap)


Let's overlay these annotations on the atlas image using alpha transparency. To do this we must set newFig to False so that matplotlib appends the annotation data to the current figure instead of creating a new one.


In [10]:
imgShow(refImg, vmax=refThreshold, newFig=False)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap, alpha=0.3, newFig=False)
plt.show()


Downloading input image

ndstore stores the brains at multiple resolutions. Resolution 0 is always the highest resolution; each subsequent resolution takes up 1/4 (under slice scaling) or 1/8 (under isotropic scaling) as many bytes as the previous resolution. We can use ndio's get_metadata method to see a list of available resolutions.


In [11]:
inToken = "s275"
nd = neurodata(hostname="dev.neurodata.io", user_token=userToken)
print(nd.get_metadata(inToken)['dataset']['imagesize'])


{u'10': [2, 2, 1114], u'1': [804, 979, 1114], u'0': [1608, 1957, 1114], u'3': [201, 245, 1114], u'2': [402, 490, 1114], u'5': [51, 62, 1114], u'4': [101, 123, 1114], u'7': [13, 16, 1114], u'6': [26, 31, 1114], u'9': [4, 4, 1114], u'8': [7, 8, 1114]}
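As a quick check of that scaling, we can compute the voxel count at each resolution level from the imagesize metadata above (a minimal sketch that simply reuses the nd handle and inToken from the previous cell):

imageSize = nd.get_metadata(inToken)['dataset']['imagesize']
for res in sorted(imageSize, key=int):
    print("resolution %s: %d voxels" % (res, np.prod(imageSize[res])))
# Under slice scaling only x and y are halved at each level,
# so each level holds roughly 1/4 as many voxels as the previous one.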

Resolution 1 is a reasonable size, so we'll download the image at that resolution. Depending on your internet connection, downloading may take several minutes.


In [12]:
inImg = imgDownload(inToken, resolution=1, userToken=userToken, server=server)
inImg.SetSpacing(np.array(inImg.GetSpacing())*1000) ###
inImgOrig = inImg[:,:,:]

We'll only use intensity values below the 95th percentile in the visualization.


In [13]:
inImg = inImgOrig[:,:,:]
inThreshold = imgPercentile(inImg, 0.95)
imgShow(inImg, vmax=inThreshold)


Reorienting input image

You may have noticed that the input brain is not oriented in the same way as the atlas. Let's look at the atlas.


In [14]:
imgShow(refImg, vmax=refThreshold)


By examining the z-slices in the first column, it's clear that the x-axis goes from the Right to the Left side of the brain. The y-axis goes from Superior to Inferior and the z-axis goes from Anterior to Posterior. Thus it is in RSA orientation. Looking at the input brain...


In [15]:
imgShow(inImg, vmax=inThreshold)


...we see that the x-axis goes from Inferior to Superior, the y-axis goes from Anterior to Posterior, and the z-axis goes from Left to Right. Thus it's in IAL orientation. Therefore we reorient the input image from IAL to RSA.
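For intuition, a reorientation between three-letter codes like IAL and RSA is just a permutation (and possible flipping) of the image axes. The helper below is a hypothetical sketch of that bookkeeping, not ndreg's imgReorient implementation, which the next cell uses:

# Opposite anatomical directions; a matching letter means the axis keeps its
# direction, an opposite letter means the axis must be flipped.
opposite = {"R": "L", "L": "R", "S": "I", "I": "S", "A": "P", "P": "A"}

def orientationMap(inOrient, refOrient):
    perm, flips = [], []
    for letter in refOrient:
        if letter in inOrient:
            perm.append(inOrient.index(letter))
            flips.append(False)
        else:
            perm.append(inOrient.index(opposite[letter]))
            flips.append(True)
    return perm, flips

print(orientationMap("IAL", "RSA"))  # ([2, 0, 1], [True, True, False])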


In [16]:
inOrient = "IAL"
refOrient = "RSA"
inImg = imgReorient(inImg, inOrient, refOrient)
imgShow(inImg, vmax=inThreshold)


Compare the above to the atlas; the slices should now correspond. Let's make a copy of the reoriented image's size and spacing because we'll need them later.


In [17]:
inImgSize_reorient = inImg.GetSize()
inImgSpacing_reorient= inImg.GetSpacing()

Downsample images

Now we downsample the input and reference images to a more manageable size


In [18]:
spacing = [0.1,0.1, 0.1]
inImg_ds = imgResample(inImg, spacing)
imgShow(inImg_ds, vmax=inThreshold)



In [19]:
refImg_ds = imgResample(refImg, spacing)
imgShow(refImg_ds, vmax=refThreshold)
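As a quick sanity check, we can print the sizes and spacings before and after downsampling (just the standard SimpleITK accessors):

print(inImg.GetSize(), inImg.GetSpacing())
print(inImg_ds.GetSize(), inImg_ds.GetSpacing())
print(refImg_ds.GetSize(), refImg_ds.GetSpacing())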



In [20]:
imgWrite(inImg_ds,"/cis/project/clarity/data/ailey/s275_ch0_rsa_100um.img")

Create and apply mask of reference image

Our input image consists of only part of the brain. Therefore we mask the reference image to match the input image. First we define a region of interest starting at the following point and with the following size, both in mm.


In [21]:
roiStart = [5.4, 1.2, 2.1]
roiSize = [4.5,6.5,7.5]

We then convert these values from mm to voxels.


In [22]:
roiStartVoxel = (roiStart / np.array(spacing)).astype('uint16').tolist()
print(roiStartVoxel)
roiSizeVoxel = (roiSize / np.array(spacing)).astype('uint16').tolist()
print(roiSizeVoxel)


[54, 11, 21]
[45, 65, 75]

We create a region of interest filled with the value 255.


In [23]:
roiImg = sitk.Image(roiSizeVoxel,sitk.sitkUInt8)
roiImg += 255

We then paste this into an empty image to create a mask of the reference image


In [24]:
emptyImg = sitk.Image(refImg_ds.GetSize(),sitk.sitkUInt8) # Create an empty image
emptyImg.CopyInformation(refImg_ds) # Copy spacing, origin and direction from reference image
refMask = sitk.Paste(emptyImg, roiImg, roiSizeVoxel, [0,0,0], roiStartVoxel)
imgShow(refMask, vmax=255)


Now we apply this mask to our downsampled reference image


In [25]:
refImg_ds = sitk.Mask(refImg_ds, refMask)
imgShow(refImg_ds, vmax=refThreshold)


Create input image mask

Notice how the input image has extremely bright regions.


In [26]:
imgShow(inImg_ds, vmax=inThreshold)


This will likely interfere with the registration, so we create a registration mask which excludes those regions.


In [27]:
threshold = imgPercentile(inImg_ds,0.95)
inMask_ds = sitk.BinaryThreshold(inImg_ds, 0, threshold, 255, 0)
imgShow(inMask_ds, vmax=255)
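To get a sense of how much of the volume the mask excludes, we can pull it into a numpy array (a small check; the exact fraction depends on the data):

maskArray = sitk.GetArrayFromImage(inMask_ds)
print("Fraction of voxels excluded: %.3f" % (maskArray == 0).mean())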


Affine Registration

We can finally begin the registration. We initialize the affine registration using the reference image's region of interest.


In [28]:
translation = -np.array(roiStart)
inAffine = [1.2,0,0,0,1.2,0,0,0,1]+translation.tolist()
print(inAffine)
imgShow(imgApplyAffine(inImg_ds, inAffine, size=refImg_ds.GetSize()),vmax = inThreshold)


[1.2, 0, 0, 0, 1.2, 0, 0, 0, 1, -5.4, -1.2, -2.1]
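The 12 parameters are the nine row-major entries of a 3x3 matrix followed by a translation vector in mm, matching the construction above, so applying the transform to a physical point is a matrix-vector product plus an offset. A minimal numpy illustration with an arbitrary point:

A = np.array(inAffine[:9]).reshape(3, 3)  # linear part
t = np.array(inAffine[9:])                # translation in mm
point = np.array([5.4, 1.2, 2.1])         # an arbitrary physical point
print(A.dot(point) + t)                   # where the affine maps it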

Now we compute the affine transform. Unlike in the basic registration tutorial, we'll use imgAffineComposite instead of imgAffine. imgAffine simply computes the affine transform between the input and reference images. imgAffineComposite computes a translation, then a rigid, then an affine transformation; its output is the composition of those three transforms. We use Mutual Information since the input and reference images have very different intensity profiles. We also enable the verbose option so that each iteration is printed.


In [29]:
affine = imgAffineComposite(inImg_ds, refImg_ds, inMask=inMask_ds, iterations=100, useMI=True, verbose=True, inAffine=inAffine)


Step translation:
0.	 -0.13711907421
1.	 -0.149089889635
2.	 -0.159700654271
3.	 -0.173569965532
4.	 -0.188802284259
5.	 -0.19985600114
6.	 -0.216204588758
7.	 -0.230122669931
8.	 -0.238205848542
9.	 -0.254666612266
10.	 -0.270171352534
11.	 -0.280129016567
12.	 -0.290959520949
13.	 -0.300725570743
14.	 -0.3070526028
15.	 -0.311334075232
16.	 -0.3188772023
17.	 -0.317170413782
18.	 -0.317193429738
19.	 -0.317325753143
20.	 -0.317336864534
21.	 -0.319987584134
22.	 -0.320009228661
23.	 -0.317360317429
Step scale:
0.	 -0.320740388405
1.	 -0.30869806795
2.	 -0.344844480324
3.	 -0.328265554837
4.	 -0.339438804667
5.	 -0.342117888332
6.	 -0.346916436572
7.	 -0.341504179411
8.	 -0.342321904462
9.	 -0.342294394854
10.	 -0.342332294462
Step rigid:
0.	 -0.335756851832
1.	 -0.245054167008
2.	 -0.331640830056
3.	 -0.332663009582
4.	 -0.345705984278
5.	 -0.322919870832
6.	 -0.334706487428
7.	 -0.342869831243
8.	 -0.342444291122
9.	 -0.343234211825
Step affine:
0.	 -0.351422227993
1.	 -0.28820534517
2.	 -0.362538031993
3.	 -0.352405997291
4.	 -0.375509354676
5.	 -0.380260054531
6.	 -0.380019786873
7.	 -0.383536630018
8.	 -0.383379691767
9.	 -0.384122934183
10.	 -0.384508663904
11.	 -0.384969658855
12.	 -0.385141467831
13.	 -0.385434554946
14.	 -0.385804506765
15.	 -0.386086239332
16.	 -0.38639124961
17.	 -0.386769039986
18.	 -0.387160186886
19.	 -0.387566526059
20.	 -0.387815501146
21.	 -0.387889877297
22.	 -0.387807260551
23.	 -0.387987727476
24.	 -0.387910835125
25.	 -0.388102617402
26.	 -0.388141080805
27.	 -0.388235518641
28.	 -0.388402509893
29.	 -0.388511358555
30.	 -0.388646805588
31.	 -0.388834639062
32.	 -0.389004586772
33.	 -0.389166454518
34.	 -0.389317140512
35.	 -0.389427136398
36.	 -0.389486286271
37.	 -0.389564354961
38.	 -0.38964172551
39.	 -0.38960784885
40.	 -0.389633240045
41.	 -0.389647746897
42.	 -0.389728549093
43.	 -0.389699799978
44.	 -0.389824688551
45.	 -0.38982289209
46.	 -0.390044139644

Now we apply the affine transform to the input image and mask


In [30]:
inImg_affine = imgApplyAffine(inImg, affine, size=refImg.GetSize(), spacing=refImg.GetSpacing())
imgShow(inImg_affine, vmax=inThreshold)


We can evaluate the affine registration by generating a checkerboard of the reference and input images. In this method the input image is placed on the black squares of the metaphorical checkerboard while the reference image is placed on the red squares. Ideally, anatomical structures should line up across squares.


In [31]:
inImg_ds = imgResample(inImg_affine, spacing=spacing, size=refImg_ds.GetSize())
imgShow(imgChecker(inImg_ds, refImg_ds), vmax=refThreshold)
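imgChecker is ndreg's helper for this; SimpleITK also ships an equivalent filter, so a quick alternative (a sketch using sitk.CheckerBoard with an arbitrary 4x4x4 pattern) is:

# CheckerBoard requires both inputs to share pixel type and geometry,
# so cast to a common type first.
checkerImg = sitk.CheckerBoard(sitk.Cast(inImg_ds, sitk.sitkFloat32),
                               sitk.Cast(refImg_ds, sitk.sitkFloat32),
                               [4, 4, 4])
imgShow(checkerImg, vmax=refThreshold)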


We also apply the affine to the input mask


In [ ]:
inMask_ds = imgApplyAffine(inMask_ds, affine, useNearest=True, size=refImg_ds.GetSize())
imgShow(inMask_ds, vmax=255)
imgShow(inImg_ds, vmax=inThreshold)


LDDMM registration

Now we run LDDMM registration. Here we use imgMetamorphosisComposite. Unlike imgMetamorphosis introduced in the 2D registration notebook, this function runs LDDMM in multiple steps using the alpha values specified in alphaList. The field and invField outputs are the composition of all steps. Once again we use a Mutual Information cost because the input brain and reference image have very different intensity profiles.


In [ ]:
inImg_ds = imgResample(inImg_affine, spacing=spacing, size=refImg_ds.GetSize())
(field, invField) = imgMetamorphosisComposite(inImg_ds, refImg_ds, inMask=inMask_ds, alphaList=[0.1, 0.05,0.02],
                                              scaleList = [1.0, 1.0,1.0], useMI=True, iterations=100, verbose=True)


Step 0: alpha=0.1, beta=0.05, scale=1.0
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
0.	-4.66649e+09, 0.25308, 0, -4.66649e+09 (99.1646%), 1.100000e-03
1.	-4.85205e+09, 0.60919, 0, -4.85205e+09 (98.7616%), 1.210000e-03
2.	-4.98868e+09, 1.20675, 0, -4.98868e+09 (98.4649%), 1.331000e-03
3.	-5.09197e+09, 2.23357, 0, -5.09197e+09 (98.2406%), 1.464100e-03
4.	-5.16089e+09, 3.81516, 0, -5.16089e+09 (98.0909%), 1.610510e-03
5.	-5.22399e+09, 5.86186, 0, -5.22399e+09 (97.9538%), 1.771561e-03
6.	-5.22705e+09, 6.14551, 0, -5.22705e+09 (97.9472%), 2.435896e-04
7.	-5.22811e+09, 6.22416, 0, -5.22811e+09 (97.9449%), 6.698715e-05
8.	-5.22838e+09, 6.23496, 0, -5.22838e+09 (97.9443%), 9.210733e-06
9.	-5.22839e+09, 6.24091, 0, -5.22839e+09 (97.9443%), 5.065903e-06
10.	-5.22839e+09, 6.24172, 0, -5.22839e+09 (97.9443%), 6.965617e-07
11.	-5.2284e+09, 6.24217, 0, -5.2284e+09 (97.9443%), 3.831089e-07
12.	-5.2284e+09, 6.24229, 0, -5.2284e+09 (97.9443%), 1.053550e-07
13.	-5.2284e+09, 6.24236, 0, -5.2284e+09 (97.9443%), 5.794523e-08
14.	-5.2284e+09, 6.2424, 0, -5.2284e+09 (97.9443%), 3.186987e-08
15.	-5.2284e+09, 6.24241, 0, -5.2284e+09 (97.9443%), 8.764215e-09
16.	-5.2284e+09, 6.24242, 0, -5.2284e+09 (97.9443%), 4.820318e-09
17.	-5.2284e+09, 6.24242, 0, -5.2284e+09 (97.9443%), 6.627938e-10
18.	-5.2284e+09, 6.24242, 0, -5.2284e+09 (97.9443%), 1.822683e-10
E = -5.22837e+09 (97.9443%)
Length = 5.1692
Time = 1962.8s (32.7133m)

Step 1: alpha=0.05, beta=0.05, scale=1.0

Now we apply the displacement field


In [38]:
inImg_lddmm = imgApplyField(inImg_affine, field, size=refImg.GetSize())
imgShow(inImg_lddmm, vmax=inThreshold)


Evaluating LDDMM registration

Now we evaluate the deformable registration using a checkerboard image


In [39]:
inImg_ds = imgResample(inImg_lddmm, spacing=spacing, size=refImg_ds.GetSize())
imgShow(imgChecker(inImg_ds, refImg_ds), vmax=refThreshold)


We can also evaluate the registration by overlaying the atlas annotations over the deformed input image.


In [40]:
imgShow(inImg_lddmm, vmax=inThreshold, newFig=False)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap, alpha=0.2, newFig=False)
plt.show()


Uploading Results

Uploading deformed input image

We can now upload the atlas-aligned input brain back into ndstore.


In [ ]:
outToken = inToken + "_to_" + refToken
imgUpload(inImg_lddmm, outToken, server=server, userToken=userToken)
### imgWrite(inImg_lddmm, "/cis/project/clarity/data/ailey/"+outToken+"_new.img")

Uploading deformed atlas annotations


In [41]:
spacing_ds = invField.GetSpacing()
# Size of the annotation volume when resampled at the displacement field's
# spacing: physical extent (size * spacing) divided by the new spacing.
size_ds = np.ceil(np.array(refAnnoImg.GetSize())*np.array(refAnnoImg.GetSpacing())/np.array(spacing_ds))
size_ds = list(size_ds.astype(int))

Before we can overlay the atlas annotations on the downloaded image we must transform them to its space. Fortunately this can be done, since all spatial transforms in this tutorial are invertible. First we construct an inverse displacement field which transforms the annotations from the reference space to the input space as it was before registration.


In [42]:
invAffine = affineInverse(affine)
invAffineField = affineToField(invAffine, size_ds, spacing_ds)
invField2 = fieldApplyField(invAffineField, invField)
inAnnoImg = imgApplyField(refAnnoImg, invField2,useNearest=True, size=inImgSize_reorient, spacing=inImgSpacing_reorient)
inAnnoThreshold = imgPercentile(inAnnoImg,0.99)
imgShow(inAnnoImg, vmax=inAnnoThreshold)
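For reference, inverting a 12-parameter affine of this form amounts to inverting the matrix part and adjusting the translation (if p' = A p + t, then p = A^-1 p' - A^-1 t). A small numpy sketch of what affineInverse presumably returns:

A = np.array(affine[:9]).reshape(3, 3)
t = np.array(affine[9:])
Ainv = np.linalg.inv(A)
print(list(Ainv.flatten()) + list(-Ainv.dot(t)))  # should match affineInverse(affine) up to numerical precision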


We're not done yet. We still need to reorient these annotations to the input image's original orientation.


In [43]:
inAnnoImg = imgReorient(inAnnoImg, refOrient, inOrient)
imgShow(inAnnoImg, vmax=inAnnoThreshold)


We can upload these annotations at the lowest possible resolution.


In [ ]:
outToken = "ara3_to_AutA"
outChannel = "annotation_draft"
imgUpload(inAnnoImg, outToken, outChannel, resolution=5, server=server, userToken=userToken)