In [1]:
%matplotlib inline

Volumetric Registration and Analysis

In this tutorial we align a rat brain to the Waxholm Space atlas. The Waxholm atlas is our reference image: it consists of image data stored in the default "average" channel and corresponding annotations stored in a separate "annotation" channel. We begin by importing the relevant modules.


In [2]:
from ndreg import *
import matplotlib
import ndio.remote.neurodata as neurodata

First we set the server and the user token for that server.


In [3]:
server = "dev.neurodata.io"
userToken = txtRead("../userToken.pem").strip()

Now we download the atlas image.


In [4]:
refToken = "whs2"
refImg = imgDownload(refToken, server=server, userToken=userToken)
refImgOrig = refImg[:,:,:] # keep an unmodified copy of the reference image

Next we'll visualize the image. To ensure that the visualization has good contrast, we'll only show intensity values below the 99th percentile.


In [5]:
refThreshold = imgPercentile(refImg, 0.99)
print(refThreshold)


18379.0509804
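
For reference, imgPercentile is (presumably, judging from its behavior here) just a percentile over the image's voxel intensities. A minimal sketch, assuming ndreg's convention of a fraction in [0,1]; imgPercentileSketch is a hypothetical stand-in, not ndreg's actual code:

import numpy as np
import SimpleITK as sitk

def imgPercentileSketch(img, fraction):
    # Flatten the voxel data and take the requested percentile.
    # np.percentile expects a value in [0, 100], so scale the fraction.
    arr = sitk.GetArrayFromImage(img)
    return np.percentile(arr, fraction * 100)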

When given an image volume, imgShow displays it as a set of evenly spaced z-slices (1st column), y-slices (2nd column) and x-slices (3rd column).


In [6]:
imgShow(refImg, vmax=refThreshold)


Now we download the corresponding annotations.


In [7]:
refAnnoImg = imgDownload(refToken, channel="annotation", server=server, userToken=userToken)
refAnnoImgOrig = refAnnoImg[:,:,:] # keep an unmodified copy of the annotations
imgShow(refAnnoImg)


It's usually better to visualize a set of annotations in color. Let's display the atlas annotations using a random colormap. We begin by creating a 1000x3 array of random values. Since we always want the background (label 0) to be dark, we set the 0th row to [0,0,0].


In [8]:
randValues = np.random.rand(1000,3)
randValues = np.concatenate(([[0,0,0]],randValues))

Now we can display the annotations.


In [9]:
randCmap = matplotlib.colors.ListedColormap(randValues)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap)


Let's overlay these annotations on the atlas image using alpha transparency. To do this we must set newFig to False so that matplotlib appends the annotation data to the current figure instead of creating a new one.


In [10]:
imgShow(refImg, vmax=refThreshold, newFig=False)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap, alpha=0.3, newFig=False)
plt.show()


Downloading input image

ndstore stores the brains at multiple resolutions. Resolution 0 is always the highest resolution; each subsequent resolution takes up 1/4th (under slice scaling) or 1/8th (under isotropic scaling) as many bytes as the previous resolution. We can use ndio's get_metadata method to see a list of available resolutions.


In [11]:
inToken = "branch160121SagLeft"
nd = neurodata(hostname=server, user_token=userToken)
resolutionList = nd.get_metadata(inToken)['dataset']['voxelres'].keys()
lowestResolution = max(int(r) for r in resolutionList) # keys are strings; compare numerically, not lexicographically
print(lowestResolution)


2
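
As a quick sanity check on the scaling described above: if resolution 0 occupies N bytes, then resolution 2 occupies roughly N/16 under slice scaling or N/64 under isotropic scaling. A minimal sketch (bytesAtResolution is a hypothetical helper, not part of ndio):

def bytesAtResolution(bytesAtRes0, resolution, isotropic=False):
    # Slice scaling halves x and y each level (1/4 the bytes);
    # isotropic scaling also halves z (1/8 the bytes).
    factor = 8 if isotropic else 4
    return bytesAtRes0 / factor**resolution

print(bytesAtResolution(1e9, 2))                   # 62500000.0
print(bytesAtResolution(1e9, 2, isotropic=True))   # 15625000.0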

We'll now download the image at that resolution. Depending on your internet connection, downloading may take several minutes.


In [12]:
inImg = imgDownload(inToken, resolution=lowestResolution, server=server, userToken=userToken)

We'll only use intensity values below the 99th percentile in the visualization.


In [13]:
inThreshold = imgPercentile(inImg, 0.99)
imgShow(inImg, vmax=inThreshold)


Reorienting input image

You may have noticed that the input brain is not oriented in the same way as the atlas. Let's look at the atlas.


In [14]:
imgShow(refImg, vmax=refThreshold)


By examining the z-slices in the first column, it's clear that the x-axis goes from the Right to the Left side of the brain. The y-axis goes from Superior to Inferior and the z-axis goes from Anterior to Posterior. Thus it is in RSA orientation. Looking at the input brain...


In [15]:
imgShow(inImg, vmax=inThreshold)


...we see that the x-axis goes from Inferior to Superior, the y-axis goes from Anterior to Posterior and the z-axis goes from Left to Right. Thus it's in IAL orientation. Therefore we reorient the input image from IAL to RSA.


In [16]:
inOrient = "IAL"
refOrient = "RSA"
inImg = imgReorient(inImg, inOrient, refOrient)
imgShow(inImg, vmax=inThreshold)
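
To make the orientation codes concrete: each letter names the anatomical direction at the start of the x, y and z axes, so reorienting amounts to permuting and flipping axes. A minimal sketch of the idea (reorientArray is a hypothetical helper operating on numpy arrays, not ndreg's actual implementation):

import numpy as np

OPPOSITE = {"R": "L", "L": "R", "S": "I", "I": "S", "A": "P", "P": "A"}

def reorientArray(arr, inOrient, outOrient):
    # arr is indexed (z, y, x) as returned by sitk.GetArrayFromImage,
    # while orientation codes name the (x, y, z) axes, so reverse the codes.
    inCode, outCode = inOrient[::-1], outOrient[::-1]
    perm, flips = [], []
    for letter in outCode:
        if letter in inCode:                     # same direction: no flip
            perm.append(inCode.index(letter))
            flips.append(False)
        else:                                    # opposite direction: flip
            perm.append(inCode.index(OPPOSITE[letter]))
            flips.append(True)
    arr = np.transpose(arr, perm)
    for axis, flip in enumerate(flips):
        if flip:
            arr = np.flip(arr, axis=axis)
    return arr

For example, reorientArray(sitk.GetArrayFromImage(inImg), "IAL", "RSA") would transpose the array axes to (1, 2, 0) and flip two of them.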


Compare the above to the atlas. The slices should now correspond with the atlas. Let's store the reoriented image's size and spacing because we'll need them later.


In [17]:
inImgSize_reorient = inImg.GetSize()
inImgSpacing_reorient= inImg.GetSpacing()

Create brain mask of reference image

Our reference image includes tissues outside the brain while our input image does not. Therefore we need to mask out all regions of the reference image that are not part of the brain. This can be done easily using the reference annotations. First we generate a mask of the brain from the non-zero annotation labels.


In [18]:
maskImg1 = sitk.BinaryThreshold(refAnnoImg, 0, 0, 0, 255) # lowerThreshold, upperThreshold, insideValue, outsideValue: label 0 -> 0, all others -> 255
imgShow(maskImg1, vmax=255)


Next we create a mask to exclude the trigeminal nerve (Waxholm label 76), since it lies outside the brain.


In [19]:
maskImg2 = sitk.BinaryThreshold(refAnnoImg, 76, 76, 0, 255) # label 76 -> 0, all others -> 255
imgShow(maskImg2, vmax=255)


Combining these masks removes the nerve from the brain mask, since sitk.Mask zeroes the first image wherever the second is zero.


In [20]:
refMaskImg = sitk.Mask(maskImg1, maskImg2)
imgShow(refMaskImg, vmax=255)


Now we apply the mask to the reference image.


In [21]:
refImg = sitk.Mask(refImg, refMaskImg)
imgShow(refImg, vmax=refThreshold)


Create and apply two-thirds mask of reference image

Our input image covers only about two-thirds of the cortex and excludes most of the cerebellum and olfactory bulb. Therefore we need to exclude these regions from the reference image and annotations as well. We do this by creating an empty image with the same size and spacing as the reference image. We then paste a Region of Interest (ROI) from the reference mask, which excludes these regions, into the empty image.


In [22]:
roiStart = [312,67,188]
roiSize = [183,296,447]

emptyImg = sitk.Image(refMaskImg.GetSize(),refMaskImg.GetPixelIDValue()) # Create an empty image
emptyImg.CopyInformation(refMaskImg) # Copy spacing, origin and direction from reference image
refMaskImg = sitk.Paste(emptyImg, refMaskImg, roiSize, roiStart, roiStart) # destination, source, source size, source index, destination index
imgShow(refMaskImg, vmax=255)


Now we apply this mask to the reference image and annotations.


In [23]:
refImg = sitk.Mask(refImg, refMaskImg)
imgShow(refImg, vmax=refThreshold)



In [24]:
refAnnoImg = sitk.Mask(refAnnoImg, refMaskImg)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap)


Affine Registration

We can finally begin the registration. Ideally we would register at full resolution, but that would be far too computationally expensive for the purposes of this tutorial. Therefore, to save time, we downsample the images.


In [25]:
spacing=[0.25,0.25,0.25]
refImg_ds = imgResample(refImg, spacing=spacing)
imgShow(refImg_ds, vmax=refThreshold)
print(refImg_ds.GetSize())


(80, 80, 160)
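
imgResample is presumably a thin wrapper around SimpleITK's Resample. A minimal sketch of the size arithmetic and the call, assuming linear interpolation and an identity transform (imgResampleSketch is a hypothetical stand-in, not ndreg's actual code):

import numpy as np
import SimpleITK as sitk

def imgResampleSketch(img, spacing):
    # New grid size = physical extent / new spacing, rounded up.
    oldSize = np.array(img.GetSize(), dtype=float)
    oldSpacing = np.array(img.GetSpacing())
    newSize = np.ceil(oldSize * oldSpacing / np.array(spacing)).astype(int)
    return sitk.Resample(img, [int(s) for s in newSize], sitk.Transform(),
                         sitk.sitkLinear, img.GetOrigin(), spacing,
                         img.GetDirection(), 0.0, img.GetPixelID())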

In [26]:
inImg_ds = imgResample(inImg, spacing=spacing)
imgShow(inImg_ds, vmax=inThreshold)


We will initialize the affine registration using the reference image's region of interest.


In [27]:
translation = -np.array(roiStart)*np.array(refImg.GetSpacing())
inAffine = [1.25,0,0,0,1,0,0,0,1]+translation.tolist()
print(inAffine)


[1.25, 0, 0, 0, 1, 0, 0, 0, 1, -12.168, -2.613, -7.332]

We check that our initial affine is reasonable.


In [28]:
imgShow(imgApplyAffine(inImg_ds, inAffine, size=refImg_ds.GetSize(), spacing=refImg_ds.GetSpacing()), vmax=inThreshold)
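
For reference, the 12 numbers appear to follow SimpleITK's affine convention: the first nine are the row-major 3x3 matrix and the last three are the translation in physical units. A sketch of the equivalent direct SimpleITK call (an assumption about what imgApplyAffine does internally, not ndreg's actual code):

import SimpleITK as sitk

affineT = sitk.AffineTransform(3)                 # 3D affine transform
affineT.SetMatrix([float(v) for v in inAffine[:9]])
affineT.SetTranslation([float(v) for v in inAffine[9:]])

# Resample the input image onto the reference grid under this transform.
imgShow(sitk.Resample(inImg_ds, refImg_ds, affineT, sitk.sitkLinear, 0.0),
        vmax=inThreshold)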


Now we compute the affine transform. Unlike in the basic registration tutorial, we'll use imgAffineComposite instead of imgAffine. imgAffine simply computes the affine transform between the input and reference images, while imgAffineComposite computes a translation, then a rigid, then an affine transformation, and returns the composition of those three transforms. We use Mutual Information since the input and reference images have very different intensity profiles. We also enable the verbose option so that each iteration is printed.


In [29]:
affine = imgAffineComposite(inImg_ds, refImg_ds, iterations=100, useMI=True, verbose=True, inAffine=inAffine)


Step translation:
0.	 -0.103464458779
1.	 -0.107444328838
...
59.	 -0.172556823945
60.	 -0.172632810637
Step scale:
0.	 -0.0837171494795
...
8.	 -0.089049142328
Step rigid:
0.	 -0.0876896429586
...
12.	 -0.0940721668615
Step affine:
0.	 -0.0908957706569
1.	 -0.0970488273948
...
98.	 -0.145374277174
99.	 -0.145545313723

Now we apply the affine transform to the input image.


In [30]:
inImg = imgApplyAffine(inImg, affine, size=refImg.GetSize(), spacing=refImg.GetSpacing())
imgShow(inImg, vmax=inThreshold)


We can evaluate the affine registration by generating a checkerboard of the reference and input images. In this method the input image is placed on the black squares of the metaphorical checkerboard while the reference image is placed on the red squares. Ideally, anatomical structures should line up across squares.


In [31]:
imgShow(imgChecker(inImg, refImg), vmax=refThreshold)
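
SimpleITK ships a checkerboard filter, and imgChecker is presumably similar in spirit. A minimal sketch, assuming both images share the same grid and pixel type:

import SimpleITK as sitk

# Interleave 4x4x4 blocks of the two images; the inputs must share
# size, spacing, origin, direction and pixel type.
imgShow(sitk.CheckerBoard(inImg, refImg, (4, 4, 4)), vmax=refThreshold)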


LDDMM registration

Now we run LDDMM registration. Here we use imgMetamorphosisComposite. Unlike imgMetamorphosis, introduced in the 2D registration notebook, this function runs LDDMM in multiple steps using the alpha values specified in alphaList. Larger alpha values favor smoother, more global deformations, so decreasing alpha step by step refines progressively finer detail. The field and invField outputs are the composition of all steps. Once again we use a Mutual Information cost because the input brain and reference Waxholm have very different intensity profiles.


In [32]:
inImg_ds = imgResample(inImg, spacing=spacing)
(field, invField) = imgMetamorphosisComposite(inImg_ds, refImg_ds, alphaList=[0.05,0.02,0.01],
                                              scaleList = [0.5,0.5,0.5], useMI=True, iterations=100, verbose=True)


Step 0: alpha=0.05, beta=0.05, scale=0.5
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
0.	-8.34763e+10, 4.80999, 0, -8.34763e+10 (99.0842%), 1.100000e-03
1.	-8.3823e+10, 13.0841, 0, -8.3823e+10 (98.6523%), 1.210000e-03
2.	-8.40038e+10, 23.8353, 0, -8.40038e+10 (98.427%), 1.331000e-03
3.	-8.41807e+10, 37.9026, 0, -8.41807e+10 (98.2066%), 1.464100e-03
4.	-8.44332e+10, 57.8959, 0, -8.44332e+10 (97.8922%), 1.610510e-03
5.	-8.45812e+10, 83.3511, 0, -8.45812e+10 (97.7078%), 1.771561e-03
6.	-8.47156e+10, 117.313, 0, -8.47156e+10 (97.5403%), 1.948717e-03
7.	-8.47794e+10, 161.042, 0, -8.47794e+10 (97.4609%), 2.143589e-03
8.	-8.50895e+10, 215.708, 0, -8.50895e+10 (97.0746%), 2.357948e-03
9.	-8.54271e+10, 296.925, 0, -8.54271e+10 (96.654%), 2.593742e-03
10.	-8.56986e+10, 393.314, 0, -8.56986e+10 (96.3158%), 2.853117e-03
11.	-8.59659e+10, 520.321, 0, -8.59659e+10 (95.9828%), 3.138428e-03
12.	-8.63341e+10, 678.325, 0, -8.63341e+10 (95.5241%), 3.452271e-03
13.	-8.66976e+10, 893.559, 0, -8.66976e+10 (95.0712%), 3.797498e-03
14.	-8.70204e+10, 1150.13, 0, -8.70204e+10 (94.6691%), 4.177248e-03
15.	-8.73724e+10, 1465.39, 0, -8.73724e+10 (94.2306%), 4.594973e-03
16.	-8.76197e+10, 1875.9, 0, -8.76197e+10 (93.9226%), 5.054470e-03
17.	-8.78926e+10, 2337.28, 0, -8.78926e+10 (93.5827%), 5.559917e-03
18.	-8.85633e+10, 3002.74, 0, -8.85633e+10 (92.7471%), 6.115909e-03
19.	-8.9061e+10, 3781.53, 0, -8.9061e+10 (92.127%), 6.727500e-03
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
20.	-8.99961e+10, 4900.6, 0, -8.99961e+10 (90.9622%), 7.400250e-03
21.	-9.08673e+10, 6121.33, 0, -9.08673e+10 (89.8769%), 8.140275e-03
22.	-9.18338e+10, 7764.37, 0, -9.18338e+10 (88.6729%), 8.954302e-03
23.	-9.27616e+10, 9624.38, 0, -9.27616e+10 (87.5171%), 9.849733e-03
24.	-9.39662e+10, 12115.2, 0, -9.39662e+10 (86.0165%), 1.083471e-02
25.	-9.50793e+10, 14951.4, 0, -9.50794e+10 (84.6298%), 1.191818e-02
26.	-9.60019e+10, 18364.8, 0, -9.60019e+10 (83.4805%), 1.310999e-02
27.	-9.70487e+10, 22313.7, 0, -9.70488e+10 (82.1764%), 1.442099e-02
28.	-9.83411e+10, 28050, 0, -9.83411e+10 (80.5665%), 1.586309e-02
29.	-9.93129e+10, 32612.1, 0, -9.9313e+10 (79.3558%), 1.744940e-02
30.	-9.96928e+10, 38248, 0, -9.96928e+10 (78.8825%), 1.919434e-02
31.	-1.01233e+11, 41040.6, 0, -1.01233e+11 (76.9639%), 2.111378e-02
32.	-1.01429e+11, 41402.9, 0, -1.01429e+11 (76.7191%), 2.903144e-03
33.	-1.01501e+11, 41923.1, 0, -1.01501e+11 (76.6305%), 3.193459e-03
34.	-1.01573e+11, 42574.6, 0, -1.01573e+11 (76.5408%), 3.512805e-03
35.	-1.01579e+11, 42615.2, 0, -1.01579e+11 (76.5333%), 4.830106e-04
36.	-1.01579e+11, 42615.2, 0, -1.01579e+11 (76.5333%), 2.533492e-10
E = -1.01579e+11 (76.5333%)
Length = 191.14
Time = 597.024s (9.9504m)

Step 1: alpha=0.02, beta=0.05, scale=0.5
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
0.	-1.02165e+11, 6.82975, 0, -1.02165e+11 (98.8835%), 1.100000e-03
1.	-1.02166e+11, 6.76503, 0, -1.02166e+11 (98.8826%), 3.781250e-05
2.	-1.02166e+11, 6.76503, 0, -1.02166e+11 (98.8826%), 2.538681e-09
3.	-1.02166e+11, 6.76503, 0, -1.02166e+11 (98.8826%), 6.981373e-10
E = -1.02166e+11 (98.8826%)
Length = 2.81291
Time = 156.409s (2.60682m)

Step 2: alpha=0.01, beta=0.05, scale=0.5
	E, E_velocity, E_rate, E_image (E_image %), LearningRate
0.	-1.03243e+11, 24.7271, 0, -1.03243e+11 (98.3391%), 1.100000e-03
1.	-1.03245e+11, 25.025, 0, -1.03245e+11 (98.3371%), 1.512500e-04
E = -1.03245e+11 (98.3371%)
Length = 5.08333
Time = 136.431s (2.27385m)

Now we apply the displacement field to the input image.


In [33]:
outImg = imgApplyField(inImg, field, size=refImg.GetSize())
imgShow(outImg, vmax=inThreshold)
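
imgApplyField presumably wraps SimpleITK's displacement-field machinery. A sketch of the equivalent direct calls (an assumption about ndreg's internals; the Cast makes a copy because DisplacementFieldTransform takes ownership of the image passed to it):

import SimpleITK as sitk

# Build a transform from the displacement field and resample the
# input image onto the reference grid.
dispT = sitk.DisplacementFieldTransform(sitk.Cast(field, sitk.sitkVectorFloat64))
imgShow(sitk.Resample(inImg, refImg, dispT, sitk.sitkLinear, 0.0),
        vmax=inThreshold)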


Evaluation using checkerboard

We can evaluate the deformable registration using a checkerboard image.


In [34]:
imgShow(imgChecker(outImg, refImg, useHM=True), vmax=refThreshold)


Evaluation using annotations

We can also evaluate the registration by overlaying the Waxholm annotations over the deformed input image.


In [35]:
imgShow(outImg, vmax=inThreshold, newFig=False)
imgShow(refAnnoImg, vmax=1000, cmap=randCmap, alpha=0.2, newFig=False)
plt.show()


Uploading Results

Uploading deformed input image

We can now upload the Waxholm-aligned input brain back into ndstore.


In [36]:
outToken = "branch160121ToWhs2"
imgUpload(outImg, outToken, server=server, userToken=userToken)

Visualizing deformed input image in ndviz

We can now visualize the deformed input image on the web in NeuroData Visualization (ndviz). The function vizUrl generates a link which can be visited to examine the data.


In [38]:
tokenList = [outToken, refToken]
channelList = ["","annotation"]
url = vizUrl(tokenList, channelList, server, userToken)
print(url)


https://mri.neurodata.io/ndviz/#!{'layers':{'annotation?neariso=false':{'source':'ndstore://https://dev.neurodata.io/whs2/annotation?neariso=false'_'type':'segmentation'}_'image?neariso=false':{'source':'ndstore://https://dev.neurodata.io/branch160121ToWhs2/image?neariso=false'_'type':'image'}}_'layout':'4panel'}

Uploading deformed atlas annotations


In [ ]:
spacing_ds = invField.GetSpacing() # spacing of the downsampled displacement field
size_ds = np.ceil(np.array(refAnnoImg.GetSize())*np.array(refAnnoImg.GetSpacing())/np.array(spacing_ds)) # reference extent in voxels at that spacing
size_ds = list(size_ds.astype(int))

Before we can overlay the Waxholm annotations on the downloaded image, we must transform them to its space. Fortunately this can be done, since all spatial transforms in this tutorial are invertible. First we construct an inverse displacement field which transforms the annotations from the Waxholm space to the space before registration.


In [ ]:
invAffine = affineInverse(affine) # invert the affine transform
invAffineField = affineToField(invAffine, size_ds, spacing_ds) # express it as a displacement field
invField2 = fieldApplyField(invAffineField, invField) # compose with the inverse LDDMM field
inAnnoImg = imgApplyField(refAnnoImg, invField2, useNearest=True, # nearest-neighbor preserves integer labels
                          size=inImgSize_reorient, spacing=inImgSpacing_reorient)
inAnnoThreshold = imgPercentile(inAnnoImg, 0.99)
imgShow(inAnnoImg, vmax=inAnnoThreshold)

We're not done yet. We still need to reorient these annotations to their original orientation.


In [ ]:
inAnnoImg = imgReorient(inAnnoImg, refOrient, inOrient)
imgShow(inAnnoImg, vmax=inAnnoThreshold)

In [ ]:
imgWrite(inAnnoImg,"/cis/project/clarity/data/audrey/inAnno.img")

We can upload these annotations at the lowest possible resolution.


In [ ]:
outToken = "ara3_to_AutA"
outChannel = "annotation_draft"
imgUpload(inAnnoImg, outToken, outChannel, resolution=5)