I am completing this algorithms MD to the extent of my current knowledge. I will be meeting with Kwame on Tuesday 1/17 in order to fill in the gaps in my knowledge and obtain a more comprehensive understanding of the algorithm.
NDReg is an algorithm designed to calculate one or more coordinate transformations that align two images of the same scene. For our specific application, NDReg will be used to track specific synapses over time, so that statistics can be generated about the subject's synaptic activity.
In [3]:
####################################################
###PSEUDOCODE WILL GO HERE ONCE I MEET WITH KWAME###
####################################################
I will display the input and base image to be passed to the register function, and the result of applying the register function's output to the input image.
I will use the following loss function (sum of squared errors) to evaluate the success of the algorithm: $\sum_{voxels}(inputImage[voxel] - baseImage[voxel])^2$
I will use the average percent reduction in squared error over a series of tests to summarize the quantitative performance of the algorithm.
I will compare the results of the register function to a function which generates and applies a random rigid body transformation, in order to demonstrate how well the algorithm performs compared to random guessing.
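As a concrete sketch of these metrics, assuming nothing beyond NumPy (the volumes here are tiny toy arrays, and `finalErr` is a made-up placeholder rather than a real registration result):

```python
import numpy as np

# toy stand-ins for the input and base volumes; the real data comes from ndreg
inputImage = np.zeros((4, 4, 4))
baseImage = np.ones((4, 4, 4))

# sum of squared errors over all voxels
initialErr = np.sum((inputImage - baseImage)**2)  # 64 voxels * 1^2 = 64.0

# placeholder SSE after registration, for illustration only
finalErr = 16.0

# fractional reduction in squared error; averaged over a series of tests,
# this is the summary statistic described above
reduction = (initialErr - finalErr) / initialErr  # (64 - 16) / 64 = 0.75
```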
In [4]:
import sys
sys.path.insert(0, '../../../ndreg/')
import ndreg
def register(inImg, baseImg):
    return ndreg.imgAffineComposite(inImg,
                                    baseImg,
                                    iterations=200,
                                    useMI=False,
                                    verbose=False)
In [5]:
refImg = ndreg.imgDownload('ara_ccf2')
refAnnoImg = ndreg.imgDownload('ara_ccf2', channel="annotation")
inImg = ndreg.imgDownload('eacker01', 'r', 5)
In [6]:
#reorient and rescale the test image
inImgReor = ndreg.imgReorient(inImg, 'lsp', 'rsa')
inImg_ds = ndreg.imgResample(inImgReor, spacing=[.25, .25, .25])
refImg_ds = ndreg.imgResample(refImg, spacing=[.25, .25, .25])
#display the images to be registered
ndreg.imgShow(inImg_ds, vmax=10000)
ndreg.imgShow(refImg_ds, vmax=500)
In [7]:
affine = register(inImg_ds, refImg_ds)
resultImage = ndreg.imgApplyAffine(inImgReor,
                                   affine,
                                   size=refImg.GetSize(),
                                   spacing=refImg.GetSpacing())
ndreg.imgShow(resultImage, vmax=10000)
The results of this functionality test are consistent with those of the NDReg documentation. I can now proceed to the validation testing phase.
The following code will generate two 100x100x100 volumes randomly populated with synapses and noise modeled on the real-world data, and then apply a random rigid body transformation to one of the volumes. The volumes will then be run through the register algorithm in order to attempt to align them again.
In [8]:
import math
import numpy as np
import SimpleITK as itk
from random import randrange as rand
from random import uniform as floatRand
from affine import Affine
def generatePointSet():
    center = (rand(0, 99), rand(0, 99), rand(0, 99))
    toPopulate = []
    for z in range(-1, 2):
        for y in range(-1, 2):
            for x in range(-1, 2):
                curPoint = (center[0]+z, center[1]+y, center[2]+x)
                #only populate valid points
                valid = True
                for dim in range(3):
                    if curPoint[dim] < 0 or curPoint[dim] >= 100:
                        valid = False
                if valid:
                    toPopulate.append(curPoint)
    return set(toPopulate)
def getTransform():
    #generate a random rigid body transform
    #error is assumed to be 5% rotationally and laterally
    a = floatRand(-.075, .075)
    b = floatRand(-.075, .075)
    c = floatRand(-.075, .075)
    xt = floatRand(-2.5, 2.5)
    yt = floatRand(-2.5, 2.5)
    zt = floatRand(-2.5, 2.5)
    transform = np.array([
        [math.cos(a)*math.cos(b), math.cos(a)*math.sin(b)*math.sin(c)-math.sin(a)*math.cos(c), math.cos(a)*math.sin(b)*math.cos(c)+math.sin(a)*math.sin(c), xt],
        [math.sin(a)*math.cos(b), math.sin(a)*math.sin(b)*math.sin(c)+math.cos(a)*math.cos(c), math.sin(a)*math.sin(b)*math.cos(c)-math.cos(a)*math.sin(c), yt],
        [-math.sin(b), math.cos(b)*math.sin(c), math.cos(b)*math.cos(c), zt],
        [0., 0., 0., 1.]
    ])
    return transform
def applyRigid(initialVolume, transform):
    rigidMatrix = np.zeros_like(initialVolume)
    for z in range(initialVolume.shape[0]):
        for y in range(initialVolume.shape[1]):
            for x in range(initialVolume.shape[2]):
                new = np.dot(transform, [x, y, z, 1])
                try:
                    rigidMatrix[int(new[2])][int(new[1])][int(new[0])] = initialVolume[z][y][x]
                #if the transformed location is out of bounds, skip it
                except IndexError:
                    continue
    return rigidMatrix
def generateTestVolume():
    #create a test volume
    volume = np.zeros((100, 100, 100))
    myPointSet = set()
    for _ in range(rand(500, 800)):
        potentialPointSet = generatePointSet()
        #be sure there is no overlap
        while len(myPointSet.intersection(potentialPointSet)) > 0:
            potentialPointSet = generatePointSet()
        for elem in potentialPointSet:
            myPointSet.add(elem)
    #populate the true volume
    for elem in myPointSet:
        volume[elem[0], elem[1], elem[2]] = 60000
    #introduce noise
    noiseVolume = np.copy(volume)
    for z in range(noiseVolume.shape[0]):
        for y in range(noiseVolume.shape[1]):
            for x in range(noiseVolume.shape[2]):
                if not (z, y, x) in myPointSet:
                    noiseVolume[z][y][x] = rand(0, 10000)
    #convert numpy array to itk image
    itkVolume = itk.GetImageFromArray(noiseVolume)
    #perform a random rigid body transform to generate the second volume
    transform = getTransform()
    rigidMatrix = applyRigid(noiseVolume, transform)
    rigidVolume = itk.GetImageFromArray(rigidMatrix)
    return itkVolume, rigidVolume, transform
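Before trusting `applyRigid`, it is worth sanity-checking that `getTransform` really produces a rigid body transform: the upper-left 3x3 block of the matrix should be orthonormal with determinant +1. A minimal standalone check (the matrix construction is restated here so the cell runs on its own):

```python
import math
import numpy as np
from random import uniform as floatRand

# draw the same Euler angles and translations used by getTransform above
a, b, c = (floatRand(-.075, .075) for _ in range(3))
xt, yt, zt = (floatRand(-2.5, 2.5) for _ in range(3))
T = np.array([
    [math.cos(a)*math.cos(b), math.cos(a)*math.sin(b)*math.sin(c)-math.sin(a)*math.cos(c), math.cos(a)*math.sin(b)*math.cos(c)+math.sin(a)*math.sin(c), xt],
    [math.sin(a)*math.cos(b), math.sin(a)*math.sin(b)*math.sin(c)+math.cos(a)*math.cos(c), math.sin(a)*math.sin(b)*math.cos(c)-math.cos(a)*math.sin(c), yt],
    [-math.sin(b), math.cos(b)*math.sin(c), math.cos(b)*math.cos(c), zt],
    [0., 0., 0., 1.]
])

# a rigid transform's rotation block must satisfy R.R^T = I and det(R) = +1
R = T[:3, :3]
assert np.allclose(R.dot(R.T), np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```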
In [3]:
#a quick functionality test for the random rigid transform
for i in range(3):
    this = np.zeros((100, 100, 100))
    for x in range(0, 100):
        for z in range(0, 100):
            this[z][50][x] = 60000
    transform = getTransform()
    that = applyRigid(this, transform)
    those = itk.GetImageFromArray(that)
    ndreg.imgShow(itk.GetImageFromArray(this), vmax=60000)
    ndreg.imgShow(those, vmax=60000)
In [5]:
pipelineDat = []
randDat = []
errorList = []
for i in range(20):
    print '\nExecuting Test: ', i+1
    #generate and display data
    itkVolume, rigidVolume, curTransform = generateTestVolume()
    print curTransform
    ndreg.imgShow(itkVolume, vmax=60000)
    ndreg.imgShow(rigidVolume, vmax=60000)
    #calculate initial error
    initialErr = np.sum((itk.GetArrayFromImage(itkVolume) - itk.GetArrayFromImage(rigidVolume))**2)
    print '\tInitial Error: ', initialErr
    #perform registration
    transform = register(rigidVolume, itkVolume)
    resultVolume = ndreg.imgApplyAffine(rigidVolume,
                                        transform,
                                        size=itkVolume.GetSize(),
                                        spacing=itkVolume.GetSpacing())
    ndreg.imgShow(resultVolume, vmax=60000)
    #perform random transform
    randMatrix = applyRigid(itk.GetArrayFromImage(rigidVolume), getTransform())
    #calculate final error
    finalPipeErr = np.sum((itk.GetArrayFromImage(itkVolume) - itk.GetArrayFromImage(resultVolume))**2)
    finalRandErr = np.sum((itk.GetArrayFromImage(itkVolume) - randMatrix)**2)
    print '\tFinal Pipeline Error: ', finalPipeErr
    print '\tFinal Random Error: ', finalRandErr
    #get the reduction in error
    pipeRatio = (initialErr - finalPipeErr)/initialErr
    randRatio = (initialErr - finalRandErr)/initialErr
    pipelineDat.append(pipeRatio)
    randDat.append(randRatio)
    errorList.append(initialErr)
    print '\tPipeline Reduction Ratio: ', pipeRatio
    print '\tRandom Reduction Ratio: ', randRatio
    print '\n\n\n'
print '\nAverage Error Reduction Ratio:'
print '\tPipeline: ', np.average(pipelineDat)
print '\tRandom: ', np.average(randDat)
print '\nError Reduction Ratio Variance'
print '\tPipeline: ', np.var(pipelineDat)
print '\tRandom: ', np.var(randDat)
In [6]:
import matplotlib.pyplot as plt
plt.hist(pipelineDat, 20)
plt.show()
There is an obvious bimodal tendency in the distribution of these results. I am not yet sure what is causing this high variance, but I will continue to investigate. I intend to show these results to Kwame on Tuesday during our meeting, in the hope that he can shed some light on the cause.
In terms of performance, the algorithm does perform far better than random. However, the error reduction rate is substantially skewed by the 8 extremely low scores in the test results. My initial thought is that the pipeline's error reduction rate may depend on the magnitude of the initial error. Below is a graph of error reduction rate vs. initial error.
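One way to make the bimodality concrete is to split the scores at a cutoff; the values below are made-up stand-ins for `pipelineDat`, and the 0.5 threshold is an assumption rather than something derived from the data:

```python
import numpy as np

# made-up stand-ins for pipelineDat; real values come from the 20-test loop
scores = np.array([0.95, 0.9, 0.05, 0.92, 0.02, 0.88, 0.1, 0.93])

# split the apparently bimodal scores at an assumed cutoff of 0.5
lowMode = scores[scores < 0.5]
highMode = scores[scores >= 0.5]

# the gap between the two mode means quantifies how separated the modes are
gap = highMode.mean() - lowMode.mean()
```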
In [7]:
plt.scatter(errorList, pipelineDat)
plt.show()
The nonlinear nature of the scatter plot is evidence that there is some other source of noise causing poor performance in NDReg beyond magnitude of initial error. It is my hope that my meeting with Kwame on Tuesday will allow me to gain some intuition as to what this lurking variable may be.
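To check whether initial error explains the reduction rate at all, one could compute a correlation coefficient between `errorList` and `pipelineDat`; the arrays below are illustrative stand-ins, not real results:

```python
import numpy as np

# illustrative stand-ins for errorList and pipelineDat
initialErrors = np.array([1.0e9, 1.2e9, 1.5e9, 1.8e9, 2.0e9])
reductions = np.array([0.9, 0.1, 0.8, 0.2, 0.85])

# Pearson correlation; a value near zero would support the idea that
# something other than initial error magnitude drives performance
r = np.corrcoef(initialErrors, reductions)[0, 1]
assert -1.0 <= r <= 1.0
```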