September 30 Meeting Markup

Background:

This is the Plot.ly chart of the local histogram equalized Cocaine174 image:
https://neurodatadesign.github.io/seelviz/reveal/html/Cocaine174localeq.html

This is the Plot.ly chart of the local histogram equalized atlas aligned Cocaine174 image:
https://neurodatadesign.github.io/seelviz/reveal/html/alignedCocaine174localeq.40000.brightest.html

Issue 1: Post-processed brains look like they have a strange 'outline':

After running histogram equalization on atlas-aligned brains, we get a significantly different shape (see Plot.ly charts above).

Reason: The CLAHE tile size in my local histogram equalization is not modular - it does not adapt to the image. This is partly because the tile size must be an integer argument, so it cannot be fit perfectly to differently sized data. We now have a function that calculates a suitable tile size from the image size.

Currently our method (see the code snippet below) finds the volume of the brightest points and scales the tile size from that volume - the problem is that the result then has to be rounded to the nearest integer. There are many edge cases where this is not a standardized method; in particular, when the scaled value lands near the midpoint between two integers (a fractional part around 0.5), rounding one way or the other is fairly arbitrary. The scaling also depends on a "baseline" case: the CLAHE tile size chosen for Cocaine174 purely based on "what looks good".


In [ ]:
## Code Snippet Showing Our CLAHE Tile Size Selection
import nibabel as nb

aligned_brain = nb.load('in.img')       # load the atlas-aligned image
aligned_data = aligned_brain.get_data()
aligned_brain = aligned_data[:,:,:]     # pull the full volume out as an array

size = aligned_brain.shape

totalTile = size[0]*size[1]*size[2]     #<- Calculates total volume (voxel count) of the image
print totalTile    #77045760
scaleValue = totalTile/8                # volume per baseline tile (not used below)
print scaleValue    #9630720

# 751091775.0 is the volume of our "baseline" brain (Cocaine174), whose
# hand-picked tile size of 8 "looks good"; scale that tile size by the ratio
# of this image's volume to the baseline volume.
print float(totalTile/751091775.0)    #0.102578356686
clahe_scale = 8*0.102578356686
print clahe_scale    #0.820626853488
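
For concreteness, here is a minimal sketch (not our finished method) of how the fractional clahe_scale could be snapped to a valid integer tile size and applied slice by slice with OpenCV's CLAHE. The clip limit (2.0) and the uint16 cast are illustrative assumptions, not values from our pipeline.

In [ ]:
## Sketch: turning clahe_scale into a usable integer tile size
import cv2
import numpy as np

tile = max(1, int(round(clahe_scale)))    # 0.820626853488 rounds to 1
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(tile, tile))

equalized = np.zeros(aligned_brain.shape, dtype=np.uint16)
for z in range(aligned_brain.shape[2]):
    # OpenCV's CLAHE operates on single-channel 8- or 16-bit slices
    equalized[:, :, z] = clahe.apply(aligned_brain[:, :, z].astype(np.uint16))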

This is the Plot.ly chart of the local histogram equalized Cocaine174 image with the CLAHE tile size scaled by volume using the method above: https://neurodatadesign.github.io/seelviz/reveal/html/modularCLAHE.html

Additional Notes: Our larger concern is that the atlas aligned images seem to differ from the original brains - likely because of a theoretical misunderstanding on our part about what the aligned brains are. They look different both in the raw images and in the bright points. We define aligned brains to be the input for registration, so these are the pre-registered brains. We are currently under the impression that the aligned images are simply transformed versions of the original brains - that assumption seems to be wrong, because the two have different bright regions.

Questions:

1) What is the best approach towards making the CLAHE tile size perfectly accurate for each image size?

2) What do you suggest we base the tile size on? Perhaps we should sweep a range of different tile sizes and compare the graph statistics for each one (a rough sketch of such a sweep appears after these questions).

3) Why do the atlas aligned images seem different in behavior from the original brains - both in how the original images look and in terms of the brightest points?

**Histogram Equalized Before Alignment**

**Histogram Equalization After Alignment**

**Raw Image of Histogram Equalized Before Alignment**

**Raw Image of Histogram Equalization After Alignment**

Questions (cont):

4) After we finish the first prototype of the pipeline, what do you suggest we work on improving?

5) Are there supposed to be around 368 regions in the brain?
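
As a starting point for question 2, here is a hedged sketch of a tile-size sweep. The candidate tile sizes, the clip limit, and the 99.9th-percentile "brightest points" cutoff are placeholders, and the statistics printed are simple voxel counts rather than the graph statistics we would ultimately compare.

In [ ]:
## Sketch: sweeping CLAHE tile sizes and comparing simple statistics
import cv2
import numpy as np

def clahe_volume(volume, tile, clip_limit=2.0):
    """Apply OpenCV CLAHE slice by slice with a square tile of the given size."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
    out = np.zeros(volume.shape, dtype=np.uint16)
    for z in range(volume.shape[2]):
        out[:, :, z] = clahe.apply(volume[:, :, z].astype(np.uint16))
    return out

for tile in (1, 2, 4, 8, 16):                  # placeholder candidate tile sizes
    eq = clahe_volume(aligned_brain, tile)
    cutoff = np.percentile(eq, 99.9)           # placeholder bright-point cutoff
    bright = eq > cutoff
    print("tile %d: %d bright voxels, mean brightness %.1f"
          % (tile, bright.sum(), eq[bright].mean()))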

Issue 2: Pipeline

The pipeline currently outputs all files and intermediate results into a folder, and it can be pip installed.

After pip installation, its functions can be called from Python. It does not yet implement the latest color registration, since that is still a work in progress.
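
For illustration, the cell below is a hedged sketch of what calling the pipeline from Python might look like after pip installation; the package name, function name, and arguments are placeholders, not the pipeline's actual API.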


In [ ]:
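## Hypothetical usage sketch -- the package, function, and argument names below
## are placeholders for illustration, not the pipeline's actual API.
from seelviz import pipeline                   # placeholder import

# Run the pipeline on one brain; all outputs and intermediate files land in
# the given folder, as described above.
pipeline.run('Cocaine174', output_dir='output/Cocaine174')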