In [1]:
%%bash
build_tiff_stack.py --help
The --pattern argument allows you to define a regular expression for the files in --dir, so that the stack is built only from files whose names match it, i.e. --pattern ChanA will build a stack from all files that contain ChanA in their name.
To select only files whose names begin with ChanA, write --pattern ^ChanA.
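In essence, this kind of filtering boils down to a few lines of Python. Here is a minimal sketch of the idea (my illustration, not the script's actual code), assuming re.search semantics, which is consistent with both examples above:

import os
import re

def matching_files(directory, pattern):
    """Return the sorted file names in `directory` that match `pattern`.

    re.search matches anywhere in the name, so 'ChanA' selects files
    containing ChanA, while '^ChanA' selects only names starting with it.
    """
    regex = re.compile(pattern)
    return sorted(name for name in os.listdir(directory) if regex.search(name))

# e.g. matching_files('/home/michael/datac/data1', '^ChanA')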
For reference, building a stack from 3000 files took approximately 3 minutes.
In [1]:
%%bash
extract_channels_from_raw.py --help
Furthermore, I wrote functions that make it easy to load these TIFF stacks into Python (or an IPython notebook) as numpy arrays.
But because loading these large TIFFs takes about as long as building a stack, I added a caching layer that saves the arrays as faster-loading HDF5 binaries.
This also explains the --nocache option of the build_tiff_stack.py script. By default, the script immediately saves a fast-loading cache file; its name is simply filename.hdf5. Be aware, though, that this doubles the volume of your data.
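The caching idea itself is simple; the following is a minimal sketch of it using the tifffile and h5py packages (an illustration of the concept, not neuralyzer's actual implementation, and the cache file name and dataset name are my assumptions):

import os
import h5py
import tifffile

def load_stack(tiffpath):
    """Load a TIFF stack, transparently using an HDF5 cache.

    On the first call the (slow) TIFF is read and written to
    tiffpath + '.hdf5'; later calls read the fast HDF5 file instead.
    """
    cachepath = tiffpath + '.hdf5'  # assumed cache naming scheme
    if os.path.exists(cachepath):
        with h5py.File(cachepath, 'r') as f:
            return f['stack'][:]  # 'stack' is an assumed dataset name
    stack = tifffile.imread(tiffpath)  # slow: parses the whole TIFF
    with h5py.File(cachepath, 'w') as f:
        f.create_dataset('stack', data=stack)
    return stack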
However, with this caching in place, TIFF stacks now load like a charm:
In [2]:
datafiles = [
    '/home/michael/datac/data1/ChanA_0001_0001_0001.tif',
    '/home/michael/datac/data1/ChanB_0001_0001_0001.tif',
    '/home/michael/datac/data2/ChanA_0001_0001_0001.tif',
    '/home/michael/datac/data2/ChanB_0001_0001_0001.tif',
    '/home/michael/datac/data3/ChanA_0001_0001_0001.tif',
    '/home/michael/datac/data3/ChanB_0001_0001_0001.tif',
]
In [4]:
import neuralyzer
In [5]:
%%timeit
stackdata = neuralyzer.get_data(datafiles[1])
On kumo it takes on average ~0.8 s to load a 1.5 GB stack, whereas on my own machine it now takes on average 2.13 s.
As we just saw, the utilities come with a logger ...
In [5]:
stackdata = neuralyzer.get_data(datafiles[0])
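The log output is not reproduced here, but setting up such a logger follows the standard library pattern; a sketch of what a module-level logger could look like (my assumption, not neuralyzer's actual code):

import logging

# module-level logger, configured once at import time
logger = logging.getLogger('neuralyzer')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter('%(asctime)s %(name)s %(levelname)s : %(message)s'))
logger.addHandler(handler)

logger.info('loading tiff stack ...')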
In [6]:
whos