Complete pipeline for online processing using CaImAn Online (OnACID). The demo demonstrates the analysis of a sequence of files using the CaImAn online algorithm. The steps include: i) motion correction, ii) tracking of current components, iii) detecting new components, and iv) updating of spatial footprints. The script demonstrates how to construct and use the `params` and `online_cnmf` objects required for the analysis, and presents the various parameters that can be passed as options. A plot of the processing time for the various steps of the algorithm is also included.

@author: Eftychios Pnevmatikakis @epnev

Special thanks to Andreas Tolias and his lab at Baylor College of Medicine for sharing the data used in this demo.
In [ ]:
try:
    if __IPYTHON__:
        # This is used for debugging purposes only; it allows reloading
        # classes when their source changes.
        get_ipython().magic('load_ext autoreload')
        get_ipython().magic('autoreload 2')
except NameError:
    pass
from IPython.display import display, clear_output
import glob
import logging
import numpy as np
import os
import scipy
import cv2
logging.basicConfig(format=
    "%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s",
    # filename="/tmp/caiman.log",
    level=logging.INFO)
import caiman as cm
from caiman.source_extraction import cnmf as cnmf
from caiman.paths import caiman_datadir
from caiman.utils.utils import download_demo
import matplotlib.pyplot as plt
import bokeh.plotting as bpl
try:
    from bokeh.io import vform, hplot
except ImportError:
    # Newer versions of bokeh do not provide vform & hplot;
    # they use column & row instead.
    from bokeh.layouts import column as vform
    from bokeh.layouts import row as hplot
from bokeh.models import CustomJS, ColumnDataSource, Slider
bpl.output_notebook()
In [ ]:
fld_name = 'Mesoscope' # folder inside ./example_movies where files will be saved
fnames = []
fnames.append(download_demo('Tolias_mesoscope_1.hdf5',fld_name))
fnames.append(download_demo('Tolias_mesoscope_2.hdf5',fld_name))
fnames.append(download_demo('Tolias_mesoscope_3.hdf5',fld_name))
print(fnames) # your list of files should look something like this
In [ ]:
fr = 15 # frame rate (Hz)
decay_time = 0.5 # approximate length of transient event in seconds
gSig = (4,4)                                       # expected half-size of neurons (pixels)
p = 1 # order of AR indicator dynamics
min_SNR = 1 # minimum SNR for accepting new components
rval_thr = 0.90 # correlation threshold for new component inclusion
ds_factor = 1 # spatial downsampling factor (increases speed but may lose some fine structure)
gnb = 2 # number of background components
gSig = tuple(np.ceil(np.array(gSig)/ds_factor).astype('int')) # recompute gSig if downsampling is involved
mot_corr = True # flag for online motion correction
pw_rigid = False # flag for pw-rigid motion correction (slower but potentially more accurate)
max_shifts_online = np.ceil(10./ds_factor).astype('int') # maximum allowed shift during motion correction
sniper_mode = True                                 # flag for using a CNN to detect new neurons (otherwise the space correlation is used)
init_batch = 200 # number of frames for initialization (presumably from the first file)
expected_comps = 500 # maximum number of expected components used for memory pre-allocation (exaggerate here)
dist_shape_update = True # flag for updating shapes in a distributed way
min_num_trial = 10 # number of candidate components per frame
K = 2 # initial number of components
epochs = 2 # number of passes over the data
show_movie = False # show the movie with the results as the data gets processed
params_dict = {'fnames': fnames,
'fr': fr,
'decay_time': decay_time,
'gSig': gSig,
'p': p,
'min_SNR': min_SNR,
'rval_thr': rval_thr,
'ds_factor': ds_factor,
'nb': gnb,
'motion_correct': mot_corr,
'init_batch': init_batch,
'init_method': 'bare',
'normalize': True,
'expected_comps': expected_comps,
'sniper_mode': sniper_mode,
'dist_shape_update' : dist_shape_update,
'min_num_trial': min_num_trial,
'K': K,
'epochs': epochs,
'max_shifts_online': max_shifts_online,
'pw_rigid': pw_rigid,
'show_movie': show_movie}
opts = cnmf.params.CNMFParams(params_dict=params_dict)
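To see how the downsampling-dependent parameters above scale, here is a minimal, self-contained sketch (pure NumPy, using the same formulas as above) with an illustrative `ds_factor = 2` instead of the demo's value of 1:

```python
import numpy as np

# With ds_factor = 2, the (4, 4) expected half-size becomes (2, 2) and the
# 10-pixel maximum shift becomes 5 pixels (both rounded up with np.ceil).
ds_factor = 2
gSig = tuple(np.ceil(np.array((4, 4)) / ds_factor).astype('int'))
max_shifts_online = np.ceil(10. / ds_factor).astype('int')
print(gSig, max_shifts_online)  # (2, 2) 5
```

Rounding up (rather than truncating) keeps the neuron half-size and shift budget conservative after downsampling.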
The first `init_batch` frames are used for initialization purposes. The initialization method chosen here, `bare`, will only search for a small number of neurons and is mostly used to initialize the background components. Initialization with the full CNMF can also be used by choosing `cnmf`.

We first create an `OnACID` object, located in the module `online_cnmf`, and pass the parameters similarly to the case of batch processing. We then run the algorithm using the `fit_online` method.
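For concreteness, switching from `bare` to full-CNMF initialization only changes a couple of entries in the parameter dictionary. A minimal sketch of the delta (the value `K = 10` is illustrative, not taken from the demo):

```python
# Entries from the demo's params_dict that govern initialization.
bare_init = {'init_method': 'bare', 'K': 2}

# Sketch of the equivalent full-CNMF initialization settings: search for
# more components during the initial batch instead of only the background.
cnmf_init = {**bare_init, 'init_method': 'cnmf', 'K': 10}
```

The merged dictionary would then be passed to `CNMFParams` in the same way as above.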
In [ ]:
cnm = cnmf.online_cnmf.OnACID(params=opts)
cnm.fit_online()
In [ ]:
logging.info('Number of components: ' + str(cnm.estimates.A.shape[-1]))
Cn = cm.load(fnames[0], subindices=slice(0,500)).local_correlations(swap_dim=False)
cnm.estimates.plot_contours(img=Cn)
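For intuition about what `local_correlations` computes, here is a minimal sketch (not CaImAn's implementation) of a correlation image: each pixel's time series is correlated with its 4-connected neighbors, and the mean correlation is reported. Active neurons produce locally correlated pixels and therefore bright blobs in such an image.

```python
import numpy as np

def simple_correlation_image(movie):
    """Correlate each pixel's trace with its 4-connected neighbors.

    movie: array of shape (T, H, W). Returns an (H, W) image where each
    entry is the mean Pearson correlation with the available neighbors.
    """
    T, H, W = movie.shape
    # z-score each pixel's trace so a mean of products gives the correlation
    m = movie - movie.mean(axis=0)
    m /= (m.std(axis=0) + 1e-12)
    corr = np.zeros((H, W))
    count = np.zeros((H, W))
    for dy, dx in [(0, 1), (1, 0)]:          # right and down neighbor pairs
        c = (m[:, :H-dy, :W-dx] * m[:, dy:, dx:]).mean(axis=0)
        corr[:H-dy, :W-dx] += c              # credit the first pixel of each pair
        count[:H-dy, :W-dx] += 1
        corr[dy:, dx:] += c                  # and the second (covers left/up too)
        count[dy:, dx:] += 1
    return corr / count

rng = np.random.default_rng(0)
movie = rng.standard_normal((100, 8, 8))     # synthetic stand-in for real data
Cn_sketch = simple_correlation_image(movie)
```

On pure noise the values hover near zero; on a real movie the correlated pixels of active cells stand out.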
Now inspect the components extracted by OnACID. Note that if a single pass was used, several components would be non-zero only for part of the time interval, indicating that they were detected online by OnACID.
Note that if you get a data rate error you can start the Jupyter notebook using: `jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10`
In [ ]:
cnm.estimates.nb_view_components(img=Cn, denoised_color='red');
The plot below shows the time spent on each part of the algorithm (motion correction, tracking of current components, detection of new components, updating of shapes) for each frame. Note that if you displayed a movie while processing the data (`show_movie=True`), the time required to generate the movie is included here.
In [ ]:
T_motion = 1e3*np.array(cnm.t_motion)
T_detect = 1e3*np.array(cnm.t_detect)
T_shapes = 1e3*np.array(cnm.t_shapes)
T_online = 1e3*np.array(cnm.t_online) - T_motion - T_detect - T_shapes
plt.figure()
plt.stackplot(np.arange(len(T_motion)), T_motion, T_online, T_detect, T_shapes)
plt.legend(labels=['motion', 'process', 'detect', 'shapes'], loc=2)
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [ms]')
plt.ylim([0,140])
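The subtraction above isolates the "process" (tracking) time, since `cnm.t_online` records the total per-frame time. A self-contained sketch of the same arithmetic with synthetic timings (stand-ins for the real `cnm.t_*` lists, which are in seconds):

```python
import numpy as np

# Synthetic per-frame timings in seconds (illustrative values only).
t_motion = [0.010, 0.012, 0.011]
t_detect = [0.005, 0.004, 0.006]
t_shapes = [0.002, 0.003, 0.002]
t_online = [0.025, 0.027, 0.026]   # total per-frame time, includes the above

# Convert to milliseconds and subtract the itemized steps from the total,
# leaving the time spent tracking existing components.
T_motion = 1e3 * np.array(t_motion)
T_detect = 1e3 * np.array(t_detect)
T_shapes = 1e3 * np.array(t_shapes)
T_process = 1e3 * np.array(t_online) - T_motion - T_detect - T_shapes

mean_total = 1e3 * np.mean(t_online)
print(f'mean frame time: {mean_total:.1f} ms')  # mean frame time: 26.0 ms
```

Comparing `mean_total` against the frame period (1000 / `fr` ≈ 66.7 ms at 15 Hz) indicates whether the pipeline keeps up with real-time acquisition.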