Analyzing dendritic data with CaImAn

This notebook shows an example of how to analyze two-photon dendritic data with CaImAn. It closely follows the other demo notebooks.


In [ ]:
import cv2
import glob
import logging
import matplotlib.pyplot as plt
import numpy as np
import os

try:
    cv2.setNumThreads(0)
except Exception:
    pass

try:
    if __IPYTHON__:
        get_ipython().run_line_magic('load_ext', 'autoreload')
        get_ipython().run_line_magic('autoreload', '2')
except NameError:
    pass

import caiman as cm
from caiman.motion_correction import MotionCorrect
from caiman.source_extraction.cnmf import cnmf as cnmf
from caiman.source_extraction.cnmf import params as params
from caiman.utils.utils import download_demo
from caiman.utils.visualization import nb_view_patches
from skimage.util import montage
import bokeh.plotting as bpl
import holoviews as hv
bpl.output_notebook()
hv.notebook_extension('bokeh')

Selecting the data

We provide an example file from mouse barrel cortex, courtesy of Clay Lacefield and Randy Bruno, Columbia University. The download_demo command will automatically download the file and store it in your caiman_data folder the first time you run it. To run the pipeline on your own dataset instead, set:

fnames = ['/path/to/file(s)']


In [ ]:
fnames = [download_demo('data_dendritic.tif')]
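
If you want to analyze your own files instead of the demo, a minimal sketch is shown below (the folder pattern is a placeholder, not part of the demo data; glob is already imported above).


In [ ]:
# optional sketch: point fnames to your own files instead of the demo
# (replace the placeholder pattern with the actual path to your data)
# fnames = sorted(glob.glob('/path/to/your/data/*.tif'))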

Viewing the data

Prior to analysis you can view the data by setting display_movie = True. The example file provided here is small, with a FOV size of 128 x 128 pixels. To view it at a larger size we set magnification = 4 in the play command, which results in a displayed resolution of 512 x 512. For larger files you will want to reduce this magnification factor (for memory and display purposes).


In [ ]:
display_movie = False
if display_movie:
    m_orig = cm.load_movie_chain(fnames)
    ds_ratio = 0.25
    m_orig.resize(1, 1, ds_ratio).play(
        q_max=99.5, fr=30, magnification=4)
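
If you are unsure of the FOV size of your own data, a quick sketch (loading only the first frame, assuming the subindices argument of cm.load as used in other CaImAn demos) is:


In [ ]:
# optional: load only the first frame to check the FOV size
m_first = cm.load(fnames[0], subindices=range(1))
print('FOV size:', m_first.shape[1:])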

Parameter setting

First we'll perform motion correction followed by CNMF.

One of the main differences when analyzing dendritic data is that dendritic segments are not localized and can traverse significant parts of the FOV. They can also be spatially non-contiguous. To capture this property, CaImAn has two methods for initializing CNMF: sparse_nmf, where a sparse non-negative matrix factorization is deployed, and graph_nmf, where the Laplacian of the pixel affinity matrix is used as a regularizer to promote spatial components that capture pixels with similar timecourses. These can be selected with the parameter method_init.

In this demo we use the graph_nmf initialization method although sparse_nmf can also be used.

Since the dataset is small we can run CNMF without splitting the FOV into patches. This can be helpful because of the non-localized nature of the dendritic segments. To do that we set rf = None. Using patches is also supported, as demonstrated below. Also note that spatial and temporal downsampling can be used to reduce the data dimensionality by appropriately modifying the parameters ssub and tsub. Here they are set to 1 since downsampling is not necessary.

The graph NMF method was originally presented in the paper:

Cai, D., He, X., Han, J., & Huang, T. S. (2010). Graph regularized nonnegative matrix factorization for data representation. IEEE transactions on pattern analysis and machine intelligence, 33(8), 1548-1560.
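
As a rough illustration of the idea (not CaImAn's internal implementation), the graph used for regularization can be thought of as a pixel affinity matrix built from the temporal correlations between pixels, with its Laplacian penalizing solutions that treat strongly connected pixels differently:


In [ ]:
# conceptual sketch only (not CaImAn's internal code): build a toy pixel affinity
# matrix from temporal correlations and form its graph Laplacian
Y_toy = np.random.rand(100, 25)        # toy data: 100 frames x 25 pixels
W = np.corrcoef(Y_toy.T)               # pixel-by-pixel affinity (temporal correlation)
W[W < 0] = 0                           # keep only positive affinities
np.fill_diagonal(W, 0)                 # no self-connections
D = np.diag(W.sum(axis=1))             # degree matrix
L = D - W                              # graph Laplacian used as a regularizer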


In [ ]:
# dataset dependent parameters
fr = 30                             # imaging rate in frames per second
decay_time = 0.5                    # length of a typical transient in seconds

# motion correction parameters
strides = (48, 48)          # start a new patch for pw-rigid motion correction every x pixels
overlaps = (24, 24)         # overlap between patches (patch size = strides + overlaps)
max_shifts = (6, 6)         # maximum allowed rigid shifts (in pixels)
max_deviation_rigid = 3     # maximum shifts deviation allowed for patch with respect to rigid shifts
pw_rigid = True             # flag for performing non-rigid motion correction

# parameters for source extraction and deconvolution
p = 0                       # order of the autoregressive system
gnb = 2                     # number of global background components
merge_thr = 0.75            # merging threshold, max correlation allowed
rf =  None                  # half-size of the patches in pixels. e.g., if rf=25, patches are 50x50
stride_cnmf = None          # amount of overlap between the patches in pixels
K = 60                      # number of components per patch
method_init = 'graph_nmf'   # initialization method ('graph_nmf' or 'sparse_nmf' for dendritic data)
ssub = 1                    # spatial subsampling during initialization
tsub = 1                    # temporal subsampling during initialization

opts_dict = {'fnames': fnames,
            'fr': fr,
            'decay_time': decay_time,
            'strides': strides,
            'overlaps': overlaps,
            'max_shifts': max_shifts,
            'max_deviation_rigid': max_deviation_rigid,
            'pw_rigid': pw_rigid,
            'p': p,
            'nb': gnb,
            'rf': rf,
            'K': K, 
            'stride': stride_cnmf,
            'method_init': method_init,
            'rolling_sum': True,
            'only_init': True,
            'ssub': ssub,
            'tsub': tsub,
            'merge_thr': merge_thr}

opts = params.CNMFParams(params_dict=opts_dict)

In [ ]:
#%% start a cluster for parallel processing (if a cluster already exists it will be closed and a new session will be opened)
if 'dview' in locals():
    cm.stop_server(dview=dview)
c, dview, n_processes = cm.cluster.setup_cluster(
    backend='local', n_processes=None, single_thread=False)

Motion correction

First we create a motion correction object and then perform the registration. The provided dataset has already been registered with a different method so we do not expect to see a lot of motion.


In [ ]:
mc = MotionCorrect(fnames, dview=dview, **opts.get_group('motion'))

In [ ]:
%%capture
#%% Run piecewise-rigid motion correction using NoRMCorre
mc.motion_correct(save_movie=True)
m_els = cm.load(mc.fname_tot_els)
border_to_0 = 0 if mc.border_nan == 'copy' else mc.border_to_0 
    # maximum shift to be used for trimming against NaNs
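
You can also inspect the estimated motion directly. A quick sketch (using the x_shifts_els / y_shifts_els attributes populated by pw-rigid registration) is shown below.


In [ ]:
# optional: plot the mean piecewise-rigid shifts per frame
plt.figure(figsize=(10, 4))
plt.plot(np.array(mc.x_shifts_els).mean(axis=1), label='mean x shift')
plt.plot(np.array(mc.y_shifts_els).mean(axis=1), label='mean y shift')
plt.legend()
plt.xlabel('frame')
plt.ylabel('pixels');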

Display registered movie next to the original


In [ ]:
#%% compare with original movie
display_movie = False
if display_movie:
    m_orig = cm.load_movie_chain(fnames)
    ds_ratio = 0.2
    cm.concatenate([m_orig.resize(1, 1, ds_ratio) - mc.min_mov*mc.nonneg_movie,
                    m_els.resize(1, 1, ds_ratio)], 
                   axis=2).play(fr=60, q_max=99.5, magnification=4, offset=0)  # press q to exit

Memory mapping


In [ ]:
#%% MEMORY MAPPING
# memory map the file in order 'C'
fname_new = cm.save_memmap(mc.mmap_file, base_name='memmap_', order='C',
                           border_to_0=border_to_0) # exclude borders

# now load the file
Yr, dims, T = cm.load_memmap(fname_new)
images = np.reshape(Yr.T, [T] + list(dims), order='F') 
    #load frames in python format (T x X x Y)
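
As a quick sanity check you can print the dimensions of the memory-mapped data.


In [ ]:
# sanity check on the memory-mapped data
print('FOV dimensions:', dims, '| number of frames:', T)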

In [ ]:
#%% restart cluster to clean up memory
cm.stop_server(dview=dview)
c, dview, n_processes = cm.cluster.setup_cluster(
    backend='local', n_processes=None, single_thread=False)

Now run CNMF with the chosen initialization method

Note that in the parameter setting we have set only_init = True so only the initialization will be run.


In [ ]:
cnm = cnmf.CNMF(n_processes, params=opts, dview=dview)
cnm = cnm.fit(images)

Display components

And compute the correlation image as a reference image


In [ ]:
CI = cm.local_correlations(images[::1].transpose(1,2,0))
CI[np.isnan(CI)] = 0
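
You can display the correlation image on its own to get a sense of the structures present in the FOV.


In [ ]:
# optional: display the correlation image
plt.figure(figsize=(6, 6))
plt.imshow(CI, cmap='gray')
plt.title('Correlation image')
plt.axis('off');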

In [ ]:
cnm.estimates.hv_view_components(img=CI)

Plot a montage array of all the components

We can also plot a montage of all the identified spatial footprints.


In [ ]:
A = cnm.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1])
Nc = A.shape[0]
grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int))
plt.figure(figsize=np.array(grid_shape[::-1])*1.5)
plt.imshow(montage(A, rescale_intensity=True, grid_shape=grid_shape))
plt.title('Montage of found spatial components');
plt.axis('off');

Run CNMF to sparsify the components

The components found by the initialization have captured a lot of structure but they are not sparse. Using the refit command will sparsify them.


In [ ]:
cnm2 = cnm.refit(images, dview=dview)

Now plot the components again


In [ ]:
cnm2.estimates.hv_view_components(img=CI)

In [ ]:
A2 = cnm2.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1])
Nc = A2.shape[0]
grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int))
plt.figure(figsize=np.array(grid_shape[::-1])*1.5)
plt.imshow(montage(A2, rescale_intensity=True, grid_shape=grid_shape))
plt.title('Montage of found spatial components');
plt.axis('off');

The resulting components are significantly more sparse and capture a lot of the data structure.

Play a movie with the results

The movie will show three panels with the original, denoised and residual movies respectively. The flag include_bck = False will remove the estimated background from the original and denoised panels. To include it set it to True.


In [ ]:
cnm2.estimates.play_movie(images, q_max=99.9,
                          magnification=4,
                          include_bck=False)
# press q to exit

Save the object

You can save the results of the analysis for future use and/or to load them on the GUI.


In [ ]:
#cnm2.estimates.Cn = CI # save the correlation image for displaying in the background
#cnm2.save('dendritic_analysis.hdf5')
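
To reload a saved analysis in a later session, a minimal sketch (using the load_CNMF utility available in recent CaImAn versions) is:


In [ ]:
# sketch: reload a previously saved analysis (uncomment after saving above)
# from caiman.source_extraction.cnmf.cnmf import load_CNMF
# cnm_loaded = load_CNMF('dendritic_analysis.hdf5')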

Now try CNMF with patches

As mentioned above, the file shown here is small enough to be processed at once. For larger files, patches and/or downsampling can be used. We show how to use patches below. Note that unlike in somatic imaging, for dendritic imaging we want the patches to be large and have significant overlap so that enough structure of the dendritic segments is captured in the overlapping region. This will help with merging the components. However, an overly large patch size can result in memory issues due to the large amount of data loaded into memory concurrently.


In [ ]:
#%% restart cluster to clean up memory
cm.stop_server(dview=dview)
c, dview, n_processes = cm.cluster.setup_cluster(
    backend='local', n_processes=None, single_thread=False)

Change some parameters to enable patches

To enable patches, the rf parameter should be changed from None to an integer value. Moreover, the amount of overlap, controlled by stride_cnmf, should also change. Finally, since the parameter K specifies the number of components per patch, it should be reduced compared to its value when operating on the whole FOV at once. It is important to experiment with different values for these parameters since the results can be sensitive to them, due to the non-stereotypical shapes and activity patterns of dendrites.


In [ ]:
rf = 48   # size of each patch is (2*rf, 2*rf) 
stride_cnmf = 16   # amount of overlap between the patches in pixels
Kp = 25   # reduce the number of components since K is now per patch
opts.change_params({'rf': rf,
                    'stride': stride_cnmf,
                    'K': Kp});

In [ ]:
cnm_patches = cnmf.CNMF(n_processes, params=opts, dview=dview)
cnm_patches = cnm_patches.fit(images)

In [ ]:
cnm_patches.estimates.hv_view_components(img=CI)

In [ ]:
Ap = cnm_patches.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1])
Nc = Ap.shape[0]
grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int))
plt.figure(figsize=np.array(grid_shape[::-1])*1.5)
plt.imshow(montage(Ap, rescale_intensity=True, grid_shape=grid_shape))
plt.title('Montage of found spatial components');
plt.axis('off');

Refit to sparsify

As before we can pass the results through the CNMF refit function that can yield more sparse components.


In [ ]:
cnm_patches2 = cnm_patches.refit(images, dview=dview)

In [ ]:
Ap2 = cnm_patches2.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1])
Nc = Ap2.shape[0]
grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int))
plt.figure(figsize=np.array(grid_shape[::-1])*1.5)
plt.imshow(montage(Ap2, rescale_intensity=True, grid_shape=grid_shape))
plt.title('Montage of found spatial components');
plt.axis('off');

In [ ]:
cnm_patches2.estimates.play_movie(images, q_max=99.9,
                                  magnification=4,
                                  include_bck=False)
# press q to exit

Compare the two approaches

To quantify how similar the components yielded by the two approaches are, we can compute the cross-correlation matrix between the spatial footprints derived from each approach.


In [ ]:
AA = np.corrcoef(cnm2.estimates.A.toarray(), cnm_patches2.estimates.A.toarray(), rowvar=False)

In [ ]:
plt.imshow(AA[:cnm2.estimates.A.shape[1], cnm2.estimates.A.shape[1]:])
plt.colorbar();
plt.xlabel('with patches')
plt.ylabel('without patches')
plt.title('Correlation coefficients');
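
To summarize the agreement, a simple sketch that reports the best match for each component found without patches is:


In [ ]:
# optional: best-match correlation for each component found without patches
cc = AA[:cnm2.estimates.A.shape[1], cnm2.estimates.A.shape[1]:]
print('median best-match correlation:', np.median(cc.max(axis=1)))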

Stop the cluster when you're done


In [ ]:
#%% STOP CLUSTER and clean up log files
cm.stop_server(dview=dview)
log_files = glob.glob('*_LOG_*')
for log_file in log_files:
    os.remove(log_file)
