In this notebook, we retrieve spikes inferred from the calcium traces and the synchronized visual stimuli of the type "Monet".
In [ ]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import datajoint as dj
from pipeline import preprocess, vis
First, let's get the trials with the Monet stimulus. Let's review the relevant ERD below:
In [ ]:
(dj.ERD(vis.Monet)-2 + preprocess.Sync + vis.Trial).draw()
The following query denotes all trials combined with synchronization information to the two-photon scan.
In [ ]:
trials = preprocess.Sync() * vis.Trial() & 'trial_idx between first_trial and last_trial'
The preprocess.Sync table matches the calcium data with the visual stimuli. It identifies a single two-photon recording and refers to the visual session vis.Session. The query uses the fields first_trial and last_trial in preprocess.Sync to select the correct trials from the matching visual session.
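The effect of this restriction can be illustrated with a plain-Python sketch (hypothetical rows, not the actual pipeline schema): the condition 'trial_idx between first_trial and last_trial' keeps only joined rows whose trial index falls inside the scan's trial range.

```python
# Hypothetical rows standing in for preprocess.Sync and vis.Trial entries.
sync_rows = [dict(animal_id=1, first_trial=10, last_trial=12)]
trial_rows = [dict(trial_idx=i) for i in range(9, 15)]

# Equivalent of Sync * Trial & 'trial_idx between first_trial and last_trial'
matched = [{**s, **t}
           for s in sync_rows
           for t in trial_rows
           if s['first_trial'] <= t['trial_idx'] <= s['last_trial']]
print(sorted(m['trial_idx'] for m in matched))  # [10, 11, 12]
```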
The table preprocess.Spikes and its part table preprocess.Spikes.RateTrace contain spike rate traces inferred from the calcium recordings, processed with one of several available methods.
Below is the relevant ERD:
In [ ]:
(dj.ERD(preprocess.Spikes)-3+1).draw()
Thus the following query identifies all the datasets recorded with Monet trials:
In [ ]:
monet_datasets = preprocess.Spikes() & (trials & vis.Monet())
Some of these datasets represent the same recordings processed in different ways; the processing options are enumerated in the lookup tables preprocess.Method and preprocess.SpikeMethod:
In [ ]:
preprocess.Method() * preprocess.Method.Galvo() # extraction methods for galvo microscopes
In [ ]:
preprocess.SpikeMethod()
We restrict the datasets to one processing method:
In [ ]:
monet_datasets = (preprocess.Spikes() &
(trials & vis.Monet()) &
dict(extract_method=2, spike_method=5))
monet_datasets
Now let's fetch the primary key values of these datasets so that we can address them individually:
In [ ]:
keys = monet_datasets.fetch('KEY') # primary keys of the relevant datasets
key = keys[3] # identifies one dataset
dict(key)
Now let's fetch all the relevant information for the dataset identified by key
:
In [ ]:
time = (preprocess.Sync() & key).fetch1('frame_times').squeeze() # calcium scan frame times
traces = np.vstack((preprocess.Spikes.RateTrace() & key).fetch('rate_trace')) # fetch traces
nslices = len(time) // traces.shape[1] # number of slices in the recording
time = time[::nslices] # keep only one timestamp per frame (ignore time offsets of slices)
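The slice bookkeeping can be checked with a toy example (synthetic numbers, not pipeline data): a multi-slice scan records one timestamp per slice per frame, so the time vector is nslices times longer than each trace, and taking every nslices-th entry leaves one timestamp per frame.

```python
import numpy as np

nframes, nslices = 5, 2
time = np.arange(nframes * nslices) * 0.05   # interleaved slice timestamps (s)
traces = np.zeros((3, nframes))              # 3 cells x 5 frames (placeholder)

assert len(time) // traces.shape[1] == nslices
frame_time = time[::nslices]                 # one timestamp per frame
print(frame_time.shape)  # (5,)
```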
Now let's iterate through the visual trials:
In [ ]:
for trial_key in (trials & key).fetch('KEY', order_by='trial_idx'):
print(dict(trial_key))
Now let's just look at the last trial addressed by trial_key (the last iteration of the loop above) and retrieve the stimulus movie shown during that trial:
In [ ]:
stimulus_info = (vis.Trial()*vis.Monet()*vis.MonetLookup() & trial_key).fetch1()
The flip_times field contains the timestamps of the movie frames, on the same clock and in the same units as the time vector retrieved earlier for the calcium traces from preprocess.Sync:
In [ ]:
stimulus_info['flip_times'].flatten() # the frame times of the visual stimulus
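Because both signals share one clock, a trace can be resampled onto the stimulus flip times, for example by linear interpolation. A minimal sketch with synthetic data (the actual alignment method is up to the analysis):

```python
import numpy as np

frame_time = np.linspace(0.0, 10.0, 101)   # calcium frame times (s), synthetic
trace = np.sin(frame_time)                 # one spike-rate trace, synthetic
flip_times = np.linspace(0.5, 9.5, 300)    # stimulus flip times (s), synthetic

# Resample the trace at the stimulus flip times.
trace_at_flips = np.interp(flip_times, frame_time, trace)
print(trace_at_flips.shape)  # (300,)
```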
In [ ]:
plt.imshow(stimulus_info['cached_movie'][:,:,100]) # show a frame from the movie
plt.grid(False)
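Frames like this are the raw material for receptive-field mapping. One common estimator is the spike-triggered average: weight each movie frame by the cell's rate at that frame and average. A minimal sketch with synthetic data (it assumes the trace has already been resampled to the movie frame times):

```python
import numpy as np

movie = np.random.randn(20, 30, 500)   # height x width x frames, synthetic
rate = np.random.rand(500)             # one cell's rate per movie frame, synthetic

# Spike-triggered average: rate-weighted mean of the stimulus frames.
sta = (movie * rate).sum(axis=-1) / rate.sum()
print(sta.shape)  # (20, 30)
```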
The stimulus comprises periods of coherent motion. Their times and directions are stored in the params field:
In [ ]:
stimulus_info['params'][3]['direction'][0,0].flatten() # radians
In [ ]:
stimulus_info['params'][3]['onsets'][0,0].flatten() # motion onset times (s)
In [ ]:
stimulus_info['params'][3]['offsets'][0,0].flatten() # motion offset times (s)
In [ ]:
stimulus_info['params'][3]['frametimes'][0,0].flatten() # movie frame times (s) within trial
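These onset times and directions are exactly what a direction-tuning analysis needs. A sketch with synthetic data (hypothetical cell response; the 0.5 s window is an arbitrary choice): average the rate in a short window after each onset, grouped by direction.

```python
import numpy as np

t = np.linspace(0.0, 9.0, 901)                       # trace time base (s)
onsets = np.array([1.0, 3.0, 5.0, 7.0])              # motion onsets (s)
directions = np.array([0.0, np.pi/2, 0.0, np.pi/2])  # radians

# Synthetic cell that responds only to direction 0.
rate = np.zeros_like(t)
for on, d in zip(onsets, directions):
    if d == 0.0:
        rate[(t >= on) & (t < on + 1.0)] = 1.0

window = 0.5  # seconds after each onset (arbitrary choice)
tuning = {d: np.mean([rate[(t >= on) & (t < on + window)].mean()
                      for on, dd in zip(onsets, directions) if dd == d])
          for d in np.unique(directions)}
print(tuning[0.0], tuning[np.pi/2])  # 1.0 0.0
```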
That's it. That is all it takes to map receptive fields and directional tuning of cells using the Monet stimulus.