
Example: processing the Movie stimulus

In this notebook, we retrieve the spikes inferred from the calcium traces along with the synchronized movie stimulus, which may comprise movie clips, still frames from the movie, or sequences of still frames.


In [ ]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import datajoint as dj
from pipeline import preprocess, vis

Here we consider the more complex case wherein several different types of stimuli are used during each calcium imaging scan: vis.MovieStillCond, vis.MovieClipCond, and vis.MovieSeqCond.

All these conditions rely on movie information cached in the lookup table vis.Movie and its part tables vis.Movie.Clip and vis.Movie.Still.

Here are the relevant tables in the vis schema:


In [ ]:
(dj.ERD(vis.Movie)+2+vis.Condition+vis.Trial-1).draw()
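
To get a quick sense of what these lookup tables hold, we can count their entries (a simple sanity check; each table can also be previewed by displaying it):


In [ ]:
# number of cached movies, clips, and still frames in the lookup tables
print(len(vis.Movie()), 'movies')
print(len(vis.Movie.Clip()), 'movie clips')
print(len(vis.Movie.Still()), 'still frames')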

The following query denotes all trials joined with the synchronization information of their two-photon scans.


In [ ]:
trials = preprocess.Sync() * vis.Trial() & 'trial_idx between first_trial and last_trial'
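
Each scan's preprocess.Sync entry stores the first_trial and last_trial indices that it spans, so the restriction above keeps only the trials presented during that scan. As a quick check, we can count the synchronized trials:


In [ ]:
len(trials)   # number of trials with synchronized two-photon scans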

The following step of retrieving the spike traces was explained in detail in Process Monet.


In [ ]:
movie_conditions = [vis.MovieSeqCond(), vis.MovieClipCond(), vis.MovieStillCond()]
datasets = (preprocess.Spikes() & 
            (trials & movie_conditions) & 
            dict(extract_method=2, spike_method=5))
datasets

Let's fetch the primary key values of these datasets so that we can address them individually:


In [ ]:
keys = datasets.fetch('KEY')   # primary keys of the relevant datasets
key = keys[1]   # identifies one dataset
key

Now let's fetch all the relevant information for the dataset identified by key:


In [ ]:
time = (preprocess.Sync() & key).fetch1('frame_times').squeeze()  # calcium scan frame times
traces = np.vstack((preprocess.Spikes.RateTrace() & key).fetch('rate_trace'))  # fetch spike rate traces
nslices = len(time) // traces.shape[1]   # number of slices in the scan
time = time[::nslices]   # keep only one timestamp per frame (ignore time offsets of slices)
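
As a quick check of the alignment, we can plot a few of the traces against the frame times (an illustrative sketch; the vertical offsets are arbitrary):


In [ ]:
# plot the first three spike rate traces, vertically offset for visibility
offset = np.nanmax(traces[:3])
for i, trace in enumerate(traces[:3]):
    plt.plot(time, trace + i * offset)
plt.xlabel('time (stimulus clock)')
plt.ylabel('spike rate (one offset per trace)')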

Now let's iterate through the trials of each stimulus type separately.

Movie clip stimulus

Let's iterate through the trials of type vis.MovieClipCond. Keep in mind that the trials of the different stimulus types are interleaved.

We will not process each trial here; instead, we only show how to process the last one.


In [ ]:
for clip_key in (trials * vis.MovieClipCond() & key).fetch('KEY', order_by='trial_idx'):
    print('.', end='')

Now let's just look at the last trial addressed by clip_key (the last iteration of the loop above) and retrieve the stimulus movie shown during the trial:


In [ ]:
clip_info = (vis.Trial()*vis.MovieClipCond()*vis.Movie.Clip() & clip_key).fetch1()

In [ ]:
clip_info['clip_number']   # the clip number. Some clips are repeated multiple times.

In [ ]:
clip_info['cut_after']  # the cut time.  Some clips are cut short.

The flip_times field contains the timestamps of the movie frames on the same clock and in the same units as the time vector retrieved earlier for the calcium traces from preprocess.Sync:


In [ ]:
frame_times = clip_info['flip_times'].flatten()   # the frame times of the visual stimulus

In [ ]:
# load the movie
import io, imageio
vid = imageio.get_reader(io.BytesIO(clip_info['clip'].tobytes()), 'ffmpeg')

# show a frame from the movie
plt.imshow(vid.get_data(50))
plt.grid(False)
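
The reader also exposes basic properties of the decoded clip, which can be checked against the stimulus timing (field names follow imageio's ffmpeg metadata):


In [ ]:
meta = vid.get_meta_data()
meta['fps'], meta['duration'], meta['size']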

You may now relate the neural activity in traces (timestamped by time) to the movie vid (timestamped by clip_info['flip_times']).
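
One minimal way to do this, shown here only as a sketch, is to interpolate each trace onto the stimulus frame times so that the response is sampled once per movie frame (this uses scipy.interpolate and assumes both clocks share the same units, as noted above):


In [ ]:
from scipy.interpolate import interp1d

# resample each spike rate trace at the movie frame times;
# samples outside the scan are filled with NaN
frame_responses = np.vstack(
    [interp1d(time, trace, bounds_error=False)(frame_times) for trace in traces])
frame_responses.shape   # (number of traces, number of movie frames)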

Still frame stimulus

Now let's process responses to stimuli comprising still frames from the movie. Again let's iterate through all the trials but only look at the last one in depth.


In [ ]:
for still_key in (trials * vis.MovieStillCond() & key).fetch('KEY', order_by='trial_idx'):
    print('.', end='')

Let's look at the last trial in depth.


In [ ]:
still_info = (vis.Trial()*vis.MovieStillCond()*vis.Movie.Still() & still_key).fetch1()

In [ ]:
plt.imshow(still_info['still_frame'], cmap='gray')
plt.grid(False)

In [ ]:
print(still_info['flip_times'][0, 0])  # frame time on the same clock as `time` for `traces`
print(still_info['duration'])          # duration on the screen
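
With the onset time and duration in hand, a simple sketch of a trial response is to average each trace over the period the still frame was on the screen (illustration only; this assumes duration is expressed in the same units as time):


In [ ]:
onset = still_info['flip_times'][0, 0]
shown = (time >= onset) & (time < onset + still_info['duration'])
mean_response = traces[:, shown].mean(axis=1)   # one value per trace for this trial
mean_response.shape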

Sequence stimulus

The sequence stimulus is a still-frame stimulus presenting fixed sequences of frames. Again, we will iterate through all the trials and examine the last trial in depth.


In [ ]:
for seq_key in (trials * vis.MovieSeqCond() & key).fetch('KEY', order_by='trial_idx'):
    print('.', end='')

In [ ]:
seq_info = (vis.Trial()*vis.MovieSeqCond() & seq_key).fetch1()

In [ ]:
seq_info['seq_length']

In [ ]:
seq_info['flip_times'].flatten()   # frame display times on the same clock

In [ ]:
np.uint32(seq_info['movie_still_ids'].flatten())   # Movie.Still id numbers
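
As a sketch, we can pair each still frame id in the sequence with a display time (this assumes that flip_times contains exactly one entry per still frame, which is not guaranteed here):


In [ ]:
# pair still frame ids with display times (assumption: one flip per still)
still_ids = np.uint32(seq_info['movie_still_ids'].flatten())
onsets = seq_info['flip_times'].flatten()
list(zip(still_ids, onsets))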

In [ ]:
f, ax = plt.subplots(4, int(np.ceil(seq_info['seq_length'] / 4)))
for i, a in zip(seq_info['movie_still_ids'].flatten(), ax.flatten()):
    im = (vis.MovieSeqCond() * vis.Movie.Still() &
          dict(seq_key, still_id=int(i))).fetch1('still_frame')
    a.imshow(im, cmap='gray')
    a.grid(False)

That's it. That is all it takes to retrieve the data needed to compute neuronal responses to the movie stimulus.
