The hippocampus is important for learning episodic and spatial memories, but how it coordinates its activity with other memory-related brain structures is not well understood. Of particular interest is the prefrontal cortex (PFC), because of its diverse roles in attention, working memory, long-term memory storage, and memory-guided decision making. The goal of this project is to investigate whether neural activity is systematically coordinated between hippocampus and PFC during memory-guided decision making.
Sharp wave ripples (SWRs) are prominent high-frequency hippocampal oscillations linked to memory-guided decision making that occur when an animal is immobile. During an SWR, neurons that were previously active at a particular spatial location in an environment "replay" their activity in a compressed sequence, as if the animal were currently moving through the environment. Interestingly, these compressed sequences of neural activity occur in both the forward and reverse direction -- that is, both following a path the animal might take through the environment (forward SWR) and retracing a path backwards (reverse SWR). Because these SWR events often occur at critical decision points, during receipt of reward, or during sleep, they are thought to reflect planning of future actions based on past memories (memory recall) and/or consolidation of rewarded behaviors. Furthermore, reverse replay sequences are modulated by the rate of reward while forward replays are not (Ambrose et al. 2016), suggesting that reverse replay events are more involved in the consolidation of memories during learning, whereas forward replays are more important for retrieval of past memories. If this is the case, then we might expect memory information to be transferred from PFC to hippocampus during forward replays -- because of the role of PFC in long-term storage of memories -- and from hippocampus to PFC during reverse replays -- to consolidate memories during learning.
To test this hypothesis, neural activity was simultaneously recorded in PFC (prelimbic and infralimbic) and hippocampus (CA1 and intermediate CA1) of three rats learning a W-track spatial alternation task (Figure 1a,c). In the task, the rats had to alternate visits between the reward wells at the end of the left and right arms of the track, returning to the center well in between visits to each arm. We analyzed rhythmic local field potential activity occurring during sharp wave ripples in both hippocampus and PFC.
We are interested in answering the following questions:
Previously, I had been comparing the power or connectivity measure 400 ms before ripple onset to the same measure around the time of the ripple. I thought this was the method used in:
Carr, M.F., Karlsson, M.P., and Frank, L.M. (2012). Transient Slow Gamma Synchrony Underlies Hippocampal Memory Replay. Neuron 75, 700–713.
It turns out their baseline was actually the power over the entire epoch. This is also what the Frank lab recommended to Long Tao on the Slack channel as a baseline for comparison.
So my first solution was to compute the Fourier transform over all time points in the epoch and match its frequency resolution to that of the peri-ripple estimates. But this resulted in too many independent estimates in the frequency domain, and specifying enough tapers to smooth them wasn't computationally feasible. In addition, I'd have far more frequency samples than the power/connectivity measures in the peri-ripple period.
My second solution was just to use the same parameters as for the ripple period and average the sliding-window estimates over the entire epoch (even though this introduces the windowing effect). This worked fine for power and coherence, but for group delay and pairwise spectral Granger causality, it took a long time to compute and produced enormous files.
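As a rough sketch of what this second approach looks like (using scipy.signal.spectrogram as a stand-in for the multitaper routines I actually use; the sampling rate, window length, and variable names are illustrative):

```python
# Minimal sketch of the whole-epoch baseline: compute sliding-window power
# with the same window length used for the peri-ripple estimates, then
# average over all windows in the epoch. Names and parameters are
# illustrative, not the actual analysis code.
import numpy as np
from scipy.signal import spectrogram

def whole_epoch_baseline(lfp, sampling_frequency=1500, window_duration=0.400):
    '''Average sliding-window power over the entire epoch.'''
    n_window_samples = int(window_duration * sampling_frequency)
    frequencies, _, power = spectrogram(
        lfp,
        fs=sampling_frequency,
        nperseg=n_window_samples,
        noverlap=n_window_samples // 2,
    )
    return frequencies, power.mean(axis=-1)  # average over time windows

# Peri-ripple power can then be expressed as a change from baseline, e.g.
# 10 * np.log10(ripple_power / baseline_power)
```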
So I have to find a way to speed up the spectral matrix decomposition for the spectral Granger and the linear regression for the group delay, or find an alternative solution. Most of the time is taken up by the spectral matrix decomposition, at ~9 minutes per iteration with a maximum of 30 iterations.
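One option I'm considering is to at least parallelize the decomposition across signal pairs, since each pair is independent. A minimal sketch with joblib (the decomposition routine is passed in as a placeholder; this spreads the work over cores but doesn't reduce the ~9 minute per-iteration cost itself):

```python
# Sketch of parallelizing the slow step over independent signal pairs.
# `decompose` is whatever spectral matrix decomposition routine is used;
# it is passed in rather than named, since this is only an illustration.
from joblib import Parallel, delayed

def decompose_all_pairs(decompose, cross_spectral_matrices, n_jobs=12):
    '''Apply the spectral matrix decomposition to each pair's
    cross-spectral matrix in parallel.'''
    return Parallel(n_jobs=n_jobs)(
        delayed(decompose)(csd) for csd in cross_spectral_matrices
    )
```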
We previously talked about examining the connectivity over the entire session, which I computed as part of the baseline, but I ran into the problems above (computational time and storage issues).
I wrote a tutorial that makes it easy to set up the same code environment that I'm using, including all the various software packages and dependencies. Also in this tutorial, I explain which functions in my code are useful for accessing information about the experiment, the spiking data, the position of the animals, etc.
Mehrnoosh successfully forked my code repository and installed the code environment. She also now has access to the data in the Dropbox and I showed her where to put the data so that it would work with my code. The code for accessing the data now works after some minor debugging.
The cluster requires you to specify the number of cores for a given job. If you use more than the requested number of cores, your job is killed. It turns out the openBLAS library greedily uses as many cores as possible on a machine, so even if you request a specific number of cores, say 12, a job that lands on a 24-core machine will use all 24. The solution is to use an environment variable to set the maximum number of threads openBLAS can use. Frustrating.
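For reference, this is all the fix amounts to; the variable has to be set before numpy (and hence openBLAS) is first imported:

```python
# Cap openBLAS at the number of cores requested from the scheduler.
# Must be set before numpy/scipy are imported, because openBLAS reads
# the variable when the library is first loaded.
import os
os.environ['OPENBLAS_NUM_THREADS'] = '12'  # match the cores requested

import numpy as np  # BLAS will now use at most 12 threads
```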
This is my general understanding of the algorithm, but I'm not quite getting all the details from his paper:
So you have three sources of information:
You're calculating the likelihood of each of these and multiplying them together to combine the sources of information. The likelihoods are:
You also have two(?) state transition models:
At each time step, you compute the posterior density of the replay state indicator and the semi-latent state by making a one-step prediction and then updating that prediction with the likelihood.
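Written out as a generic discrete-state Bayesian filter (this is my paraphrase of the per-time-step computation, not the paper's actual implementation, and all names are placeholders):

```python
# Toy sketch of the one-step prediction / update cycle as I understand it,
# for a discrete latent state (e.g., the replay state indicator).
import numpy as np

def filter_step(prior, transition_matrix, likelihood):
    '''One step of a discrete-state Bayesian filter.

    prior             : (n_states,) posterior from the previous time step
    transition_matrix : (n_states, n_states) state transition model,
                        transition_matrix[i, j] = P(state_t = j | state_t-1 = i)
    likelihood        : (n_states,) likelihood of the current observations
                        under each state (the combined sources of information)
    '''
    one_step_prediction = prior @ transition_matrix   # predict
    posterior = one_step_prediction * likelihood      # update
    return posterior / posterior.sum()                # normalize
```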
Having tested data collection for a single file and for multiple files locally, I've now tested the system on the cluster. After fixing an issue with the netCDF format, it works fine.
Previously, my connectivity code returned numpy arrays (which are just like Matlab arrays). I was then externally converting these to xarray objects, which are convenient because they allow easy indexing, output to HDF5, and plotting (see last week's report for more information). I started refactoring the code to return xarray objects by default, so I don't have to repeat the same conversion code every time.
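As a sketch of what the refactored functions now return (the dimension names, coordinates, and file name here are illustrative, not necessarily the ones in my code):

```python
# Wrapping a connectivity result in an xarray.DataArray gives labeled
# indexing, plotting, and netCDF/HDF5 output for free.
import numpy as np
import xarray as xr

n_time, n_freq = 50, 100
values = np.random.random((n_time, n_freq))  # stand-in for a real result

coherence = xr.DataArray(
    values,
    dims=['time', 'frequency'],
    coords={'time': np.linspace(-0.4, 0.4, n_time),
            'frequency': np.linspace(0, 250, n_freq)},
    name='coherence',
)

# Easy label-based indexing and plotting
coherence.sel(frequency=slice(20, 50)).mean('frequency').plot()

# HDF5-based netCDF output
coherence.to_netcdf('coherence.nc')
```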
Still working on producing the plots. The issues with the cluster (jobs being killed; computing the baseline over the entire session taking too long and too much space) have slowed this down. I hope to at least show plots with the old baseline (-400 ms before ripple onset) by tomorrow morning, but I'm waiting on the last cluster jobs running tonight.