Introduction

To complete the tasks below, write Python scripts that will reproduce your results and the corresponding result figures when you run them again later. We recommend using IPython with the pylab option. Comment your scripts! Use your scripts for notes, e.g. on different parameters that you have used or observations you have made. Save each of your result figures to an image file (using the Python 'savefig' command) and incorporate them into your final protocol. Write an appropriate figure caption for each figure and include it in the protocol; axis labels need dimension and units, and figure captions should include all relevant parameter values of the respective analysis. You may write the figure captions after you have finished an exercise or a section and before you start a new one. Please number the tasks in your protocol and answer all questions. In the protocol, describe in a few sentences what you did to achieve each of the tasks.

1. Firing rate estimation

Exercises

You may choose to complete either the exercises in 1.1 or, alternatively, the exercises in 1.2. Both sets of exercises deal with the time-resolved estimation of the neuronal firing rate.

Data

For firing rate estimation, I provide two files that contain output spike trains from a model neuron. The model neuron is a conductance-based leaky integrate-and-fire neuron that includes a phenomenological model of spike frequency adaptation [2,8].

  • SFA_reg_1.5q_400tau_mn.gdf | SFA parameters j=-1.5nS and tau_SFA=400ms
  • SFA_reg_1.2q_200tau_mn.gdf | SFA parameters j=-1.2nS and tau_SFA=200ms

The model neuron adapts to changes in its input. The simulation data assumes that after 3s of spontaneous input the neuron receives an increased input (stimulus) which lasts for 2s (stimulus duration), before it returns to the spontaneous level. The input was modeled by a Poisson process realization, which followed an intensity that stepped from an initial low value to a much higher value after 3s (cf. Fig. 7a-c in [8]). The simulation was run with a time resolution of 0.1ms (corresponding to a sample frequency of 10kHz).
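The stepped-intensity Poisson input described above can be sketched in a few lines. This is only an illustration of the input model, not the actual simulation: the spontaneous and stimulus rates below are assumed placeholder values, and the Poisson process is approximated by Bernoulli thinning on the 0.1 ms grid.

```python
import numpy as np

# Sketch of the input model: a Poisson process whose intensity steps from a
# low spontaneous rate to a higher rate during the 2 s stimulus that starts
# at t = 3 s. The rates (5 Hz / 30 Hz) are illustrative, not the values
# used in the original simulations.
dt = 1e-4                              # 0.1 ms resolution -> 10 kHz sample frequency
t = np.arange(0.0, 8.0, dt)            # 3 s spontaneous, 2 s stimulus, 3 s after
rate = np.where((t >= 3.0) & (t < 5.0), 30.0, 5.0)   # intensity in Hz (assumed)

rng = np.random.default_rng(1)
# Bernoulli approximation: a spike in each bin with probability rate * dt
spikes = t[rng.random(t.size) < rate * dt]
```

Because rate * dt stays far below 1, the Bernoulli approximation is a close stand-in for an inhomogeneous Poisson process at this resolution.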


In [5]:
# Please choose the right path to the data and the dataset here
spikes_sfa_200 = loadtxt('data/SFA_reg_1.2q_200tau_mn.gdf')
spikes_sfa_400 = loadtxt('data/SFA_reg_1.5q_400tau_mn.gdf')

1.1. Peri Stimulus Time Histogram (PSTH)

The PSTH is the classical tool for visualizing and analyzing time-varying firing rates of neurons (e.g. [3,4,1]).

  • Choose one of the ASCII files from the list above and load it into memory. It contains two columns of text-encoded numbers. The second column gives the spike times with a time resolution of 0.1ms (10-4s), while the first column contains the corresponding trial ids (i.e. the number of the experimental trial during which the corresponding spike was recorded). Thus, each row holds a pair of trial id and spike time. The simulations were repeated for a total of 100 trials. How many spikes were recorded in total?

In [11]:
data = spikes_sfa_200
print len(data), 'spikes have been recorded in total'


9062 spikes have been recorded in total
  • Assign a new variable s to the spike times. Convert the spike times to a 1 ms time resolution. Reference the spike times such that time t = 0 corresponds to the time of the stimulus onset $t_S$ by simply subtracting $t_S$ from the spike times. Assign the trial ids in the first column of the original data array to a separate variable (e.g. id or trial_n).

In [18]:
trial_n = data[:,0]            # trial ids (first column)
spike_times = data[:,1]        # spike times in units of 0.1 ms (second column)
t_S = 3000.0                   # stimulus onset in ms (3 s)
s = spike_times * 0.1 - t_S    # spike times in ms, aligned to stimulus onset
  • Raster display: Your first task is to visualize the spike trains – one for each experimental trial – in a so-called 'raster display' or 'dot display'.

    Design Figure 1 with 3 panels arranged in one column. Open a new figure and use the command below to open the top axes. For the raster diagram plot spike times (x-axis) against the corresponding trial numbers (y-axis) such that each spike is represented by either a single point or by a small vertical line element. Set the limits of the x-axis such that the full stimulus duration is covered.


In [3]:
subplot(3,1,1)


Out[3]:
<matplotlib.axes.AxesSubplot at 0x2b85d10>
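A minimal sketch of the raster display might look as follows. The synthetic spikes here are stand-ins for the stimulus-aligned spike times s and trial ids trial_n from your dataset, so you can see the plotting calls in isolation:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')          # render without a display; drop this in IPython
import matplotlib.pyplot as plt

# Synthetic stand-ins for the real variables (s: stimulus-aligned spike
# times in ms, trial_n: trial ids) -- replace these with your loaded data.
rng = np.random.default_rng(0)
n_trials = 100
s = np.concatenate([rng.uniform(-3000.0, 2000.0, 50) for _ in range(n_trials)])
trial_n = np.repeat(np.arange(n_trials), 50)

fig = plt.figure()
ax = fig.add_subplot(3, 1, 1)                    # top panel of Figure 1
ax.plot(s, trial_n, '|', color='k', markersize=3)  # one vertical tick per spike
ax.set_xlim(-3000, 2000)      # pre-stimulus period plus full 2 s stimulus
ax.set_xlabel('time relative to stimulus onset (ms)')
ax.set_ylabel('trial id')
fig.savefig('figure1_raster.png')
```

Using the '|' marker gives the small vertical line elements mentioned in the task; a plain '.' marker works as well.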
  • PSTH: Compute a PSTH from all spikes in all 100 trials following these steps:
    1. define the bin width (the temporal width of your histogram classes) in ms
    2. construct a suitable bin vector (1d array of the histogram edges). All spike data should lie within the histogram classes, i.e. within the total experimental interval from the first edge to the last edge;
    3. in Python use the function numpy.histogram for constructing the histogram based on the bin array
    4. visualize your result in a separate temporary figure. Choose different values of the parameter bin width and repeat analysis steps 2-4 until you are satisfied. Which bin width did you finally choose? What happens if you are choosing a very small or very large bin width?

In [ ]:
binwidth = 50.0                                           # bin width in ms (try different values)
binarray = arange(s.min(), s.max() + binwidth, binwidth)  # edges covering all spikes
h, bins = numpy.histogram(s, binarray)
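Step 4 above, trying several bin widths in a temporary figure, can be sketched as below. The spike times are synthetic stand-ins for s, and the three bin widths are example values to compare, not recommendations:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')          # render without a display; drop this in IPython
import matplotlib.pyplot as plt

# Synthetic stand-in for the stimulus-aligned spike times s (ms):
# a lower spontaneous rate before t = 0 and a higher rate during the stimulus.
rng = np.random.default_rng(3)
s = np.concatenate([rng.uniform(-3000.0, 0.0, 1500),
                    rng.uniform(0.0, 2000.0, 4000)])

fig, axes = plt.subplots(3, 1, figsize=(6, 8))
for ax, binwidth in zip(axes, [5.0, 50.0, 500.0]):      # example bin widths in ms
    binarray = np.arange(s.min(), s.max() + binwidth, binwidth)
    h, bins = np.histogram(s, binarray)
    ax.bar(bins[:-1], h, width=binwidth, align='edge')
    ax.set_title('bin width = %g ms' % binwidth)
    ax.set_xlabel('time (ms)')
    ax.set_ylabel('spike count')
fig.tight_layout()
fig.savefig('psth_binwidths.png')
```

Such a comparison makes the trade-off visible: very small bins give a noisy histogram, very large bins smooth away the time course of the rate.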
  • Visualize the PSTH in the second panel of Figure 1 using the pylab.bar command (see below):

    You can change settings of the bar plot by accessing the object b and using the 'setp' function. This way, you may for example change the properties of the bar histogram in order to generate a uniformly colored surface (cf. Appendix):


In [20]:
# Example Code
bins = arange(-3,4,1) # start, stop, step
dat = randn(1000)

#calculate the histogram
h, bins = numpy.histogram(dat, bins)

#plotting
b = bar(bins[0:h.size], h)
_ = setp(b, width=1)



In [21]:
# Alternatively, use hist to construct the histogram and plot in one step
h,bins,_ = hist(dat, bins)


  • Fano factor. Count the number of spikes during the stimulus epoch for each trial separately, then compute the variance of the spike count across trials. Now compute the Fano factor, which is defined as the ratio of the variance of the spike count to the mean spike count. The Fano factor is a dimensionless index in which the variance is normalized by the mean; it is often used to measure the trial-by-trial variability of neuronal activity. A Poisson process has a Fano factor of 1. Is your data more or less variable than a Poisson process?
  • Check with your tutor for the correctness of tasks 1-6.
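The Fano factor computation above can be sketched as follows. Synthetic Poisson spike data stand in for s and trial_n, so for this stand-in the result should come out near 1; with the real dataset the value will differ:

```python
import numpy as np

# Synthetic stand-ins for s (stimulus-aligned spike times in ms) and
# trial_n (trial ids): Poisson spike counts spread uniformly over the trial.
rng = np.random.default_rng(2)
n_trials = 100
n_per_trial = rng.poisson(60, n_trials)
s = np.concatenate([rng.uniform(-3000.0, 2000.0, n) for n in n_per_trial])
trial_n = np.concatenate([np.full(n, k) for k, n in enumerate(n_per_trial)])

# Count spikes in the stimulus epoch (0 to 2000 ms after onset) per trial,
# then form Fano factor = variance / mean of these counts.
counts = np.array([np.sum((s[trial_n == k] >= 0.0) & (s[trial_n == k] < 2000.0))
                   for k in range(n_trials)])
fano = counts.var() / counts.mean()
print('Fano factor: %.3f' % fano)
```

Values above 1 indicate more trial-by-trial variability than a Poisson process, values below 1 indicate more regular firing.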

1.2. Kernel convolution