There are many python libraries that can be used in aid of data analysis and plotting, but all of the mature, popular ones are built on top of numpy, a 3rd-party numerics package. You can get Python with numpy, along with a host of other relevant scientific packages, in a single install through Anaconda.
The following notebook is an example of loading and plotting some brain recording data from the Portfors' Hearing Research Lab at Washington State University, Vancouver.
In [2]:
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
We are going to be looking at the spike timing of neurons. A spike is an electrical impulse that cells of the nervous system (e.g. your brain) use to send information. Let's import some brain recording data to see what this kind of data looks like. We have a data set of recordings from a single cell, where a stimulus was repeated 100 times.
In [3]:
# load in that data...
data = np.loadtxt('spike_sample.csv', delimiter=',')
# display dimension of imported data
print 'data dimensions (rows, columns):', data.shape
# construct the time axis: the trace was sampled at 50 kHz
x = np.arange(data.shape[-1])/50000.
plt.plot(x, data[0,:])
plt.xlabel('time (s)')
plt.ylabel('voltage (V)')
# add a pointer for the spike
ax = plt.gca()
ax.annotate('this is a spike', xy=(0.060, 0.035), xytext=(0.1, 0.035), arrowprops={'facecolor': 'black'});
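The spike times used below were extracted from the raw traces ahead of time. As a rough illustration of how that extraction might work, here is a minimal threshold-crossing sketch; the 0.02 V threshold and the rising-edge logic are my assumptions for illustration, not the lab's actual detection code:
In [ ]:
# hypothetical threshold-based spike detection (sketch, not the lab's method)
threshold = 0.02   # assumed voltage threshold (V)
fs = 50000.        # sample rate (Hz), from the time axis above
above = data[0,:] > threshold
# keep only the samples where the trace first crosses the threshold upward
crossings = np.nonzero(above[1:] & ~above[:-1])[0] + 1
print crossings / fs   # crossing times in seconds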
I have saved the spike times from these 100 recordings to a file. The file contains one line per stimulus presentation, with one comma-separated value for each spike that was detected.
If you are on a unix system, you can use the head command to take a look at the file. You would get output like this (the first lines of the file):
0.05946
0.05842,0.1589
0.05632
0.00316,0.04972
0.0593
0.06124,0.07648
0.05784
0.04674,0.0602,0.07572,0.12892,0.1964
0.05548
We can use numpy to import this data into our IPython notebook.
If we try to import the spike times using numpy, we get an error because the data is not "well-formed": the rows contain different numbers of values. Therefore, we must adjust our import procedure, or prune/reformat the data to get it into a representation our tools can handle.
This sort of task typically makes up 75-90% of data analysis work. It is helpful to have enough domain knowledge to recognize when something is very off, and a problem needs to be addressed before moving on.
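To see the failure for yourself, you can catch the exception; np.loadtxt raises a ValueError when rows have differing numbers of columns:
In [ ]:
# demonstrate why the naive import fails: rows have different lengths
try:
    np.loadtxt('spike_times.csv', delimiter=',')
except ValueError as e:
    print e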
In [4]:
# numpy's loadtxt is too naive for non-rectangular data shapes...
#spike_times = np.loadtxt('spike_times.csv', delimiter=',')
# use csv module in Python standard lib to read in all times
spike_times = []
with open('spike_times.csv', 'r') as df:
    reader = csv.reader(df)
    for row in reader:
        floatrow = [float(item) for item in row]
        spike_times.append(floatrow)
print len(spike_times)
In [5]:
# In this case I don't care which spike times belong to which recording, so let's just gather all the times into one list.
# Flatten into a single list of times:
# spike_times = reduce(lambda a,b: a+b, spike_times)
all_spike_times = sum(spike_times, [])
# spike_times = [spike for trace in spike_times for spike in trace]
print all_spike_times
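As a side note, sum(list_of_lists, []) rebuilds the list on every addition, so it is quadratic in the number of spikes; for large data sets, itertools.chain is the idiomatic linear-time alternative:
In [ ]:
from itertools import chain
# flatten in linear time; equivalent to the sum() above
all_spike_times = list(chain.from_iterable(spike_times))
print len(all_spike_times)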
Now that we have our data, let's plot all the spike times together to take a look at the characteristic timing of this cell. We will first plot the data using Matplotlib, a plotting library modeled on the MATLAB interface. It is the most mature and popular plotting library in Python, highly customizable, and capable of producing publication-quality graphs.
In [6]:
# count bins using numpy; histogram returns (counts, bin edges)
counts, edges = np.histogram(all_spike_times, bins=20, range=(0, 0.2))
print counts, edges
In [7]:
# count bins using pure python
binsz = 0.02
# map each spike time to the index of its 20 ms bin
bin_idx = [int(x/binsz) for x in all_spike_times]
bin_counts = [bin_idx.count(i) for i in range(10)]
bin_edges = [i*binsz for i in range(10)]
print bin_edges, bin_counts
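The list.count call above rescans the whole list once per bin; collections.Counter does the same tally in a single pass:
In [ ]:
from collections import Counter
# tally bin indices in one pass instead of one list scan per bin
counts = Counter(int(x/binsz) for x in all_spike_times)
bin_counts = [counts.get(i, 0) for i in range(10)]
print bin_counts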
In [8]:
n, bins, patches = plt.hist(all_spike_times, 20, range=(0,0.2))
plt.xlabel("time (s)")
plt.ylabel("no. spikes")
plt.title("Cell Spike Timing");
Seaborn is another Python plotting package; it is built on top of matplotlib and aims to make plots prettier and easier to work with. Just import seaborn and use matplotlib as usual, or use seaborn's high-level plotting helpers:
In [9]:
import seaborn as sns
n, bins, patches = plt.hist(all_spike_times, 20, range=(0,0.2))
plt.xlabel("time (s)")
plt.ylabel("no. spikes")
plt.title("Cell Spike Timing");
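For example, the same histogram can be drawn with one of seaborn's high-level helpers; this sketch uses distplot (kde=False turns off the density-estimate overlay so the y-axis shows raw counts):
In [ ]:
# seaborn's high-level histogram helper
sns.distplot(all_spike_times, bins=20, kde=False)
plt.xlabel("time (s)")
plt.ylabel("no. spikes")
plt.title("Cell Spike Timing");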
Bokeh is an interactive visualization library that targets modern web browsers for presentation.
In [10]:
from bokeh.charts import Bar, Histogram, show, output_notebook
output_notebook()
hm = Histogram(all_spike_times, bins=20, xlabel='time (s)', ylabel='no. spikes', title='Spike timing')
show(hm)
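The Histogram chart above comes from bokeh's high-level bokeh.charts interface; the same figure can also be built by hand with the lower-level bokeh.plotting API, using numpy's bin counts and a quad glyph (a sketch):
In [ ]:
from bokeh.plotting import figure, show
# compute the bins with numpy, then draw one quad per bar
counts, edges = np.histogram(all_spike_times, bins=20, range=(0, 0.2))
p = figure(title='Spike timing', x_axis_label='time (s)', y_axis_label='no. spikes')
p.quad(top=counts, bottom=0, left=edges[:-1], right=edges[1:])
show(p)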
Pandas provides data structures designed to make data munging, modeling, and plotting easier, and aims to bring R-like functionality to Python. It is best suited to tabular data.
In [11]:
# names=range(5) pads each row out to 5 columns (the longest row); missing entries become NaN
spike_table = pd.read_csv('spike_times.csv', sep=',', names=range(5))
# note that the import skips the empty rows:
print 'Table shape', spike_table.shape
# DataFrame.plot draws one histogram series per column
spike_table.plot(kind='hist', bins=20, range=(0,0.2));
plt.xlabel("time (s)")
plt.ylabel("no. spikes")
plt.title("Cell Spike Timing");
In [16]:
# spike binning from pandas without plotting
all_spikes = spike_table.values.flatten()
all_spikes = all_spikes[~np.isnan(all_spikes)]
print 'no. total spikes', len(all_spikes)
bin_edges = [i*0.02 for i in range(10)] + [0.2]
# label each spike with its bin index (pd.cut intervals are right-closed by default)
cats = pd.cut(all_spikes, bin_edges, labels=False)
bin_counts = np.bincount(cats)
print bin_counts, bin_edges
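Equivalently, the counting can stay inside pandas: value_counts on the bin labels gives the same numbers, and sort_index puts the bins back in time order:
In [ ]:
# same tally, staying in pandas
print pd.Series(cats).value_counts().sort_index()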
Resources for learning more: