Benchmarking a Method with SpikeSorting.jl

You can use SpikeSorting.jl to calculate quantitative accuracy metrics for a sorting method. To do this, we need 1) ground truth data sets and 2) measures of "accuracy." For ground truth, we simulate the extracellular potentials created by tens of thousands of pyramidal neurons. To measure performance, we calculate the false positives attributable to the detection and clustering steps, the false negatives attributable to the detection threshold and to overlap of multiple potentials, and the true positive percentage. These metrics are described in detail here:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3123734/pdf/nihms303529.pdf


In [ ]:
using SpikeSorting, JLD

In [ ]:
a = load("../test/data/spikes2.jld")
time_stamps = a["time_stamps"] # 3-element array of arrays of spike times, one per ground-truth neuron
fv = a["fv"] # voltage vs. time: ~5 minutes sampled at 20 kHz, containing 3 neurons

cal_time = 180.0; # calibration time in seconds
sr = 20000; # sample rate in Hz
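
Before sorting, a quick sanity check on the loaded data is worthwhile. This is a minimal sketch, not part of the SpikeSorting.jl API; it assumes `fv` stores voltage samples along its first dimension and that `time_stamps` holds one vector of spike times per neuron:

In [ ]:
println("recording length: ", size(fv, 1) / sr, " s") # expect ~300 s (~5 minutes)
for (i, ts) in enumerate(time_stamps)
    println("neuron ", i, ": ", length(ts), " ground-truth spikes")
end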

In [5]:
detect = DetectPower()     # power-based spike detection
cluster = ClusterOSort()   # online (OSort-style) clustering
align = AlignMin()         # align waveforms on their minimum
feature = FeatureTime()    # use the raw waveform samples as features
reduce = ReductionNone()   # no dimensionality reduction
thres = ThresholdMean(2.0) # mean-based detection threshold with scale factor 2.0
s1 = create_multi(detect, cluster, align, feature, reduce, thres, 1) # sorter state for one channel

# Sort fv and score the output against the ground-truth spike times
ss = SpikeSorting.benchmark(fv, time_stamps, s1[1], cal_time, sr);


Spike totals: [7430,8630,6899]
False Positive Total: 2796
False Positive Clustering: 0
False Positive Overlap: 1156
False Positive Noise: 1640
True Positive: 4577
Total False Negative: 18382
False Negative Overlap: 7294
False Negative Threshold: 11088
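
These counts are internally consistent: the per-neuron spike totals sum to 7430 + 8630 + 6899 = 22959, which equals true positives plus total false negatives (4577 + 18382), and the clustering, overlap, and noise components sum to the false positive total. We can turn the counts into summary percentages; this is a minimal sketch that recomputes them from the printed output (the variable names below are illustrative, not part of the SpikeSorting.jl API):

In [ ]:
tp = 4577       # true positives from the run above
fn = 18382      # total false negatives
fp = 2796       # total false positives
total = tp + fn # 22959 ground-truth spikes, matching the per-neuron totals
println("true positive percentage: ", 100 * tp / total) # ~19.9%
println("false positives per true positive: ", fp / tp) # ~0.61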
