This notebook contains scripts that run JNeuron simulations from the IJulia kernel and measure the time each simulation takes. The simulations are designed to test performance under several conditions: 1) varying numbers of ion channels, 2) varying numbers of cells, and 3) parallel execution. Comparable NEURON implementations of the same simulations are included where possible; these are called through IPython.
The time needed to execute a method in Julia is easily measured with the @time macro. Remember to make a first "dummy" call before timing: the first call to a method includes just-in-time compilation, so only subsequent calls reflect pure execution time.
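As a minimal illustration with a generic function (not part of JNeuron; the function here is just an arbitrary example):

f(x) = sum(abs2, x)
@time f(rand(10^6));   # first call: timing includes JIT compilation
@time f(rand(10^6));   # second call: timing reflects execution only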
In [ ]:
using JNeuron
# Load 3D Neurolucida file
myimport = input("./data/cell2.asc");
# Generate an instance of the Neuron type with sections built from the 3D data
blank_neuron = instantiate(myimport);
# Create segments based on the lambda rule
set_nsegs!(blank_neuron);
# Add Hodgkin-Huxley and passive channels to all segments
myneuron = add(blank_neuron, (HH(), Passive()));
# Create a network with this neuron and a simulation stop time of 1000.0 ms
mynetwork = Network(myneuron, 1000.0);
In [ ]:
# First "dummy" run so that compilation does not contaminate the timing
run!(mynetwork, true);
In [ ]:
# Timed run, after compilation has already occurred
@time run!(mynetwork, false);
In [ ]:
import neuron
import time

# Load standard NEURON libraries for morphology import and run control
neuron.h.load_file('stdlib.hoc')
neuron.h.load_file('import3d.hoc')
neuron.h.load_file('stdrun.hoc')

# Import the 3D Neurolucida morphology and instantiate its sections
neuron.h('objref this')
Import = neuron.h.Import3d_Neurolucida3()
Import.input('./data/cell2.asc')
imprt = neuron.h.Import3d_GUI(Import, 0)
imprt.instantiate(neuron.h.this)

# Set the number of segments per section with the d_lambda rule
d_lambda = 0.1
frequency = 100
for sec in neuron.h.allsec():
    sec.nseg = int((sec.L / (d_lambda * neuron.h.lambda_f(frequency, sec=sec)) + .9) / 2) * 2 + 1
neuron.h.define_shape()

# Insert Hodgkin-Huxley and passive channels into every section
for sec in neuron.h.allsec():
    sec.insert('hh')
    sec.insert('pas')

# Initialize the simulation and set the stop time to 1000.0 ms
neuron.h.finitialize()
neuron.h.fcurrent()
neuron.h.init()
neuron.h.tstop = 1000.0
In [ ]:
# Measure wall-clock time for the NEURON run, for comparison with @time above
start = time.time()
neuron.h.run()
end = time.time()
print(end - start)
Intracellular stimulation, extracellular stimulation, and extracellular recording are all integrated into JNeuron's main run loop; consequently, any one of them, or any combination, can be added with little additional computational cost.
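For instance, an extracellular recording electrode might be attached to the network before running it. The sketch below is hypothetical: the `Extracellular` constructor and `add!` function are assumed names patterned after the API above, and their signatures are not confirmed by this notebook:

# Hypothetical: place a recording electrode at the origin and attach it
# to the network; JNeuron's actual constructor arguments may differ.
electrode = Extracellular([0.0, 0.0, 0.0]);
add!(mynetwork, electrode);

# The extra bookkeeping runs inside the same loop, so timing changes little.
run!(mynetwork, true);
@time run!(mynetwork, false);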
Julia makes parallel computing straightforward to implement, and Julia code deploys easily in environments with substantial computational power, such as the cloud. Below we demonstrate how JNeuron performance improves with additional cores, both on a local machine and on Amazon Web Services.
In [ ]:
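The original parallel cells are not reproduced here; the following is a sketch of the local-machine setup. The `addprocs` and `@everywhere` calls are standard Julia (on Julia 1.x, run `using Distributed` first), but how JNeuron distributes cells across workers, and whether `Network` accepts a tuple of neurons as shown, are assumptions:

addprocs(3)                # launch 3 local worker processes (4 cores total)
@everywhere using JNeuron  # load JNeuron on every worker

# Assumption: a Network built from several neurons can be run in parallel
mynetwork = Network((myneuron, myneuron, myneuron, myneuron), 1000.0);

run!(mynetwork, true);           # warm-up run to trigger compilation
@time run!(mynetwork, false);    # timed run using all workers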