NOTE: The network generated by this script produces dynamics in which the activity of the entire system, especially Rp and Vp, oscillates at approximately 5 Hz. This differs from the full model. The deviations are due to the different model type and to the elimination of a number of connections, with no changes to the weights.
This tutorial shows you how to implement a simplified version of the Hill-Tononi model of the early visual pathway using the NEST Topology module. The model is described in the paper
S. L. Hill and G. Tononi. Modeling Sleep and Wakefulness in the Thalamocortical System. J Neurophysiology 93:1671-1698 (2005). Freely available via doi:10.1152/jn.00915.2004 (http://dx.doi.org/10.1152/jn.00915.2004).
We simplify the model somewhat, both to keep this tutorial a bit shorter and because some details of the Hill-Tononi model are not currently supported by NEST. Simplifications include:
- We use the iaf_cond_alpha neuron model, which is simpler than the Hill-Tononi model.
- As the iaf_cond_alpha neuron model only supports two synapses (labeled "ex" and "in"), we only include AMPA and GABA_A synapses.
- We ignore the secondary pathway (Ts, Rs, Vs), since from a technical point of view it just adds more of the same.
- Synaptic delays follow a Gaussian distribution in the HT model. This actually implies a Gaussian distribution clipped at some small, non-zero delay, since delays must be positive. Currently, there is a bug in the Topology module when using a clipped Gaussian distribution. We therefore draw delays from a uniform distribution.
Some further adaptations are given at the appropriate locations in the script.
This tutorial is divided into the following sections:

- Philosophy: discusses the philosophy applied to model implementation in this tutorial
- Preparations: necessary steps to use NEST and the Topology module
- Configurable Parameters: define adjustable network parameters
- Neuron Models: define the neuron models needed by the network model
- Populations: create the populations
- Synapse models: define the synapse models used in the network model
- Connections: create the connections
- Example simulation: perform a small simulation for illustration; this section also discusses the setup for recording
A network model has two essential components: populations and projections. We first use NEST's CopyModel() mechanism to create specific models for all populations and subpopulations in the network, and then create the populations using the Topology module's CreateLayer() function.
We use a two-stage process to create the connections, mainly because the same configurations are required for a number of projections: we first define dictionaries specifying the connections, then apply these dictionaries later.
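As a minimal sketch of this pattern (the dictionary keys are the Topology keys used throughout this tutorial; layer_a and layer_b stand in for layers created later with CreateLayer):

conn_spec = {"connection_type": "divergent",
             "synapse_model": "AMPA",
             "weights": 1.0,
             "delays": 1.0}
# The same specification can then be applied to several layer pairs:
# topo.ConnectLayers(layer_a, layer_b, conn_spec)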
The way in which we declare the network model here is just one example. You should not consider it the last word: we expect to see significant development in strategies and tools for network descriptions in the future. The following contribution to CNS*09 seems particularly interesting:
Sharon Crook, R. Angus Silver, & Padraig Gleeson. Describing and exchanging models of neurons and neuronal networks with NeuroML (F1);
as well as the following paper, which will appear in PLoS Computational Biology shortly:
Eilen Nordlie, Marc-Oliver Gewaltig, & Hans Ekkehard Plesser. Towards reproducible descriptions of neuronal network models.
In [1]:
# Make a locally installed PyNEST visible to this Python session;
# adjust the path to your installation if necessary.
import sys
sys.path.append('/opt/lib/python2.7/site-packages/')
In [2]:
SHOW_FIGURES = True

import pylab

# If figures are not to be shown, replace pylab.show() with a no-op,
# keeping a reference to the original in pylab_show.
if not SHOW_FIGURES:
    pylab_show = pylab.show
    def nop(s=None):
        pass
    pylab.show = nop
else:
    pylab.ion()
In [3]:
# Load pynest
import nest
# Load the NEST Topology module (NEST version > 2.2)
import nest.topology as topo
# Make sure we start with a clean slate, even if we re-run the script
# in the same Python session.
nest.ResetKernel()
# Import math, we need Pi
import math
Here we define those parameters that we take to be configurable. The choice of configurable parameters is obviously arbitrary, and in practice one would have far more configurable parameters. We restrict ourselves to:
- Network size N: each layer is N x N neurons.
- Layer extent visSize, in degrees of visual angle.
- Temporal frequency of the drifting grating f_dg, in Hz.
- Spatial wavelength and orientation of the drifting grating, lambda_dg and phi_dg, in degree/radian.
- Background firing rate retDC and modulation amplitude retAC of the retinal nodes, in Hz.
- Simulation duration simtime; the actual simulation is split into intervals of sim_interval length, so that the network state can be visualized at those intervals. Times are in ms.
In [4]:
Params = {'N'           : 40,
          'visSize'     : 8.0,
          'f_dg'        : 2.0,
          'lambda_dg'   : 2.0,
          'phi_dg'      : 0.0,
          'retDC'       : 30.0,
          'retAC'       : 30.0,
          'simtime'     : 100.0,
          'sim_interval': 5.0
          }
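With these defaults, the grating has four full spatial cycles across the 8-degree layer (visSize / lambda_dg = 4) and drifts at 2 Hz, and the retinal firing rate is sinusoidally modulated between 0 and 60 spikes/s (retDC ± retAC).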
We declare models in two steps:

1. We define a base model with the parameters common to all neuron models.
2. We create three copies of this base with parameters adjusted to the three model variants specified in Table 2 of Hill & Tononi (2005) (cortical excitatory, cortical inhibitory, thalamic).

In addition, we declare the models for the stimulation and recording devices.
We use the iaf_cond_alpha neuron, an integrate-and-fire neuron with two conductance-based synapses with alpha-function time course. Any input with positive weight will automatically be directed to the synapse labeled _ex, any input with negative weight to the synapse labeled _in. We define all parameters explicitly here, so that no information is hidden in the model definition in NEST. V_m is the membrane potential to which the model neurons will be initialized.
The model equations and parameters for the Hill-Tononi neuron model are given on pp. 1677f and in Tables 2 and 3 of that paper. Note one peculiarity and adjustment: the H&T model is formulated in terms of the membrane time constant, while the iaf_cond_alpha model is based on the membrane capacitance. Interestingly, conductances are unitless in the H&T model, so we can use the time constant directly as membrane capacitance.
In [5]:
nest.CopyModel('iaf_cond_alpha', 'NeuronModel',
               params = {'C_m'       : 16.0,
                         'E_L'       : (0.2 * 30.0 + 1.5 * -90.0) / (0.2 + 1.5),
                         'g_L'       : 0.2 + 1.5,
                         'E_ex'      : 0.0,
                         'E_in'      : -70.0,
                         'V_reset'   : -60.0,
                         'V_th'      : -51.0,
                         't_ref'     : 2.0,
                         'tau_syn_ex': 1.0,
                         'tau_syn_in': 2.0,
                         'I_e'       : 0.0,
                         'V_m'       : -70.0})
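If you want to convince yourself that the copy carries the adjusted parameters, you can query the model defaults (nest.GetDefaults is a standard PyNEST call; the printed values follow from the dictionary above):

print(nest.GetDefaults('NeuronModel')['C_m'])  # 16.0
print(nest.GetDefaults('NeuronModel')['E_L'])  # approx. -75.9, the weighted leak reversal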
In [6]:
# Cortical excitatory cells
# .........................
# Parameters are the same as above, so we need not adapt anything
nest.CopyModel('NeuronModel', 'CtxExNeuron')
In [7]:
# Cortical inhibitory cells
# .........................
nest.CopyModel('NeuronModel', 'CtxInNeuron',
               params = {'C_m'  : 8.0,
                         'V_th' : -53.0,
                         't_ref': 1.0})
In [8]:
# Thalamic cells
# ..............
nest.CopyModel('NeuronModel', 'ThalamicNeuron',
               params = {'C_m'  : 8.0,
                         'V_th' : -53.0,
                         't_ref': 1.0,
                         'E_in' : -80.0})
Input is generated by sinusoidally modulated Poisson generators, organized in a square layer of retina nodes. These nodes require a slightly more complicated initialization than the other elements of the network:

- The mean firing rate DC, the firing rate modulation depth AC, and the temporal modulation frequency Freq are the same for all retinal nodes and are set directly below.
- The phase Phi of each node depends on its position in the grating and can only be assigned after the retinal layer has been created. We therefore specify a function for initializing the phase Phi. This function will be called for each node.
In [9]:
def phiInit(pos, lam, alpha):
    '''Initializer function for phase of drifting grating nodes.

    pos  : position (x,y) of node, in degree
    lam  : wavelength of grating, in degree
    alpha: angle of grating in radian, zero is horizontal

    Returns number to be used as phase of AC Poisson generator.
    '''
    return 2.0 * math.pi / lam * (math.cos(alpha) * pos[0] + math.sin(alpha) * pos[1])
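# A quick sanity check of phiInit (values illustrative only): for a
# horizontal grating (alpha = 0) with wavelength lam = 2 deg, a node at
# (1.0, 0.0) deg gets phase 2*pi/2 * (cos(0)*1.0 + sin(0)*0.0) = pi,
# while a node at (0.0, 1.0) deg gets phase 0, since this grating
# varies only along x.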
nest.CopyModel('sinusoidal_poisson_generator', 'RetinaNode',
               params = {'ac'  : Params['retAC'],
                         'dc'  : Params['retDC'],
                         'freq': Params['f_dg'],
                         'phi' : 0.0,
                         'individual_spike_trains': False})
We use the new multimeter device for recording from the model neurons. At present, iaf_cond_alpha is one of the few models supporting multimeter recording. Support for more models will be added soon; until then, you need to use a voltmeter to record from other models.

We configure the multimeter to record the membrane potential at fixed intervals, to memory only. We record the GID of the recorded neurons, but not the time.
In [10]:
nest.CopyModel('multimeter', 'RecordingNode',
               params = {'interval'   : Params['sim_interval'],
                         'record_from': ['V_m'],
                         'record_to'  : ['memory'],
                         'withgid'    : True,
                         'withtime'   : False})
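Note that the recording interval equals sim_interval, so each neuron contributes exactly one V_m sample per simulation interval. The plotting loop in the Example simulation section relies on this when it reads out the multimeter after every interval, reshapes the samples into an N x N image, and then clears the stored events.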
In [11]:
layerProps = {'rows'     : Params['N'],
              'columns'  : Params['N'],
              'extent'   : [Params['visSize'], Params['visSize']],
              'edge_wrap': True}
# This dictionary does not yet specify the elements to put into the
# layer, since they will differ from layer to layer. We will add them
# below by updating the 'elements' dictionary entry for each
# population.
In [12]:
layerProps.update({'elements': 'RetinaNode'})
retina = topo.CreateLayer(layerProps)
# Now set phases of retinal oscillators; we use a list comprehension instead
# of a loop.
[nest.SetStatus([n], {"phi": phiInit(topo.GetPosition([n])[0],
                                     Params["lambda_dg"],
                                     Params["phi_dg"])})
 for n in nest.GetLeaves(retina)[0]];
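As an optional sanity check, using only calls already introduced above, we can read back the phase of a single retina node and compare it with the value phiInit computes for its position:

n0 = nest.GetLeaves(retina)[0][0]
print(nest.GetStatus([n0], 'phi')[0],
      phiInit(topo.GetPosition([n0])[0], Params['lambda_dg'], Params['phi_dg']))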
In [13]:
# We use a list comprehension to do the model copies.
[nest.CopyModel('ThalamicNeuron', SpecificModel) for SpecificModel in ('TpRelay', 'TpInter')]
# Now we can create the layer, with one relay cell and one
# interneuron per location:
layerProps.update({'elements': ['TpRelay', 'TpInter']})
Tp = topo.CreateLayer(layerProps)
In [14]:
nest.CopyModel('ThalamicNeuron', 'RpNeuron')
layerProps.update({'elements': 'RpNeuron'})
Rp = topo.CreateLayer(layerProps)
Again we follow the same approach. We differentiate neuron types between layers and between pyramidal cells and interneurons. At each location, there are two pyramidal cells and one interneuron in each of layers 2-3, 4, and 5-6. Finally, we need to differentiate between vertically and horizontally tuned populations, which we create as separate layers.
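For the default N = 40, each Vp layer thus contains nine neurons per grid location ((2 pyramidal + 1 interneuron) x 3 layers), i.e., 9 x 40 x 40 = 14400 neurons per orientation population.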
In [15]:
# We use list comprehensions to create all neuron types:
[nest.CopyModel('CtxExNeuron', layer+'pyr') for layer in ('L23','L4','L56')]
[nest.CopyModel('CtxInNeuron', layer+'in' ) for layer in ('L23','L4','L56')]

# Now we can create the populations; suffixes h and v indicate tuning
layerProps.update({'elements': ['L23pyr', 2, 'L23in', 1,
                                'L4pyr' , 2, 'L4in' , 1,
                                'L56pyr', 2, 'L56in', 1]})
Vp_h = topo.CreateLayer(layerProps)
Vp_v = topo.CreateLayer(layerProps)
In [16]:
# For reference purposes, e.g., printing, we collect all populations
# in a tuple:
populations = (retina, Tp, Rp, Vp_h, Vp_v)
In [17]:
# We can now look at the network using `PrintNetwork`:
nest.PrintNetwork()
# We can also try to plot a single layer in a network. For
# simplicity, we use Rp, which has only a single neuron per position.
topo.PlotLayer(Rp)
pylab.title('Layer Rp')
pylab.show()
Actual synapse dynamics, e.g., properties such as the synaptic time course, time constants, and reversal potentials, are properties of neuron models in NEST, and we set them in the section Neuron Models above. When we refer to synapse models in NEST, we actually mean connectors, which store information about connection weights and delays, as well as port numbers at the target neuron (rport), and which implement synaptic plasticity. The latter two aspects are not relevant here.
We just use NEST's static_synapse connector, but copy it to synapse models AMPA and GABA_A for the sake of explicitness. Weights and delays are set as needed in the section Connections below, as they differ from projection to projection. De facto, the sign of the synaptic weight decides whether input via a connection is handled by the _ex or the _in synapse.
In [18]:
nest.CopyModel('static_synapse', 'AMPA')
nest.CopyModel('static_synapse', 'GABA_A')
Building connections is the most complex part of network construction. Connections are specified in Table 1 of the Hill-Tononi paper. As pointed out above, we only consider AMPA and GABA_A synapses here. Adding the other synapses is tedious work, but should pose no fundamentally new challenges. We also use a uniform instead of a Gaussian distribution for the delays.
The model has two identical primary visual cortex populations, Vp_v and Vp_h, tuned to vertical and horizontal gratings, respectively. The only difference in the connection patterns between the two populations is that the thalamocortical input to layers L4 and L5-6 comes from grids of 8x2 and 2x8 locations, respectively. Furthermore, inhibitory connections in cortex go to the opposing orientation population as well as to their own.

To avoid a lot of code duplication, we therefore define properties dictionaries for all connections first and then use them to connect both populations. We follow the subdivision of connections as in the Hill & Tononi paper.
Note: Hill & Tononi state that their model spans 8 degrees of visual angle, and stimuli are specified accordingly. On the other hand, all connection patterns are defined in terms of cell grid positions. Since the NEST Topology module defines connection patterns in terms of the extent given in degrees, we need to apply the following scaling factor (degrees per cell) to all lengths in connections:
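For the default parameters, dpc = 8.0 / 39 ≈ 0.205 degrees per grid cell; a mask radius of, e.g., 12.0 * dpc thus corresponds to roughly 2.5 degrees.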
In [19]:
dpc = Params['visSize'] / (Params['N'] - 1)
# We will collect all same-orientation cortico-cortical connections in
ccConnections = []
# the cross-orientation cortico-cortical connections in
ccxConnections = []
# and all cortico-thalamic connections in
ctConnections = []
In [20]:
# We first define a dictionary with the (most) common properties for
# horizontal intralaminar connections. We then create copies in which
# we adapt the values that need adapting.
horIntraBase = {"connection_type": "divergent",
                "synapse_model": "AMPA",
                "mask": {"circular": {"radius": 12.0 * dpc}},
                "kernel": {"gaussian": {"p_center": 0.05, "sigma": 7.5 * dpc}},
                "weights": 1.0,
                "delays": {"uniform": {"min": 1.75, "max": 2.25}}}
In [21]:
# We use a loop to do the work for us. The loop runs over a list of
# dictionaries with all values that need updating.
for conn in [{"sources": {"model": "L23pyr"}, "targets": {"model": "L23pyr"}},
             {"sources": {"model": "L23pyr"}, "targets": {"model": "L23in" }},
             {"sources": {"model": "L4pyr" }, "targets": {"model": "L4pyr" },
              "mask"   : {"circular": {"radius": 7.0 * dpc}}},
             {"sources": {"model": "L4pyr" }, "targets": {"model": "L4in"  },
              "mask"   : {"circular": {"radius": 7.0 * dpc}}},
             {"sources": {"model": "L56pyr"}, "targets": {"model": "L56pyr"}},
             {"sources": {"model": "L56pyr"}, "targets": {"model": "L56in" }}]:
    ndict = horIntraBase.copy()
    ndict.update(conn)
    ccConnections.append(ndict)
In [22]:
# We proceed as above.
verIntraBase = {"connection_type": "divergent",
                "synapse_model": "AMPA",
                "mask": {"circular": {"radius": 2.0 * dpc}},
                "kernel": {"gaussian": {"p_center": 1.0, "sigma": 7.5 * dpc}},
                "weights": 2.0,
                "delays": {"uniform": {"min": 1.75, "max": 2.25}}}
In [23]:
for conn in [{"sources": {"model": "L23pyr"}, "targets": {"model": "L56pyr"}, "weights": 1.0},
{"sources": {"model": "L23pyr"}, "targets": {"model": "L23in" }, "weights": 1.0},
{"sources": {"model": "L4pyr" }, "targets": {"model": "L23pyr"}},
{"sources": {"model": "L4pyr" }, "targets": {"model": "L23in" }},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L23pyr"}},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L23in" }},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L4pyr" }},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L4in" }}]:
ndict = verIntraBase.copy()
ndict.update(conn)
ccConnections.append(ndict)
In [24]:
# Note that we have to specify the **weight with negative sign** to make
# the connections inhibitory.
intraInhBase = {"connection_type": "divergent",
                "synapse_model": "GABA_A",
                "mask": {"circular": {"radius": 7.0 * dpc}},
                "kernel": {"gaussian": {"p_center": 0.25, "sigma": 7.5 * dpc}},
                "weights": -2.0,
                "delays": {"uniform": {"min": 1.75, "max": 2.25}}}
In [25]:
# We use a loop to do the work for us. The loop runs over a list of
# dictionaries with all values that need updating.
for conn in [{"sources": {"model": "L23in"}, "targets": {"model": "L23pyr"}},
             {"sources": {"model": "L23in"}, "targets": {"model": "L23in" }},
             {"sources": {"model": "L4in" }, "targets": {"model": "L4pyr" }},
             {"sources": {"model": "L4in" }, "targets": {"model": "L4in"  }},
             {"sources": {"model": "L56in"}, "targets": {"model": "L56pyr"}},
             {"sources": {"model": "L56in"}, "targets": {"model": "L56in" }}]:
    ndict = intraInhBase.copy()
    ndict.update(conn)
    ccConnections.append(ndict)
    ccxConnections.append(ndict)
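At this point we can verify the bookkeeping; the expected counts follow directly from the three loops above (6 + 8 + 6 same-orientation entries, 6 cross-orientation entries):

assert len(ccConnections) == 20
assert len(ccxConnections) == 6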
In [26]:
corThalBase = {"connection_type": "divergent",
               "synapse_model": "AMPA",
               "mask": {"circular": {"radius": 5.0 * dpc}},
               "kernel": {"gaussian": {"p_center": 0.5, "sigma": 7.5 * dpc}},
               "weights": 1.0,
               "delays": {"uniform": {"min": 7.5, "max": 8.5}}}
In [27]:
# We use a loop to do the work for us. The loop runs over a list of
# dictionaries with all values that need updating. Note that these
# cortico-thalamic connections are based on corThalBase defined above
# (not on intraInhBase, which would make them inhibitory).
for conn in [{"sources": {"model": "L56pyr"}, "targets": {"model": "TpRelay"}},
             {"sources": {"model": "L56pyr"}, "targets": {"model": "TpInter"}}]:
    ndict = corThalBase.copy()
    ndict.update(conn)
    ctConnections.append(ndict)
In [28]:
# In this case, there is only a single connection, so we write the
# dictionary itself; it is very similar to the corThalBase, and to
# show that, we copy first, then update. We need no `targets` entry,
# since Rp has only one neuron per location.
corRet = corThalBase.copy()
corRet.update({"sources": {"model": "L56pyr"}, "weights": 2.5})
In [29]:
# Cortico-cortical, same orientation
print("Connecting: cortico-cortical, same orientation")
[topo.ConnectLayers(Vp_h, Vp_h, conn) for conn in ccConnections]
[topo.ConnectLayers(Vp_v, Vp_v, conn) for conn in ccConnections]
# Cortico-cortical, cross-orientation
print("Connecting: cortico-cortical, other orientation")
[topo.ConnectLayers(Vp_h, Vp_v, conn) for conn in ccxConnections]
[topo.ConnectLayers(Vp_v, Vp_h, conn) for conn in ccxConnections]
# Cortico-thalamic connections
print("Connecting: cortico-thalamic")
[topo.ConnectLayers(Vp_h, Tp, conn) for conn in ctConnections]
[topo.ConnectLayers(Vp_v, Tp, conn) for conn in ctConnections]
topo.ConnectLayers(Vp_h, Rp, corRet)
topo.ConnectLayers(Vp_v, Rp, corRet)
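If you are curious how many connections have been created so far, the kernel keeps a running count ('num_connections' is a standard entry of the kernel status dictionary):

print("Connections so far:", nest.GetKernelStatus()['num_connections'])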
In [30]:
thalCorRect = {"connection_type": "convergent",
               "sources": {"model": "TpRelay"},
               "synapse_model": "AMPA",
               "weights": 5.0,
               "delays": {"uniform": {"min": 2.75, "max": 3.25}}}
print("Connecting: thalamo-cortical")
In [31]:
# Horizontally tuned
thalCorRect.update({"mask": {"rectangular": {"lower_left" : [-4.0*dpc, -1.0*dpc],
                                             "upper_right": [ 4.0*dpc,  1.0*dpc]}}})
for conn in [{"targets": {"model": "L4pyr" }, "kernel": 0.5},
             {"targets": {"model": "L56pyr"}, "kernel": 0.3}]:
    thalCorRect.update(conn)
    topo.ConnectLayers(Tp, Vp_h, thalCorRect)
In [32]:
# Vertically tuned
thalCorRect.update({"mask": {"rectangular": {"lower_left" : [-1.0*dpc, -4.0*dpc],
                                             "upper_right": [ 1.0*dpc,  4.0*dpc]}}})
for conn in [{"targets": {"model": "L4pyr" }, "kernel": 0.5},
             {"targets": {"model": "L56pyr"}, "kernel": 0.3}]:
    thalCorRect.update(conn)
    topo.ConnectLayers(Tp, Vp_v, thalCorRect)
In [33]:
# Diffuse connections
thalCorDiff = {"connection_type": "convergent",
               "sources": {"model": "TpRelay"},
               "synapse_model": "AMPA",
               "weights": 5.0,
               "mask": {"circular": {"radius": 5.0 * dpc}},
               "kernel": {"gaussian": {"p_center": 0.1, "sigma": 7.5 * dpc}},
               "delays": {"uniform": {"min": 2.75, "max": 3.25}}}
In [34]:
for conn in [{"targets": {"model": "L4pyr" }},
{"targets": {"model": "L56pyr"}}]:
thalCorDiff.update(conn)
topo.ConnectLayers(Tp, Vp_h, thalCorDiff)
topo.ConnectLayers(Tp, Vp_v, thalCorDiff)
Connections inside thalamus, including Rp
Note: In Hill & Tononi, the inhibition between Rp cells is mediated by GABA_B receptors. We use GABA_A receptors here to provide some self-dampening of Rp.
Note: The following code had a serious bug in v. 0.1: During the first iteration of the loop, "synapse_model" and "weights" were set to "AMPA" and 0.1, respectively, and remained unchanged, so that all connections were created as excitatory connections, even though most should have been inhibitory. We now specify synapse_model and weights explicitly for each connection to avoid this.
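The pitfall generalizes beyond NEST: when a loop updates a single shared base dictionary, keys set in one iteration persist into the next unless every iteration overwrites them. A minimal, self-contained illustration (the keys here are made up for the example):

base = {"delays": 1.0}
specs = [{"synapse_model": "AMPA", "weights": 0.1},  # sets both keys
         {"weights": -0.5}]                          # forgets synapse_model!
for spec in specs:
    base.update(spec)
    print(base)  # the second iteration still contains 'AMPA' from the first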
In [35]:
thalBase = {"connection_type": "divergent",
            "delays": {"uniform": {"min": 1.75, "max": 2.25}}}

print("Connecting: intra-thalamic")

# Since thalBase is updated (not copied) in each iteration, every entry
# must explicitly set all keys that differ from the previous iteration,
# including "sources" (see the bug note above).
for src, tgt, conn in [(Tp, Rp, {"sources": {"model": "TpRelay"},
                                 "synapse_model": "AMPA",
                                 "mask": {"circular": {"radius": 2.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 1.0, "sigma": 7.5 * dpc}},
                                 "weights": 2.0}),
                       (Tp, Tp, {"sources": {"model": "TpInter"},
                                 "targets": {"model": "TpRelay"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 2.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.25, "sigma": 7.5 * dpc}}}),
                       (Tp, Tp, {"sources": {"model": "TpInter"},
                                 "targets": {"model": "TpInter"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 2.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.25, "sigma": 7.5 * dpc}}}),
                       (Rp, Tp, {"sources": {"model": "RpNeuron"},
                                 "targets": {"model": "TpRelay"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 12.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.15, "sigma": 7.5 * dpc}}}),
                       (Rp, Tp, {"sources": {"model": "RpNeuron"},
                                 "targets": {"model": "TpInter"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 12.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.15, "sigma": 7.5 * dpc}}}),
                       (Rp, Rp, {"sources": {"model": "RpNeuron"},
                                 "targets": {"model": "RpNeuron"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 12.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.5, "sigma": 7.5 * dpc}}})]:
    thalBase.update(conn)
    topo.ConnectLayers(src, tgt, thalBase)
In [36]:
# Retino-thalamic connections; the delay is fixed at 1 ms here.
retThal = {"connection_type": "divergent",
           "synapse_model": "AMPA",
           "mask": {"circular": {"radius": 1.0 * dpc}},
           "kernel": {"gaussian": {"p_center": 0.75, "sigma": 2.5 * dpc}},
           "weights": 10.0,
           "delays": 1.0}

print("Connecting: retino-thalamic")

for conn in [{"targets": {"model": "TpRelay"}},
             {"targets": {"model": "TpInter"}}]:
    retThal.update(conn)
    topo.ConnectLayers(retina, Tp, retThal)
In [37]:
# Connections from Retina to TpRelay
topo.PlotTargets(topo.FindCenterElement(retina), Tp, 'TpRelay', 'AMPA')
pylab.title('Connections Retina -> TpRelay')
pylab.show()
# Connections from TpRelay to L4pyr in Vp (horizontally tuned)
topo.PlotTargets(topo.FindCenterElement(Tp), Vp_h, 'L4pyr', 'AMPA')
pylab.title('Connections TpRelay -> Vp(h) L4pyr')
pylab.show()
# Connections from TpRelay to L4pyr in Vp (vertically tuned)
topo.PlotTargets(topo.FindCenterElement(Tp), Vp_v, 'L4pyr', 'AMPA')
pylab.title('Connections TpRelay -> Vp(v) L4pyr')
pylab.show()
In [38]:
print("Connecting: Recording devices")
recorders = {}
for name, loc, population, model in [('TpRelay'   , 1, Tp  , 'TpRelay'),
                                     ('Rp'        , 2, Rp  , 'RpNeuron'),
                                     ('Vp_v L4pyr', 3, Vp_v, 'L4pyr'),
                                     ('Vp_h L4pyr', 4, Vp_h, 'L4pyr')]:
    recorders[name] = (nest.Create('RecordingNode'), loc)
    tgts = [nd for nd in nest.GetLeaves(population)[0]
            if nest.GetStatus([nd], 'model')[0] == model]
    nest.Connect(recorders[name][0], tgts)  # one recorder to all targets
In [39]:
# show time during simulation
nest.SetStatus([0], {'print_time': True})

# lower and upper limits for color scale, for each of the four
# populations recorded
vmn = [-80, -80, -80, -80]
vmx = [-50, -50, -50, -50]

nest.Simulate(Params['sim_interval'])

# loop over simulation intervals
for t in pylab.arange(Params['sim_interval'], Params['simtime'], Params['sim_interval']):

    # do the simulation
    nest.Simulate(Params['sim_interval'])

    # clear figure and choose colormap
    pylab.clf()
    pylab.jet()

    # now plot data from each recorder in turn, assuming four recorders
    for name, r in recorders.items():
        rec = r[0]
        sp = r[1]
        pylab.subplot(2, 2, sp)
        d = nest.GetStatus(rec)[0]['events']['V_m']

        if len(d) != Params['N'] ** 2:
            # cortical layer with two neurons in each location, take average
            d = 0.5 * (d[::2] + d[1::2])

        # clear data from multimeter
        nest.SetStatus(rec, {'n_events': 0})

        pylab.imshow(pylab.reshape(d, (Params['N'], Params['N'])),
                     aspect='equal', interpolation='nearest',
                     extent=(0, Params['N'] + 1, 0, Params['N'] + 1),
                     vmin=vmn[sp - 1], vmax=vmx[sp - 1])
        pylab.colorbar()
        pylab.title(name + ', t = %6.1f ms' % nest.GetKernelStatus()['time'])

    pylab.draw()  # force drawing inside loop

pylab.show()  # required by pyreport

# just for some information at the end
print(nest.GetKernelStatus())