Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons, Srdjan Ostojic, Nature Neuroscience, 2014

Materials and Methods

Model

$N=10000$ LIF neurons; a fraction $f=0.8$ are excitatory, the rest inhibitory.

$\tau_m \frac{dV_i}{dt}=-V_i + \mu_0 + R I_i(t) + \mu_{ext}(t)$

0 mV resting potential, 20 mV threshold, 10 mV reset, 0.5 ms refractory period; the remaining parameter values are given in the paper's text.

synaptic input:

$RI_i(t)=\tau_m \sum_j J_{ij}\sum_k\delta(t-t_k^{(j)}-\Delta)$

$\Delta = 0.55$ ms is the synaptic delay. The delay is needed because otherwise spikes arriving within the refractory period would have no effect. [I don't follow this logic -- a delay only matters if there is some clock or synchronous computation; otherwise some other spike will arrive during the refractory period anyway.]

Excitatory synapses have strength $J$, while inhibitory synapses have strength $-gJ$.
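A minimal Euler-integration sketch of these single-neuron dynamics, with threshold, reset, and refractory period from the notes above; $\tau_m = 20$ ms and the constant suprathreshold drive $\mu_0$ are assumptions for illustration (the notes defer parameter values to the paper's text):

```python
import numpy as np

# Threshold/reset/refractory from the notes; tau_m and mu_0 are assumed.
tau_m, V_th, V_r, tau_rp = 20.0, 20.0, 10.0, 0.5   # ms, mV, mV, ms
mu_0 = 24.0                                        # suprathreshold drive (mV)
dt, T = 0.05, 200.0                                # Euler step and duration (ms)

V, last_spike, spikes = 0.0, -np.inf, []
for step in range(int(T / dt)):
    t = step * dt
    if t - last_spike < tau_rp:      # absolute refractory period: hold at reset
        V = V_r
        continue
    V += dt / tau_m * (-V + mu_0)    # Euler step of tau_m dV/dt = -V + mu_0
    if V >= V_th:                    # threshold crossing: emit spike, reset
        spikes.append(t)
        V, last_spike = V_r, t

rate = len(spikes) / (T / 1000.0)    # mean firing rate in Hz
```

With a suprathreshold constant drive the neuron fires tonically at a rate set by $\tau_{rp} + \tau_m \ln\frac{\mu_0 - V_r}{\mu_0 - V_{th}}$.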

Mean field theory

Validity regime:

Each neuron receives a large number of small inputs (each too small to generate AP). Then the total synaptic input to a neuron can be approximated by a Gaussian white noise process that is independent across neurons.

Homogeneous network:

Assume each neuron emits spikes as a Poisson process of constant rate $\nu_0$. Then the mean $\mu$ and s.d. $\sigma$ of the equivalent white noise input are given by (Amit and Brunel, 1997 Eqn(8) -- see my ipynb):

$\mu = \mu_0 + JC(f-(1-f)g)\tau_m\nu_0$

$\sigma^2 = \tau_m\nu_0CJ^2(f+(1-f)g^2)$

where $C$ is the number of synapses per neuron. This is an approximation that neglects the decay of the membrane potential within a membrane time constant $\tau_m$.

When $g=f/(1-f)$, mean $\mu$ vanishes i.e. balance of excitation and inhibition.


In [2]:
from sympy import *
init_printing()

In [16]:
mu, sigma, tau_m, nu_0, C, J, f, g, mu_0 = symbols('mu sigma tau_m nu_0 C J f g mu_0')
mu_expr = tau_m*nu_0*C*J*(f-(1-f)*g)+mu_0
mu_expr


Out[16]:
$$C J \nu_{0} \tau_{m} \left(f - g \left(- f + 1\right)\right) + \mu_{0}$$
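As a quick check of the balance condition, substituting $g=f/(1-f)$ into the mean should leave only the external drive. A self-contained restatement of the cell above:

```python
from sympy import symbols, simplify

# Mean-field mean input, restated from the cell above
tau_m, nu_0, C, J, f, g, mu_0 = symbols('tau_m nu_0 C J f g mu_0')
mu_expr = tau_m*nu_0*C*J*(f - (1 - f)*g) + mu_0

# At the balance point g = f/(1-f) the recurrent term cancels,
# leaving only the external drive mu_0
balanced = simplify(mu_expr.subs(g, f/(1 - f)))
```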

In [17]:
sigma_expr = sqrt(tau_m*nu_0*C*J**2*(f+(1-f)*g**2))
sigma_expr


Out[17]:
$$\sqrt{C J^{2} \nu_{0} \tau_{m} \left(f + g^{2} \left(- f + 1\right)\right)}$$

Equilibrium rate $\nu_0$ is given by the self-consistency equation:

$\nu_0=F(\mu(\nu_0),\sigma^2(\nu_0))$ ...Eqn(1)

where $F$ is the current-to-rate transfer function of the LIF neuron receiving a white noise input:


In [18]:
tau_r, V_th, V_r, u, nu = symbols('tau_r V_th V_r u nu')
# Siegert formula: the inner integral runs up to u, and mu is
# subtracted from the voltage before dividing by sigma
F_expr = ( tau_r + 2*tau_m*Integral( exp(u**2)*Integral(exp(-nu**2),(nu,-oo,u)) ,(u,(V_r-mu)/sigma,(V_th-mu)/sigma) ) )**-1
F_expr


Out[18]:
$$\frac{1}{2 \tau_{m} \int_{\frac{V_{r} - \mu}{\sigma}}^{\frac{V_{th} - \mu}{\sigma}} e^{u^{2}} \int_{-\infty}^{u} e^{- \nu^{2}}\, d\nu\, du + \tau_{r}}$$
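To make Eqn(1) concrete, the fixed point can be found by root-finding. Writing the inner integral as $2\int_{-\infty}^{u}e^{-\nu^2}d\nu=\sqrt{\pi}\,(1+\mathrm{erf}(u))$ and noting $e^{u^2}(1+\mathrm{erf}(u)) = \mathrm{erfcx}(-u)$ gives a numerically stable integrand via `scipy.special.erfcx`. All parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import erfcx

# Illustrative parameter values (assumed for this sketch, not from the paper);
# voltages in mV, times in ms, rates in spikes/ms.
tau_m, tau_r = 20.0, 0.5
V_th, V_r = 20.0, 10.0
C, J, f, g, mu_0 = 1000, 0.1, 0.8, 5.0, 24.0

def F(mu, sigma):
    """LIF transfer function; erfcx(-u) = exp(u^2)*(1 + erf(u)), stably."""
    lo, hi = (V_r - mu) / sigma, (V_th - mu) / sigma
    integral, _ = quad(lambda u: erfcx(-u), lo, hi)
    return 1.0 / (tau_r + tau_m * np.sqrt(np.pi) * integral)

def mu_of(nu):       # mean input at network rate nu
    return mu_0 + J * C * (f - (1 - f) * g) * tau_m * nu

def sigma_of(nu):    # input s.d. at network rate nu
    return np.sqrt(tau_m * nu * C * J**2 * (f + (1 - f) * g**2))

# Self-consistency, Eqn(1): nu_0 = F(mu(nu_0), sigma(nu_0))
nu_star = brentq(lambda nu: nu - F(mu_of(nu), sigma_of(nu)), 1e-6, 1.0)
```

The bracket $[10^{-6}, 1]$ spikes/ms is an assumption that happens to contain a sign change for these parameters; a different parameter set may need a different bracket.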

Vary firing rate of $j$th neuron: $\nu_0 \to \nu_0+\nu_j(t)$.

Mean $\mu_i = \mu + \tau_m\sum_j J_{ij}\nu_j(t)$ ...Eqn(9).

Variance $\sigma_i^2 = \sigma^2 + \tau_m\sum_j J_{ij}^2\nu_j(t)$ ...Eqn(10).

[A Poisson spike train with rate $\nu_0+\nu_j$ can be decomposed as the superposition of two independent Poisson trains with rates $\nu_0$ and $\nu_j$: in each interval $dt$, the merged process spikes with probability $(\nu_0+\nu_j)dt$. The Gaussian statistics do not come from summing spike times; rather, the synaptic input over a window $\tau_m$ is the spike count multiplied by $J$, summed over many presynaptic neurons, which tends to a Gaussian by the Central Limit Theorem.]
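A quick numeric check of the superposition property in the note above (the rates, window, and trial count are arbitrary illustrative choices): counts from two merged independent Poisson trains match counts from a single train at the summed rate, with mean equal to variance as Poisson statistics require.

```python
import numpy as np

rng = np.random.default_rng(42)
nu0, nuj, T, trials = 5.0, 2.0, 1.0, 200_000  # illustrative rates and window

# Counts from two independent Poisson trains, merged
merged = rng.poisson(nu0 * T, trials) + rng.poisson(nuj * T, trials)
# Counts from a single Poisson train at the summed rate
single = rng.poisson((nu0 + nuj) * T, trials)

# For a Poisson count, mean and variance both equal rate * T = 7.0 here
stats = (merged.mean(), merged.var(), single.mean(), single.var())
```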
