SYDE 556/750: Simulating Neurobiological Systems

Accompanying Readings: Chapter 8

Terry Stewart

Dynamics

  • Everything we've looked at so far has been a feed-forward function
    • There's some pattern of activity in one group of neurons representing $x$
    • We want that to cause some pattern of activity in another group of neurons to represent $y=f(x)$
    • These can be chained together to make more complex systems $z=q(f(x)+g(y))$
  • these are all feed-forward networks
    • what about recurrent networks?

  • What happens when we connect a neural group back to itself?

Recurrent functions

  • What if we do exactly what we've done so far, but instead of connecting one group of neurons to another, we connect it back to itself?
    • Instead of $y=f(x)$
    • We get $x=f(x)$ (???)
  • This is clearly nonsensical
    • For example, if we do $f(x)=x+1$ then we'd have $x=x+1$, or $x-x=1$, or $0=1$
    • What would happen if we built this?
    • What about $f(x)=-x$? or $f(x)=x^2$? (A quick numerical check of which of these even have solutions is sketched below.)
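
  • Before building anything, a quick plain-Python check (no nef involved) of whether $x=f(x)$ even has a solution for these three functions; the grid range and resolution are arbitrary choices

In [ ]:
import numpy

x = numpy.linspace(-2, 2, 4001)

# the three candidate recurrent functions
functions = [
    ('f(x)=x+1', x + 1),
    ('f(x)=-x', -x),
    ('f(x)=x^2', x**2),
]

for name, fx in functions:
    # a solution of x=f(x) is anywhere f(x)-x changes sign
    crossings = x[numpy.where(numpy.diff(numpy.sign(fx - x)) != 0)[0]]
    print(name + ' has solutions near: ' + str(crossings))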

Try it out

  • Let's try implementing this.
  • Hmm, that's a bit difficult with what we've done so far
    • All of our code so far is set up so that we simulate what happens in one group of neurons, then take all of that data and feed it to the next group of neurons
    • Much easier to write code that way
    • But difficult when we want to have recurrent connections
  • So let's switch to other software
  • Start with $x=x+1$

In [ ]:
import nef
net = nef.Network('Recurrent Test 1')
net.make('A', neurons=1000, dimensions=1)       # a single 1-D population
def feedback(x):
    return x[0]+1                               # try to implement x = x + 1
net.connect('A', 'A', func=feedback, pstc=0.1)  # recurrent connection, 100ms post-synaptic filter
net.add_to_nengo()
net.view()

  • That sort of makes sense
    • $x$ increases quickly, then hits an upper bound
  • How quickly?
    • What parameters of the system affect this?
  • What are the precise dynamics?

  • What about $f(x)=-x$?


In [ ]:
import nef
net = nef.Network('Recurrent Test 1')
net.make('A', neurons=100, dimensions=1)
def feedback(x):
    return -x[0]
net.connect('A', 'A', func=feedback)
net.add_to_nengo()
net.view()

  • That also makes sense. What if we nudge it away from zero?

In [ ]:
import nef
net = nef.Network('Recurrent Test 1')
net.make('A', neurons=100, dimensions=1)
def feedback(x):
    return -x[0]
net.connect('A', 'A', func=feedback)

net.make_input('input', [0])
net.connect('input', 'A')

net.add_to_nengo()
net.view()

  • With an input of 1, $x=0.5$
  • With an input of -1, $x=-0.5$
  • With an input of 0, it goes back to $x=0$
  • Does this make sense?

    • Why / Why not?
    • And why that particular timing?
  • What about $f(x)=x^2$?


In [ ]:
import nef
net = nef.Network('Recurrent Test 1')
net.make('A', neurons=100, dimensions=1)
def feedback(x):
    return x[0]*x[0]
net.connect('A', 'A', func=feedback)

net.make_input('input', [0])
net.connect('input', 'A')

net.add_to_nengo()
net.view()

  • Well that's weird
  • Stable at $x=0$ with no input, but something very strange happens once $x$ gets near $1$
  • Why is this happening?

Making sense of dynamics

  • Let's go back to something simple
  • Just a single feed-forward neural population
    • encode $x$ into current, compute spikes, decode filtered spikes into $\hat{x}$
  • Instead of a constant input, let's change the input
    • Change it suddenly from zero to one to get a sense of what's happening with changes

In [18]:
import numpy
import syde556

dt = 0.001

t = numpy.arange(1000)*dt
x = numpy.zeros(1000)
x[300:]=1

A = syde556.Ensemble(neurons=80, dimensions=1, max_rate=(50,100))

decoder = A.compute_decoder(x, x, noise=0.2)

spikes_A, xhat = A.simulate_spikes(x, decoder, tau=0.1)

import pylab
pylab.plot(t, x, label='$x$')
pylab.plot(t, xhat, label='$\hat{x}$')
pylab.legend(loc='best')
pylab.ylim(-0.2, 1.2)
pylab.show()


  • This was supposed to compute $f(x)=x$
  • For a constant input, that works
    • But we get something else when there's a change in the input
  • What is this difference?
    • What affects it?

In [20]:
import numpy
import syde556

dt = 0.001

t = numpy.arange(1000)*dt
x = numpy.zeros(1000)
x[300:]=1

A = syde556.Ensemble(neurons=80, dimensions=1, max_rate=(50,100))

decoder = A.compute_decoder(x, x, noise=0.2)

spikes_A, xhat = A.simulate_spikes(x, decoder, tau=0.03)

import pylab
pylab.plot(t, x, label='$x$')
pylab.plot(t, xhat, label='$\hat{x}$')
pylab.legend(loc='best')
pylab.ylim(-0.2, 1.2)
pylab.show()


  • The time constant of the post-synaptic filter
  • We're not getting $f(x)=x$
  • Instead we're getting $f(x(t))*h(t)$, which for $f(x)=x$ is $x(t)*h(t)$ (the input convolved with the post-synaptic filter)

In [28]:
import numpy
import syde556

dt = 0.001

t = numpy.arange(1000)*dt
x = numpy.zeros(1000)
x[300:]=1

h = syde556.psc_filter(t-0.5, 0.1)
fx = syde556.apply_filter(numpy.array(x)[:,None], h)*dt

A = syde556.Ensemble(neurons=80, dimensions=1, max_rate=(50,100))

decoder = A.compute_decoder(x, x, noise=0.2)

spikes_A, xhat = A.simulate_spikes(x, decoder, tau=0.1)

import pylab
pylab.plot(t, x, label='$x(t)$')
pylab.plot(t, xhat, label='$\hat{x}(t)$')
pylab.plot(t, fx, label='$x(t)*h(t)$')
pylab.legend(loc='lower right')
pylab.ylim(-0.2, 1.2)
pylab.show()


Recurrent connections

  • So a connection actually approximates $f(x(t))*h(t)$
  • So what does a recurrent connection do?

$x(t) = f(x(t))*h(t)$

  • where $$ h(t) = \begin{cases} {1 \over \tau} e^{-t/\tau} &\mbox{if } t > 0 \\ 0 &\mbox{otherwise} \end{cases} $$

  • How can we work with this?

  • General rule of thumb: convolutions are annoying, so let's get rid of them

  • We could do a Fourier transform
    • $X(\omega)=F(\omega)H(\omega)$
    • but $h(t)$ is this really nice exponential and is zero for values of t<0, so there's another transform that might be a bit easier
  • Laplace transform
    • $X(s)=F(s)H(s)$
    • $H(s)={1 \over {1+s\tau}}$
  • rearranging:

$X=F {1 \over {1+s\tau}}$

$X(1+s\tau) = F$

$X + Xs\tau = F$

$sX = {1 \over \tau} (F-X)$

  • convert back into the time domain:

${dx \over dt} = {1 \over \tau} (f(x)-x)$
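
  • A quick way to check this equivalence with plain numpy (no neurons): filter a step signal with $h(t)$ and compare it to Euler-integrating ${dx \over dt} = {1 \over \tau}(f-x)$; this is just a sketch, with an arbitrary step input and time step

In [ ]:
import numpy
import pylab

dt = 0.001
tau = 0.1
t = numpy.arange(1000)*dt

f = numpy.zeros(1000)          # the signal being fed through the connection: a step at t=0.3s
f[300:] = 1

# method 1: convolve with the normalized exponential filter h(t) = exp(-t/tau)/tau
h = numpy.exp(-t/tau)/tau
filtered = numpy.convolve(f, h)[:1000]*dt

# method 2: Euler-integrate tau*dx/dt = f - x
x = numpy.zeros(1000)
for i in range(1, 1000):
    x[i] = x[i-1] + dt*(f[i-1] - x[i-1])/tau

pylab.plot(t, filtered, label='$f(t)*h(t)$')
pylab.plot(t, x, '--', label='first-order ODE')
pylab.legend(loc='best')
pylab.show()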

Dynamics

  • This says that if we do a recurrent connection, we actually end up implementing a differential equation

  • So what happened with $f(x)=x+1$?

    • ${dx \over dt} = {1 \over \tau} (x+1-x)$
    • ${dx \over dt} = {1 \over \tau}$
  • what about $f(x)=-x$?
    • ${dx \over dt} = {1 \over \tau} (-x-x)$
    • ${dx \over dt} = {-2x \over \tau}$
  • and $f(x)=x^2$?

    • ${dx \over dt} = {1 \over \tau} (x^2-x)$
  • What if there's some differential equation we really want to implement?

    • we want ${dx \over dt} = f(x)$
    • so we do a recurrent connection of $f'(x)=\tau f(x)+x$
    • the resulting model will end up implementing ${dx \over dt} = {1 \over \tau} (\tau f(x)+x-x)=f(x)$
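
  • To check these predictions without any neurons, here is a minimal plain-numpy sketch that Euler-integrates ${dx \over dt} = {1 \over \tau}(f(x)-x)$ for each of the three feedback functions (the initial conditions, $\tau$, and run length are arbitrary choices)

In [ ]:
import numpy
import pylab

dt = 0.001
tau = 0.1
steps = 500
t = numpy.arange(steps)*dt

feedbacks = [
    ('f(x)=x+1', lambda x: x + 1, 0.0),
    ('f(x)=-x',  lambda x: -x,    0.5),
    ('f(x)=x^2', lambda x: x*x,   0.9),   # just below the unstable point at x=1
]

for name, f, x0 in feedbacks:
    x = numpy.zeros(steps)
    x[0] = x0
    for i in range(1, steps):
        # the dynamics induced by a recurrent connection through the synapse
        x[i] = x[i-1] + dt*(f(x[i-1]) - x[i-1])/tau
    pylab.plot(t, x, label=name)

pylab.legend(loc='best')
pylab.xlabel('time (s)')
pylab.show()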

Inputs

  • What happens if there's an input as well?
    • We'll call the input from another population $u$; that connection can also compute some function $g(u)$
    • $x(t) = f(x(t))*h(t)+g(u(t))*h(t)$
  • Follow the same derivation steps

${dx \over dt} = {1 \over \tau} (f(x)-x + g(u))$

  • So if you have some input that you want added to ${dx \over dt}$, you need to scale it by $\tau$

  • This lets us do any differential equation of the form ${dx \over dt}=f(x)+g(u)$
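
  • As a sanity check of this recipe (plain numpy, no neurons; the choice of $f(x)=-x$, $g(u)=u$, and the step input are arbitrary): integrate the desired equation directly, and integrate the synapse dynamics using the recurrent function $\tau f(x)+x$ and the input function $\tau g(u)$

In [ ]:
import numpy
import pylab

dt = 0.001
tau = 0.1
steps = 2000
t = numpy.arange(steps)*dt

u = numpy.zeros(steps)     # input signal: a step at t=0.5s
u[500:] = 1.0

def f(x):                  # desired dx/dt = f(x) + g(u)
    return -x
def g(u):
    return u

x_direct = numpy.zeros(steps)    # directly integrate the desired equation
x_rec = numpy.zeros(steps)       # integrate the synapse dynamics using the recipe
for i in range(1, steps):
    x_direct[i] = x_direct[i-1] + dt*(f(x_direct[i-1]) + g(u[i-1]))
    # recurrent connection computes tau*f(x)+x, input connection computes tau*g(u)
    f_prime = tau*f(x_rec[i-1]) + x_rec[i-1]
    g_prime = tau*g(u[i-1])
    x_rec[i] = x_rec[i-1] + dt*(f_prime - x_rec[i-1] + g_prime)/tau

pylab.plot(t, x_direct, label='desired dynamics')
pylab.plot(t, x_rec, '--', label='recipe through the synapse')
pylab.legend(loc='best')
pylab.show()

  • The two curves lie exactly on top of each other, since the recipe is an exact algebraic rearrangement; the approximation only comes in when neurons are used to compute $\tau f(x)+x$ and $\tau g(u)$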

Applications

Eye control

  • Part of the brainstem called the nucleus prepositus hypoglossi
  • Input is eye velocity $v$
  • Output is eye position $p$
  • ${dp \over dt}=v$

    • This is an integrator ($p$ is the integral of $v$)
  • So, to get it in the standard form ${dx \over dt}=f(x)+g(u)$ we have:

    • $f(p)=0$
    • $g(v)=v$
  • So that means we need a recurrent function of $f'(p)=\tau(0)+p = p$ and $g'(v)=\tau v$

In [ ]:
import nef

tau=0.1

net = nef.Network('Neural Integrator')
net.make('velocity', neurons=100, dimensions=1)
net.make('position', neurons=100, dimensions=1)
def recurrent(x):
    return x[0]                                  # f'(p) = tau*0 + p = p
net.connect('position', 'position', func=recurrent, pstc=tau)
net.connect('velocity', 'position', transform=[[tau]])   # g'(v) = tau*v

net.make_input('input', [0])
net.connect('input', 'velocity')

net.add_to_nengo()
net.view()

  • But, in order to be a perfect integrator, we'd need exactly $f'(p)=p$
    • We won't get exactly that
    • Neural implementations are always approximations
  • Two forms of error:
    • $E_{distortion}$, the decoding error
    • $E_{noise}$, the random noise error
  • What will they do?

  • What affects this?

In [6]:
import numpy
import syde556
reload(syde556)

x = numpy.linspace(-1, 1, 100)

A = syde556.Ensemble(neurons=200, dimensions=1, seed=1, tau_rc=0.02)
decoder = A.compute_decoder(x, x, noise=0.1)
activity, xhat = A.simulate_rate(x, decoder)

import pylab
pylab.axhline(0, color='k')
pylab.plot(x, x-xhat, label='$x-\hat{x}$')
pylab.legend(loc='best')
pylab.xlabel('$x$')
pylab.show()


  • So we can think of the distortion error as introducing a bunch of local attractors into the representation
    • There will be a tendency to drift towards one of these even if the input is zero
  • What will random noise do?
    • Push the representation back and forth
    • What if it is small?
    • What if it is large?
  • So what will changing the post-synaptic time constant $\tau$ do?
    • How does that interact with noise?
  • But real eyes aren't perfect integrators
    • If you get someone to look at something, then turn off the lights but tell them to keep looking in the same direction, their eye will drift back to centre
    • How do we implement that?

${dp \over dt}=v - {p \over \tau_c}$

  • $\tau_c$ is the time constant of that return to centre

  • $f'(p)=\tau ({-p \over \tau_c})+p$


In [ ]:
import nef

tau=0.1
tau_centre=1.0
net = nef.Network('Neural Integrator')
net.make('velocity', neurons=100, dimensions=1)
net.make('position', neurons=100, dimensions=1)
def recurrent(x):
    return (1-tau/tau_centre)*x[0]
net.connect('position', 'position', func=recurrent, pstc=tau)
net.connect('velocity', 'position', transform=[[tau]])

net.make_input('input', [0])
net.connect('input', 'velocity')

net.add_to_nengo()
net.view()

  • [Figure: neural integrator behaviour in humans (a) and goldfish (b)]
  • Humans have more neurons doing this than goldfish (~70 vs ~20)
  • They also have slower decay.
  • Why do these fit together?

  • With fewer neurons, the stable points on the integrator are more attractive, and so a gradual decay is more likely to get stuck
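
  • A rough, purely illustrative sketch of this (no neurons; the sinusoidal ripple standing in for the distortion error and all of the constants here are invented): a leaky integrator with a small ripple added to its feedback gets stuck at nonzero values, while the ideal leaky integrator decays all the way to zero

In [ ]:
import numpy
import pylab

dt = 0.001
tau = 0.1
tau_centre = 1.0
steps = 5000
t = numpy.arange(steps)*dt

def feedback(p, ripple):
    # ideal leaky integrator: (1 - tau/tau_centre)*p; the sine term is a
    # stand-in for the decoder's distortion error, which creates local attractors
    return (1 - tau/tau_centre)*p + ripple*numpy.sin(40*p)

for ripple, style in [(0.0, '--'), (0.02, '-')]:
    for p0 in [0.2, 0.5, 0.9]:
        p = numpy.zeros(steps)
        p[0] = p0
        for i in range(1, steps):
            p[i] = p[i-1] + dt*(feedback(p[i-1], ripple) - p[i-1])/tau
        pylab.plot(t, p, style)   # dashed: no ripple, solid: with ripple

pylab.xlabel('time (s)')
pylab.ylabel('$p$')
pylab.show()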

Controlled Integrator

  • What if we want an integrator where we can adjust the decay?
  • Separate input telling us what the decay constant $d$ should be

${dp \over dt}= v - p d$

  • So there are two inputs: $v$ and $d$

  • Can we write this as ${dx \over dt}=f(x)+g(u)$?

  • We need to compute a nonlinear function of an input ($d$) and the state variable ($p$)
  • How can we do this?
    • change of variables
    • let's have the state variable be $[p, d]$

In [ ]:
import nef

tau=0.1

net = nef.Network('Controlled Neural Integrator')
net.make('decay', neurons=100, dimensions=1)
net.make('velocity', neurons=100, dimensions=1)
net.make('position', neurons=200, dimensions=2)        # state is [p, d]
def recurrent(x):
    # f'(x) = tau*f(x) + x, with f([p, d]) = [-p*d, 0]
    return x[0]-tau*x[1]*x[0], 0
net.connect('position', 'position', func=recurrent, pstc=tau)
net.connect('velocity', 'position', transform=[[tau], [0]])  # g'(v) = tau*v, into the p dimension
net.connect('decay', 'position', transform=[[0], [1]])       # the decay input drives the d dimension

net.make_input('input', [0])
net.connect('input', 'velocity')

net.make_input('input_decay', [0])
net.connect('input_decay', 'decay')

net.add_to_nengo()
net.view()

Other fun functions

  • Oscillator
    • ${dx \over dt}=[x_1, -x_0]$

In [ ]:
import nef
net = nef.Network('Linear Oscillator')
net.make('A', neurons=100, dimensions=2)
def feedback(x):
    # the [x_1, -x_0] term here is not scaled by the pstc, so this
    # actually implements dx/dt = (1/0.01)*[x_1, -x_0]: a fast oscillation
    return x[0]+x[1], -x[0]+x[1]
net.connect('A', 'A', func=feedback, pstc=0.01)

net.make_input('input', [0])
net.connect('input', 'A')

net.add_to_nengo()
net.view()

  • Lorenz Attractor

    • ${dx \over dt}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \over 3}(x_2+28)-28]$

In [ ]:
tau = 0.1
sigma = 10
beta = 8.0/3
rho = 28

import nef
net = nef.Network('Lorenz attractor', seed=6)
net.make('A', 2000, 3, radius=60)

def feedback(x):
    dx0 = -sigma * x[0] + sigma * x[1]
    dx1 = -x[0] * x[2] - x[1]
    dx2 = x[0] * x[1] - beta * (x[2] + rho) - rho
    return [dx0 * tau + x[0], 
            dx1 * tau + x[1], 
            dx2 * tau + x[2]]
net.connect('A', 'A', func=feedback, pstc=tau)

net.view()

  • Note: This is not the original Lorenz attractor.
    • The original is ${dx \over dt}=[10x_1-10x_0, x_0 (28-x_2)-x_1, x_0 x_1 - {8 \over 3}(x_2)]$
    • Why change it to ${dx \over dt}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \over 3}(x_2+28)-28]$?
    • What's being changed here?
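
  • One way to explore that question is to Euler-integrate both versions directly with plain numpy (no neurons) and compare the trajectories; this is a sketch, with arbitrary initial conditions, time step, and run length

In [ ]:
import numpy
import pylab

dt = 0.001
steps = 30000
sigma, beta, rho = 10.0, 8.0/3, 28.0

def original(x):
    return numpy.array([sigma*(x[1] - x[0]),
                        x[0]*(rho - x[2]) - x[1],
                        x[0]*x[1] - beta*x[2]])

def modified(x):
    return numpy.array([sigma*(x[1] - x[0]),
                        -x[0]*x[2] - x[1],
                        x[0]*x[1] - beta*(x[2] + rho) - rho])

for f, label in [(original, 'original'), (modified, 'modified')]:
    x = numpy.array([1.0, 1.0, 1.0])
    trace = numpy.zeros((steps, 3))
    for i in range(steps):
        trace[i] = x
        x = x + dt*f(x)        # simple Euler step
    pylab.plot(trace[:, 0], trace[:, 2], label=label)

pylab.xlabel('$x_0$')
pylab.ylabel('$x_2$')
pylab.legend(loc='best')
pylab.show()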

Oscillators with different paths

  • Since we can implement any function, we're not limited to linear oscillators
  • What about a "square" oscillator?
    • Instead of the value going in a circle, it traces out a square
$$ {{dx} \over {dt}} = \begin{cases} [r, 0] &\mbox{if } |x_1|>|x_0| \wedge x_1>0 \\ [-r, 0] &\mbox{if } |x_1|>|x_0| \wedge x_1<0 \\ [0, -r] &\mbox{if } |x_1|<|x_0| \wedge x_0>0 \\ [0, r] &\mbox{if } |x_1|<|x_0| \wedge x_0<0 \\ \end{cases} $$


In [ ]:
import nef
net = nef.Network('Square Oscillator')
net.make('A', neurons=1000, dimensions=2)

tau = 0.02
r=4
def feedback(x):    
    if abs(x[1])>abs(x[0]):
        if x[1]>0: dx=[r, 0]
        else: dx=[-r, 0]
    else:
        if x[0]>0: dx=[0, -r]
        else: dx=[0, r]
    return [tau*dx[0]+x[0], tau*dx[1]+x[1]]        
net.connect('A', 'A', func=feedback, pstc=tau)

net.make_input('input', [0,0])
net.connect('input', 'A')

net.add_to_nengo()
net.view()

  • Does this do what you expect?
  • How is it affected by:
    • Number of neurons?
    • Post-synaptic time constant?
    • Decoding filter time constant?
    • Speed of oscillation (r)?
  • What about this shape?

In [ ]:
import nef
net = nef.Network('Heart Oscillator')
net.make('A', neurons=1000, dimensions=2)

tau = 0.02
r=4
def feedback(x):    
    return [-tau*r*x[1]+x[0], tau*r*x[0]+x[1]]
net.connect('A', 'A', func=feedback, pstc=tau)

from math import atan2, sin, cos, sqrt
def heart(x):
    theta = atan2(x[1], x[0])
    r = 2 - 2 * sin(theta) + sin(theta)*sqrt(abs(cos(theta)))/(sin(theta)+1.4)
    return [-r*cos(theta), r*sin(theta)]

net.make('B', neurons=10, dimensions=2)
net.connect('A', 'B', func=heart, pstc=tau)
net.make_input('input', [0,0])
net.connect('input', 'A')

net.add_to_nengo()
net.view()

  • We are doing things differently here
  • The actual $x$ value is a normal circle oscillator
  • The heart shape is a function of $x$
    • But that's just a different decoder
  • Would it be possible to do an oscillator where $x$ followed this shape?
    • How could we tell them apart in terms of neural behaviour?
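
  • To see the heart mapping on its own, here is a small plain-numpy sketch that applies the same function to points on the unit circle; this is just the decoded function of $x$, with no neural network involved

In [ ]:
import numpy
import pylab

theta = numpy.linspace(0, 2*numpy.pi, 500)

# the same mapping used as the decoded function above
r = (2 - 2*numpy.sin(theta)
     + numpy.sin(theta)*numpy.sqrt(numpy.abs(numpy.cos(theta)))/(numpy.sin(theta) + 1.4))

pylab.plot(numpy.cos(theta), numpy.sin(theta), label='$x$ (the circular oscillator)')
pylab.plot(-r*numpy.cos(theta), r*numpy.sin(theta), label='heart($x$)')
pylab.axis('equal')
pylab.legend(loc='best')
pylab.show()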

Controlled Oscillator

  • ${dx \over dt}=[x_1 x_2, -x_0 x_2]$

In [ ]:
import nef
net = nef.Network('Controlled Oscillator')
net.make('A', neurons=500, dimensions=3, radius=1.7)

tau = 0.1
scale = 10       # speeds up the oscillation: frequency is proportional to scale*x_2
def feedback(x):
    # tau*f(x) + x, with a gain of 1.1 (slightly more than 1) on the state so the
    # oscillation doesn't slowly decay; x_2 gets no feedback and just follows its input
    return x[1]*x[2]*scale*tau+1.1*x[0], -x[0]*x[2]*scale*tau+1.1*x[1], 0
net.connect('A', 'A', func=feedback, pstc=tau)

net.make_input('input', [0,0,0])
net.connect('input', 'A')

net.add_to_nengo()
net.view()