Nengo Example: Controlled Integrator

A controlled integrator is a circuit that acts on two signals:

  1. Input - the signal being integrated
  2. Control - the control signal to the integrator

A controlled integrator accumulates input, but its state can be directly manipulated by the control signal. We can write the dynamics of a simple controlled integrator like this:

$$ \dot{a}(t) = \mathrm{control}(t) \cdot a(t) + B \cdot \mathrm{input}(t) $$

In this notebook, we will build a controlled integrator with Leaky Integrate-and-Fire (LIF) neurons. The Neural Engineering Framework (NEF) equivalent equation for this integrator is:

$$ \dot{a}(t) = \mathrm{control}(t) \cdot a(t) + \tau \cdot \mathrm{input}(t). $$

We call the coefficient $\tau$ here a recurrent time constant because it governs the rate of integration.

Network behaviour: A = tau * Input + A * Control
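
To see why scaling the input by $\tau$ produces integration, here is a brief sketch of the standard NEF argument (assuming, as in the code below, that both the input and the recurrent connections are filtered by a first-order low-pass synapse with time constant $\tau$). Feeding $\mathrm{control}(t) \cdot a(t)$ back into the population and driving it with $\tau \cdot \mathrm{input}(t)$ gives, approximately,

$$ \dot{a}(t) \approx \frac{\mathrm{control}(t) \cdot a(t) - a(t)}{\tau} + \mathrm{input}(t), $$

so when $\mathrm{control}(t) = 1$ the recurrent terms cancel and $\dot{a}(t) \approx \mathrm{input}(t)$ (ideal integration), while $\mathrm{control}(t) < 1$ leaves a leak term proportional to $-a(t)$.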



In [1]:
### do some setup before we start
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

Step 1: Create the network

We can use standard network-creation commands to begin creating our controlled integrator. We create a Network, and then we create a population of neurons (called an ensemble). This population of neurons will represent the state of our integrator, and the connections between the neurons in the ensemble will define the dynamics of our integrator.


In [2]:
import nengo
from nengo.old_api import Network, compute_transform

tau = 0.1

# Create the nengo model
model = Network('Controlled Integrator')

# Create the neuronal ensembles
A = model.make_array('A', 225, 1, 2,            # Make a population with 225 neurons,
                     radius = 1.5)              #   2 dimensions, and a larger radius
                                                #   to accommodate large simultaneous
                                                #   inputs

Step 2: Define the 'input' signal to integrate

We will use a Python function, input_func, to define our input signal as a function of time t. We'll define our signal to be a step function using if-then-else code. Our piecewise function sits at 0 until 0.2 seconds into the simulation, then jumps up to 5, back to 0, down to -10, back to 0, up to 5, and finally back to 0 for the remainder of the run. Our integrator will respond by ramping up when the input is positive and ramping down when the input is negative.


In [3]:
def input_func(t):              # Create a function that outputs
    if   t < 0.2:  return 0     # 0 for t < 0.2
    elif t < 0.3:  return 5     # 5 for 0.2 < t < 0.3
    elif t < 0.44: return 0     # 0 for 0.3 < t < 0.44
    elif t < 0.54: return -10   # -10 for 0.44 < t < 0.54
    elif t < 0.8:  return 0     # 0 for 0.54 < t < 0.8
    elif t < 0.9:  return 5     # 5 for 0.8 < t < 0.9
    else:          return 0     # 0 for t > 0.9
    
t = np.linspace(0, 1, 101)
plt.plot(t, list(map(input_func, t)))
plt.ylim((-11,11));
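
As a quick sanity check on what the integrator should do, we can compute the exact integral of this input by hand: each step changes the integral by its value times its duration. The short snippet below (plain Python, independent of the model) prints the running total after each step of input_func.

# Each (start, end, value) tuple is one step of input_func defined above
steps = [(0.2, 0.3, 5), (0.44, 0.54, -10), (0.8, 0.9, 5)]

total = 0.0
for start, end, value in steps:
    total += value * (end - start)   # integral of a constant step = value * duration
    print("after the step ending at t=%.2f, the integral is %+.1f" % (end, total))

# We expect +0.5, -0.5, and 0.0, so an ideal integrator should ramp up to 0.5,
# drop down to -0.5, and return to 0.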


We include this input function (input_func) in our neural model like this:


In [4]:
# Define an input signal within our model
model.make_input('Input', input_func)
 
# Connect the Input signal to ensemble A.
# The `transform` argument means "connect the real-valued signal 'Input',
# scaled by tau, to the first of A's two input channels."
model.connect('Input', 'A',
              transform=[[tau], [0]],
              pstc=tau)

Step 3: Define the 'control' signal

We also need to create a control signal that controls how the integrator behaves. We will make this signal 1 for the first part of the simulation, and 0.5 for the second part. This means that at the beginning of the simulation, the integrator will act as an optimal integrator, and partway through the simulation (at t = 0.6), it will switch to being a leaky integrator.


In [5]:
def control_func(t):            # Create a function that outputs
    if   t < 0.6: return 1      # 1 for t < 0.6
    else:         return 0.5    # 0.5 for t > 0.6
    
t = np.linspace(0, 1, 101)
plt.plot(t, list(map(control_func, t)))
plt.ylim(0,1.1);


We add the control signal to the network like we added the input signal, but this time we connect it to the second dimension of our neural population.


In [6]:
model.make_input('Control', control_func)

# -- Connect the "Control" signal to the second of A's two input channels
#    using the `transform` matrix.
model.connect('Control', 'A', transform=[[0], [1]], pstc=0.005)
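
Taken together, the two transform matrices mean that ensemble A receives the two-dimensional input

$$ \begin{bmatrix} \tau \cdot \mathrm{input}(t) \\ \mathrm{control}(t) \end{bmatrix}, $$

so A's first dimension carries the (scaled) signal to be integrated and its second dimension carries the control value. The recurrent connection defined in the next step will also feed back into the first dimension.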

Step 4: Define the integrator dynamics

We set up the integrator by connecting population 'A' to itself: this recurrent connection provides the feedback that integrates the input. The time constant $\tau$ on the recurrent weights affects both the rate and accuracy of integration; try adjusting it and see what happens.


In [7]:
# Create a recurrent connection that first takes the product
# of both dimensions in A (i.e., the value times the control)
# and then adds this back into the first dimension of A using
# a transform
model.connect('A', 'A',
              func=lambda x: x[0] * x[1],  # -- function is applied first to A
              transform=[[1], [0]],        # -- transform converts function output to new state inputs
              pstc=tau)

# Make a probe to record both dimensions of A
p = model.make_probe('A', dt_sample=0.001, pstc=0.02)
 
# Run the model
t_final = 1.4
model.run(t_final)          # Run the model for 1.4 seconds

A_data = p.get_data()       # get data about A's 2-D value
value, control = A_data.T   # split A's data into its two dimensions

Step 5: Plot and interpret the result


In [8]:
# Plot the value and control signals, along with the exact integral
dt = 0.001
input_sig = list(map(input_func, dt*np.arange(t_final/dt)))
control_sig = list(map(control_func, dt*np.arange(t_final/dt)))
ref = dt*np.cumsum(input_sig)
t = lambda x: (t_final/float(len(x)))*np.arange(len(x))

fig = plt.figure(figsize=(6,8))
ax = fig.add_subplot(211)
ax.plot(t(input_sig), input_sig)
ax.set_ylim([-11,11])
ax.set_ylabel('input')
ax.legend(['input'], loc=3, frameon=False)

ax = fig.add_subplot(212)
ax.plot(t(ref), ref, 'k--')
ax.plot(t(value), value)
ax.plot(t(control), control)
ax.set_ylim([-1.1,1.1])
ax.set_xlabel('time [s]')
ax.set_ylabel('x(t)')
ax.legend(['exact', 'A[0] (value)', 'A[1] (control)'], loc=3, frameon=False);


The above plot shows the output of our system, specifically the (integrated) value stored by the A population, along with the control signal represented by the A population. The exact value of the integral, as performed by a perfect (non-neural) integrator, is shown for reference.

When the control value is 1 (t < 0.6), the neural integrator performs near-perfect integration. However, when the control value drops to 0.5 (t > 0.6), the integrator becomes a leaky integrator. This means that in the absence of input, its stored value drifts towards zero.
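
Using the same sketch as above (and again assuming first-order low-pass synapses with time constant $\tau$ on the input and recurrent connections), we can make the leak quantitative. With $\mathrm{control} = 0.5$, $\tau = 0.1$, and no input, the effective dynamics are approximately

$$ \dot{a}(t) \approx \frac{(0.5 - 1) \cdot a(t)}{\tau} = -5 \, a(t), $$

so the stored value decays exponentially with a time constant of about $\tau / (1 - \mathrm{control}) = 0.2$ seconds, consistent with the drift towards zero described above.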