In Nengo, I might have an Ensemble representing $x(t)$. However, when that data is used, it is filtered by whatever synapse is being used, resulting in a measured value of $x(t)*h(t)$.
However, we can cheat a little bit. We can use the NEF dynamics principle and add a recurrent connection such that the effective filter is of a different time constant than what we're actually using. That is, if we scale the ensemble's input by $B'$ and add a recurrent connection with a gain of $A'$, and if the actual synaptic time constant is $\tau$, then the effect of the whole system will be
$X(s) = {{B'} \over {\tau s + 1 - A'}} U(s)$
Suppose we have an actual synapse of $\tau=0.01$, but we'd like an effective synapse of $\tau_{desired}=0.001$. That is, we want $X(s)={1 \over {0.001s + 1}}U(s)$. Solving, we get $A'=-9$ and $B'=10$. In general, $B'=\tau / \tau_{desired}$ and $A'=1-B'$.
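To see where these come from, set the effective transfer function equal to the desired one and match coefficients:

${{B'} \over {\tau s + 1 - A'}} = {1 \over {\tau_{desired} s + 1}} \quad \Leftrightarrow \quad \tau_{desired} s + 1 = {\tau \over B'} s + {{1 - A'} \over B'}$

The $s$ terms give $B' = \tau / \tau_{desired}$ and the constant terms give $A' = 1 - B'$.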
Here is a function that builds this system for us as a Nengo network. It creates the Ensemble and some temporary passthrough Nodes that make it easy for other parts of the model to connect to this component without knowing anything about its internals. At build time, these passthrough Nodes will disappear, leaving only the actual Connections with the appropriate transforms.
In [28]:
# %pylab provides np and the plotting functions (figure, plot, ...) used below
%pylab inline
import nengo

def DynamicCompensation(synapse, desired_synapse, n_neurons, dimensions, radius=1, verbose=True):
    B = synapse / desired_synapse
    A = 1 - B
    if verbose:
        print "input gain (B'):", B
        print "feedback gain (A'):", A
    net = nengo.Network(label="Dynamic Compensation")
    with net:
        net.ensemble = nengo.Ensemble(n_neurons, dimensions, radius=radius)  # make the ensemble
        net.input = nengo.Node(None, size_in=dimensions)                     # passthrough input
        net.output = nengo.Node(None, size_in=dimensions)                    # passthrough output
        nengo.Connection(net.input, net.ensemble, transform=B, synapse=None)        # scale inputs
        nengo.Connection(net.ensemble, net.ensemble, transform=A, synapse=synapse)  # feedback
        nengo.Connection(net.ensemble, net.output, synapse=None)
    return net
Now we can test our component. This code makes a square-wave input, passes it into our system, and plots the output. For comparison, it also plots the input filtered by the desired filter.
In [29]:
def test_dyn(n_neurons=100, synapse=0.01, desired_synapse=0.001, dt=0.001, radius=1,
             show_plot=True):
    model = nengo.Network()
    with model:
        stimulus = nengo.Node(lambda t: int(t*10) % 2 - 0.5)   # square wave
        dyn = DynamicCompensation(synapse=synapse, desired_synapse=desired_synapse,
                                  n_neurons=n_neurons, dimensions=1, radius=radius,
                                  verbose=show_plot)
        nengo.Connection(stimulus, dyn.input, synapse=None)

        probe_stim = nengo.Probe(stimulus)
        probe = nengo.Probe(dyn.output, synapse=synapse)   # output, seen through the actual synapse
        probe_spikes = nengo.Probe(dyn.ensemble.neurons)   # spikes, for measuring firing rates

    sim = nengo.Simulator(model, dt=dt)
    sim.run(0.6, progress_bar=False)

    # compare against the stimulus filtered with the desired synapse
    ideal = nengo.synapses.filt(sim.data[probe_stim],
                                nengo.synapses.Lowpass(desired_synapse), dt=dt)
    rmse = np.sqrt(np.mean((sim.data[probe] - ideal)**2))
    rate = np.mean(sim.data[probe_spikes])

    if show_plot:
        figure(figsize=(12, 4))
        plot(sim.trange(), sim.data[probe], label='neurons')
        plot(sim.trange(), ideal, label='ideal')
        legend(loc='best')
        print 'rmse:', rmse
        print 'average neuron spike rate (Hz):', rate
        show()
    return rmse, rate
Testing when the desired synapse is the same as the actual synapse. This should make the feedback gain 0 (i.e., do nothing).
In [30]:
test_dyn(desired_synapse=0.01, synapse=0.01)
Now we try simulating a faster synapse than we actually have. Here we desire $\tau=0.003$ but actually have synapses of $\tau=0.010$. This seems to work pretty well.
In [20]:
test_dyn(desired_synapse=0.003, synapse=0.01)
Here we try for $\tau=0.001$. This doesn't seem to work quite as well, but it's pretty close.
In [21]:
test_dyn(desired_synapse=0.001, synapse=0.01, radius=1)
But, if we increase the radius that the Ensemble represents, we can get pretty close. The larger radius is needed because the radius sets how big a range of $x(t)$ the Ensemble must represent, but what we're actually plotting is the filtered value $x(t)*h(t)$, which stays in a smaller range. It should be possible to compute how much extra range is needed, although it will depend on how quickly $x(t)$ varies (a rough empirical check is sketched after the next cell).
In [22]:
test_dyn(desired_synapse=0.001, synapse=0.01, radius=2)
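One rough way to check how much range is needed (a quick sketch, not part of the original analysis): run the compensation component in Direct mode, so the compensation is computed exactly rather than with neurons, and measure how far the unfiltered $x(t)$ actually swings for this input. The peak value is an upper bound on the radius needed for a faithful representation; the radius=2 result above suggests that briefly saturating during the overshoot is not too harmful, presumably because the overshoot decays with the desired (fast) time constant.

In [ ]:

# a rough sketch (not in the original notebook): measure the range of the ideal,
# unfiltered x(t) by running the compensation component in Direct mode
model = nengo.Network()
with model:
    stimulus = nengo.Node(lambda t: int(t*10) % 2 - 0.5)   # same square wave as above
    dyn = DynamicCompensation(synapse=0.01, desired_synapse=0.001,
                              n_neurons=1, dimensions=1, verbose=False)
    dyn.ensemble.neuron_type = nengo.Direct()   # compute x(t) exactly, no neurons
    nengo.Connection(stimulus, dyn.input, synapse=None)
    probe_x = nengo.Probe(dyn.output, synapse=None)   # unfiltered x(t)
sim = nengo.Simulator(model, dt=0.001)
sim.run(0.6, progress_bar=False)
print 'peak |x(t)|:', np.max(np.abs(sim.data[probe_x]))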
One interesting question is what effect this way of doing dynamics has on noise. Here we adjust the actual synapse while keeping the effective synapse the same, and measure the rmse and the average firing rates of the neurons.
This seems to indicate that we might get a slight drop in the firing rates, but no change in the rmse. The drop in the firing rates is likely due to the particulars of the standard Nengo LIF tuning curve (slightly lower average firing rates for values in the middle of the representational range); a rough check of this is sketched after the results below.
In [41]:
N = 20
for synapse, radius in [(0.001, 0.5), (0.01, 2), (0.1, 20)]:
    print 'actual synapse:', synapse
    print 'radius:', radius
    data = [test_dyn(desired_synapse=0.001, synapse=synapse, radius=radius, show_plot=False)
            for i in range(N)]
    rmse, rate = np.mean(data, axis=0)
    print 'rmse:', rmse
    print 'average firing rate (Hz):', rate
    print
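As a rough check of that explanation (a sketch added here, not part of the original experiment), we can feed a constant value of 0.5 into ensembles with the radii used above and look at the mean spike rate; with a larger radius, 0.5 sits closer to the middle of the representational range.

In [ ]:

# a rough check (not in the original notebook): mean spike rate when representing
# a constant 0.5, for the radii used above
for radius in [0.5, 2, 20]:
    model = nengo.Network(seed=1)   # same seed, so only the radius changes
    with model:
        stim = nengo.Node(0.5)
        ens = nengo.Ensemble(100, 1, radius=radius)
        nengo.Connection(stim, ens, synapse=None)
        probe_spikes = nengo.Probe(ens.neurons)
    sim = nengo.Simulator(model)
    sim.run(0.5, progress_bar=False)
    print 'radius:', radius, ' mean rate at 0.5 (Hz):', np.mean(sim.data[probe_spikes])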
Here we use the above trick to compensate for the filter that is applied to the input of a dynamical system. Since the input $u(t)$ gets filtered by $h(t)$ due to the input synapse, the actual effective input to the system is $u(t)*h(t)$. So we can do the same transformation purely on the input. If the input is coming from a Node, we can do it purely in math (and if it is coming from an Ensemble, we can do it with that Ensemble in neurons, as above).
Below is an example where the dynamical system is an integrator.
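As a reminder of the NEF dynamics principle used for the integrator: to implement $\dot{x} = Ax + Bu$ through a synapse with time constant $\tau$, the recurrent gain is $A' = \tau A + 1$ and the input gain is $B' = \tau B$. Here we want $\dot{x} = 10u$ (i.e. $A=0$, $B=10$), so the recurrent connection has gain $1$ and the input connection has gain $10\tau$, which is the transform=synapse*10 in the code below; the np.cumsum(sim.data[probe_stim])*dt*10 line in the plot is the corresponding ideal $x(t) = 10\int u \, dt$.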
Here is the network builder and test code.
In [3]:
def test_inputs(n_neurons=100, synapse=0.01, desired_synapse=0.001, dt=0.001,
                radius=1, input_as_neurons=False):
    model = nengo.Network()
    with model:
        stimulus = nengo.Node(lambda t: int(t*10) % 2 - 0.5)   # square wave
        dyn = DynamicCompensation(synapse=synapse, desired_synapse=desired_synapse,
                                  n_neurons=n_neurons, dimensions=1, radius=radius)
        nengo.Connection(stimulus, dyn.input, synapse=None)
        if not input_as_neurons:
            dyn.ensemble.neuron_type = nengo.Direct()   # do the compensation in math, not neurons

        # the dynamical system: an integrator dx/dt = 10*u
        system = nengo.Ensemble(500, 1)
        nengo.Connection(system, system, synapse=synapse)
        nengo.Connection(dyn.output, system, synapse=synapse, transform=synapse*10)

        probe_stim = nengo.Probe(stimulus, synapse=None)
        probe = nengo.Probe(dyn.output, synapse=synapse)
        probe_unfilt = nengo.Probe(dyn.output, synapse=None)
        probe_sys = nengo.Probe(system, synapse=synapse)

    sim = nengo.Simulator(model, dt=dt)
    sim.run(0.6, progress_bar=False)

    figure(figsize=(12, 8))
    plot(sim.trange(), sim.data[probe], label='effective system input $u(t)*h(t)$')
    plot(sim.trange(), sim.data[probe_unfilt], label='unfiltered system input $u(t)$')
    plot(sim.trange(), sim.data[probe_stim], label='ideal input')
    plot(sim.trange(), sim.data[probe_sys], label='system $x(t)$ (after filter)')
    plot(sim.trange(), np.cumsum(sim.data[probe_stim])*dt*10, label='ideal system')
    ylim(-1.2, 1.2)
    legend(loc='upper right')
    show()
First, we try it without compensation. Notice that the integrator system state lags a bit and follows a slightly curved trajectory.
In [78]:
test_inputs(desired_synapse=0.01)
Now we turn on compensation. This adjusts the input so that instead of a $\tau=0.01$, we get an effective $\tau=0.001$, even though the actual synapse is still at $\tau=0.01$.
Notice that the (filtered) system state (i.e. after applying the spike filter) is now quite straight and non-laggy.
In [84]:
test_inputs(desired_synapse=0.001)
This trick also works pretty well when the input to the dynamical system comes from an Ensemble (rather than from an external Node, as above).
Note that the only thing required for this trick to work is that the input Ensemble does not already have a recurrent connection on it. This works out perfectly: if it does have a recurrent connection, then it has already been defined with the dynamics equation, and so its desired $x(t)$ is the value after the filter. So we should be able to use this trick anywhere.
In [18]:
test_inputs(desired_synapse=0.001, input_as_neurons=True, radius=2)