There is some sort of mapping between neural activity and the value being represented.
Let's call this mapping "representation".
For example, a binary representation of $x = 13$ uses four "neurons", one per bit:
$x = 13$
$a_0 = 1$
$a_1 = 0$
$a_2 = 1$
$a_3 = 1$
$\hat{x} = 1 \cdot 1 + 0 \cdot 2 + 1 \cdot 4 + 1 \cdot 8 = 13$
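As a sanity check, here is a minimal sketch of this binary decoding in numpy: the activities $a_i$ are the bits and the decoding weights $d_i$ are the corresponding powers of two (the variable names here are illustrative).
In [ ]:
import numpy

# activities: one "neuron" per bit of x = 13, low bit first
a = numpy.array([1, 0, 1, 1])
# decoding weights: powers of two
d = numpy.array([1, 2, 4, 8])

xhat = numpy.dot(a, d)
print(xhat)   # 13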
In [1]:
# Rectified linear neuron
import numpy
def rectified_linear(J, gain=100):
    return numpy.maximum(J*gain, 0)
J = numpy.linspace(-1,1,100)
import pylab
pylab.plot(J, rectified_linear(J))
pylab.xlabel('J (current)')
pylab.ylabel('$a$ (Hz)')
pylab.show()
In [5]:
# Leaky integrate and fire
import numpy
import syde556
J = numpy.linspace(-1,10,100)
import pylab
pylab.plot(J, syde556.lif(J, tau_ref=0.002, tau_rc=0.02))
pylab.xlabel('J (current)')
pylab.ylabel('$a$ (Hz)')
pylab.show()
These are called "tuning curves"
Neurons seem to be sensitive to particular values of $x$
What's the mapping between $x$ and $J$?
In [7]:
import numpy
import syde556
x = numpy.linspace(-100,0,100)
import pylab
pylab.plot(x, syde556.lif(x*1+50))
pylab.plot(x, syde556.lif(x*0.1+10))
pylab.plot(x, syde556.lif(x*0.05+5))
pylab.plot(x, syde556.lif(x*0.1+4))
pylab.xlabel('x')
pylab.ylabel('a')
pylab.show()
$ J = \alpha x + J^{bias} $ (an "on" neuron, firing faster for larger $x$)
$ J = - \alpha x + J^{bias} $ (an "off" neuron, firing faster for smaller $x$)
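To see what an "on"/"off" pair looks like, here is a self-contained sketch; lif_rate is a local stand-in for syde556.lif (the standard LIF rate equation), and the $\alpha = 1$, $J^{bias} = 1.5$ values are arbitrary choices for illustration.
In [ ]:
import numpy
import pylab

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    # LIF rate approximation; the neuron is silent below threshold (J <= 1)
    a = numpy.zeros_like(J)
    spiking = J > 1
    a[spiking] = 1.0 / (tau_ref - tau_rc * numpy.log(1 - 1.0/J[spiking]))
    return a

x = numpy.linspace(-1, 1, 100)
alpha, bias = 1.0, 1.5
pylab.plot(x, lif_rate(alpha*x + bias), label='on: $J = \\alpha x + J^{bias}$')
pylab.plot(x, lif_rate(-alpha*x + bias), label='off: $J = -\\alpha x + J^{bias}$')
pylab.xlabel('$x$')
pylab.ylabel('$a$ (Hz)')
pylab.legend(loc='best')
pylab.show()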
In [42]:
import numpy
import syde556
e = numpy.array([1.0, 1.0])
e = e/numpy.linalg.norm(e)
a = numpy.linspace(-1,1,50)
b = numpy.linspace(-1,1,50)
X, Y = numpy.meshgrid(a, b)
import pylab
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = pylab.figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_surface(X, Y, syde556.lif((X*e[0]+Y*e[1])+1.5), linewidth=0, cstride=1, rstride=1, cmap=pylab.cm.jet)
pylab.show()
In [40]:
import numpy
import pylab
import syde556
theta, x = syde556.sample_around_unit_circle(100)
e = numpy.array([1.0, 0.0])
alpha = 1
bias = 1.5
J = alpha*numpy.dot(x.T, e)+bias
pylab.plot(theta, syde556.lif(J))
pylab.xlabel('angle')
pylab.ylabel('firing rate')
pylab.xlim(-numpy.pi, numpy.pi)
pylab.show()
$ E = {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 dx $
$ E = {1 \over 2} \int_{-1}^1 (x-\sum_i(a_i d_i))^2 dx $
$ {{\partial E} \over {\partial d_i}} = {1 \over 2} \int_{-1}^1 2 [x-\sum_j(a_j d_j)](-a_i) dx $
$ {{\partial E} \over {\partial d_i}} = - \int_{-1}^1 a_i x dx + \int_{-1}^1 \sum_j(a_j d_j a_i) dx $
Setting this derivative to zero (the error is minimized where the derivative vanishes) and rearranging:
$ \int_{-1}^1 a_i x dx = \int_{-1}^1 \sum_j(a_j d_j a_i) dx $
$ \int_{-1}^1 a_i x dx = \sum_j (\int_{-1}^1 a_i a_j dx)d_j $
$ \Upsilon = \Gamma d $
where
$ \Upsilon_i = {1 \over 2} \int_{-1}^1 a_i x dx$
$ \Gamma_{ij} = {1 \over 2} \int_{-1}^1 a_i a_j dx $
On a computer, the integrals are approximated by sums over $S$ sample points:
$ \sum_x a_i x / S = \sum_j (\sum_x a_i a_j / S) d_j $
$ \Upsilon = \Gamma d $
where
$ \Upsilon_i = \sum_x a_i x / S$
$ \Gamma_{ij} = \sum_x a_i a_j / S $
So given
$ \Upsilon = \Gamma d $
then
$ d = \Gamma^{-1} \Upsilon $
or, equivalently
$ d_i = \sum_j \Gamma^{-1}_{ij} \Upsilon_j $
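In numpy this solve is only a few lines; here is a minimal sketch (compute_decoders is an illustrative stand-in for the course's syde556.decoders):
In [ ]:
import numpy

def compute_decoders(A, x):
    # A: S x N matrix of activities sampled at S values of x
    S = A.shape[0]
    Gamma = numpy.dot(A.T, A) / S      # Gamma_ij = sum_x a_i a_j / S
    Upsilon = numpy.dot(A.T, x) / S    # Upsilon_i = sum_x a_i x / S
    # d = Gamma^{-1} Upsilon; pinv is safer than inv since Gamma
    # is often ill-conditioned
    return numpy.dot(numpy.linalg.pinv(Gamma), Upsilon)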
In [7]:
import numpy
import pylab
import syde556
N = 30
x = numpy.linspace(-1, 1, 100)
encoders = numpy.random.choice([-1, 1], N)
alpha = numpy.random.uniform(0.5,2, N)
bias = numpy.random.uniform(-2, 2, N)
A = syde556.activity(x, encoders, alpha, bias)
d = syde556.decoders(A, x)
xhat = numpy.dot(A, d)
pylab.figure()
pylab.plot(x, A)
pylab.xlabel('x')
pylab.ylabel('firing rate (Hz)')
pylab.figure()
pylab.plot(x, x)
pylab.plot(x, xhat)
pylab.xlabel('$x$')
pylab.ylabel('$\hat{x}$')
pylab.ylim(-1, 1)
pylab.xlim(-1, 1)
pylab.figure()
pylab.plot(x, xhat-x)
pylab.xlabel('$x$')
pylab.ylabel('$\hat{x}-x$')
pylab.xlim(-1, 1)
pylab.show()
print 'RMSE', numpy.sqrt(numpy.average((x-xhat)**2))
In [6]:
import numpy
import pylab
import syde556
N = 30
x = numpy.linspace(-1, 1, 100)
encoders = numpy.random.choice([-1, 1], N)
alpha = numpy.random.uniform(0.5,2, N)
bias = numpy.random.uniform(-2, 2, N)
A = syde556.activity(x, encoders, alpha, bias)
A_noisy = syde556.add_noise(A, 0.2)
d = syde556.decoders(A, x)
xhat = numpy.dot(A_noisy, d)
pylab.figure()
pylab.plot(x, A_noisy)
pylab.xlabel('x')
pylab.ylabel('firing rate (Hz)')
pylab.figure()
pylab.plot(x, x)
pylab.plot(x, xhat)
pylab.ylim(-2, 2)
pylab.xlim(-1, 1)
pylab.show()
print 'RMSE', numpy.sqrt(numpy.average((x-xhat)**2))
To fix this, include the noise while solving for the decoders. Assume each neuron's activity is corrupted by noise $\eta_i$, drawn independently for each neuron from a distribution with zero mean and variance $\sigma^2$:
$ \hat{x} = \sum_i(a_i+\eta_i)d_i $
$ E = {1 \over 2} \int_{-1}^1 [x-\hat{x}]^2 dx $
$ E = {1 \over 2} \int_{-1}^1 [x-\sum_i(a_i+\eta_i)d_i]^2 dx $
$ E = {1 \over 2} \int_{-1}^1 [x-\sum_i(a_i d_i) - \sum_i(\eta_i d_i)]^2 dx $
Taking the expected value over the noise, the cross term vanishes because the noise has zero mean, leaving
$ E = {1 \over 2} \int_{-1}^1 [x-\sum_i(a_i d_i)]^2 dx + \sum_{i,j} d_i d_j \langle \eta_i \eta_j \rangle $
Since the noise is independent across neurons, $\langle \eta_i \eta_j \rangle = 0$ for $i \neq j$, so only the diagonal terms survive:
$ E = {1 \over 2} \int_{-1}^1 [x-\sum_i(a_i d_i)]^2 dx + \sum_i d_i^2 \langle \eta_i^2 \rangle $
$ E = {1 \over 2} \int_{-1}^1 [x-\sum_i(a_i d_i)]^2 dx + \sigma^2 \sum_i d_i^2 $
Minimizing this new error gives the same equations as before, but with $\sigma^2$ added along the diagonal of $\Gamma$:
$ \Gamma_{ij} = \sum_x a_i a_j / S + \sigma^2 \delta_{ij}$
where $\delta_{ij}$ is the Kronecker delta: http://en.wikipedia.org/wiki/Kronecker_delta
To simplify computing this with matrices, it can be written as $\Gamma=A^T A /S + \sigma^2 I$
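A sketch of that regularized solve (decoders_with_noise is an illustrative stand-in for syde556.decoders; here sigma is the noise standard deviation in the same units as the activities, e.g. 0.2 times the maximum firing rate):
In [ ]:
import numpy

def decoders_with_noise(A, x, sigma):
    # A: S x N activity matrix; sigma: noise standard deviation
    S, N = A.shape
    Gamma = numpy.dot(A.T, A) / S + sigma**2 * numpy.eye(N)
    Upsilon = numpy.dot(A.T, x) / S
    return numpy.dot(numpy.linalg.pinv(Gamma), Upsilon)
The added $\sigma^2 I$ also keeps $\Gamma$ well-conditioned, so this solve is more numerically stable than the noise-free one.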
In [1]:
import numpy
import pylab
import syde556
N = 300
x = numpy.linspace(-1, 1, 100)
encoders = numpy.random.choice([-1, 1], N)
alpha = numpy.random.uniform(0.5,2, N)
bias = numpy.random.uniform(-2, 2, N)
A = syde556.activity(x, encoders, alpha, bias)
d = syde556.decoders(A, x, noise=0.2, dx=2.0/len(x))
A_noisy = syde556.add_noise(A, 0.2)
xhat = numpy.dot(A_noisy, d)
pylab.figure()
pylab.plot(x, A_noisy)
pylab.xlabel('x')
pylab.ylabel('firing rate (Hz)')
pylab.figure()
pylab.plot(x, x)
pylab.plot(x, xhat)
pylab.ylim(-2, 2)
pylab.xlim(-1, 1)
pylab.show()
print 'RMSE', numpy.sqrt(numpy.average((x-xhat)**2))
$ E = {1 \over 2} \int_{-1}^1 [x-\sum_i(a_i d_i)]^2 dx + \sigma^2 \sum_i d_i^2 $
$ E_{distortion} = {1 \over 2} \int_{-1}^1 [x-\sum_i(a_i d_i)]^2 dx $
$ E_{noise} = \sigma^2 \sum_i d_i^2 $
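Both terms can be estimated directly from the samples; a minimal sketch (these helpers are illustrative stand-ins for syde556.compute_error_distortion and syde556.compute_error_noise):
In [ ]:
import numpy

def error_distortion(x, xhat, dx):
    # E_dist = (1/2) * integral of (x - xhat)^2, approximated by a sum
    return 0.5 * numpy.sum((x - xhat)**2) * dx

def error_noise(sigma, d):
    # E_noise = sigma^2 * sum_i d_i^2
    return sigma**2 * numpy.sum(d**2)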
In [8]:
import numpy
import pylab
import syde556
noise = 0.2
x = numpy.linspace(-1, 1, 100)
dx = 2.0/100
Ns = numpy.array([8, 16, 32, 64, 128, 256, 512])
E_distortion = []
E_noise = []
for N in Ns:
    encoders = numpy.random.choice([-1, 1], N)
    intercepts = numpy.random.uniform(-0.9, 0.9, N)
    max_rates = numpy.random.uniform(100, 200, N)
    alpha, bias = syde556.find_alpha_and_bias(intercepts, max_rates)
    A = syde556.activity(x, encoders, alpha, bias)
    d = syde556.decoders(A, x, noise=noise, dx=dx)
    A_noisy = syde556.add_noise(A, noise)
    xhat = numpy.dot(A, d)
    xhat_noisy = numpy.dot(A_noisy, d)
    E_d = syde556.compute_error_distortion(x, xhat)
    E_n = syde556.compute_error_noise(noise*numpy.max(A), d)
    E_distortion.append(E_d)
    E_noise.append(E_n)
pylab.figure()
pylab.loglog(Ns, E_distortion, label='$E_{dist}$')
pylab.loglog(Ns, 1.0/Ns, label='$1/N$')
pylab.loglog(Ns, 0.1/(Ns**2), label='$0.1/N^2$')
pylab.xlabel('N')
pylab.ylabel('E distortion')
pylab.legend(loc='best')
pylab.figure()
pylab.loglog(Ns, E_noise, label='$E_{noise}$')
pylab.loglog(Ns, 0.2/Ns, label='$0.2/N$')
pylab.xlabel('N')
pylab.ylabel('E noise')
pylab.legend(loc='best')
pylab.show()
$ E = {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 dx $
So $E$ is actually a squared-error term; the error in the estimate $\hat{x}$ itself scales as $\sqrt{E}$.
Also, once the number of neurons grows beyond 100 or so, the total error is dominated by the noise term ($1/N$).
From http://www.nature.com/nrn/journal/v3/n12/full/nrn986.html
There are also neurons whose response goes the other way. These neurons are directly connected to the muscle controlling the horizontal position of the eye, and since that is the only thing the muscle does, we are fairly confident this is what is being represented.
System Description
Design Specification
Implementation
In [8]:
import numpy
import pylab
import syde556
N = 40
tau_rc = 0.2
x = numpy.linspace(-1, 1, 100)
encoders = numpy.random.choice([1, -1], N)
intercepts = numpy.random.uniform(-0.9, 0.9, N)
max_rates = numpy.random.uniform(250, 300, N)
alpha, bias = syde556.find_alpha_and_bias(intercepts, max_rates, tau_rc=tau_rc)
A = syde556.activity(x, encoders, alpha, bias, tau_rc=tau_rc)
pylab.figure()
pylab.plot(x, A)
pylab.xlabel('x')
pylab.ylabel('firing rate (Hz)')
pylab.show()
In [28]:
noise = 0.2
dx = 2.0/100
d = syde556.decoders(A, x, noise=noise, dx=dx)
A_noisy = syde556.add_noise(A, noise)
xhat = numpy.dot(A_noisy, d)
print 'RMSE with %d neurons is %g' % (N, numpy.sqrt(numpy.average((x-xhat)**2)))
pylab.figure()
pylab.plot(x, x)
pylab.plot(x, xhat)
pylab.ylim(-1, 1)
pylab.xlim(-1, 1)
pylab.xlabel('$x$')
pylab.ylabel('$\hat{x}$')
pylab.show()
System Description
Design Specification
In [44]:
import numpy
import pylab
import syde556
theta, x = syde556.sample_around_unit_circle(100)
e = numpy.array([1.0, 0.0])
J = 0.2*numpy.dot(x.T, e)+1.5
pylab.plot(theta*180/numpy.pi, syde556.lif(J))
pylab.xlabel('angle')
pylab.ylabel('firing rate')
pylab.xlim(-180, 180)
pylab.show()
Does it match empirical data?
Interestingly, Georgopoulos also suggests doing linear decoding:
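Here is a rough sketch of that population-vector style of linear decoding (the numbers and tuning choices are illustrative, not Georgopoulos's actual data or procedure): each neuron votes for its preferred direction, weighted by its activity.
In [ ]:
import numpy

N = 50
# random preferred-direction (encoder) vectors on the unit circle
theta_e = numpy.random.uniform(-numpy.pi, numpy.pi, N)
e = numpy.array([numpy.cos(theta_e), numpy.sin(theta_e)])   # shape (2, N)

x = numpy.array([0.8, 0.3])             # the 2D value being represented
a = numpy.maximum(numpy.dot(x, e), 0)   # rectified cosine-like tuning
xhat = numpy.dot(e, a)                  # population vector: sum_i a_i e_i
print('true direction:    %.3f' % numpy.arctan2(x[1], x[0]))
print('decoded direction: %.3f' % numpy.arctan2(xhat[1], xhat[0]))
This recovers the direction of $x$ but not its magnitude; solving for decoders by least squares, as above, handles both.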
You can use any programming language you like, but it is recommended that you use a language with a matrix library and graphing capabilities. Two main suggestions are Python and MATLAB.