Pretty much all of them use some representation framework like this:
- after(eight, nine)
- chase(dogs, cats)
- knows(Anne, thinks(Bill, likes(Charlie, Dave)))

Cognitive models manipulate these sorts of representations.
Problems
Implementing Symbol Systems in Neurons
Problems
Vector operators
Based on vectors and functions on those vectors
Example
Why circular convolution?
Examples:

- BLUE $\circledast$ SQUARE + RED $\circledast$ CIRCLE
- DOG $\circledast$ AGENT + CAT $\circledast$ THEME + VERB $\circledast$ CHASE
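These bound structures are easy to play with numerically. Below is a minimal NumPy sketch (not the lecture's code): the `cconv` helper, the random vocabulary vectors, and the dimensionality are illustrative assumptions.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution of two vectors, computed with FFTs in O(n log n)."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def random_pointer(rng, d):
    """A random unit vector standing in for a symbol such as BLUE or SQUARE."""
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(seed=0)
d = 512
BLUE, SQUARE, RED, CIRCLE = (random_pointer(rng, d) for _ in range(4))

# BLUE (*) SQUARE + RED (*) CIRCLE: binding via convolution, grouping via addition
scene = cconv(BLUE, SQUARE) + cconv(RED, CIRCLE)

# A bound pair is a new vector, nearly orthogonal to its constituents
print(np.dot(cconv(BLUE, SQUARE), BLUE))
```

The result of the binding stays in the same d-dimensional space as its parts, which is what lets arbitrarily deep structures fit in a fixed-size representation.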
Lots of nice properties:

- after(eight, nine) becomes NUMBER $\circledast$ EIGHT + NEXT $\circledast$ NINE
- knows(Anne, thinks(Bill, likes(Charlie, Dave))) becomes SUBJ $\circledast$ ANNE + ACT $\circledast$ KNOWS + OBJ $\circledast$ (SUBJ $\circledast$ BILL + ACT $\circledast$ THINKS + OBJ $\circledast$ (SUBJ $\circledast$ CHARLIE + ACT $\circledast$ LIKES + OBJ $\circledast$ DAVE))
If RED is similar to PINK, then RED $\circledast$ CIRCLE is similar to PINK $\circledast$ CIRCLE.
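This similarity preservation can be checked numerically. A small NumPy sketch (illustrative only; the `cconv` helper, the way PINK is constructed, the seed, and the dimensionality are all assumptions):

```python
import numpy as np

def cconv(a, b):
    # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(seed=1)
d = 512
RED = rng.normal(size=d)
RED /= np.linalg.norm(RED)
CIRCLE = rng.normal(size=d)
CIRCLE /= np.linalg.norm(CIRCLE)

# Build PINK as a deliberately RED-like vector: RED plus a little noise
PINK = RED + 0.4 * rng.normal(size=d) / np.sqrt(d)
PINK /= np.linalg.norm(PINK)

sim_raw = cosine(RED, PINK)
sim_bound = cosine(cconv(RED, CIRCLE), cconv(PINK, CIRCLE))
```

Binding both colours to the same shape leaves them about as similar as they were before binding, so graded similarity between symbols survives structure-building.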
But rather complicated. Suppose we have

RED $\circledast$ CIRCLE + BLUE $\circledast$ TRIANGLE

and want to recover the colour of the circle. We convolve with CIRCLE', where CIRCLE' is the "inverse" of CIRCLE:

(RED $\circledast$ CIRCLE + BLUE $\circledast$ TRIANGLE) $\circledast$ CIRCLE'

= RED $\circledast$ CIRCLE $\circledast$ CIRCLE' + BLUE $\circledast$ TRIANGLE $\circledast$ CIRCLE'

$\approx$ RED + noise

$\approx$ RED
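The unbinding step above can be sketched in NumPy. This is illustrative, not the lecture's code; the `inverse` helper implements the usual involution (keep element 0, reverse the rest), and the vocabulary, seed, and dimensionality are assumptions:

```python
import numpy as np

def cconv(a, b):
    # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def inverse(a):
    """Approximate inverse under circular convolution:
    keep element 0 and reverse the remaining elements."""
    return np.concatenate(([a[0]], a[:0:-1]))

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(seed=2)
d = 512

def ptr():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

RED, CIRCLE, BLUE, TRIANGLE = ptr(), ptr(), ptr(), ptr()

scene = cconv(RED, CIRCLE) + cconv(BLUE, TRIANGLE)
answer = cconv(scene, inverse(CIRCLE))  # approximately RED + noise

sim_red = cosine(answer, RED)    # large: the noisy answer points toward RED
sim_blue = cosine(answer, BLUE)  # near zero
```

The noisy `answer` is then typically passed through a clean-up memory that maps it to the nearest known vocabulary vector.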
OBJ1 $\circledast$ (TYPE $\circledast$ STAR + SIZE $\circledast$ LITTLE) + OBJ2 $\circledast$ (TYPE $\circledast$ STAR + SIZE $\circledast$ BIG) + BESIDE $\circledast$ OBJ1 $\circledast$ OBJ2

Note that BESIDE $\circledast$ OBJ1 $\circledast$ OBJ2 = BESIDE $\circledast$ OBJ2 $\circledast$ OBJ1, since circular convolution is commutative: this encoding cannot distinguish which object is beside which.
S = RED $\circledast$ NOUN

VAR = BALL $\circledast$ NOUN'

S $\circledast$ VAR = RED $\circledast$ BALL
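This variable-substitution trick can also be sketched in NumPy (illustrative only; `cconv`, `inverse`, the vocabulary, and the seed are assumptions, and the result is approximate rather than exact):

```python
import numpy as np

def cconv(a, b):
    # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def inverse(a):
    # approximate convolution inverse: keep element 0, reverse the rest
    return np.concatenate(([a[0]], a[:0:-1]))

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(seed=5)
d = 512

def ptr():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

RED, NOUN, BALL, BLUE = ptr(), ptr(), ptr(), ptr()

S = cconv(RED, NOUN)               # S = RED (*) NOUN
VAR = cconv(BALL, inverse(NOUN))   # VAR = BALL (*) NOUN'
result = cconv(S, VAR)             # approximately RED (*) BALL

sim_target = cosine(result, cconv(RED, BALL))
sim_other = cosine(result, cconv(BLUE, BALL))
```

Convolving S with VAR cancels the NOUN role and substitutes BALL in its place, which is a rudimentary form of variable binding.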
[Figure: a sample Raven's Progressive Matrices-style item. This is not an actual question on the test.]

How can we model people doing this task?
A fair number of different attempts
Does this vector approach offer an alternative?
First we need to represent the different patterns as vectors. How do we represent a picture? For example, a picture of a single upward-pointing arrow could be:

SHAPE $\circledast$ ARROW + NUMBER $\circledast$ ONE + DIRECTION $\circledast$ UP
We have shown that it's possible to build these sorts of representations up directly from visual stimuli
In [2]:
from IPython.display import YouTubeVideo
YouTubeVideo('U_Q6Xjz9QHg', width=720, height=400, loop=1, autoplay=0, playlist='U_Q6Xjz9QHg')
Out[2]:
The memory of the list is built up by using a basal ganglia action selection system to control feeding values into an integrator
The same system can be used to do a version of the Raven's Matrices task
In [4]:
from IPython.display import YouTubeVideo
YouTubeVideo('Q_LRvnwnYp8', width=720, height=400, loop=1, autoplay=0, playlist='Q_LRvnwnYp8')
Out[4]:
S1 = ONE $\circledast$ P1

S2 = ONE $\circledast$ P1 + ONE $\circledast$ P2

S3 = ONE $\circledast$ P1 + ONE $\circledast$ P2 + ONE $\circledast$ P3

S4 = FOUR $\circledast$ P1

S5 = FOUR $\circledast$ P1 + FOUR $\circledast$ P2

S6 = FOUR $\circledast$ P1 + FOUR $\circledast$ P2 + FOUR $\circledast$ P3

S7 = FIVE $\circledast$ P1

S8 = FIVE $\circledast$ P1 + FIVE $\circledast$ P2

What is S9?
T1 = S2 $\circledast$ S1'

T2 = S3 $\circledast$ S2'

T3 = S5 $\circledast$ S4'

T4 = S6 $\circledast$ S5'

T5 = S8 $\circledast$ S7'

T = (T1 + T2 + T3 + T4 + T5)/5

S9 = S8 $\circledast$ T

S9 $\approx$ FIVE $\circledast$ P1 + FIVE $\circledast$ P2 + FIVE $\circledast$ P3
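The induction above can be sketched numerically. This is a minimal NumPy version, not the spiking-neuron model from the video: `cconv`, the `inverse` involution, the seed, and the dimensionality are illustrative assumptions.

```python
import numpy as np

def cconv(a, b):
    # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def inverse(a):
    # approximate convolution inverse: keep element 0, reverse the rest
    return np.concatenate(([a[0]], a[:0:-1]))

rng = np.random.default_rng(seed=4)
d = 1024

def ptr():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

ONE, FOUR, FIVE, P1, P2, P3 = (ptr() for _ in range(6))

S1 = cconv(ONE, P1)
S2 = S1 + cconv(ONE, P2)
S3 = S2 + cconv(ONE, P3)
S4 = cconv(FOUR, P1)
S5 = S4 + cconv(FOUR, P2)
S6 = S5 + cconv(FOUR, P3)
S7 = cconv(FIVE, P1)
S8 = S7 + cconv(FIVE, P2)

# Average the cell-to-cell transformations observed in the known rows
T = (cconv(S2, inverse(S1)) + cconv(S3, inverse(S2)) +
     cconv(S5, inverse(S4)) + cconv(S6, inverse(S5)) +
     cconv(S8, inverse(S7))) / 5

# Apply the averaged transformation to the last known cell
S9 = cconv(S8, T)

# S9 should contain a FIVE (*) P3 component that was never presented
sim_five_p3 = np.dot(S9, cconv(FIVE, P3))
sim_four_p3 = np.dot(S9, cconv(FOUR, P3))
```

The noisy S9 would then be compared against the candidate answer patterns, picking the most similar one.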
This becomes a novel way of manipulating structured information
In [1]:
%pylab inline
import nengo
from nengo import spa
In [3]:
# Number of dimensions for the Semantic Pointers
dimensions = 32
model = spa.SPA(label="Simple question answering")
with model:
    model.color_in = spa.Buffer(dimensions=dimensions)
    model.shape_in = spa.Buffer(dimensions=dimensions)
    model.conv = spa.Buffer(dimensions=dimensions)
    model.cue = spa.Buffer(dimensions=dimensions)
    model.out = spa.Buffer(dimensions=dimensions)

    # Connect the buffers
    cortical_actions = spa.Actions(
        'conv = color_in * shape_in',
        'out = conv * ~cue'
    )
    model.cortical = spa.Cortical(cortical_actions)
The colour input will switch every 0.5 seconds between RED and BLUE. In the same way, the shape input switches between CIRCLE and SQUARE. Thus, the network will alternately bind RED * CIRCLE and BLUE * SQUARE for 0.5 seconds each.

The cue for deconvolving the bound semantic pointers cycles through CIRCLE, RED, SQUARE, and BLUE within one second.
In [4]:
def color_input(t):
    if (t // 0.5) % 2 == 0:
        return 'RED'
    else:
        return 'BLUE'

def shape_input(t):
    if (t // 0.5) % 2 == 0:
        return 'CIRCLE'
    else:
        return 'SQUARE'

def cue_input(t):
    sequence = ['0', 'CIRCLE', 'RED', '0', 'SQUARE', 'BLUE']
    idx = int((t // (1. / len(sequence))) % len(sequence))
    return sequence[idx]

with model:
    model.inp = spa.Input(color_in=color_input, shape_in=shape_input, cue=cue_input)
In [5]:
with model:
    model.config[nengo.Probe].synapse = nengo.Lowpass(0.03)
    color_in = nengo.Probe(model.color_in.state.output)
    shape_in = nengo.Probe(model.shape_in.state.output)
    cue = nengo.Probe(model.cue.state.output)
    conv = nengo.Probe(model.conv.state.output)
    out = nengo.Probe(model.out.state.output)

sim = nengo.Simulator(model)
sim.run(3.)
In [6]:
plt.figure(figsize=(10, 10))
vocab = model.get_default_vocab(dimensions)
plt.subplot(5, 1, 1)
plt.plot(sim.trange(), model.similarity(sim.data, color_in))
plt.legend(model.get_output_vocab('color_in').keys, fontsize='x-small')
plt.ylabel("color")
plt.subplot(5, 1, 2)
plt.plot(sim.trange(), model.similarity(sim.data, shape_in))
plt.legend(model.get_output_vocab('shape_in').keys, fontsize='x-small')
plt.ylabel("shape")
plt.subplot(5, 1, 3)
plt.plot(sim.trange(), model.similarity(sim.data, cue))
plt.legend(model.get_output_vocab('cue').keys, fontsize='x-small')
plt.ylabel("cue")
plt.subplot(5, 1, 4)
for pointer in ['RED * CIRCLE', 'BLUE * SQUARE']:
    plt.plot(sim.trange(), vocab.parse(pointer).dot(sim.data[conv].T), label=pointer)
plt.legend(fontsize='x-small')
plt.ylabel("convolved")
plt.subplot(5, 1, 5)
plt.plot(sim.trange(), spa.similarity(sim.data[out], vocab))
plt.legend(model.get_output_vocab('out').keys, fontsize='x-small')
plt.ylabel("output")
plt.xlabel("time [s]");
In [7]:
sum(ens.n_neurons for ens in model.all_ensembles)  # Total number of neurons
Out[7]: