In Nengo GUI, if you click on the "open" icon in the top-left and select "built-in examples" and then "tutorial", you will find a list of tutorial models that cover a wide variety of Nengo functionality. For this assignment, first do tutorials 19, 23, and 24 (`spa`, `spa-binding`, and `spa-unbinding`).
Build a model that has two groups of neurons for holding "semantic pointers". The first is a simple input (`spa.State(D)`, where `D` is the number of dimensions) and the second is a working memory (`spa.State(D, feedback=1)`). Connect the output of the first one to the input of the second one, and use `transform=0.1` on this Connection. (Note that this is identical to the `19-spa.py` tutorial model.)
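For reference, a minimal script version of this model might look like the sketch below (assuming the legacy `nengo.spa` module that the tutorials use; the module names `vision` and `memory` are just illustrative labels):

```python
import nengo
import nengo.spa as spa

D = 16  # dimensionality of the semantic pointers

model = spa.SPA()
with model:
    # simple input population
    model.vision = spa.State(D)
    # working memory: feedback=1 recurrently feeds the state back to itself,
    # so it holds on to whatever has been loaded into it
    model.memory = spa.State(D, feedback=1)
    # the small transform means the input is loaded into memory gradually
    nengo.Connection(model.vision.output, model.memory.input, transform=0.1)
```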
Feed the semantic pointer `DOG` into the first group. What happens to the second group? How quickly does it happen?
After feeding the semantic pointer `DOG` into the first group for a while, feed the semantic pointer `CAT` into the first group. What happens to the second group? How quickly does it happen?
What could you change about the system to control how quickly it responds to the input? Try making it as fast as possible.
What are the negative side effects of making it faster? In particular, what happens to information that was previously stored in the working memory when you try to quickly load in something new?
How accurate a memory system is depends on how many terms are loaded into it, the total size of the vocabulary (i.e. the set of possible pointers it knows about), and the number of dimensions used.
Currently, we have to manually give the model the set of words it knows by typing them into the graphs (i.e. the way we've been feeding `DOG` and `CAT` into the system). We can instead define these terms in the code for the model itself, which will simplify things if we want to work with hundreds or thousands of vocabulary items.
Here is how we can define a vocabulary of 10 words. These words will be `W0`, `W1`, `W2`, ... `W9`:
```python
import nengo.spa as spa

D = 32  # the dimensionality of the vectors
vocab = spa.Vocabulary(D)

M = 10  # the number of words to create
for i in range(M):
    vocab.parse('W{}'.format(i))
```
This makes a new `Vocabulary` object and fills it with 10 words.
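If you want to sanity-check what ended up in the vocabulary, the legacy `Vocabulary` object exposes the names and vectors it contains (a quick sketch):

```python
print(vocab.keys)           # ['W0', 'W1', ..., 'W9']
print(vocab.vectors.shape)  # (10, 32): one D-dimensional vector per word
```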
Now we have to tell the `spa.State` objects to use this vocabulary:
```python
model.vision = spa.State(D, vocab=vocab)
model.memory = spa.State(D, vocab=vocab, feedback=1)
```
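Putting the pieces together, the complete model might look like this (a sketch; only the `vocab=vocab` arguments are new relative to the earlier model):

```python
import nengo
import nengo.spa as spa

D = 32  # the dimensionality of the vectors
M = 10  # the number of words to create

vocab = spa.Vocabulary(D)
for i in range(M):
    vocab.parse('W{}'.format(i))

model = spa.SPA()
with model:
    model.vision = spa.State(D, vocab=vocab)
    model.memory = spa.State(D, vocab=vocab, feedback=1)
    nengo.Connection(model.vision.output, model.memory.input, transform=0.1)
```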
Run the model and feed `W0` into the input. Let that run until `W0` is stable in the memory. Now feed in `W1`. Let that run until `W0` and `W1` are both stable in memory. Now feed in nothing (set the input value to nothing at all). Let the model run. How stable is the memory? What happens to the other possible words in memory?
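If you prefer to run this experiment as a script rather than in the GUI, one possible sketch (continuing from the model above) is to drive the input with a `spa.Input`, probe the memory, and plot how similar its contents are to every word in the vocabulary. The timings, the probe synapse, and the plotting details below are arbitrary choices, not part of the assignment:

```python
import matplotlib.pyplot as plt
import nengo
import nengo.spa as spa

def stim(t):
    # W0 for half a second, then W1, then nothing ('0' is the zero vector)
    if t < 0.5:
        return 'W0'
    elif t < 1.0:
        return 'W1'
    else:
        return '0'

with model:
    model.stim = spa.Input(vision=stim)
    probe = nengo.Probe(model.memory.output, synapse=0.03)

with nengo.Simulator(model) as sim:
    sim.run(2.0)

# similarity of the decoded memory state to each vocabulary vector over time
plt.plot(sim.trange(), spa.similarity(sim.data[probe], vocab))
plt.legend(vocab.keys, loc='upper left')
plt.xlabel('time (s)')
plt.ylabel('similarity')
plt.show()
```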
Now try it with a larger number of vocabulary words (`M=100`). Try doing the same thing as before. What happens now?
To improve performance, try increasing the number of dimensions. Try `D=64`, `D=128`, and `D=256`. What happens now?
Why shouldn't we just use `D=256` (or more) all the time?