Connecting the Retina and the Lamina LPUs through Neurokernel API

We demonstrate in this notebook how to plug individual LPUs into a Neurokernel simulation and connect them through the Neurokernel API. The two LPUs that we use here are the models of the retina and the lamina, the first two LPUs in the visual system of the fruit fly. According to the Neurokernel API, a successful integration of these LPUs requires each of them to expose named ports. We first briefly summarize the ports exposed by the two LPUs, and then detail the connection between them. A schematic diagram of the two connected LPUs is shown in Fig. 1. This notebook is meant to be narrative rather than directly executable, due to MPI limitations. For an executable example that integrates the retina and lamina models, see examples/retlam_demo/retlam_demo.py in the neurokernel/retina-lamina repository.

Figure 1. Schematic diagram of integrated simulation of the retina and the lamina LPUs under Neurokernel. The retina and the lamina both expose their ports to Neurokernel using the Neurokernel API, and Neurokernel handles the communication between the two LPUs.

Configuration

Before the simulation, we need to provide a configuration of various parameters for both LPUs. The configuration is assumed to take the form of a dictionary and combines the parameters of the retina and the lamina modules. For easier manipulation, this configuration can also be read from a configuration file; details of each parameter can be found in the configuration template in examples/template_spec.cfg.


In [ ]:
config = {}
config['General'] = {}
config['General']['dt'] = 1e-4
config['General']['steps'] = 1000
    
config['Retina'] = {}
config['Retina']['model'] = 'vision_model_template'
config['Retina']['acceptance_factor'] = 1
config['Retina']['screentype'] = 'Sphere'
config['Retina']['filtermethod'] = 'gpu'
config['Retina']['intype'] = 'Bar'
config['Retina']['time_rep'] = 1
config['Retina']['space_rep'] = 1

config['Lamina'] = {}
config['Lamina']['model'] = 'vision_model_template'
config['Lamina']['relative_am'] = 'half'

config['Screen'] = {}
config['Screen']['SphereScreen'] = {}
config['Screen']['SphereScreen']['parallels'] = 50
config['Screen']['SphereScreen']['meridians'] = 100
config['Screen']['SphereScreen']['radius'] = 10
config['Screen']['SphereScreen']['half'] = False
config['Screen']['SphereScreen']['image_map'] = 'AlbersProjectionMap'

config['InputType'] = {}
config['InputType']['shape'] = [100, 100]
config['InputType']['infilename'] = ''
config['InputType']['writefile'] = False

config['InputType']['Bar'] = {}
config['InputType']['Bar']['bar_width'] = 10
config['InputType']['Bar']['direction'] = 'v'
config['InputType']['Bar']['levels'] = [3e3, 3e4]
config['InputType']['Bar']['speed'] = 1000
config['InputType']['Bar']['double'] = False
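
For larger experiments it is convenient to keep these parameters in a file rather than building the dictionary inline. The snippet below is a minimal sketch, not the packages' own loader: it assumes a hypothetical file retlam.cfg whose sections mirror the dictionary above, and uses ConfigObj, which supports nested sections and can validate values against a specification such as examples/template_spec.cfg.


In [ ]:
# Minimal sketch (an assumption, not the retina/lamina packages' own loader):
# read the parameters from a hypothetical INI-style file 'retlam.cfg' with
# ConfigObj, which supports nested sections (e.g. [Screen] / [[SphereScreen]])
# and can validate the values against examples/template_spec.cfg.
from configobj import ConfigObj
from validate import Validator

config = ConfigObj('retlam.cfg', configspec='examples/template_spec.cfg')
config.validate(Validator())  # convert entries to int/float/bool per the spec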

Creation of Neurokernel Manager

We first create the Neurokernel manager.


In [ ]:
import neurokernel.core_gpu as core
import neurokernel.mpi_relaunch

manager = core.Manager()

The Retina and its Interface

We construct the retina LPU from the retina module and add it to the Neurokernel simulation. We closely follow the retina example, in which the retina LPU is executed in isolation.


In [ ]:
import networkx as nx
import retina.geometry.hexagon as ret_hx
import retina.retina as ret
from retina.LPU import LPU as rLPU

from retina.InputProcessors.RetinaInputProcessor import RetinaInputProcessor
from neurokernel.LPU.OutputProcessors.FileOutputProcessor import FileOutputProcessor
from retina.NDComponents.MembraneModels.PhotoreceptorModel import PhotoreceptorModel
from retina.NDComponents.MembraneModels.BufferPhoton import BufferPhoton

# create a hexagonal array for the retina
# This describes the array of positions of neurons, their arrangement in space and a way to query for neighbors 
ret_hexagon = ret_hx.HexagonArray(num_rings=14, radius=1)
# create a retina object that contains neuron and synapse information
retina_array = ret.RetinaArray(ret_hexagon, config)

# parameters from the configuration dictionary
dt = config['General']['dt']

# parameters that the executable example reads from the configuration;
# fixed here for simplicity of the narrative
retina_index = 0   # GPU device on which the retina LPU runs
debug = False
time_sync = False

output_file = 'retina_output.h5'
gexf_file = 'retina.gexf.gz'

# the input processor provides the photon input to the photoreceptors,
# the output processor records the membrane voltage 'V' to an HDF5 file
input_processor = RetinaInputProcessor(config, retina_array)
output_processor = FileOutputProcessor([('V', None)], output_file, sample_interval=1)

G = retina_array.get_worker_nomaster_graph()
# export the configuration of neurons and synapses to a GEXF file
nx.write_gexf(G, gexf_file)
# parse GEXF file
(comp_dict, conns) = rLPU.graph_to_dicts(G)
retina_id = 'retina0'

extra_comps = [PhotoreceptorModel, BufferPhoton]

# add the retina LPU to Neurokernel manager
manager.add(rLPU, retina_id, dt, comp_dict, conns,
            device=retina_index, input_processors=[input_processor],
            output_processors=[output_processor],
            debug=debug, time_sync=time_sync, extra_comps=extra_comps)

The retina model (Neurokernel RFC #3) exposes its outputs, the photoreceptors R1-R6, to Neurokernel. The naming convention of the ports is as follows: a port associated with a photoreceptor is named /ret/<omm_id>/<photor_name>, where omm_id is the unique numeric identifier of the ommatidium in which the photoreceptor resides, and photor_name is the name of the photoreceptor.
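
For instance, the selectors exposed by the retina object can be inspected directly. The sketch below simply prints a few of them; the ommatidium ids shown in the comment are illustrative, but the photoreceptor output ports follow the /ret/<omm_id>/<photor_name> convention described above (the aggregator ports used for feedback carry an additional suffix, as seen in the connection code further below).


In [ ]:
# Sketch: inspect a few of the selectors exposed by the retina object.
# Photoreceptor output ports look like '/ret/0/R1', '/ret/0/R2', ...
# (the ids shown here are illustrative).
sels = list(retina_array.get_all_selectors())
print(sels[:6])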

The Lamina and its Interface

We construct the lamina LPU from the lamina module and add it to the Neurokernel simulation. We closely follow the lamina example, in which the lamina LPU is executed in isolation.


In [ ]:
import lamina.geometry.hexagon as lam_hx
import lamina.lamina as lam
from lamina.LPU import LPU as lLPU

from retina.NDComponents.MembraneModels.BufferVoltage import BufferVoltage

# create a hexagonal array for the lamina
lam_hexagon = lam_hx.HexagonArray(num_rings=14, radius=1)
# create a lamina object that contains neuron and synapse information
lamina_array = lam.LaminaArray(lam_hexagon, config)

# parameters from the configuration dictionary
dt = config['General']['dt']

# index used to choose the GPU device of the lamina LPU (device lamina_index+1 below)
lamina_index = 0

output_file = 'lamina_output.h5'
gexf_file = 'lamina.gexf.gz'
# record the membrane voltage 'V' of the lamina neurons to an HDF5 file
output_processor = FileOutputProcessor([('V', None)], output_file, sample_interval=1)
G = lamina_array.get_graph()
# export the configuration of neurons and synapses to a GEXF file
nx.write_gexf(G, gexf_file)
# parse GEXF file
(comp_dict, conns) = lLPU.graph_to_dicts(G)
lamina_id = 'lamina0'
extra_comps = [BufferVoltage]

# add the lamina LPU to Neurokernel manager
manager.add(lLPU, lamina_id, dt, comp_dict, conns,
            output_processors=[output_processor],
            device=lamina_index+1, debug=debug, time_sync=time_sync,
            extra_comps=extra_comps)

Photoreceptors R1-R6 constitute the inputs to the lamina LPU from the retina LPU. The input ports of the lamina LPU follow the naming convention: a photoreceptor input port is named /lam/<cart_id>/<photor_name>, where cart_id is the unique numeric identifier of the cartridge to which the input port belongs, and photor_name is the name of the photoreceptor. Note that retinotopy in the early visual system of the fruit fly is imposed by the hexagonal array of ommatidia in the retina and that of cartridges in the lamina. The two arrays are assumed to be compatible with each other, i.e., the ommatidia are in one-to-one correspondence with the cartridges.
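
The selector of a particular input port can be obtained from the lamina object, as in the short sketch below; the cartridge id used here is illustrative.


In [ ]:
# Sketch: selector of the R1 input port of cartridge 0 (the id is illustrative);
# the result follows the '/lam/<cart_id>/<photor_name>' convention.
print(lamina_array.get_selector(0, 'R1'))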

Connection between the Retina and the Lamina

The connections between the retina and the lamina follow the neural superposition rule of the fly's compound eye (see Neurokernel RFC #2). The superposition rule is illustrated in Fig. 2.

Figure 2. Neural superposition rule. The solid circles represent ommatidia in the retina, and the dashed circles represent cartridges in the lamina. Individual photoreceptors R1-R6 are numbered and their relative positions are highlighted in some of the ommatidia. On the left, cartridge A receives 6 photoreceptor inputs, each from a different ommatidium. On the right, the 6 photoreceptors of a single ommatidium each project to a different cartridge.
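
To make the rule concrete, the sketch below queries the superposition rule map exposed by the retina object, using the same neighbor_for_photor call that appears in the connection code further below; the ommatidium id is illustrative. For each photoreceptor of one ommatidium, the call returns the id of the cartridge that receives that photoreceptor's output.


In [ ]:
# Sketch (illustrative ommatidium id): to which cartridge does each of the
# six photoreceptors of ommatidium 10 project under neural superposition?
rulemap = retina_array.rulemap
for photor in ['R1', 'R2', 'R3', 'R4', 'R5', 'R6']:
    print(photor, '-> cartridge', rulemap.neighbor_for_photor(10, photor))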

Both LPUs added to the Neurokernel manager now properly expose their I/O ports. The connection between the two LPUs can then be configured through the Pattern class provided by the Neurokernel API.


In [ ]:
from neurokernel.pattern import Pattern

# retina_id ('retina0') and lamina_id ('lamina0') were already defined above

retina_selectors = retina_array.get_all_selectors()
lamina_selectors = []

# obtain two lists of selectors,
# each corresponding entry denotes
# the selector of the outgoing port
# and the selector of the incoming port
from_list = []
to_list = []

rulemap = retina_array.rulemap
for ret_sel in retina_selectors:
    if not ret_sel.endswith('agg'):
        # format should be '/ret/<omm_id>/<photor_name>'
        _, lpu, ommid, n_name = ret_sel.split('/')

        # find neighbor of neural superposition
        neighborid = rulemap.neighbor_for_photor(int(ommid), n_name)

        # format should be '/lam/<cart_id>/<photor_name>'
        lam_sel = lamina_array.get_selector(neighborid, n_name)

        # concatenate the selector to from and to lists
        from_list.append(ret_sel)
        to_list.append(lam_sel)
        
        # append aggregators
        from_list.append(lam_sel+'_agg')
        to_list.append(ret_sel+'_agg')
        lamina_selectors.append(lam_sel)
        lamina_selectors.append(lam_sel+'_agg')

# create pattern from the two lists using from_concat method
# This method is faster than creating pattern using __setitem__
pattern = Pattern.from_concat(','.join(retina_selectors),
                              ','.join(lamina_selectors),
                              from_sel=','.join(from_list),
                              to_sel=','.join(to_list),
                              gpot_sel=','.join(from_list+to_list))
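
As a quick sanity check, each feed-forward (retina to lamina) connection should be matched by exactly one aggregator (lamina to retina) connection:


In [ ]:
# each photoreceptor output port should be paired with one aggregator port
n_ff = sum(1 for s in from_list if not s.endswith('_agg'))
n_fb = sum(1 for s in from_list if s.endswith('_agg'))
assert n_ff == n_fb
print(n_ff, 'feed-forward and', n_fb, 'feedback connections')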

Finally, we connect the retina and the lamina LPUs using the pattern.


In [ ]:
manager.connect(retina_id, lamina_id, pattern)

Simulation of the connected retina and lamina LPUs can then be started.


In [ ]:
steps = config['General']['steps']
manager.spawn()
manager.start(steps=steps)
manager.wait()
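
After the run completes, the HDF5 files written by the two FileOutputProcessor instances can be inspected, e.g. with h5py. The exact dataset layout is determined by the output processor, so the sketch below simply lists the contents of each file.


In [ ]:
# Sketch: list the contents of the output files written above.
import h5py

for fname in ['retina_output.h5', 'lamina_output.h5']:
    with h5py.File(fname, 'r') as f:
        print(fname)
        f.visit(print)  # print the name of every group/dataset in the file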