Introduction to Neurokernel's API

This notebook illustrates how to define and connect local processing unit (LPU) models using Neurokernel.

Background

An LPU comprises two distinct populations of neurons (Chiang et al., 2011): local neurons may only project to other neurons in the LPU, while projection neurons may project both to local neurons and to neurons in other LPUs. All synapses between neurons within an LPU are contained in its internal connectivity pattern. LPUs are linked by inter-LPU connectivity patterns that map one LPU's outputs to inputs in other LPUs. The general structure of an LPU is shown below:

[Figure: general structure of an LPU]

Defining an LPU Interface

Interface Ports

All communication between LPUs must pass through ports, each of which is internally associated with a modeling element that emits or receives external data. An LPU's interface is defined as the set of ports it exposes to other LPUs. Each port is defined by a unique identifier string and attributes that indicate whether

  • it transmits spikes (i.e., boolean values) or graded potentials (i.e., floating point numbers) at each step of model execution and whether
  • it accepts input or emits output.

To facilitate management of a large number of ports, Neurokernel requires that port identifiers conform to a hierarchical format similar to that used to label files or elements in structured documents. Each identifier may comprise multiple levels joined by separators (/ and []). Neurokernel also defines an extended format for selecting multiple ports with a single selector; a selector that cannot be expanded to an explicit list of individual port identifiers is said to be ambiguous. Rather than define a formal grammar for this format, the following table depicts examples of how it may be used to refer to multiple ports.

Identifier/Selector        Comments
/med/L1[0]                 selects a single port
/med/L1/0                  equivalent to /med/L1[0]
/med+/L1[0]                equivalent to /med/L1[0]
/med/[L1,L2][0]            selects two ports
/med/L1[0,1]               another example of two ports
/med/L1[0],/med/L1[1]      equivalent to /med/L1[0,1]
/med/L1[0:10]              selects a range of 10 ports
/med/L1/*                  selects all ports starting with /med/L1
(/med/L1,/med/L2)+[0]      equivalent to /med/[L1,L2][0]
/med/[L1,L2].+[0:2]        equivalent to /med/L1[0],/med/L2[1]
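
Selector properties can also be checked programmatically with the PathLikeSelector class used later in this notebook. A brief sketch follows; count_ports appears elsewhere in this notebook, while the is_ambiguous method is an assumption about the same class:

from neurokernel.plsel import PathLikeSelector

# Number of ports comprised by an unambiguous selector:
print(PathLikeSelector.count_ports('/med/L1[0:10]'))    # -> 10

# Assumed method: test whether a selector can be expanded to an
# explicit list of port identifiers:
print(PathLikeSelector.is_ambiguous('/med/L1/*'))       # -> True
print(PathLikeSelector.is_ambiguous('/med/L1[0:10]'))   # -> False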

Inter-LPU Connectivity Patterns

All connections between LPUs must be defined in inter-LPU connectivity patterns that map the output ports of one LPU to the input ports of another LPU. Since individual LPUs may internally implement multiplexing of input signals to a single destination in different ways, the LPU interface only permits fan-out from individual output ports to multiple input ports; connections from multiple output ports may not converge on a single input port. A single pattern may define connections in both directions.

A connectivity pattern between two LPUs is fully specified by the identifiers and attributes of the ports in its two interfaces and the directed graph of connections defined between them. Note that the I/O attribute of each port is specified from the pattern's perspective: a port that receives data from an LPU is marked in, while a port that transmits data to an LPU is marked out. An example of such a pattern defined between ports /lam[0:6] and /med[0:5] follows:

Port       Interface   I/O   Port Type
/lam[0]    0           in    graded potential
/lam[1]    0           in    graded potential
/lam[2]    0           out   graded potential
/lam[3]    0           out   spiking
/lam[4]    0           out   spiking
/lam[5]    0           out   spiking
/med[0]    1           out   graded potential
/med[1]    1           out   graded potential
/med[2]    1           out   graded potential
/med[3]    1           in    spiking
/med[4]    1           in    spiking

From       To
/lam[0]    /med[0]
/lam[0]    /med[1]
/lam[1]    /med[2]
/med[3]    /lam[3]
/med[4]    /lam[4]
/med[4]    /lam[5]
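
The tables above can be expressed directly with the Pattern class demonstrated later in this notebook. The following is a minimal sketch, assuming that range selectors such as /lam[0:2] are accepted in interface assignments in the same way as the comma-separated selectors used below:

from neurokernel.pattern import Pattern

pat = Pattern('/lam[0:6]', '/med[0:5]')

# Port attributes (interface, io, type) from the first table; recall that
# the io attribute is specified from the pattern's perspective:
pat.interface['/lam[0:2]'] = [0, 'in', 'gpot']
pat.interface['/lam[2]'] = [0, 'out', 'gpot']
pat.interface['/lam[3:6]'] = [0, 'out', 'spike']
pat.interface['/med[0:3]'] = [1, 'out', 'gpot']
pat.interface['/med[3:5]'] = [1, 'in', 'spike']

# Connections from the second table:
pat['/lam[0]', '/med[0]'] = 1
pat['/lam[0]', '/med[1]'] = 1
pat['/lam[1]', '/med[2]'] = 1
pat['/med[3]', '/lam[3]'] = 1
pat['/med[4]', '/lam[4]'] = 1
pat['/med[4]', '/lam[5]'] = 1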

Using Neurokernel's API

Neurokernel provides Python classes for defining LPUs and the connectivity patterns that link them together. The LPU class (neurokernel.core.Module) requires an LPU designer to implement all of the LPU's internals from the ground up; no explicit constraints are placed upon how the LPU uses GPU resources. To enable independently implemented LPUs to communicate with each other, each LPU must implement a run_step() method that is invoked during each step of execution; it consumes incoming data from other LPUs and produces data for transmission to other LPUs. The example below generates random output data in its run_step() method:


In [1]:
import warnings
warnings.filterwarnings("ignore")
import numpy as np

from neurokernel.base import PORT_DATA, PORT_CTRL
from neurokernel.core import Module

class MyModule(Module):
    def __init__(self, sel,
                 sel_in_gpot, sel_in_spike,
                 sel_out_gpot, sel_out_spike,
                 data_gpot, data_spike,
                 port_data=PORT_DATA, port_ctrl=PORT_CTRL,
                 id=None, device=None):
        super(MyModule, self).__init__(sel, ','.join([sel_in_gpot, sel_out_gpot]),
                                       ','.join([sel_in_spike, sel_out_spike]),
                                       data_gpot, data_spike,
                                       columns=['interface', 'io', 'type'],
                                       port_data=port_data, port_ctrl=port_ctrl,
                                       id=id, device=device)

        # Set up module interface using specified selectors:
        self.interface[sel_in_gpot, 'io', 'type'] = ['in', 'gpot']
        self.interface[sel_out_gpot, 'io', 'type'] = ['out', 'gpot']
        self.interface[sel_in_spike, 'io', 'type'] = ['in', 'spike']
        self.interface[sel_out_spike, 'io', 'type'] = ['out', 'spike']

    # Process incoming data and set outgoing data:
    def run_step(self):       
        super(MyModule, self).run_step()

        # Display input graded potential data:
        in_gpot_ports = self.interface.in_ports().gpot_ports().to_tuples()
        self.logger.info('input gpot port data: '+str(self.pm['gpot'][in_gpot_ports]))
        
        # Display input spike data:
        in_spike_ports = self.interface.in_ports().spike_ports().to_tuples()
        self.logger.info('input spike port data: '+str(self.pm['spike'][in_spike_ports]))

        # Output random graded potential data:
        out_gpot_ports = self.interface.out_ports().gpot_ports().to_tuples()
        self.pm['gpot'][out_gpot_ports] = np.random.rand(len(out_gpot_ports))
            
        # Randomly select output ports to emit spikes:
        out_spike_ports = self.interface.out_ports().spike_ports().to_tuples()
        self.pm['spike'][out_spike_ports] = np.random.randint(0, 2, len(out_spike_ports))

Notice that every LPU instance must be associated with a unique identifier (id). An LPU contains a port-mapper attribute (pm) that maps input and output ports to a data array that may be accessed by the LPU's internal implementation; after each step of execution, the array associated with the port-mapper is updated with input data from source LPUs while output data from the array is transmitted to destination LPUs.
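
Conceptually, the port-mapper resolves port identifiers to offsets into a flat data array. The following standalone sketch illustrates the idea (it is an illustration only, not the actual neurokernel implementation):

import numpy as np

# Illustrative stand-in for the port-mapper: identifiers map to integer
# offsets into a flat data array.
ports = ['/a/in/gpot0', '/a/in/gpot1', '/a/out/gpot0', '/a/out/gpot1']
offset = {p: i for i, p in enumerate(ports)}
data = np.zeros(len(ports))

# Write output data:
out = [offset['/a/out/gpot0'], offset['/a/out/gpot1']]
data[out] = np.random.rand(len(out))

# Read input data:
print(data[[offset['/a/in/gpot0'], offset['/a/in/gpot1']]])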

One can instantiate the above LPU class as follows:


In [2]:
from neurokernel.plsel import PathLikeSelector
from neurokernel.tools.comm import get_random_port

port_data = get_random_port()
port_ctrl = get_random_port()

m1_int_sel_in_gpot = '/a/in/gpot0,/a/in/gpot1'
m1_int_sel_out_gpot = '/a/out/gpot0,/a/out/gpot1'
m1_int_sel_in_spike = '/a/in/spike0,/a/in/spike1'
m1_int_sel_out_spike = '/a/out/spike0,/a/out/spike1'
m1_int_sel = ','.join([m1_int_sel_in_gpot, m1_int_sel_out_gpot,
                       m1_int_sel_in_spike, m1_int_sel_out_spike])
N1_gpot = PathLikeSelector.count_ports(','.join([m1_int_sel_in_gpot, m1_int_sel_out_gpot]))
N1_spike = PathLikeSelector.count_ports(','.join([m1_int_sel_in_spike, m1_int_sel_out_spike]))
m1 = MyModule(m1_int_sel,
              m1_int_sel_in_gpot, m1_int_sel_in_spike,
              m1_int_sel_out_gpot, m1_int_sel_out_spike,
              np.zeros(N1_gpot, np.float64),
              np.zeros(N1_spike, int),
              port_data, port_ctrl, 'm1')

m2_int_sel_in_gpot = '/b/in/gpot0,/b/in/gpot1'
m2_int_sel_out_gpot = '/b/out/gpot0,/b/out/gpot1'
m2_int_sel_in_spike = '/b/in/spike0,/b/in/spike1'
m2_int_sel_out_spike = '/b/out/spike0,/b/out/spike1'
m2_int_sel = ','.join([m2_int_sel_in_gpot, m2_int_sel_out_gpot,
                       m2_int_sel_in_spike, m2_int_sel_out_spike])
N2_gpot = PathLikeSelector.count_ports(','.join([m2_int_sel_in_gpot, m2_int_sel_out_gpot]))
N2_spike = PathLikeSelector.count_ports(','.join([m2_int_sel_in_spike, m2_int_sel_out_spike]))
m2 = MyModule(m2_int_sel,
              m2_int_sel_in_gpot, m2_int_sel_in_spike,
              m2_int_sel_out_gpot, m2_int_sel_out_spike,
              np.zeros(N2_gpot, np.float64),
              np.zeros(N2_spike, int),
              port_data, port_ctrl, 'm2')

Using the ports in each of the above LPUs' interfaces, one can define a connectivity pattern between them as follows:


In [3]:
from neurokernel.pattern import Pattern

pat12 = Pattern(m1_int_sel, m2_int_sel)

# Set port attributes:
pat12.interface[m1_int_sel_out_gpot] = [0, 'in', 'gpot']
pat12.interface[m1_int_sel_in_gpot] = [0, 'out', 'gpot']
pat12.interface[m1_int_sel_out_spike] = [0, 'in', 'spike']
pat12.interface[m1_int_sel_in_spike] = [0, 'out', 'spike']
pat12.interface[m2_int_sel_in_gpot] = [1, 'out', 'gpot']
pat12.interface[m2_int_sel_out_gpot] = [1, 'in', 'gpot']
pat12.interface[m2_int_sel_in_spike] = [1, 'out', 'spike']
pat12.interface[m2_int_sel_out_spike] = [1, 'in', 'spike']

# Define connections:
pat12['/a/out/gpot0', '/b/in/gpot0'] = 1
pat12['/a/out/gpot1', '/b/in/gpot1'] = 1
pat12['/b/out/gpot0', '/a/in/gpot0'] = 1
pat12['/b/out/gpot1', '/a/in/gpot1'] = 1
pat12['/a/out/spike0', '/b/in/spike0'] = 1
pat12['/a/out/spike1', '/b/in/spike1'] = 1
pat12['/b/out/spike0', '/a/in/spike0'] = 1
pat12['/b/out/spike1', '/a/in/spike1'] = 1
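
With pat12 defined, the two modules could be linked and executed using the Manager class that appears later in this notebook. The following is a sketch; the exact signature of Manager.connect (here assumed to take the two modules, the pattern, and the interface number associated with each module) may differ between Neurokernel versions:

from neurokernel.core import Manager

man = Manager(port_data, port_ctrl)
man.add_brok()   # broker that routes data between modules
man.add_mod(m1)
man.add_mod(m2)

# Assumed signature: interface 0 of pat12 corresponds to m1,
# interface 1 to m2:
man.connect(m1, m2, pat12, 0, 1)

man.start(steps=100)
man.stop()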

A Simple Example: Creating an LPU

To obviate the need to implement an LPU completely from scratch, Neurokernel also contains a functional LPU class (neurokernel.LPU.LPU.LPU) that supports the following neuron and synapse models:

  • Leaky Integrate-and-Fire (LIF) neuron (spiking neuron model)
  • Morris-Lecar (ML) neuron (graded potential neuron model)
  • Alpha function synapse
  • Conductance-based synapse (referred to as power_gpot_gpot)

Note that although the ML model can in principle be configured as a spiking neuron model, the implementation in the LPU class is configured to output its membrane potential.

Alpha function synapses may be used to connect any type of presynaptic neuron to any type of postsynaptic neuron; the neuron presynaptic to a conductance-based synapse must be a graded potential neuron.

It should be emphasized that the above LPU implementation and the currently supported models are not necessarily optimal and may be replaced with improved implementations in the future.

The LPU class provided by Neurokernel may be instantiated with a graph describing its internal structure. The graph must be stored in GEXF file format with nodes and edges respectively corresponding to instances of the supported neuron and synapse models. To facilitate construction of an LPU, the networkx Python package may be used to set the parameters of the model instances. For example, the following code defines a simple network consisting of an LIF neuron with a single synaptic connection to an ML neuron; the synaptic current elicited by the LIF neuron's spikes is modeled by an alpha function:


In [4]:
import numpy as np
import networkx as nx

G = nx.DiGraph() # or nx.MultiDiGraph()
G.add_nodes_from([0, 1])
G.node[0] = {
    'model': 'LeakyIAF',
    'name': 'neuron_0',
    'extern': True,     # indicates whether the neuron can receive an external input signal
    'public': True,      # indicates whether the neuron can emit output to other LPUs 
    'spiking': True,    # indicates whether the neuron's output is spikes or its membrane voltage
    'selector': '/a[0]', # every public neuron must have a selector
    'V': np.random.uniform(-0.06, -0.025), # initial membrane voltage
    'Vr': -0.0675489770451,                # reset voltage
    'Vt': -0.0251355161007,                # spike threshold
    'R': 1.02445570216,                    # membrane resistance
    'C': 0.0669810502993}                  # membrane capacitance
G.node[1] = {
    'model': "MorrisLecar",
    'name': 'neuron_1',
    'extern': False,
    'public': True,
    'spiking': False,
    'selector': '/a[1]',
    'V1': 0.03,
    'V2': 0.015,
    'V3': 0,
    'V4': 0.03,
    'phi': 0.025,
    'offset': 0,
    'initV': -0.05214,
    'initn': 0.03}
G.add_edge(0, 1, type='directed', attr_dict={
    'model': 'AlphaSynapse',
    'name': 'synapse_0_1',
    'class': 0,           # 0 = spike->spike, 1 = spike->gpot, 2 = gpot->spike, 3 = gpot->gpot
    'ar': 1.1*1e2,        # rise rate of the alpha function
    'ad': 1.9*1e3,        # decay rate of the alpha function
    'reverse': 65*1e-3,   # reversal potential
    'gmax': 2*1e-3,       # maximum conductance
    'conductance': True}) # indicates whether the synapse's output is a conductance or current
nx.write_gexf(G, 'simple_lpu.gexf.gz')
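
One can confirm that the graph was written correctly by reading it back (note that nx.read_gexf returns node identifiers as strings):

g = nx.read_gexf('simple_lpu.gexf.gz')
print(g.nodes(data=True))
print(g.edges(data=True))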

We can prepare a simple pulse input and save it in an HDF5 file to pass to neuron_0 as follows:


In [5]:
import h5py

dt = 1e-4           # time resolution of model execution in seconds
dur = 1.0           # duration in seconds
Nt = int(dur/dt)    # number of data points in time

start = 0.3
stop = 0.6

I_max = 0.6
t = np.arange(0, dt*Nt, dt)
I = np.zeros((Nt, 1), dtype=np.double)
I[np.logical_and(t>start, t<stop)] = I_max

with h5py.File('simple_input.h5', 'w') as f:
    f.create_dataset('array', (Nt, 1),
                     dtype=np.double,
                     data=I)
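
The stored stimulus can be verified by reading it back:

with h5py.File('simple_input.h5', 'r') as f:
    I_in = f['array'][:]
print(I_in.shape)   # (10000, 1)
print(I_in.max())   # 0.6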

The LPU defined earlier can be instantiated and executed as follows:


In [6]:
from neurokernel.core import Manager
from neurokernel.LPU.LPU import LPU
from neurokernel.tools.comm import get_random_port

port_data = get_random_port()
port_ctrl = get_random_port()

(n_dict, s_dict) = LPU.lpu_parser('simple_lpu.gexf.gz')
lpu = LPU(dt, n_dict, s_dict, 
          input_file='simple_input.h5',
          output_file='simple_output.h5',
          port_ctrl=port_ctrl, port_data=port_data,
          device=0, id='simple',
          debug=False)

man = Manager(port_data, port_ctrl)
man.add_brok()
man.add_mod(lpu)
man.start(steps=Nt)
man.stop()

The spikes emitted by neuron_0 and the membrane potential of neuron_1 are respectively stored in simple_output_spike.h5 and simple_output_gpot.h5.
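
For a quick look at the raw results, these files can be read directly with h5py; here we assume they use the same 'array' dataset name as the input file created above:

with h5py.File('simple_output_gpot.h5', 'r') as f:
    V_out = np.asarray(f['array'])
print(V_out.shape)   # one row per time step, one column per graded potential port

The results can also be visualized using a built-in LPU output visualization class that can render the output in video and image format: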


In [7]:
import matplotlib as mpl
mpl.use('agg')
import matplotlib.pyplot as plt

import neurokernel.LPU.utils.visualizer as vis
import networkx as nx

# Temporary fix for bug in networkx 1.8:
nx.readwrite.gexf.GEXF.convert_bool = {'false':False, 'False':False,
                                       'true':True, 'True':True}

V = vis.visualizer()

V.add_LPU('simple_input.h5', LPU='Input')
V.add_plot({'type': 'waveform', 'ids': [[0]]}, 'input_Input')

V.add_LPU('simple_output_spike.h5',
          './simple_lpu.gexf.gz', 'Simple LPU (Spikes)')
V.add_plot({'type':'raster', 'ids': {0: [0]},
            'yticks': [0], 'yticklabels': [0]},
            'Simple LPU (Spikes)','Output')
V.add_LPU('simple_output_gpot.h5',
          './simple_lpu.gexf.gz', 'Simple LPU (Graded Potential)')
V.add_plot({'type': 'waveform', 'ids': {0:[0]}},
            'Simple LPU (Graded Potential)', 'Output')

V._update_interval = 50
V.rows = 3
V.cols = 1
V.fontsize = 10
V.out_filename = 'simple_output.avi'
V.codec = 'libtheora'
V.dt = 0.0001
V.xlim = [0, 1.0]
V.figsize = (6, 4)
V.run('simple_output.png', 120)

# Don't automatically display the output image:
plt.close(plt.gcf())

Here is the generated image of the output (saved as simple_output.png), showing the input stimulus, the spikes emitted by neuron_0, and the membrane potential of neuron_1:

[Figure: simple_output.png]

A More Complex Example: Connecting LPUs

The following example demonstrates the creation and connection of two LPUs containing multiple neurons with a connectivity pattern. The number of neurons and synapses in each of the LPUs' internal populations is randomly generated: the number of neurons in each population is randomly selected between 30 and 40. The LPUs' projection neurons, as well as the populations of input neurons presynaptic to the LPUs that accept an external stimulus, are modeled as LIF neurons, while each of the local neurons is modeled as either an LIF neuron or a graded potential ML neuron. Synaptic currents are modeled with alpha functions. Synapses between the LPUs' local and projection neurons, as well as synapses between the input neurons and the LPUs' internal neurons, are also created randomly; roughly half of all possible pairs of neurons are connected.
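
The wiring rule can be summarized with a short illustrative sketch (this is not the actual generator; gen_generic_lpu.py below is authoritative):

import numpy as np

rng = np.random.RandomState(0)
n_pre, n_post = 35, 35                  # population sizes drawn from [30, 40]
conn = rng.rand(n_pre, n_post) < 0.5    # connect each pair with probability 0.5
print('%d of %d possible connections created' % (conn.sum(), n_pre*n_post))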

To generate the LPUs and input signals used in this demo, we run the following script:


In [8]:
%cd -q ~/neurokernel/examples/intro/data
%run gen_generic_lpu.py -s 0 -l lpu_0 generic_lpu_0.gexf.gz generic_lpu_0_input.h5
%run gen_generic_lpu.py -s 1 -l lpu_1 generic_lpu_1.gexf.gz generic_lpu_1_input.h5

Since the neurons and synapses in the generated LPUs are stored in GEXF format, they can be easily accessed using networkx and pandas. Neurokernel provides a convenience function to convert between networkx graphs and pandas' DataFrame class:


In [9]:
import neurokernel.tools.graph
g_0 = nx.read_gexf('generic_lpu_0.gexf.gz')
df_neu_0, df_syn_0 = neurokernel.tools.graph.graph_to_df(g_0)

Say one wishes to explore several LIF neurons in LPU 0. Here is how to access their parameters:


In [10]:
df_neu_0[df_neu_0['model'] == 'LeakyIAF'][50:55][['name', 'selector', 'model', 'extern',
                                                  'public', 'spiking', 'V', 'Vr', 'Vt', 'R', 'C']]


Out[10]:
    name      selector          model     extern  public  spiking  V            Vr           Vt           R         C
70  proj_3_s  /lpu_0/out/spk/3  LeakyIAF  False   True    True     -0.04021839  -0.06754898  -0.02513552  1.024456  0.06698105
71  proj_4_s  /lpu_0/out/spk/4  LeakyIAF  False   True    True     -0.02972141  -0.06754898  -0.02513552  1.024456  0.06698105
72  proj_5_s  /lpu_0/out/spk/5  LeakyIAF  False   True    True     -0.04218609  -0.06754898  -0.02513552  1.024456  0.06698105
73  proj_6_s  /lpu_0/out/spk/6  LeakyIAF  False   True    True     -0.0279147   -0.06754898  -0.02513552  1.024456  0.06698105
74  proj_7_s  /lpu_0/out/spk/7  LeakyIAF  False   True    True     -0.02775948  -0.06754898  -0.02513552  1.024456  0.06698105
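
The graded potential neurons can be inspected in the same way; for example, assuming the generic LPUs use the MorrisLecar parameters shown earlier in this notebook:

df_neu_0[df_neu_0['model'] == 'MorrisLecar'][:5][['name', 'selector', 'extern', 'public',
                                                  'spiking', 'V1', 'V2', 'V3', 'V4', 'phi']]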

Say one wishes to find the parameters of the synapses linking neuron proj_3_s to other destination neurons; these can be accessed using the following query:


In [11]:
ind = df_syn_0['name'].str.startswith('proj_3_s')
df_syn_0[ind][['name','model','class','ar','ad','reverse','gmax','conductance']]


Out[11]:
    name                 model         class  ar   ad    reverse  gmax     conductance
37  proj_3_s-local_1_s   AlphaSynapse  0      110  1900  0.065    0.003    True
42  proj_3_s-local_6_g   AlphaSynapse  1      110  1900  0.01     0.00031  True
45  proj_3_s-local_9_g   AlphaSynapse  1      110  1900  0.01     0.00031  True
47  proj_3_s-local_11_g  AlphaSynapse  1      110  1900  0.01     0.00031  True
48  proj_3_s-local_12_g  AlphaSynapse  1      110  1900  0.01     0.00031  True
50  proj_3_s-local_14_g  AlphaSynapse  1      110  1900  0.01     0.00031  True
55  proj_3_s-local_19_g  AlphaSynapse  1      110  1900  0.01     0.00031  True
57  proj_3_s-local_21_s  AlphaSynapse  0      110  1900  0.065    0.003    True
59  proj_3_s-local_23_g  AlphaSynapse  1      110  1900  0.01     0.00031  True
63  proj_3_s-local_27_s  AlphaSynapse  0      110  1900  0.065    0.003    True
64  proj_3_s-local_28_g  AlphaSynapse  1      110  1900  0.01     0.00031  True
65  proj_3_s-local_29_s  AlphaSynapse  0      110  1900  0.065    0.003    True
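
Similarly, one can select the synapses targeting a particular destination neuron by matching the second half of the synapse name:

ind = df_syn_0['name'].str.endswith('local_9_g')
df_syn_0[ind][['name', 'model', 'class', 'ar', 'ad', 'reverse', 'gmax', 'conductance']]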

Once the configuration and the input stimulus are ready, we execute the entire model both with and without connections defined between the LPUs:


In [12]:
%cd -q ~/neurokernel/examples/intro
%run intro_demo.py

Finally, we generate videos that depict the progress of the model execution with and without connections between the LPUs:


In [13]:
%run visualize_output.py

Here is the output of the unconnected LPUs:


In [14]:
import IPython.display

IPython.display.YouTubeVideo('FY810D-hhD8')



Here is the output of the LPUs with synaptic connections from neurons in LPU 0 to LPU 1:


In [15]:
IPython.display.YouTubeVideo('U2FGNbQ5ibg')



If one compares the two videos, one will observe that the output of LPU 0 remains the same in both, while the output of LPU 1 changes when it is connected to LPU 0. This confirms that the one-way connectivity pattern defined earlier transmits data from one LPU to the other during model execution.

References

Chiang, A.-S., Lin, C.-Y., Chuang, C.-C., Chang, H.-M., Hsieh, C.-H., Yeh, C.-W., et al. (2011), Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution, Current Biology, 21, 1, 1–11, doi:10.1016/j.cub.2010.11.056