Mapping a Network of LPUs onto Multiple GPUs

This notebook illustrates how to connect and execute several generic LPUs on multiple GPUs.

Background

Neurokernel's architecture enables one to specify complex networks of LPUs that interact via different connectivity patterns and to map the LPUs to individual GPUs. This functionality is essential both to expressing models of the entire fly brain in terms of their constituent processing units and to developing future resource allocation mechanisms that can take advantage of available GPU resources in an automated manner.

Constructing an LPU Network

Since each LPU instance in a multi-LPU model must possess a unique identifier, constructing an LPU network amounts to instantiating connectivity patterns between the pairs of LPUs one wishes to connect and populating those patterns with data describing the connections between the ports exposed by the respective LPUs.
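Abstractly, this bookkeeping can be pictured as a table of connection records keyed by unordered pairs of LPU identifiers. The following sketch is purely illustrative; the names `patterns` and `connect` are hypothetical and are not part of Neurokernel's API:

```python
# Illustrative only: model an LPU network as unique LPU identifiers plus
# one record of port-to-port connections per unordered pair of LPUs.
patterns = {}

def connect(id_a, id_b, src_port, dest_port):
    # Key each record by the *unordered* pair so that connections in both
    # directions between two LPUs accumulate in a single shared record.
    key = tuple(sorted((id_a, id_b)))
    patterns.setdefault(key, []).append((src_port, dest_port))

connect('lpu_0', 'lpu_1', '/lpu_0/out/0', '/lpu_1/in/0')
connect('lpu_1', 'lpu_0', '/lpu_1/out/0', '/lpu_0/in/0')
```

Both calls land in the same record because the pair (lpu_0, lpu_1) is unordered; this mirrors how a single Pattern instance can hold connections in both directions between two LPUs.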

In the example below, we first create an input signal and instantiate N generic LPUs containing fixed numbers of local and projection neurons. Each LPU is configured to run on a separate GPU (at least N GPUs are assumed to be available). Notice that only one LPU receives the input signal:


In [1]:
%cd -q ~/neurokernel/examples/multi/data

import itertools
import random

import gen_generic_lpu as g

%cd -q ~/neurokernel/examples/multi

import neurokernel.core as core
from neurokernel.tools.comm import get_random_port

import neurokernel.pattern as pattern
from neurokernel.LPU.LPU import LPU

# Execution parameters:
dt = 1e-4
dur = 1.0
start = 0.3
stop = 0.6
I_max = 0.6
steps = int(dur/dt)

N_sensory = 30  # number of sensory neurons
N_local = 30    # number of local neurons
N_output = 30   # number of projection neurons

N = 3           # number of LPUs

# Only LPU 0 receives input and should therefore be associated with a population                                   
# of sensory neurons:  
neu_dict = {i: [0, N_local, N_output] for i in xrange(N)}
neu_dict[0][0] = N_sensory

# Create input signal for LPU 0:                                                                                   
in_file_name_0 = 'data/generic_input.h5'
g.create_input(in_file_name_0, neu_dict[0][0], dt, dur, start, stop, I_max)

# Store info for all instantiated LPUs in the following dict:                                                      
lpu_dict = {}

# Create several LPUs:                                                                                             
port_data = get_random_port()
port_ctrl = get_random_port()

for i, neu_num in neu_dict.iteritems():
    lpu_entry = {}

    if i == 0:
        in_file_name = in_file_name_0
    else:
        in_file_name = None
    lpu_file_name = 'data/generic_lpu_%s.gexf.gz' % i
    out_file_name = 'generic_output_%s.h5' % i

    id = 'lpu_%s' % i
    
    g.create_lpu(lpu_file_name, id, *neu_num)
    (n_dict, s_dict) = LPU.lpu_parser(lpu_file_name)

    lpu = LPU(dt, n_dict, s_dict, input_file=in_file_name,
              output_file=out_file_name,
              port_ctrl=port_ctrl, port_data=port_data,
              device=i, id=id,
              debug=False)

    lpu_entry['lpu_file_name'] = lpu_file_name
    lpu_entry['in_file_name'] = in_file_name
    lpu_entry['out_file_name'] = out_file_name
    lpu_entry['lpu'] = lpu
    lpu_entry['id'] = id

    lpu_dict[i] = lpu_entry

Each LPU exposes input and output communication ports. The generic LPU generator invoked above associates an output port with each projection neuron in an LPU, and an input port with each node that is connected via a synapse to some neuron in the LPU.
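Each port is named by a path-like selector string. As a rough illustration of how such identifiers partition by direction and type, one might generate them as follows; the exact selector format shown here is an assumption for illustration, not necessarily what the generator emits:

```python
# Build hypothetical selector strings for one LPU's ports, partitioned
# by direction ('in'/'out') and type ('spk' for spiking, 'gpot' for
# graded-potential):
def make_port_selectors(lpu_id, direction, port_type, n):
    return ['/%s/%s/%s/%d' % (lpu_id, direction, port_type, i)
            for i in range(n)]

out_spk = make_port_selectors('lpu_0', 'out', 'spk', 2)
in_gpot = make_port_selectors('lpu_0', 'in', 'gpot', 2)
```

Methods such as `out_ports()`, `spike_ports()`, and `gpot_ports()` in the next cell select the corresponding subsets of an LPU's interface.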

Once the LPUs have been instantiated, we use information about the ports exposed by each LPU to define connectivity patterns between the LPUs we wish to connect. Since the Pattern class can specify connections in both directions between two LPUs, it is only necessary to consider unordered combinations of LPUs. In the example below, we define connections between all pairs of LPUs in the network, i.e., the graph of LPUs is complete, connecting both spiking and graded-potential ports where present:
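The unordered pairs are enumerated with `itertools.combinations`, which visits each pair of identifiers exactly once regardless of order:

```python
import itertools

# Every unordered pair of LPU identifiers appears exactly once:
ids = [0, 1, 2]
pairs = list(itertools.combinations(ids, 2))
# → [(0, 1), (0, 2), (1, 2)]
```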


In [2]:
man = core.Manager(port_data, port_ctrl)
man.add_brok()

random.seed(0)

# Since each connectivity pattern between two LPUs contains the synapses in both
# directions, create connectivity patterns between each combination of LPU
# pairs:
for id_0, id_1 in itertools.combinations(lpu_dict.keys(), 2):

    lpu_0 = lpu_dict[id_0]['lpu']
    lpu_1 = lpu_dict[id_1]['lpu']

    # Find all output and input port selectors in each LPU:                                                                           
    out_ports_0 = lpu_0.interface.out_ports().to_selectors()
    out_ports_1 = lpu_1.interface.out_ports().to_selectors()

    in_ports_0 = lpu_0.interface.in_ports().to_selectors()
    in_ports_1 = lpu_1.interface.in_ports().to_selectors()

    out_ports_spk_0 = lpu_0.interface.out_ports().spike_ports().to_selectors()
    out_ports_gpot_0 = lpu_0.interface.out_ports().gpot_ports().to_selectors()

    out_ports_spk_1 = lpu_1.interface.out_ports().spike_ports().to_selectors()
    out_ports_gpot_1 = lpu_1.interface.out_ports().gpot_ports().to_selectors()

    in_ports_spk_0 = lpu_0.interface.in_ports().spike_ports().to_selectors()
    in_ports_gpot_0 = lpu_0.interface.in_ports().gpot_ports().to_selectors()

    in_ports_spk_1 = lpu_1.interface.in_ports().spike_ports().to_selectors()
    in_ports_gpot_1 = lpu_1.interface.in_ports().gpot_ports().to_selectors()

    # Initialize a connectivity pattern between the two sets of port 
    # selectors:                                                                                                         
    pat = pattern.Pattern(','.join(out_ports_0+in_ports_0),
                          ','.join(out_ports_1+in_ports_1))
    
    # Create connections from the ports with identifiers matching the output
    # ports of one LPU to the ports with identifiers matching the input
    # ports of the other LPU. First, define connections from LPU0 to LPU1:
    N_conn_spk_0_1 = min(len(out_ports_spk_0), len(in_ports_spk_1))
    N_conn_gpot_0_1 = min(len(out_ports_gpot_0), len(in_ports_gpot_1))
    for src, dest in zip(random.sample(out_ports_spk_0, N_conn_spk_0_1),
                         random.sample(in_ports_spk_1, N_conn_spk_0_1)):
        pat[src, dest] = 1
        pat.interface[src, 'type'] = 'spike'
        pat.interface[dest, 'type'] = 'spike'
    for src, dest in zip(random.sample(out_ports_gpot_0, N_conn_gpot_0_1),
                         random.sample(in_ports_gpot_1, N_conn_gpot_0_1)):
        pat[src, dest] = 1
        pat.interface[src, 'type'] = 'gpot'
        pat.interface[dest, 'type'] = 'gpot'

    # Next, define connections from LPU1 to LPU0:
    N_conn_spk_1_0 = min(len(out_ports_spk_1), len(in_ports_spk_0))
    N_conn_gpot_1_0 = min(len(out_ports_gpot_1), len(in_ports_gpot_0))
    for src, dest in zip(random.sample(out_ports_spk_1, N_conn_spk_1_0),
                         random.sample(in_ports_spk_0, N_conn_spk_1_0)):
        pat[src, dest] = 1
        pat.interface[src, 'type'] = 'spike'
        pat.interface[dest, 'type'] = 'spike'
    for src, dest in zip(random.sample(out_ports_gpot_1, N_conn_gpot_1_0),
                         random.sample(in_ports_gpot_0, N_conn_gpot_1_0)):
        pat[src, dest] = 1
        pat.interface[src, 'type'] = 'gpot'
        pat.interface[dest, 'type'] = 'gpot'
        
    man.connect(lpu_0, lpu_1, pat, 0, 1)
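The `random.sample`/`zip` idiom used above draws a random one-to-one pairing between output and input ports, sized by the smaller of the two sets so that no port appears in more than one pair. In isolation, with placeholder port names:

```python
import random

random.seed(0)
out_ports = ['o%d' % i for i in range(5)]
in_ports = ['i%d' % i for i in range(3)]

# Sample (without replacement) as many ports as the smaller set holds,
# then zip the two samples into one-to-one (src, dest) pairs:
n = min(len(out_ports), len(in_ports))
pairings = list(zip(random.sample(out_ports, n),
                    random.sample(in_ports, n)))
```

Because `random.sample` draws without replacement, each output port is used at most once and each input port exactly once when the inputs are the smaller set.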

Once all of the connections are in place, the entire network may be executed as follows:


In [3]:
man.start(steps=steps)
man.stop()

The output generated by each LPU is stored in its own HDF5 file.

Assuming that the Neurokernel source code has been cloned to ~/neurokernel, the above demo can also be run as a script as follows. The parameters below specify a network of 3 interconnected LPUs, each containing 30 local neurons and 30 projection neurons, with 30 sensory neurons providing input to one of the LPUs:


In [4]:
%cd -q ~/neurokernel/examples/multi
%run multi_demo.py -y 30 -n 30 -o 30 -u 3