Vision Model Demo

This notebook illustrates how to run a Neurokernel-based model of portions of the fly's vision system.

Background

In addition to the retina, where phototransduction takes place, the optic lobe of Drosophila on each side of the fly brain can be divided into four major local processing units (LPUs), respectively referred to as the lamina, medulla, lobula, and lobula plate. Visual information progresses along a processing path that starts at the retina and successively passes through the lamina, medulla, and either the lobula or the lobula plate. The retinotopic columnar organization of most of these LPUs preserves the spatial structure of the visual stimulus.

There are at least 120 different types of neurons in the optic lobe. Most of these neurons (if not all) do not emit spikes; rather, they communicate via chemical synapses at which neurotransmitter is tonically released according to the graded potential of the presynaptic neuron. These synapses can exhibit varying amounts of delay depending on the neurotransmitter involved. Many neurons in the optic lobe also communicate through gap junctions.
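Tonic release driven by a graded potential is often described by a sigmoidal dependence of postsynaptic conductance on presynaptic voltage. The following is a minimal illustrative sketch of that idea; the threshold and slope values are hypothetical placeholders, not parameters from this model:

```python
import math

def synaptic_conductance(V_pre, g_max=1.0, V_th=-50.0, slope=10.0):
    """Postsynaptic conductance as a sigmoidal function of the
    presynaptic graded potential (illustrative parameters)."""
    return g_max / (1.0 + math.exp(-(V_pre - V_th) / slope))

# A more depolarized presynaptic potential releases more transmitter,
# yielding a larger postsynaptic conductance.
g_low = synaptic_conductance(-70.0)   # weakly depolarized
g_high = synaptic_conductance(-30.0)  # strongly depolarized
```

Because the release depends continuously on voltage, the postsynaptic conductance varies smoothly with the presynaptic potential rather than switching on at discrete spike times.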

The current vision system model is based upon available connectome data for the lamina (Rivera-Alba et al., 2011) and medulla (Fischbach and Dittrich, 1989; Higgins et al., 2004). The model consists of two LPUs; the first contains 9516 neurons (or about 90% of the cells) in the retina and lamina, while the second contains 6920 neurons (or about 17% of the cells) in the medulla, together with several neurons that connect to both the medulla and the first layer of the lobula. All neurons are modeled using the Morris-Lecar model with parameters chosen so that no spiking activity is elicited. Synapses are modeled using a simple model of tonic neurotransmitter release and its effect upon postsynaptic conductance. The model does not currently include gap junctions.
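To give a sense of the neuron model, here is a minimal forward-Euler sketch of the Morris-Lecar equations. The parameter values below are generic textbook-style placeholders, not the settings used in this model; with a weak input current the membrane potential settles to a graded (non-spiking) level, as in the optic lobe LPUs:

```python
import math

def morris_lecar_step(V, n, I_ext, dt=0.01,
                      C=20.0, g_L=2.0, g_Ca=4.0, g_K=8.0,
                      V_L=-60.0, V_Ca=120.0, V_K=-84.0,
                      V1=-1.2, V2=18.0, V3=2.0, V4=30.0, phi=0.04):
    """One forward-Euler step of the Morris-Lecar equations
    (illustrative parameters)."""
    m_inf = 0.5 * (1 + math.tanh((V - V1) / V2))  # instantaneous Ca2+ activation
    n_inf = 0.5 * (1 + math.tanh((V - V3) / V4))  # K+ activation steady state
    tau_n = 1.0 / (phi * math.cosh((V - V3) / (2 * V4)))
    dV = (I_ext - g_L * (V - V_L) - g_Ca * m_inf * (V - V_Ca)
          - g_K * n * (V - V_K)) / C
    return V + dt * dV, n + dt * (n_inf - n) / tau_n

# Simulate briefly with a sub-threshold input current; the potential
# converges to a graded depolarized level instead of spiking.
V, n = -60.0, 0.0
for _ in range(5000):
    V, n = morris_lecar_step(V, n, I_ext=30.0)
```

Selecting parameters in the non-spiking regime, as done here, mirrors the modeling choice described above for the optic lobe neurons.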

Executing the Model

Assuming that the Neurokernel source has been cloned to ~/neurokernel, we first create GEXF files containing the model configuration.


In [1]:
%matplotlib inline
%cd -q ~/neurokernel/examples/vision/data
%run generate_vision_gexf.py
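GEXF is an XML-based graph format that NetworkX can read and write directly, which is how the model configuration is stored. The following self-contained sketch round-trips a toy two-neuron graph through GEXF; the attribute names are purely illustrative and do not reflect the actual schema of the files generated above:

```python
import networkx as nx

# Build a toy directed graph with illustrative node/edge attributes.
g = nx.DiGraph()
g.add_node(0, model='MorrisLecar', name='R1')
g.add_node(1, model='MorrisLecar', name='L1')
g.add_edge(0, 1, model='graded_synapse')

# Write it out as GEXF and read it back.
nx.write_gexf(g, 'toy_lpu.gexf')
h = nx.read_gexf('toy_lpu.gexf')
print(h.number_of_nodes(), h.number_of_edges())
```

Inspecting the generated files this way can be a useful sanity check before running a long simulation.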

Then, we generate an input of duration 1.0 seconds and execute the model. Note that if you have access to only one GPU, replace --med_dev 1 with --med_dev 0 in the third line below.


In [2]:
%run gen_vis_input.py
%cd -q ~/neurokernel/examples/vision
%run vision_demo.py --lam_dev 0 --med_dev 1

Next, we generate a video of the membrane potentials of specific neurons in the two LPUs:


In [3]:
%run visualize_output.py

The visualization script produces a video depicting the input signal presented to a grid of neurons associated with each of the 768 cartridges in one of the fly's eyes, along with the responses of select neurons in the corresponding columns of the retina/lamina and medulla LPUs. The cartridges and columns are organized in a hexagonal grid similar to the following; each pixel in the visualization corresponds to the neuron associated with one cartridge or column.
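A hexagonal layout like this is commonly described with axial coordinates, which map each cell to a 2D center position. The sketch below is a generic illustration of that scheme (a small patch of radius 2, i.e. 19 cells), not the coordinate convention actually used by the visualization script:

```python
import math

def hex_to_xy(q, r, size=1.0):
    """Convert axial hex coordinates (q, r) to the 2D Cartesian
    center of the corresponding cell (pointy-side spacing)."""
    x = size * 1.5 * q
    y = size * math.sqrt(3.0) * (r + q / 2.0)
    return x, y

# Enumerate all cells within hex "radius" 2 of the origin: the axial
# coordinates (q, r) of a hexagonal patch satisfy
# max(|q|, |r|, |q + r|) <= 2, giving 3*2*(2+1) + 1 = 19 cells.
cells = [(q, r) for q in range(-2, 3) for r in range(-2, 3)
         if abs(q + r) <= 2]
coords = [hex_to_xy(q, r) for q, r in cells]
```

The full eye would correspond to a much larger patch covering all 768 cartridges, each mapped to one pixel of the animation.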

The resulting video (hosted on YouTube) can be viewed below:


In [4]:
import IPython.display
IPython.display.YouTubeVideo('5eB78fLl1AM')


Out[4]:

The three response animations correspond to the specific neurons depicted below:

Acknowledgements

The vision model demonstrated in this notebook was developed by Nikul H. Ukani and Yiyin Zhou.

References

Fischbach, K.-F. and Dittrich, A. (1989), The optic lobe of Drosophila melanogaster. I. a Golgi analysis of wild-type structure, Cell and Tissue Research, 258, 3, doi:10.1007/BF00218858

Higgins, C. M., Douglass, J. K., and Strausfeld, N. J. (2004), The computational basis of an identified neuronal circuit for elementary motion detection in dipterous insects, Visual Neuroscience, 21, 04, 567–586, doi:10.1017/S0952523804214079

Rivera-Alba, M., Vitaladevuni, S. N., Mishchenko, Y., Lu, Z., Takemura, S.-Y., Scheffer, L., et al. (2011), Wiring economy and volume exclusion determine neuronal placement in the Drosophila brain, Current Biology, 21, 23, 2000–2005, doi:10.1016/j.cub.2011.10.022