In [ ]:
import neuroglancer
import numpy as np
Create a new (initially empty) viewer. This starts a webserver in a background thread, which serves a copy of the Neuroglancer client, serves local volume data, and handles sending and receiving Neuroglancer state updates.
In [ ]:
viewer = neuroglancer.Viewer()
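By default the server binds to 127.0.0.1. If the kernel runs on a remote machine, the bind address can be set before the viewer is created; a minimal sketch (the address shown is just an example, and exposing the server more widely has the security implications noted below):
In [ ]:
# Call this *before* constructing the Viewer. Binding to 0.0.0.0 makes the
# server reachable from other hosts; see the credential warning below.
neuroglancer.set_server_bind_address('0.0.0.0')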
Print a link to the viewer (only valid while the notebook kernel is running). Note that while the Viewer is running, anyone with the link can obtain any authentication credentials that the neuroglancer Python module obtains. Therefore, be very careful about sharing the link, and keep in mind that sharing the notebook will likely also share viewer links.
In [ ]:
viewer
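The link can also be obtained as a plain string (for logging, or outside a notebook); in current versions of the Python package:
In [ ]:
print(viewer)  # str(viewer) is the viewer URL
print(viewer.get_viewer_url())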
Add some example layers using the precomputed data source (HHMI Janelia FlyEM FIB-25 dataset).
In [ ]:
with viewer.txn() as s:
    s.layers['image'] = neuroglancer.ImageLayer(
        source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/image')
    s.layers['segmentation'] = neuroglancer.SegmentationLayer(
        source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/ground_truth',
        selected_alpha=0.3)
Move the viewer position.
In [ ]:
with viewer.txn() as s:
    s.voxel_coordinates = [3000, 3000, 3000]
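The zoom level of the cross-section panels can be adjusted through the same transaction interface; a small sketch (the scale value is arbitrary):
In [ ]:
with viewer.txn() as s:
    s.cross_section_scale = 2  # example value; larger values zoom out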
Hide the segmentation layer.
In [ ]:
with viewer.txn() as s:
    s.layers['segmentation'].visible = False
Display a numpy array as an additional layer. A reference to the numpy array is kept only as long as the layer remains in the viewer.
In [ ]:
import cloudvolume

image_vol = cloudvolume.CloudVolume(
    'https://storage.googleapis.com/neuroglancer-public-data/flyem_fib-25/image',
    mip=0, bounded=True, progress=False, provenance={})
a = np.zeros((200, 200, 200), np.uint8)

def make_thresholded(threshold):
    a[...] = image_vol[3000:3200, 3000:3200, 3000:3200][..., 0] > threshold

make_thresholded(110)

# This volume handle can be used to notify the viewer that the data has changed.
volume = neuroglancer.LocalVolume(
    a,
    dimensions=neuroglancer.CoordinateSpace(
        names=['x', 'y', 'z'],
        units='nm',
        scales=[8, 8, 8],
    ),
    voxel_offset=[3000, 3000, 3000])
with viewer.txn() as s:
    s.layers['overlay'] = neuroglancer.ImageLayer(
        source=volume,
        # Define a custom shader to display this mask array as red+alpha.
        shader="""
void main() {
  float v = toNormalized(getDataValue(0)) * 255.0;
  emitRGBA(vec4(v, 0.0, 0.0, v));
}
""",
    )
Modify the overlay volume, and call invalidate() to notify the Neuroglancer client.
In [ ]:
make_thresholded(100)
volume.invalidate()
Select a couple of segments.
In [ ]:
with viewer.txn() as s:
    s.layers['segmentation'].segments.update([1752, 88847])
    s.layers['segmentation'].visible = True
Print the neuroglancer viewer state. The Neuroglancer Python library provides a set of Python objects that wrap the JSON-encoded viewer state. viewer.state returns a read-only snapshot of the state. To modify the state, use the viewer.txn() function or viewer.set_state.
In [ ]:
viewer.state
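Individual fields of the snapshot can be read the same way they are written inside a transaction; for example, the position and layer visibility set earlier:
In [ ]:
print(viewer.state.voxel_coordinates)
print(viewer.state.layers['segmentation'].visible)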
Print the set of selected segments.
In [ ]:
viewer.state.layers['segmentation'].segments
Update the state by calling set_state directly.
In [ ]:
import copy
new_state = copy.deepcopy(viewer.state)
new_state.layers['segmentation'].segments.add(10625)
viewer.set_state(new_state)
Bind the 't' key in neuroglancer to a Python action.
In [ ]:
num_actions = 0

def my_action(s):
    global num_actions
    num_actions += 1
    with viewer.config_state.txn() as st:
        st.status_messages['hello'] = ('Got action %d: mouse position = %r' %
                                       (num_actions, s.mouse_voxel_coordinates))
    print('Got my-action')
    print('  Mouse position: %s' % (s.mouse_voxel_coordinates,))
    print('  Layer selected values: %s' % (s.selected_values,))

viewer.actions.add('my-action', my_action)
with viewer.config_state.txn() as s:
    s.input_event_bindings.viewer['keyt'] = 'my-action'
    s.status_messages['hello'] = 'Welcome to this example'
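The same action can also be bound to mouse events, and the bindings object exposes separate maps for different parts of the UI (for example, data_view for the data panels); a sketch, assuming the standard event-specifier syntax:
In [ ]:
with viewer.config_state.txn() as s:
    # Trigger the same Python action with shift+left-click in the data views.
    s.input_event_bindings.data_view['shift+mousedown0'] = 'my-action'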
Change the view layout to 3-d.
In [ ]:
with viewer.txn() as s:
    s.layout = '3d'
    s.projection_scale = 3000
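Other built-in layout names include 'xy', 'xz', 'yz', 'xy-3d', and '4panel'; for example (switch back to '3d' before taking the screenshot below):
In [ ]:
with viewer.txn() as s:
    s.layout = '4panel'  # three cross sections plus a 3-d panel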
Take a screenshot (useful for creating publication figures, or for generating videos). While capturing the screenshot, we hide the UI and specify the viewer size so that we get a result independent of the browser size.
In [ ]:
from ipywidgets import Image
screenshot = viewer.screenshot(size=[1000, 1000])
screenshot_image = Image(value=screenshot.screenshot.image)
screenshot_image
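The captured image is PNG-encoded bytes, so it can also be written straight to disk (the filename here is arbitrary):
In [ ]:
with open('screenshot.png', 'wb') as f:
    f.write(screenshot.screenshot.image)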
Change the view layout to show the segmentation side by side with the image, rather than overlaid. This can also be done from the UI by dragging and dropping. By default, the side-by-side views have synchronized position, orientation, and zoom level, but this can be changed (a sketch follows the code below).
In [ ]:
with viewer.txn() as s:
    s.layout = neuroglancer.row_layout(
        [neuroglancer.LayerGroupViewer(layers=['image', 'overlay']),
         neuroglancer.LayerGroupViewer(layers=['segmentation'])])
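A sketch of unlinking one group's position, assuming the linked-navigation wrappers of the current Python API (the attribute names are an assumption and may differ between versions):
In [ ]:
with viewer.txn() as s:
    # Decouple the first group's position from the other groups; it then
    # pans independently. Setting 'linked' restores the default behavior.
    s.layout.children[0].position.link = 'unlinked'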
Remove the overlay layer.
In [ ]:
with viewer.txn() as s:
    s.layout = neuroglancer.row_layout(
        [neuroglancer.LayerGroupViewer(layers=['image']),
         neuroglancer.LayerGroupViewer(layers=['segmentation'])])
Create a publicly sharable URL to the viewer state (only works for external data sources, not layers served from Python). The Python objects for representing the viewer state (neuroglancer.ViewerState and friends) can also be used independently of the interactive Python-tied viewer to create Neuroglancer links.
In [ ]:
print(neuroglancer.to_url(viewer.state))
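The state objects work without any running viewer; for example, a link to the same public image layer can be built from scratch:
In [ ]:
state = neuroglancer.ViewerState()
state.layers['image'] = neuroglancer.ImageLayer(
    source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/image')
print(neuroglancer.to_url(state))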
Stop the Neuroglancer web server, which invalidates any existing links to the Python-tied viewer.
In [ ]:
neuroglancer.stop()