Visualizing point clouds with Mayavi

This notebook covers the basics of visualizing point clouds with Mayavi. Mayavi is a Python-scriptable interface to VTK, which makes it possible to plot scientific data as well as point clouds.


In [1]:
import numpy as np
import mayavi.mlab as mlab
from tvtk.api import tvtk
from mayavi.modules.glyph import Glyph
from mayavi.tools.sources import MGlyphSource
from mayavi.modules.surface import Surface
from mayavi.filters.mask_points import MaskPoints


WARNING:traits.has_traits:DEPRECATED: traits.has_traits.wrapped_class, 'the 'implements' class advisor has been deprecated. Use the 'provides' class decorator.

Plotting the point cloud

Plotting a point cloud can be done very easily with the mlab.points3d() command. That way, however, it is not easy to change the colors or add different modules to the pipeline, so we will build the pipeline manually. A pipeline is, quoting Wikipedia's definition:

In 3D computer graphics, the graphics pipeline or rendering pipeline refers to the sequence of steps used to create a 2D raster representation of a 3D scene.

So this is what we will do. But first import the data and create a data source:


In [2]:
xyz = np.load(r'D:\rongen\xyz_delft_center.npy', mmap_mode = 'r')
rgb = np.load(r'D:\rongen\rgb_delft_center.npy', mmap_mode = 'r')

def create_data_source():
    '''
    Function to create a data source readable by Mayavi/VTK
    '''
    # Create the datasource
    data_source  = MGlyphSource()
    # Add the x, y and z data to it
    data_source.reset(x = xyz[:, 0], y = xyz[:, 1], z = xyz[:, 2])
    # Add a vtk_data_source to the pipeline
    vtk_data_source = mlab.pipeline.add_dataset(data_source.dataset)
    # To be able to change the content of the datasource afterwards, we state the following:
    data_source.m_data = vtk_data_source
    return vtk_data_source
vtk_data_source = create_data_source()


C:\Users\rongen\AppData\Local\Continuum\Anaconda\lib\site-packages\traits\has_traits.py:1536: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
  setattr( self, name, value )

The window has popped up already, but we still need a few more steps before we can see the point cloud.

Add the colors:


In [3]:
def add_colors(vtk_data_source):
    '''
    Function to add colors to data source
    Input : the data source which has to be colored
    '''
    # Create an array which vtk can read
    col = tvtk.UnsignedCharArray()
    # Set color data to it
    col.from_array(rgb)
    # Change the scalar (color) data of the dataset to the color array
    vtk_data_source.mlab_source.dataset.point_data.scalars=col

add_colors(vtk_data_source)

Now it is possible to add a mask to the pipeline. The mask prevents some points from being rendered, which comes in handy when you do not want to show all points, for example to save memory. If you have sufficient memory (> 4 GB) you can probably plot all points and skip the mask; in that case, add the glyph directly to the vtk_data_source (next step).


In [4]:
# Create an engine to which filters can be added
engine = mlab.get_engine()

def create_mask(npoints = 100000):
    '''
    Function to create a mask
    Input : the number of points which are visible
    '''
    # Create the mask
    mask = MaskPoints()
    engine.add_filter(mask, vtk_data_source)
    mask.filter.random_mode = True
    mask.filter.maximum_number_of_points = npoints
    return mask
mask = create_mask()

Next, add a glyph module, which determines how the points are visualized.


In [5]:
# Create the glyph which gives the point cloud its visible points
def create_glyph(add_to = 'mask'):
    '''
    Function to add glyphs to the data
    '''
    glyph = Glyph()
    if add_to == 'mask':
        engine.add_filter(glyph, mask)
    else:
        engine.add_filter(glyph, vtk_data_source)
    glyph.glyph.glyph_source.glyph_source.glyph_type = 'vertex'
    return glyph

glyph = create_glyph()
# Show the figure
mlab.show()

Improve navigation

The point cloud looks fine, but the navigation is still a bit annoying. The camera rolls, which we do not want in our case. Also, the focal point is fixed, so we can only look at one point. Let's improve these things, starting with the rolling.


In [6]:
#-------------------------------------------------------------------------------
# Add a terrain interactor style, which does not roll. For this we use the tvtk
# module which has vtk objects as traits.

# The interactor style has to be inserted before the mask or glyph is added to
# the pipeline, so we build the scene again below (close the old window first)

# Set the interactor to the new style
def change_interaction():
    '''
    Function to change the interaction style
    '''
    istyle = tvtk.InteractorStyleTerrain()
    iactor = fig.scene.interactor
    iactor.interactor_style = istyle

# figure
fig             = mlab.gcf()

# data_source
vtk_data_source = create_data_source()
add_colors(vtk_data_source)

# interactorstyle
change_interaction()

# engine, mask and glyph
engine          = mlab.get_engine()
mask            = create_mask(200000)
glyph           = create_glyph()

# show
mlab.show()

The camera stopped rolling, but the scroll-to-zoom function has been disabled for some reason. Let's build that one ourselves, and while we are at it, build in key press events for moving around.


In [7]:
# We will have to add observers for key press events and mouse wheel events.
# Change the interaction function and add observers

def change_interaction():
    '''
    Function to change the interaction style and adds observers
    '''
    istyle = tvtk.InteractorStyleTerrain()
    iactor = fig.scene.interactor
    iactor.interactor_style = istyle
    istyle.add_observer('KeyPressEvent',           Callback)
    istyle.add_observer('MouseWheelBackwardEvent', Callback)
    istyle.add_observer('MouseWheelForwardEvent',  Callback)

# A callback function is needed to handle the events. The observers pass two
# arguments to the callback function: the object, with all the event traits,
# and the event name itself, which is just a string.

def Callback(obj, event):
    '''
    Callback function, handles the interaction events
    '''
    # Get the geometric properties (azimuth, elevation, distance, focal point)
    # This can easily be done by calling mlab.view() without arguments.
    # mlab.move() without arguments gives the camera position and the focal point
    azm, elev, dist, fp = mlab.view()
    cam, fp = mlab.move()
    # Set the height of the focal point to 2.5 m.
    fp[2] = 2.5
    # Convert the azimuth to radians
    azm = azm * 2 * np.pi / 360.
    
    # To move the camera we will use the numpad keys.
    if event == 'KeyPressEvent':
        # Get the pressed key
        key = obj.GetInteractor().GetKeyCode()
        
        # Let the moved distance be determined by the height of the camera (cam[2])
        step = cam[2]/50.+1
        
        if key == '8':
            mlab.view(focalpoint = fp + [-step*np.cos(azm),-step*np.sin(azm), 0], distance = dist)
        if key == '4':
            mlab.view(focalpoint = fp + [ step*np.sin(azm),-step*np.cos(azm), 0], distance = dist)
        if key == '5':
            mlab.view(focalpoint = fp + [ step*np.cos(azm), step*np.sin(azm), 0], distance = dist)
        if key == '6':
            mlab.view(focalpoint = fp + [-step*np.sin(azm), step*np.cos(azm), 0], distance = dist)
    
    # Make our own zoom function
    if event == 'MouseWheelForwardEvent':
        dist *= 0.9
        mlab.view(focalpoint = fp, distance = dist)
    if event == 'MouseWheelBackwardEvent':
        dist *= 1.111
        mlab.view(focalpoint = fp, distance = dist)

In [8]:
fig             = mlab.gcf()
vtk_data_source = create_data_source()
add_colors(vtk_data_source)
change_interaction()
engine          = mlab.get_engine()
mask            = create_mask(1000000)
glyph           = create_glyph()
mlab.show()

That works! We now have a detailed point cloud with pleasant interaction.
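As a sanity check, the focal-point offsets used by the numpad keys in the callback above can be reproduced with plain NumPy, independent of Mayavi (the function name `move_vector` is just for illustration):

```python
import numpy as np

def move_vector(key, azm_deg, step):
    '''Horizontal focal-point offsets as used by the numpad keys in the callback.'''
    azm = np.radians(azm_deg)
    offsets = {'8': [-step * np.cos(azm), -step * np.sin(azm), 0.0],
               '4': [ step * np.sin(azm), -step * np.cos(azm), 0.0],
               '5': [ step * np.cos(azm),  step * np.sin(azm), 0.0],
               '6': [-step * np.sin(azm),  step * np.cos(azm), 0.0]}
    return np.array(offsets[key])

# Each offset has length `step`, stays in the horizontal plane, and
# opposite keys cancel each other out:
v8 = move_vector('8', 30.0, 2.0)
v5 = move_vector('5', 30.0, 2.0)
print(np.isclose(np.linalg.norm(v8), 2.0))  # True
print(np.allclose(v8 + v5, 0.0))            # True
```

This confirms that the step size scales the movement uniformly in every direction, whatever the camera azimuth is.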

Making a movie

Next thing we are going to do is create a sequence of png snapshots, which we can compile to a movie. We will add another key to our interaction function, which will rotate the camera and take a snapshot.


In [15]:
import time

def Callback(obj, event):
    '''
    Callback function, handles the interaction events
    '''
    # Get the geometric properties (azimuth, elevation, distance, focal point)
    # This can easily be done by calling mlab.view() without arguments.
    # mlab.move() without arguments gives the camera position and the focal point
    azm, elev, dist, fp = mlab.view()
    cam, fp = mlab.move()
    # Set the height of the focal point to 2.5 m.
    fp[2] = 2.5
    # Convert the azimuth to radians
    azm = azm * 2 * np.pi / 360.
    
    # To move the camera we will use the numpad keys.
    if event == 'KeyPressEvent':
        # Get the pressed key
        key = obj.GetInteractor().GetKeyCode()
        
        # Let the moved distance be determined by the height of the camera (cam[2])
        step = cam[2]/50.+1
        
        if key == '8':
            mlab.view(focalpoint = fp + [-step*np.cos(azm),-step*np.sin(azm), 0], distance = dist)
        if key == '4':
            mlab.view(focalpoint = fp + [ step*np.sin(azm),-step*np.cos(azm), 0], distance = dist)
        if key == '5':
            mlab.view(focalpoint = fp + [ step*np.cos(azm), step*np.sin(azm), 0], distance = dist)
        if key == '6':
            mlab.view(focalpoint = fp + [-step*np.sin(azm), step*np.cos(azm), 0], distance = dist)
    
        # Make a Movie
        if key == 'm':
            # Enable off screen rendering, which shortens the time to create snapshots greatly
            engine.current_scene.scene.off_screen_rendering = True
            # Loop through different azimuths (rotations)
            i = 0
            for azimuth in range(0,360,30):
                # Set the view:
                mlab.view(azimuth = azimuth, elevation = 75, distance = 500, focalpoint = np.array([84465, 447547, 2.5]))
                # Save the scene
                engine.scenes[0].scene.save(r'D:\rongen\snapshot{:03d}.png'.format(i))
                i += 1
                # Pause for a second, to avoid crashes
                time.sleep(1)
            # Disable off screen rendering again
            engine.current_scene.scene.off_screen_rendering = False
    
    # Make our own zoom function
    if event == 'MouseWheelForwardEvent':
        dist *= 0.9
        mlab.view(focalpoint = fp, distance = dist)
    if event == 'MouseWheelBackwardEvent':
        dist *= 1.111
        mlab.view(focalpoint = fp, distance = dist)

Pressing m will now create a sequence of snapshots from different angles. These snapshots can be combined into a movie with a movie-maker program, ffmpeg, or whatever you like.
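For example, if ffmpeg is available, the call can even be made from Python with subprocess (a sketch: the frame rate, pixel format and output name are assumptions, and the input pattern mirrors the snapshot path used above):

```python
import subprocess

# ffmpeg reads snapshot000.png, snapshot001.png, ... via the %03d pattern
cmd = ['ffmpeg',
       '-framerate', '4',                   # 4 snapshots per second
       '-i', r'D:\rongen\snapshot%03d.png', # input pattern, matches save() above
       '-pix_fmt', 'yuv420p',               # pixel format most players accept
       'pointcloud.mp4']                    # hypothetical output name
print(' '.join(cmd))
# subprocess.call(cmd)  # uncomment to actually run ffmpeg
```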


In [16]:
fig             = mlab.gcf()
vtk_data_source = create_data_source()
add_colors(vtk_data_source)
change_interaction()
engine          = mlab.get_engine()
mask            = create_mask(100000)
glyph           = create_glyph()
mlab.show()

Adding model data

A way to add model data in Mayavi is to create a tvtk unstructured grid and add the data to it. The next section explains how to do this.

Note that data files are often built up differently, so it is hard to write a script that is applicable to all grids. The next cell will therefore not work as-is, but should be adapted to your specific dataset.


In [3]:
# The trick to making a tvtk.UnstructuredGrid is to keep an administration of
# the cells in the grid: for each cell we need to know which grid points
# surround it, i.e. the index of each of its nodes.

# Let's say we have a netcdf file with the following data:
xnode = dataset.variables['FlowElemContour_x'][:] # (N,4)-array with the x-coordinates of the cell corners
ynode = dataset.variables['FlowElemContour_y'][:] # (N,4)-array with the y-coordinates of the cell corners

xcc = dataset.variables['FlowElem_xcc'][:] # (N,1)-array with the x-coordinates of the cell centers
ycc = dataset.variables['FlowElem_ycc'][:] # (N,1)-array with the y-coordinates of the cell centers

# First we need to create an array with points.
points = np.c_[xnode.ravel(), ynode.ravel()]

# We only need every point once, so filter out the unique points:
b = np.ascontiguousarray(points).view(np.dtype((np.void, points.dtype.itemsize * points.shape[1])))
_, idx = np.unique(b, return_index=True)
unique_points = points[idx]

# For some reason creating a tvtk.UnstructuredGrid goes wrong when point 0 is
# requested. Therefore we add a dummy point, which will not be requested
unique_points = np.vstack((unique_points[0,:]+0.5, unique_points))

# Write a function to check where a point is in the unique_points array
def checkwhere(elm):
    c1 = np.where(elm[0] == unique_points[:,0])[0]
    c2 = np.where(elm[1] == unique_points[:,1])[0]
    return np.intersect1d(c1, c2)[0]

# Create the indices of the cell corners for each cell
indices = np.array([checkwhere(i) for i in np.c_[xnode.ravel(), ynode.ravel()]])
indices = indices.reshape((len(indices)//4, 4))

# Number of cells
n_cells = int(indices.shape[0])

# Array with number of nodes per cell
elemtype = np.sum(indices > 0, 1)

# Array with cell types, these depend on the number of nodes per cell.
# if these are not four per cell, script something to get the right cell
# type for each quantity of nodes
cell_types = np.ones(len(elemtype))*tvtk.Quad().cell_type

# Array with number of nodes per cell, and index of nodes
cells = np.c_[elemtype, indices].ravel()

# Array with offset, needed to read out cells
offsets = np.r_[[0], np.cumsum(elemtype + 1)]

# Convert cells to tvtk CellArray
cell_array = tvtk.CellArray()
cell_array.set_cells(n_cells, cells)

# Create the grid
grid = tvtk.UnstructuredGrid(points = unique_points)
grid.set_cells(cell_types, offsets, cell_array)

# Now scalar values can be set on the cell centers or cell points (nodes)
# If you have cell data available you might need to interpolate to 
# point data, or vice versa
grid.point_data.scalars = your_point_data
grid.point_data.scalars.name = 'my_data'
grid.cell_data.scalars = your_cell_data
grid.cell_data.scalars.name = 'my_data'

# Save the grid to an xml file
w = tvtk.XMLUnstructuredGridWriter(input = grid, file_name = 'my_grid.xml')
w.write()
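To see what this administration looks like in practice, here is the same bookkeeping for a toy grid of two quads sharing an edge (pure NumPy, no tvtk required; the node numbering is made up for illustration):

```python
import numpy as np

# Two quads sharing an edge; node layout:
#   3 -- 4 -- 5
#   |    |    |
#   0 -- 1 -- 2
unique_points = np.array([[0., 0.], [1., 0.], [2., 0.],
                          [0., 1.], [1., 1.], [2., 1.]])

# Per cell: the indices of its corner nodes (counter-clockwise)
indices = np.array([[0, 1, 4, 3],
                    [1, 2, 5, 4]])

n_cells  = indices.shape[0]
elemtype = np.full(n_cells, 4)       # four nodes per quad

# VTK connectivity list: [n_nodes, i0, i1, i2, i3, n_nodes, i0, ...]
cells = np.c_[elemtype, indices].ravel()

# Offsets mark where each cell's entry starts in the cells array
offsets = np.r_[[0], np.cumsum(elemtype + 1)]

print(cells.tolist())    # [4, 0, 1, 4, 3, 4, 1, 2, 5, 4]
print(offsets.tolist())  # [0, 5, 10]
```

Handing these arrays to tvtk.CellArray.set_cells and UnstructuredGrid.set_cells then works exactly as in the cell above.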

When you have a grid created or saved, you can add it to Mayavi as follows:


In [18]:
# Load the grid from file if it is not stored in memory
read = tvtk.XMLUnstructuredGridReader(file_name = r'path\to\xml\file.xml')
read.update()
grid = read.get_output()

# Show the grid:
fig = mlab.gcf()
engine = mlab.get_engine()
mlab.pipeline.add_dataset(grid)
engine.add_module(Surface())
mlab.show()

In Mayavi, many properties affecting the appearance of your grid can be changed by clicking the Mayavi icon, which opens the pipeline. A pro tip is to click the record button: the code for the changes you make after clicking it is shown in the recording window. This makes it easy to find out how to change certain properties of your grids directly in your script.