Touch your data! 3D Color Printing with Python

  • Joe Kington, Chevron

But First, Puppies!!

So why can't we 3D print Geology?

  • Quick shout out to Brendan Sullivan: This is his idea/question
  • It needs to be useful, but often comes off as gimmicky

How do we avoid the gimmicks?

  • Form is not enough for geoscience
  • Need color 3D printing!
  • Need to find the right niche

Communication Tool, not a Visualization Tool


In [1]:
%run slice_3d_example.py


Communication Tool, not a Visualization Tool

  • Each individual drives the interaction
  • Physical is the original interactive
  • Our Niche: Communication with non-expert audience

Great, but what about Python?

  • Mayavi / mlab / tvtk to the rescue!
    • VTK is fantastic for visualizing 3D datasets
    • Mayavi/mlab/tvtk are more pythonic
  • VRML is the de facto standard for color 3D printing
    • fig.scene.save_vrml(filename)

Big Caveat: You need to verify printability!!
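
A minimal sketch of that export step (any Mayavi scene works the same way; the test surface and filename here are just placeholders):

from mayavi import mlab

# Build any Mayavi scene -- a built-in test surface stands in for real data.
fig = mlab.figure(bgcolor=(1, 1, 1))
mlab.test_surf()

# Export the whole scene as VRML for a color 3D printing service.
# The result still needs a printability check (wall thickness, watertightness,
# physical size) before sending it off.
fig.scene.save_vrml('model.wrl')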


In [3]:
# %load mayavi_logo.py
"""
A script to generate the Mayavi logo: a Boy surface.

The Boy surface is a mathematical parametric surface; see
http://en.wikipedia.org/wiki/Boy%27s_surface . We display it by sampling
the two parameters of the surface on a grid and using mlab's mesh
function: :func:`mayavi.mlab.mesh`.
"""

# Author: Gael Varoquaux <gael.varoquaux@normalesup.org>
# Copyright (c) 2007, Enthought, Inc.
# License: BSD Style.


from numpy import sin, cos, mgrid, pi, sqrt
from mayavi import mlab

import utils

fig = mlab.figure(fgcolor=(0, 0, 0), bgcolor=(1, 1, 1))
u, v = mgrid[-0.035:pi:0.01, -0.035:pi:0.01]

denom = sqrt(2) - sin(2 * u) * sin(3 * v)
X = 2 / 3. * (cos(u) * cos(2 * v) + sqrt(2) * sin(u) * cos(v)) * cos(u) / denom
Y = 2 / 3. * (cos(u) * sin(2 * v) - sqrt(2) * sin(u) * sin(v)) * cos(u) / denom
Z = -sqrt(2) * cos(u) * cos(u) / denom
S = sin(u)

mlab.mesh(X, Y, Z, scalars=S, colormap='YlGnBu')

# Nice view from the front
mlab.view(0.0, -5.0, 4)

utils.present(fig)

The building blocks for a topographic model


In [5]:
import numpy as np
from mayavi import mlab
from osgeo import gdal
gdal.UseExceptions()

import utils

z = gdal.Open('data/alaska/clipped_elev.tif').ReadAsArray()

fig = mlab.figure()
mlab.surf(z, warp_scale=0.05, colormap='gist_earth')
utils.present(fig) # Normally, we'd call mlab.show()

Let's worry about true vertical exaggeration


In [6]:
def read(filename):
    ds = gdal.Open(filename)
    elev = ds.ReadAsArray()

    # True x, y coordinates
    x0, dx, dxdy, y0, dydx, dy = ds.GetGeoTransform()
    i, j = np.mgrid[:elev.shape[0], :elev.shape[1]]
    x = x0 + dx * j + dxdy * i
    y = y0 + dy * i + dydx * j

    return elev, x, y

z, x, y = read('data/alaska/clipped_elev.tif')

fig = mlab.figure()
mlab.mesh(x, y, z, colormap='gist_earth')
utils.present(fig)

Now, let's make it a printable model


In [7]:
z, x, y = read('data/alaska/clipped_elev.tif')

xpad, ypad, bottomz = [np.pad(item, 1, mode='edge') for item in (x, y, z)]
bottomz[1:-1, 1:-1] = -1000

fig = mlab.figure()
mlab.mesh(x, y, z, colormap='gist_earth')

# Add the bottom
mlab.mesh(xpad, ypad, bottomz, color=(1,1,1)) # Inefficient, but okay for now...

utils.present(fig)

But what about scale??


In [8]:
fig = mlab.figure()
mlab.mesh(x - x.min(), y - y.min(), z, colormap='gist_earth')
mlab.axes()
utils.present(fig)

VTK lets us "shrink" things


In [9]:
import mayavi.tools

def scale(fig, ratio):
    """Scales a Mayavi figure and resets the camera."""
    for actor in fig.scene.renderer.actors:
        actor.scale = actor.scale * ratio
    mayavi.tools.camera.view(distance='auto', focalpoint='auto', figure=fig)
    
fig = mlab.figure()
mesh = mlab.mesh(x - x.min(), y - y.min(), z, colormap='gist_earth')
mlab.mesh(xpad, ypad, bottomz, color=(1,1,1))

scale(fig, 0.0001)
mlab.axes()
utils.present(fig)

Or expand them


In [10]:
fig = mlab.figure()
mesh = mlab.mesh(x, y, z, colormap='gist_earth')
mlab.mesh(xpad, ypad, bottomz, color=(1,1,1))

utils.scale(fig, 0.0001)
utils.scale(fig, [1, 1, 2.5]) # Apply vertical exaggeration
utils.present(fig)

But what we really want to do is "drape" data on a surface...

In 3D modeling terms, this is a "texture"


In [11]:
from tvtk.api import tvtk

def texture(mesh, fname, clamp=True):
    img = tvtk.PNGReader(file_name=fname).output
    t = tvtk.Texture(input=img, interpolate=True, edge_clamp=clamp)
    
    mesh.actor.enable_texture = True
    mesh.actor.actor.texture = t
    mesh.actor.tcoord_generator_mode = 'plane'
    mesh.actor.mapper.scalar_visibility = False
    
fig = mlab.figure()
mlab.mesh(xpad, ypad, bottomz, color=(1,1,1))

mesh = mlab.mesh(x, y, z)
texture(mesh, "images/geology.png")

utils.scale(fig, 0.0001 * np.array([1, 1, 2.5]))
utils.present(fig)

One caveat with Shapeways

  • Shapeways (along with some other 3D printing services) doesn't like "embedded" textures in VRML
  • Solution: un-embed the textures


In [ ]:
# %load shapeways_io.py
import os
import binascii
import tempfile
from zipfile import ZipFile, ZIP_DEFLATED
from cStringIO import StringIO

import numpy as np
import Image

def save_vrml(fig, output_filename):
    """
    Saves a Mayavi figure as shapeways-formatted VRML in a zip file.

    Parameters
    ----------
    fig : a Mayavi/mlab figure
    output_filename : string
    """
    _, fname = tempfile.mkstemp()
    fig.scene.save_vrml(fname)

    wrl_name = os.path.basename(output_filename)
    if wrl_name.endswith('.zip'):
        wrl_name = wrl_name[:-len('.zip')]
    vrml2shapeways(fname, output_filename, wrl_name)

    os.remove(fname)

def vrml2shapeways(filename, output_filename, wrl_name=None):
    """
    Un-embeds images from a VRML file and creates a zip archive with the
    images saved as .png's and the VRML file with links to the images.

    Parameters
    ----------
    filename : string
        The name of the input VRML file
    output_filename : string
        The filename of the zip archive that will be created.
    wrl_name : string or None (optional)
        The name of the VRML file in the zip archive. If None, this will be
        taken from *filename*.
    """
    if not output_filename.endswith('.zip'):
        output_filename += '.zip'

    with ZipFile(output_filename, 'w', ZIP_DEFLATED) as z:
        if wrl_name is None:
            wrl_name = os.path.basename(filename)
        if not wrl_name.endswith('.wrl'):
            wrl_name += '.wrl'

        outfile = StringIO()
        with open(filename, 'r') as infile:
            images = unembed_wrl_images(infile, outfile)
        z.writestr(wrl_name, outfile.getvalue())

        for fname, im in images.iteritems():
            outfile = StringIO()
            im.save(outfile, format='png')
            z.writestr(fname, outfile.getvalue())

def unembed_wrl_images(infile, outfile):
    """
    Converts embedded images in a VRML file to linked .png's.

    Parameters
    ----------
    infile : file-like object
    outfile: file-like object

    Returns
    -------
    images : a dict of filename : PIL Image pairs

    Notes
    -----
    Should use a proper parser instead of just iterating line-by-line...
    """
    i = 1
    images = {}
    for line in infile:
        if 'texture' in line:
            data, width, height = read_texture_wrl(infile)
            image_filename = 'texture_{}.png'.format(i)
            im = ascii2image_wrl(data, width, height)
            line = '            texture ImageTexture {{ url ["{}"]}}\n'
            line = line.format(image_filename)
            images[image_filename] = im
            i += 1
        outfile.write(line)
    return images

def read_texture_wrl(infile):
    """
    Reads hexlified image data from the current position in a VRML file.
    """
    header = next(infile).strip().split()
    width, height, nbands = map(int, header[1:])

    data = []
    for line in infile:
        line = line.strip().split()
        for item in line:
            if item.startswith('0x'):
                data.append(item)
            else:
                return data, width, height

def ascii2image_wrl(data, width, height):
    """
    Converts hexlified data in VRML to a PIL image.
    """
    if len(data[0]) == 8:
        nbands = 3
    elif len(data[0]) == 10:
        nbands = 4
    else:
        raise ValueError('Unrecognized data type for image data')

    results = []
    for item in data:
        results.append(binascii.unhexlify(item[2:]))
    data = results
    data = ''.join(data)
    dat = np.fromstring(data, dtype=np.uint8).reshape(height, width, nbands)
    dat = np.roll(dat, nbands, -1)
    dat = np.flipud(dat)
    im = Image.fromarray(dat)
    return im
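
A quick usage sketch (not part of the module above; the figure and output filename are placeholders):

# Write a Shapeways-ready archive: a .wrl file plus its un-embedded texture .png's.
save_vrml(fig, 'alaska_model.zip')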

Let's add some nice textures on the sides...

First, let's build the sides individually


In [ ]:
def build_sides(x, y, z, level):
    slices = [np.s_[:,0], np.s_[:,-1], np.s_[0,:], np.s_[-1,:]]
    for sl in slices:
        build_side(x[sl], y[sl], z[sl], level)

def build_side(x, y, z, base_level):
    x = np.vstack([x, x])
    y = np.vstack([y, y])
    z = np.vstack([z, base_level * np.ones_like(z)])

    mesh = mlab.mesh(x, y, z, color=(1, 1, 1))
    return mesh

def build_bottom(x, y, z, level):
    i = [-1, -1, 0, 0]
    j = [0, -1, 0, -1]
    corners = lambda item: item[i, j].reshape(2, 2)
    mlab.mesh(corners(x), corners(y), level * np.ones((2,2)), color=(1, 1, 1))
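
These untextured pieces can be driven directly with the arrays built earlier; a quick sketch, reusing the -1000 base level from the flat-bottom example:

fig = mlab.figure()
mlab.mesh(x, y, z, colormap='gist_earth')

# Close the model off with plain white sides and a flat bottom at the base level.
build_sides(x, y, z, -1000)
build_bottom(x, y, z, -1000)

utils.scale(fig, 0.0001 * np.array([1, 1, 2.5]))
utils.present(fig)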

Then we can texture them individually


In [13]:
def build_bottom(x, y, z, level):
    i = [-1, -1, 0, 0]
    j = [0, -1, 0, -1]
    corners = lambda item: item[i, j].reshape(2, 2)
    bottom = mlab.mesh(corners(x), corners(y), level * np.ones((2,2)))

    utils.texture(bottom, fname='data/alaska/bottom_annotated.png', clamp=False)
    return bottom

def build_sides(x, y, z, level):
    images = ['left_annotated_halo.png', 'right_annotated_halo.png',
              'back_annotated_halo.png', 'front_annotated_halo.png']

    slices = [np.s_[:,0], np.s_[:,-1], np.s_[0,:], np.s_[-1,:]]
    for sl, im in zip(slices, images):
        image = 'data/alaska/' + im
        build_side(x[sl], y[sl], z[sl], level, image)
        
def build_side(x, y, z, level, fname):
    x = np.vstack([x, x])
    y = np.vstack([y, y])
    z = np.vstack([z, level * np.ones_like(z)])

    mesh = mlab.mesh(x, y, z)  # Mesh the stacked (top edge + base level) coordinates
    utils.texture(mesh, fname=fname)
    return mesh
        
%run alaska_model_textured_sides.py

We can extend the same idea to seismic data


In [14]:
%run slice_3d_example.py
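
slice_3d_example.py isn't reproduced here; as a rough sketch of the idea, orthogonal slices through a seismic amplitude cube can be shown with mlab's image plane widgets (the random array below is only a placeholder for real data):

import numpy as np
from mayavi import mlab

# Placeholder volume standing in for a real seismic amplitude cube.
seismic = np.random.random((100, 100, 50))

fig = mlab.figure(bgcolor=(1, 1, 1))
src = mlab.pipeline.scalar_field(seismic)

# Orthogonal slices that can be dragged through the volume interactively.
mlab.pipeline.image_plane_widget(src, plane_orientation='x_axes', colormap='gray')
mlab.pipeline.image_plane_widget(src, plane_orientation='z_axes', colormap='gray')
mlab.show()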

Now we'll do the same thing as the topographic model, but texture the sides with seismic data


In [15]:
%run make_base.py

We can even use another horizon as the "base" for the sides


In [16]:
%run make_top.py
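
make_top.py isn't shown, but the core change follows directly from build_side: instead of a constant base level, stack a second horizon's z-values under the first. A sketch (build_side_between is a hypothetical name; z_top and z_base are placeholder 2D horizons on the same x/y grid):

def build_side_between(x, y, z_top, z_base, fname):
    # Like build_side, but the lower edge follows a second horizon
    # instead of a flat base level.
    x = np.vstack([x, x])
    y = np.vstack([y, y])
    z = np.vstack([z_top, z_base])

    mesh = mlab.mesh(x, y, z)
    utils.texture(mesh, fname=fname)
    return mesh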

Conclusions

  • 3D printing can be a useful geoscience communication technique.
  • Color (not just B&W) is vital for geoscience! It's about more than just form.
  • Generating printable models from real datasets is easy with the scientific Python stack.
  • 3D-printed 3D seismic is nifty!