To run this code, start a Jupyter kernel at the ldif repository root and attach to it. The easiest way is to register an ldif conda environment with Jupyter. From a terminal with that environment active, run
python -m ipykernel install --user --name ldif-env --display-name "LDIF+SIF Kernel"
Then, within the notebook, click Kernel -> Change kernel -> LDIF+SIF Kernel. Once this is done, all of the following cells should run.
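To confirm the notebook picked up the right environment, an optional check like the following verifies that the ldif package is importable from the active kernel.
In [ ]:
# Optional sanity check: print which Python the kernel runs and whether
# the ldif package is importable from it.
import sys
import importlib.util
print('Python executable:', sys.executable)
print('ldif importable:', importlib.util.find_spec('ldif') is not None)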
In [1]:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import importlib
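# The importlib.reload() calls below let edits to the ldif/ source take
# effect without restarting the kernel.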
import numpy as np
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
import trimesh
from ldif.datasets import shapenet_np
importlib.reload(shapenet_np)
from ldif.util import geom_util
importlib.reload(geom_util)
from ldif.representation import structured_implicit_function
importlib.reload(structured_implicit_function)
from ldif.inference import predict
importlib.reload(predict)
from ldif.inference import experiment
importlib.reload(experiment)
from ldif.inference import example
importlib.reload(example)
from ldif.util import gaps_util
importlib.reload(gaps_util)
from ldif.inference import metrics
importlib.reload(metrics)
from ldif.util import random_util
from ldif.util import file_util
importlib.reload(random_util)
from ldif.util.file_util import log
log.set_level('error') # Only show errors.
Set the dataset directory to the output path root produced by the meshes2dataset.py command. See the README.md for more documentation.
In [2]:
dataset_directory = '/path/to/generated/dataset'
e = example.InferenceExample.from_local_dataset_tokens(
    dataset_directory, 'val', '02691156', '3b9c905771244df7b6ed9420d56b12a9')
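If the example fails to load, the most common cause is a wrong dataset_directory; an optional check like this fails fast with a clearer message.
In [ ]:
# Optional: fail fast if the dataset path is wrong.
import os
assert os.path.isdir(dataset_directory), (
    'Dataset directory not found: %s' % dataset_directory)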
In [3]:
# The example objects lazily load the various data values as they are requested.
# mshview() is interactive. Its output doesn't get saved with the .ipynb file.
gaps_util.mshview(e.gt_mesh)
Encoders map from an example to the SIF representation. The output is just a numpy array containing the blob and implicit parameters.
Decoders map from a SIF vector to a variety of outputs, such as a mesh, a set of inside/outside decisions at query points, an ellipsoid rendering, a txt file, or an interactive viewer session. Please see the decoder class methods for all the outputs.
Both encoders and decoders can be loaded from the identifiers of the training jobs that generated them. Note that they are built dynamically, not frozen models: the class constructor generates the graph from the current state of the ldif/ codebase and then tries to restore the model weights. If you edit the code, reload the predict module, and then make a new model object, you can see how the code changes affect the model.
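For example, the edit/reload/rebuild loop looks roughly like this (a sketch; after reloading, re-run the constructor cells below):
In [ ]:
# After editing code under ldif/, reload the module so the next model
# construction builds the graph from the edited source.
importlib.reload(predict)
# ...then re-run the DepthEncoder/Decoder constructor cells below.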
In [4]:
model_directory = '/path/to/trained_model_directory/'
encoder = predict.DepthEncoder.from_modeldir(
    model_directory=model_directory,
    model_name='sif-transcoder',
    experiment_name='ldif-autoencoder',
    xid=1,  # Always 1
    ckpt_idx=-1)  # -1 means newest
In [5]:
decoder = predict.Decoder.from_modeldir(
    model_directory=model_directory,
    model_name='sif-transcoder',
    experiment_name='ldif-autoencoder',
    xid=1,
    ckpt_idx=-1)  # -1 means newest
If the inference kernel is not installed, uncomment the following code:
In [6]:
# decoder.use_inference_kernel = False
In [7]:
embedding = encoder.run_example(e)
mesh = decoder.extract_mesh(embedding, resolution=256)
gaps_util.mshview(mesh)
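Since the embedding is a numpy array (see above) and the extracted mesh supports the trimesh API (the export cell below relies on it), both can be inspected directly; a quick look:
In [ ]:
# Inspect the SIF embedding and the extracted mesh.
print('embedding:', type(embedding), getattr(embedding, 'shape', None))
print('mesh: %d vertices, %d faces' % (len(mesh.vertices), len(mesh.faces)))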
In [8]:
# IoU can be computed quickly:
decoder.iou(embedding, e)
Out[8]:
In [9]:
txt = decoder.savetxt(embedding, '/data/test-sif.txt')
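Assuming savetxt writes a plain-text file (the extension suggests it does), the serialized parameters can be inspected directly:
In [ ]:
# Peek at the serialized SIF parameters written above.
with open('/data/test-sif.txt') as f:
    print(f.read()[:500])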
In [10]:
serialized_mesh = mesh.export('/data/test-sif.ply')
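The exported PLY can be reloaded with trimesh (imported in the first cell) to verify the file round-trips:
In [ ]:
# Reload the exported mesh for verification or further processing.
reloaded = trimesh.load('/data/test-sif.ply')
print(reloaded)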
In [11]:
# The other metrics take a few seconds:
metrics.print_all(embedding, decoder, e, resolution=256)
The decoder provides a variety of viewers. This one renders the templates as ellipsoids:
In [12]:
decoder.interactive_viewer(embedding)
In [ ]: