This example compares two methods for dimensionality reduction: tICA and PCA.

In [ ]:
%matplotlib inline
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import simtk.openmm as mm
from msmbuilder.decomposition import tICA, PCA

First, let's use OpenMM to run some dynamics on the 3D potential energy function

$$E(x,y,z) = 5 \cdot (x-1)^2 \cdot (x+1)^2 + y^2 + z^2$$

From looking at this equation, we can see that along the $x$ dimension, the potential is a double-well, whereas along the $y$ and $z$ dimensions, we've just got a harmonic potential. So, we should expect that $x$ is the slow degree of freedom, whereas the system should equilibrate rapidly along $y$ and $z$.
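As a quick sanity check on that reading of the potential, here is a small plain-Python snippet (independent of the simulation code below, and not part of the original example) evaluating $E$ along the $x$ axis. It confirms the two degenerate wells at $x = \pm 1$ and a barrier of height 5 at $x = 0$:

```python
def energy(x, y, z):
    # The 3D potential from the text: a double-well in x, harmonic in y and z
    return 5 * (x - 1)**2 * (x + 1)**2 + y**2 + z**2

print(energy(1.0, 0.0, 0.0))   # 0.0 -- the right well
print(energy(-1.0, 0.0, 0.0))  # 0.0 -- the left well
print(energy(0.0, 0.0, 0.0))   # 5.0 -- the barrier separating the wells
```

Crossing that barrier is the rare event, which is why relaxation along $x$ is slow compared to the harmonic $y$ and $z$ directions.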

In [ ]:
def propagate(n_steps=10000):
    "Simulate some Langevin dynamics on the potential"
    system = mm.System()
    system.addParticle(1)  # a single particle of unit mass
    force = mm.CustomExternalForce('5*(x-1)^2*(x+1)^2 + y^2 + z^2')
    force.addParticle(0, [])
    system.addForce(force)
    integrator = mm.LangevinIntegrator(500, 1, 0.02)
    context = mm.Context(system, integrator)
    context.setPositions([[0, 0, 0]])
    context.setVelocitiesToTemperature(500)
    x = np.zeros((n_steps, 3))
    for i in range(n_steps):
        x[i] = context.getState(getPositions=True).getPositions(asNumpy=True)._value
        integrator.step(1)
    return x

Okay, let's run the dynamics. The plot below shows the $x$, $y$ and $z$ coordinates vs. time for the trajectory.

In [ ]:
trajectory = propagate(10000)

ylabels = ['x', 'y', 'z']
for i in range(3):
    plt.subplot(3, 1, i+1)
    plt.plot(trajectory[:, i])
plt.xlabel('Simulation time')

Note that the variance of $x$ is much lower than the variance in $y$ or $z$, despite its bimodal distribution.
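To make that point concrete without rerunning the simulation, here is an illustrative stand-in (plain NumPy, not the OpenMM trajectory; the spreads are chosen by hand for the example): a bimodal coordinate hugging the wells at $\pm 1$ can still have a smaller variance than a single broad Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Bimodal stand-in for x: tight fluctuations around the wells at +/-1
x = rng.choice([-1.0, 1.0], size=n) + 0.1 * rng.normal(size=n)
# Broader unimodal stand-in for y (width 1.5 is an arbitrary choice)
y = 1.5 * rng.normal(size=n)

print(np.var(x))  # close to 1: almost all from the +/-1 separation
print(np.var(y))  # larger, despite y being unimodal
```

Variance measures spread about the mean, not the number of modes, so high variance alone does not identify the slow, mechanistically interesting coordinate.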

In [ ]:
# fit the two models
tica = tICA(n_components=1, lag_time=100)
pca = PCA(n_components=1)
tica.fit([trajectory])
pca.fit([trajectory])

In [ ]:
plt.subplot(1, 2, 1)
plt.title('1st tIC')
plt.bar([1, 2, 3], tica.components_[0], color='b')
plt.xticks([1.5, 2.5, 3.5], ['x', 'y', 'z'])

plt.subplot(1, 2, 2)
plt.title('1st PC')
plt.bar([1, 2, 3], pca.components_[0], color='r')
plt.xticks([1.5, 2.5, 3.5], ['x', 'y', 'z'])

print('1st tIC', tica.components_ / np.linalg.norm(tica.components_))
print('1st PC ', pca.components_ / np.linalg.norm(pca.components_))

Note that the first tIC "finds" a projection that resolves just the $x$ coordinate, whereas the first PC, which maximizes variance rather than autocorrelation, mixes in the faster $y$ and $z$ degrees of freedom.
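The contrast can be reproduced without msmbuilder or OpenMM at all. The sketch below (plain NumPy; the hopping rate, noise widths, and lag are arbitrary choices for illustration) builds a slow, low-variance two-state coordinate and a fast, high-variance noisy one, then compares the two criteria directly: PCA ranks coordinates by variance, while tICA ranks them by autocorrelation at the lag time.

```python
import numpy as np

rng = np.random.default_rng(1)
T, lag = 200_000, 100

# Slow coordinate: rare hops between wells at +/-1, plus small noise
x = np.empty(T)
state = 1.0
for t in range(T):
    if rng.random() < 1e-3:  # rare "barrier crossing"
        state = -state
    x[t] = state + 0.1 * rng.normal()

# Fast coordinate: uncorrelated noise with a larger spread
y = 1.8 * rng.normal(size=T)

def autocorr(a, lag):
    # Normalized autocorrelation of a at the given lag
    a = a - a.mean()
    return np.dot(a[:-lag], a[lag:]) / np.dot(a, a)

print(np.var(x), np.var(y))                # y wins on variance -> PCA picks y
print(autocorr(x, lag), autocorr(y, lag))  # x wins on autocorrelation -> tICA picks x
```

This is the heart of the example: the highest-variance direction and the slowest direction are different axes, and only the time-lagged criterion recovers the slow one.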