Running an MSTIS simulation

Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be reloaded from storage and how regenerated, equivalent objects can be used in place of ones that weren't stored.

Tasks covered in this notebook:

  • Loading OPS objects from storage
  • Ways of assigning initial trajectories to initial samples
  • Setting up a path sampling simulation with various move schemes
  • Visualizing trajectories while the path sampling is running

In [1]:
%matplotlib inline
import openpathsampling as paths
import numpy as np

Loading things from storage

First we'll reload some of the stuff we stored before. Of course, this starts with opening the file.


In [2]:
old_store = paths.AnalysisStorage("mstis_bootstrap.nc")

A lot of information can be recovered from the old storage, so we don't have to recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to the new ones.


In [3]:
print "PathMovers:", len(old_store.pathmovers)
print "Samples:", len(old_store.samples)
print "Ensembles:", len(old_store.ensembles)
print "SampleSets:", len(old_store.samplesets)
print "Snapshots:", len(old_store.snapshots)
print "Networks:", len(old_store.networks)


PathMovers: 0
Samples: 10
Ensembles: 100
SampleSets: 1
Snapshots: 1210
Networks: 0

Loading from storage is very easy. Each store behaves like a list. We take the 0th snapshot as a template for the new storage we'll create later (it doesn't actually matter which one). There's only one engine stored, so we take the first (and only) one.


In [4]:
template = old_store.snapshots[0]

In [5]:
engine = old_store.engines[0]

Named objects can be found in storage by using their name as a dictionary key. This allows us to load our old collective variables and states.


In [6]:
opA = old_store.cvs['opA']
opB = old_store.cvs['opB']
opC = old_store.cvs['opC']

In [7]:
stateA = old_store.volumes['A']
stateB = old_store.volumes['B']
stateC = old_store.volumes['C']

In [8]:
# we could also load the interfaces, but it takes less code to build new ones:
interfacesA = paths.VolumeInterfaceSet(opA, 0.0, [0.2, 0.3, 0.4])
interfacesB = paths.VolumeInterfaceSet(opB, 0.0, [0.2, 0.3, 0.4])
interfacesC = paths.VolumeInterfaceSet(opC, 0.0, [0.2, 0.3, 0.4])

Once again, we have everything we need to build the MSTIS network. Recall that this will create all the ensembles we need for the simulation. However, even though the ensembles are semantically the same, these are not the same objects. We'll need to deal with that later.


In [9]:
ms_outers = paths.MSOuterTISInterface.from_lambdas(
    {ifaces: 0.5 
     for ifaces in [interfacesA, interfacesB, interfacesC]}
)
mstis = paths.MSTISNetwork(
    [(stateA, interfacesA),
     (stateB, interfacesB),
     (stateC, interfacesC)],
    ms_outers=ms_outers
)

Now we need to set up real trajectories that we can use for each of these. We can start by loading the stored sampleset.


In [10]:
# load the sampleset we have saved before
old_sampleset = old_store.samplesets[0]

About Samples

The OPS object called Sample is used to associate a trajectory with a replica ID and an ensemble. The trajectory needs to be associated with an ensemble so we know how to get correct statistics from the many ensembles that we might be sampling simultaneously. The trajectory needs to be associated with a replica ID so that replica exchange approaches can be analyzed.
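
For illustration, constructing a Sample directly just binds those three pieces together. A minimal sketch, where some_ensemble and some_trajectory are hypothetical placeholders:

# hypothetical example: associate a trajectory with an ensemble under replica ID 0
sample = paths.Sample(replica=0, ensemble=some_ensemble, trajectory=some_trajectory)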

Since the ensembles in our MSTIS network are not the exact ensemble objects that we saved our samples with (they were rebuilt), we still need a way to identify which of the new ensembles to associate them with.

There are two main ways to do this. The first is to take one trajectory, and associate it with as many ensembles as possible. If your first path comes from a TPS simulation, that is the approach you'll want to take.

The second approach is better suited to our conditions here: we already have a good trajectory for each ensemble. So we just want to remap our old ensembles to new ones.

Loading one trajectory into lots of ensembles


In [11]:
# this makes a dictionary mapping the outermost ensemble of each sampling transition 
# to a trajectory from the old_sampleset that satisfies that ensemble
trajs = {}
for ens in [t.ensembles[-1] for t in mstis.sampling_transitions]:
    trajs[ens] = [s.trajectory for s in old_sampleset if ens(s.trajectory)][0]
    
assert(len(trajs)==3) # otherwise, we have a problem

In [12]:
initial_samples = {}
for t in mstis.sampling_transitions:
    initial_samples[t] = paths.SampleSet.map_trajectory_to_ensembles(
        trajs[t.ensembles[-1]], t.ensembles
    )

In [13]:
single_trajectory_sset = paths.SampleSet.relabel_replicas_per_ensemble(initial_samples.values())

The sanity_check function ensures that all trajectories in the sampleset are actually in the ensemble they claim to be associated with. At this point, we should have 9 samples: one for each of the 3 interface ensembles in each of the 3 sampling transitions.


In [14]:
single_trajectory_sset.sanity_check()
assert(len(single_trajectory_sset)==9)

Remapping old ensembles to new ensembles

If your old and new ensembles have the same string representations, then OPS has a function to help you automatically map them. As long as you create the ensembles in the same way, they'll have the same string representation. If the string representations don't match, you would have to assign trajectories to ensembles by hand, as sketched below (which isn't hard, but is a bit tedious).
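
For reference, a by-hand assignment just means building the Samples yourself. A minimal sketch, with the caveat that you must supply the correct pairing of old samples to new ensembles yourself (the zip below is only a placeholder for that pairing):

# hypothetical by-hand mapping: you are responsible for working out which new
# ensemble each old sample belongs to; zip() here is only a placeholder pairing
pairs = zip(old_sampleset, mstis.sampling_ensembles)
by_hand_sset = paths.SampleSet(
    [paths.Sample(replica=s.replica, ensemble=new_ens, trajectory=s.trajectory)
     for (s, new_ens) in pairs]
)
by_hand_sset.sanity_check()  # verifies each trajectory satisfies its claimed ensemble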


In [15]:
sset = paths.SampleSet.translate_ensembles(old_sampleset, mstis.sampling_ensembles)

In [16]:
sset.sanity_check()
assert(len(sset)==9)

In [17]:
# tests only: this cell keeps the single-trajectory sampleset for automated testing;
# the next cell switches back to the bootstrap sampleset when running the notebook live
bootstrap_sset = sset
sset = single_trajectory_sset

In [18]:
#! skip
# tests don't run this, but users should!
sset = bootstrap_sset

Setting up special ensembles

Whichever way we initially set up the SampleSet, at this point it only contains samples for the normal interface ensembles of each sampling transition. Now we need to put trajectories into the various auxiliary ensembles.

Multiple state outer ensemble

The multiple state outer ensemble is, in fact, sampled during the bootstrapping; indeed, it is sampled once for every state that shares it. So it is very easy to find a trajectory that satisfies the ensemble and to add that sample to our sampleset.


In [19]:
for outer_ens in mstis.special_ensembles['ms_outer']:
    # doesn't matter which we take, so we take the first
    traj = next(s.trajectory for s in old_sampleset if outer_ens(s.trajectory))
    samp = paths.Sample(
            replica=None,
            ensemble=outer_ens,
            trajectory=traj
    )
    # now we apply it and correct for the replica ID
    sset.append_as_new_replica(samp)

In [20]:
sset.sanity_check()
assert(len(sset)==10)

Minus interface ensemble

The minus interface ensembles do not yet have a trajectory. We will generate them by starting with same-state trajectories (A-to-A, B-to-B, C-to-C) in each interface, and extending into the minus ensemble.

  • check whether the trajectory ends in the same state it started from (e.g., A-to-A); if not, reshoot until it does
  • extend the trajectory until it satisfies the minus ensemble

First we need to make sure that the trajectory in the innermost ensemble of each state also ends in that state. This is necessary so that, when we extend the trajectory, it can extend into the minus ensemble.

If the trajectory isn't right, we run a shooting move on it until it is.


In [21]:
for transition in mstis.sampling_transitions:
    innermost_ensemble = transition.ensembles[0]
    shooter = None
    # if the innermost trajectory does not end in its initial state, set up a
    # one-way shooting mover (wrapped in a locked move scheme) to fix that
    if not transition.stateA(sset[innermost_ensemble].trajectory[-1]):
        shooter = paths.OneWayShootingMover(ensemble=innermost_ensemble,
                                            selector=paths.UniformSelector(),
                                            engine=engine)
        pseudoscheme = paths.LockedMoveScheme(root_mover=shooter)
        pseudosim = paths.PathSampling(storage=None,
                                       move_scheme=pseudoscheme,
                                       sample_set=sset)
    # reshoot until the trajectory ends in the initial state
    while not transition.stateA(sset[innermost_ensemble].trajectory[-1]):
        pseudosim.run(1)
        sset = pseudosim.sample_set

Now that the trajectories in all the innermost ensembles are safe to use, we extend each of them into the corresponding minus ensemble:


In [22]:
minus_samples = []
for transition in mstis.sampling_transitions:
    minus_samples.append(transition.minus_ensemble.extend_sample_from_trajectories(
        sset[transition.ensembles[0]].trajectory,
        replica=-len(minus_samples)-1,
        engine=engine
    ))
sset = sset.apply_samples(minus_samples)

In [23]:
sset.sanity_check()
assert(len(sset)==13)

Equilibration

In molecular dynamics, you need to equilibrate if you don't start with an equilibrium frame (e.g., if you start with solvent molecules on a grid, your system should equilibrate before you start taking statistics). Similarly, if you start with a set of paths which are far from the path ensemble equilibrium, you need to equilibrate. This could either be because your trajectories are not from the real dynamics (generated with metadynamics, high temperature, etc.) or because your trajectories are not representative of the path ensemble (e.g., if you put transition trajectories into all interfaces).

As with MD, equilibration can use the same process as the production simulation. However, in path sampling, it doesn't have to: we can equilibrate without replica exchange moves or path reversal moves, for example. In the example below, we create a MoveScheme that only includes shooting movers.


In [24]:
equil_scheme = paths.OneWayShootingMoveScheme(mstis, engine=engine)

In [25]:
equilibration = paths.PathSampling(
    storage=None,
    sample_set=sset,
    move_scheme=equil_scheme
)

In [26]:
#! skip
# tests need the unequilibrated samples to ensure passing
equilibration.run(5)


Working on Monte Carlo cycle number 5
Running for 0 seconds -  5.50 steps per second
Expected time to finish: 0 seconds
DONE! Completed 5 Monte Carlo cycles.

In [27]:
sset = equilibration.sample_set

Running RETIS

Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object.


In [28]:
# logging creates ops_output.log file with details of what the calculation is doing
#import logging.config
#logging.config.fileConfig("../resources/logging.conf", disable_existing_loggers=False)

In [29]:
storage = paths.storage.Storage("mstis.nc", "w")

In [30]:
storage.save(template)


Out[30]:
(store.snapshots[BaseSnapshot] : 2 object(s),
 3,
 119821506910823800303319164102417322217L)
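
Earlier we said we'd save the network. It will be referenced by the move scheme of the path sampling calculation below, but it can also be saved explicitly; a minimal, optional example:

storage.save(mstis)  # explicitly store the MSTIS network in the new file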

In [31]:
# attach disk caches to the old CVs so their values are stored along with snapshots in the new file
[cv.with_diskcache() for cv in old_store.cvs]


Out[31]:
[<openpathsampling.collectivevariable.CoordinateFunctionCV at 0x11e7768d0>,
 <openpathsampling.collectivevariable.CoordinateFunctionCV at 0x11eb1ed10>,
 <openpathsampling.collectivevariable.CoordinateFunctionCV at 0x11eb1ee50>]

In [32]:
# check the disk-cache settings (whether incomplete caches are allowed) for each CV
print [cv.diskcache_allow_incomplete for cv in old_store.cvs]


[False, False, False]

In [33]:
mstis_calc = paths.PathSampling(
    storage=storage,
    sample_set=sset,
    move_scheme=paths.DefaultScheme(mstis, engine=engine)
)
mstis_calc.save_frequency = 50  # sync results to the storage file every 50 MC steps

The next block sets up a live visualization. This is optional, and only recommended if you're using OPS interactively (which you would only do for very small systems). The same tools can be used to play back the simulation after the fact for more complicated systems. You can create a background (here we use the PES contours), and the visualization will plot the trajectories on top of it.


In [34]:
#! skip
# skip this during testing, but leave it for demo purposes
# we use the %run magic because this isn't in a package
%run ../resources/toy_plot_helpers.py
xval = paths.FunctionCV("xval", lambda snap : snap.xyz[0][0])
yval = paths.FunctionCV("yval", lambda snap : snap.xyz[0][1])
mstis_calc.live_visualizer = paths.StepVisualizer2D(mstis, xval, yval, [-1.0, 1.0], [-1.0, 1.0])
background = ToyPlot()
background.contour_range = np.arange(-1.5, 1.0, 0.1)
background.add_pes(engine.pes)
mstis_calc.live_visualizer.background = background.plot()
mstis_calc.status_update_frequency = 1 # increasing this number speeds things up, but isn't as pretty


Now everything is ready: let's run the simulation!


In [35]:
mstis_calc.run_until(100)


DONE! Completed 100 Monte Carlo cycles.

In [36]:
# total number of MC steps so that an individual shooting mover is expected
# to have about 1000 trials under this move scheme
n_steps = int(mstis_calc.move_scheme.n_steps_for_trials(
    mstis_calc.move_scheme.movers['shooting'][0], 1000
))
print n_steps


20100

In [ ]:
#! skip
# don't run all those steps in testing!
mstis_calc.run_until(n_steps)


Working on Monte Carlo cycle number 146
Running for 25 seconds -  1.73 steps per second
Expected time to finish: 11511 seconds

In [37]:
storage.close()
