In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import openpathsampling as paths
import numpy as np
The optimal way to use storage depends on whether you're doing production or analysis. For analysis, you should open the file as an AnalysisStorage object. This makes the analysis much faster.
In [2]:
%%time
storage = paths.AnalysisStorage("mstis.nc")
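The cell above opens the file for (read-heavy) analysis. For a production run you would open a plain Storage instead; a minimal sketch, assuming the same file name (left commented so it does not interfere with the analysis storage opened above):
In [ ]:
# sketch only: for production, open a regular Storage (here in append mode)
# instead of the read-optimized AnalysisStorage used above
# production_storage = paths.Storage("mstis.nc", "a")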
In [3]:
st_split = paths.Storage('mstis_strip.nc', 'w')
In [4]:
# st_traj = paths.Storage('mstis_traj.nc', 'w')
# st_data = paths.Storage('mstis_data.nc', 'w')
In [5]:
st_split.fallback = storage
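Setting fallback means that anything not found in the new split file will be looked up in the original storage. A quick sanity check on the attribute we just set:
In [ ]:
# objects missing from st_split are resolved through the fallback,
# i.e. loaded from the original analysis storage
assert st_split.fallback is storage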
In [5]:
# st_data.fallback = storage
Store all trajectories completely in the data file
In [6]:
# st_data.snapshots.save(storage.snapshots[0])
# st_traj.snapshots.save(storage.snapshots[0])
Out[6]:
Add a single snapshot as a reference and create the appropriate stores
In [6]:
st_split.snapshots.save(storage.snapshots[0])
Out[6]:
Store only shallow trajectories (empty snapshots) in the main file
Fix the CVs first; the rest is fine.
In [7]:
cvs = storage.cvs
In [15]:
q = storage.snapshots.all()
Fill the weak cache from the stored cache. This should be fast, and we can later use the weak cache (as long as q exists) to fill the cache of the data file.
In [16]:
%%time
_ = [cv(q) for cv in cvs]
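As an optional check that the values really are held in memory now, evaluating a CV on a single cached snapshot should return essentially instantly (cvs and q are the objects from the cells above):
In [ ]:
# optional check: this should be near-instant since the value is already cached
cv0 = list(cvs)[0]
snap0 = next(iter(q))
print(cv0(snap0))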
Now that we have cached the CV values, we can save the CVs in the new store. This will also set the disk cache to the new file, and since the file is new, that cache is empty.
In [17]:
%%time
# this will also switch the storage cache to the new file
_ = [st_split.cvs.save(cv) for cv in storage.cvs]
In [9]:
# %%time
# # this will also switch the storage cache to the new file
# _ = [st_data.cvs.save(cv) for cv in storage.cvs]
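To verify that the CVs made it into the split file, we can count the entries in its CV store:
In [ ]:
# each CV from the original storage should now have an entry in the split file
print(len(st_split.cvs))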
If all CVs are really cached, we can store the snapshots now; the auto-complete will then fill the CV disk store automatically when snapshots are saved. This takes a little while.
In [18]:
len(st_split.snapshots)
Out[18]:
In [20]:
%%time
_ = [st_split.trajectories.mention(traj) for traj in storage.trajectories]
In [ ]:
print(len(st_split.snapshots))
In [10]:
# %%time
# _ = [st_data.trajectories.mention(traj) for traj in storage.trajectories]
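At this point the split file contains only shallow snapshots plus the cached CV values, so it should still be small. We can already peek at its size with the same attribute used for the comparison below:
In [ ]:
# the split file should still be much smaller than the original at this stage
print(st_split.file_size_str)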
Fill the trajectory store (st_traj, created in the commented cells above) with only the trajectories and their snapshots. We are using lots of small snapshots, and these are slow compared to large ones, so this will also take a minute or so.
In [11]:
%%time
_ = [st_traj.trajectories.save(traj) for traj in storage.trajectories]
Finally, try storing all steps from the simulation in the data file (st_data, created in the commented cells above). This should contain everything you need.
In [12]:
%%time
_ = [st_data.steps.save(step) for step in storage.steps]
And compare file sizes
In [13]:
print('Original file:', storage.file_size_str)
print('Data file:', st_data.file_size_str)
print('Traj file:', st_traj.file_size_str)
In [25]:
print('So we saved about %2.0f %%' % ((1.0 - st_data.file_size / float(storage.file_size)) * 100.0))
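If you want the total cost of the two-file split, you can also compare the combined size of the data and trajectory files against the original, using the same file_size attribute as above:
In [ ]:
# combined size of both split files relative to the original file
combined = st_data.file_size + st_traj.file_size
print('Combined fraction of original: %.2f' % (combined / float(storage.file_size)))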
Now we do the trick: use the small data file instead of the full simulation file and see if that works.
In [26]:
st_data.close()
st_traj.close()
storage.close()
In [ ]:
st_data.snapshots.only_mention = True
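To check that the small file is really usable on its own, it can be reopened (after closing, as above) like any other storage; a minimal sketch, assuming the file names used above:
In [ ]:
# sketch: reopen the small data file for analysis and see that the steps load
# st_data = paths.AnalysisStorage('mstis_data.nc')
# print(len(st_data.steps))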