Solver "workflow"

- Asynchronous and "re-distributable" modes

>>> from mystic.solvers import NelderMeadSimplexSolver
>>> from mystic.models import rosen
>>>
>>> solver = NelderMeadSimplexSolver(3)
>>> solver.SetInitialPoints([0.5, 1.5, 2.0])
>>> solver.SetObjective(rosen)
>>>
>>> from mystic.monitors import VerboseMonitor
>>> mon = VerboseMonitor(1)
>>>
>>> solver.SetGenerationMonitor(mon)
>>> solver.Step()
Generation 0 has Chi-Squared: 163.000000
>>> solver.Step()
Generation 1 has Chi-Squared: 156.635039
>>> solver.Step()
Generation 2 has Chi-Squared: 123.476914
>>> solver.Step()
Generation 3 has Chi-Squared: 123.476914
>>> solver.Step()
Generation 4 has Chi-Squared: 105.163966
>>> solver.Step()
Generation 5 has Chi-Squared: 105.163966
>>> solver.Step()
Generation 6 has Chi-Squared: 105.163966
>>> solver.SaveSolver('test.pkl')
DUMPED("test.pkl")

$ python
Python 2.7.9 (default, Dec 11 2014, 01:21:43)
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from mystic.solvers import LoadSolver
>>> solver = LoadSolver('test.pkl')
LOADED("test.pkl")
>>> solver.Step()
Generation 7 has Chi-Squared: 105.163966
>>> solver.Step()
Generation 8 has Chi-Squared: 94.683952
>>> solver.Step()
Generation 9 has Chi-Squared: 79.526211
>>> solver.Solve()
Generation 10 has Chi-Squared: 55.152242
Generation 11 has Chi-Squared: 32.635518
Generation 12 has Chi-Squared: 3.928963
Generation 13 has Chi-Squared: 0.293739
Generation 14 has Chi-Squared: 0.293739
Generation 15 has Chi-Squared: 0.293739
Generation 16 has Chi-Squared: 0.293739
Generation 17 has Chi-Squared: 0.293739
Generation 18 has Chi-Squared: 0.293739
Generation 19 has Chi-Squared: 0.293739
Generation 20 has Chi-Squared: 0.293739
Generation 21 has Chi-Squared: 0.234180
Generation 22 has Chi-Squared: 0.097215
Generation 23 has Chi-Squared: 0.058144
Generation 24 has Chi-Squared: 0.058144
Generation 25 has Chi-Squared: 0.050947
Generation 26 has Chi-Squared: 0.035904
Generation 27 has Chi-Squared: 0.023215
Generation 28 has Chi-Squared: 0.007698
Generation 29 has Chi-Squared: 0.007698
Generation 30 has Chi-Squared: 0.007698
Generation 31 has Chi-Squared: 0.004759
Generation 32 has Chi-Squared: 0.004759
Generation 33 has Chi-Squared: 0.004442
Generation 34 has Chi-Squared: 0.003676
Generation 35 has Chi-Squared: 0.003676
Generation 36 has Chi-Squared: 0.003676
Generation 37 has Chi-Squared: 0.003609
Generation 38 has Chi-Squared: 0.003133
Generation 39 has Chi-Squared: 0.003133
Generation 40 has Chi-Squared: 0.002982
Generation 41 has Chi-Squared: 0.002982
Generation 42 has Chi-Squared: 0.002982
Generation 43 has Chi-Squared: 0.002982
Generation 44 has Chi-Squared: 0.002725
Generation 45 has Chi-Squared: 0.002716
Generation 46 has Chi-Squared: 0.002716
Generation 47 has Chi-Squared: 0.002457
Generation 48 has Chi-Squared: 0.002350
Generation 49 has Chi-Squared: 0.001738
Generation 50 has Chi-Squared: 0.001738
Generation 51 has Chi-Squared: 0.001631
Generation 52 has Chi-Squared: 0.000667
Generation 53 has Chi-Squared: 0.000651
Generation 54 has Chi-Squared: 0.000635
Generation 55 has Chi-Squared: 0.000008
Generation 56 has Chi-Squared: 0.000008
Generation 57 has Chi-Squared: 0.000008
Generation 58 has Chi-Squared: 0.000008
Generation 59 has Chi-Squared: 0.000008
Generation 60 has Chi-Squared: 0.000008
Generation 61 has Chi-Squared: 0.000008
Generation 62 has Chi-Squared: 0.000008
Generation 63 has Chi-Squared: 0.000008
Generation 64 has Chi-Squared: 0.000008
Generation 65 has Chi-Squared: 0.000008
Generation 66 has Chi-Squared: 0.000008
Generation 67 has Chi-Squared: 0.000007
Generation 68 has Chi-Squared: 0.000004
Generation 69 has Chi-Squared: 0.000001
Generation 70 has Chi-Squared: 0.000001
Generation 71 has Chi-Squared: 0.000001
Generation 72 has Chi-Squared: 0.000001
Generation 73 has Chi-Squared: 0.000000
Generation 74 has Chi-Squared: 0.000000
Generation 75 has Chi-Squared: 0.000000
Generation 76 has Chi-Squared: 0.000000
Generation 77 has Chi-Squared: 0.000000
Generation 78 has Chi-Squared: 0.000000
Generation 79 has Chi-Squared: 0.000000
Generation 80 has Chi-Squared: 0.000000
Generation 81 has Chi-Squared: 0.000000
Generation 82 has Chi-Squared: 0.000000
Generation 83 has Chi-Squared: 0.000000
Generation 84 has Chi-Squared: 0.000000
Generation 85 has Chi-Squared: 0.000000
Generation 86 has Chi-Squared: 0.000000
Generation 87 has Chi-Squared: 0.000000
Generation 88 has Chi-Squared: 0.000000
STOP("CandidateRelativeTolerance with {'xtol': 0.0001, 'ftol': 0.0001}")
DUMPED("test.pkl")
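
The checkpoint/restart pattern shown above is not specific to `mystic`; the same idea can be sketched with only the standard library's `pickle`. Everything here (the `ToySolver` class, filenames, method names) is illustrative, not `mystic`'s API:

```python
import os
import pickle

class ToySolver:
    """A tiny iterative 'solver' (illustrative only, not mystic's API):
    halves x toward 0 each step, tracking the generation count."""
    def __init__(self, x):
        self.x = x
        self.generation = 0

    def step(self):
        self.x /= 2.0            # one 'generation' of work
        self.generation += 1

    def save(self, filename):
        with open(filename, 'wb') as f:
            pickle.dump(self, f) # checkpoint the full solver state

def load(filename):
    with open(filename, 'rb') as f:
        return pickle.load(f)    # resume exactly where we left off

solver = ToySolver(8.0)
for _ in range(3):
    solver.step()
solver.save('toy.pkl')           # like solver.SaveSolver('test.pkl')

restored = load('toy.pkl')       # like LoadSolver('test.pkl'), possibly in a new process
restored.step()                  # picks up at generation 4, not from scratch
os.remove('toy.pkl')
```

Because the entire solver object is pickled, the restored instance continues from the saved generation, which is exactly what the `LoadSolver` session above demonstrates.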

- Restarts and auto-saved state

In [1]:

```python
from mystic.solvers import PowellDirectionalSolver
from mystic.termination import VTR
from mystic.models import rosen
from mystic.solvers import LoadSolver
import os

solver = PowellDirectionalSolver(3)
solver.SetRandomInitialPoints([0., 0., 0.], [10., 10., 10.])
term = VTR()
tmpfile = 'mysolver.pkl'
solver.SetSaveFrequency(10, tmpfile)
solver.Solve(rosen, term)
x = solver.bestSolution
y = solver.bestEnergy
_solver = LoadSolver(tmpfile)
os.remove(tmpfile)
assert all(x == _solver.bestSolution)
assert y == _solver.bestEnergy
```

- Database integration and results caching/kriging

`mystic` leverages the `klepto` package, which can save pickled objects to a database or to a "file-based" database. `klepto` also provides a caching decorator that integrates well with `mystic`.
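
The caching pattern that `klepto` enables can be sketched in plain Python. This stand-in is not `klepto`'s actual API (`klepto` adds archives, keymaps, and rounding tolerances); it simply memoizes results to a pickle file so that repeated evaluations, even across runs, reuse prior work:

```python
import os
import pickle
from functools import wraps

def file_cached(filename):
    """Memoize a function to a pickle file (a klepto-style sketch,
    not klepto's API)."""
    def decorator(func):
        # load any results saved by earlier runs
        cache = {}
        if os.path.exists(filename):
            with open(filename, 'rb') as f:
                cache = pickle.load(f)
        @wraps(func)
        def wrapper(*args):
            if args not in cache:
                cache[args] = func(*args)
                with open(filename, 'wb') as f:
                    pickle.dump(cache, f)  # persist after each miss
            return cache[args]
        wrapper.cache = cache
        return wrapper
    return decorator

calls = []

@file_cached('rosen_cache.pkl')
def cost(x, y):
    calls.append((x, y))                    # track actual evaluations
    return 100*(y - x*x)**2 + (1 - x)**2    # 2-D Rosenbrock, for illustration

a = cost(0.5, 1.5)   # computed and written to the cache file
b = cost(0.5, 1.5)   # served from the cache; no new evaluation
os.remove('rosen_cache.pkl')
```

With an expensive objective function, this kind of cache lets restarted or parallel searches avoid re-evaluating points they have already seen.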

- Extension to parallel computing

In [2]:

```python
from multiprocess import Pool
#from pathos.multiprocessing import ProcessPool as Pool
from numpy.random import random
from mystic.solvers import fmin_powell as solver
from mystic.models import zimmermann as model

dim = 2
tries = 20

def helper(solver, model):
    def f(x0, *args, **kwds):
        return solver(model, x0, disp=False, full_output=True, *args, **kwds)[:2]
    return f

# soln = solver(model, 10*random((dim)), disp=False, full_output=True)

if __name__ == '__main__':
    from multiprocess import freeze_support
    freeze_support()
    p = Pool()
    uimap = getattr(p, 'uimap', None) or p.imap_unordered
    res = uimap(helper(solver, model), 10*random((tries, dim)))
    for soln in res:
        print("%s: %s" % (soln[1], soln[0]))
    p.close()
    p.join()
```

In [3]:

```
%matplotlib notebook
```

In [4]:

```python
"""
Example:
    - Solve 8th-order Chebyshev polynomial coefficients with Powell's method.
    - Uses LatticeSolver to provide 'pseudo-global' optimization.
    - Plot of the fit to the Chebyshev polynomial.

Demonstrates:
    - standard models
    - minimal solver interface
"""
# the Lattice solver
from mystic.solvers import LatticeSolver
# Powell's Directional solver
from mystic.solvers import PowellDirectionalSolver
# Chebyshev polynomial and cost function
from mystic.models.poly import chebyshev8, chebyshev8cost
from mystic.models.poly import chebyshev8coeffs
# if available, use a pathos worker pool
try:
    #from pathos.pools import ProcessPool as Pool
    from pathos.pools import ParallelPool as Pool
except ImportError:
    from mystic.pools import SerialPool as Pool
# tools
from mystic.termination import NormalizedChangeOverGeneration as NCOG
from mystic.math import poly1d
from mystic.monitors import VerboseLoggingMonitor
from mystic.tools import getch
import pylab
pylab.ion()

# draw the plot
def plot_exact():
    pylab.title("fitting 8th-order Chebyshev polynomial coefficients")
    pylab.xlabel("x")
    pylab.ylabel("f(x)")
    import numpy
    x = numpy.arange(-1.2, 1.2001, 0.01)
    exact = chebyshev8(x)
    pylab.plot(x, exact, 'b-')
    pylab.legend(["Exact"])
    pylab.axis([-1.4, 1.4, -2, 8], 'k-')
    pylab.draw()
    return

# plot the polynomial
def plot_solution(params, style='y-'):
    import numpy
    x = numpy.arange(-1.2, 1.2001, 0.01)
    f = poly1d(params)
    y = f(x)
    pylab.plot(x, y, style)
    pylab.legend(["Exact", "Fitted"])
    pylab.axis([-1.4, 1.4, -2, 8], 'k-')
    pylab.draw()
    return

# add some information as constraints
from mystic.constraints import integers

@integers()
def constraints(x):
    x[-1] = 1
    return x

if __name__ == '__main__':
    from pathos.helpers import freeze_support
    freeze_support() # help Windows use multiprocessing

    print("Powell's Method")
    print("===============")

    # dimensional information
    from mystic.tools import random_seed
    random_seed(123)
    ndim = 9
    npts = 32

    # draw frame and exact coefficients
    plot_exact()

    # configure monitor
    stepmon = VerboseLoggingMonitor(1, 10)

    # use lattice-Powell to solve 8th-order Chebyshev coefficients
    solver = LatticeSolver(ndim, npts)
    solver.SetNestedSolver(PowellDirectionalSolver)
    solver.SetMapper(Pool().map)
    solver.SetGenerationMonitor(stepmon)
    solver.SetStrictRanges(min=[-300]*ndim, max=[300]*ndim)
    solver.SetConstraints(constraints)
    solver.Solve(chebyshev8cost, NCOG(1e-4), disp=1)
    solution = solver.Solution()

    # write 'convergence' support file
    from mystic.munge import write_support_file
    write_support_file(solver._stepmon) #XXX: only saves the 'best'

    # use pretty print for polynomials
    print(poly1d(solution))

    # compare solution with actual 8th-order Chebyshev coefficients
    print("\nActual Coefficients:\n %s\n" % poly1d(chebyshev8coeffs))

    # plot solution versus exact coefficients
    plot_solution(solution)
    #getch()
```

- Automated dimensional reduction

`mystic` provides termination conditions such as `CollapseAt` and `CollapseAs` that look for solution vectors that hover around a single value, or that begin to track other solution vectors. When a collapse condition is met, the solver terminates and applies a constraint that reduces the dimensionality of the solution vector. Upon restart, the solver works on a smaller set of parameters.
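
The collapse idea can be sketched without `mystic`'s machinery: detect a parameter whose recent history hovers at a single value, then wrap the objective so that parameter is pinned, leaving a lower-dimensional problem for the restarted solver. All the names and the trajectory below are illustrative:

```python
import statistics

def detect_collapse(history, index, tol=1e-3):
    """True if parameter `index` has hovered at one value recently."""
    recent = [x[index] for x in history[-5:]]
    return len(recent) >= 5 and statistics.pstdev(recent) < tol

def pin_parameter(objective, index, value):
    """Reduce dimensionality: the wrapped objective takes one fewer
    parameter, with x[index] fixed at the collapsed value."""
    def reduced(x):
        full = list(x[:index]) + [value] + list(x[index:])
        return objective(full)
    return reduced

# a 3-D objective whose x[2] collapses onto 1.0 during a search
objective = lambda x: (x[0] - 2)**2 + (x[1] + 1)**2 + (x[2] - 1)**2

# a fake trajectory in which x[2] has stopped moving
history = [[3.0, 0.0, 1.0], [2.8, -0.2, 1.0], [2.5, -0.5, 1.0],
           [2.2, -0.8, 1.0], [2.1, -0.9, 1.0]]

if detect_collapse(history, index=2):
    reduced = pin_parameter(objective, index=2, value=1.0)
    # restart the solver on 2 parameters instead of 3
    print(reduced([2.0, -1.0]))  # prints 0.0: the reduced optimum
```

In `mystic` itself this detect-terminate-constrain-restart loop is automated by the collapse termination conditions; the sketch only shows why shrinking the parameter list is safe once a parameter has stopped moving.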

Hierarchical and meta-solvers:

- "Nesting" solvers

- Ensemble solvers

- Global search

In [5]:

```
!rm log.txt
```

In [6]:

```python
"""
Uses ensemble solvers to provide a 'pseudo-global' search.
"""
from mystic.search import Searcher

if __name__ == '__main__':
    # if available, use a multiprocessing worker pool
    try:
        from pathos.helpers import freeze_support
        freeze_support()
        from pathos.pools import ProcessPool as Pool
    except ImportError:
        from mystic.pools import SerialPool as Pool

    # tools
    from mystic.termination import VTR, ChangeOverGeneration as COG
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    from mystic.monitors import LoggingMonitor, VerboseMonitor, Monitor
    from klepto.archives import dir_archive

    stop = NCOG(1e-4)
    disp = False       # print optimization summary
    stepmon = True     # use LoggingMonitor
    archive = False    # save an archive
    traj = not stepmon # save all trajectories internally, if no logs

    # cost function
    from mystic.models import griewangk as model
    ndim = 2  # model dimensionality
    bounds = ndim * [(-9.5, 9.5)]  # griewangk

    # the ensemble solvers
    from mystic.solvers import BuckshotSolver, LatticeSolver
    # the local solvers
    from mystic.solvers import PowellDirectionalSolver

    sprayer = BuckshotSolver
    seeker = PowellDirectionalSolver
    npts = 25  # number of solvers
    _map = Pool().map
    retry = 1  # max consecutive iteration retries without a cache 'miss'
    tol = 8    # rounding precision
    mem = 1    # cache rounding precision

    #CUTE: 'configure' monitor and archive if they are desired
    if stepmon: stepmon = LoggingMonitor(1)  # monitor for all runs
    else: stepmon = None
    if archive: #python2.5
        ar_name = '__%s_%sD_cache__' % (model.__self__.__class__.__name__, ndim)
        archive = dir_archive(ar_name, serialized=True, cached=False)
    else: archive = None

    searcher = Searcher(npts, retry, tol, mem, _map, archive, sprayer, seeker)
    searcher.Verbose(disp)
    searcher.UseTrajectories(traj)
    searcher.Reset(archive, inv=False)
    searcher.Search(model, bounds, stop=stop, monitor=stepmon)
    searcher._summarize()

    ##### extract results #####
    xyz = searcher.Samples()

    ##### invert the model, and get the maxima #####
    imodel = lambda *args, **kwds: -model(*args, **kwds)

    #CUTE: 'configure' monitor and archive if they are desired
    if stepmon not in (None, False):
        itermon = LoggingMonitor(1, filename='inv.txt') #XXX: log.txt?
    else: itermon = None
    if archive not in (None, False): #python2.5
        ar_name = '__%s_%sD_invcache__' % (model.__self__.__class__.__name__, ndim)
        archive = dir_archive(ar_name, serialized=True, cached=False)
    else: archive = None

    searcher.Reset(archive, inv=True)
    searcher.Search(imodel, bounds, stop=stop, monitor=itermon)
    searcher._summarize()
```

In [7]:

```python
import mystic
mystic.log_reader('log.txt')
```

In [8]:

```python
import mystic
mystic.model_plotter(mystic.models.griewangk, 'log.txt', bounds="-10:10:.1, -10:10:.1")
mystic.model_plotter(mystic.models.griewangk, 'inv.txt', bounds="-10:10:.1, -10:10:.1")
```

- Energy-surface interpolation

**EXERCISE:** Convert one of our previous `mystic` examples to use parallel computing. Note that if the solver has a `SetMapper` method, it can take a parallel map.

Let's now look at solvers for optimization of probabilities, PDFs, etc.