Custom solvers

In this notebook we will take a quick look at how the user may wrap an external solver for use with pyneqsys.


In [ ]:
import warnings
from pyneqsys.symbolic import SymbolicSys
import scipy
import scipy.optimize  # needed below for scipy.optimize.least_squares

version = scipy.__version__.split('.')
scipy_at_least_point_seventeen = int(version[0]) > 0 or int(version[1]) >= 17
if not scipy_at_least_point_seventeen:
    warnings.warn("scipy version less than 0.17, the SciPyLsq example will be skipped")

We will use a two-dimensional problem for illustration:


In [ ]:
def f(x):
    # f(x) = 0 defines the system; SymbolicSys derives the Jacobian symbolically
    return [(x[0] - x[1])**3/2 + x[0] - 1,
            (x[1] - x[0])**3/2 + x[1]]

neqsys = SymbolicSys.from_callback(f, 2)  # 2 == number of unknowns

In [ ]:
neqsys.solve([1, 0])
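
The call above returns the solution vector together with solver-dependent information; as a minimal sanity check (assuming the two-tuple return), we can unpack it and evaluate the residuals by hand:


In [ ]:
x, sol = neqsys.solve([1, 0])
print(x, f(x))  # the residuals f(x) should be close to zero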

SciPy's default solver could find the root. Let's see how KINSOL from SUNDIALS fares (this requires the pykinsol package):


In [ ]:
neqsys.solve([1, 0], solver='kinsol')

No problem. In SciPy v0.17 a new pure-Python least-squares optimizer, scipy.optimize.least_squares, was introduced. Let's wrap it for use within pyneqsys:


In [ ]:
if scipy_at_least_point_seventeen:
    # New solver in SciPy v0.17:
    # http://scipy.github.io/devdocs/generated/scipy.optimize.least_squares.html
    class SciPyLsq:
        def __init__(self, neqsys):
            self.neqsys = neqsys

        def __call__(self, x0, **kwargs):
            new_kwargs = kwargs.copy()
            if self.neqsys.band is not None:
                raise ValueError("Banded Jacobians are not supported (see SciPy docs)")
            new_kwargs['args'] = (self.neqsys.internal_params,)
            return scipy.optimize.least_squares(self.neqsys.f_cb, x0, jac=self.neqsys.j_cb, **new_kwargs)
    result = neqsys.solve([1, 0], attached_solver=SciPyLsq)
    print(result)

We can see that the wrapping is quite straightforward. (The wrapped solver can then be used with e.g. the symbolic facilities of pyneqsys.)
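
To see how little is required, here is a further sketch (not part of pyneqsys; the class name SciPyRootLM and the choice of method are our own) wrapping scipy.optimize.root with the Levenberg-Marquardt method through the very same protocol:


In [ ]:
class SciPyRootLM:
    def __init__(self, neqsys):
        self.neqsys = neqsys

    def __call__(self, x0, **kwargs):
        new_kwargs = kwargs.copy()
        # forward the parameter values to f_cb/j_cb, as in SciPyLsq above
        new_kwargs['args'] = (self.neqsys.internal_params,)
        return scipy.optimize.root(self.neqsys.f_cb, x0, jac=self.neqsys.j_cb,
                                   method='lm', **new_kwargs)

print(neqsys.solve([1, 0], attached_solver=SciPyRootLM))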

Looking at some demo solvers distributed with pyneqsys

In pyneqsys.solvers some demo solvers are provided (they are not "production grade", but rather serve as API examples).


In [ ]:
# import pyneqsys.solvers; pyneqsys.solvers??  # uncomment to look at the source code
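
For a quick overview one can also list the classes the module exports (relying on the naming convention, seen below, that the demo solver class names end in 'Solver'):


In [ ]:
import pyneqsys.solvers
print([name for name in dir(pyneqsys.solvers) if name.endswith('Solver')])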

We will plot the convergence behaviour of the solvers:


In [ ]:
def plot_convergence(attached_solver, plot_attr):
    import numpy as np
    import matplotlib.pyplot as plt
    %matplotlib inline
    x_history = np.array(attached_solver.history_x)
    plt.figure(figsize=(15, 3))

    plt.subplot(1, 4, 1)  # trajectory in the x0-x1 plane
    plt.plot(x_history[:, 0], x_history[:, 1])
    plt.xlabel('x0')
    plt.ylabel('x1')

    plt.subplot(1, 4, 2)  # residual norm per iteration
    plt.plot(neqsys.rms(x_history))
    plt.xlabel('iteration')
    plt.ylabel('RMS(residuals)')

    plt.subplot(1, 4, 3)  # same, on a log scale, skipping the first 15 iterations
    plt.semilogy(range(15, len(x_history)), neqsys.rms(x_history[15:]))
    plt.xlabel('iteration')
    plt.ylabel('RMS(residuals)')

    plt.subplot(1, 4, 4)  # solver-specific quantity, e.g. step norm or damping
    plt.plot(np.asarray(getattr(attached_solver, plot_attr)))
    plt.xlabel('iteration')
    plt.ylabel(plot_attr)

    plt.tight_layout()

Let's start with a line-searching gradient descent solver:


In [ ]:
from pyneqsys.solvers import LineSearchingGradientDescentSolver as LSGD
lsgd = LSGD()
print(neqsys.solve([.8, .1], attached_solver=lsgd))
plot_convergence(lsgd, 'history_rms_f')

We can compare this with a conjugate gradient solver:


In [ ]:
from pyneqsys.solvers import PolakRibiereConjugateGradientSolver as CG
cg = CG(4)
print(neqsys.solve([.8, .1], attached_solver=cg))
plot_convergence(cg, 'history_sn')

One can also build generalizations of the solvers quite easily; here is a damped gradient descent solver with the damping chosen from the iteration history:


In [ ]:
from pyneqsys.solvers import AutoDampedGradientDescentSolver as ADGD
adgd = ADGD(.15)
print(neqsys.solve([.8, .1], attached_solver=adgd))
plot_convergence(adgd, 'history_damping')
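
To make the pattern concrete, here is a self-contained sketch of a similar auto-damped solver (not part of pyneqsys; the class name, constructor arguments and result fields are our own assumptions). It follows the attached-solver protocol of SciPyLsq above, takes steepest-descent steps on the merit function 0.5*||f||**2, halves the damping whenever the residual norm grows, and records history_x and history_damping just like the demo solver:


In [ ]:
import numpy as np
from scipy.optimize import OptimizeResult

class MyAutoDamped:
    def __init__(self, neqsys, damping=0.5, atol=1e-10, maxiter=500):
        self.neqsys = neqsys
        self.damping = damping
        self.atol = atol
        self.maxiter = maxiter
        self.history_x = []
        self.history_damping = []

    def __call__(self, x0, **kwargs):
        x = np.array(x0, dtype=float)
        params = self.neqsys.internal_params
        prev_rms = float('inf')
        success = False
        for _ in range(self.maxiter):
            f = np.asarray(self.neqsys.f_cb(x, params))
            rms = np.sqrt(np.mean(f**2))
            if rms > prev_rms:
                self.damping *= 0.5  # overshoot detected: damp harder
            prev_rms = rms
            self.history_x.append(x.copy())
            self.history_damping.append(self.damping)
            if rms < self.atol:
                success = True
                break
            J = np.asarray(self.neqsys.j_cb(x, params))
            x = x - self.damping * J.T.dot(f)  # gradient of 0.5*||f||**2 is J.T @ f
        return OptimizeResult(x=x, success=success, fun=f, nfev=len(self.history_x))

print(neqsys.solve([.8, .1], attached_solver=MyAutoDamped))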

This notebook hopefully shows that the API of pyneqsys is quite approachable.