Using optimization routines from scipy and statsmodels


In [1]:
%matplotlib inline

In [2]:
import scipy.linalg as la
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
import pandas as pd

In [3]:
np.set_printoptions(precision=3, suppress=True)

Finding roots

For root finding, we generally need to provide a starting point in the vicinity of the root. For 1D root finding, this is often provided as a bracket (a, b) where a and b have opposite signs.

Univariate roots and fixed points


In [4]:
def f(x):
    return x**3-3*x+1

In [5]:
x = np.linspace(-3,3,100)
plt.axhline(0, c='red')
plt.plot(x, f(x));



In [6]:
from scipy.optimize import brentq, newton

In [7]:
brentq(f, -3, 0), brentq(f, 0, 1), brentq(f, 1,3)


Out[7]:
(-1.8793852415718166, 0.3472963553337031, 1.532088886237956)

Secant method


In [8]:
newton(f, -3), newton(f, 0), newton(f, 3)


Out[8]:
(-1.8793852415718169, 0.34729635533385395, 1.5320888862379578)

Newton-Raphson method


In [9]:
fprime = lambda x: 3*x**2 - 3
newton(f, -3, fprime), newton(f, 0, fprime), newton(f, 3, fprime)


Out[9]:
(-1.8793852415718166, 0.34729635533386066, 1.532088886237956)

Analytical solution using sympy to find roots of a polynomial


In [10]:
from sympy import symbols, N, real_roots

In [11]:
x = symbols('x', real=True)
sol = real_roots(x**3 - 3*x + 1)
list(map(N, sol))


Out[11]:
[-1.87938524157182, 0.347296355333861, 1.53208888623796]

Finding fixed points

Finding the fixed points of a function $g$, i.e. the points where $g(x) = x$, is the same as finding the roots of $g(x) - x$. However, specialized algorithms also exist - e.g. scipy.optimize.fixed_point.


In [12]:
from scipy.optimize import fixed_point

In [13]:
x = np.linspace(-3,3,100)
plt.plot(x, f(x), color='red')
plt.plot(x, x)
pass



In [14]:
fixed_point(f, 0), fixed_point(f, -3), fixed_point(f, 3)


Out[14]:
(array(0.2541016883650524),
 array(-2.114907541476756),
 array(1.8608058531117035))

In [15]:
newton(lambda x: f(x) - x, 0), newton(lambda x: f(x) - x, -3), newton(lambda x: f(x) - x, 3)


Out[15]:
(0.2541016883650524, -2.114907541476814, 1.8608058531117062)

In [16]:
def f(x, r):
    """Discrete logistic equation."""
    return r*x*(1-x)

In [17]:
n = 100
fps = np.zeros(n)
for i, r in enumerate(np.linspace(0, 4, n)):
    fps[i] = fixed_point(f, 0.5, args=(r, ))

plt.plot(np.linspace(0, 4, n), fps)
plt.axis([0,4,-0.1, 1.1])
pass


Note that we don't learn anything about the stability of the fixed point from this.

Beyond $r = 3$, the fixed point is unstable, even chaotic, but we would never know that just from the plot above.


In [18]:
xs = []
for i, r in enumerate(np.linspace(0, 4, 400)):
    x = 0.5
    for j in range(10000):
        x = f(x, r)
    for j in range(50):
        x = f(x, r)
        xs.append((r, x))
xs = np.array(xs)
plt.scatter(xs[:,0], xs[:,1], s=1)
plt.axis([0,4,-0.1, 1.1])
pass


Multivariate roots and fixed points

Both root and fsolve can be used to solve systems of non-linear equations; root is the newer, more general interface (with a choice of solver methods), while fsolve is a wrapper around MINPACK's hybrd and hybrj routines.


In [19]:
from scipy.optimize import root, fsolve

Suppose we want to solve a system of $m$ equations in $n$ unknowns

\begin{align} f(x_0, x_1) &= x_1 - 3x_0(x_0+1)(x_0-1) \\ g(x_0, x_1) &= 0.25 x_0^2 + x_1^2 - 1 \end{align}

Note that the equations are non-linear and there can be multiple solutions. These can be interpreted as fixed points of a system of differential equations.


In [20]:
def f(x):
    return [x[1] - 3*x[0]*(x[0]+1)*(x[0]-1),
            .25*x[0]**2 + x[1]**2 - 1]

In [21]:
sol = root(f, (0.5, 0.5))
sol.x


Out[21]:
array([ 1.117,  0.83 ])

In [22]:
fsolve(f, (0.5, 0.5))


Out[22]:
array([ 1.117,  0.83 ])

In [23]:
r0 = opt.root(f,[1,1])
r1 = opt.root(f,[0,1])
r2 = opt.root(f,[-1,1.1])
r3 = opt.root(f,[-1,-1])
r4 = opt.root(f,[2,-0.5])

roots = np.c_[r0.x, r1.x, r2.x, r3.x, r4.x]

In [24]:
Y, X = np.mgrid[-3:3:100j, -3:3:100j]
U = Y - 3*X*(X + 1)*(X-1)
V = .25*X**2 + Y**2 - 1

plt.streamplot(X, Y, U, V, color=U, linewidth=2, cmap=plt.cm.autumn)
plt.scatter(roots[0], roots[1], s=50, c='none', edgecolors='k', linewidth=2)
pass


We can also provide the Jacobian.


In [25]:
def jac(x):
    # Jacobian of f; note d/dx0 of x1 - 3*x0*(x0+1)*(x0-1) is 3 - 9*x0**2
    return [[3 - 9*x[0]**2, 1], [0.5*x[0], 2*x[1]]]

In [26]:
sol = root(f, (0.5, 0.5), jac=jac)
sol.x, sol.fun


Out[26]:
(array([ 1.117,  0.83 ]), array([-0., -0.]))

Check that values found are really roots


In [27]:
np.allclose(f(sol.x), 0)


Out[27]:
True

Starting from other initial conditions, different roots may be found


In [28]:
sol = root(f, (12,12))
sol.x


Out[28]:
array([ 0.778, -0.921])

In [29]:
np.allclose(f(sol.x), 0)


Out[29]:
True

Optimization Primer

We will assume that our optimization problem is to minimize some univariate or multivariate function $f(x)$. This is without loss of generality, since to find the maximum, we can simply minimize $-f(x)$. We will also assume that we are dealing with smooth real-valued functions - non-smooth or discrete functions (e.g. integer-valued) are outside the scope of this course.
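For example, to maximize a function we can minimize its negative. Here is a minimal sketch using scipy.optimize.minimize_scalar (introduced later in this lecture); the function g below is a made-up example, not part of the original notes:

def g(x):
    return 5 - (x - 3)**2                       # hypothetical function with its maximum at x = 3

res = opt.minimize_scalar(lambda x: -g(x))      # maximize g by minimizing -g
res.x, -res.fun                                 # approximately (3.0, 5.0)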

To find the minimum of a function, we first need to be able to express the function as a mathematical expression. For example, in least squares regression, the function that we are optimizing is the sum of squared residuals $\sum_i (y_i - f(x_i, \theta))^2$ for some parameter(s) $\theta$. To choose an appropriate optimization algorithm, we should at least answer these two questions if possible:

  1. Is the function convex?
  2. Are there any constraints that the solution must meet?

Finally, we need to realize that optimization methods are nearly always designed to find local optima. For convex problems, there is only one minimum and so this is not a problem. However, if there are multiple local minima, often heuristics such as multiple random starts must be adopted to find a "good" enough solution.

Is the function convex?

Convex functions are very nice because they have a single global minimum, and there are very efficient algorithms for solving large convex systems.

Intuitively, a function is convex if every chord joining two points on its graph lies above the graph. More formally, a function is convex if $$ f(ta + (1-t)b) \leq tf(a) + (1-t)f(b) $$ for all $t$ between 0 and 1 (with strict inequality for a strictly convex function) - this is shown in the figure below.


In [30]:
def f(x):
    return (x-4)**2 + x + 1

with plt.xkcd():
    x = np.linspace(0, 10, 100)

    plt.plot(x, f(x))
    ymin, ymax = plt.ylim()
    plt.axvline(2, ymin, f(2)/ymax, c='red')
    plt.axvline(8, ymin, f(8)/ymax, c='red')
    plt.scatter([4, 4], [f(4), f(2) + ((4-2)/(8-2.))*(f(8)-f(2))], 
                 edgecolor=['blue', 'red'], facecolor='none', s=100, linewidth=2)
    plt.plot([2,8], [f(2), f(8)])
    plt.xticks([2,4,8], ('a', 'ta + (1-t)b', 'b'), fontsize=20)
    plt.text(0.2, 40, 'f(ta + (1-t)b) < tf(a) + (1-t)f(b)', fontsize=20)
    plt.xlim([0,10])
    plt.yticks([])
    plt.suptitle('Convex function', fontsize=20)



Warm up exercise

Show that $f(x) = -\log(x)$ is a convex function.
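A quick numerical sanity check (not a proof, and not part of the original notes) is to verify the convexity inequality at a few points:

a, b = 0.5, 4.0
t = np.linspace(0.01, 0.99, 9)
lhs = -np.log(t*a + (1 - t)*b)                  # f(ta + (1-t)b)
rhs = t*(-np.log(a)) + (1 - t)*(-np.log(b))     # tf(a) + (1-t)f(b)
np.all(lhs <= rhs)                              # True: the chord lies above the function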

Checking if a function is convex using the Hessian

The formal definition is awkward to apply directly; it is mainly useful for showing that a function is not convex, by finding a counter-example. More practically, a twice differentiable function is convex if its Hessian is positive semi-definite, and strictly convex if the Hessian is positive definite.

For example, suppose we want to minimize the scalar-valued function

$$ f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 + 3x_3^2 + 2x_1x_2 + 2x_1x_3 $$

In [31]:
from sympy import symbols, hessian, Function, init_printing, expand, Matrix, diff
x, y, z = symbols('x y z')
f = symbols('f', cls=Function)
init_printing()

In [32]:
f = x**2 + 2*y**2 + 3*z**2 + 2*x*y + 2*x*z
f


Out[32]:
$$x^{2} + 2 x y + 2 x z + 2 y^{2} + 3 z^{2}$$

In [33]:
H = hessian(f, (x, y, z))
H


Out[33]:
$$\left[\begin{matrix}2 & 2 & 2\\2 & 4 & 0\\2 & 0 & 6\end{matrix}\right]$$

In [34]:
np.real_if_close(la.eigvals(np.array(H).astype('float')))


Out[34]:
array([ 0.241,  7.064,  4.695])

Since the matrix is symmetric and all eigenvalues are positive, the Hessian is positive definite and the function is convex.

Combining convex functions

The following rules may be useful to determine if more complex functions are convex:

  1. The pointwise maximum of convex functions is convex (similarly, the intersection of convex sets is convex).
  2. If the functions $f$ and $g$ are convex and $a \ge 0$ and $b \ge 0$ then the function $af + bg$ is convex.
  3. If the function $U$ is convex and the function $g$ is nondecreasing and convex then the function $f$ defined by $f (x) = g(U(x))$ is convex.

Many more technical details about convexity and convex optimization can be found in Boyd and Vandenberghe's book, Convex Optimization.
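As a quick illustration of rule 3 (this example is not from the original notes): $U(x) = x^2$ is convex and $g(u) = e^u$ is nondecreasing and convex, so $f(x) = e^{x^2}$ should be convex. We can confirm this with sympy by checking that the second derivative is nonnegative everywhere:

from sympy import exp, diff, symbols, simplify

u = symbols('u', real=True)
simplify(diff(exp(u**2), u, 2))   # (4*u**2 + 2)*exp(u**2), which is positive for all real u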

Are there any constraints that the solution must meet?

In general, optimization without constraints is easier to perform than optimization in the presence of constraints. The solutions may be very different in the presence or absence of constraints, so it is important to know if there are any constraints.

We will see some examples of two general strategies:

  • convert a problem with constraints into one without constraints or
  • use an algorithm that can optimize with constraints.
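As a brief preview of the second strategy, scipy.optimize.minimize accepts constraints directly with methods such as SLSQP. The objective and constraint below are made up for illustration and are not the examples from the notes:

# minimize x^2 + y^2 subject to x + y = 1; the solution is (0.5, 0.5)
cons = {'type': 'eq', 'fun': lambda p: p[0] + p[1] - 1}
res = opt.minimize(lambda p: p[0]**2 + p[1]**2, [2.0, 0.0],
                   method='SLSQP', constraints=[cons])
res.x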

Using scipy.optimize

One of the most convenient libraries to use is scipy.optimize, since it is already part of the Anaconda installation and it has a fairly intuitive interface.


In [35]:
from scipy import optimize as opt

Minimizing a univariate function $f: \mathbb{R} \rightarrow \mathbb{R}$


In [36]:
def f(x):
    return x**4 + 3*(x-2)**3 - 15*(x)**2 + 1

In [37]:
x = np.linspace(-8, 5, 100)
plt.plot(x, f(x));


The minimize_scalar function will find the minimum, and can also be told to search within given bounds. By default, it uses the Brent algorithm, which combines a bracketing strategy with a parabolic approximation.


In [38]:
opt.minimize_scalar(f, method='Brent')


Out[38]:
     fun: -803.39553088258845
    nfev: 12
     nit: 11
 success: True
       x: -5.5288011252196627

In [39]:
opt.minimize_scalar(f, method='bounded', bounds=[0, 6])


Out[39]:
     fun: -54.210039377127622
 message: 'Solution found.'
    nfev: 12
  status: 0
 success: True
       x: 2.6688651040396532

Local and global minima


In [40]:
def f(x, offset):
    return -np.sinc(x-offset)

In [41]:
x = np.linspace(-20, 20, 100)
plt.plot(x, f(x, 5));



In [42]:
# note how additional function arguments are passed in
sol = opt.minimize_scalar(f, args=(5,))
sol


Out[42]:
     fun: -0.049029624014074166
    nfev: 11
     nit: 10
 success: True
       x: -1.4843871263953001

In [43]:
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red')
pass


We can try multiple random starts to find the global minimum


In [44]:
lower = np.random.uniform(-20, 20, 100)
upper = lower + 1
sols = [opt.minimize_scalar(f, args=(5,), bracket=(l, u)) for (l, u) in zip(lower, upper)]

In [45]:
idx = np.argmin([sol.fun for sol in sols])
sol = sols[idx]

In [46]:
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red');


Using a stochastic algorithm

See documentation for the basinhopping algorithm, which also works with multivariate scalar optimization. Note that this is heuristic and not guaranteed to find a global minimum.


In [47]:
from scipy.optimize import basinhopping

x0 = 0
sol = basinhopping(f, x0, stepsize=1, minimizer_kwargs={'args': (5,)})
sol


Out[47]:
                        fun: -1.0
 lowest_optimization_result:       fun: -1.0
 hess_inv: array([[ 0.304]])
      jac: array([ 0.])
  message: 'Optimization terminated successfully.'
     nfev: 15
      nit: 3
     njev: 5
   status: 0
  success: True
        x: array([ 5.])
                    message: ['requested number of basinhopping iterations completed successfully']
      minimization_failures: 0
                       nfev: 1848
                        nit: 100
                       njev: 616
                          x: array([ 5.])

In [48]:
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red');


Minimizing a multivariate function $f: \mathbb{R}^n \rightarrow \mathbb{R}$

We will next move on to optimization of multivariate scalar functions, where the scalar may (say) be the norm of a vector. Minimizing a vector-valued function $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is not well-defined, but we have already seen how to solve the closely related problem of finding roots or fixed points of such a set of equations.

We will use the Rosenbrock "banana" function to illustrate unconstrained multivariate optimization. In 2D, this is $$ f(x, y) = b(y - x^2)^2 + (a - x)^2 $$

The function has a global minimum at (1,1) and the standard expression takes $a = 1$ and $b = 100$.

Conditioning of optimization problem

With these values for $a$ and $b$, the problem is ill-conditioned. As we shall see, one of the factors affecting the ease of optimization is the condition number of the curvature (Hessian). When the condition number is high, the gradient may not point in the direction of the minimum, and simple gradient descent methods may be inefficient since they may be forced to take many sharp turns.


In [49]:
from sympy import symbols, hessian, Function, N

x, y = symbols('x y')
f = symbols('f', cls=Function)

f = 100*(y - x**2)**2 + (1 - x)**2

H = hessian(f, [x, y]).subs([(x,1), (y,1)])
H, N(H.condition_number())


Out[49]:
$$\left ( \left[\begin{matrix}802 & -400\\-400 & 200\end{matrix}\right], \quad 2508.00960127744\right )$$

As pointed out in the previous lecture, the condition number is the ratio of the largest to the smallest eigenvalue of the (symmetric positive definite) Hessian.


In [50]:
import scipy.linalg as la

mu = la.eigvals(np.array([802, -400, -400, 200]).reshape((2,2)))
np.real_if_close(mu[0]/mu[1])


Out[50]:
array(2508.009601277337)

In [51]:
def rosen(x):
    """Generalized n-dimensional version of the Rosenbrock function"""
    return sum(100*(x[1:]-x[:-1]**2.0)**2.0 +(1-x[:-1])**2.0)

Why is the condition number so large?


In [52]:
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
Z = rosen(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))

In [53]:
# Note: the global minimum is at (1,1) in a tiny contour island
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.text(1, 1, 'x', va='center', ha='center', color='red', fontsize=20);


Zooming in to the global minimum at (1,1)


In [54]:
x = np.linspace(0, 2, 100)
y = np.linspace(0, 2, 100)
X, Y = np.meshgrid(x, y)
Z = rosen(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))

In [55]:
plt.contour(X, Y, Z, [rosen(np.array([k, k])) for k in np.linspace(1, 1.5, 10)], cmap='jet')
plt.text(1, 1, 'x', va='center', ha='center', color='red', fontsize=20);


Gradient descent

The gradient (or Jacobian) at a point indicates the direction of steepest ascent. Since we are looking for a minimum, one obvious possibility is to take a step in the opposite direction to the gradient. We weight the size of the step by a factor $\alpha$ known in the machine learning literature as the learning rate. If $\alpha$ is small, the algorithm will eventually converge towards a local minimum, but it may take a long time. If $\alpha$ is large, the algorithm may converge faster, but it may also overshoot and never find the minimum. Gradient descent is also known as a first order method because it requires calculation of the first derivative at each iteration.

Some algorithms also determine the appropriate value of $\alpha$ at each stage by using a line search, i.e., $$ \alpha^* = \arg\min_\alpha f(x_k - \alpha \nabla{f(x_k)}) $$ which is a 1D optimization problem.

As suggested above, the problem is that the gradient may not point towards the global minimum especially when the condition number is large, and we are forced to use a small $\alpha$ for convergence.
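As a minimal sketch of a single line-search step (the poorly scaled quadratic f_ls below is a made-up example, not from the notes):

def f_ls(x):
    return x[0]**2 + 10*x[1]**2            # hypothetical poorly scaled quadratic

def grad_ls(x):
    return np.array([2*x[0], 20*x[1]])

xk = np.array([1.0, 1.0])
g = grad_ls(xk)
# choose the step size by minimizing f_ls along the steepest-descent direction
alpha = opt.minimize_scalar(lambda a: f_ls(xk - a*g)).x
alpha, xk - alpha*g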

Simple gradient descent

Let's warm up by minimizing a trivial function $f(x, y) = x^2 + y^2$ to illustrate the basic idea of gradient descent.


In [56]:
def f(x):
    return x[0]**2 + x[1]**2

def grad(x):
    return np.array([2*x[0], 2*x[1]])

a = 0.1 # learning rate
x0 = np.array([1.0,1.0])
print('Start', x0)
for i in range(41):
    x0 -= a * grad(x0)
    if i%5 == 0:
        print(i, x0)


Start [ 1.  1.]
0 [ 0.8  0.8]
5 [ 0.262  0.262]
10 [ 0.086  0.086]
15 [ 0.028  0.028]
20 [ 0.009  0.009]
25 [ 0.003  0.003]
30 [ 0.001  0.001]
35 [ 0.  0.]
40 [ 0.  0.]

Gradient descent for least squares minimization

Usually, when we optimize, we are not just finding the minimum, but also want to know the parameters that give us the minimum. As a simple example, suppose we want to find parameters that minimize the least squares difference between a linear model and some data. Suppose we have some data $(0,1), (1,2), (2,3), (3,3.5), (4,6), (5,9), (6,8)$ and want to find a line $y = \beta_0 +\beta_1 x$ that is the best least squares fit. One way to do this is to solve $X^TX\hat{\beta} = X^Ty$, but we want to show how this can be formulated as a gradient descent problem.

We want to find $\beta = (\beta_0, \beta_1)$ that minimize the squared differences

$$ r = \sum(\beta_0 + \beta_1 x - y)^2 $$

We calculate the gradient with respect to $\beta$ as

$$\nabla r = \pmatrix{ \frac{\partial r}{\partial \beta_0} \\ \frac{\partial r}{\partial \beta_1}}$$

and apply gradient descent.


In [57]:
def f(x, y, b):
    """Helper function."""
    return (b[0] + b[1]*x - y)

def grad(x, y, b):
    """Gradient of objective function with respect to parameters b."""
    # the constant factor of 2 in the exact gradient is absorbed into the learning rate
    return np.array([
            sum(f(x, y, b)),
            sum(x*f(x, y, b))
    ])

In [58]:
x, y = map(np.array, zip((0,1), (1,2), (2,3), (3,3.5), (4,6), (5,9), (6,8)))

In [59]:
a = 0.001 # learning rate
b = np.zeros(2)
for i in range(10000):
    b -= a * grad(x, y, b)
b


Out[59]:
array([ 0.571,  1.357])

In [60]:
plt.scatter(x, y, s=30)
plt.plot(x, b[0] + b[1]*x, color='red')
pass
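As a sanity check (not part of the original notes), the same coefficients can be obtained directly from the normal equations $X^TX\hat{\beta} = X^Ty$ mentioned above:

X = np.c_[np.ones_like(x), x]    # design matrix with an intercept column
la.solve(X.T @ X, X.T @ y)       # approximately [0.571, 1.357], matching gradient descent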


Gradient descent to minimize the Rosen function using scipy.optimize

Because gradient descent is unreliable in practice, it is not part of the scipy optimize suite of functions, but we will write a custom function below to illustrate how to use gradient descent while maintaining the scipy.optimize interface.


In [61]:
def rosen_der(x):
    """Derivative of generalized Rosen function."""
    xm = x[1:-1]
    xm_m1 = x[:-2]
    xm_p1 = x[2:]
    der = np.zeros_like(x)
    der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
    der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
    der[-1] = 200*(x[-1]-x[-2]**2)
    return der

Warning One of the most common causes of optimization failure is an incorrectly specified gradient or Hessian function. You can check for this using check_grad, which compares the analytical gradient with one calculated using finite differences.


In [62]:
from scipy.optimize import check_grad

for x in np.random.uniform(-2,2,(10,2)):
    print(x, check_grad(rosen, rosen_der, x))


[-1.002  0.488] 7.76924856764e-06
[-0.54   0.625] 1.78115499411e-06
[ 0.159  0.203] 1.55391738656e-06
[ 1.542  0.689] 1.16032403535e-05
[ 1.333 -1.931] 3.09846435193e-05
[ 0.808  0.92 ] 3.14878094308e-06
[ 1.149  1.416] 7.78932772449e-06
[ 1.767  1.172] 2.73491380503e-05
[ 0.452 -1.621] 5.25282739165e-06
[-1.627  1.048] 1.89059712929e-05

Writing a custom function for the scipy.optimize interface.


In [63]:
def custmin(fun, x0, args=(), maxfev=None, alpha=0.0002,
        maxiter=100000, tol=1e-10, callback=None, **options):
    """Implements simple gradient descent for the Rosen function."""
    bestx = x0
    bestf = fun(x0)
    funcalls = 1
    niter = 0
    improved = True
    stop = False

    while improved and not stop and niter < maxiter:
        niter += 1
        # the next 2 lines are gradient descent
        step = alpha * rosen_der(bestx)
        bestx = bestx - step

        bestf = fun(bestx)
        funcalls += 1
        
        if la.norm(step) < tol:
            improved = False
        if callback is not None:
            callback(bestx)
        if maxfev is not None and funcalls >= maxfev:
            stop = True
            break

    return opt.OptimizeResult(fun=bestf, x=bestx, nit=niter,
                              nfev=funcalls, success=(niter > 1))

In [64]:
def reporter(p):
    """Reporter function to capture intermediate states of optimization."""
    global ps
    ps.append(p)

In [65]:
# Initial starting position
x0 = np.array([4,-4.1])

In [66]:
ps = [x0]
opt.minimize(rosen, x0, method=custmin, callback=reporter)


Out[66]:
     fun: 1.0604663473448339e-08
    nfev: 100001
     nit: 100000
 success: True
       x: array([ 1.,  1.])

In [67]:
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
Z = rosen(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))

In [68]:
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T));


Newton's method and variants

Recall Newton's method for finding roots of a univariate function

$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} $$

When we are looking for a minimum, we are looking for the roots of the derivative $f'(x)$, so

$$ x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)} $$

Newton's method can also be seen as a Taylor series approximation

$$ f(x+h) \approx f(x) + h f'(x) + \frac{h^2}{2}f''(x) $$

To find the step $h$ that minimizes this quadratic approximation, we differentiate with respect to $h$ and set the result to zero \begin{align} \frac{d}{dh}\left(f(x) + h f'(x) + \frac{h^2}{2}f''(x)\right) &= f'(x) + h f''(x) \\ 0 &= f'(x) + h f''(x) \end{align}

so the Newton step is

$$ \Delta x = h = - \frac{f'(x)}{f''(x)} $$

The multivariate analog replaces $f'$ with the gradient $\nabla f$ and $f''$ with the Hessian $H$, so the Newton step is

$$ \Delta x = -H^{-1}(x) \nabla f(x) $$
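As a minimal 1D sketch of the Newton iteration for minimization (the test function below is made up for illustration, not from the notes):

func = lambda x: x**4 - 3*x**2 + 2       # hypothetical function with minima at +/- sqrt(3/2)
dfunc = lambda x: 4*x**3 - 6*x
d2func = lambda x: 12*x**2 - 6

xk = 2.0
for i in range(6):
    xk = xk - dfunc(xk)/d2func(xk)       # Newton step: -f'(x)/f''(x)
xk, np.sqrt(1.5)                         # converges to the minimum at sqrt(3/2)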

Second order methods

Second order methods require calculation of the Hessian (either provided analytically or approximated using finite differences). For efficiency, the Hessian is not inverted directly; instead the Newton system $H \Delta x = -\nabla f$ is solved, for example using conjugate gradient. An example of a second order method in the optimize package is Newton-CG.


In [69]:
from scipy.optimize import rosen, rosen_der, rosen_hess

In [70]:
ps = [x0]
opt.minimize(rosen, x0, method='Newton-CG', jac=rosen_der, hess=rosen_hess, callback=reporter)


Out[70]:
     fun: 1.3642782750354208e-13
     jac: array([ 0., -0.])
 message: 'Optimization terminated successfully.'
    nfev: 38
    nhev: 26
     nit: 26
    njev: 63
  status: 0
 success: True
       x: array([ 1.,  1.])

In [71]:
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T));


First order methods

As calculating the Hessian is computationally expensive, first order methods that use only the first derivatives are sometimes preferred. Quasi-Newton methods use functions of the first derivatives to approximate the inverse Hessian. A well-known example of the quasi-Newton class of algorithms is BFGS, named after the initials of its creators. As usual, the first derivatives can either be provided via the jac= argument or approximated by finite difference methods.


In [72]:
ps = [x0]
opt.minimize(rosen, x0, method='BFGS', callback=reporter)


Out[72]:
      fun: 9.48988612333806e-12
 hess_inv: array([[ 0.5  ,  1.   ],
       [ 1.   ,  2.005]])
      jac: array([ 0., -0.])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 540
      nit: 56
     njev: 132
   status: 2
  success: False
        x: array([ 1.,  1.])

In [73]:
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T));


Zeroth order methods

Finally, there are some optimization algorithms not based on the Newton method, but on other heuristic search strategies that do not require any derivatives, only function evaluations. One well-known example is the Nelder-Mead simplex algorithm.


In [74]:
ps = [x0]
opt.minimize(rosen, x0, method='nelder-mead', callback=reporter)


Out[74]:
 final_simplex: (array([[ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.]]), array([ 0.,  0.,  0.]))
           fun: 5.262756878429089e-10
       message: 'Optimization terminated successfully.'
          nfev: 162
           nit: 85
        status: 0
       success: True
             x: array([ 1.,  1.])

In [75]:
ps = np.array(ps)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X, Y, Z, np.arange(10)**5, cmap='jet')
plt.plot(ps[:, 0], ps[:, 1], '-ro')
plt.subplot(122)
plt.semilogy(range(len(ps)), rosen(ps.T));