ESE - Numerical Methods I: Solution of Nonlinear Equations

A nonlinear aquifer model

Most systems in earth science are nonlinear in nature, meaning they can't be expressed in the simple form

\begin{equation} y = a + kx \end{equation}

or, explicitly expressed as a function

\begin{equation} f(x) = a + kx \end{equation}

where $f(x)$ is a quantity of interest that depends on some variable, $x$, which we can quantify through measurement or estimate from observations. Rather, realistic systems, when expressed as equations, contain terms like

\begin{equation} x^2,\; x^3,\; \tan(x),\; \sin(x),\; \ln(x),\; \mathrm{erf}(x),\; \ldots \end{equation}

These terms render the system nonlinear, i.e., not linearly dependent on a variable. To motivate this module, we start with basic modeling steps:

  1. identify relevant processes in a system
  2. capture these processes in a more abstract model
  3. derive governing equations for the model
  4. solve for the quantities of interest

Let's take care of the usual imports and settings first. The scipy module we'll use today is scipy.optimize, which we import as scop. Our examples will cover

scop.bisect
scop.newton
scop.ridder

In [1]:
%pylab inline
import esenummet
import numpy as np
import scipy.special as scis
# the main python module we'll look at here
import scipy.optimize as scop
import matplotlib.pyplot as plt
# plot parameters
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['font.family'] = 'serif'
matplotlib.rcParams['figure.figsize'] = 9,7


Populating the interactive namespace from numpy and matplotlib

The system we look at is an example from hydrology. We consider a simple aquifer on an island somewhere in a lake:

The lake is considered of quasi-infinite extent and causes water to saturate the body of the island. If this were the entire description of our model, the resulting water table on the island would be at the same level as the free water surface of the lake. On the island, however, a process called evapotranspiration (ET) takes place. It is the sum of evaporation and plant transpiration due to exposed land surface and hydrophilic plants on the island. This evapotranspiration drains the aquifer at a constant rate, causing the water table to drop and water to continuously enter from the surrounding lake.

A more abstract model describing our aquifer system contains three cells:

The left and right cells represent the static water level of the lake's free water, $h_L$, which we can measure. The center cell describes the island aquifer and its average water level $h$, which we want to find (quantification of the water table is an important task in hydrology, especially for groundwater development projects).

We can now express our system in the form of equations we can solve. First, in a sustainable system the inflow rate from the lake and the water loss rate due to evapotranspiration balance:

\begin{equation} 2 Q - ET = 0 \end{equation}

Next, the rates $Q$ and $ET$ need to be expressed. We find

\begin{equation} Q = c (h_L - h) \end{equation}

where $c$ is a property describing the hydraulic conductivity of the island-forming material. We immediately see that $Q$ is a linear term where $a = c h_L$ and $k = c$. The evapotranspiration term, however, has a nonlinear dependence on $h$:

\begin{equation} ET(h) = et_{max} erf \left( \frac{4h}{h_L} \right) \end{equation}

where $et_{max}$ is the maximum evapotranspiration rate (a constant) for when the water table is above a certain height and $erf$ is the error function. Our equation, to be solved for $h$, now reads

\begin{equation} 2 c (h_L - h) - et_{max} erf \left( \frac{4h}{h_L} \right) = 0 \end{equation}

Before we take a closer look at solution strategies, let's define some constants:


In [2]:
c = 1.
h_L = 10.
et_max = 15.

And, based on these, we can define the nonlinear evapotranspiration function

\begin{equation} ET(h) = et_{max} erf \left( \frac{4h}{h_L} \right) \end{equation}

and plot it to get a better idea of its implications:


In [3]:
def ET(h):
    return et_max*scis.erf(4*h/h_L)

In [4]:
h = np.linspace(0, h_L, 50)
et = ET(h)
plt.plot(h, et, color='r')
plt.text(8,ET(8)+1, '$et_{max}$', fontsize=20)
plt.ylim(0,2*et_max)
plt.xlabel('$h$')
plt.ylabel('$ET(h)$')
plt.show()


This function dictates that for a water table of $h = 0$ on our island, evapotranspiration ceases. Above a certain level, $h \approx 4$, it reaches its maximum value $et_{max}$.

Recalling that the final equation reads

\begin{equation} 2 c (h_L - h) - et_{max} erf \left( \frac{4h}{h_L} \right) = 2 c (h_L - h) - ET(h) = 0 \end{equation}

we will now explore two means to solve for $h$: the Picard iteration and the Newton method.

Picard iteration

To express $h$ we can rewrite the governing equation to read

\begin{equation} h = \frac{2 c h_L - ET(h)}{2 c} \end{equation}

This form of the equation allows us to insert an initial guess for $h$ in the right hand side (RHS) and obtain a new $h$, the left hand side (LHS) result. We can then use this new value of $h$ as an improved guess in the RHS to obtain yet another value for $h$, and repeat this procedure. You should be familiar with this procedure from A-levels for problems like $0 = x - \tan(x)$. To avoid confusion, we will denote the RHS expression as $f_p(h)$, the function we evaluate in the Picard iteration approach.

\begin{equation} f_p(h) = \frac{2 c h_L - ET(h)}{2 c} \end{equation}

In [5]:
def fp(h):
    return (2*c*h_L - ET(h))/(2*c)

A formal description of this solution strategy looks like

guess x[0]
while abs(x[i] - x[i-1]) > tolerance:
    x[i] = f_p(x[i-1])

or, more graphically:

We use as an initial guess $x_0 = h_0$:


In [6]:
h_0 = h_L/2.

A Python implementation of the entire algorithm is as simple as:


In [7]:
h = h_0
tol = h_L/50000.
fp_p = []
h_p = []
while 1:
    print 'Iteration {0:2d} yields h = {1:7.5f}'.format(len(h_p), h)
    h_p.append(h)
    h = fp(h)
    fp_p.append(h)
    if abs(h_p[-1]-fp_p[-1]) < tol:
        break
print 'Iteration {0:2d} yields h = {1:7.5f}'.format(len(h_p), h)


Iteration  0 yields h = 5.00000
Iteration  1 yields h = 2.53508
Iteration  2 yields h = 3.63666
Iteration  3 yields h = 2.79750
Iteration  4 yields h = 3.35151
Iteration  5 yields h = 2.93480
Iteration  6 yields h = 3.22660
Iteration  7 yields h = 3.00974
Iteration  8 yields h = 3.16487
Iteration  9 yields h = 3.05052
Iteration 10 yields h = 3.13310
Iteration 11 yields h = 3.07252
Iteration 12 yields h = 3.11647
Iteration 13 yields h = 3.08433
Iteration 14 yields h = 3.10770
Iteration 15 yields h = 3.09063
Iteration 16 yields h = 3.10306
Iteration 17 yields h = 3.09399
Iteration 18 yields h = 3.10060
Iteration 19 yields h = 3.09578
Iteration 20 yields h = 3.09929
Iteration 21 yields h = 3.09673
Iteration 22 yields h = 3.09860
Iteration 23 yields h = 3.09723
Iteration 24 yields h = 3.09823
Iteration 25 yields h = 3.09750
Iteration 26 yields h = 3.09803
Iteration 27 yields h = 3.09765
Iteration 28 yields h = 3.09793
Iteration 29 yields h = 3.09772
Iteration 30 yields h = 3.09787

If we now plot the succession of $h_{i-1}$ and $h_{i}$ in the form of $h$ and $f_p$, respectively, we see a typical linear convergence pattern of under-/overestimation closing in on the solution $h = 3.0978$, the point where $f_p(h) = h$.


In [8]:
esenummet.plot_picard_convergence_pattern(h_p, fp_p)
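
We can also check the linear convergence numerically: for a fixed-point iteration the ratio of successive errors $|h_i - h^*|$ tends to a constant, namely $|f_p'(h^*)|$. A minimal sketch, reusing the lists h_p and fp_p from the loop above and taking the last value of fp_p as the reference solution:

# sketch: error ratios of the first Picard iterates (fp_p[-1] serves as reference solution)
h_star = fp_p[-1]
err = np.abs(np.array(h_p) - h_star)
# use only the early iterates; near the end the error is comparable to the tolerance
for i in range(1, 11):
    print 'error ratio at iteration {0:2d}: {1:5.3f}'.format(i, err[i]/err[i-1])

The ratios oscillate around and slowly approach a constant below one, the hallmark of linear convergence.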


Newton method

Picard's iterative algorithm uses only one part of a function's information: its value, $f(x)$.

Newton's method additionally uses $f'(x)$ to infer the trend of the function in the vicinity of $x$. This slope, together with the function value $f(x)$, is used to find the intersection of the tangent at $x$ with zero to get an improved guess of the root. The formula can be derived from the Taylor series expansion:

\begin{equation} f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1}-x_i) + \mathcal{O}\left((x_{i+1} - x_i)^2\right) \end{equation}

Let $f(x_{i+1}) = 0$ to find

\begin{equation} 0 = f(x_i) + f'(x_i)(x_{i+1}-x_i) + \mathcal{O}\left((x_{i+1} - x_i)^2\right) \end{equation}

Assuming $x_{i+1}$ is close to $x_{i}$, we drop the higher-order term and solve for $x_{i+1}$ to find

\begin{equation} x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} \end{equation}

which is the Newton-Raphson formula.

The expression of the associated error indicates quadratic convergence:

\begin{equation} \epsilon_{i+1} = -\frac{f''(x)}{2f'(x)} \epsilon_{i}^2 \end{equation}
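
We can observe this behaviour on a simple test case; a minimal sketch using $f(x) = x^2 - 2$, whose root $\sqrt{2}$ is known exactly, so the errors $\epsilon_i$ can be computed directly:

# sketch: quadratic convergence of the Newton-Raphson iteration for x**2 - 2 = 0
def g(x):
    return x**2 - 2.
def dg(x):
    return 2.*x
x = 2.                                # initial guess
for i in range(4):
    x_new = x - g(x)/dg(x)            # Newton-Raphson step
    eps_old = abs(x - np.sqrt(2.))
    eps_new = abs(x_new - np.sqrt(2.))
    print 'eps_new/eps_old**2 = {0:6.4f}'.format(eps_new/eps_old**2)
    x = x_new

The printed ratio settles at the constant $\left|\frac{f''(x)}{2f'(x)}\right|$ evaluated at the root: each iteration roughly squares the error.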

A pseudo code for the algorithm looks like this:

guess x[0]
while abs(x[i] - x[i-1]) > tolerance:
    compute dfdx(x[i-1])
    x[i] = x[i-1] - f(x[i-1])/dfdx(x[i-1])

To apply this strategy to our aquifer problem, we must express it in a root-finding form:

\begin{equation} h = \frac{2 c h_L - ET(h)}{2 c} \end{equation}

\begin{equation} 0 = \frac{2 c h_L - ET(h)}{2 c} - h = f_n(h) \end{equation}

\begin{equation} f_n(h) = f_p(h) - h \end{equation}

In [9]:
def fn(h):
    return fp(h) - h

and plot it to get an idea of its behavior:


In [10]:
h = np.linspace(0., h_L, 100)
f = fn(h)

In [11]:
plt.plot(h, f, color='r')
plt.plot([0,10],[0.,0.],color='gray',ls='-.',alpha=0.75)
plt.xlabel('$h$')
plt.ylabel('$f_n(h)$')
plt.title('Root finding formulation')
plt.show()


This graph shows us that the root, $f_n(h)=0$, lies somewhere around $h \approx 3.1$, consistent with the result of the Picard algorithm.

We can compute the derivative in a straightforward manner using some finite difference $dh$:

\begin{equation} f_n'(h) \approx \frac{f_n(h+dh) - f_n(h)}{dh} \end{equation}
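
As a quick sanity check of this forward-difference approximation (a sketch using $\sin(x)$, whose derivative $\cos(x)$ we know exactly):

# sketch: forward finite difference vs. the exact derivative of sin(x) at x = 1
x_test = 1.0
for dx_test in [1e-1, 1e-2, 1e-3]:
    approx = (np.sin(x_test + dx_test) - np.sin(x_test)) / dx_test
    print 'dx = {0:7.1e} gives error {1:8.2e}'.format(dx_test, abs(approx - np.cos(x_test)))

The error shrinks roughly in proportion to $dx$, which is accurate enough for the Newton updates below.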

The final algorithm in Python looks like:


In [12]:
dh = h_L/100.
h = h_0
h_n = [h]
while 1:
    dfdh = (fn(h+dh) - fn(h)) / dh
    h = h - fn(h)/dfdh
    h_n.append(h)
    print 'Iteration {0:1d} at h = {1:7.5f} yields dfdh = {2:5.3f} and new h = {3:7.5f}'.format(len(h_n)-1, h_n[-2], dfdh, h)
    if abs(h_n[-1]-h_n[-2]) < tol:
        break


Iteration 1 at h = 5.00000 yields dfdh = -1.057 and new h = 2.66859
Iteration 2 at h = 2.66859 yields dfdh = -2.038 and new h = 3.06856
Iteration 3 at h = 3.06856 yields dfdh = -1.714 and new h = 3.09824
Iteration 4 at h = 3.09824 yields dfdh = -1.693 and new h = 3.09780
Iteration 5 at h = 3.09780 yields dfdh = -1.694 and new h = 3.09781

So the Newton method converges in 5 iterations compared to 30 for the Picard method. Inside a Newton step, however, we evaluate $f_n(h)$ twice (at $h$ and $h+dh$), while a Picard iteration requires just one function evaluation. The quadratic convergence of the Newton method therefore comes at a cost per iteration, but still pays off in terms of overall efficiency.

A graphic illustration of the Newton method shows the iterative root solution for local tangents:


In [13]:
esenummet.plot_newton_convergence_pattern(fn, dh, h_0, tol)


As always, there are much more efficient, robust and tested algorithms with bells and whistles available that we should use - here scipy.optimize.newton.


In [14]:
print scop.newton(fn, h_0)


3.09780767029

This finds the same result as our minimalistic algorithm.
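
scop.newton also accepts an analytic derivative through its fprime argument (without it, scipy falls back to a secant-type update). As a sketch, using the chain rule with $u = 4h/h_L$ and $\frac{d}{du} erf(u) = \frac{2}{\sqrt{\pi}} e^{-u^2}$, the derivative of our $f_n$ works out to:

# sketch: analytic derivative of fn(h) = (2*c*h_L - ET(h))/(2*c) - h, worked out by hand
def dfn_dh(h):
    dET_dh = et_max * 2./np.sqrt(np.pi) * np.exp(-(4.*h/h_L)**2) * 4./h_L
    return -dET_dh/(2.*c) - 1.

print scop.newton(fn, h_0, fprime=dfn_dh)

which converges to the same root using genuine Newton-Raphson steps.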

Problems

Using scop.newton and an initial guess of 10, find the square-root of 612 by solving the equation

\begin{equation} x^2 = 612 \end{equation}

Step 1: reformulate the equation as a root-finding problem, i.e.,

\begin{equation} x^2 -612 = 0 = f_n(x) \end{equation}

In [15]:
def fn_sqrt(x):
    return x**2 -612

Step 2: solve using scop.newton and the initial guess $x_0$


In [16]:
print scop.newton(fn_sqrt, 10.)


24.7386337537

which can be shown to be equal to


In [17]:
print np.sqrt(612)


24.7386337537

Root finding

After this brief introduction, we'll take a more in-depth look at root-finding approaches.

Let $f(x) = x^3 - 10 x^2 + 5$


In [18]:
def foo(x):
    return x**3 - 10*x**2 + 5

In [19]:
x = np.linspace(0,1,6)
f = foo(x)

In [20]:
plt.plot(x, f, marker='x', color='r')
plt.plot(x,np.zeros_like(x),color='gray',ls='-.',alpha=0.75)
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.show()


The root of $f(x)$ seems to lie between $x=0.6$ and $x=0.8$.

This is supported by the numeric values of $f(x)$, which change sign across this interval:


In [21]:
print np.array([x,f]).T


[[ 0.     5.   ]
 [ 0.2    4.608]
 [ 0.4    3.464]
 [ 0.6    1.616]
 [ 0.8   -0.888]
 [ 1.    -4.   ]]

Incremental search (root bracketing)

We can devise a root-bracketing algorithm that marches along a function $f$ in a given increment $dx$ from $f(a)$ to $f(b)$ and brackets the root in the subinterval $[x_1,x_2]$ by identifying a sign change, i.e., $f(x_1) f(x_2) < 0$:


In [22]:
def rootsearch(f, a, b, dx):
    """
    Incremental search algorithm. Searches the interval (a,b) in increments dx for
    the bounds (x1,x2) of the smallest root of f(x).
    """
    x1 = a
    f1 = f(a)
    x2 = a + dx
    f2 = f(x2)
    print 'f({0:3.1f}) * f({1:3.1f}) = {2:5.1f}'.format(x1,x2,f1*f2)
    while True:
        x1 = x2
        f1 = f2
        x2 = x1 + dx
        f2 = f(x2)
        print 'f({0:3.1f}) * f({1:3.1f}) = {2:5.1f}'.format(x1,x2,f1*f2)
        if f1*f2 < 0.:
            break
    return x1,x2

In [23]:
a, b = rootsearch(foo, 0., 1., 0.2)
print 'root bracketed between {0:3.1f} and {1:3.1f}'.format(a, b)


f(0.0) * f(0.2) =  23.0
f(0.2) * f(0.4) =  16.0
f(0.4) * f(0.6) =   5.6
f(0.6) * f(0.8) =  -1.4
root bracketed between 0.6 and 0.8

Graphically this marching algorithm can be illustrated as:


In [24]:
esenummet.plot_root_bracketing_pattern(foo, 0., 1., 0.2)


Bisection

Once we know a single root can be found in $[x_1,x_2]$, we can close in on it with an algorithm similar to incremental search, but with a smart switch to gradually halve $dx$ and change direction depending on the sign of $f(x_1)f(x_2)$.

If there is a root in the interval $[x_1, x_2]$, then $f(x_1)f(x_2) < 0$. In order to halve the interval, we compute $f(x_3)$, where $x_3 = \frac{1}{2} (x_1 + x_2)$ is the midpoint of the interval. If $f(x_2)f(x_3) < 0$, then the root must be in $[x_3, x_2]$, and we record this by replacing the original bound $x_1$ by $x_3$. Otherwise, the root lies in $[x_1, x_3]$, in which case $x_2$ is replaced by $x_3$. In either case, the new interval $[x_1, x_2]$ is half the size of the original interval. The bisection is repeated until the interval has been reduced to a prescribed tolerance.

In Python:


In [25]:
def bisection(f, x1, x2, tol=1.0e-5):
    """
    Finds a root of f(x) = 0 by bisection.
    The root must be bracketed in (x1,x2).
    """
    f1 = f(x1)
    f2 = f(x2)
    while abs(x1-x2) > tol:
        x3 = 0.5*(x1 + x2)
        f3 = f(x3)
        print 'x = {0:8.6f}, f(x) = {1:13.10f} between [{2:8.6f},{3:8.6f}]'.format(x3,f3,x1,x2)
        if f2*f3 < 0.0:
            x1 = x3
            f1 = f3
        else:
            x2 = x3
            f2 = f3
    return (x1 + x2)/2.0

Since the incremental search already bounds the root in $[a,b]$, we can use these bounds in the bisection algorithm:


In [26]:
root = bisection(foo, a, b)
print root


x = 0.700000, f(x) =  0.4430000000 between [0.600000,0.800000]
x = 0.750000, f(x) = -0.2031250000 between [0.700000,0.800000]
x = 0.725000, f(x) =  0.1248281250 between [0.700000,0.750000]
x = 0.737500, f(x) = -0.0379316406 between [0.725000,0.750000]
x = 0.731250, f(x) =  0.0437531738 between [0.725000,0.737500]
x = 0.734375, f(x) =  0.0029869080 between [0.731250,0.737500]
x = 0.735938, f(x) = -0.0174533424 between [0.734375,0.737500]
x = 0.735156, f(x) = -0.0072284598 between [0.734375,0.735938]
x = 0.734766, f(x) = -0.0021195864 between [0.734375,0.735156]
x = 0.734570, f(x) =  0.0004339582 between [0.734375,0.734766]
x = 0.734668, f(x) = -0.0008427398 between [0.734570,0.734766]
x = 0.734619, f(x) = -0.0002043722 between [0.734570,0.734668]
x = 0.734595, f(x) =  0.0001147976 between [0.734570,0.734619]
x = 0.734607, f(x) = -0.0000447861 between [0.734595,0.734619]
x = 0.734601, f(x) =  0.0000350060 between [0.734595,0.734607]
0.734603881836
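
Because the bracket is halved in every step, the number of bisections required can be predicted up front from the initial interval and the tolerance; a quick check (assuming the default tol=1.0e-5 of our bisection function):

# sketch: predicted number of bisection steps for the bracket (a, b) and tol = 1e-5
n_steps = int(np.ceil(np.log2(abs(b - a)/1.0e-5)))
print 'expected number of bisection steps: {0}'.format(n_steps)

which matches the 15 steps printed above.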

Again, this function has been implemented in scipy:


In [27]:
print scop.bisect(foo, a, b)


0.73460350779

An illustration best shows the simplicity of the algorithm:


In [28]:
esenummet.plot_bisection_pattern(foo, a, b)


Secant and false position (Ridder's method)
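
Ridder's method keeps a bracket $[a,b]$ around the root like bisection, but sharpens the estimate: the function is evaluated at the midpoint $c = \frac{1}{2}(a+b)$ of the bracket, and the next estimate is taken as (this is the formula implemented below)

\begin{equation} x = c + (c - a)\, \mathrm{sign}\left( f(a) - f(b) \right) \frac{f(c)}{\sqrt{f(c)^2 - f(a) f(b)}} \end{equation}

which always lies inside $[a,b]$, so the root stays bracketed at every iteration while convergence is much faster than for plain bisection.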


In [29]:
def ridder(f, a, b, tol=1.0e-9, max_iter=30):
    """
    Finds a root of f(x) = 0 with Ridder’s method to a tolerance of tol.
    The root must be bracketed in (a,b).
    """
    fa = f(a)
    fb = f(b)
    for i in range(max_iter):
        # Compute the improved root x from Ridder’s formula
        c = 0.5*(a + b)
        fc = f(c)
        s = np.sqrt(fc**2 - fa*fb)
        dx = (c - a)*fc/s
        if (fa - fb) < 0.0:
            dx = -dx
        x = c + dx
        fx = f(x)
        print 'Iteration {0:1d} at x = {1:7.5f} yields fx = {2:9.6f}'.format(i, x, fx)
        # Test for convergence
        if i > 0:
            if abs(x - xOld) < tol*max(abs(x),1.0):
                return x
        xOld = x
        # Re-bracket the root as tightly as possible
        if fc*fx > 0.0:
            if fa*fx < 0.0:
                b = x
                fb = fx
            else:
                a = x
                fa = fx
        else:
            a = c
            b = x
            fa = fc
            fb = fx

In [30]:
print 'Bracketing the root:'
a, b = rootsearch(foo, 0., 1., 0.2)
print a, b

print '\nFinding the root using ridders method:'
root = ridder(foo, a, b)
print root


Bracketing the root:
f(0.0) * f(0.2) =  23.0
f(0.2) * f(0.4) =  16.0
f(0.4) * f(0.6) =   5.6
f(0.6) * f(0.8) =  -1.4
0.6 0.8

Finding the root using ridders method:
Iteration 0 at x = 0.73469 yields fx = -0.001066
Iteration 1 at x = 0.73460 yields fx = -0.000000
Iteration 2 at x = 0.73460 yields fx = -0.000000
Iteration 3 at x = 0.73460 yields fx = -0.000000
0.734603507789

Again, this function has been implemented in scipy:


In [31]:
print scop.ridder(foo, a, b)


0.734603507789

Newton-Raphson

Again, the essence of a numerically implemented Newton-Raphson algorithm is the iteration

\begin{equation} x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} \end{equation}

and the finite-difference approximation of the gradient $f'(x)$ used therein

\begin{equation} f'(x) \approx \frac{f(x+dx) - f(x)}{dx} \end{equation}

which in Python can be coded like:


In [32]:
def newton_raphson(f, x0, tol=1.48E-08, dx=1.0E-9):
    x = x0
    x_n = [x]
    while 1:
        dfdx = (f(x+dx) - f(x)) / dx
        x = x - f(x)/dfdx
        x_n.append(x)
        print 'Iteration {0:1d} at x = {1:7.5f} yields dfdx = {2:7.3f} and new x = {3:7.5f}'.format(len(x_n)-1, x_n[-2], dfdx, x)
        if abs(x_n[-1]-x_n[-2]) < tol:
            break
    return x

With the initial guess $x_0 = 0.2$ this yields:


In [33]:
x0 = 0.2
print newton_raphson(foo, x0)


Iteration 1 at x = 0.20000 yields dfdx =  -3.880 and new x = 1.38763
Iteration 2 at x = 1.38763 yields dfdx = -21.976 and new x = 0.86054
Iteration 3 at x = 0.86054 yields dfdx = -14.989 and new x = 0.74259
Iteration 4 at x = 0.74259 yields dfdx = -13.197 and new x = 0.73464
Iteration 5 at x = 0.73464 yields dfdx = -13.074 and new x = 0.73460
Iteration 6 at x = 0.73460 yields dfdx = -13.073 and new x = 0.73460
0.734603507789

Again, this function has been implemented in scipy:


In [34]:
print scop.newton(foo, x0)


0.734603507789

The convergence pattern illustrated:


In [35]:
esenummet.plot_newton_convergence_pattern(foo, 1E-2, x0, 1E-3, 2, loc0=3, loc1=4, loc2=2, zoom=6, ixmin=0.7, ixmax=0.8, iymin=-0.5, iymax=0.5)


Pitfalls


In [36]:
matplotlib.rcParams['figure.figsize'] = 12,8

Let's first re-illustrate the basic concept of Newton root-finding for a well-behaved function

\begin{equation} f(x) = x^4 - 5 \end{equation}

In [37]:
def f0(x):
    return  x**4 - 5

In [38]:
x0 = 4.0
esenummet.plot_newton_convergence_pattern(f0, 1E-8, x0, 1E-7, 3, loc0=2, loc1=3, loc2=2, zoom=5, ixmin=1.45, ixmax=1.85, iymin=-5, iymax=15)


More complex equations, or systems thereof, provide plenty of situations that prevent convergence altogether or cause convergence to an undesired root. Their solution strongly depends on a good initial guess. For example,

\begin{equation} f(x) = x \sin(\pi x) - \exp(-x) \end{equation}

In [39]:
def f1(x):
    return  x*np.sin(np.pi*x)-np.exp(-x)

In [40]:
x = np.linspace(-1, 2, 100)
y = f1(x)

fig, ax = plt.subplots()
ax.plot(x, y, color='r',zorder=0)
xs = [0.57, 0.83, -0.27]
texts = ['$root_I$', '$root_{II}$', '$nearly singular/overflow$']
for i in range(len(xs)):
    ax.scatter([xs[i]], [f1(xs[i])],marker='x',s=60)
    ax.text(xs[i]+0.05, f1(xs[i])-0.1, texts[i])
ax.plot(x,np.zeros_like(x),color='gray',ls='-.',alpha=0.75)
ax.set_xlabel('$x$')
ax.set_ylabel('$f_n(x)$')
plt.title('$f(x)= x sin(\pi x) - exp(-x)$')
plt.show()


Multiple roots

For many functions multiple roots exist, and which root the algorithm converges to depends on the initial value (here 0.0 and 0.1).
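
A quick numerical check with scop.newton (a sketch; the two starting values are the same ones used in the plots below):

# sketch: the converged root of f1 depends on the starting value
for x0 in [0.0, 0.1]:
    print 'x_0 = {0:3.1f} converges to root {1:5.2f}'.format(x0, scop.newton(f1, x0))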


In [41]:
x0 = 0.
esenummet.plot_newton_convergence_pattern(f1, 1E-8, x0, 1E-7, 3, inset=False)



In [42]:
x0 = 0.1
esenummet.plot_newton_convergence_pattern(f1, 1E-8, x0, 1E-7, 3, inset=False)


Nearly Singular/Numerical Overflow

The gradient $f'(x)$ is nearly $0$, so the next iterate is offset by orders of magnitude, perhaps even too large to be representable (overflow).
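
A single step from such a point illustrates the problem; a numerical sketch using the same forward-difference gradient as before:

# sketch: one Newton step from a point where the gradient of f1 is nearly zero
x_near = -0.268
dx_near = 1.0e-8
dfdx = (f1(x_near + dx_near) - f1(x_near)) / dx_near
print 'f1({0}) = {1:7.4f}, gradient = {2:9.2e}, next x = {3:9.1f}'.format(x_near, f1(x_near), dfdx, x_near - f1(x_near)/dfdx)

The next iterate lands far outside the plotted interval; with an even flatter gradient the step could overflow entirely.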


In [43]:
x0 = -0.268
esenummet.plot_newton_convergence_pattern(f1, 1E-8, x0, 1E-7, 3, inset=False, maxiter=1)


Oscillation

The iterations are trapped in a region with gradients of opposite sign and no root.


In [44]:
def f2(x):
    return 2*x + x*np.sin(x-3) - 5

In [45]:
x = np.linspace(-4, 10, 100)
y = f2(x)

In [46]:
fig, ax = plt.subplots()
ax.plot(x, y, color='r',zorder=0)
xs = [2.8, -2.8, 1.]
texts = ['$root$', '$nearly singular/overflow$', '$oscillation$']
for i in range(len(xs)):
    ax.scatter([xs[i]], [f2(xs[i])],marker='x',s=60)
    ax.text(xs[i], f2(xs[i])-2, texts[i])
ax.plot(x,np.zeros_like(x),color='gray',ls='-.',alpha=0.75)
ax.set_xlabel('$x$')
ax.set_ylabel('$f_n(x)$')
plt.title('$f(x)= 2x + x sin(x-3) - 5$')
plt.show()



In [47]:
x0 = 1
esenummet.plot_newton_convergence_pattern(f2, 1E-8, x0, 1E-7, 2, inset=False, maxiter=14)


Problems

For the equation used above,

\begin{equation} f(x) = x \sin(\pi x) - \exp(-x) \end{equation}

find the smallest initial guess $x_0$ in

np.linspace(0.5,1.,15)

for which the Newton algorithm converges to the root $0.82$ rather than $0.58$.


In [48]:
def f(x):
    return  x*np.sin(np.pi*x)-np.exp(-x)

xspace = np.linspace(0.5, 1., 15)

for x in xspace:
    print 'x_0 = {0:4.2f} yields root {1:5.2f}'.format(x, scop.newton(f,x))


x_0 = 0.50 yields root  0.58
x_0 = 0.54 yields root  0.58
x_0 = 0.57 yields root  0.58
x_0 = 0.61 yields root  0.58
x_0 = 0.64 yields root  0.58
x_0 = 0.68 yields root  0.58
x_0 = 0.71 yields root  0.82
x_0 = 0.75 yields root  0.82
x_0 = 0.79 yields root  0.82
x_0 = 0.82 yields root  0.82
x_0 = 0.86 yields root  0.82
x_0 = 0.89 yields root  0.82
x_0 = 0.93 yields root  0.82
x_0 = 0.96 yields root  0.82
x_0 = 1.00 yields root  0.82

The equation used above,

\begin{equation} f(x) = x \sin(\pi x) - \exp(-x) \end{equation}

has one root $(0.58)$ in $[-0.7,0.7]$. How many different roots does the Newton algorithm scop.newton find with initial guesses $x_0$ in

np.linspace(-0.7,0.7,15)

Do scop.bisect and scop.ridder find the root with a, b = -0.7, 0.7?


In [49]:
xspace = np.linspace(-0.7, 0.7, 15)

for x in xspace:
    print 'x_0 = {0:5.2f} yields root {1:5.2f}'.format(x, scop.newton(f,x))


x_0 = -0.70 yields root  2.02
x_0 = -0.60 yields root  0.58
x_0 = -0.50 yields root  1.27
x_0 = -0.40 yields root  0.82
x_0 = -0.30 yields root -0.30
x_0 = -0.20 yields root  0.82
x_0 = -0.10 yields root  2.02
x_0 =  0.00 yields root  0.82
x_0 =  0.10 yields root  0.58
x_0 =  0.20 yields root  0.58
x_0 =  0.30 yields root  0.58
x_0 =  0.40 yields root  0.58
x_0 =  0.50 yields root  0.58
x_0 =  0.60 yields root  0.58
x_0 =  0.70 yields root  0.58

In [50]:
a, b = -0.7, 0.7
print 'bisection: {0:5.2f}'.format(scop.bisect(f, a, b))
print 'ridder:    {0:5.2f}'.format(scop.ridder(f, a, b))


bisection:  0.58
ridder:     0.58
