ILI285 - Computación Científica I / INF285 - Computación Científica

Roots of 1D equations

[S]cientific [C]omputing [T]eam

Version: 1.32


In [1]:
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
%matplotlib inline
from ipywidgets import interact
from ipywidgets import widgets
sym.init_printing()
from scipy import optimize

Introduction

Hello again! In this document we're going to learn how to find a 1D equation's solution using numerical methods. First, let's start with the definition of a root:

Definition: The function $f(x)$ has a root in $x = r$ if $f(r) = 0$.

An example: Let's say we want to solve the equation $x + \log(x) = 3$. We can rearrange the equation: $x + \log(x) - 3 = 0$. That way, to find its solution we can find the root of $f(x) = x + \log(x) - 3$. Now let's study some numerical methods to solve these kinds of problems.

Defining a function $f(x)$


In [2]:
f = lambda x: x+np.log(x)-3

Finding $r$ using sympy


In [3]:
y = sym.Symbol('y')
fsym = lambda y: y+sym.log(y)-3
r_all=sym.solve(sym.Eq(fsym(y), 0), y)
r=r_all[0].evalf()
print(r)
print(r_all)


2.20794003156932
[LambertW(exp(3))]

In [4]:
def find_root_manually(r=2.0): 
    x = np.linspace(1,3,1000)
    plt.figure(figsize=(8,8))
    plt.plot(x,f(x),'b-')
    plt.grid()
    plt.ylabel('$f(x)$',fontsize=16)
    plt.xlabel('$x$',fontsize=16)
    plt.title('What is r such that $f(r)='+str(f(r))+'$? $r='+str(r)+'$',fontsize=16)
    plt.plot(r,f(r),'k.',markersize=20)
    plt.show()
interact(find_root_manually,r=(1e-5,3,1e-3))


Out[4]:
<function __main__.find_root_manually(r=2.0)>

Bisection Method

The bisection method finds a root of a continuous function $f$. To guarantee that a root exists, we look for an interval $[a,b]$ for which $f(a)\cdot f(b) < 0$. When continuity and this sign condition are both satisfied, the Intermediate Value Theorem ensures there is a value $r$, between $a$ and $b$, for which $f(r) = 0$. The method works as follows: start with such an interval (checking that the sign condition holds), split it at the midpoint $c$ into the two smaller intervals $[a,c]$ and $[c,b]$, and keep the half whose endpoints still have opposite signs. Keep splitting until the root is found exactly or the interval is smaller than the tolerance.


In [5]:
def bisect(f, a, b, tol=1e-8):
    fa = f(a)
    fb = f(b)
    i = 0
    # Checking the sign condition; if f(a)*f(b) >= 0 a root is not guaranteed
    if np.sign(fa*fb) >= 0:
        print('f(a)f(b)<0 not satisfied!')
        return None
  
    #Printing the evolution of the computation of the root
    print(' i |     a     |     c     |     b     |     fa    |     fc     |     fb     |   b-a')
    print('----------------------------------------------------------------------------------------')
    
    while(b-a)/2 > tol:
        c = (a+b)/2.
        fc = f(c)
        print('%2d | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f' %
              (i+1, a, c, b, fa, fc, fb, b-a))
        # Did we find the root?
        if fc == 0:
            print('f(c)==0')
            break
        elif np.sign(fa*fc) < 0:
            b = c
            fb = fc
        else:
            a = c
            fa = fc
        i += 1
        
    xc = (a+b)/2.
    return xc

In [6]:
## Finding a root of cos(x). What about if you change the interval?
#f = lambda x: np.cos(x)
## Another function
#f = lambda x: x**3-2*x**2+(4/3)*x-(8/27)
## Computing the cubic root of 7.
#f = lambda x: x**3-7
#bisect(f,0,2)
f = lambda x: x*np.exp(x)-3
#f2 = lambda x: np.cos(x)-x
bisect(f,0,3,tol=1e-13)


 i |     a     |     c     |     b     |     fa    |     fc     |     fb     |   b-a
----------------------------------------------------------------------------------------
 1 | 0.0000000 | 1.5000000 | 3.0000000 | -3.0000000 | 3.7225336 | 57.2566108 | 3.0000000
 2 | 0.0000000 | 0.7500000 | 1.5000000 | -3.0000000 | -1.4122500 | 3.7225336 | 1.5000000
 3 | 0.7500000 | 1.1250000 | 1.5000000 | -1.4122500 | 0.4652440 | 3.7225336 | 0.7500000
 4 | 0.7500000 | 0.9375000 | 1.1250000 | -1.4122500 | -0.6060099 | 0.4652440 | 0.3750000
 5 | 0.9375000 | 1.0312500 | 1.1250000 | -0.6060099 | -0.1077879 | 0.4652440 | 0.1875000
 6 | 1.0312500 | 1.0781250 | 1.1250000 | -0.1077879 | 0.1687856 | 0.4652440 | 0.0937500
 7 | 1.0312500 | 1.0546875 | 1.0781250 | -0.1077879 | 0.0280899 | 0.1687856 | 0.0468750
 8 | 1.0312500 | 1.0429688 | 1.0546875 | -0.1077879 | -0.0404419 | 0.0280899 | 0.0234375
 9 | 1.0429688 | 1.0488281 | 1.0546875 | -0.0404419 | -0.0063254 | 0.0280899 | 0.0117188
10 | 1.0488281 | 1.0517578 | 1.0546875 | -0.0063254 | 0.0108447 | 0.0280899 | 0.0058594
11 | 1.0488281 | 1.0502930 | 1.0517578 | -0.0063254 | 0.0022503 | 0.0108447 | 0.0029297
12 | 1.0488281 | 1.0495605 | 1.0502930 | -0.0063254 | -0.0020399 | 0.0022503 | 0.0014648
13 | 1.0495605 | 1.0499268 | 1.0502930 | -0.0020399 | 0.0001046 | 0.0022503 | 0.0007324
14 | 1.0495605 | 1.0497437 | 1.0499268 | -0.0020399 | -0.0009678 | 0.0001046 | 0.0003662
15 | 1.0497437 | 1.0498352 | 1.0499268 | -0.0009678 | -0.0004316 | 0.0001046 | 0.0001831
16 | 1.0498352 | 1.0498810 | 1.0499268 | -0.0004316 | -0.0001635 | 0.0001046 | 0.0000916
17 | 1.0498810 | 1.0499039 | 1.0499268 | -0.0001635 | -0.0000294 | 0.0001046 | 0.0000458
18 | 1.0499039 | 1.0499153 | 1.0499268 | -0.0000294 | 0.0000376 | 0.0001046 | 0.0000229
19 | 1.0499039 | 1.0499096 | 1.0499153 | -0.0000294 | 0.0000041 | 0.0000376 | 0.0000114
20 | 1.0499039 | 1.0499067 | 1.0499096 | -0.0000294 | -0.0000127 | 0.0000041 | 0.0000057
21 | 1.0499067 | 1.0499082 | 1.0499096 | -0.0000127 | -0.0000043 | 0.0000041 | 0.0000029
22 | 1.0499082 | 1.0499089 | 1.0499096 | -0.0000043 | -0.0000001 | 0.0000041 | 0.0000014
23 | 1.0499089 | 1.0499092 | 1.0499096 | -0.0000001 | 0.0000020 | 0.0000041 | 0.0000007
24 | 1.0499089 | 1.0499091 | 1.0499092 | -0.0000001 | 0.0000009 | 0.0000020 | 0.0000004
25 | 1.0499089 | 1.0499090 | 1.0499091 | -0.0000001 | 0.0000004 | 0.0000009 | 0.0000002
26 | 1.0499089 | 1.0499089 | 1.0499090 | -0.0000001 | 0.0000002 | 0.0000004 | 0.0000001
27 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000001 | 0.0000000 | 0.0000002 | 0.0000000
28 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000001 | -0.0000000 | 0.0000000 | 0.0000000
29 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
30 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
31 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
32 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
33 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
34 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
35 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
36 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
37 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
38 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
39 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
40 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
41 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
42 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
43 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
44 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
Out[6]:
$\displaystyle 1.049908894963977$

It's very important to define a concept called the convergence rate. This rate measures how fast a method approaches the root.

For bisection, the length of the interval (and hence the error bound) is exactly halved at each iteration, so the method converges linearly with rate $S = 0.5$.
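Since the interval is halved at every iteration, the number of steps bisection needs for a given tolerance can be predicted in advance. A minimal sketch (the helper `bisection_steps` is our own illustration; the values match the call above, $a=0$, $b=3$, tol $=10^{-13}$):

```python
import numpy as np

def bisection_steps(a, b, tol):
    # Number of iterations executed by the loop "while (b - a)/2 > tol":
    # the k-th iteration runs while (b - a)/2**k > tol.
    return int(np.ceil(np.log2((b - a) / tol))) - 1

print(bisection_steps(0, 3, 1e-13))  # the run above indeed took 44 iterations
```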

Cobweb Plot


In [7]:
def cobweb(x,g=None):
    min_x = np.amin(x)
    max_x = np.amax(x)
    
    plt.figure(figsize=(10,10))
    ax = plt.axes()
    plt.plot(np.array([min_x,max_x]),np.array([min_x,max_x]),'b-')
    for i in np.arange(x.size-1):
        delta_x = x[i+1]-x[i]
        head_length =  np.abs(delta_x)*0.04
        arrow_length = delta_x-np.sign(delta_x)*head_length
        ax.arrow(x[i], x[i], 0, arrow_length, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
        ax.arrow(x[i], x[i+1], arrow_length, 0, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
    
    if g is not None:
        y = np.linspace(min_x,max_x,1000)
        plt.plot(y,g(y),'r')
    
    plt.title('Cobweb diagram')
    plt.grid(True)
    plt.show()

Fixed Point Iteration

To understand the Fixed-Point Iteration, we first need the concept of a fixed point.

A fixed point of a function $g$ is a real number $r$ such that $g(r) = r$.

The Fixed-Point Iteration is based on this concept and works as follows:

\begin{equation} x_{0} = initial\_guess \\ x_{i+1} = g(x_{i})\end{equation}

To solve an equation with this method, you first rearrange it into the form $x = g(x)$. Iterating over the function $g(x)$ then yields not a root of $g$ itself, but a root of $f(x) = g(x) - x$ (or $f(x) = x - g(x)$). In the following example, we find the solution of $f(x) = x - \cos(x)$ by iterating over the function $g(x) = \cos(x)$.


In [8]:
def fpi(g, x0, k, flag_cobweb=False):
    x = np.empty(k+1)
    x[0] = x0
    error_i = np.nan
    
    print(' i |     x(i)     |    x(i+1)    ||x(i+1)-x(i)| | e_i/e_{i-1}')
    print('--------------------------------------------------------------')
    
    for i in range(k):
        x[i+1] = g(x[i])
        error_iminus1 = error_i
        error_i = abs(x[i+1]-x[i])
        print('%2d | %.10f | %.10f | %.10f | %.10f' %
              (i,x[i],x[i+1],error_i,error_i/error_iminus1))
    if flag_cobweb:
        cobweb(x,g)
    return x[-1]

In [24]:
g = lambda x: np.cos(x)
fpi2(g, 2, 20, True)


 i |     x(i)     |    x(i+1)    ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2
-----------------------------------------------------------------------------
 0 | 2.0000000000 | -0.4161468365 | 2.4161468365 | 0.0000000000 | 0.0000000000
 1 | -0.4161468365 | 0.9146533259 | 1.3308001624 | 0.5507944063 | 0.2279639623
 2 | 0.9146533259 | 0.6100652997 | 0.3045880261 | 0.2288758558 | 0.1719836398
 3 | 0.6100652997 | 0.8196106080 | 0.2095453083 | 0.6879630527 | 2.2586674253
 4 | 0.8196106080 | 0.6825058579 | 0.1371047501 | 0.6542964443 | 3.1224580961
 5 | 0.6825058579 | 0.7759946131 | 0.0934887552 | 0.6818783095 | 4.9734112712
 6 | 0.7759946131 | 0.7137247340 | 0.0622698791 | 0.6660681166 | 7.1245800092
 7 | 0.7137247340 | 0.7559287136 | 0.0422039796 | 0.6777591376 | 10.8842211877
 8 | 0.7559287136 | 0.7276347923 | 0.0282939213 | 0.6704088465 | 15.8849675651
 9 | 0.7276347923 | 0.7467496017 | 0.0191148094 | 0.6755800739 | 23.8772161593
10 | 0.7467496017 | 0.7339005972 | 0.0128490045 | 0.6722015485 | 35.1665315532
11 | 0.7339005972 | 0.7425675503 | 0.0086669531 | 0.6745233116 | 52.4961534769
12 | 0.7425675503 | 0.7367348584 | 0.0058326919 | 0.6729806736 | 77.6490502525
13 | 0.7367348584 | 0.7406662639 | 0.0039314055 | 0.6740293405 | 115.5605938426
14 | 0.7406662639 | 0.7380191412 | 0.0026471227 | 0.6733273142 | 171.2688547776
15 | 0.7380191412 | 0.7398027782 | 0.0017836370 | 0.6738021758 | 254.5413469114
16 | 0.7398027782 | 0.7386015286 | 0.0012012496 | 0.6734832006 | 377.5898286760
17 | 0.7386015286 | 0.7394108086 | 0.0008092800 | 0.6736984720 | 560.8313921735
18 | 0.7394108086 | 0.7388657151 | 0.0005450935 | 0.6735536472 | 832.2875198866
19 | 0.7388657151 | 0.7392329181 | 0.0003672029 | 0.6736512865 | 1235.8453896737
Out[24]:
$\displaystyle 0.7392329180769628$

Let's quickly explain the Cobweb Diagram we have here. The blue line is the identity $y = x$ and the red curve is the function $g(x)$. The point where they meet is $g$'s fixed point. In this particular example, we start at $y = x = 2$ (the top right corner) and then we "jump" vertically to $y = \cos(2) \approx -0.416$. After this, we jump horizontally to $x = \cos(2) \approx -0.416$. Then, we jump again vertically to $y = \cos\left(\cos(2)\right) \approx 0.915$ and so on. See the pattern here? We're just iterating $x = \cos(x)$, getting closer to the center of the diagram where the fixed point resides, at $x \approx 0.739$.

It's very important to mention that the algorithm will converge only if the rate of convergence satisfies $S = \left| g'(r) \right| < 1$. If you want to use this method, you'll have to construct $g(x)$ from $f(x)$ accordingly. In this example, $g(x) = \cos(x) \Rightarrow g'(x) = -\sin(x)$ and $S = |-\sin(0.739)| \approx 0.67 < 1$, so the iteration converges.
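As a quick sanity check of this condition, we can compute $S = |g'(r)|$ for $g(x) = \cos(x)$ with sympy; a minimal sketch (`nsolve` locates the fixed point numerically, starting from the nearby guess $0.7$):

```python
import sympy as sym

x = sym.Symbol('x')
g = sym.cos(x)
# Fixed point of cos(x), i.e. the root of cos(x) - x
r = sym.nsolve(g - x, x, 0.7)
S = abs(sym.diff(g, x).subs(x, r))
print(r, S)  # S is about 0.67 < 1, so the FPI converges
```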


In [22]:
g = lambda x: -(3/2)*x**2+(11/2)*x-2
gp = lambda x: -3*x+11/2

a=-1/2.7
g2 = lambda x: x+a*(x-g(x))

#x=np.linspace(2,3,100)
#plt.plot(x,gp(x),'-')
#plt.plot(x,gp(x)*0+1,'r-')
#plt.plot(x,gp(x)*0-1,'g-')
#plt.grid(True)
#plt.show()
fpi(g2, 2.45, 12, True)


 i |     x(i)     |    x(i+1)    ||x(i+1)-x(i)| | e_i/e_{i-1}
--------------------------------------------------------------
 0 | 2.4500000000 | 2.4578703704 | 0.0078703704 | nan
 1 | 2.4578703704 | 2.4573987149 | 0.0004716554 | 0.0599279835
 2 | 2.4573987149 | 2.4574289190 | 0.0000302040 | 0.0640383807
 3 | 2.4574289190 | 2.4574269922 | 0.0000019268 | 0.0637931300
 4 | 2.4574269922 | 2.4574271151 | 0.0000001229 | 0.0638088397
 5 | 2.4574271151 | 2.4574271073 | 0.0000000078 | 0.0638078348
 6 | 2.4574271073 | 2.4574271078 | 0.0000000005 | 0.0638078595
 7 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0638072307
 8 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0638043463
 9 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0634125082
10 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0618556701
11 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.1111111111
Out[22]:
$\displaystyle 2.457427107756338$

Newton's Method

For this method, we want to iteratively find some function $f(x)$'s root, that is, the number $r$ for which $f(r) = 0$. The algorithm is as follows:

\begin{equation} x_0 = initial\_guess \end{equation}\begin{equation} x_{i+1} = x_i - \cfrac{f(x_i)}{f'(x_i)} \end{equation}

Since the iteration divides by $f'(x_i)$, plain Newton runs into trouble at a multiple root, where $f'(r) = 0$: the quadratic convergence is lost and you would have to use the modified version of this method. For now, let's focus on the unmodified version, which converges quadratically to simple roots.
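For reference, the modified version mentioned above multiplies the Newton correction by the multiplicity $m$ of the root (this is exactly the role of the parameter m in the implementation below):

\begin{equation} x_{i+1} = x_i - m\,\cfrac{f(x_i)}{f'(x_i)} \end{equation}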


In [11]:
def newton_method(f, fp, x0, rel_error=1e-8, m=1):
    #Initialization of the hybrid and absolute errors
    hybrid_error = 100
    error_i = np.inf
    
    print('i |     x(i)     |     x(i+1)   |      |x(i+1)-x(i)|     |  e_i/e_{i-1} | e_i/e_{i-1}^2')
    print('----------------------------------------------------------------------------------------')
    
    #Iteration counter
    i = 1
    while (hybrid_error > rel_error and hybrid_error < 1e12 and i < 1e4):
        #Newton's iteration
        x1 = x0-m*f(x0)/fp(x0)
        
        #Checking if root was found
        if f(x1) == 0.0:
            hybrid_error = 0.0
            break
        
        #Computation of hybrid error
        hybrid_error = abs(x1-x0)/np.max([abs(x1),1e-12])
        
        #Computation of absolute error
        error_iminus1 = error_i
        error_i = abs(x1-x0)
        
        #Increasing counter
        i += 1
        
        #Showing some info
        print("%d | %.10f | %.10f | %.20f | %.10f | %.10f" % 
              (i, x0, x1, error_i, error_i/error_iminus1, error_i/(error_iminus1**2)))
        #Updating solution
        x0 = x1
    
    #Checking if solution was obtained
    if hybrid_error < rel_error: 
        return x1
    elif i>=1e4:
        print("Newton's Method diverged. Too many iterations!!")
        return None
    else:
        print("Newton's Method diverged!")
        return None

In [12]:
f = lambda x: np.sin(x)
fp = lambda x: np.cos(x) # the derivative of f
newton_method(f, fp, 3.1,rel_error=1e-14)


i |     x(i)     |     x(i+1)   |      |x(i+1)-x(i)|     |  e_i/e_{i-1} | e_i/e_{i-1}^2
----------------------------------------------------------------------------------------
2 | 3.1000000000 | 3.1416166546 | 0.04161665458563579278 | 0.0000000000 | 0.0000000000
3 | 3.1416166546 | 3.1415926536 | 0.00002400099584720650 | 0.0005767161 | 0.0138578204
4 | 3.1415926536 | 3.1415926536 | 0.00000000000000444089 | 0.0000000002 | 0.0000077092
Out[12]:
$\displaystyle 3.141592653589793$

In [32]:
f = lambda x: x**2
fp = lambda x: 2*x # the derivative of f
newton_method(f, fp, 3.1,rel_error=1e-2, m=2)


i |     x(i)     |     x(i+1)   |      |x(i+1)-x(i)|     |  e_i/e_{i-1} | e_i/e_{i-1}^2
----------------------------------------------------------------------------------------
Out[32]:
$\displaystyle 0.0$

Wilkinson Polynomial

https://en.wikipedia.org/wiki/Wilkinson%27s_polynomial

Final question: Why is the root far far away from $16$?


In [33]:
x = sym.symbols('x', real=True)
W=1
for i in np.arange(1,21):
    W*=(x-i)
W # Printing W nicely


Out[33]:
$\displaystyle \left(x - 20\right) \left(x - 19\right) \left(x - 18\right) \left(x - 17\right) \left(x - 16\right) \left(x - 15\right) \left(x - 14\right) \left(x - 13\right) \left(x - 12\right) \left(x - 11\right) \left(x - 10\right) \left(x - 9\right) \left(x - 8\right) \left(x - 7\right) \left(x - 6\right) \left(x - 5\right) \left(x - 4\right) \left(x - 3\right) \left(x - 2\right) \left(x - 1\right)$

In [35]:
# Expanding the Wilkinson polynomial
We=sym.expand(W)
We


Out[35]:
$\displaystyle x^{20} - 210 x^{19} + 20615 x^{18} - 1256850 x^{17} + 53327946 x^{16} - 1672280820 x^{15} + 40171771630 x^{14} - 756111184500 x^{13} + 11310276995381 x^{12} - 135585182899530 x^{11} + 1307535010540395 x^{10} - 10142299865511450 x^{9} + 63030812099294896 x^{8} - 311333643161390640 x^{7} + 1206647803780373360 x^{6} - 3599979517947607200 x^{5} + 8037811822645051776 x^{4} - 12870931245150988800 x^{3} + 13803759753640704000 x^{2} - 8752948036761600000 x + 2432902008176640000$

In [36]:
# Just computing the derivative
Wep=sym.diff(We,x)
Wep


Out[36]:
$\displaystyle 20 x^{19} - 3990 x^{18} + 371070 x^{17} - 21366450 x^{16} + 853247136 x^{15} - 25084212300 x^{14} + 562404802820 x^{13} - 9829445398500 x^{12} + 135723323944572 x^{11} - 1491437011894830 x^{10} + 13075350105403950 x^{9} - 91280698789603050 x^{8} + 504246496794359168 x^{7} - 2179335502129734480 x^{6} + 7239886822682240160 x^{5} - 17999897589738036000 x^{4} + 32151247290580207104 x^{3} - 38612793735452966400 x^{2} + 27607519507281408000 x - 8752948036761600000$

In [37]:
# Lambdifying the polynomial so it can be used numerically with scipy
P=sym.lambdify(x,We)
Pp=sym.lambdify(x,Wep)

In [38]:
# Using scipy function to compute a root
root = optimize.newton(P,16)
print(root)


---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-38-af4a7a71aca0> in <module>
      1 # Using scipy function to compute a root
----> 2 root = optimize.newton(P,16)
      3 print(root)

~/opt/anaconda3/lib/python3.7/site-packages/scipy/optimize/zeros.py in newton(func, x0, fprime, args, tol, maxiter, fprime2, x1, rtol, full_output, disp)
    341     if disp:
    342         msg = "Failed to converge after %d iterations, value is %s" % (itr + 1, p)
--> 343         raise RuntimeError(msg)
    344 
    345     return _results_select(full_output, (p, funcalls, itr + 1, _ECONVERR))

RuntimeError: Failed to converge after 50 iterations, value is 16.013917983178537
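One way to see what is happening (a hedged sketch, our own annotation, using numpy's poly/polyval rather than the sympy objects above): near $x = 16$ the expanded polynomial sums terms of order roughly $10^{25}$ that cancel down to about $10^{12}$, so many significant digits are lost, and Newton is fed inaccurate values of $P$.

```python
import numpy as np

roots = np.arange(1, 21)
coeffs = np.poly(roots)  # float coefficients of prod_i (x - i); huge, up to ~1e19

x = 16.1
product_form = np.prod(x - roots)      # well-conditioned evaluation
expanded_form = np.polyval(coeffs, x)  # cancels enormous terms; digits are lost
print(product_form, expanded_form)

# At the exact root x = 16 the expanded form should give 0, but rounding
# in the huge coefficients leaves a large residual instead:
print(np.polyval(coeffs, 16.0))
```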

Acknowledgements

  • Material created by professor Claudio Torres (ctorres@inf.utfsm.cl) and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. March 2016. v1.1.
  • Update April 2020 - v1.32 - C.Torres : Re-ordering the notebook.

Proposed Classwork

Build an FPI that, given a number $a$, computes $\displaystyle \frac{1}{a}$ without performing a division. Write down your solution below or go and see the solution


In [ ]:
print('Please try to think and solve before you see the solution!!!')

In class

From the textbook


In [ ]:
g1 = lambda x: 1-x**3
g2 = lambda x: (1-x)**(1/3)
g3 = lambda x: (1+2*x**3)/(1+3*x**2)
fpi(g3, 0.5, 10, True)

In [ ]:
g1p = lambda x: -3*x**2
g2p = lambda x: -(1/3)*(1-x)**(-2/3)
g3p = lambda x: ((1+3*x**2)*(6*x**2)-(1+2*x**3)*6*x)/((1+3*x**2)**2)
r=0.6823278038280194
print(g3p(r))

Adding another implementation of FPI, including an extra column for analyzing quadratic convergence


In [18]:
def fpi2(g, x0, k, flag_cobweb=False):
    x = np.empty(k+1)
    x[0] = x0
    error_i = np.inf
    
    print(' i |     x(i)     |    x(i+1)    ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2')
    print('-----------------------------------------------------------------------------')
    
    for i in range(k):
        x[i+1] = g(x[i])
        error_iminus1 = error_i
        error_i = abs(x[i+1]-x[i])
        print('%2d | %.10f | %.10f | %.10f | %.10f | %.10f' %
              (i,x[i],x[i+1],error_i,error_i/error_iminus1,error_i/(error_iminus1**2)))
    if flag_cobweb:
        cobweb(x,g)
    return x[-1]

Which function shows quadratic convergence? Why?


In [ ]:
g1 = lambda x: (4./5.)*x+1./x
g2 = lambda x: x/2.+5./(2*x)
g3 = lambda x: (x+5.)/(x+1)
fpi2(g1, 3.0, 30, True)

Building a FPI to compute the cubic root of 7


In [ ]:
# What is 'a'? Can we find another 'a'?
a = -3*(1.7**2)
print(a)

In [ ]:
f = lambda x: x**3-7
g =  lambda x: f(x)/a+x
r=fpi(g, 1.7, 14, True)
print(f(r))
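A possible reading of the choice of a above (this derivation is our own annotation): with $g(x) = x + f(x)/a$ we get $g'(x) = 1 + 3x^2/a$, so the iteration is fastest when $a \approx -3r^2$, which makes $g'(r) \approx 0$; the value $-3\cdot 1.7^2$ simply uses the initial guess $1.7$ as an estimate of $r$. A minimal check:

```python
# With g(x) = x + f(x)/a and f(x) = x**3 - 7, we have g'(x) = 1 + 3*x**2/a.
# Choosing a close to -3*r**2 makes |g'(r)| small, hence fast convergence.
r = 7 ** (1 / 3)                    # the cubic root of 7
for a in (-3 * 1.7 ** 2, -3 * r ** 2):
    print(a, abs(1 + 3 * r ** 2 / a))
```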

Playing with some roots


In [ ]:
f = lambda x: 8*x**4-12*x**3+6*x**2-x
fp = lambda x: 32*x**3-36*x**2+12*x-1

x = np.linspace(-1,1,1000)
plt.figure(figsize=(10,10))

plt.title('What are we seeing with the semilogy plot? Is this function differentiable?')
plt.semilogy(x,np.abs(f(x)),'b-')
plt.semilogy(x,np.abs(fp(x)),'r-')
plt.grid()
plt.ylabel('$f(x)$',fontsize=16)
plt.xlabel('$x$',fontsize=16)
plt.show()

In [ ]:
r=newton_method(f, fp, 0.3, rel_error=1e-8, m=1)
print([r,f(r)])
# Is this showing quadratic convergence? If not, can you fix it?

Solutions

Problem: Build an FPI that, given a number $a$, computes $\displaystyle \frac{1}{a}$ without performing a division


In [20]:
# We are finding the 1/a
# Solution code:
a = 2.1
g = lambda x: 2*x-a*x**2
gp = lambda x: 2-2*a*x
r=fpi2(g, 0.7, 7, flag_cobweb=True)
print([r,1/a])
# Are we seeing quadratic convergence?


 i |     x(i)     |    x(i+1)    ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2
-----------------------------------------------------------------------------
 0 | 0.7000000000 | 0.3710000000 | 0.3290000000 | 0.0000000000 | 0.0000000000
 1 | 0.3710000000 | 0.4529539000 | 0.0819539000 | 0.2491000000 | 0.7571428571
 2 | 0.4529539000 | 0.4750566054 | 0.0221027054 | 0.2696968100 | 3.2908355795
 3 | 0.4750566054 | 0.4761877763 | 0.0011311709 | 0.0511779387 | 2.3154603813
 4 | 0.4761877763 | 0.4761904762 | 0.0000026999 | 0.0023867984 | 2.1100246103
 5 | 0.4761904762 | 0.4761904762 | 0.0000000000 | 0.0000056698 | 2.1000206476
 6 | 0.4761904762 | 0.4761904762 | 0.0000000000 | 0.0000000000 | 0.0000000000
[0.47619047619047616, 0.47619047619047616]

What is this plot telling us?


In [21]:
xx=np.linspace(0.2,0.8,1000)
plt.figure(figsize=(10,10))
plt.plot(xx,g(xx),'-',label=r'$g(x)$')
plt.plot(xx,gp(xx),'r-',label=r'$gp(x)$')
plt.plot(xx,xx,'g-',label=r'$x$')
plt.plot(xx,0*xx+1,'k--')
plt.plot(xx,0*xx-1,'k--')
plt.legend(loc='best')
plt.grid()
plt.show()


In-Class 20200420


In [ ]:
g = lambda x: (x**2-1.)/2.
gh = lambda x: x+0.7*(x-g(x))
#fpi(g, 2, 15, True)
fpi2(gh, 0, 15, True)

In-Class 20200423


In [ ]:
g1 = lambda x: np.cos(x)
g2 = lambda x: np.cos(x)**2
g3 = lambda x: np.sin(x)
g4 = lambda x: 1-5*x+(15/2)*x**2-(5/2)*x**3 # (0,1) y (1,2)

interact(lambda x0, n: fpi2(g4, x0, n, True),x0=widgets.FloatSlider(min=0, max=3, step=0.01, value=0), n=widgets.IntSlider(min=10, max=100, step=1, value=10))

In [ ]: