★ Numerical Differentiation and Integration ★


In [1]:
# Import modules
import math
import sympy as sym
import numpy as np
import scipy.integrate
import scipy.linalg
import scipy.special
import matplotlib.pyplot as plt
import plotly
import plotly.plotly as ply
import plotly.figure_factory as ply_ff
from IPython.display import Math
from IPython.display import display

# Startup plotly
plotly.offline.init_notebook_mode(connected=True)

''' Fix MathJax issue '''
# The polling here is to ensure that plotly.js has already been loaded before
# setting display alignment in order to avoid a race condition.
from IPython.core.display import display, HTML
display(HTML(
    '<script>'
        'var waitForPlotly = setInterval( function() {'
            'if( typeof(window.Plotly) !== "undefined" ){'
                'MathJax.Hub.Config({ SVG: { font: "STIX-Web" }, displayAlign: "center" });'
                'MathJax.Hub.Queue(["setRenderer", MathJax.Hub, "SVG"]);'
                'clearInterval(waitForPlotly);'
            '}}, 250 );'
    '</script>'
))


5.1 Numerical Differentiation

Two-point forward-difference formula

$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{h}{2}f''(c)$

where $c$ is between $x$ and $x+h$

Example

Use the two-point forward-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1/x$ at $x = 2$


In [53]:
# Parameters
x = 2
h = 0.1

# Symbolic computation
sym_x = sym.Symbol('x')
sym_deri_x1 = sym.diff(1 / sym_x, sym_x)
sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()

# Approximation
f = lambda x : 1 / x
deri_x1 = (f(x + h) - f(x)) / h

# Comparison
print('approximate = %f, real value = %f, error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )


approximate = -0.238095, real value = -0.250000, error = 0.011905

Three-point centered-difference formula

$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6}f'''(c)$

where $x-h < c < x+h$

Example

Use the three-point centered-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1 / x$ at $x = 2$


In [56]:
# Parameters
x = 2
h = 0.1
f = lambda x : 1 / x

# Symbolic computation
sym_x = sym.Symbol('x')
sym_deri_x1 = sym.diff(1 / sym_x, sym_x)
sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()

# Approximation
deri_x1 = (f(x + h) - f(x - h)) / (2 * h)

# Comparison
print('approximate = %f, real value = %f, error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )


approximate = -0.250627, real value = -0.250000, error = 0.000627

Three-point centered-difference formula for second derivative

$f''(x) = \frac{f(x - h) - 2f(x) + f(x + h)}{h^2} - \frac{h^2}{12}f^{(iv)}(c)$

for some $c$ between $x - h$ and $x + h$
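As a quick sketch (not part of the original notebook), this formula can be checked against the same test function $f(x) = 1/x$ at $x = 2$ with $h = 0.1$, whose exact second derivative is $2/x^3$:

```python
# Parameters (same test function as the cells above)
x = 2
h = 0.1
f = lambda x: 1 / x

# Three-point centered-difference approximation of f''(x)
deri_x2 = (f(x - h) - 2 * f(x) + f(x + h)) / h ** 2

# Exact second derivative of 1/x is 2/x^3
real_x2 = 2 / x ** 3

print('approximate = %f, real value = %f, error = %f'
      % (deri_x2, real_x2, abs(deri_x2 - real_x2)))
```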

Rounding error

Example

Approximate the derivative of $f(x) = e^x$ at $x = 0$


In [117]:
# Parameters
f = lambda x : math.exp(x)
real_value = 1
h_msg = "$10^{-%d}$"
twp_deri_x1 = lambda x, h : ( f(x + h) - f(x) ) / h
thp_deri_x1 = lambda x, h : ( f(x + h) - f(x - h) ) / (2 * h)

data = [
    ["h", 
     "$f'(x) \\approx \\frac{e^{x+h} - e^x}{h}$", 
     "error", 
     "$f'(x) \\approx \\frac{e^{x+h} - e^{x-h}}{2h}$", 
     "error"],
]

for i in range(1,10):
    h = pow(10, -i)
    twp_deri_x1_value = twp_deri_x1(0, h) 
    thp_deri_x1_value = thp_deri_x1(0, h)
    row = ["", "", "", "", ""]
    row[0] = h_msg %i
    row[1] = '%.14f' %twp_deri_x1_value
    row[2] = '%.14f' %abs(twp_deri_x1_value - real_value)
    row[3] = '%.14f' %thp_deri_x1_value
    row[4] = '%.14f' %abs(thp_deri_x1_value - real_value)
    data.append(row)

table = ply_ff.create_table(data)
plotly.offline.iplot(table, show_link=False)


Extrapolation for an order $n$ formula

$ Q \approx \frac{2^nF(h/2) - F(h)}{2^n - 1} $
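As a sketch (not in the original notebook), extrapolation with $n = 2$ can be applied to the three-point centered-difference formula for $f(x) = 1/x$ at $x = 2$:

```python
# F(h): three-point centered-difference approximation (order n = 2)
f = lambda x: 1 / x
x, h = 2, 0.1
F = lambda h: (f(x + h) - f(x - h)) / (2 * h)

# Extrapolated value: Q ~ (2^2 * F(h/2) - F(h)) / (2^2 - 1)
Q = (4 * F(h / 2) - F(h)) / 3

print('F(h) = %.8f, F(h/2) = %.8f, extrapolated = %.8f (exact = -0.25)'
      % (F(h), F(h / 2), Q))
```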


In [2]:
sym.init_printing(use_latex=True)

x = sym.Symbol('x')
dx = sym.diff(sym.exp(sym.sin(x)), x)
Math('Derivative : %s' %sym.latex(dx) )


Out[2]:
$$Derivative : e^{\sin{\left (x \right )}} \cos{\left (x \right )}$$

5.2 Newton-Cotes Formulas For Numerical Integration

Trapezoid Rule

$\int_{x_0}^{x_1} f(x) dx = \frac{h}{2}(y_0 + y_1) - \frac{h^3}{12}f''(c)$

where $h = x_1 - x_0$ and $c$ is between $x_0$ and $x_1$

Simpson's Rule

$\int_{x_0}^{x_2} f(x) dx = \frac{h}{3}(y_0 + 4y_1 + y_2) - \frac{h^5}{90}f^{(iv)}(c)$

where $h = x_2 - x_1 = x_1 - x_0$ and $c$ is between $x_0$ and $x_2$

Example

Apply the Trapezoid Rule and Simpson's Rule to approximate $\int_{1}^{2} \ln(x) dx$ and find an upper bound for the error in your approximations


In [42]:
# Apply Trapezoid Rule
trapz = scipy.integrate.trapz([np.log(1), np.log(2)], [1, 2])

# Evaluate the error term of Trapezoid Rule
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 2)
trapz_err = abs(expr.subs(sym_x, 1).evalf() / 12)

# Print out results
print('Trapezoid rule : %f and upper bound error : %f' %(trapz, trapz_err) )


Trapezoid rule : 0.346574 and upper bound error : 0.083333

In [59]:
# Apply Simpson's Rule
area = scipy.integrate.simps([np.log(1), np.log(1.5), np.log(2)], [1, 1.5, 2])

# Evaluate the error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 4)
simps_err = abs( pow(0.5, 5) / 90 * expr.subs(sym_x, 1).evalf() )

# Print out results
print('Simpson\'s rule : %f and upper bound error : %f' %(area, simps_err) )


Simpson's rule : 0.385835 and upper bound error : 0.002083

Composite Trapezoid Rule

$\int_{a}^{b} f(x) dx = \frac{h}{2} \left ( y_0 + y_m + 2\sum_{i=1}^{m-1}y_i \right ) - \frac{(b-a)h^2}{12}f''(c)$

where $h = (b - a) / m $ and $c$ is between $a$ and $b$

Composite Simpson's Rule

$ \int_{a}^{b}f(x)dx = \frac{h}{3}\left [ y_0 + y_{2m} + 4\sum_{i=1}^{m}y_{2i-1} + 2\sum_{i=1}^{m - 1}y_{2i} \right ] - \frac{(b-a)h^4}{180}f^{(iv)}(c) $

where $c$ is between $a$ and $b$

Example

Carry out four-panel approximations of $\int_{1}^{2} \ln{x} dx$ using the composite Trapezoid Rule and composite Simpson's Rule


In [16]:
# Apply composite Trapezoid Rule
x = np.linspace(1, 2, 5)
y = np.log(x)
trapz = scipy.integrate.trapz(y, x)

# Error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 2)
trapz_err = abs( (2 - 1) * pow(0.25, 2) / 12 * expr.subs(sym_x, 1).evalf() )

print('Trapezoid Rule : %f, error = %f' %(trapz, trapz_err) )


Trapezoid Rule : 0.383700, error = 0.005208

In [19]:
# Apply composite Simpson's Rule
x = np.linspace(1, 2, 9)
y = np.log(x)
area = scipy.integrate.simps(y, x)

# Error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 4)
simps_err = abs( (2 - 1) * pow(0.125, 4) / 180 * expr.subs(sym_x, 1).evalf() )

print('Simpson\'s Rule : %f, error = %f' %(area, simps_err) )


Simpson's Rule : 0.386292, error = 0.000008

Midpoint Rule

$ \int_{x_0}^{x_1} f(x)dx = hf(\omega) + \frac{h^3}{24}f''(c) $

where $ h = (x_1 - x_0) $, $\omega$ is the midpoint $ x_0 + h / 2 $, and $c$ is between $x_0$ and $x_1$
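A minimal one-panel sketch (not part of the original notebook), applying the Midpoint Rule to $\int_{1}^{2} \ln{x}\, dx$, whose exact value is $2\ln{2} - 1$:

```python
import math

# One panel on [x0, x1] = [1, 2] with f(x) = ln(x)
x0, x1 = 1, 2
h = x1 - x0
w = x0 + h / 2                # midpoint

area = h * math.log(w)        # h * f(midpoint)
exact = 2 * math.log(2) - 1   # exact value of the integral

print('Midpoint Rule : %f, exact value : %f' % (area, exact))
```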

Composite Midpoint Rule

$ \int_{a}^{b} f(x) dx = h \sum_{i=1}^{m}f(\omega_{i}) + \frac{(b - a)h^2}{24} f''(c) $

where $h = (b - a) / m$ and $c$ is between $a$ and $b$. The $\omega_{i}$ are the midpoints of the $m$ equal subintervals of $[a,b]$

Example

Approximate $\int_{0}^{1} \frac{\sin x}{x} dx$ by using the composite Midpoint Rule with $m = 10$ panels


In [50]:
# Parameters
m = 10
h = (1 - 0) / m
f = lambda x : np.sin(x) / x
mids = np.arange(0 + h/2, 1, h)

# Apply composite midpoint rule
area = h * np.sum(f(mids))

# Error term (f'' evaluated at x = 1; not necessarily the maximum of |f''| over (0, 1])
sym_x = sym.Symbol('x')
expr = sym.diff(sym.sin(sym_x) / sym_x, sym_x, 2)
mid_err = abs( (1 - 0) * pow(h, 2) / 24 * expr.subs(sym_x, 1).evalf() )

# Print out
print('Composite Midpoint Rule : %.8f, error = %.8f' %(area, mid_err) )


Composite Midpoint Rule : 0.94620858, error = 0.00009964

5.3 Romberg Integration


In [181]:
def romberg(f, a, b, step):
    R = np.zeros((step, step))
    R[0][0] = (b - a) * (f(a) + f(b)) / 2
    for j in range(1, step):
        h = (b - a) / pow(2, j)
        summ = 0
        for i in range(1, pow(2, j - 1) + 1):
            summ += h * f(a + (2 * i - 1) * h)
        R[j][0] = 0.5 * R[j - 1][0] + summ
        
        for k in range(1, j + 1):
            R[j][k] = ( pow(4, k) * R[j][k - 1] - R[j - 1][k - 1] ) / ( pow(4, k) - 1 )
        
    return R[step - 1][step - 1]

Example

Apply Romberg Integration to approximate $\int_{1}^{2} \ln{x}dx$


In [183]:
f = lambda x : np.log(x)
result = romberg(f, 1, 2, 4)
print('Romberg Integration : %f' %(result) )


Romberg Integration : 0.386294

In [184]:
f = lambda x : np.log(x)
result = scipy.integrate.romberg(f, 1, 2, show=True)
print('Romberg Integration : %f' %(result) )


Romberg integration of <function vectorize1.<locals>.vfunc at 0x7fa72aad8510> from [1, 2]

 Steps  StepSize   Results
     1  1.000000  0.346574 
     2  0.500000  0.376019  0.385835 
     4  0.250000  0.383700  0.386260  0.386288 
     8  0.125000  0.385644  0.386292  0.386294  0.386294 
    16  0.062500  0.386132  0.386294  0.386294  0.386294  0.386294 
    32  0.031250  0.386254  0.386294  0.386294  0.386294  0.386294  0.386294 

The final result is 0.38629436112 after 33 function evaluations.
Romberg Integration : 0.386294

5.4 Adaptive Quadrature


In [105]:
''' Use Trapezoid Rule '''

def adaptive_quadrature(f, a, b, tol):
    return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)
    
def adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):
    c = (a + b) / 2
    S = lambda x, y : (y - x) * (f(x) + f(y)) / 2
    if abs( S(a, b) - S(a, c) - S(c, b) ) < 3 * tol * (b - a) / (orig_b - orig_a) or deep > 20 :
        return S(a, c) + S(c, b)
    else:
        return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)

In [106]:
''' Use Simpson's Rule '''

def adaptive_quadrature(f, a, b, tol):
    return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)
    
def adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):
    c = (a + b) / 2
    S = lambda x, y : (y - x) * ( f(x) + 4 * f((x + y) / 2) + f(y) ) / 6
    if abs( S(a, b) - S(a, c) - S(c, b) ) < 15 * tol  or deep > 20 :
        return S(a, c) + S(c, b)
    else:
        return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)

Example

Use Adaptive Quadrature to approximate the integral $ \int_{-1}^{1} (1 + \sin{e^{3x}}) dx $


In [108]:
f = lambda x : 1 + np.sin(np.exp(3 * x))
val = adaptive_quadrature(f, -1, 1, tol=1e-12)
print(val)


2.50080911034

5.5 Gaussian Quadrature


In [28]:
poly = scipy.special.legendre(2)
# Find roots of polynomials
comp = scipy.linalg.companion(poly)
roots = scipy.linalg.eig(comp)[0]
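The cell above finds the Gauss-Legendre nodes as eigenvalues of the companion matrix of $P_2$. As a quick check (not in the original), the roots of $P_2(x) = \frac{1}{2}(3x^2 - 1)$ are $\pm 1/\sqrt{3}$:

```python
import numpy as np

# P2(x) = (3x^2 - 1)/2 has coefficients [1.5, 0, -0.5]
roots = np.sort(np.roots([1.5, 0.0, -0.5]))

# The two Gauss-Legendre nodes for degree 2 are +-1/sqrt(3) ~ +-0.57735
print(roots)
```

The same nodes are returned (up to ordering) by `scipy.special.p_roots(2)`.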

Example

Approximate $\int_{-1}^{1} e^{-\frac{x^2}{2}}dx$ using Gaussian Quadrature


In [29]:
f = lambda x : np.exp(-np.power(x, 2) / 2)
quad = scipy.integrate.quadrature(f, -1, 1)
print(quad[0])


1.71124878401

In [50]:
# Parameters
a = -1
b = 1
deg = 3
f = lambda x : np.exp( -np.power(x, 2) / 2 )

x, w = scipy.special.p_roots(deg) # Or use numpy.polynomial.legendre.leggauss
quad = np.sum(w * f(x))
    
print(quad)


1.7120202452

Example

Approximate the integral $\int_{1}^{2} \ln{x} dx$ using Gaussian Quadrature


In [66]:
# Parameters
a = 1
b = 2
deg = 4
f = lambda t : np.log( ((b - a) * t + b + a) / 2) * (b - a) / 2

x, w = scipy.special.p_roots(deg)
np.sum(w * f(x))


Out[66]:
0.38629449693871409