Learning SciPy for Numerical and Scientific Computing

Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com).

NOTE: This IPython notebook should be read alongside the corresponding chapter in the book, where each piece of code is fully explained.

Chapter 4. SciPy for Numerical Analysis

Summary

This chapter is without a doubt one of the most interesting chapters in this book. It covers in great detail the definition and manipulation of functions (of one or several variables), the extraction of their roots, extreme values (optimization), the computation of derivatives, integration, interpolation, regression, and applications to the solution of ordinary differential equations.

Evaluation of special functions

Special functions (scipy.special)

Convenience and test functions


In [1]:
import scipy.special
import numpy

In [2]:
a=scipy.special.exp10(-16)
numpy.log(1+a)


Out[2]:
0.0

In [3]:
scipy.special.log1p(a)


Out[3]:
9.9999999999999998e-17
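
The complementary convenience function expm1 computes $e^x - 1$ accurately for small arguments. A minimal sketch (an extra illustration, not in the book) contrasting it with the naive computation:

import numpy
import scipy.special
b=scipy.special.exp10(-16)
print(numpy.exp(b)-1.0)          # catastrophic cancellation: prints 0.0
print(scipy.special.expm1(b))    # accurate result, approximately 1e-16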

Univariate polynomials


In [12]:
P1=numpy.poly1d([1,0,3])           # using coefficients
print(P1)


   2
1 x + 3

In [5]:
print(P1.r) #roots


[-0.+1.j  0.-1.j]

In [6]:
print(P1.o)#order


2

In [7]:
print (P1.deriv()) # derivative


 
2 x
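
Besides deriv(), poly1d objects expose antiderivatives through integ(); a short extra sketch (not in the book):

import numpy
P1=numpy.poly1d([1,0,3])
print(P1.integ())      # antiderivative with integration constant 0: 0.3333 x**3 + 3 x
print(P1.integ(k=1))   # the keyword k sets the integration constant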

In [10]:
P2=numpy.poly1d([1,1,1], True)     # using roots
print(P2)


   3     2
1 x - 3 x + 3 x - 1

In [6]:
P2=numpy.poly1d([1,1,1], True, variable='z') 
print (P2)


   3     2
1 z - 3 z + 3 z - 1

In [13]:
P1( scipy.arange(10) )           # evaluate at 0,1,...,9


Out[13]:
array([ 3,  4,  7, 12, 19, 28, 39, 52, 67, 84])

In [14]:
P1.__call__(scipy.arange(10))    # same evaluation


Out[14]:
array([ 3,  4,  7, 12, 19, 28, 39, 52, 67, 84])

Let's consider and evaluate the following approximation of the natural logarithm:

$$ \boxed{ \begin{equation} \ln(1+x) \approx x - \frac{x^2}{2} \quad \text{as} \quad x \to 0 \end{equation} }$$


In [9]:
Px=numpy.poly1d([-(1./2.),1,0])

In [10]:
print(Px)


      2
-0.5 x + 1 x

In [15]:
a=1./10000000000000000.
print(a)


1e-16

In [12]:
Px(a)


Out[12]:
9.9999999999999998e-17

In [21]:
P1=numpy.poly1d([1,0,1]) 
print(P1)


   2
1 x + 1

As mentioned in the book, the following four input (In []) cells show that the respective commands are equivalent, as the corresponding Out[] lines confirm.


In [17]:
print(scipy.polyadd(P1, scipy.poly1d([2,1])))


   2
1 x + 2 x + 2

In [18]:
print(scipy.polyadd(P1, [2,1]))


   2
1 x + 2 x + 2

In [19]:
print(P1 + scipy.poly1d([2,1]))


   2
1 x + 2 x + 2

In [20]:
print(P1 + [2,1])


   2
1 x + 2 x + 2

In [16]:
P1/[2,1]


Out[16]:
(poly1d([ 0.5 , -0.25]), poly1d([ 1.25]))

As mentioned in the book, the output should be read as follows:

$$ \boxed{ \begin{equation} \frac{x^2 + 1}{2x + 1} = \underbrace{\left(\frac{1}{2}x - \frac{1}{4}\right)}_\text{quotient} + \overbrace{ \left(\frac{5/4}{2x+1}\right) }^\text{remainder} \end{equation} }$$
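
As a quick sanity check (an extra step, not in the book), multiplying the quotient back by the divisor and adding the remainder recovers the original polynomial:

import numpy
P1=numpy.poly1d([1,0,1])
q,r=P1/[2,1]                                     # quotient and remainder
print(numpy.polyadd(numpy.polymul(q,[2,1]),r))   # prints 1 x**2 + 1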


In [18]:
x=numpy.linspace(-1,1,1000)

In [19]:
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,scipy.special.eval_jacobi(3,0,1,x))
plt.show()


The gamma function

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following expressions is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{equation} \Gamma(z) = \int_0^\infty \mathrm{e}^{-t}\, \mathrm{t}^{z-1} \,\mathrm{d}t \end{equation} }$$

$$ \boxed{ \begin{equation} \ln\left(\frac{a!}{b!}\right) \approx (a-b)\,\psi(a) \quad \text{(evaluated below with } a = 10^{15},\ a-b = 10^{10}\text{)} \end{equation} }$$


In [29]:
(10**10)*scipy.special.psi(10**15)


Out[29]:
345387763949.10681
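
For moderate arguments the gamma function itself can be evaluated with scipy.special.gamma, while gammaln returns its natural logarithm and avoids the overflow that gamma would produce for very large arguments; a brief extra sketch (not in the book):

import scipy.special
print(scipy.special.gamma(5))        # Gamma(5) = 4! = 24.0
print(scipy.special.gammaln(10**6))  # log(Gamma(1e6)); gamma(1e6) itself overflows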

The Riemann zeta function

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following expressions is fully explained in the corresponding section of this chapter in the book

$$\boxed{ \begin{equation} \zeta(p) = \sum\limits_{n=1}^{\infty} \frac{1}{n^p} \end{equation} }$$

$$ \boxed{ \begin{equation} \mathrm{zeta}(p,a)=\sum\limits_{n=0}^{\infty} \frac{1}{(n+a)^p} \end{equation} }$$


In [20]:
scipy.special.zeta(2,1)


Out[20]:
1.6449340668482266
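
Since $\zeta(2) = \pi^2/6$, the result can be verified directly (a small check added for illustration):

import numpy
import scipy.special
print(numpy.pi**2/6)                            # exact value, about 1.6449340668482264
print(scipy.special.zeta(2,1)-numpy.pi**2/6)    # difference near machine precision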

Airy (and Bairy) functions

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following equation is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{equation} y'' = xy \end{equation} }$$


In [21]:
import numpy
import scipy.special
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15.0, 10.0)
import mpl_toolkits.mplot3d
x=numpy.mgrid[-4:4:100j,-4:4:100j]
z=x[0]+1j*x[1]
(Ai, Aip, Bi, Bip) = scipy.special.airy(z)
steps = range(int(Bi.real.min()), int(Bi.real.max()),6)
fig=plt.figure()
subplot1=fig.add_subplot(121,aspect='equal')
subplot1.contourf(x[0], x[1], Bi.real, steps)
subplot2=fig.add_subplot(122,projection='3d')
subplot2.plot_surface(x[0],x[1],Bi.real)
plt.show()


Bessel and Struve functions

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following equations is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{equation} x^2 y'' + x y' + (x^2 - \alpha^2)y=0 \end{equation} }$$

$$ \boxed{ \begin{equation} x^2 y'' + x y' + (x^2 - \alpha^2)y= \frac{ 4 (x/2)^{\alpha + 1} }{ \sqrt{\pi} \, \Gamma(\alpha + \frac{1}{2}) } \end{equation} }$$


In [35]:
scipy.special.jn(5,scipy.pi)


Out[35]:
0.052141184367118461
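
The Struve functions appearing on the right-hand side of the second equation are available as scipy.special.struve; an extra one-line evaluation (not in the book):

import numpy
import scipy.special
print(scipy.special.struve(5,numpy.pi))   # Struve function H_5 evaluated at pi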

Other special functions

Special functions (scipy.special)

Interpolation and regression

Interpolation (scipy.interpolate)


In [36]:
import scipy.interpolate
x=scipy.linspace(-1,1,10); xn=scipy.linspace(-1,1,1000)
y=scipy.sin(x)
polynomial=scipy.interpolate.lagrange(x, scipy.sin(x))
plt.plot(xn,polynomial(xn),x,y,'or')
plt.show()
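
To quantify how closely the Lagrange polynomial tracks sin(x) between the nodes, the maximum deviation on the fine grid can be computed (a small addition to the book's code):

import numpy
import scipy.interpolate
x=numpy.linspace(-1,1,10); xn=numpy.linspace(-1,1,1000)
polynomial=scipy.interpolate.lagrange(x, numpy.sin(x))
print(numpy.max(numpy.abs(polynomial(xn)-numpy.sin(xn))))   # worst-case error on [-1,1]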


To show extra details not shown in the book, the following lines of code use variable names slightly different from those in the code that appears in the book


In [58]:
import numpy
import scipy.interpolate
x1=numpy.linspace(1,10,10); y1=numpy.sin(x1)
Polynomial=scipy.interpolate.BarycentricInterpolator(x1,y1)
exactValues1=numpy.sin(x1+0.3)
exactValues1


Out[58]:
array([ 0.96355819,  0.74570521, -0.15774569, -0.91616594, -0.83226744,
        0.0168139 ,  0.85043662,  0.90217183,  0.12445442, -0.76768581])

In [60]:
interpolatedValues1=Polynomial(x1+0.3)
interpolatedValues1


Out[60]:
array([ 0.97103132,  0.74460631, -0.15742869, -0.91631362, -0.83216445,
        0.01670922,  0.85059283,  0.90181323,  0.12588718, -0.7825744 ])

In [61]:
PercentRelativeError1 = \
       numpy.abs((exactValues1 - interpolatedValues1)/interpolatedValues1)*100
PercentRelativeError1


Out[61]:
array([ 0.76960822,  0.14758101,  0.20136334,  0.01611703,  0.01237594,
        0.62647084,  0.01836479,  0.0397652 ,  1.13812858,  1.90251374])

In [62]:
x2=numpy.linspace(1.5,10.5,10); y2=numpy.sin(x2)
Polynomial.add_xi(x2,y2)
interpolatedValues=Polynomial(x1+0.3)
interpolatedValues


Out[62]:
array([ 0.96355818,  0.74570521, -0.15774569, -0.91616594, -0.83226744,
        0.0168139 ,  0.85043662,  0.90217183,  0.12445442, -0.76768581])

In [64]:
PercentRelativeError = \
       numpy.abs((exactValues1 - interpolatedValues)/interpolatedValues)*100
PercentRelativeError


Out[64]:
array([  1.26235197e-07,   2.02545428e-09,   5.95243584e-10,
         1.84316962e-11,   8.73752888e-12,   4.14379958e-10,
         1.74933536e-11,   8.52321518e-11,   9.45324204e-09,
         1.29542673e-07])

In [65]:
x3=numpy.sort(numpy.ravel([x1,x2]))
y3=numpy.sin(x3)
Polynomial3=scipy.interpolate.BarycentricInterpolator(x3,y3)
interpolatedValues3=Polynomial3(x1+0.3)
interpolatedValues3


Out[65]:
array([ 0.96355818,  0.74570521, -0.15774569, -0.91616594, -0.83226744,
        0.0168139 ,  0.85043662,  0.90217183,  0.12445442, -0.76768581])

In [68]:
PercentRelativeError3 = \
       numpy.abs((exactValues1 - interpolatedValues3)/interpolatedValues3)*100
PercentRelativeError3


Out[68]:
array([  1.26221486e-07,   2.02535006e-09,   5.95225989e-10,
         1.84074599e-11,   8.73752888e-12,   4.14091076e-10,
         1.75064084e-11,   8.52567640e-11,   9.45315283e-09,
         1.29549297e-07])

In [69]:
x=numpy.array([0,0,1,1,2,2]); y=numpy.array([0,0,1,0,2,0])
interp=scipy.interpolate.KroghInterpolator(x,y)
xn=numpy.linspace(0,2,20)   # evaluate the polynomial in a larger set
plt.plot(x,y,'o',xn,interp(xn),'r')
plt.show()



In [70]:
x=numpy.arange(5); y=numpy.sin(x)
xn=numpy.linspace(0,4,40)
interp=scipy.interpolate.InterpolatedUnivariateSpline(x,y)
plt.plot(x,y,'.',xn,interp(xn))
plt.show()
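
The spline object can also integrate itself over an interval; comparing with the exact value $\int_0^4 \sin(x)\,dx = 1 - \cos(4)$ (an extra check, not in the book):

import numpy
import scipy.interpolate
x=numpy.arange(5); y=numpy.sin(x)
interp=scipy.interpolate.InterpolatedUnivariateSpline(x,y)
print(interp.integral(0,4))   # integral of the spline over [0, 4]
print(1-numpy.cos(4))         # exact integral of sin over [0, 4], for comparison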



In [71]:
import numpy
import scipy.interpolate
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
x=y=numpy.arange(10)
f=(lambda i,j: numpy.sin(i)*numpy.cos(j))  # function to interpolate
A=numpy.fromfunction(f, (10,10))           # generate samples
spline=scipy.interpolate.RectBivariateSpline(x,y,A)
fig=plt.figure()
subplot=fig.add_subplot(111,projection='3d')
xx=numpy.mgrid[0:9:100j, 0:9:100j]         # larger grid for plotting
A=spline(numpy.linspace(0,9,100), numpy.linspace(0,9,100))
subplot.plot_surface(xx[0],xx[1],A)
plt.show()



In [26]:
import numpy
import scipy
import matplotlib.pyplot as plt
x=numpy.linspace(0,1,10)
y=numpy.sin(x*numpy.pi/2)
line=numpy.polyfit(x,y,deg=1)
plt.plot(x,y,'.',x,numpy.polyval(line,x),'r')
plt.show()
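
numpy.polyfit returns the coefficients in descending powers; requesting the full output also yields the sum of squared residuals, which measures the quality of the straight-line fit (an extra step, not in the book):

import numpy
x=numpy.linspace(0,1,10)
y=numpy.sin(x*numpy.pi/2)
line,residuals,rank,sv,rcond=numpy.polyfit(x,y,deg=1,full=True)
print(line)        # [slope, intercept]
print(residuals)   # sum of squared residuals of the fit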



In [73]:
import numpy
import scipy.interpolate
import matplotlib.pyplot as plt
x=numpy.linspace(0,1,10)
y=numpy.sin(x*numpy.pi/2)
spline=scipy.interpolate.UnivariateSpline(x,y,k=2)
xn=numpy.linspace(0,1,100)
plt.plot(x,y,'.', xn, spline(xn))
plt.show()



In [74]:
x=None; y=None; yc=None

In [75]:
A=18; w=3*numpy.pi; h=0.5
x=numpy.linspace(0,1,100); y=A*numpy.sin(w*x+h)
plot1, = plt.plot(x, y, '-b', label="Data")

yc=None 
yc=scipy.copy(y)
yc += 4*((0.5-scipy.rand(100))*numpy.exp(2*scipy.rand(100)**2)) # contaminated data
plot2, = plt.plot(x, yc, 'r+', label="Contaminated Data")
plt.legend()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()



In [76]:
p0=None
p0 = [20, 2*scipy.pi, 1]
target_function = lambda x,AA,ww,hh: AA*scipy.sin(ww*x+hh)

In [77]:
import scipy.optimize
pF=None; pVar = None
pF,pVar = scipy.optimize.curve_fit(target_function, x, yc, p0)
print(pF)


[ 18.31594848   9.47420486   0.49402283]
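
The second output, pVar, is the estimated covariance matrix of the fitted parameters; the square roots of its diagonal give one-standard-deviation error estimates for A, w, and h (a common extra step, assuming the previous cell has been run):

import numpy
print(numpy.sqrt(numpy.diag(pVar)))   # standard errors of the fitted A, w, h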

In [78]:
yFit=None
yFit=target_function(x,*pF)

In [79]:
plot2, = plt.plot(x, yc, 'r+', label="Contaminated Data")
plot3, = plt.plot(x, yFit,'k', label="Fit of Contaminated Data")
plt.legend()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()



In [80]:
plot1, = plt.plot(x, y, '-b', label="Data")
plot3, = plt.plot(x, yFit,'--k', label="Fit of Contaminated Data")
plt.legend()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()



In [81]:
plot1, = plt.plot(x, y, '-b', label="Data")
plot2, = plt.plot(x, yc, 'r+', label="Contaminated Data")
plot3, = plt.plot(x, yFit,'--k', label="Fit of Contaminated Data")
plt.legend()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()



In [82]:
plt.rcParams['figure.figsize'] = (15.0, 15.0)

plt.subplot(2,2,1)
plt.plot(x, y, '-b', label="Data")
plt.plot(x, yc, 'r+', label="Contaminated Data")
plt.legend()
plt.legend(bbox_to_anchor=(1.05, 1), loc=4, borderaxespad=0.)

plt.subplot(2,2,2)
plt.plot(x, yc, 'r+', label="Contaminated Data")
plt.plot(x, yFit,'k', label="Fit of Contaminated Data")
plt.legend()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)

plt.subplot(2,1,2)
plt.plot(x, y, '-b', label="Data")
plt.plot(x, yc, 'r+', label="Contaminated Data")
plt.plot(x, yFit,'--k', label="Fit of Contaminated Data")
plt.legend()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)


plt.show()


Optimization

Using the leastsq function as described in the corresponding section of the book


In [83]:
import numpy
import scipy

A=18; w=3*numpy.pi; h=0.5

x=None; y=None
x=numpy.linspace(0,1,100); y=A*numpy.sin(w*x+h)
y += 4*((0.5-scipy.rand(100))*numpy.exp(2*scipy.rand(100)**2))

import scipy.optimize
p0 = [20, 2*numpy.pi, 1]
target_function = lambda x,AA,ww,hh: AA*numpy.sin(ww*x+hh)

#pF,pVar = scipy.optimize.curve_fit(target_function, x, y, p0)
#print (pF)

In [84]:
error_function = lambda p,x,y: target_function(x,p[0],p[1],p[2])-y
lpF,lpVar = scipy.optimize.leastsq(error_function,p0,args=(x,y))
print (lpF)


[ 18.40969124   9.39559992   0.5331237 ]

Minimization


In [85]:
import scipy.optimize
scipy.optimize.fmin(scipy.optimize.rosen,[0,0])


Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 79
         Function evaluations: 146
Out[85]:
array([ 1.00000439,  1.00001064])
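
The same problem can also be solved through the more general scipy.optimize.minimize interface documented below; a minimal sketch (added for illustration):

import scipy.optimize
res=scipy.optimize.minimize(scipy.optimize.rosen,[0,0],method='Nelder-Mead')
print(res.x)        # should be close to [1, 1]
print(res.success)  # True if the optimizer converged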

Click with the left mouse button to the left of the output of the following command to collapse the long output into a scrollable (elevator-type) window


In [86]:
help(scipy.optimize.minimize)


Help on function minimize in module scipy.optimize._minimize:

minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)
    Minimization of scalar function of one or more variables.
    
    .. versionadded:: 0.11.0
    
    Parameters
    ----------
    fun : callable
        Objective function.
    x0 : ndarray
        Initial guess.
    args : tuple, optional
        Extra arguments passed to the objective function and its
        derivatives (Jacobian, Hessian).
    method : str or callable, optional
        Type of solver.  Should be one of
    
            - 'Nelder-Mead'
            - 'Powell'
            - 'CG'
            - 'BFGS'
            - 'Newton-CG'
            - 'Anneal (deprecated as of scipy version 0.14.0)'
            - 'L-BFGS-B'
            - 'TNC'
            - 'COBYLA'
            - 'SLSQP'
            - 'dogleg'
            - 'trust-ncg'
            - custom - a callable object (added in version 0.14.0)
    
        If not given, chosen to be one of ``BFGS``, ``L-BFGS-B``, ``SLSQP``,
        depending if the problem has constraints or bounds.
    jac : bool or callable, optional
        Jacobian (gradient) of objective function. Only for CG, BFGS,
        Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg.
        If `jac` is a Boolean and is True, `fun` is assumed to return the
        gradient along with the objective function. If False, the
        gradient will be estimated numerically.
        `jac` can also be a callable returning the gradient of the
        objective. In this case, it must accept the same arguments as `fun`.
    hess, hessp : callable, optional
        Hessian (matrix of second-order derivatives) of objective function or
        Hessian of objective function times an arbitrary vector p.  Only for
        Newton-CG, dogleg, trust-ncg.
        Only one of `hessp` or `hess` needs to be given.  If `hess` is
        provided, then `hessp` will be ignored.  If neither `hess` nor
        `hessp` is provided, then the Hessian product will be approximated
        using finite differences on `jac`. `hessp` must compute the Hessian
        times an arbitrary vector.
    bounds : sequence, optional
        Bounds for variables (only for L-BFGS-B, TNC and SLSQP).
        ``(min, max)`` pairs for each element in ``x``, defining
        the bounds on that parameter. Use None for one of ``min`` or
        ``max`` when there is no bound in that direction.
    constraints : dict or sequence of dict, optional
        Constraints definition (only for COBYLA and SLSQP).
        Each constraint is defined in a dictionary with fields:
            type : str
                Constraint type: 'eq' for equality, 'ineq' for inequality.
            fun : callable
                The function defining the constraint.
            jac : callable, optional
                The Jacobian of `fun` (only for SLSQP).
            args : sequence, optional
                Extra arguments to be passed to the function and Jacobian.
        Equality constraint means that the constraint function result is to
        be zero whereas inequality means that it is to be non-negative.
        Note that COBYLA only supports inequality constraints.
    tol : float, optional
        Tolerance for termination. For detailed control, use solver-specific
        options.
    options : dict, optional
        A dictionary of solver options. All methods accept the following
        generic options:
            maxiter : int
                Maximum number of iterations to perform.
            disp : bool
                Set to True to print convergence messages.
        For method-specific options, see :func:`show_options()`.
    callback : callable, optional
        Called after each iteration, as ``callback(xk)``, where ``xk`` is the
        current parameter vector.
    
    Returns
    -------
    res : OptimizeResult
        The optimization result represented as a ``OptimizeResult`` object.
        Important attributes are: ``x`` the solution array, ``success`` a
        Boolean flag indicating if the optimizer exited successfully and
        ``message`` which describes the cause of the termination. See
        `OptimizeResult` for a description of other attributes.
    
    
    See also
    --------
    minimize_scalar : Interface to minimization algorithms for scalar
        univariate functions
    show_options : Additional options accepted by the solvers
    
    Notes
    -----
    This section describes the available solvers that can be selected by the
    'method' parameter. The default method is *BFGS*.
    
    **Unconstrained minimization**
    
    Method *Nelder-Mead* uses the Simplex algorithm [1]_, [2]_. This
    algorithm has been successful in many applications but other algorithms
    using the first and/or second derivatives information might be preferred
    for their better performances and robustness in general.
    
    Method *Powell* is a modification of Powell's method [3]_, [4]_ which
    is a conjugate direction method. It performs sequential one-dimensional
    minimizations along each vector of the directions set (`direc` field in
    `options` and `info`), which is updated at each iteration of the main
    minimization loop. The function need not be differentiable, and no
    derivatives are taken.
    
    Method *CG* uses a nonlinear conjugate gradient algorithm by Polak and
    Ribiere, a variant of the Fletcher-Reeves method described in [5]_ pp.
    120-122. Only the first derivatives are used.
    
    Method *BFGS* uses the quasi-Newton method of Broyden, Fletcher,
    Goldfarb, and Shanno (BFGS) [5]_ pp. 136. It uses the first derivatives
    only. BFGS has proven good performance even for non-smooth
    optimizations. This method also returns an approximation of the Hessian
    inverse, stored as `hess_inv` in the OptimizeResult object.
    
    Method *Newton-CG* uses a Newton-CG algorithm [5]_ pp. 168 (also known
    as the truncated Newton method). It uses a CG method to the compute the
    search direction. See also *TNC* method for a box-constrained
    minimization with a similar algorithm.
    
    Method *Anneal* uses simulated annealing, which is a probabilistic
    metaheuristic algorithm for global optimization. It uses no derivative
    information from the function being optimized.
    
    Method *dogleg* uses the dog-leg trust-region algorithm [5]_
    for unconstrained minimization. This algorithm requires the gradient
    and Hessian; furthermore the Hessian is required to be positive definite.
    
    Method *trust-ncg* uses the Newton conjugate gradient trust-region
    algorithm [5]_ for unconstrained minimization. This algorithm requires
    the gradient and either the Hessian or a function that computes the
    product of the Hessian with a given vector.
    
    **Constrained minimization**
    
    Method *L-BFGS-B* uses the L-BFGS-B algorithm [6]_, [7]_ for bound
    constrained minimization.
    
    Method *TNC* uses a truncated Newton algorithm [5]_, [8]_ to minimize a
    function with variables subject to bounds. This algorithm uses
    gradient information; it is also called Newton Conjugate-Gradient. It
    differs from the *Newton-CG* method described above as it wraps a C
    implementation and allows each variable to be given upper and lower
    bounds.
    
    Method *COBYLA* uses the Constrained Optimization BY Linear
    Approximation (COBYLA) method [9]_, [10]_, [11]_. The algorithm is
    based on linear approximations to the objective function and each
    constraint. The method wraps a FORTRAN implementation of the algorithm.
    
    Method *SLSQP* uses Sequential Least SQuares Programming to minimize a
    function of several variables with any combination of bounds, equality
    and inequality constraints. The method wraps the SLSQP Optimization
    subroutine originally implemented by Dieter Kraft [12]_. Note that the
    wrapper handles infinite values in bounds by converting them into large
    floating values.
    
    **Custom minimizers**
    
    It may be useful to pass a custom minimization method, for example
    when using a frontend to this method such as `scipy.optimize.basinhopping`
    or a different library.  You can simply pass a callable as the ``method``
    parameter.
    
    The callable is called as ``method(fun, x0, args, **kwargs, **options)``
    where ``kwargs`` corresponds to any other parameters passed to `minimize`
    (such as `callback`, `hess`, etc.), except the `options` dict, which has
    its contents also passed as `method` parameters pair by pair.  Also, if
    `jac` has been passed as a bool type, `jac` and `fun` are mangled so that
    `fun` returns just the function values and `jac` is converted to a function
    returning the Jacobian.  The method shall return an ``OptimizeResult``
    object.
    
    The provided `method` callable must be able to accept (and possibly ignore)
    arbitrary parameters; the set of parameters accepted by `minimize` may
    expand in future versions and then these parameters will be passed to
    the method.  You can find an example in the scipy.optimize tutorial.
    
    References
    ----------
    .. [1] Nelder, J A, and R Mead. 1965. A Simplex Method for Function
        Minimization. The Computer Journal 7: 308-13.
    .. [2] Wright M H. 1996. Direct search methods: Once scorned, now
        respectable, in Numerical Analysis 1995: Proceedings of the 1995
        Dundee Biennial Conference in Numerical Analysis (Eds. D F
        Griffiths and G A Watson). Addison Wesley Longman, Harlow, UK.
        191-208.
    .. [3] Powell, M J D. 1964. An efficient method for finding the minimum of
       a function of several variables without calculating derivatives. The
       Computer Journal 7: 155-162.
    .. [4] Press W, S A Teukolsky, W T Vetterling and B P Flannery.
       Numerical Recipes (any edition), Cambridge University Press.
    .. [5] Nocedal, J, and S J Wright. 2006. Numerical Optimization.
       Springer New York.
    .. [6] Byrd, R H and P Lu and J. Nocedal. 1995. A Limited Memory
       Algorithm for Bound Constrained Optimization. SIAM Journal on
       Scientific and Statistical Computing 16 (5): 1190-1208.
    .. [7] Zhu, C and R H Byrd and J Nocedal. 1997. L-BFGS-B: Algorithm
       778: L-BFGS-B, FORTRAN routines for large scale bound constrained
       optimization. ACM Transactions on Mathematical Software 23 (4):
       550-560.
    .. [8] Nash, S G. Newton-Type Minimization Via the Lanczos Method.
       1984. SIAM Journal of Numerical Analysis 21: 770-778.
    .. [9] Powell, M J D. A direct search optimization method that models
       the objective and constraint functions by linear interpolation.
       1994. Advances in Optimization and Numerical Analysis, eds. S. Gomez
       and J-P Hennart, Kluwer Academic (Dordrecht), 51-67.
    .. [10] Powell M J D. Direct search algorithms for optimization
       calculations. 1998. Acta Numerica 7: 287-336.
    .. [11] Powell M J D. A view of algorithms for optimization without
       derivatives. 2007.Cambridge University Technical Report DAMTP
       2007/NA03
    .. [12] Kraft, D. A software package for sequential quadratic
       programming. 1988. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace
       Center -- Institute for Flight Mechanics, Koln, Germany.
    
    Examples
    --------
    Let us consider the problem of minimizing the Rosenbrock function. This
    function (and its respective derivatives) is implemented in `rosen`
    (resp. `rosen_der`, `rosen_hess`) in the `scipy.optimize`.
    
    >>> from scipy.optimize import minimize, rosen, rosen_der
    
    A simple application of the *Nelder-Mead* method is:
    
    >>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
    >>> res = minimize(rosen, x0, method='Nelder-Mead')
    >>> res.x
    [ 1.  1.  1.  1.  1.]
    
    Now using the *BFGS* algorithm, using the first derivative and a few
    options:
    
    >>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
    ...                options={'gtol': 1e-6, 'disp': True})
    Optimization terminated successfully.
             Current function value: 0.000000
             Iterations: 52
             Function evaluations: 64
             Gradient evaluations: 64
    >>> res.x
    [ 1.  1.  1.  1.  1.]
    >>> print res.message
    Optimization terminated successfully.
    >>> res.hess
    [[ 0.00749589  0.01255155  0.02396251  0.04750988  0.09495377]
     [ 0.01255155  0.02510441  0.04794055  0.09502834  0.18996269]
     [ 0.02396251  0.04794055  0.09631614  0.19092151  0.38165151]
     [ 0.04750988  0.09502834  0.19092151  0.38341252  0.7664427 ]
     [ 0.09495377  0.18996269  0.38165151  0.7664427   1.53713523]]
    
    
    Next, consider a minimization problem with several constraints (namely
    Example 16.4 from [5]_). The objective function is:
    
    >>> fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
    
    There are three constraints defined as:
    
    >>> cons = ({'type': 'ineq', 'fun': lambda x:  x[0] - 2 * x[1] + 2},
    ...         {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
    ...         {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2})
    
    And variables must be positive, hence the following bounds:
    
    >>> bnds = ((0, None), (0, None))
    
    The optimization problem is solved using the SLSQP method as:
    
    >>> res = minimize(fun, (2, 0), method='SLSQP', bounds=bnds,
    ...                constraints=cons)
    
    It should converge to the theoretical solution (1.4 ,1.7).

Roots


In [87]:
import scipy.special
print (scipy.special.jn_zeros(4,3))


[  7.58834243  11.06470949  14.37253667]
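
A quick check (added here, not in the book) confirms that these values really are zeros of $J_4$:

import scipy.special
zeros=scipy.special.jn_zeros(4,3)
print(scipy.special.jn(4,zeros))   # values at the computed zeros: all of order 1e-16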

Click with the left mouse button to the left of the output of the following command to collapse the long output into a scrollable (elevator-type) window


In [88]:
import scipy.optimize
help(scipy.optimize.root)


Help on function root in module scipy.optimize._root:

root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None, options=None)
    Find a root of a vector function.
    
    .. versionadded:: 0.11.0
    
    Parameters
    ----------
    fun : callable
        A vector function to find a root of.
    x0 : ndarray
        Initial guess.
    args : tuple, optional
        Extra arguments passed to the objective function and its Jacobian.
    method : str, optional
        Type of solver.  Should be one of
    
            - 'hybr'
            - 'lm'
            - 'broyden1'
            - 'broyden2'
            - 'anderson'
            - 'linearmixing'
            - 'diagbroyden'
            - 'excitingmixing'
            - 'krylov'
    
    jac : bool or callable, optional
        If `jac` is a Boolean and is True, `fun` is assumed to return the
        value of Jacobian along with the objective function. If False, the
        Jacobian will be estimated numerically.
        `jac` can also be a callable returning the Jacobian of `fun`. In
        this case, it must accept the same arguments as `fun`.
    tol : float, optional
        Tolerance for termination. For detailed control, use solver-specific
        options.
    callback : function, optional
        Optional callback function. It is called on every iteration as
        ``callback(x, f)`` where `x` is the current solution and `f`
        the corresponding residual. For all methods but 'hybr' and 'lm'.
    options : dict, optional
        A dictionary of solver options. E.g. `xtol` or `maxiter`, see
        :obj:`show_options()` for details.
    
    Returns
    -------
    sol : OptimizeResult
        The solution represented as a ``OptimizeResult`` object.
        Important attributes are: ``x`` the solution array, ``success`` a
        Boolean flag indicating if the algorithm exited successfully and
        ``message`` which describes the cause of the termination. See
        `OptimizeResult` for a description of other attributes.
    
    See also
    --------
    show_options : Additional options accepted by the solvers
    
    Notes
    -----
    This section describes the available solvers that can be selected by the
    'method' parameter. The default method is *hybr*.
    
    Method *hybr* uses a modification of the Powell hybrid method as
    implemented in MINPACK [1]_.
    
    Method *lm* solves the system of nonlinear equations in a least squares
    sense using a modification of the Levenberg-Marquardt algorithm as
    implemented in MINPACK [1]_.
    
    Methods *broyden1*, *broyden2*, *anderson*, *linearmixing*,
    *diagbroyden*, *excitingmixing*, *krylov* are inexact Newton methods,
    with backtracking or full line searches [2]_. Each method corresponds
    to a particular Jacobian approximations. See `nonlin` for details.
    
    - Method *broyden1* uses Broyden's first Jacobian approximation, it is
      known as Broyden's good method.
    - Method *broyden2* uses Broyden's second Jacobian approximation, it
      is known as Broyden's bad method.
    - Method *anderson* uses (extended) Anderson mixing.
    - Method *Krylov* uses Krylov approximation for inverse Jacobian. It
      is suitable for large-scale problem.
    - Method *diagbroyden* uses diagonal Broyden Jacobian approximation.
    - Method *linearmixing* uses a scalar Jacobian approximation.
    - Method *excitingmixing* uses a tuned diagonal Jacobian
      approximation.
    
    .. warning::
    
        The algorithms implemented for methods *diagbroyden*,
        *linearmixing* and *excitingmixing* may be useful for specific
        problems, but whether they will work may depend strongly on the
        problem.
    
    References
    ----------
    .. [1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom.
       1980. User Guide for MINPACK-1.
    .. [2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear
        Equations. Society for Industrial and Applied Mathematics.
        <http://www.siam.org/books/kelley/>
    
    Examples
    --------
    The following functions define a system of nonlinear equations and its
    jacobian.
    
    >>> def fun(x):
    ...     return [x[0]  + 0.5 * (x[0] - x[1])**3 - 1.0,
    ...             0.5 * (x[1] - x[0])**3 + x[1]]
    
    >>> def jac(x):
    ...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
    ...                       -1.5 * (x[0] - x[1])**2],
    ...                      [-1.5 * (x[1] - x[0])**2,
    ...                       1 + 1.5 * (x[1] - x[0])**2]])
    
    A solution can be obtained as follows.
    
    >>> from scipy import optimize
    >>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
    >>> sol.x
    array([ 0.8411639,  0.1588361])

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following equations is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \left\{ \begin{align} x' & = x^2 - 2 x - y + 0.5 \\ y' & = x^2 + 4 y^2 - 4 \end{align} \right. }$$


In [89]:
import numpy
import matplotlib.pyplot as plt
f=lambda x: [x[0]**2 - 2*x[0] - x[1] + 0.5, x[0]**2 + 4*x[1]**2 - 4]
x,y=numpy.mgrid[-0.5:2.5:24j,-0.5:2.5:24j]
U,V=f([x,y])
plt.quiver(x,y,U,V,color='r', \
         linewidths=(0.2,), edgecolors=('k'), \
         headaxislength=5)
plt.show()



In [90]:
import scipy.optimize
f=lambda x: [x[0]**2 - 2*x[0] - x[1] + 0.5, x[0]**2 + 4*x[1]**2 - 4]

In [91]:
scipy.optimize.root(f,[0,1])


Out[91]:
  status: 1
 success: True
     qtf: array([ -4.81190247e-09,  -3.83395899e-09])
    nfev: 9
       r: array([ 2.38128242, -0.60840482, -8.35489601])
     fun: array([  3.59529073e-12,   3.85025345e-12])
       x: array([-0.22221456,  0.99380842])
 message: 'The solution converged.'
    fjac: array([[-0.98918813, -0.14665209],
       [ 0.14665209, -0.98918813]])

In [92]:
scipy.optimize.root(f,[2,0])


Out[92]:
  status: 1
 success: True
     qtf: array([  2.08960516e-10,   8.61298294e-11])
    nfev: 12
       r: array([-4.56575336, -1.67067665, -1.81464307])
     fun: array([  2.44249065e-15,   1.42996726e-13])
       x: array([ 1.90067673,  0.31121857])
 message: 'The solution converged.'
    fjac: array([[-0.39612596, -0.91819618],
       [ 0.91819618, -0.39612596]])
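
The returned OptimizeResult carries both the solution and the residuals; a minimal sketch (added for illustration) extracting them:

import scipy.optimize
f=lambda x: [x[0]**2 - 2*x[0] - x[1] + 0.5, x[0]**2 + 4*x[1]**2 - 4]
sol=scipy.optimize.root(f,[2,0])
print(sol.x)      # the solution array
print(f(sol.x))   # residuals, essentially zero at a true root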

Integration

General references for the following topics:
scipy.special
http://docs.scipy.org/doc/scipy-0.14.0/reference/special.html

scipy.integrate
http://docs.scipy.org/doc/scipy-0.14.0/reference/integrate.html#module-scipy.integrate

The NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

Exponential/logarithm integrals

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following integrals is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{align} \mathrm{expn}(n,x) & = \int\limits_{1}^{\infty} \frac{\mathrm{e}^{-x t }}{\mathrm{t}^{n}} \,\mathrm{d}t & \quad \quad \mathrm{exp1}(x) &= \int\limits_{1}^{\infty} \frac{\mathrm{e}^{-x t }}{\mathrm{t}} \,\mathrm{d}t \\ \mathrm{expi}(x) & = \int\limits_{-\infty}^{x} \frac{\mathrm{e}^{t }}{\mathrm{t}} \,\mathrm{d}t &\quad \quad \mathrm{dawsn}(x) &= \mathrm{e}^{-x^2} \int\limits_{0}^{x} \mathrm{e}^{t^2} \,\mathrm{d}t \\ \mathrm{erf}(x) & = \frac{2}{\sqrt{\pi}} \int\limits_{0}^{x} \mathrm{e}^{-t^2} \,\mathrm{d}t &\quad \quad \mathrm{erfc}(x) &= \frac{2}{\sqrt{\pi}} \int\limits_{x}^{\infty} \mathrm{e}^{-t^2} \,\mathrm{d}t \\ \mathrm{spence}(x) & = - \int\limits_{1}^{x} \frac{\mathrm{log}(t )}{\mathrm{t - 1}} \,\mathrm{d}t &\quad \quad \quad & \quad \end{align} }$$

Trigonometric and hyperbolic trigonometric integrals

General references for the following topics are:
NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/
Table of the Fresnel Sine and Cosine Integrals
http://www.mymathlib.com/functions/fresnel_sin_cos_integrals.html

The meaning of the following integrals is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{align} \mathrm{fresnel}(z) & = \int\limits_{0}^{z} \sin\left({\frac{\pi}{2}t^2}\right)\mathrm{d}t \quad \quad \\ \mathrm{sici}(x) & = \int\limits_{0}^{x} \frac{\mathrm{sin}(t )}{\mathrm{t}}\mathrm{d}t, \quad \quad \gamma + \mathrm{log}(x) + \int\limits_{0}^{x} \frac{\cos(t) - 1}{\mathrm{t}}\mathrm{d}t \\ \mathrm{shichi}(x) & = \int\limits_{0}^{x} \frac{\mathrm{sinh}(t )}{\mathrm{t}}\mathrm{d}t, \quad \quad \gamma + \mathrm{log}(x) + \int\limits_{0}^{x} \frac{\cosh(t) - 1}{\mathrm{t}}\mathrm{d}t \end{align} }$$

$$ \boxed{ \begin{equation} \gamma = \lim\limits_{n \to \infty} \left( \sum\limits_{k=1}^{n} \frac{1}{k} - \log(n) \right) \end{equation} }$$
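
The constant $\gamma$ can be approximated numerically straight from this definition (a short check added here; numpy.euler_gamma holds the reference value):

import numpy
n=10**6
print(numpy.sum(1.0/numpy.arange(1,n+1))-numpy.log(n))  # partial sum minus log(n)
print(numpy.euler_gamma)                                # 0.5772156649015329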

Elliptic integrals

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following integrals is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{align} \mathrm{ellipkm1}(m) & = \int\limits_{0}^{\pi/2} \frac{\mathrm{d}\theta}{\sqrt{1 - (1-\mathrm{m}) \sin^2(\theta)}} & \quad \mathrm{ellipe}(m) &= \int\limits_{0}^{\pi/2} \sqrt{1 - \mathrm{m} \sin^2(\theta)} \,\mathrm{d}\theta \\ \mathrm{ellipkinc}(n,m) & = \int\limits_{0}^{n} \frac{\mathrm{d}\theta}{\sqrt{1 - \mathrm{m} \sin^2(\theta)}} & \quad \mathrm{ellipeinc}(n,m) &= \int\limits_{0}^{n} \sqrt{1 - \mathrm{m} \sin^2(\theta)} \,\mathrm{d}\theta \end{align} }$$

Gamma and beta integrals

A general reference for the following topics is the NIST Digital Library of Mathematical Functions
http://dlmf.nist.gov/

The meaning of the following integrals is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{align} \mathrm{gammainc}(a,x) & = \frac{1}{\Gamma(a)} \int\limits_{0}^{x} \mathrm{e}^{-t}\mathrm{t}^{a-1} \,\mathrm{d}t \\ \mathrm{gammaincc}(a,x) & = \frac{1}{\Gamma(a)} \int\limits_{x}^{\infty} \mathrm{e}^{-t}\mathrm{t}^{a-1} \,\mathrm{d}t \\ \mathrm{betainc}(a,b,x) & = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int\limits_{0}^{x} \mathrm{t}^{a-1}(1-\mathrm{t})^{b-1} \,\mathrm{d}t \end{align} }$$

Numerical Integration

Click with the left mouse button to the left of the output of the following command to collapse the long output into a scrollable (elevator-type) window


In [93]:
import scipy.integrate
help(scipy.integrate.quad)


Help on function quad in module scipy.integrate.quadpack:

quad(func, a, b, args=(), full_output=0, epsabs=1.49e-08, epsrel=1.49e-08, limit=50, points=None, weight=None, wvar=None, wopts=None, maxp1=50, limlst=50)
    Compute a definite integral.
    
    Integrate func from `a` to `b` (possibly infinite interval) using a
    technique from the Fortran library QUADPACK.
    
    Parameters
    ----------
    func : function
        A Python function or method to integrate.  If `func` takes many
        arguments, it is integrated along the axis corresponding to the
        first argument.
    a : float
        Lower limit of integration (use -numpy.inf for -infinity).
    b : float
        Upper limit of integration (use numpy.inf for +infinity).
    args : tuple, optional
        Extra arguments to pass to `func`.
    full_output : int, optional
        Non-zero to return a dictionary of integration information.
        If non-zero, warning messages are also suppressed and the
        message is appended to the output tuple.
    
    Returns
    -------
    y : float
        The integral of func from `a` to `b`.
    abserr : float
        An estimate of the absolute error in the result.
    infodict : dict
        A dictionary containing additional information.
        Run scipy.integrate.quad_explain() for more information.
    message :
        A convergence message.
    explain :
        Appended only with 'cos' or 'sin' weighting and infinite
        integration limits, it contains an explanation of the codes in
        infodict['ierlst']
    
    Other Parameters
    ----------------
    epsabs : float or int, optional
        Absolute error tolerance.
    epsrel : float or int, optional
        Relative error tolerance.
    limit : float or int, optional
        An upper bound on the number of subintervals used in the adaptive
        algorithm.
    points : (sequence of floats,ints), optional
        A sequence of break points in the bounded integration interval
        where local difficulties of the integrand may occur (e.g.,
        singularities, discontinuities). The sequence does not have
        to be sorted.
    weight : float or int, optional
        String indicating weighting function. Full explanation for this
        and the remaining arguments can be found below.
    wvar : optional
        Variables for use with weighting functions.
    wopts : optional
        Optional input for reusing Chebyshev moments.
    maxp1 : float or int, optional
        An upper bound on the number of Chebyshev moments.
    limlst : int, optional
        Upper bound on the number of cycles (>=3) for use with a sinusoidal
        weighting and an infinite end-point.
    
    See Also
    --------
    dblquad : double integral
    tplquad : triple integral
    nquad : n-dimensional integrals (uses `quad` recursively)
    fixed_quad : fixed-order Gaussian quadrature
    quadrature : adaptive Gaussian quadrature
    odeint : ODE integrator
    ode : ODE integrator
    simps : integrator for sampled data
    romb : integrator for sampled data
    scipy.special : for coefficients and roots of orthogonal polynomials
    
    Notes
    -----
    
    **Extra information for quad() inputs and outputs**
    
    If full_output is non-zero, then the third output argument
    (infodict) is a dictionary with entries as tabulated below.  For
    infinite limits, the range is transformed to (0,1) and the
    optional outputs are given with respect to this transformed range.
    Let M be the input argument limit and let K be infodict['last'].
    The entries are:
    
    'neval'
        The number of function evaluations.
    'last'
        The number, K, of subintervals produced in the subdivision process.
    'alist'
        A rank-1 array of length M, the first K elements of which are the
        left end points of the subintervals in the partition of the
        integration range.
    'blist'
        A rank-1 array of length M, the first K elements of which are the
        right end points of the subintervals.
    'rlist'
        A rank-1 array of length M, the first K elements of which are the
        integral approximations on the subintervals.
    'elist'
        A rank-1 array of length M, the first K elements of which are the
        moduli of the absolute error estimates on the subintervals.
    'iord'
        A rank-1 integer array of length M, the first L elements of
        which are pointers to the error estimates over the subintervals
        with L=K if K<=M/2+2 or L=M+1-K otherwise. Let I be the sequence
        infodict['iord'] and let E be the sequence infodict['elist'].
        Then E[I[1]], ..., E[I[L]] forms a decreasing sequence.
    
    If the input argument points is provided (i.e. it is not None),
    the following additional outputs are placed in the output
    dictionary.  Assume the points sequence is of length P.
    
    'pts'
        A rank-1 array of length P+2 containing the integration limits
        and the break points of the intervals in ascending order.
        This is an array giving the subintervals over which integration
        will occur.
    'level'
        A rank-1 integer array of length M (=limit), containing the
        subdivision levels of the subintervals, i.e., if (aa,bb) is a
        subinterval of (pts[1], pts[2]) where pts[0] and pts[2] are
        adjacent elements of infodict['pts'], then (aa,bb) has level l if
        |bb-aa|=|pts[2]-pts[1]| * 2**(-l).
    'ndin'
        A rank-1 integer array of length P+2.  After the first integration
        over the intervals (pts[1], pts[2]), the error estimates over some
        of the intervals may have been increased artificially in order to
        put their subdivision forward.  This array has ones in slots
        corresponding to the subintervals for which this happens.
    
    **Weighting the integrand**
    
    The input variables, *weight* and *wvar*, are used to weight the
    integrand by a select list of functions.  Different integration
    methods are used to compute the integral with these weighting
    functions.  The possible values of weight and the corresponding
    weighting functions are.
    
    ==========  ===================================   =====================
    ``weight``  Weight function used                  ``wvar``
    ==========  ===================================   =====================
    'cos'       cos(w*x)                              wvar = w
    'sin'       sin(w*x)                              wvar = w
    'alg'       g(x) = ((x-a)**alpha)*((b-x)**beta)   wvar = (alpha, beta)
    'alg-loga'  g(x)*log(x-a)                         wvar = (alpha, beta)
    'alg-logb'  g(x)*log(b-x)                         wvar = (alpha, beta)
    'alg-log'   g(x)*log(x-a)*log(b-x)                wvar = (alpha, beta)
    'cauchy'    1/(x-c)                               wvar = c
    ==========  ===================================   =====================
    
    wvar holds the parameter w, (alpha, beta), or c depending on the weight
    selected.  In these expressions, a and b are the integration limits.
    
    For the 'cos' and 'sin' weighting, additional inputs and outputs are
    available.
    
    For finite integration limits, the integration is performed using a
    Clenshaw-Curtis method which uses Chebyshev moments.  For repeated
    calculations, these moments are saved in the output dictionary:
    
    'momcom'
        The maximum level of Chebyshev moments that have been computed,
        i.e., if M_c is infodict['momcom'] then the moments have been
        computed for intervals of length |b-a|* 2**(-l), l=0,1,...,M_c.
    'nnlog'
        A rank-1 integer array of length M(=limit), containing the
        subdivision levels of the subintervals, i.e., an element of this
        array is equal to l if the corresponding subinterval is
        |b-a|* 2**(-l).
    'chebmo'
        A rank-2 array of shape (25, maxp1) containing the computed
        Chebyshev moments.  These can be passed on to an integration
        over the same interval by passing this array as the second
        element of the sequence wopts and passing infodict['momcom'] as
        the first element.
    
    If one of the integration limits is infinite, then a Fourier integral is
    computed (assuming w neq 0).  If full_output is 1 and a numerical error
    is encountered, besides the error message attached to the output tuple,
    a dictionary is also appended to the output tuple which translates the
    error codes in the array info['ierlst'] to English messages.  The output
    information dictionary contains the following entries instead of 'last',
    'alist', 'blist', 'rlist', and 'elist':
    
    'lst'
        The number of subintervals needed for the integration (call it K_f).
    'rslst'
        A rank-1 array of length M_f=limlst, whose first K_f elements
        contain the integral contribution over the interval (a+(k-1)c,
        a+kc) where c = (2*floor(|w|) + 1) * pi / |w| and k=1,2,...,K_f.
    'erlst'
        A rank-1 array of length M_f containing the error estimate
        corresponding to the interval in the same position in
        infodict['rslist'].
    'ierlst'
        A rank-1 integer array of length M_f containing an error flag
        corresponding to the interval in the same position in
        infodict['rslist'].  See the explanation dictionary (last entry
        in the output tuple) for the meaning of the codes.
    
    Examples
    --------
    Calculate :math:`\int^4_0 x^2 dx` and compare with an analytic result
    
    >>> from scipy import integrate
    >>> x2 = lambda x: x**2
    >>> integrate.quad(x2, 0, 4)
    (21.333333333333332, 2.3684757858670003e-13)
    >>> print(4**3 / 3.)  # analytical result
    21.3333333333
    
    Calculate :math:`\int^\infty_0 e^{-x} dx`
    
    >>> invexp = lambda x: np.exp(-x)
    >>> integrate.quad(invexp, 0, np.inf)
    (1.0, 5.842605999138044e-11)
    
    >>> f = lambda x,a : a*x
    >>> y, err = integrate.quad(f, 0, 1, args=(1,))
    >>> y
    0.5
    >>> y, err = integrate.quad(f, 0, 1, args=(3,))
    >>> y
    1.5

Click with the left mouse button to the left of the output of the following command to collapse the long output into a scrollable (elevator-type) window


In [94]:
help(scipy.integrate.simps)


Help on function simps in module scipy.integrate.quadrature:

simps(y, x=None, dx=1, axis=-1, even='avg')
    Integrate y(x) using samples along the given axis and the composite
    Simpson's rule.  If x is None, spacing of dx is assumed.
    
    If there are an even number of samples, N, then there are an odd
    number of intervals (N-1), but Simpson's rule requires an even number
    of intervals.  The parameter 'even' controls how this is handled.
    
    Parameters
    ----------
    y : array_like
        Array to be integrated.
    x : array_like, optional
        If given, the points at which `y` is sampled.
    dx : int, optional
        Spacing of integration points along axis of `y`. Only used when
        `x` is None. Default is 1.
    axis : int, optional
        Axis along which to integrate. Default is the last axis.
    even : {'avg', 'first', 'str'}, optional
        'avg' : Average two results:1) use the first N-2 intervals with
                  a trapezoidal rule on the last interval and 2) use the last
                  N-2 intervals with a trapezoidal rule on the first interval.
    
        'first' : Use Simpson's rule for the first N-2 intervals with
                a trapezoidal rule on the last interval.
    
        'last' : Use Simpson's rule for the last N-2 intervals with a
               trapezoidal rule on the first interval.
    
    See Also
    --------
    quad: adaptive quadrature using QUADPACK
    romberg: adaptive Romberg quadrature
    quadrature: adaptive Gaussian quadrature
    fixed_quad: fixed-order Gaussian quadrature
    dblquad: double integrals
    tplquad: triple integrals
    romb: integrators for sampled data
    cumtrapz: cumulative integration for sampled data
    ode: ODE integrators
    odeint: ODE integrators
    
    Notes
    -----
    For an odd number of samples that are equally spaced the result is
    exact if the function is a polynomial of order 3 or less.  If
    the samples are not equally spaced, then the result is exact only
    if the function is a polynomial of order 2 or less.


In [95]:
f=lambda t: numpy.exp(-t)*t**4
from scipy.special import gammainc
from scipy.integrate import quad
from scipy.misc import factorial
print (gammainc(5,1))


0.00365984682734

In [96]:
print('%.19f' % gammainc(5,1))


0.0036598468273437135

In [97]:
import numpy
result,error=quad(f,0,1)
result/factorial(4)


Out[97]:
0.0036598468273437127

In [98]:
import numpy
import scipy.integrate
x=numpy.linspace(0,1,10000)
scipy.integrate.simps(f(x)/factorial(4), x)


Out[98]:
0.0036598468273469067
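
For comparison, the same normalized integral can also be computed with scipy.integrate.romberg, which works from the function itself rather than from samples (an extra check, assuming f and factorial from the cells above):

import scipy.integrate
print(scipy.integrate.romberg(f,0,1)/factorial(4))   # should again match gammainc(5,1)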

Ordinary differential equations

The meaning of the following equation is fully explained in the corresponding section of this chapter in the book

$$ \boxed{ \begin{equation} \frac{d\mathbf{y}}{dt} = f(t,\mathbf{y}), \,\,\, \mathbf{y}(t)=(y_1(t),\dots,y_n(t)), \,\,\,t \in \mathbb{R} \end{equation} }$$
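
Before inspecting the full interface, a minimal sketch (using the simple test problem $y' = -2y$, $y(0)=1$, whose exact solution is $e^{-2t}$; this example is not in the book) shows the typical usage pattern of the ode class:

import scipy.integrate
r=scipy.integrate.ode(lambda t,y: -2.0*y)   # right-hand side f(t, y)
r.set_integrator('dopri5')                  # explicit Runge-Kutta of order (4)5
r.set_initial_value(1.0, 0.0)               # y(0) = 1
while r.successful() and r.t < 1.0:
    r.integrate(r.t + 0.25)                 # advance the solution in steps of 0.25
    print(r.t, r.y[0])                      # compare against numpy.exp(-2*r.t)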

Click with the left mouse button to the left of the output of the following command to collapse the long output into a scrollable (elevator-type) window


In [99]:
import scipy.integrate
help(scipy.integrate.ode)


Help on class ode in module scipy.integrate._ode:

class ode(__builtin__.object)
 |  A generic interface class to numeric integrators.
 |  
 |  Solve an equation system :math:`y'(t) = f(t,y)` with (optional) ``jac = df/dy``.
 |  
 |  Parameters
 |  ----------
 |  f : callable ``f(t, y, *f_args)``
 |      Rhs of the equation. t is a scalar, ``y.shape == (n,)``.
 |      ``f_args`` is set by calling ``set_f_params(*args)``.
 |      `f` should return a scalar, array or list (not a tuple).
 |  jac : callable ``jac(t, y, *jac_args)``
 |      Jacobian of the rhs, ``jac[i,j] = d f[i] / d y[j]``.
 |      ``jac_args`` is set by calling ``set_f_params(*args)``.
 |  
 |  Attributes
 |  ----------
 |  t : float
 |      Current time.
 |  y : ndarray
 |      Current variable values.
 |  
 |  See also
 |  --------
 |  odeint : an integrator with a simpler interface based on lsoda from ODEPACK
 |  quad : for finding the area under a curve
 |  
 |  Notes
 |  -----
 |  Available integrators are listed below. They can be selected using
 |  the `set_integrator` method.
 |  
 |  "vode"
 |  
 |      Real-valued Variable-coefficient Ordinary Differential Equation
 |      solver, with fixed-leading-coefficient implementation. It provides
 |      implicit Adams method (for non-stiff problems) and a method based on
 |      backward differentiation formulas (BDF) (for stiff problems).
 |  
 |      Source: http://www.netlib.org/ode/vode.f
 |  
 |      .. warning::
 |  
 |         This integrator is not re-entrant. You cannot have two `ode`
 |         instances using the "vode" integrator at the same time.
 |  
 |      This integrator accepts the following parameters in `set_integrator`
 |      method of the `ode` class:
 |  
 |      - atol : float or sequence
 |        absolute tolerance for solution
 |      - rtol : float or sequence
 |        relative tolerance for solution
 |      - lband : None or int
 |      - rband : None or int
 |        Jacobian band width, jac[i,j] != 0 for i-lband <= j <= i+rband.
 |        Setting these requires your jac routine to return the jacobian
 |        in packed format, jac_packed[i-j+lband, j] = jac[i,j].
 |      - method: 'adams' or 'bdf'
 |        Which solver to use, Adams (non-stiff) or BDF (stiff)
 |      - with_jacobian : bool
 |        Whether to use the jacobian
 |      - nsteps : int
 |        Maximum number of (internally defined) steps allowed during one
 |        call to the solver.
 |      - first_step : float
 |      - min_step : float
 |      - max_step : float
 |        Limits for the step sizes used by the integrator.
 |      - order : int
 |        Maximum order used by the integrator,
 |        order <= 12 for Adams, <= 5 for BDF.
 |  
 |  "zvode"
 |  
 |      Complex-valued Variable-coefficient Ordinary Differential Equation
 |      solver, with fixed-leading-coefficient implementation.  It provides
 |      implicit Adams method (for non-stiff problems) and a method based on
 |      backward differentiation formulas (BDF) (for stiff problems).
 |  
 |      Source: http://www.netlib.org/ode/zvode.f
 |  
 |      .. warning::
 |  
 |         This integrator is not re-entrant. You cannot have two `ode`
 |         instances using the "zvode" integrator at the same time.
 |  
 |      This integrator accepts the same parameters in `set_integrator`
 |      as the "vode" solver.
 |  
 |      .. note::
 |  
 |          When using ZVODE for a stiff system, it should only be used for
 |          the case in which the function f is analytic, that is, when each f(i)
 |          is an analytic function of each y(j).  Analyticity means that the
 |          partial derivative df(i)/dy(j) is a unique complex number, and this
 |          fact is critical in the way ZVODE solves the dense or banded linear
 |          systems that arise in the stiff case.  For a complex stiff ODE system
 |          in which f is not analytic, ZVODE is likely to have convergence
 |          failures, and for this problem one should instead use DVODE on the
 |          equivalent real system (in the real and imaginary parts of y).
 |  
 |  "lsoda"
 |  
 |      Real-valued Variable-coefficient Ordinary Differential Equation
 |      solver, with fixed-leading-coefficient implementation. It provides
 |      automatic method switching between implicit Adams method (for non-stiff
 |      problems) and a method based on backward differentiation formulas (BDF)
 |      (for stiff problems).
 |  
 |      Source: http://www.netlib.org/odepack
 |  
 |      .. warning::
 |  
 |         This integrator is not re-entrant. You cannot have two `ode`
 |         instances using the "lsoda" integrator at the same time.
 |  
 |      This integrator accepts the following parameters in `set_integrator`
 |      method of the `ode` class:
 |  
 |      - atol : float or sequence
 |        absolute tolerance for solution
 |      - rtol : float or sequence
 |        relative tolerance for solution
 |      - lband : None or int
 |      - rband : None or int
 |        Jacobian band width, jac[i,j] != 0 for i-lband <= j <= i+rband.
 |        Setting these requires your jac routine to return the jacobian
 |        in packed format, jac_packed[i-j+lband, j] = jac[i,j].
 |      - with_jacobian : bool
 |        Whether to use the jacobian
 |      - nsteps : int
 |        Maximum number of (internally defined) steps allowed during one
 |        call to the solver.
 |      - first_step : float
 |      - min_step : float
 |      - max_step : float
 |        Limits for the step sizes used by the integrator.
 |      - max_order_ns : int
 |        Maximum order used in the nonstiff case (default 12).
 |      - max_order_s : int
 |        Maximum order used in the stiff case (default 5).
 |      - max_hnil : int
 |        Maximum number of messages reporting too small step size (t + h = t)
 |        (default 0)
 |      - ixpr : int
 |        Whether to generate extra printing at method switches (default False).
 |  
 |  "dopri5"
 |  
 |      This is an explicit Runge-Kutta method of order (4)5 due to Dormand &
 |      Prince (with stepsize control and dense output).
 |  
 |      Authors:
 |  
 |          E. Hairer and G. Wanner
 |          Universite de Geneve, Dept. de Mathematiques
 |          CH-1211 Geneve 24, Switzerland
 |          e-mail:  ernst.hairer@math.unige.ch, gerhard.wanner@math.unige.ch
 |  
 |      This code is described in [HNW93]_.
 |  
 |      This integrator accepts the following parameters in set_integrator()
 |      method of the ode class:
 |  
 |      - atol : float or sequence
 |        absolute tolerance for solution
 |      - rtol : float or sequence
 |        relative tolerance for solution
 |      - nsteps : int
 |        Maximum number of (internally defined) steps allowed during one
 |        call to the solver.
 |      - first_step : float
 |      - max_step : float
 |      - safety : float
 |        Safety factor on new step selection (default 0.9)
 |      - ifactor : float
 |      - dfactor : float
 |        Maximum factor to increase/decrease step size by in one step
 |      - beta : float
 |        Beta parameter for stabilised step size control.
 |      - verbosity : int
 |        Switch for printing messages (< 0 for no messages).
 |  
 |  "dop853"
 |  
 |      This is an explicit Runge-Kutta method of order 8(5,3) due to Dormand
 |      & Prince (with stepsize control and dense output).
 |  
 |      Options and references the same as "dopri5".
 |  
 |  Examples
 |  --------
 |  
 |  A problem to integrate and the corresponding jacobian:
 |  
 |  >>> from scipy.integrate import ode
 |  >>>
 |  >>> y0, t0 = [1.0j, 2.0], 0
 |  >>>
 |  >>> def f(t, y, arg1):
 |  ...     return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]
 |  >>> def jac(t, y, arg1):
 |  ...     return [[1j*arg1, 1], [0, -arg1*2*y[1]]]
 |  
 |  The integration:
 |  
 |  >>> r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True)
 |  >>> r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)
 |  >>> t1 = 10
 |  >>> dt = 1
 |  >>> while r.successful() and r.t < t1:
 |  ...     r.integrate(r.t+dt)
 |  ...     print(r.t, r.y)
 |  
 |  References
 |  ----------
 |  .. [HNW93] E. Hairer, S.P. Norsett and G. Wanner, Solving Ordinary
 |      Differential Equations I: Nonstiff Problems, 2nd edition.
 |      Springer Series in Computational Mathematics,
 |      Springer-Verlag (1993)
 |  
 |  Methods defined here:
 |  
 |  __init__(self, f, jac=None)
 |  
 |  integrate(self, t, step=0, relax=0)
 |      Find y=y(t), set y as an initial condition, and return y.
 |  
 |  set_f_params(self, *args)
 |      Set extra parameters for user-supplied function f.
 |  
 |  set_initial_value(self, y, t=0.0)
 |      Set initial conditions y(t) = y.
 |  
 |  set_integrator(self, name, **integrator_params)
 |      Set integrator by name.
 |      
 |      Parameters
 |      ----------
 |      name : str
 |          Name of the integrator.
 |      integrator_params :
 |          Additional parameters for the integrator.
 |  
 |  set_jac_params(self, *args)
 |      Set extra parameters for user-supplied function jac.
 |  
 |  set_solout(self, solout)
 |      Set callable to be called at every successful integration step.
 |      
 |      Parameters
 |      ----------
 |      solout : callable
 |          ``solout(t, y)`` is called at each internal integrator step,
 |          where t is a scalar giving the current value of the independent
 |          variable and y is the current solution with ``y.shape == (n,)``.
 |          solout should return -1 to stop the integration;
 |          otherwise it should return None or 0.
 |  
 |  successful(self)
 |      Check if integration was successful.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  y
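
The set_solout entry above describes a per-step callback. As a minimal sketch of ours (not from the book; the names steps and solout are our own), the following shows how it might be used to record every internal step. Note that set_solout only works with the "dopri5" and "dop853" integrators, and that we install the callback before setting the initial value:

from scipy.integrate import ode

steps = []                       # collects (t, y) at every internal step
def solout(t, y):
    steps.append((t, y.copy()))
    return 0                     # returning -1 would stop the integration early

solver = ode(lambda t, y: -y).set_integrator('dopri5')
solver.set_solout(solout)        # install the callback before set_initial_value
solver.set_initial_value([1.0], 0.0)
solver.integrate(5.0)
print(len(steps), steps[-1])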

Let's solve, via ode, the following initial value problem (for additional details, please see the corresponding section of this chapter in the book)

$$ \boxed{ \begin{equation} y' = - 20 y, \,\,\, y(0)=1 \end{equation} }$$

Click the left mouse button to the left of the output of the following command to collapse the displayed output into a scrollable window


In [100]:
import numpy
from scipy.integrate import ode
f=lambda t,y: -20*y        # The ODE
actual_solution=lambda t:numpy.exp(-20*t)  # actual solution
dt=0.01            # time step
solver=ode(f).set_integrator('dop853')  # solver
solver.set_initial_value(1,0)      # initial value
while solver.successful() and solver.t<=1+dt:
   # solve the equation at successive time steps,
   # until the time is greater than 1
   # but make sure that the solution is successful
   print (solver.t, solver.y, actual_solution(solver.t))
   # We compare each numerical solution with the actual
   # solution of the ODE
   solver.integrate(solver.t + dt)    # solve next step


(0, array([ 1.]), 1.0)
(0.01, array([ 0.81873075]), 0.81873075307798182)
(0.02, array([ 0.67032005]), 0.67032004603563933)
(0.03, array([ 0.54881164]), 0.54881163609402639)
(0.04, array([ 0.44932896]), 0.44932896411722156)
(0.05, array([ 0.36787944]), 0.36787944117144233)
(0.060000000000000005, array([ 0.30119421]), 0.30119421191220203)
(0.07, array([ 0.24659696]), 0.24659696394160643)
(0.08, array([ 0.20189652]), 0.20189651799465538)
(0.09, array([ 0.16529889]), 0.16529888822158656)
(0.09999999999999999, array([ 0.13533528]), 0.13533528323661273)
(0.10999999999999999, array([ 0.11080316]), 0.11080315836233391)
(0.11999999999999998, array([ 0.09071795]), 0.090717953289412553)
(0.12999999999999998, array([ 0.07427358]), 0.074273578214333905)
(0.13999999999999999, array([ 0.06081006]), 0.060810062625217973)
(0.15, array([ 0.04978707]), 0.049787068367863944)
(0.16, array([ 0.0407622]), 0.040762203978366211)
(0.17, array([ 0.03337327]), 0.033373269960326066)
(0.18000000000000002, array([ 0.02732372]), 0.027323722447292545)
(0.19000000000000003, array([ 0.02237077]), 0.022370771856165581)
(0.20000000000000004, array([ 0.01831564]), 0.018315638888734165)
(0.21000000000000005, array([ 0.01499558]), 0.014995576820477691)
(0.22000000000000006, array([ 0.01227734]), 0.012277339903068426)
(0.23000000000000007, array([ 0.01005184]), 0.010051835744633567)
(0.24000000000000007, array([ 0.00822975]), 0.0082297470490200163)
(0.25000000000000006, array([ 0.00673795]), 0.0067379469990854609)
(0.26000000000000006, array([ 0.00551656]), 0.0055165644207607663)
(0.2700000000000001, array([ 0.00451658]), 0.0045165809426126625)
(0.2800000000000001, array([ 0.00369786]), 0.0036978637164829255)
(0.2900000000000001, array([ 0.00302755]), 0.0030275547453758097)
(0.3000000000000001, array([ 0.00247875]), 0.0024787521766663542)
(0.3100000000000001, array([ 0.00202943]), 0.0020294306362957306)
(0.3200000000000001, array([ 0.00166156]), 0.0016615572731739309)
(0.3300000000000001, array([ 0.00136037]), 0.0013603680375478902)
(0.34000000000000014, array([ 0.00111378]), 0.0011137751478448002)
(0.35000000000000014, array([ 0.00091188]), 0.00091188196555451375)
(0.36000000000000015, array([ 0.00074659]), 0.00074658580837667725)
(0.37000000000000016, array([ 0.00061125]), 0.00061125276112957067)
(0.38000000000000017, array([ 0.00050045]), 0.0005004514334406091)
(0.3900000000000002, array([ 0.00040973]), 0.00040973497897978534)
(0.4000000000000002, array([ 0.00033546]), 0.00033546262790251066)
(0.4100000000000002, array([ 0.00027465]), 0.00027465356997214107)
(0.4200000000000002, array([ 0.00022487]), 0.00022486732417884738)
(0.4300000000000002, array([ 0.00018411]), 0.00018410579366757822)
(0.4400000000000002, array([ 0.00015073]), 0.00015073307509547596)
(0.45000000000000023, array([ 0.00012341]), 0.00012340980408667888)
(0.46000000000000024, array([ 0.00010104]), 0.00010103940183709289)
(0.47000000000000025, array([  8.27240656e-05]), 8.2724065556631795e-05)
(0.48000000000000026, array([  6.77287365e-05]), 6.7728736490853532e-05)
(0.49000000000000027, array([  5.54515994e-05]), 5.5451599432176647e-05)
(0.5000000000000002, array([  4.53999298e-05]), 4.5399929762484692e-05)
(0.5100000000000002, array([  3.71703187e-05]), 3.7170318684126531e-05)
(0.5200000000000002, array([  3.04324830e-05]), 3.0432483008403462e-05)
(0.5300000000000002, array([  2.49160097e-05]), 2.4916009731503072e-05)
(0.5400000000000003, array([  2.03995034e-05]), 2.0399503411171851e-05)
(0.5500000000000003, array([  1.67017008e-05]), 1.6701700790245571e-05)
(0.5600000000000003, array([  1.36741961e-05]), 1.3674196065680865e-05)
(0.5700000000000003, array([  1.11954848e-05]), 1.1195484842590881e-05)
(0.5800000000000003, array([  9.16608774e-06]), 9.1660877362475697e-06)
(0.5900000000000003, array([  7.50455792e-06]), 7.5045579150768183e-06)
(0.6000000000000003, array([  6.14421235e-06]), 6.1442123533281658e-06)
(0.6100000000000003, array([  5.03045561e-06]), 5.0304556071114119e-06)
(0.6200000000000003, array([  4.11858871e-06]), 4.1185887075356861e-06)
(0.6300000000000003, array([  3.37201523e-06]), 3.3720152341391604e-06)
(0.6400000000000003, array([  2.76077257e-06]), 2.7607725720371792e-06)
(0.6500000000000004, array([  2.26032941e-06]), 2.2603294069810381e-06)
(0.6600000000000004, array([  1.85060120e-06]), 1.8506011975818948e-06)
(0.6700000000000004, array([  1.51514411e-06]), 1.5151441121432384e-06)
(0.6800000000000004, array([  1.24049508e-06]), 1.2404950799567024e-06)
(0.6900000000000004, array([  1.01563147e-06]), 1.0156314710024831e-06)
(0.7000000000000004, array([  8.31528719e-07]), 8.3152871910356195e-07)
(0.7100000000000004, array([  6.80798134e-07]), 6.807981343976282e-07)
(0.7200000000000004, array([  5.57390369e-07]), 5.5739036926945469e-07)
(0.7300000000000004, array([  4.56352637e-07]), 4.5635263679039531e-07)
(0.7400000000000004, array([  3.73629938e-07]), 3.7362993798852337e-07)
(0.7500000000000004, array([  3.05902321e-07]), 3.0590232050182309e-07)
(0.7600000000000005, array([  2.50451637e-07]), 2.504516372327595e-07)
(0.7700000000000005, array([  2.05052458e-07]), 2.0505245756119085e-07)
(0.7800000000000005, array([  1.67882753e-07]), 1.6788275299956484e-07)
(0.7900000000000005, array([  1.37450773e-07]), 1.3745077279213838e-07)
(0.8000000000000005, array([  1.12535175e-07]), 1.1253517471925791e-07)
(0.8100000000000005, array([  9.21360083e-08]), 9.2136008345660369e-08)
(0.8200000000000005, array([  7.54345835e-08]), 7.5434583498441788e-08)
(0.8300000000000005, array([  6.17606134e-08]), 6.1760613355803196e-08)
(0.8400000000000005, array([  5.05653135e-08]), 5.0565313483354667e-08)
(0.8500000000000005, array([  4.13993772e-08]), 4.1399377187851225e-08)
(0.8600000000000005, array([  3.38949433e-08]), 3.3894943261968879e-08)
(0.8700000000000006, array([  2.77508324e-08]), 2.7750832422407169e-08)
(0.8800000000000006, array([  2.27204599e-08]), 2.2720459927738314e-08)
(0.8900000000000006, array([  1.86019393e-08]), 1.8601939266915312e-08)
(0.9000000000000006, array([  1.52299797e-08]), 1.5229979744712467e-08)
(0.9100000000000006, array([  1.24692528e-08]), 1.2469252785750856e-08)
(0.9200000000000006, array([  1.02089607e-08]), 1.0208960723597492e-08)
(0.9300000000000006, array([  8.35839010e-09]), 8.3583901013745192e-09)
(0.9400000000000006, array([  6.84327102e-09]), 6.8432710222179141e-09)
(0.9500000000000006, array([  5.60279644e-09]), 5.6027964375371876e-09)
(0.9600000000000006, array([  4.58718175e-09]), 4.5871817466474593e-09)
(0.9700000000000006, array([  3.75566677e-09]), 3.7556667659382487e-09)
(0.9800000000000006, array([  3.07487988e-09]), 3.0748798795865734e-09)
(0.9900000000000007, array([  2.51749872e-09]), 2.517498719438251e-09)
(1.0000000000000007, array([  2.06115362e-09]), 2.0611536224385285e-09)
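
Rather than scanning a hundred printed lines, a quick accuracy check (a sketch of ours, not from the book; max_err is our own name) is to accumulate the largest absolute deviation from the actual solution over the whole run; with the default tolerances of dop853 it should remain very small, as the matching digits above suggest:

import numpy
from scipy.integrate import ode

f = lambda t, y: -20*y
actual_solution = lambda t: numpy.exp(-20*t)
solver = ode(f).set_integrator('dop853')
solver.set_initial_value(1, 0)
dt = 0.01
max_err = 0.0
while solver.successful() and solver.t < 1:
    solver.integrate(solver.t + dt)
    # keep the worst absolute error seen so far
    max_err = max(max_err, abs(solver.y[0] - actual_solution(solver.t)))
print(max_err)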


Click the left mouse button to the left of the output of the following command to collapse the displayed output into a scrollable window


In [101]:
help(scipy.integrate.odeint)


Help on function odeint in module scipy.integrate.odepack:

odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0)
    Integrate a system of ordinary differential equations.
    
    Solve a system of ordinary differential equations using lsoda from the
    FORTRAN library odepack.
    
    Solves the initial value problem for stiff or non-stiff systems
    of first-order ODEs::
    
        dy/dt = func(y,t0,...)
    
    where y can be a vector.
    
    Parameters
    ----------
    func : callable(y, t0, ...)
        Computes the derivative of y at t0.
    y0 : array
        Initial condition on y (can be a vector).
    t : array
        A sequence of time points for which to solve for y.  The initial
        value point should be the first element of this sequence.
    args : tuple, optional
        Extra arguments to pass to function.
    Dfun : callable(y, t0, ...)
        Gradient (Jacobian) of `func`.
    col_deriv : bool, optional
        True if `Dfun` defines derivatives down columns (faster),
        otherwise `Dfun` should define derivatives across rows.
    full_output : bool, optional
        Whether to return a dictionary of optional outputs as the second output.
    printmessg : bool, optional
        Whether to print the convergence message
    
    Returns
    -------
    y : array, shape (len(t), len(y0))
        Array containing the value of y for each desired time in t,
        with the initial value `y0` in the first row.
    infodict : dict, only returned if full_output == True
        Dictionary containing additional output information
    
        =======  ============================================================
        key      meaning
        =======  ============================================================
        'hu'     vector of step sizes successfully used for each time step.
        'tcur'   vector with the value of t reached for each time step.
                 (will always be at least as large as the input times).
        'tolsf'  vector of tolerance scale factors, greater than 1.0,
                 computed when a request for too much accuracy was detected.
        'tsw'    value of t at the time of the last method switch
                 (given for each time step)
        'nst'    cumulative number of time steps
        'nfe'    cumulative number of function evaluations for each time step
        'nje'    cumulative number of jacobian evaluations for each time step
        'nqu'    a vector of method orders for each successful step.
        'imxer'  index of the component of largest magnitude in the
                 weighted local error vector (e / ewt) on an error return, -1
                 otherwise.
        'lenrw'  the length of the double work array required.
        'leniw'  the length of integer work array required.
        'mused'  a vector of method indicators for each successful time step:
                 1: adams (nonstiff), 2: bdf (stiff)
        =======  ============================================================
    
    Other Parameters
    ----------------
    ml, mu : int, optional
        If either of these is not None and is non-negative, then the
        Jacobian is assumed to be banded.  These give the number of
        lower and upper non-zero diagonals in this banded matrix.
        For the banded case, `Dfun` should return a matrix whose
        rows contain the non-zero bands (starting with the lowest diagonal).
        Thus, the return matrix `jac` from `Dfun` should have shape
        ``(ml + mu + 1, len(y0))`` when ``ml >=0`` or ``mu >=0``.
        The data in `jac` must be stored such that ``jac[i - j + mu, j]``
        holds the derivative of the `i`th equation with respect to the `j`th
        state variable.  If `col_deriv` is True, the transpose of this
        `jac` must be returned.
    rtol, atol : float, optional
        The input parameters `rtol` and `atol` determine the error
        control performed by the solver.  The solver will control the
        vector, e, of estimated local errors in y, according to an
        inequality of the form ``max-norm of (e / ewt) <= 1``,
        where ewt is a vector of positive error weights computed as
        ``ewt = rtol * abs(y) + atol``.
        rtol and atol can be either vectors the same length as y or scalars.
        Defaults to 1.49012e-8.
    tcrit : ndarray, optional
        Vector of critical points (e.g. singularities) where integration
        care should be taken.
    h0 : float, (0: solver-determined), optional
        The step size to be attempted on the first step.
    hmax : float, (0: solver-determined), optional
        The maximum absolute step size allowed.
    hmin : float, (0: solver-determined), optional
        The minimum absolute step size allowed.
    ixpr : bool, optional
        Whether to generate extra printing at method switches.
    mxstep : int, (0: solver-determined), optional
        Maximum number of (internally defined) steps allowed for each
        integration point in t.
    mxhnil : int, (0: solver-determined), optional
        Maximum number of messages printed.
    mxordn : int, (0: solver-determined), optional
        Maximum order to be allowed for the non-stiff (Adams) method.
    mxords : int, (0: solver-determined), optional
        Maximum order to be allowed for the stiff (BDF) method.
    
    See Also
    --------
    ode : a more object-oriented integrator based on VODE.
    quad : for finding the area under a curve.
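
Before moving on to the Lorenz system, here is a sketch of ours (not from the book) solving the same initial value problem y' = -20y, y(0) = 1 with a single call to odeint. Note that odeint expects the right-hand side as func(y, t), with the arguments in the opposite order to those used by the ode class:

import numpy
from scipy.integrate import odeint

t = numpy.linspace(0, 1, 101)            # the same 0.01 time grid as before
y = odeint(lambda y, t: -20*y, 1.0, t)   # note the (y, t) argument order
print(numpy.abs(y[:, 0] - numpy.exp(-20*t)).max())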

Lorenz Attractors

The meaning of the following equations is fully explained on the corresponding section for this chapter in the book

$$ \boxed{ \left\{ \begin{align} \frac{dx}{dt} & = \sigma(y-x) \\ \frac{dy}{dt} & = r x - y - x z \\ \frac{dz}{dt} & = x y - b z \end{align} \right. }$$


In [102]:
from numpy import linspace
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D   # registers the '3d' projection
sigma=10.0
b=8.0/3.0
r=28.0
f = lambda x,t: [sigma*(x[1]-x[0]),
                 r*x[0]-x[1]-x[0]*x[2],
                 x[0]*x[1]-b*x[2]]

t=linspace(0,20,2000); y0=[5.0,5.0,5.0]
solution=odeint(f,y0,t)
X=solution[:,0]; Y=solution[:,1]; Z=solution[:,2]

plt.gca(projection='3d'); plt.plot(X,Y,Z)
plt.show()



In [103]:
plt.rcParams['figure.figsize'] = (15.0, 5.0)
plt.subplot(121); plt.plot(t,Z)
plt.subplot(122); plt.plot(Y,Z)
plt.show()
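
As a final sketch of ours (not from the book), the full_output option of odeint returns the diagnostic dictionary documented in the help text above. Reusing f, y0, and t from the Lorenz cell, one can check how many internal steps lsoda took and which method it selected:

solution, info = odeint(f, y0, t, full_output=True)
print(info['nst'][-1])    # cumulative number of internal steps taken
print(info['mused'][:5])  # 1 = Adams (non-stiff), 2 = BDF (stiff)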


This is the end of the working code shown and thoroughly discussed in Chapter 4 of the book Learning SciPy for Numerical and Scientific Computing

Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com).