In [1]:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
In [ ]:
Numerical integration methods work differently depending on whether you have the analytic function available (in which case you can evaluate it freely at any point you please) or whether it is only sampled for you at a fixed set of points.
Create a function to integrate, and use NumPy to sample it at $N$ points. Compare the answer you get from integrating the function directly (using integrate.quad) to the integral of the sampled function (using integrate.simps).
To get a better sense of the accuracy, vary $N$, and look at how the error changes (if you plot the error vs. $N$, you can measure the convergence).
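Here's a minimal sketch of one way to do this, assuming $f(x) = \sin x$ on $[0, \pi]$ (whose exact integral is 2) as the test function; the function and the particular (odd) values of $N$ are just illustrative choices.
In [ ]:
from scipy import integrate

def f(x):
    # illustrative test function: the exact integral over [0, pi] is 2
    return np.sin(x)

a, b = 0.0, np.pi

# "exact" answer from adaptive quadrature on the analytic function
exact, _ = integrate.quad(f, a, b)

# Simpson's rule on N samples (odd N so the composite rule applies cleanly)
Ns = np.array([3, 5, 9, 17, 33, 65, 129])
errors = []
for N in Ns:
    x = np.linspace(a, b, N)
    I = integrate.simps(f(x), x)
    errors.append(abs(I - exact))

plt.loglog(Ns, errors, "o-")
plt.xlabel("N")
plt.ylabel("|error|")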
In [ ]:
For a linear system, ${\bf A x} = {\bf b}$, we can only solve uniquely for ${\bf x}$ if the determinant of the matrix ${\bf A}$ is non-zero. If the determinant is zero, then we call the matrix singular. The condition number of a matrix is a measure of how close we are to being singular. The formal definition is: \begin{equation} \mathrm{cond}({\bf A}) = \| {\bf A}\| \| {\bf A}^{-1} \| \end{equation} But we can think of it as a measure of how much ${\bf x}$ would change due to a small change in ${\bf b}$. A large condition number means that our solution for ${\bf x}$ could be inaccurate.
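For instance (a quick illustration with a made-up, nearly-singular $2\times 2$ matrix), numpy.linalg.cond() agrees with the definition above when we use the 2-norm:
In [ ]:
A = np.array([[1.0, 2.0],
              [2.0, 4.001]])   # rows nearly parallel, so A is close to singular

# condition number directly, and from the definition ||A|| ||A^{-1}|| (2-norm)
print(np.linalg.cond(A))
print(np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))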
A Hilbert matrix has $H_{ij} = (i + j - 1)^{-1}$ (with $i$ and $j$ starting from 1), and is known to have a large condition number. Here's a routine to generate a Hilbert matrix:
In [1]:
def hilbert(n):
    """ return a Hilbert matrix, H_ij = (i + j - 1)^{-1} """
    H = np.zeros((n, n), dtype=np.float64)
    for i in range(1, n+1):
        for j in range(1, n+1):
            H[i-1, j-1] = 1.0/(i + j - 1.0)
    return H
Let's solve ${\bf Hx} = {\bf b}$. Create a linear system by picking an ${\bf x}$ and generating a ${\bf b}$ by multiplying by the matrix ${\bf H}$. Then use the scipy.linalg.solve() function to recover ${\bf x}$. Compute the error in ${\bf x}$ as a function of the size of the matrix.
You won't need a large matrix; $n \sim 13$ or so will already start showing big errors.
You can compute the condition number with numpy.linalg.cond().
There are methods that can do a better job with nearly-singular matrices. Take a look at scipy.linalg.lstsq(), for example.
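Here's a minimal sketch of the idea, assuming we pick ${\bf x}$ to be a vector of ones (any known ${\bf x}$ would do):
In [ ]:
from scipy import linalg

for n in range(2, 16):
    H = hilbert(n)
    x_true = np.ones(n)      # the x we will try to recover (an arbitrary choice)
    b = H @ x_true           # generate the right-hand side
    x = linalg.solve(H, b)
    err = np.abs(x - x_true).max()
    print("n = {:2d}: cond = {:12.4e}, max error = {:12.4e}".format(
        n, np.linalg.cond(H), err))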
In [ ]:
There is a large class of ODE integration methods available through the scipy.integrate.ode() class. Not all of them provide dense output -- most will just give you the value at the end of the integration.
The explicit dopri5 integrator will store the solution at intermediate points and allow you to access them. We'll use that here. You'll need to use the set_solout() method to register a function that takes the current integration solution and stores it.
The damped driven pendulum obeys the following equations: $$\dot{\theta} = \omega$$ $$\dot{\omega} = -q \omega - \sin \theta + b \cos(\omega_d t)$$ Here, $\theta$ is the angle of the pendulum from vertical and $\omega$ is the angular velocity. $q$ is a damping coefficient, $b$ is a forcing amplitude, and $\omega_d$ is a driving frequency.
Choose $q = 0.5$ and $\omega_d = 2/3$.
Integrate the system for different values of $b$ (start with $b = 0.9$ and increase by $0.05$), and plot the results ($\theta$ vs. $t$). Here's an RHS function to get you started:
In [4]:
def rhs(t, Y, q, omega_d, b):
    """ damped driven pendulum system derivatives.  Here, Y = (theta, omega) are
        the solution variables. """
    f = np.zeros_like(Y)
    f[0] = Y[1]
    f[1] = -q*Y[1] - np.sin(Y[0]) + b*np.cos(omega_d*t)
    return f
Note that the pendulum can flip over, giving values of $\theta$ outside of $[-\pi, \pi]$. The following function can be used to restrict it back to $[-\pi, \pi]$ for plotting.
In [5]:
def restrict_theta(theta):
    """ convert theta to be restricted to lie between -pi and pi """
    tnew = theta + np.pi
    tnew += -2.0*np.pi*np.floor(tnew/(2.0*np.pi))
    tnew -= np.pi
    return tnew
Write a function that takes an initial angle, $\theta_0$, integrates the system, and returns the solution. For the parameters that are part of the rhs() function, you'll need to use the set_f_params() method.
Some values of $b$ will show very non-periodic behavior. To see chaos, integrate two different pendula that are the same except for $\theta_0$, with only a small difference between them (like 60 degrees and 60.0001 degrees). You'll see the solutions track for a while, but then diverge. A sketch of such an integration function follows below.
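Here's a minimal sketch of such a function, assuming the pendulum starts from rest ($\omega = 0$); the function name, tmax, and nsteps values are just illustrative choices.
In [ ]:
from scipy.integrate import ode

def integrate_pendulum(theta0, b, q=0.5, omega_d=2.0/3.0, tmax=200.0):
    """ integrate the damped driven pendulum from rest at angle theta0
        (radians) and return arrays of t, theta, and omega """
    ts = []
    sols = []

    def solout(t, y):
        # called by dopri5 after every successful step -- store the solution
        ts.append(t)
        sols.append(np.array(y))

    r = ode(rhs).set_integrator("dopri5", nsteps=100000)
    r.set_solout(solout)                     # set before set_initial_value
    r.set_initial_value([theta0, 0.0], 0.0)
    r.set_f_params(q, omega_d, b)
    r.integrate(tmax)

    sols = np.array(sols)
    return np.array(ts), sols[:, 0], sols[:, 1]

# e.g., compare two nearly identical initial angles for b = 0.9
t1, theta1, _ = integrate_pendulum(np.radians(60.0), 0.9)
t2, theta2, _ = integrate_pendulum(np.radians(60.0001), 0.9)
plt.plot(t1, restrict_theta(theta1))
plt.plot(t2, restrict_theta(theta2))
plt.xlabel("t")
plt.ylabel(r"$\theta$")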
In [ ]:
We looked at fits, but not at what the errors on the fit are. Look at scipy.optimize.curve_fit(). This is a simplified wrapper on the least-squares fitting routines. It returns the covariance matrix, the diagonal of which gives the variance of each fitted parameter (its square root is the 1-sigma error).
Make up some data that follows a non-linear function (introducing some random noise), perform a fit, and find the errors on the parameters.
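Here's a minimal sketch of the idea, assuming a made-up exponential-decay model with Gaussian noise (the model, parameter values, and noise level are arbitrary choices):
In [ ]:
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # hypothetical non-linear model: exponential decay plus an offset
    return a*np.exp(-b*x) + c

# make up noisy data from known parameters
np.random.seed(12345)
a0, b0, c0 = 2.5, 1.3, 0.5
x = np.linspace(0.0, 4.0, 50)
y = model(x, a0, b0, c0) + 0.1*np.random.randn(len(x))

popt, pcov = curve_fit(model, x, y)
perr = np.sqrt(np.diag(pcov))     # 1-sigma errors on the fitted parameters

for name, p, e in zip(["a", "b", "c"], popt, perr):
    print("{} = {:8.5f} +/- {:8.5f}".format(name, p, e))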
In [ ]: