In order to numerically evaluate a derivative $y'(x)=dy/dx$ at a point $x_0$, we approximate it using finite differences: $$\begin{eqnarray} dx \approx \Delta x &=& x_1-x_0, \\ dy \approx \Delta y &=& y_1-y_0 = y(x_1)-y(x_0) = y(x_0+\Delta x)-y(x_0). \end{eqnarray}$$
We then rewrite the derivative in terms of these discrete differences as: $$\frac{dy}{dx} \approx \frac{\Delta y}{\Delta x}.$$
Let's look at the accuracy of this approximation in terms of the interval $\Delta x$. In our first example we will evaluate the derivative of $y=x^2$ at $x=1$.
In [5]:
dx = 1.
x = 1.
while dx > 1.e-10:
    dy = (x+dx)*(x+dx) - x*x     # forward difference of y = x^2
    d = dy / dx                  # finite-difference estimate of y'(1) = 2
    print("%6.0e %20.16f %20.16f" % (dx, d, d-2.))
    dx = dx / 10.
Why does the sequence not converge? This is due to round-off errors in the representation of floating-point numbers. To see this, we can simply type:
In [123]:
((1.+0.0001)*(1+0.0001)-1)
Out[123]:
Let's try using powers of 1/2 instead, which are exactly representable in binary floating point:
In [10]:
dx = 1.
x = 1.
while dx > 1.e-10:
    dy = (x+dx)*(x+dx) - x*x
    d = dy / dx
    print("%6.0e %20.16f %20.16f" % (dx, d, d-2.))
    dx = dx / 2.
In addition, one could consider the midpoint (central) difference, defined as: $$ dy \approx \Delta y = y\left(x_0+\frac{\Delta x}{2}\right)-y\left(x_0-\frac{\Delta x}{2}\right),$$ whose error decreases as $O(\Delta x^2)$, compared to $O(\Delta x)$ for the forward difference.
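As a quick check (this cell is not in the original notebook), for $y=x^2$ the midpoint difference has no truncation error at all, so any deviation from 2 in the last column below is pure round-off (which still grows as $\Delta x$ shrinks, because of cancellation in the subtraction):
In [ ]:
x = 1.
dx = 1.
while dx > 1.e-10:
    d = ((x + dx/2.)**2 - (x - dx/2.)**2) / dx   # midpoint difference of y = x^2
    print("%6.0e %20.16f %20.16f" % (dx, d, d - 2.))
    dx = dx / 10.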
For a more complicated function we need to import it from the math module. For instance, let's calculate the derivative of $\sin(x)$ at $x=\pi/4$, including both the forward and midpoint differences.
In [13]:
from math import sin, sqrt, pi

dx = 1.
while dx > 1.e-10:
    x = pi/4.
    d1 = sin(x+dx) - sin(x)              # forward
    d2 = sin(x+dx*0.5) - sin(x-dx*0.5)   # midpoint
    d1 = d1 / dx
    d2 = d2 / dx
    print("%6.0e %20.16f %20.16f %20.16f %20.16f" % (dx, d1, d1-sqrt(2.)/2., d2, d2-sqrt(2.)/2.))
    dx = dx / 2.
A more in-depth discussion about round-off errors in numerical differentiation can be found here.
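The tables above show the error shrinking as $\Delta x$ decreases and then growing again once round-off takes over. As a rough rule of thumb (not part of the original text), the sweet spot for the forward difference in double precision lies near $\sqrt{\varepsilon_{\mathrm{machine}}}\approx 10^{-8}$, which matches the best rows above:
In [ ]:
import sys
from math import sqrt

eps = sys.float_info.epsilon   # machine epsilon for double precision, ~2.2e-16
print(eps, sqrt(eps))          # sqrt(eps) ~ 1.5e-8, close to the dx with the smallest error above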
In [9]:
%matplotlib inline
import numpy as np
from matplotlib import pyplot
y = lambda x: x*x
x1 = np.arange(0,10,1)
x2 = np.arange(0,10,0.1)
y1 = np.gradient(y(x1), 1.)
print(y1)
pyplot.plot(x1,np.gradient(y(x1),1.),'r--o');
pyplot.plot(x1[:x1.size-1],np.diff(y(x1))/np.diff(x1),'b--x');
Notice above that gradient() uses central differences at the interior points and one-sided (forward/backward) differences at the two ends.
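As a quick illustration (this cell is not in the original notebook), the first and last entries returned by gradient() agree with one-sided differences, while an interior entry agrees with the central difference:
In [ ]:
import numpy as np

y = lambda x: x*x
x1 = np.arange(0, 10, 1.)
g = np.gradient(y(x1), 1.)

print(g[0],  y(x1[1])  - y(x1[0]))      # first point: forward difference
print(g[-1], y(x1[-1]) - y(x1[-2]))     # last point: backward difference
print(g[5], (y(x1[6]) - y(x1[4])) / 2.) # interior point: central difference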
In [39]:
pyplot.plot(x2,np.gradient(y(x2),0.1),'b--o');
More discussion about numerical differentiation, including higher-order methods with error extrapolation, can be found here.
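As a sketch of what error extrapolation looks like (this cell is not in the original notebook, and the function names are ours), a single Richardson step combines central-difference estimates at $\Delta x$ and $\Delta x/2$ to cancel the leading $O(\Delta x^2)$ error term:
In [ ]:
from math import sin, cos, pi

def central(f, x, h):
    # second-order central difference
    return (f(x + h) - f(x - h)) / (2.*h)

def richardson(f, x, h):
    # combine estimates at h and h/2 to cancel the O(h^2) term -> O(h^4) accuracy
    return (4.*central(f, x, h/2.) - central(f, x, h)) / 3.

x, h = pi/4., 0.1
print(central(sin, x, h) - cos(x))     # error ~ 1e-3
print(richardson(sin, x, h) - cos(x))  # error ~ 1e-7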
The module scipy also includes methods to accurately calculate derivatives:
In [91]:
from scipy.misc import derivative

y = lambda x: x**2
dx = 1.
x = 1.
while dx > 1.e-10:
    d = derivative(y, x, dx, n=1, order=3)   # 3-point central difference
    print("%6.0e %20.16f %20.16f" % (dx, d, d-2.))
    dx = dx / 10.
One way to mitigate these round-off errors is to use the decimal package, which provides decimal floating-point arithmetic with user-settable precision:
In [126]:
from decimal import Decimal

dx = Decimal("1.")
while dx >= Decimal("1.e-10"):
    x = Decimal("1.")
    dy = (x+dx)*(x+dx) - x*x
    d = dy / dx
    print("%6.0e %20.16f %20.16f" % (dx, d, d - Decimal("2.")))
    dx = dx / Decimal("10.")
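A follow-up that is not in the original notebook: the working precision of Decimal defaults to 28 significant digits and can be changed through getcontext(), which controls how many digits survive the subtraction:
In [ ]:
from decimal import Decimal, getcontext

getcontext().prec = 50           # work with 50 significant digits instead of the default 28
x, dx = Decimal("1"), Decimal("1e-10")
dy = (x + dx)*(x + dx) - x*x
print(dy / dx)                   # 2.0000000001, i.e. 2 + dx, with no loss of digits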