Monte Carlo Methods: Lab 1

Take a look at Chapter 10 of Newman's Computational Physics with Python, from which much of this material is drawn.


In [1]:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)



Integration

If we have an ugly function, say

$$ \begin{equation} f(x) = \sin^2 \left(\frac{1}{x (2-x)}\right), \end{equation} $$

then it can be very difficult to integrate. To see this, just do a quick plot.


In [2]:
from __future__ import division  # must be the first statement in the cell; a no-op on Python 3
%matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)

In [3]:
def f(x):
    return numpy.sin(1.0/(x*(2.0-x)))**2

In [4]:
x = numpy.linspace(0.0, 2.0, 10000)[1:-1]  # drop the endpoints, where f(x) is undefined
pyplot.plot(x, f(x))
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\sin^2([x(2-x)]^{-1})$");



We see that the function oscillates infinitely often as $x \to 0$ and $x \to 2$, so integrating it with standard methods is going to be very inaccurate.

However, we note that the function is bounded, so the integral (given by the shaded area below) must itself be bounded: it is less than the total area of the plot window, which is $2$ in this case.


In [5]:
pyplot.fill_between(x, f(x))
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\sin^2([x(2-x)]^{-1})$");



So if we scatter (using a uniform random distribution) a large number of points within this box, the fraction of them falling below the curve is approximately the integral we want to compute divided by the area of the box:

$$ \begin{equation} I = \int_a^b f(x) \, dx \quad \implies \quad I \simeq \frac{k A}{N} \end{equation} $$

where $N$ is the total number of points considered, $k$ is the number falling below the curve, and $A$ is the area of the box. We can choose the box, but we need $y \in [\min_{x \in [a, b]} (f(x)), \max_{x \in [a, b]} (f(x))] = [c, d]$, giving $A = (d-c)(b-a)$.

So let's apply this technique to the function above, where the box in $y$ is $[0,1]$.


In [6]:
def mc_integrate(f, domain_x, domain_y, N = 10000):
    """
    Monte Carlo integration function: to be completed. Result, for the given f, should be around 1.46.
    """
    import numpy.random
    
    return I

In [ ]:

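For reference, one possible completion, as a minimal sketch (the strict inequality in the comparison and the use of numpy.random.uniform are choices, not requirements):


In [ ]:
def mc_integrate(f, domain_x, domain_y, N = 10000):
    """
    Monte Carlo (hit-or-miss) integration. Result, for the given f, should be around 1.46.
    """
    import numpy.random

    # Scatter N points uniformly over the box domain_x x domain_y
    x = numpy.random.uniform(domain_x[0], domain_x[1], N)
    y = numpy.random.uniform(domain_y[0], domain_y[1], N)
    # k is the number of points falling below the curve
    k = numpy.sum(y < f(x))
    # A is the area of the box
    A = (domain_x[1] - domain_x[0]) * (domain_y[1] - domain_y[0])
    I = A * k / N
    return I

print(mc_integrate(f, (0.0, 2.0), (0.0, 1.0)))

Running this a few times should give estimates scattering around 1.46.
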
Accuracy

To check the accuracy of the method, let's apply this to calculate $\pi$.

The area of a circle of radius $2$ is $4\pi$, so the area of the quarter circle in $x, y \in [0, 2]$ is just $\pi$:

$$ \begin{equation} \pi = \int_0^2 \sqrt{4 - x^2} \, dx. \end{equation} $$

Check the convergence of the Monte Carlo integration with $N$. (I suggest using $N = 100 \times 2^i$ for $i = 0, \dots, 19$; you should find the error scales roughly as $N^{-1/2}$.)


In [ ]:

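A possible solution, as a sketch (quarter_circle is just the integrand above; the largest $N$ here is about $5 \times 10^7$, so the loop may take a little while):


In [ ]:
def quarter_circle(x):
    return numpy.sqrt(4.0 - x**2)

Ns = 100 * 2**numpy.arange(20)
errors = [abs(mc_integrate(quarter_circle, (0.0, 2.0), (0.0, 2.0), N) - numpy.pi)
          for N in Ns]
pyplot.loglog(Ns, errors, 'kx', label="Monte Carlo error")
pyplot.loglog(Ns, errors[0]*(Ns/Ns[0])**(-0.5), 'b-', label=r"$\propto N^{-1/2}$")
pyplot.xlabel(r"$N$")
pyplot.ylabel("Error")
pyplot.legend();
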
Mean Value Method

Monte Carlo integration is pretty inaccurate, as seen above: it converges slowly and has poor accuracy at any $N$. An alternative is the mean value method, where we note that, by definition, the average value of $f$ over the interval $[a, b]$ is precisely the integral divided by the width of the interval.

Hence we can just choose our $N$ random points in $x$ as above, but now just compute

$$ \begin{equation} I \simeq \frac{b-a}{N} \sum_{i=1}^N f(x_i). \end{equation} $$

In [7]:
def mv_integrate(f, domain_x, N = 10000):
    """
    Mean value Monte Carlo integration: to be completed
    """
    import numpy.random
    return I

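One possible completion, as a sketch:


In [ ]:
def mv_integrate(f, domain_x, N = 10000):
    """
    Mean value Monte Carlo integration.
    """
    import numpy.random
    a, b = domain_x
    # Average f over N uniform samples, then multiply by the interval width
    x = numpy.random.uniform(a, b, N)
    I = (b - a) * numpy.mean(f(x))
    return I
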
Let's look at the accuracy of this method again applied to computing $\pi$.


In [ ]:

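A sketch of the comparison, reusing Ns, errors and quarter_circle from the convergence check above:


In [ ]:
errors_mv = [abs(mv_integrate(quarter_circle, (0.0, 2.0), N) - numpy.pi)
             for N in Ns]
pyplot.loglog(Ns, errors, 'kx', label="hit-or-miss")
pyplot.loglog(Ns, errors_mv, 'r+', label="mean value")
pyplot.xlabel(r"$N$")
pyplot.ylabel("Error")
pyplot.legend();
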
Typically the convergence rate is roughly the same, but the mean value method is expected to be better in terms of its absolute error.

Dimensionality

Compared to standard integration methods (Gauss quadrature, Simpson's rule, etc.) the convergence rate of Monte Carlo methods is very slow. However, there is one crucial advantage: as the dimension increases, the amount of calculation required is essentially unchanged, whereas for standard methods it grows geometrically with the dimension.

Try to compute the volume of an $n$-dimensional unit hypersphere, which is the set of points in $\mathbb{R}^n$ such that

$$ \begin{equation} \sum_{i=1}^n x_i^2 \le 1. \end{equation} $$

The volume of the hypersphere can be written in closed form, but it can also be rapidly computed using the Monte Carlo method above: count the $k$ points (out of $N$) that fall within the hypersphere and use the standard formula $I \simeq V k / N$, where $V$ is the volume of the enclosing box, e.g. $[-1, 1]^n$ with $V = 2^n$.


In [8]:
def mc_integrate_multid(f, domain, N = 10000):
    """
    Monte Carlo integration in arbitrary dimensions (read from the size of the domain): to be completed
    """
    
    return I

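One possible completion, as a sketch; treating domain as a sequence of (low, high) pairs, one per dimension, is an assumption here, as the stub does not fix a convention:


In [ ]:
def mc_integrate_multid(f, domain, N = 10000):
    """
    Monte Carlo integration in arbitrary dimensions (read from the size of the domain).
    domain is assumed to be a sequence of (low, high) pairs, one per dimension.
    """
    import numpy.random
    domain = numpy.asarray(domain)
    lo, hi = domain[:, 0], domain[:, 1]
    ndim = len(domain)
    # Scatter N points uniformly in the box; x has shape (N, ndim)
    x = numpy.random.uniform(lo, hi, (N, ndim))
    # Volume of the box times the mean of f over the samples
    V = numpy.prod(hi - lo)
    I = V * numpy.mean(f(x))
    return I
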
In [9]:
from scipy import special

In [10]:
def volume_hypersphere(ndim=3):
    return numpy.pi**(float(ndim)/2.0) / special.gamma(float(ndim)/2.0 + 1.0)

Now let us repeat this across multiple dimensions.

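A sketch of such a sweep, using the indicator function of the hypersphere as the integrand, so that the box volume times the mean of the indicator reduces to the $V k / N$ formula above:


In [ ]:
def inside_sphere(x):
    # Indicator of the unit hypersphere: 1.0 inside, 0.0 outside (x has shape (N, ndim))
    return (numpy.sum(x**2, axis=1) <= 1.0).astype(float)

for ndim in range(2, 11):
    domain = [(-1.0, 1.0)] * ndim   # bounding box [-1, 1]^n, volume 2^n
    estimate = mc_integrate_multid(inside_sphere, domain, N = 100000)
    exact = volume_hypersphere(ndim)
    print("n = {:2d}: estimate = {:.4f}, exact = {:.4f}, error = {:.2e}".format(
          ndim, estimate, exact, abs(estimate - exact)))
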
The errors clearly vary over a range, but the convergence remains roughly $N^{-1/2}$, independent of the dimension; with other techniques such as Gauss quadrature, the number of points required would scale geometrically with the dimension.

Importance sampling

Consider the integral (which arises, for example, in the theory of Fermi gases)

$$ \begin{equation} I = \int_0^1 \frac{x^{-1/2}}{e^x + 1} \, dx. \end{equation} $$

This has a finite value, but the integrand diverges as $x \to 0$. This causes a problem for Monte Carlo integration: a single sample close to the singularity can make a spuriously large contribution to the sum.

We can get around this by changing the points at which the integrand is sampled. Choose a weighting function $w(x)$. Then the weighted average of any function $g(x)$ is defined as

$$ \begin{equation} \langle g \rangle_w = \frac{\int_a^b w(x) g(x) \, dx}{\int_a^b w(x) \, dx}. \end{equation} $$

As our integral is

$$ \begin{equation} I = \int_a^b f(x) \, dx \end{equation} $$

we can, by setting $g(x) = f(x) / w(x)$, get

$$ \begin{equation} I = \int_a^b f(x) \, dx = \left< \frac{f(x)}{w(x)} \right>_w \int_a^b w(x) \, dx. \end{equation} $$

This gives

$$ \begin{equation} I \simeq \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{w(x_i)} \int_a^b w(x) \, dx, \end{equation} $$

where the points $x_i$ are now chosen from a non-uniform probability distribution with pdf

$$ \begin{equation} p(x) = \frac{w(x)}{\int_a^b w(x) \, dx}. \end{equation} $$

This is a generalization of the mean value method: we recover it when the weighting function $w(x) \equiv 1$. A careful choice of the weighting function can mitigate problematic regions of the integrand; e.g., in the example above we could choose $w(x) = x^{-1/2}$, giving $p(x) = x^{-1/2}/2$.

So, let's try to compute the integral above. The expected answer is around $0.84$.


In [ ]:
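One possible solution, as a sketch. To sample from $p(x) = x^{-1/2}/2$ on $[0, 1]$ we can use the inverse transform: the CDF is $P(x) = \sqrt{x}$, so $x = u^2$ with $u$ uniform on $[0, 1]$. Note also that $f(x)/w(x) = 1/(e^x + 1)$ and $\int_0^1 w(x) \, dx = 2$. (The function name fermi_integral is just illustrative.)


In [ ]:
def fermi_integral(N = 1000000):
    import numpy.random
    # Inverse transform sampling from p(x) = x**(-1/2)/2: x = u**2
    u = numpy.random.uniform(0.0, 1.0, N)
    x = u**2
    # I ~ (1/N) sum f(x_i)/w(x_i) times the integral of w over [0, 1],
    # with f/w = 1/(exp(x) + 1) and the integral of w equal to 2
    I = 2.0 * numpy.mean(1.0/(numpy.exp(x) + 1.0))
    return I

print(fermi_integral())

This should print a value close to 0.84; the $x^{-1/2}$ singularity has been absorbed into the sampling density, so no single sample dominates the sum.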