2. Integrals

If we have a function $f(x)$ which is well-defined for some $a \le x \le b$, its integral over that interval is defined as

$$\int_a^b dx\; f(x) \;=\; \lim_{N \rightarrow \infty} \, \sum_{n=0}^{N-1} \Delta x\; f(x_n) \;\;\;\mathrm{where}\;\; x_n = a + n\Delta x, \;\; \Delta x \equiv \left(\frac{b-a}{N}\right).$$

This is called a definite integral, and represents the area under the graph of $f(x)$ in the region between $x=a$ and $x=b$, as shown in the figure below. The function $f(x)$ is called the integrand, and the two points $a$ and $b$ are called the bounds of the integral. The interval between the two bounds is divided into $N$ segments, of length $(b-a)/N$ each. Each term in the sum represents the area of a rectangle, and as $N\rightarrow \infty$, the sum converges to the area under the curve.
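
To make the definition concrete, here is a minimal sketch (our own illustrative example, not part of the course material) that evaluates the above sum for $f(x) = x^2$ on the interval $[0, 1]$, where the exact area is $1/3$. The helper name riemann_sum is our own choice:

In [ ]:
from numpy import arange

def riemann_sum(f, a, b, N):
    dx = (b - a) / N                   # Width of each segment.
    x = a + dx * arange(N)             # x_n = a + n*dx, for n = 0, ..., N-1.
    return (dx * f(x)).sum()           # Total area of the N rectangles.

for N in [10, 100, 1000, 10000]:
    print(N, riemann_sum(lambda x: x**2, 0.0, 1.0, N))

# The printed values approach the exact area 1/3 as N increases.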

A multiple integral involves integration over more than one variable. For instance, when we have a function $f(x_1,x_2)$ that depends on two independent variables, $x_1$ and $x_2$, we can perform a double integral by integrating over one variable first, then the other variable:

$$\int_{a_1}^{b_1} dx_1 \int_{a_2}^{b_2} dx_2 \; f(x_1, x_2) \equiv \int_{a_1}^{b_1} dx_1 F(x_1)\quad\text{where}\;\;F(x_1) \equiv \int_{a_2}^{b_2} dx_2 \; f(x_1, x_2).$$
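
A double integral can be evaluated numerically in exactly this nested fashion. Below is a sketch (our own example, using the scipy.integrate.quad function introduced in the numerical integration section later on), with the arbitrarily chosen integrand $f(x_1, x_2) = x_1 x_2^2$ over $0 \le x_1 \le 1$ and $0 \le x_2 \le 2$:

In [ ]:
from scipy.integrate import quad

def f(x1, x2):                         # The integrand (an arbitrary example).
    return x1 * x2**2

def F(x1):                             # Inner integral over x2, at fixed x1.
    return quad(lambda x2: f(x1, x2), 0.0, 2.0)[0]

I, error = quad(F, 0.0, 1.0)           # Outer integral over x1.
print(I)                               # Exact value: (1/2)*(8/3) = 4/3 = 1.3333...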

Basic properties of definite integrals

The value of a definite integral depends only on the integrand and the two integration bounds. The variable which is integrated over is a dummy variable, which means that changing its symbol does not affect the value of the overall expression:

$$\int_a^b dx\; f(x) = \int_a^b dy\; f(y).$$

Since the value of the integral does not depend on the dummy variable, it is nonsensical to write something like

$$\frac{d}{dx}\; \left[\int_a^b dx\; f(x)\right]. \;\;\;(\text{Nonsense expression}!)$$

Since an integral is defined as the limiting form of a sum, it can be algebraically manipulated in the same way as a summation expression. For instance, an integral of a linear combination is equal to a linear combination of two integrals with the same bounds:

$$\int_a^b dx \;\Big[c_1 \,f_1(x) + c_2\, f_2(x)\Big] = c_1 \int_{a}^{b} dx \; f_1(x)\;\, +\;\, c_2 \int_{a}^{b} dx\; f_2(x).$$

This is analogous to how the summation of a linear combination is equal to the linear combination of separate summations:

$$\sum_{n = p}^{q} \Big[ c_1 A_n \, + \, c_2 B_n\Big] = c_1 \sum_{n = p}^{q} A_n \, + \, c_2 \sum_{n = p}^{q} B_n.$$
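
As a quick sanity check, the linearity property can be verified numerically for any particular choice of functions and coefficients. Here is a sketch (our own example, using scipy.integrate.quad, introduced in the numerical integration section below) with $f_1 = \sin$, $f_2 = \exp$, $c_1 = 2$, and $c_2 = -3$:

In [ ]:
from scipy.integrate import quad
from numpy import sin, exp

a, b = 0.0, 1.0
c1, c2 = 2.0, -3.0

lhs = quad(lambda x: c1*sin(x) + c2*exp(x), a, b)[0]   # Integral of the linear combination.
rhs = c1*quad(sin, a, b)[0] + c2*quad(exp, a, b)[0]    # Linear combination of the integrals.
print(lhs, rhs)             # The two values should agree to within numerical error.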

For a similar reason, multiple integrals can be manipulated like multiple summations. If we have a double integral where the integrals have independent bounds, we can swap the order of the integrals:

$$\int_{a_1}^{b_1} dx_1 \int_{a_2}^{b_2} dx_2 \;\; f(x_1, x_2) = \int_{a_2}^{b_2} dx_2 \int_{a_1}^{b_1} dx_1 \;\; f(x_1, x_2).$$

This is analogous to how we can swap the order of two independent summations. Note, however, that this manipulation is invalid if the integration bounds are not independent. For instance, if the upper or lower bound of the inner integral depends on the integration variable of the outer integral, we can't swap the two integrals:

$$\int_{a_1}^{b_1} dx_1 \int_{a_1}^{x_1} dx_2 \;\; f(x_1, x_2) \ne \int_{a_1}^{x_1} dx_2 \int_{a_1}^{b_1} dx_1 \;\; f(x_1, x_2).\;\; (\text{Nonsense expression}!)$$

(Note that the expression on the right is nonsensical: $x_1$ is meant to be a dummy variable, yet it exists outside of any integration sign.)
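
For integrals with independent bounds, the validity of swapping the order of integration is easy to check numerically. The following sketch (our own example, again using scipy.integrate.quad) uses the arbitrarily chosen integrand $f(x_1, x_2) = e^{x_1 x_2}$:

In [ ]:
from scipy.integrate import quad
from numpy import exp

def f(x1, x2):                         # The integrand (an arbitrary example).
    return exp(x1 * x2)

a1, b1 = 0.0, 1.0                      # Bounds for x1.
a2, b2 = 0.0, 2.0                      # Bounds for x2 (independent of x1).

I_12 = quad(lambda x1: quad(lambda x2: f(x1, x2), a2, b2)[0], a1, b1)[0]  # x2 first, then x1.
I_21 = quad(lambda x2: quad(lambda x1: f(x1, x2), a1, b1)[0], a2, b2)[0]  # x1 first, then x2.
print(I_12, I_21)                      # The two values should agree to within numerical error.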

Numerical integration

In this course, you will learn various methods for doing integrals analytically. It is often helpful to check your work by redoing the integrals numerically. Python provides two convenient ways to do numerical integrals.

The first method is the scipy.integrate.quad function. To use this, you have to define the integrand as a Python function, then specify the bounds of the integral. The scipy.integrate.quad function will then calculate the definite integral, automatically choosing an appropriate discretization that gives an acceptable numerical error.

The following code shows an example of using scipy.integrate.quad to calculate the numerical integral

$$\int_{1}^2 x^2 dx = \left[\frac{x^3}{3}\right]_1^2 = \frac{7}{3} = 2.3333...$$

In [1]:
from scipy.integrate import quad

def f(x):                # The function to be integrated.
    return x**2

I, error = quad(f, 1, 2)
print(I)


2.3333333333333335

The second method is to use numpy.trapz, which evaluates the area under the curve using a trapezium rule. To use this, discretize the $x$ variable and calculate the integrand at those $x$ values. Here is an example:


In [1]:
from numpy import linspace, trapz

x = linspace(1, 2, 1000) # 1000 discretization points between 1 and 2
y = x**2                 # Array containing the values of y at those points
I = trapz(y, x)          # Area under curve. Note the sequence of inputs: y FIRST, then x.
print(I)


2.333333500333834

Integrals as antiderivatives

Since the value of a definite integral depends on the values of the upper and lower bounds, we can ask what happens to the value of the definite integral when either bound is varied. Using the definition of the derivative, we can show that

$$\begin{aligned} \frac{d}{db} \left[\int_a^b dx\; f(x)\right] &= f(b), \\ \frac{d}{da} \left[\int_a^b dx\; f(x)\right] &= -f(a).\end{aligned}$$

To prove the first equation, observe that increasing the upper bound from $b$ to $b + \delta b$ increases the area under the curve by $f(b) \delta b$ (to lowest order in $\delta b$). Hence, the definite integral's rate of change with $b$ is $f(b)$. Likewise, increasing the lower bound from $a$ to $a + \delta a$ decreases the area under the curve by $f(a)\, \delta a$, leading to a rate of change of $-f(a)$.
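
These relations are easy to check numerically. The sketch below (our own example) compares a finite-difference estimate of $d/db \left[\int_a^b dx\, f(x)\right]$ against $f(b)$, for the arbitrarily chosen $f(x) = \cos(x)$, $a = 0$, and $b = 1$:

In [ ]:
from scipy.integrate import quad
from numpy import cos

a = 0.0

def I(b):                              # The definite integral of cos(x) from a to b.
    return quad(cos, a, b)[0]

b, db = 1.0, 1e-6
print((I(b + db) - I(b)) / db)         # Finite-difference estimate of dI/db.
print(cos(b))                          # Should be close to f(b) = cos(1) = 0.5403...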

From the above result, we define the concept of an indefinite integral, or antiderivative, as the inverse of a derivative operation:

$$\int^x dx' f(x') \equiv F(x) \;\;\mathrm{such}\;\mathrm{that}\;\; \frac{d}{dx}F(x) = f(x).$$

Since differentiation is not a one-to-one operation (two different functions can have the same derivative), an antiderivative does not have a unique, well-specified value. Rather, its value is only defined up to an additive constant, called an integration constant. A definite integral, by contrast, always has a well-defined value.

Finding antiderivatives is much harder than differentiation. Once you know how to differentiate a few special functions, differentiating some combination of those functions usually involves a straightforward (if tedious) application of composition rules. By contrast, there is no general systematic procedure for symbolic integration. Integration often requires creative steps, like guessing a solution and checking if its derivative yields the desired integrand.
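
For routine cases, a computer algebra system can often find antiderivatives symbolically. Here is a minimal sketch using the sympy library (which is not otherwise used in these notes); note that sympy omits the integration constant:

In [ ]:
import sympy

x = sympy.symbols('x')

print(sympy.integrate(x * sympy.cos(x), x))   # Prints x*sin(x) + cos(x).
print(sympy.integrate(1 / (x**2 + 1), x))     # Prints atan(x).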

Some common techniques are summarized in the following sections; others will be introduced later in this course.

Integration by parts

If the integrand consists of two factors, and you know the antiderivative of one of the factors, you can integrate by parts by shifting the derivative onto the other factor:

$$\int_a^b dx \; f(x) \, \frac{dg}{dx} \;=\; \Big[\,f(x)\, g(x)\,\Big]_a^b - \int_a^b dx\; \frac{df}{dx}\, g(x). $$

The first term on the right-hand side is the constant $f(b)\,g(b) - f(a)\,g(a)$. Hopefully, the integral in the second term is easier to solve than the original integral.

Judicious use of integration by parts is a key step for solving many integrals. For example, consider

$$\int_a^b dx\; x \, e^{\gamma x}.$$

The integrand consists of two factors, $x$ and $e^{\gamma x}$; we happen to know the antiderivative of both factors. Integrating by parts lets us replace one of these factors with its antiderivative, while applying an additional derivative on the other factor. The smart thing to do is to apply the derivative to the $x$ factor, and the antiderivative to the $e^{\gamma x}$ factor:

$$\begin{align}\int_a^b dx\; x\, e^{\gamma x} \;&=\; \left[\;x\, \frac{e^{\gamma x}}{\gamma}\, \right]_a^b - \int_a^b dx\; \frac{e^{\gamma x}}{\gamma} \\&=\; \left[\;x\, \frac{e^{\gamma x}}{\gamma} - \frac{e^{\gamma x}}{\gamma^2} \,\right]_a^b.\end{align}$$

Whenever we finish doing an integral, it is good practice to double-check the result by making sure the dimensions match up. Note that $\gamma$ has units of inverse $x$, so the integral on the left-hand side has units of $x^2$. The solution on the right hand side has two terms, with units $x/\gamma$ and $1/\gamma^2$; both of these are equivalent to units of $x^2$, which is what we need!


In [2]:
## Let's double-check the above integral numerically:
from scipy.integrate import quad
from numpy import exp

def f(x, gamma): return x*exp(gamma*x)

a, b, g = 1.0, 2.0, 0.1

I, error = quad(f, a, b, (g,)) # Note that (g,) is a tuple of extra arguments passed on to f (here containing gamma)
print(I)
print(b*exp(g*b)/g - exp(g*b)/g**2 - a*exp(g*a)/g + exp(g*a)/g**2)


1.7531619739946995
1.7531619739947075

Change of variables

Another useful technique for solving integrals is to change variables. Consider the integral

$$\int_0^\infty \frac{dx}{x^2 + 1}.$$

We can solve this by making a change of variables $x = \tan(u)$. This involves (i) replacing all occurrences of $x$ in the integrand with $\tan(u)$, (ii) replacing the integration limits with the corresponding values of $u$ (here, $x = 0 \Rightarrow u = 0$ and $x \rightarrow \infty \Rightarrow u \rightarrow \pi/2$), and (iii) replacing $dx$ with $(dx/du) \, du = 1/[\cos(u)]^2 \, du$:

$$\begin{align}\int_0^\infty \frac{dx}{x^2 + 1} &= \int_0^{\pi/2} \frac{1}{[\tan(u)]^2 + 1} \cdot \frac{1}{[\cos(u)]^2} \; du \\ &= \int_0^{\pi/2} \frac{1}{[\sin(u)]^2 + [\cos(u)]^2} \; du.\end{align}$$

By the identity $[\sin(u)]^2 + [\cos(u)]^2 = 1$, the integrand reduces to 1, so

$$\int_0^\infty \frac{dx}{x^2 + 1} = \int_0^{\pi/2} du = \frac{\pi}{2}.$$

Clearly, this technique often requires some cleverness and/or trial-and-error in choosing the right change of variables.
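
As a sanity check, the original integral can also be evaluated numerically. Here is a sketch of our own; note that scipy's quad accepts an infinite limit, which it handles by internally mapping the infinite range onto a finite interval:

In [ ]:
from scipy.integrate import quad
from numpy import inf, pi

I, error = quad(lambda x: 1.0 / (x**2 + 1.0), 0.0, inf)
print(I, pi/2)                         # Both should be approximately 1.5707963...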

The Gaussian integral

Here's a famous integral:

$$\int_{-\infty}^\infty \; e^{-\gamma x^2} \; dx.$$

The integrand is called a Gaussian, or bell curve, and is plotted below. The larger the value of $\gamma$, the more narrowly-peaked the curve.


In [4]:
## Visualize the function f(x) = exp(-gamma x^2).
%matplotlib inline
from ipywidgets import interact, FloatSlider
from numpy import linspace, exp
import matplotlib.pyplot as plt

def plot_exp(gamma):
    plt.figure(figsize=(10,4))
    lim, nx = 5.0, 200
    x = linspace(-lim, lim, nx)
    plt.plot(x, exp(-gamma*x*x))
    plt.title("Use slider to adjust the value of 'gamma'")
    plt.xlabel('x');    plt.xlim(-lim, lim)
    plt.ylabel('f(x)'); plt.ylim(0., 1.1)
    plt.show()

interact(plot_exp, gamma=FloatSlider(min=0.1, max=4.0, step=0.1, value=1.0));


The integral was solved by Gauss in a brilliant way. Let $I(\gamma)$ denote the value of the integral. Then $I^2$ is just two independent copies of the integral, multiplied together:

$$I^2(\gamma) = \left[\int_{-\infty}^\infty dx\; e^{-\gamma x^2}\right] \times \left[\int_{-\infty}^\infty dy\; e^{-\gamma y^2}\right].$$

Note that in the second copy of the integral, we have changed the "dummy" label $x$ (the integration variable) into $y$, to avoid ambiguity. The product of the two integrals can then be written as a single two-dimensional integral, taken over the entire 2D plane:

$$I^2(\gamma) = \int_{-\infty}^\infty dx\, \int_{-\infty}^\infty dy \; e^{-\gamma (x^2+y^2)}.$$

Next, change from Cartesian to polar coordinates, using $x^2 + y^2 = r^2$ and the area element $dx\,dy = r\,dr\,d\phi$:

$$I^2(\gamma) = \int_{0}^\infty dr\, r \int_{0}^{2\pi} d\phi \; e^{-\gamma r^2} = \left[ \int_{0}^\infty dr\, r \, e^{-\gamma r^2}\right] \times \left[\int_{0}^{2\pi} d\phi \right] = \frac{1}{2\gamma} \cdot 2\pi.$$

By taking the square root, we arrive at the result

$$I(\gamma) = \int_{-\infty}^\infty dx \; e^{-\gamma x^2} = \sqrt{\frac{\pi}{\gamma}}.$$
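
This result can be cross-checked symbolically. The sketch below (our own, using the sympy library) asks for the same integral with $\gamma$ declared positive, and should print an expression equivalent to $\sqrt{\pi/\gamma}$:

In [ ]:
import sympy

x, gamma = sympy.symbols('x gamma', positive=True)
I = sympy.integrate(sympy.exp(-gamma * x**2), (x, -sympy.oo, sympy.oo))
print(I)                               # An expression equivalent to sqrt(pi/gamma).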

Differentiating under the integral sign

In the previous section, we noted that if an integrand contains a parameter (denoted $\gamma$) which is independent of the integration variable (denoted $x$), then the definite integral can itself be regarded as a function of $\gamma$. It can then be shown that, provided the bounds $a$ and $b$ do not depend on $\gamma$, the derivative of the definite integral with respect to $\gamma$ is equal to the integral of the partial derivative of the integrand:

$$\frac{d}{d\gamma} \, \left[\int_a^b dx\; f(x,\gamma)\right] = \int_a^b dx \; \frac{\partial f}{\partial \gamma}(x,\gamma).$$

This operation, called differentiating under the integral sign, was first used by Leibniz, one of the inventors of calculus. It can be applied as a technique for solving integrals, popularized by Richard Feynman in his book Surely You're Joking, Mr. Feynman!.
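
Before describing the method, here is a quick numerical check of the above identity (our own sketch), using the arbitrarily chosen integrand $f(x, \gamma) = e^{-\gamma x^2}$ on the interval $[0, 1]$:

In [ ]:
from scipy.integrate import quad
from numpy import exp

a, b = 0.0, 1.0

def I(gamma):                          # The definite integral, as a function of gamma.
    return quad(lambda x: exp(-gamma * x**2), a, b)[0]

gamma, dgamma = 0.5, 1e-6
lhs = (I(gamma + dgamma) - I(gamma)) / dgamma              # Finite-difference estimate of dI/dgamma.
rhs = quad(lambda x: -x**2 * exp(-gamma * x**2), a, b)[0]  # Integral of the partial derivative.
print(lhs, rhs)                        # The two values should agree to several decimal places.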

Here is the method. Given a definite integral $I_0$,

  1. Come up with a way to generalize the integrand, by introducing a parameter $\gamma$, such that the generalized integral becomes a function $I(\gamma)$ which reduces to the original integral $I_0$ for a particular parameter value, say $\gamma = \gamma_0$.
  2. Differentiate under the integral sign. If you have chosen the generalization right, the resulting integral will be easier to solve, so...
  3. Solve the integral to obtain $I'(\gamma)$.
  4. Integrate $I'(\gamma)$ over $\gamma$ to obtain $I(\gamma)$, and evaluate it at $\gamma = \gamma_0$ to obtain the desired integral $I_0$.

An example is helpful for demonstrating this procedure. Consider the integral

$$\int_{0}^\infty dx \; \frac{\sin(x)}{x}.$$

First, (i) we generalize the integral as follows (we'll soon see why):

$$I(\gamma) = \int_{0}^\infty dx \; \frac{\sin(x)}{x}\, e^{-\gamma x}.$$

The desired integral is $I(0)$. Next, (ii) differentiating under the integral gives

$$I'(\gamma) = - \int_{0}^\infty dx \; \sin(x)\, e^{-\gamma x}.$$

Taking the partial derivative of the integrand with respect to $\gamma$ brought down a factor of $-x$, cancelling out the troublesome denominator. Now, (iii) we solve the new integral, which can be done by integrating by parts twice:

$$\begin{aligned}I'(\gamma) &= \left[\cos(x)\,e^{-\gamma x}\right]_0^\infty + \gamma \int_{0}^\infty dx \; \cos(x)\, e^{-\gamma x} \\ &= -1 + \gamma \left[\sin(x)\,e^{-\gamma x}\right]_0^\infty + \gamma^2 \int_{0}^\infty dx \; \sin(x)\, e^{-\gamma x} \\&= -1 - \gamma^2 I'(\gamma).\end{aligned}$$

Hence,

$$I'(\gamma) = - \frac{1}{1+\gamma^2}.$$
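
Before moving on, we can check this intermediate result numerically (our own sketch, for one arbitrarily chosen value of $\gamma$):

In [ ]:
from scipy.integrate import quad
from numpy import sin, exp, inf

gamma = 1.0
I_prime, error = quad(lambda x: -sin(x) * exp(-gamma * x), 0.0, inf)
print(I_prime, -1.0 / (1.0 + gamma**2))    # Both should be approximately -0.5.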

Finally, (iv) we need to integrate this over $\gamma$. But we already saw how to do this particular integral in a previous section, and the result is

$$I(\gamma) = A - \tan^{-1}(\gamma),$$

where $A$ is a constant of integration. As $\gamma \rightarrow \infty$, the factor $e^{-\gamma x}$ suppresses the integrand, so the integral must vanish; this implies that $A = \tan^{-1}(+\infty) = \pi/2$. Finally, we arrive at the result

$$\int_{0}^\infty dx \; \frac{\sin(x)}{x} = I(0) = \frac{\pi}{2}.$$

When we discuss contour integration, we will see a more straightforward way to do this integral.

Exercises

  1. Consider the step function $$\Theta(x) = \left\{\begin{array}{ll} 1, &\;\;\;\textrm{for} \; x \ge 0\\ 0,&\;\;\; \textrm{otherwise.}\end{array}\right.$$ Write down an expression for the antiderivative of $\Theta(x)$, and sketch its graph.
  2. Show that
    $$\int_0^{2\pi} dx\, [\sin(x)]^2 = \int_0^{2\pi} dx\, [\cos(x)]^2 = \pi.$$
  3. Calculate the following definite integrals:
    1. $\displaystyle\int_{0}^\pi dx\; x^2 \sin(2x)$
    2. $\displaystyle\int_{1}^\alpha dx\; x \ln(x)$
    3. $\displaystyle\int_0^\infty dx\;e^{-\gamma x} \, \cos(x)$
    4. $\displaystyle\int_0^\infty dx\;e^{-\gamma x} \, x \cos(x)$
    5. $\displaystyle\int_{-\infty}^\infty dx\;e^{-\gamma |x|}$
    6. $\displaystyle \int_{-\infty}^\infty dx \;e^{-|x+1|} \sin(x)$
  4. By differentiating under the integral, solve $$\int_0^1 dx\; \frac{x^2-1}{\ln(x)}.$$ Hint: replace $x^2$ in the numerator with $x^\gamma$. (Solution)
  5. Let $f(x,y)$ be a function that depends on two inputs $x$ and $y$, and define $$I(x) = \int_0^x f(x,y) dy.$$ Using the definition of the derivative, prove that $$\frac{dI}{dx} = f(x,y) + \int_0^x \frac{\partial f}{\partial x}(x,y) \;dy.$$
  6. Consider the ordinary differential equation $$\frac{dy}{dt} = - \gamma y(t) + f(t),$$ where $\gamma > 0$ and $f(t)$ is some function of $t$. The solution can be written in the form $$y(t) = y(0) + \int_0^t dt' \, e^{-\gamma(t-t')} \, g(t').$$ Find the appropriate function $g$, in terms of $f$ and $y(0)$. (Solution)
  7. Use numerical integration to check the value of the Gaussian integral derived analytically above. Note that it's impossible to perform a numerical integral over an infinite range, so you must pick a range that is appropriately large.
  8. Use numerical integration to check the value of the integral $$\int_{0}^\infty dx \, \sin(x)/x = \pi/2$$ derived above. Note that the integrand is divergent at $x=0$, so you should adjust the lower bound of the numerical integral to some positive but appropriately small value.