As beautiful & efficient as the Fundamental Theorem of Calculus is, anyone interested in applying the tools of the calculus to real-world problems (scientists & engineers, for example) will quickly find that additional tools are needed.
That is because, in some cases, it is impossible to find the exact value of a definite integral. In fact, many functions encountered in real-world work do not have closed-form antiderivatives; that is, their antiderivatives cannot be written as combinations of the elementary function families we have studied, such as algebraic functions, trigonometric functions, and exponential functions.
Fortunately, there is a viable alternative to using the Fundamental Theorem of Calculus to find definite integrals.
You can approximate definite integrals using simple arithmetic methods similar to Riemann sums.
Essentially, these methods provide an approximate value for the area under a curve between two endpoints (the limits of integration). They work on any integrable function, regardless of whether or not an antiderivative of the function can be found.
In many scenarios in science & engineering, the resulting approximations are close enough to fulfill project requirements. Additionally, methods exist for controlling the accuracy of/error in the resulting approximate values.
For example, these integrals cannot be evaluated with the techniques of integration we've learned, because the integrands have no elementary antiderivatives: $$\int_{0}^{1} e^{x^2}\, dx \quad \quad \quad \quad \int \sqrt{1 + x^3}\, dx$$
In these cases, you can approximate the value using techniques of Approximate Integration.
Whenever you're using an approximation technique, the issue of accuracy of the approximation arises—how accurate is the result?
A formula for error analysis has been developed for each approximation method (i.e., rule) below. It's critical to know these error formulas as well as the approximation methods themselves.
The error is the difference between the approximation and the actual value of the definite integral $\int_{a}^{b} f(x) dx$.
When ${x_i}^*$ is chosen to be the midpoint $\overline{x}_i$ of the sub-interval $[x_{i-1}, x_i]$, you have the Midpoint Approximation $M_n$. Typically, a Midpoint Approximation $M_n$ is more accurate than a Left Endpoint Approximation $L_n$ or a Right Endpoint Approximation $R_n$.
$$\int_{a}^{b} f(x)\, dx \approx M_n = \Delta x\left[ f(\overline{x}_1) + f(\overline{x}_2) + \cdots + f(\overline{x}_n) \right]$$

$$\text{where} \space \Delta x = \frac{b-a}{n}$$

$$\text{and} \space \overline{x}_i = \frac{1}{2}(x_{i-1} + x_i) = \text{midpoint of} \space [x_{i-1}, x_i]$$

Suppose $f$ is defined and integrable on $[a,b]$. The Trapezoidal Rule approximation to $\int_{a}^{b} f(x)\, dx$ using $n$ equally spaced sub-intervals on $[a,b]$ is:

$$\int_{a}^{b} f(x)\, dx \approx T_n = \frac{\Delta x}{2}\left[ f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{n-1}) + f(x_n) \right]$$
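The Midpoint and Trapezoidal Rules translate directly into short programs. Below is a minimal Python sketch (the function names and the choice of $\int_0^1 e^{x^2}\, dx$ as a test integrand are illustrative, not from the notes):

```python
import math

def midpoint_rule(f, a, b, n):
    """Approximate the integral of f on [a, b] with n midpoint rectangles."""
    dx = (b - a) / n
    # Evaluate f at the midpoint of each sub-interval [x_{i-1}, x_i].
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

def trapezoidal_rule(f, a, b, n):
    """Approximate the integral of f on [a, b] with n trapezoids."""
    dx = (b - a) / n
    # Endpoint values are weighted once, interior values twice.
    interior = sum(f(a + i * dx) for i in range(1, n))
    return (dx / 2) * (f(a) + 2 * interior + f(b))

f = lambda x: math.exp(x ** 2)        # no elementary antiderivative
print(midpoint_rule(f, 0, 1, 100))    # ≈ 1.4626
print(trapezoidal_rule(f, 0, 1, 100)) # ≈ 1.4627
```

Note that since $e^{x^2}$ is concave up on $[0,1]$, the midpoint result slightly underestimates the integral while the trapezoidal result slightly overestimates it, which is why the midpoint rule is typically the more accurate of the two.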
If $f''$ is continuous and $M$ is any upper bound for the values of $\left|f''\right|$ on $[a,b]$, then the error $E_T$ in the trapezoidal approximation of the integral of $f$ from $a$ to $b$ for $n$ steps satisfies the inequality:
$$\left|E_T\right| \le \frac{M(b-a)^3}{12n^2}$$

Simpson's Rule for approximate integration uses parabolas, rather than straight line segments, to approximate curve segments.
Simpson's Rule is usually a much better approximation than any of the Riemann sum methods (left endpoint, right endpoint, or midpoint), and better than the Trapezoidal Rule.
As before, we partition the interval $[a, b]$ into $n$ subintervals of equal length $\Delta x = \frac{b - a}{n}$, but this time we require that $n$ be an even number. The resulting approximation is:

$$\int_{a}^{b} f(x)\, dx \approx S_n = \frac{\Delta x}{3}\left[ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n) \right]$$
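Simpson's Rule can be sketched in Python as follows; the coefficients follow the pattern $1, 4, 2, 4, \ldots, 2, 4, 1$, and the test integral $\int_1^2 \frac{1}{x}\, dx = \ln 2$ is my choice for checking the result:

```python
import math

def simpsons_rule(f, a, b, n):
    """Simpson's Rule with n (even) equal sub-intervals on [a, b]."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule requires an even number of sub-intervals")
    dx = (b - a) / n
    x = [a + i * dx for i in range(n + 1)]
    # Weights follow the pattern 1, 4, 2, 4, ..., 2, 4, 1.
    total = f(x[0]) + f(x[n])
    total += 4 * sum(f(x[i]) for i in range(1, n, 2))  # odd-index points
    total += 2 * sum(f(x[i]) for i in range(2, n, 2))  # even interior points
    return (dx / 3) * total

print(simpsons_rule(lambda x: 1 / x, 1, 2, 10))  # ≈ 0.69315 (ln 2 ≈ 0.693147)
```

With only $n = 10$ sub-intervals the result already agrees with $\ln 2$ to about five decimal places, far better than the trapezoidal result for the same $n$.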
If $f^{(4)}$ (the 4th derivative of $f$) is continuous, and $M$ is any upper bound for the values of $\left|f^{(4)}\right|$ on $[a,b]$, then the error $E_s$ in the Simpson's Rule approximation of the integral of $f$ from $a$ to $b$ for $n$ steps satisfies the inequality:
$$\left|E_s\right| \le \frac{M(b-a)^5}{180n^4} \quad \quad \tiny\text{Simpson's Rule}$$
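A common use of the error bound is to run it in reverse: choose $n$ large enough to guarantee a desired accuracy. A sketch for $\int_1^2 \frac{1}{x}\, dx$, where $f^{(4)}(x) = 24/x^5$ gives $M = 24$ on $[1,2]$ (the tolerance and test integral are illustrative):

```python
import math

# Solve M(b-a)^5 / (180 n^4) ≤ tol for n, for f(x) = 1/x on [1, 2]:
# f⁗(x) = 24/x^5, so |f⁗| ≤ 24 on [1, 2] (M = 24).
a, b, M, tol = 1, 2, 24, 1e-8
n = math.ceil((M * (b - a) ** 5 / (180 * tol)) ** 0.25)
n += n % 2   # Simpson's Rule needs an even n, so round up to even
print(n)     # 62 sub-intervals guarantee |E_S| ≤ 1e-8
```

Because the error shrinks like $1/n^4$, doubling $n$ cuts the guaranteed error by a factor of 16, which is why modest values of $n$ already give high accuracy.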