Convergence rate and semi-log plots

Let $\{p_n \}_{n=0}^{\infty}$ be a sequence that converges to a limit $L = \lim_{n\rightarrow \infty}p_n$.

We follow the textbook and define the convergence rate, $r$, as the number such that

\begin{equation} \lim_{n \rightarrow \infty} \frac{|p_n - L|}{|p_{n-1}-L|^r} = C \end{equation}

for some finite constant $C > 0$:
  • if $r=1$, we say that the method is linear;
  • if $r=2$, we say that the method is quadratic;
  • etc.
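As a quick numerical illustration (sketched here in Python rather than the notebook's Julia; the helper `estimate_order` is our own, not from the textbook), $r$ can be estimated from three consecutive errors, since $\log(e_n/e_{n-1}) \approx r\,\log(e_{n-1}/e_{n-2})$ for large $n$:

```python
import math

def estimate_order(errors):
    """Estimate the convergence order r from the last three errors,
    using r ~ log(e_n / e_{n-1}) / log(e_{n-1} / e_{n-2})."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Linearly convergent model sequence: e_n = 0.5^n  ->  r close to 1
linear = [0.5**n for n in range(1, 10)]
print(estimate_order(linear))

# Quadratically convergent model sequence: e_n = 0.5^(2^n)  ->  r close to 2
quadratic = [0.5**(2**n) for n in range(1, 7)]
print(estimate_order(quadratic))
```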

We will provide some notions of how to recognize the convergence rate from a semi-log plot.

Fixed point iteration and linear convergence

By Theorem 2.4 in your textbook, the fixed-point method defines a sequence of values $p_n$ such that

\begin{equation} |p_n - L | < \frac{k^n}{1-k} |p_0 - L| \end{equation}

where $k$ is the Lipschitz constant, and $k < 1$.

Since $k^n = e^{n \log(k)}$, we can rewrite this as \begin{equation} |p_n - L | \lesssim e^{n \log(k)}, \end{equation} where $\log(k) < 0$.

A simple check for linear convergence would be to draw a semi-log plot.


In [6]:
using Gadfly

n = 100
iteration = 1:n                       # iteration counts 1, 2, …, n
Error = exp.(-iteration)              # model a linear rate: |p_n - L| ≈ e^(-n)
plot(x = iteration, y = Error, Scale.y_log10)


Out[6]:
[Semi-log plot of the model error $e^{-n}$ against the iteration number $n$: a straight line with negative slope.]

If the error curve is a straight line, then your iterative algorithm has a linear convergence rate.

This can be explained as follows: take $\log$ on both sides of the inequality \begin{equation} |p_n - L | \lesssim e^{n \log(k)}. \end{equation} Since $\log$ is a monotonically increasing function, we obtain \begin{equation} \log(|p_n - L |) \lesssim n \log(k), \end{equation} where $\log(k)$ is a negative constant.

A semi-log plot (in $y$) displays the $\log$ of the error against the iteration number $n$, which in this case yields a straight line with slope equal to $\log(k)$.
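To check this on an actual iteration rather than a modeled error, here is a small experiment (sketched in Python; the choice $g(x)=\cos x$ and its fixed point are our example, not from the textbook). The average slope of $\log(|p_n - L|)$ against $n$ should approach $\log(k)$ with $k = |g'(L)| = \sin(L)$:

```python
import math

# Fixed-point iteration p_{n+1} = cos(p_n); its fixed point is
# L ~ 0.7390851332, and cos is a contraction near L with k = sin(L) < 1.
L = 0.7390851332151607
p = 1.0
errors = []
for _ in range(30):
    p = math.cos(p)
    errors.append(abs(p - L))

# Slope of log(error) vs n: log(e_n) - log(e_{n-1}) = log(e_n / e_{n-1}).
slopes = [math.log(errors[n] / errors[n - 1]) for n in range(1, 30)]
avg_slope = sum(slopes) / len(slopes)
print(avg_slope, math.log(math.sin(L)))  # both close to -0.395
```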

Newton's Method and quadratic convergence

It is known that, under suitable assumptions on $f$ (namely: $f'$ is bounded away from zero near the root, $f''$ is continuous, and the initial guess $p_0$ is sufficiently close to the root), Newton's method converges quadratically to its limit.

In other words, \begin{equation} \lim_{n \rightarrow \infty} \frac{|p_n - L|}{|p_{n-1}-L|^2} = C, \end{equation} which implies (once $C|p_0 - L| < 1$) that \begin{equation} |p_n - L| \lesssim q^{2^n} \end{equation} for $q = C|p_0 - L| < 1$: since the error is squared at every step, the exponent doubles each iteration.

We can easily see this behavior in a semi-log plot.


In [7]:
using Gadfly

n = 8                                 # e^(-2^n) underflows for much larger n
iteration = 1:n
Error = exp.(-(2.0 .^ iteration))     # model a quadratic rate: |p_n - L| ≈ e^(-2^n)
plot(x = iteration, y = Error, Scale.y_log10)


Out[7]:
[Semi-log plot of the model error against the iteration number $n$: the curve bends downward ever more steeply.]

If the error curve in a semi-log plot bends downward, dropping faster than any straight line, your method has a quadratic convergence rate: the number of correct digits roughly doubles at each iteration.

This behavior can be explained in the same way as before, giving \begin{equation} \log(|p_n - L |) \lesssim 2^n \log(q), \end{equation} where $\log(q)$ is a negative number.
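A concrete check (sketched in Python; $f(x) = x^2 - 2$ is our example, not the textbook's): for Newton's method the ratio $|p_n - L|/|p_{n-1} - L|^2$ stays bounded, which is exactly the definition of $r = 2$.

```python
import math

# Newton's method for f(x) = x^2 - 2; the root is sqrt(2).
# Quadratic convergence: the error is roughly squared at each step.
p = 1.5                           # initial guess near the root
root = math.sqrt(2)
errors = []
for _ in range(3):
    p = p - (p**2 - 2) / (2 * p)  # Newton update
    errors.append(abs(p - root))

# The ratio e_n / e_{n-1}^2 stays bounded, consistent with r = 2;
# it tends to |f''(root) / (2 f'(root))| = 1 / (2 sqrt(2)) ~ 0.354.
ratios = [errors[n] / errors[n - 1] ** 2 for n in range(1, 3)]
print(ratios)
```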

Remarks

Please take into account that the term "convergence rate" is defined differently depending on the context.

For series, the convergence rate describes how fast the error decreases with the number of terms. In that setting, what is called a linear rate for an iterative method becomes an exponential rate for a truncated series.

As an example, in Homework 1 you showed that approximating $e$ via a Taylor expansion converges factorially fast. By Stirling's formula, the error behaves like $e^{-n\log{n}}$. If an iterative method had the same error behavior, it would be called superlinear.
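That factorial behavior is easy to check numerically (sketched in Python; the variable names are ours). Successive error ratios of the truncated series shrink toward zero, whereas a linearly convergent method would have a constant ratio:

```python
import math

# Partial sums of the Taylor series for e: sum_{k=0}^{n} 1/k!.
# The truncation error behaves like 1/(n+1)!, i.e. roughly
# e^(-n log n) by Stirling's formula: faster than any linear rate.
partial = 0.0
factorial = 1.0
errors = []
for k in range(15):
    partial += 1.0 / factorial
    factorial *= (k + 1)
    errors.append(abs(math.e - partial))

# Successive error ratios keep shrinking (roughly like 1/(n+2));
# a linearly convergent method would show a constant ratio instead.
ratios = [errors[n] / errors[n - 1] for n in range(1, 10)]
print(ratios)
```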

For more examples please take a look at https://en.wikipedia.org/wiki/Rate_of_convergence.

