Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: Sep 4, 2017
Copyright 1999-2017, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
The Numerical Recipes codes have been rewritten in Fortran 90/95 and C/C++ by us. The original source codes are taken from the widely used software package LAPACK, which follows two other popular packages developed in the 1970s, namely EISPACK and LINPACK.
LINPACK: package for linear equations and least-squares problems.
LAPACK: package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website http://www.netlib.org it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.
BLAS (I, II and III): Basic Linear Algebra Subprograms are routines that provide standard building blocks for performing basic vector and matrix operations. BLAS I covers vector operations, BLAS II vector-matrix operations and BLAS III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from http://www.netlib.org.
Matrix Properties Reminder.
Relations | Name | matrix elements |
---|---|---|
$A = A^{T}$ | symmetric | $a_{ij} = a_{ji}$ |
$A = \left (A^{T} \right )^{-1}$ | real orthogonal | $\sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij}$ |
$A = A^{ * }$ | real matrix | $a_{ij} = a_{ij}^{ * }$ |
$A = A^{\dagger}$ | hermitian | $a_{ij} = a_{ji}^{ * }$ |
$A = \left (A^{\dagger} \right )^{-1}$ | unitary | $\sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij}$ |
Diagonal if $a_{ij}=0$ for $i\ne j$
Upper triangular if $a_{ij}=0$ for $i > j$
Lower triangular if $a_{ij}=0$ for $i < j$
Upper Hessenberg if $a_{ij}=0$ for $i > j+1$
Lower Hessenberg if $a_{ij}=0$ for $i < j+1$
Tridiagonal if $a_{ij}=0$ for $|i -j| > 1$
Lower banded with bandwidth $p$: $a_{ij}=0$ for $i > j+p$
Upper banded with bandwidth $p$: $a_{ij}=0$ for $j > i+p$
Banded, block upper triangular, block lower triangular....
Some Equivalent Statements.
For an $N\times N$ matrix $\mathbf{A}$ the following properties are all equivalent
The inverse of $\mathbf{A}$ exists, that is, $\mathbf{A}$ is nonsingular.
The equation $\mathbf{Ax}=0$ implies $\mathbf{x}=0$.
The rows of $\mathbf{A}$ form a basis of $R^N$.
The columns of $\mathbf{A}$ form a basis of $R^N$.
$\mathbf{A}$ is a product of elementary matrices.
$0$ is not an eigenvalue of $\mathbf{A}$.
The basic matrix operations that we will deal with are addition and subtraction, scalar-matrix multiplication, matrix-matrix multiplication and transposition, together with the corresponding vector operations: scalar-vector multiplication, the inner or so-called dot product resulting in a scalar, and the outer product, which yields a matrix.
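As a small illustration (a minimal sketch, with the vector values chosen only as an example and with zero-based indexing), the inner and outer products could be coded in C/C++ as
const int N = 3;
double x[N] = {1.0, 2.0, 3.0};
double y[N] = {4.0, 5.0, 6.0};
double A[N][N];
// the inner or dot product x^T y results in a scalar
double dot = 0.0;
for (int i = 0; i < N; i++) dot += x[i]*y[i];
// the outer product x y^T results in an N x N matrix
for (int i = 0; i < N; i++) {
  for (int j = 0; j < N; j++) {
    A[i][j] = x[i]*y[j];
  }
}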
A matrix can be declared statically in C/C++ and initialized to zero as follows
int N = 100;
double A[100][100];
int i, j;
// initialize all elements to zero
for(i=0 ; i < N ; i++) {
  for(j=0 ; j < N ; j++) {
    A[i][j] = 0.0;
  }
}
Matrix addition, with elements $a_{ij}=b_{ij}+c_{ij}$, would in C/C++ be coded like
for(i=0 ; i < N ; i++) {
  for(j=0 ; j < N ; j++) {
    a[i][j] = b[i][j]+c[i][j];
  }
}
Matrix-matrix multiplication, with elements $a_{ij}=\sum_k b_{ik}c_{kj}$, would in C/C++ be coded like
for(i=0 ; i < N ; i++) {
  for(j=0 ; j < N ; j++) {
    a[i][j] = 0.0;
    for(k=0 ; k < N ; k++) {
      a[i][j] += b[i][k]*c[k][j];
    }
  }
}
The corresponding Fortran code for matrix addition with dynamically allocated arrays reads
ALLOCATE (a(N,N), b(N,N), c(N,N))
DO j=1, N
DO i=1, N
a(i,j)=b(i,j)+c(i,j)
ENDDO
ENDDO
...
DEALLOCATE(a,b,c)
Fortran 90 writes the above statements in a much simpler way
a=b+c
Multiplication
a=MATMUL(b,c)
Fortran also contains the intrinsic functions TRANSPOSE and CONJG (for the complex conjugate).
At least three possibilities in this course
Do it yourself
Use the functions provided in the library package lib.cpp
Use Armadillo http://arma.sourceforge.net (a C++ linear algebra library, discussed both here and at the lab).
Do it yourself.
int N = 100;   // matrix dimension
double ** A;
A = new double*[N];
for (int i = 0; i < N; i++)
  A[i] = new double[N];
Always free space when you don't need an array anymore.
for (int i = 0; i < N; i++)
  delete[] A[i];
delete[] A;
Armadillo is a C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. The syntax is deliberately similar to Matlab.
Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions. Various matrix decompositions are provided through optional integration with LAPACK, or one of its high performance drop-in replacements (such as the multi-threaded MKL or ACML libraries).
A delayed evaluation approach is employed (at compile-time) to combine several operations into one and reduce (or eliminate) the need for temporaries. This is accomplished through recursive templates and template meta-programming.
Useful for conversion of research code into production environments, or if C++ has been decided as the language of choice, due to speed and/or integration capabilities.
The library is open-source software, and is distributed under a license that is useful in both open-source and commercial/proprietary contexts.
#include <iostream>
#include <armadillo>
using namespace std;
using namespace arma;
int main(int argc, char** argv)
{
mat A = randu<mat>(5,5);
mat B = randu<mat>(5,5);
cout << A*B << endl;
return 0;
}
For people using Ubuntu, Debian, Linux Mint, simply go to the synaptic package manager and install armadillo from there. You may have to install Lapack as well. For Mac and Windows users, follow the instructions from the webpage http://arma.sourceforge.net. To compile, use for example (linux/ubuntu)
c++ -O2 -o program.x program.cpp -larmadillo -llapack -lblas
where the -l option indicates the library you wish to link to.
For OS X users you may have to declare the paths to the include files and the libraries as
c++ -O2 -o program.x program.cpp -L/usr/local/lib -I/usr/local/include -larmadillo -llapack -lblas
#include <iostream>
#include "armadillo"
using namespace arma;
using namespace std;
int main(int argc, char** argv)
{
// directly specify the matrix size (elements are uninitialised)
mat A(2,3);
// .n_rows = number of rows (read only)
// .n_cols = number of columns (read only)
cout << "A.n_rows = " << A.n_rows << endl;
cout << "A.n_cols = " << A.n_cols << endl;
// directly access an element (indexing starts at 0)
A(1,2) = 456.0;
A.print("A:");
// scalars are treated as a 1x1 matrix,
// hence the code below will set A to have a size of 1x1
A = 5.0;
A.print("A:");
// if you want a matrix with all elements set to a particular value
// the .fill() member function can be used
A.set_size(3,3);
A.fill(5.0); A.print("A:");
mat B;
// endr indicates "end of row"
B << 0.555950 << 0.274690 << 0.540605 << 0.798938 << endr
<< 0.108929 << 0.830123 << 0.891726 << 0.895283 << endr
<< 0.948014 << 0.973234 << 0.216504 << 0.883152 << endr
<< 0.023787 << 0.675382 << 0.231751 << 0.450332 << endr;
// print to the cout stream
// with an optional string before the contents of the matrix
B.print("B:");
// the << operator can also be used to print the matrix
// to an arbitrary stream (cout in this case)
cout << "B:" << endl << B << endl;
// save to disk
B.save("B.txt", raw_ascii);
// load from disk
mat C;
C.load("B.txt");
C += 2.0 * B;
C.print("C:");
// submatrix types:
//
// .submat(first_row, first_column, last_row, last_column)
// .row(row_number)
// .col(column_number)
// .cols(first_column, last_column)
// .rows(first_row, last_row)
cout << "C.submat(0,0,3,1) =" << endl;
cout << C.submat(0,0,3,1) << endl;
// generate the identity matrix
mat D = eye<mat>(4,4);
D.submat(0,0,3,1) = C.cols(1,2);
D.print("D:");
// transpose
cout << "trans(B) =" << endl;
cout << trans(B) << endl;
// maximum from each column (traverse along rows)
cout << "max(B) =" << endl;
cout << max(B) << endl;
// maximum from each row (traverse along columns)
cout << "max(B,1) =" << endl;
cout << max(B,1) << endl;
// maximum value in B
cout << "max(max(B)) = " << max(max(B)) << endl;
// sum of each column (traverse along rows)
cout << "sum(B) =" << endl;
cout << sum(B) << endl;
// sum of each row (traverse along columns)
cout << "sum(B,1) =" << endl;
cout << sum(B,1) << endl;
// sum of all elements
cout << "sum(sum(B)) = " << sum(sum(B)) << endl;
cout << "accu(B) = " << accu(B) << endl;
// trace = sum along diagonal
cout << "trace(B) = " << trace(B) << endl;
// random matrix -- values are uniformly distributed in the [0,1] interval
mat E = randu<mat>(4,4);
E.print("E:");
// row vectors are treated like a matrix with one row
rowvec r;
r << 0.59499 << 0.88807 << 0.88532 << 0.19968;
r.print("r:");
// column vectors are treated like a matrix with one column
colvec q;
q << 0.81114 << 0.06256 << 0.95989 << 0.73628;
q.print("q:");
// dot or inner product
cout << "as_scalar(r*q) = " << as_scalar(r*q) << endl;
// outer product
cout << "q*r =" << endl;
cout << q*r << endl;
// sum of three matrices (no temporary matrices are created)
mat F = B + C + D;
F.print("F:");
return 0;
}
#include <iostream>
#include "armadillo"
using namespace arma;
using namespace std;
int main(int argc, char** argv)
{
cout << "Armadillo version: " << arma_version::as_string() << endl;
mat A;
A << 0.165300 << 0.454037 << 0.995795 << 0.124098 << 0.047084 << endr
<< 0.688782 << 0.036549 << 0.552848 << 0.937664 << 0.866401 << endr
<< 0.348740 << 0.479388 << 0.506228 << 0.145673 << 0.491547 << endr
<< 0.148678 << 0.682258 << 0.571154 << 0.874724 << 0.444632 << endr
<< 0.245726 << 0.595218 << 0.409327 << 0.367827 << 0.385736 << endr;
A.print("A =");
// determinant
cout << "det(A) = " << det(A) << endl;
// inverse
cout << "inv(A) = " << endl << inv(A) << endl;
double k = 1.23;
mat B = randu<mat>(5,5);
mat C = randu<mat>(5,5);
rowvec r = randu<rowvec>(5);
colvec q = randu<colvec>(5);
// examples of some expressions
// for which optimised implementations exist
// optimised implementation of a trinary expression
// that results in a scalar
cout << "as_scalar( r*inv(diagmat(B))*q ) = ";
cout << as_scalar( r*inv(diagmat(B))*q ) << endl;
// example of an expression which is optimised
// as a call to the dgemm() function in BLAS:
cout << "k*trans(B)*C = " << endl << k*trans(B)*C;
return 0;
}
We assume also that the matrix $\mathbf{A}$ is non-singular and that the matrix elements along the diagonal satisfy $a_{ii} \ne 0$. Simple $4\times 4 $ example
The basic idea of Gaussian elimination is to use the first equation to eliminate the first unknown $x_1$ from the remaining $n-1$ equations. Then we use the new second equation to eliminate the second unknown $x_2$ from the remaining $n-2$ equations. With $n-1$ such eliminations we obtain a so-called upper triangular set of equations of the form
To arrive at such an upper triangular system of equations, we start by eliminating the unknown $x_1$ from equations $j=2,\dots,n$. We achieve this by multiplying the first equation by $a_{j1}/a_{11}$ and then subtracting the result from the $j$th equation. We assume obviously that $a_{11}\ne 0$ and that $\mathbf{A}$ is not singular.
Our actual $4\times 4$ example reads after the first operation
or
where each $a_{1k}^{(1)}$ is equal to the original $a_{1k}$ element. The other coefficients are
$$a_{jk}^{(2)} = a_{jk}^{(1)} - \frac{a_{j1}^{(1)}a_{1k}^{(1)}}{a_{11}^{(1)}}, \qquad j,k = 2,\dots,n,$$
with a new right-hand side given by
$$w_{j}^{(2)} = w_{j}^{(1)} - \frac{a_{j1}^{(1)}w_{1}^{(1)}}{a_{11}^{(1)}}, \qquad j = 2,\dots,n.$$
We have also set $w_1^{(1)}=w_1$, the original vector element. We see that the system of unknowns $x_1,\dots,x_n$ is transformed into an $(n-1)\times (n-1)$ problem.
This step is called forward substitution. Proceeding with these substitutions, we obtain the general expressions for the new coefficients
$$a_{jk}^{(m+1)} = a_{jk}^{(m)} - \frac{a_{jm}^{(m)}a_{mk}^{(m)}}{a_{mm}^{(m)}}, \qquad j,k = m+1,\dots,n,$$
with $m=1,\dots,n-1$ and a right-hand side given by
$$w_{j}^{(m+1)} = w_{j}^{(m)} - \frac{a_{jm}^{(m)}w_{m}^{(m)}}{a_{mm}^{(m)}}, \qquad j = m+1,\dots,n.$$
This set of $n-1$ eliminations leads us to an upper triangular system of equations which is solved by back substitution. If the arithmetic is exact and the matrix $\mathbf{A}$ is not singular, then the computed answer will be exact.
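A minimal C/C++ sketch of these two stages, forward elimination without pivoting followed by back substitution, could look as follows; the matrix values are chosen only as an example, zero-based indexing is used, and non-zero pivots are assumed:
const int n = 3;
double a[n][n] = { { 2.0,  1.0, -1.0},
                   {-3.0, -1.0,  2.0},
                   {-2.0,  1.0,  2.0} };
double w[n] = {8.0, -11.0, -3.0};
double x[n];
// forward elimination: for each m eliminate x_m from rows m+1,...,n-1
for (int m = 0; m < n-1; m++) {
  for (int j = m+1; j < n; j++) {
    double factor = a[j][m]/a[m][m];   // assumes a[m][m] != 0
    for (int k = m; k < n; k++) a[j][k] -= factor*a[m][k];
    w[j] -= factor*w[m];
  }
}
// back substitution: solve the resulting upper triangular system
for (int i = n-1; i >= 0; i--) {
  double sum = w[i];
  for (int k = i+1; k < n; k++) sum -= a[i][k]*x[k];
  x[i] = sum/a[i][i];
}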
Even though the matrix elements along the diagonal are not zero, numerically small numbers may appear and subsequent divisions may lead to large numbers, which, if added to a small number, may yield losses of precision. Suppose for example that our first division in $(a_{22}-a_{21}a_{12}/a_{11})$ results in $-10^{-7}$ and that $a_{22}$ is one. We are then adding $10^7+1$. With single precision this results in $10^7$.
Suppose we want to solve the following boundary value equation
with $x\in (a,b)$ and with boundary conditions $u(a)=u(b) = 0$. We assume that $f$ is a continuous function in the domain $x\in (a,b)$. Since, except for the few cases where it is possible to find analytic solutions, we will seek approximate solutions, we choose to represent the approximation to the second derivative from the previous chapter
We subdivide our interval $x\in (a,b)$ into $n$ subintervals by setting $x_i = ih$, with $i=0,1,\dots,n+1$. The step size is then given by $h=(b-a)/(n+1)$ with $n\in {\mathbb{N}}$. For the internal grid points $i=1,2,\dots n$ we replace the differential operator with the above formula resulting in
which we rewrite as
with $i=1,2,\dots, n$. We need to add to this system the two boundary conditions $u(a) =u_0$ and $u(b) = u_{n+1}$. If we define a matrix
and the corresponding vectors $\mathbf{u} = (u_1, u_2, \dots,u_n)^T$ and $\mathbf{f}(\mathbf{u}) = f(x_1,x_2,\dots, x_n,u_1, u_2, \dots,u_n)^T$ we can rewrite the differential equation including the boundary conditions as a system of linear equations with a large number of unknowns
where $\mathbf{A}$ is a tridiagonal matrix which we rewrite as
for $i=1,2,\dots,n$. We see that $u_{-1}$ and $u_{n+1}$ are not required and we can set $a_1=c_n=0$. In many applications the matrix is symmetric and we have $a_i=c_i$. The algorithm for solving this set of equations is rather simple and requires two steps only, a forward substitution and a backward substitution. These steps are also common to the algorithms based on Gaussian elimination that we discussed previously. However, due to its simplicity, the number of floating point operations is in this case of order $O(n)$, while Gaussian elimination requires $2n^3/3+O(n^2)$ floating point operations.
In case your system of equations leads to a tridiagonal matrix, it is clearly an overkill to employ Gaussian elimination or the standard LU decomposition.
Our algorithm starts with forward substitution with a loop over the elements $i$ and gives an update of the diagonal elements $b_i$ given by the new diagonals $\tilde{b}_i$
and the new righthand side $\tilde{f}_i$ given by
This matrix fulfills the condition of a weak dominance of the diagonal, with $|b_1| > |c_1|$, $|b_n| > |a_n|$ and $|b_k| \ge |a_k|+|c_k|$ for $k=2,3,\dots,n-1$. This is a relevant but not sufficient condition to guarantee that the matrix $\mathbf{A}$ yields a solution to a linear equation problem. The matrix needs also to be irreducible. A tridiagonal irreducible matrix means that all the elements $a_i$ and $c_i$ are non-zero. If these two conditions are present, then $\mathbf{A}$ is nonsingular and has a unique LU decomposition.
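A minimal sketch of the resulting forward and backward substitution for a general tridiagonal system (the so-called Thomas algorithm) could read as follows; the arrays and their names are chosen here only for illustration, with zero-based indexing:
// n grid points; a, b, c hold the sub-, main and super-diagonals, f the right-hand side
const int n = 5;
double a[n], b[n], c[n], f[n], u[n];
// ... fill a, b, c and f here; a[0] and c[n-1] are not used ...
// forward substitution: eliminate the sub-diagonal
for (int i = 1; i < n; i++) {
  double factor = a[i]/b[i-1];
  b[i] -= factor*c[i-1];   // updated diagonal, the tilde{b}_i above
  f[i] -= factor*f[i-1];   // updated right-hand side, the tilde{f}_i above
}
// backward substitution: solve for the unknowns
u[n-1] = f[n-1]/b[n-1];
for (int i = n-2; i >= 0; i--) {
  u[i] = (f[i] - c[i]*u[i+1])/b[i];
}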
When setting up the algorithm it is useful to note that the different operations on the matrix (here as a $4\times 4$ case with diagonals $d_i$ and off-diagonal elements $e_i$) give an extremely simple algorithm, namely
and finally
The matrices we often end up with when rewriting for example partial differential equations have the feature that all leading principal submatrices are non-singular.
For this special matrix we can actually precalculate the updated matrix elements $\tilde{d}_i$. The non-diagonal elements $e_i$ are unchanged. For our particular matrix in project 1 we have
and the new righthand side $\tilde{f}_i$ given by
Recall that $\tilde{d}_1=2$ and $\tilde{f}_1=f_1$. These arrays can be set up before computing $u$.
The backward substitution gives then the final solution
with $u_n=\tilde{f}_{n}/\tilde{b}_{n}$.
#include <iostream>
#include <fstream>
#include <iomanip>
#include <cmath>
#include <cstdlib>
#include <string>
// use namespace for output and input
using namespace std;
// object for output files
ofstream ofile;
// Functions used
inline double f(double x){return 100.0*exp(-10.0*x);
}
inline double exact(double x) {return 1.0-(1-exp(-10))*x-exp(-10*x);}
// Begin main program
int main(int argc, char *argv[]){
int exponent;
string filename;
// We read also the basic name for the output file and the highest power of 10^n we want
if( argc <= 2 ){
cout << "Bad Usage: " << argv[0] <<
" read also file name on same line and max power 10^n" << endl;
exit(1);
}
else{
filename = argv[1]; // first command line argument after name of program
exponent = atoi(argv[2]);
}
// Loop over powers of 10
for (int i = 1; i <= exponent; i++){
int n = (int) pow(10.0,i);
// Declare new file name
string fileout = filename;
// Convert the power 10^i to a string
string argument = to_string(i);
// Final filename as filename-i-
fileout.append(argument);
double h = 1.0/(n);
double hh = h*h;
// Set up arrays for the simple case
double *d = new double [n+1]; double *b = new double [n+1]; double *solution = new double [n+1];
double *x = new double[n+1];
// Quick setup of updated diagonal elements and boundary values of the solution
d[0] = d[n] = 2; solution[0] = solution[n] = 0.0;
for (int i = 1; i < n; i++) d[i] = (i+1.0)/( (double) i);
for (int i = 0; i <= n; i++){
x[i]= i*h;
b[i] = hh*f(i*h);
}
// Forward substitution
for (int i = 2; i < n; i++) b[i] = b[i] + b[i-1]/d[i-1];
// Backward substitution
solution[n-1] = b[n-1]/d[n-1];
for (int i = n-2; i > 0; i--) solution[i] = (b[i]+solution[i+1])/d[i];
ofile.open(fileout);
ofile << setiosflags(ios::showpoint | ios::uppercase);
// ofile << " x: approx: exact: relative error" << endl;
for (int i = 1; i < n;i++) {
double xval = x[i];
double RelativeError = fabs((exact(xval)-solution[i])/exact(xval));
ofile << setw(15) << setprecision(8) << xval;
ofile << setw(15) << setprecision(8) << solution[i];
ofile << setw(15) << setprecision(8) << exact(xval);
ofile << setw(15) << setprecision(8) << log10(RelativeError) << endl;
}
ofile.close();
delete [] x; delete [] d; delete [] b; delete [] solution;
}
return 0;
}
Gaussian elimination, $O(2/3n^3)$ flops, general matrix
LU decomposition, upper and lower triangular matrices, $O(2/3n^3)$ flops, general matrix. Get easily the inverse and determinant, and can solve linear equations with back-substitution only, $O(n^2)$ flops
Cholesky decomposition. Real symmetric or hermitian positive definite matrix, $O(1/3n^3)$ flops.
Tridiagonal linear systems, important for differential equations. Normally positive definite and non-singular. $O(8n)$ flops for symmetric. Special case of banded matrices.
Singular value decomposition
The QR method will be discussed in chapter 7 in connection with eigenvalue systems. $O(4/3n^3)$ flops.
The LU decomposition method means that we can rewrite this matrix as the product of two matrices $\mathbf{L}$ and $\mathbf{U}$ where
The above set of equations is conveniently solved by using LU decomposition as an intermediate step.
The matrix $\mathbf{A}\in \mathbb{R}^{n\times n}$ has an LU factorization if all its leading principal submatrices are non-singular. If the LU factorization exists and $\mathbf{A}$ is non-singular, then the LU factorization is unique and the determinant is given by
$$\det(\mathbf{A}) = \det(\mathbf{L})\det(\mathbf{U}) = u_{11}u_{22}\dots u_{nn}.$$
There are at least three main advantages with LU decomposition compared with standard Gaussian elimination:
It is straightforward to compute the determinant of a matrix
If we have to solve sets of linear equations with the same matrix but with different vectors $\mathbf{y}$, the $O(n^3)$ factorization needs to be performed only once; each additional right-hand side is then solved with only $O(n^2)$ FLOPS.
Computing the inverse is precisely such an operation: we solve for one column of the inverse at a time, with the corresponding unit vector as right-hand side.
With the LU decomposition it is rather simple to solve a system of linear equations
This can be written in matrix form as
$$\mathbf{A}\mathbf{x} = \mathbf{w},$$
where $\mathbf{A}$ and $\mathbf{w}$ are known and we have to solve for $\mathbf{x}$. Using the LU decomposition we write
$$\mathbf{A}\mathbf{x} \equiv \mathbf{L}\mathbf{U}\mathbf{x} = \mathbf{w}.$$
This equation can be solved in two steps: first solve $\mathbf{L}\mathbf{y} = \mathbf{w}$ for $\mathbf{y}$, then solve $\mathbf{U}\mathbf{x} = \mathbf{y}$ for $\mathbf{x}$. To show that this is correct we use the LU decomposition to rewrite our system of linear equations as
$$\mathbf{L}\mathbf{U}\mathbf{x} = \mathbf{w},$$
and since the determinant of $\mathbf{L}$ is equal to 1 (by construction, since the diagonals of $\mathbf{L}$ equal 1) we can use the inverse of $\mathbf{L}$ to obtain
$$\mathbf{U}\mathbf{x} = \mathbf{L}^{-1}\mathbf{w} = \mathbf{y},$$
which yields the intermediate step
$$\mathbf{L}^{-1}\mathbf{w} = \mathbf{y},$$
and as soon as we have $\mathbf{y}$ we can obtain $\mathbf{x}$ through $\mathbf{U}\mathbf{x} = \mathbf{y}$.
This example shows the basis for the algorithm needed to solve the set of $n$ linear equations.
The algorithm goes as follows
Set up the matrix $\bf A$ and the vector $\bf w$ with their correct dimensions. This determines the dimensionality of the unknown vector $\bf x$.
Then LU decompose the matrix $\bf A$ through a call to the function ludcmp(double **a, int n, int *indx, double *d). This function returns the LU decomposed matrix $\bf A$, its determinant and the vector indx which keeps track of the number of interchanges of rows. If the determinant is zero, the matrix is singular and the solution is not well defined.
Thereafter you call the function lubksb(double **a, int n, int *indx, double *w), which uses the LU decomposed matrix $\bf A$ and the vector $\bf w$ and returns $\bf x$ in the same place as $\bf w$. Upon exit the original content of $\bf w$ is destroyed. If you wish to keep this information, you should make a backup of it in your calling function.
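As a minimal sketch (assuming the matrix a of type double ** and the right-hand side w have already been allocated and filled, and that lib.h is included), the two calls could be combined as
int    *indx = new int[n];
double  d;
// step 1: LU decompose a in place; indx records the row interchanges
ludcmp(a, n, indx, &d);
// step 2: solve LUx = w; on return w contains the solution x
lubksb(a, n, indx, w);
delete [] indx;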
If the inverse exists then
$$\mathbf{A}^{-1}\mathbf{A} = \mathbf{I},$$
the identity matrix. With an LU decomposed matrix we can rewrite the last equation as
$$\mathbf{L}\mathbf{U}\mathbf{A}^{-1} = \mathbf{I}.$$
If we solve for one column of the inverse matrix at a time, with the corresponding unit vector $\mathbf{e}_j$ as right-hand side, then we have a linear set of equations
$$\mathbf{L}\mathbf{U}\,\mathbf{a}^{-1}_{j} = \mathbf{e}_{j},$$
where $\mathbf{a}^{-1}_{j}$ is column $j$ of $\mathbf{A}^{-1}$, and continue till we have solved all $n$ sets of linear equations.
Standard C/C++: fetch the files lib.cpp and lib.h. You can make a directory where you store these files, and eventually their compiled version lib.o. The example here is program1.cpp from chapter 6 and performs the matrix inversion.
// Simple matrix inversion example
#include <iostream>
#include <new>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cstring>
#include "lib.h"
using namespace std;
/* function declarations */
void inverse(double **, int);
void inverse(double **a, int n)
{
int i,j, *indx;
double d, *col, **y;
// allocate space in memory
indx = new int[n];
col = new double[n];
y = (double **) matrix(n, n, sizeof(double));
ludcmp(a, n, indx, &d); // LU decompose a[][]
printf("\n\nLU form of matrix of a[][]:\n");
for(i = 0; i < n; i++) {
  printf("\n");
  for(j = 0; j < n; j++) {
    printf(" a[%2d][%2d] = %12.4E",i, j, a[i][j]);
  }
}
// find inverse of a[][] by columns
for(j = 0; j < n; j++) {
// initialize right-side of linear equations
for(i = 0; i < n; i++) col[i] = 0.0;
col[j] = 1.0;
lubksb(a, n, indx, col);
// save result in y[][]
for(i = 0; i < n; i++) y[i][j] = col[i];
} //j-loop over columns
// return the inverse matrix in a[][]
for(i = 0; i < n; i++) {
  for(j = 0; j < n; j++) a[i][j] = y[i][j];
}
free_matrix((void **) y); // release local memory
delete [] col;
delete []indx;
} // End: function inverse()
For Fortran users:
PROGRAM matrix
USE constants
USE F90library
IMPLICIT NONE
! The definition of the matrix, using dynamic allocation
REAL(DP), ALLOCATABLE, DIMENSION(:,:) :: a, ainv, unity
! the determinant
REAL(DP) :: d
! The size of the matrix
INTEGER :: n
....
! Allocate now place in heap for a
ALLOCATE ( a(n,n), ainv(n,n), unity(n,n) )
For Fortran users:
WRITE(6,*) ' The matrix before inversion'
WRITE(6,'(3F12.6)') a
ainv=a
CALL matinv (ainv, n, d)
....
! get the unity matrix
unity=MATMUL(ainv,a)
WRITE(6,*) ' The unity matrix'
WRITE(6,'(3F12.6)') unity
! deallocate all arrays
DEALLOCATE (a, ainv, unity)
END PROGRAM matrix
#include <iostream>
#include "armadillo"
using namespace arma;
using namespace std;
int main()
{
mat A = randu<mat>(5,5);
vec b = randu<vec>(5);
A.print("A =");
b.print("b=");
// solve Ax = b
vec x = solve(A,b);
// print x
x.print("x=");
// find the LU decomposition of A; the form lu(L,U,P,A) also returns the permutation matrix P
mat L, U;
lu(L,U,A);
// print l
L.print(" L= ");
// print U
U.print(" U= ");
//Check that A = LU
(A-L*U).print("Test of LU decomposition");
return 0;
}
Direct solvers such as Gauss elimination and LU decomposition discussed in connection with project 1.
Iterative solvers such as the basic iterative solvers, Jacobi, Gauss-Seidel and successive over-relaxation. These methods are easy to parallelize, as we will see later. They are much used in the solution of partial differential equations.
Other iterative methods such as Krylov subspace methods with Generalized minimum residual (GMRES) and Conjugate gradient etc will not be discussed.
It is a simple method for solving
where $\mathbf{A}$ is a matrix and $\mathbf{x}$ and $\mathbf{b}$ are vectors. The vector $\mathbf{x}$ is the unknown.
It is an iterative scheme where we start with a guess for the unknown, and after $k+1$ iterations we have
$$\mathbf{x}^{(k+1)} = \mathbf{D}^{-1}\left(\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}^{(k)}\right),$$
with $\mathbf{A}=\mathbf{D}+\mathbf{U}+\mathbf{L}$ and $\mathbf{D}$ being a diagonal matrix, $\mathbf{U}$ an upper triangular matrix and $\mathbf{L}$ a lower triangular matrix.
If the matrix $\mathbf{A}$ is positive definite or diagonally dominant, one can show that this method will always converge to the exact solution.
We can demonstrate Jacobi's method by this $4\times 4$ matrix problem. We assume a guess for the vector elements $x_i^{(0)}$, a guess which represents our first iteration. The new values are obtained by substitution
which after $k+1$ iterations reads
or in an even more compact form as
can be rewritten as
to the following form
The procedure is generally continued until the changes made by an iteration are below some tolerance.
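A minimal sketch of such a Jacobi iteration for a dense matrix could look as follows; the matrix A, the vectors b and x, the dimension n, the maximum number of iterations and the tolerance are assumed to be set up beforehand:
#include <cmath>
// A is an n x n matrix, b the right-hand side and x an initial guess
double *xnew = new double[n];
for (int k = 0; k < maxIterations; k++) {
  for (int i = 0; i < n; i++) {
    double sum = 0.0;
    for (int j = 0; j < n; j++) {
      if (j != i) sum += A[i][j]*x[j];   // uses only the previous iterate
    }
    xnew[i] = (b[i] - sum)/A[i][i];      // requires A[i][i] != 0
  }
  // measure the change between two successive iterations
  double diff = 0.0;
  for (int i = 0; i < n; i++) {
    diff += fabs(xnew[i] - x[i]);
    x[i] = xnew[i];
  }
  if (diff < tolerance) break;           // converged
}
delete [] xnew;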
The convergence properties of the Jacobi method and the Gauss-Seidel method are dependent on the matrix $\mathbf{A}$. These methods converge when the matrix is symmetric positive-definite, or is strictly or irreducibly diagonally dominant. Both methods sometimes converge even if these conditions are not satisfied.
Given a square system of $n$ linear equations with unknown $\mathbf x$:
$$\mathbf{A}\mathbf{x} = \mathbf{b},$$
we decompose the matrix $\mathbf{A}$ into a diagonal component $\mathbf{D}$ and strictly lower and upper triangular components $\mathbf{L}$ and $\mathbf{U}$,
$$\mathbf{A} = \mathbf{D} + \mathbf{L} + \mathbf{U}.$$
The system of linear equations may be rewritten as:
$$(\mathbf{D}+\omega\mathbf{L})\mathbf{x} = \omega\mathbf{b} - \left[\omega\mathbf{U} + (\omega-1)\mathbf{D}\right]\mathbf{x},$$
for a constant $\omega > 1$, called the relaxation factor. The method of successive over-relaxation solves the left-hand side of this expression for $\mathbf{x}$ iteratively, using the previous value of $\mathbf{x}$ on the right-hand side. However, by taking advantage of the triangular form of $(D+\omega L)$, the elements of $x^{(k+1)}$ can be computed sequentially using forward substitution:
$$x_i^{(k+1)} = (1-\omega)x_i^{(k)} + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}x_j^{(k+1)} - \sum_{j>i} a_{ij}x_j^{(k)}\right), \qquad i=1,2,\dots,n.$$
The choice of relaxation factor is not necessarily easy, and depends upon the properties of the coefficient matrix. For symmetric, positive-definite matrices it can be proven that $0 < \omega < 2$ will lead to convergence, but we are generally interested in faster convergence rather than just convergence.
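A minimal sketch of one such sweep could read as follows, again assuming that A, b, x, n and the relaxation factor omega are already set up:
// one sweep of successive over-relaxation; omega = 1 gives the Gauss-Seidel method
for (int i = 0; i < n; i++) {
  double sum = 0.0;
  for (int j = 0; j < n; j++) {
    if (j != i) sum += A[i][j]*x[j];   // x[j] is already updated for j < i
  }
  x[i] = (1.0 - omega)*x[i] + omega*(b[i] - sum)/A[i][i];
}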
Cubic spline interpolation is among the most used methods for interpolating between data points where the arguments are organized as an ascending series. In the library program we supply such a function, based on the so-called cubic spline method to be described below.
A spline function consists of polynomial pieces defined on subintervals. The different subintervals are connected via various continuity relations.
Assume we have at our disposal $n+1$ points $x_0, x_1, \dots x_n$ arranged so that $x_0 < x_1 < x_2 < \dots < x_{n-1} < x_n$ (such points are called knots). A spline function $s$ of degree $k$ with $n+1$ knots is defined as follows
On every subinterval $[x_{i-1},x_i)$ s is a polynomial of degree $\le k$.
$s$ has $k-1$ continuous derivatives in the whole interval $[x_0,x_n]$.
As an example, consider a spline function of degree $k=1$ defined as follows
In this case the polynomial consists of a series of straight lines connected to each other at every endpoint. The number of continuous derivatives is then $k-1=0$, as expected when we deal with straight lines. Such a polynomial is quite easy to construct given $n+1$ points $x_0, x_1, \dots x_n$ and their corresponding function values.
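For instance, with $y_i=f(x_i)$ the tabulated function values, the linear pieces on each subinterval can be written explicitly as
$$s_i(x) = y_i + \frac{y_{i+1}-y_i}{x_{i+1}-x_i}\,(x-x_i), \qquad x\in [x_i,x_{i+1}],$$
which by construction matches the tabulated values at both end points of the subinterval.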
The most commonly used spline function is the one with $k=3$, the so-called cubic spline function. Assume that we have in addition to the $n+1$ knots a series of function values $y_0=f(x_0), y_1=f(x_1), \dots y_n=f(x_n)$. By definition, the polynomials $s_{i-1}$ and $s_i$ are then supposed to interpolate the same point $i$, that is
with $1 \le i \le n-1$. In total we have $n$ polynomials of the type
and
to be fulfilled. If we also assume that $s'$ and $s''$ are continuous, then
yields $n-1$ conditions. Similarly,
and
and setting up a straight line between $f_i$ and $f_{i+1}$ we have
and integrating twice one obtains
and set $x=x_i$. Defining $h_i=x_{i+1}-x_i$ we obtain finally the following expression
spline(double x[], double y[], int n, double yp1, double yp2, double y2[])
This function takes as input $x[0,..,n - 1]$ and $y[0,..,n - 1]$ containing a tabulation $y_i = f(x_i)$ with $x_0 < x_1 < .. < x_{n - 1}$ together with the first derivatives of $f(x)$ at $x_0$ and $x_{n-1}$, respectively. Then the function returns $y2[0,..,n-1]$ which contains the second derivatives of $f(x_i)$ at each point $x_i$. $n$ is the number of points. This function provides the cubic spline interpolation for all subintervals and is called only once.
Thereafter, if you wish to make various interpolations, you need to call the function
splint(double xa[], double ya[], double y2a[], int n, double x, double *y)
which takes as input the tabulated values $xa[0,..,n - 1]$ and $ya[0,..,n - 1]$ and the output y2a[0,..,n - 1] from spline. It returns the value $y$ corresponding to the point $x$.
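As a minimal usage sketch (the tabulated function, the number of points and the interpolation point are chosen here only as an illustration), the two calls could be combined as
#include <cmath>
#include "lib.h"
// tabulate f(x) = sin(pi x) on five points
const int n = 5;
double xa[n] = {0.0, 0.25, 0.5, 0.75, 1.0};
double ya[n], y2[n];
const double pi = acos(-1.0);
for (int i = 0; i < n; i++) ya[i] = sin(pi*xa[i]);
// set up the second derivatives once, supplying the first derivatives at the end points
spline(xa, ya, n, pi, -pi, y2);
// thereafter interpolate at any point inside the table
double yval;
splint(xa, ya, y2, n, 0.6, &yval);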