Final Project (70 points)

Name:

Due date: December 11th 2015

Even though running code is essential to obtain credit for each question, you will also need to answer some theoretical questions. In order to obtain full credit you will need to be clear and concise.

The aim of this project is to use what you have learned in class to solve Poisson's equation with homogeneous Dirichlet boundary conditions.

Let $\Omega = [0,1]^2$. We define Poisson's equation with homogeneous Dirichlet boundary conditions as

\begin{eqnarray} \triangle u = (\partial_{xx} + \partial_{yy}) u &=& f, \qquad \mbox{in } \Omega, \\ u|_{\partial \Omega} &=& 0, \qquad \mbox{on } \partial \Omega. \end{eqnarray}

In this final project, you will discretize the equation above using finite differences, assemble the corresponding linear system, and solve it using different methods, both iterative and direct. You will study the asymptotic complexity of each method and discuss its pros and cons.

Question 1 (40 points)

We will start by solving this problem in 1D, and then use a Kronecker product to extend the discretization to 2D.

In 1D the Poisson equation is given by

\begin{eqnarray} \frac{d^2}{dx^2} u(x) &=& f(x), \qquad \mbox{in } (0,1)\\ u(0) &=& 0, \\ u(1) &=& 0. \end{eqnarray}

We discretize the domain $[0,1]$ with $n+2$ equispaced points, such that $h = 1/(n+1)$ and we define $x_i = ih$. Using this notation we have that $x_0 = 0$ and $x_{n+1} = 1$.

We use finite differences to discretize the differential operator acting on $u$ at a point $x_i$ as

\begin{equation} \frac{d^2}{dx^2} u(x_i) \approx \frac{ u(x_{i+1}) - 2u(x_{i}) + u(x_{i-1}) }{h^2}. \end{equation}

We define the vectors $\mathbf{u}$ and $\mathbf{f}$ by $\mathbf{u}_i = u(x_i)$ and $\mathbf{f}_i = f(x_i)$, respectively. We can write the discrete system in the form: \begin{equation} \frac{ \mathbf{u}_{i+1} - 2\mathbf{u}_{i} + \mathbf{u}_{i-1} }{h^2} = \mathbf{f}_{i}, \qquad \mbox{for } i=1,..., n; \end{equation} plus the additional boundary conditions \begin{eqnarray} \mathbf{u}_{0} &=& 0, \\ \mathbf{u}_{n+1} &=& 0. \end{eqnarray}

This leads to the linear system given by \begin{equation} \mathbf{A} \mathbf{u} = \mathbf{f}, \end{equation} where \begin{equation} \mathbf{A} = \frac{1}{h^2}\left [ \begin{array}{cccccc} -2 & 1 & 0 & \dots & \dots & 0 \\ 1 & -2 & 1 & 0 & \dots & 0 \\ 0 & 1 & -2 & 1 & 0 & 0 \\ 0 & 0 & \ddots & \ddots & \ddots & 0 \\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & \dots & 0 & 1 & -2 \\ \end{array} \right ]. \end{equation}

Given that $\mathbf{u}_{0}$ and $\mathbf{u}_{n+1}$ are known and equal to zero, we used that knowledge to build $\mathbf{A}$: we only need to solve for the $n$ interior degrees of freedom.

Notation

We provide a summary of all the notation:

  • $h = 1/(n+1)$
  • $x$ is a vector such that $x_i = ih$ for $i=0,...,n+1$
  • $\mathbf{u}_i = u(x_i)$
  • $\mathbf{f}_i = f(x_i)$

Q1.a

Write a function that assembles the $n\times n$ matrix $\mathbf{A}$ described above.
The input of your function will be $n$, and the output will be a sparse matrix $\mathbf{A}$.

How much memory do we need to store $\mathbf{A}$, with respect to $n$? (you can use $\mathcal{O}$ notation)

Hints:

  • you can use spdiagm(v,d,n,m) to build diagonal sparse matrices. It creates an $n\times m$ matrix that has v on its d-th diagonal. Remember that $d=0$ is the main diagonal, $d>0$ represents the upper diagonals, and $d<0$ the lower diagonals.
  • You can build $\mathbf{A}$ as a sum of three diagonal sparse matrices.

In [ ]:
function stiffnessMatrix1D(n)
    
end
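For reference, here is a minimal sketch of one possible assembly, following the hints (it assumes the Julia 0.4-era spdiagm(v,d,n,m) signature described above). Since only three diagonals are stored, the sparse representation needs $\mathcal{O}(n)$ memory:

function stiffnessMatrix1D(n)
    h = 1/(n+1)
    # sum of three diagonal sparse matrices: -2 on the main diagonal
    # and 1 on the first super- and sub-diagonals, scaled by 1/h^2
    A = spdiagm(-2*ones(n), 0, n, n) +
        spdiagm(ones(n-1), 1, n, n) +
        spdiagm(ones(n-1), -1, n, n)
    return A/h^2
end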

Q1.b

Write a function that, given any function $f$, outputs the $n\times 1$ vector $\mathbf{f}$, where $\mathbf{f}_i = f(x_i)$.

The inputs of your function are

  • $f$ a function handle (a generic function)
  • $n$ the number of interior points in your domain.

In [3]:
function formRHS(f,n)

end


Out[3]:
formRHS (generic function with 1 method)
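A minimal sketch, assuming the grid convention above ($x_i = ih$ with $h = 1/(n+1)$) and sampling only the $n$ interior points:

function formRHS(f,n)
    h = 1/(n+1)
    # sample f at the interior grid points x_i = i*h, i = 1,...,n
    return [f(i*h) for i = 1:n]
end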

Q1.c

Write a function that implements a diagonal solve, i.e., one that solves the system $\mathbf{D} \mathbf{x} = \mathbf{b}$ when $\mathbf{D}$ is diagonal. In addition, you will implement backward and forward triangular solves.

Your functions need to work whether the inputs $\mathbf{D}$, $\mathbf{U}$, and $\mathbf{L}$ are sparse or dense matrices.


In [ ]:
# write your triangular solver here
function backwardSubstitution(U,b)
    # input:    U upper triangular matrix 
    #           b vector
    # output:   x = U\b
    
    
    return x
end
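One possible sketch: a row-oriented backward sweep. It is written generically, so it runs for both dense and sparse $\mathbf{U}$ (scalar indexing into a sparse matrix is slow, but correct):

function backwardSubstitution(U,b)
    # input:    U upper triangular (nonsingular) matrix
    #           b vector
    # output:   x = U\b
    n = length(b)
    x = zeros(b)
    for i = n:-1:1
        s = b[i]
        # subtract the contributions of the already-computed entries
        for j = i+1:n
            s -= U[i,j]*x[j]
        end
        x[i] = s/U[i,i]
    end
    return x
end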

In [ ]:
# write your triangular solver here
function forwardSubstitution(L,b)
    # input:    L lower triangular matrix 
    #           b vector
    # output:   x = L\b

    return x
end
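The forward solve mirrors the backward one, sweeping the rows from top to bottom:

function forwardSubstitution(L,b)
    # input:    L lower triangular (nonsingular) matrix
    #           b vector
    # output:   x = L\b
    n = length(b)
    x = zeros(b)
    for i = 1:n
        s = b[i]
        for j = 1:i-1
            s -= L[i,j]*x[j]
        end
        x[i] = s/L[i,i]
    end
    return x
end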

In [ ]:
# write your diagonal solver here
function diagonalSubstitution(D,b)
    # input:    D is a diagonal matrix 
    #           b vector
    # output:   x = D\b

    return x
end
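For the diagonal case an elementwise division suffices; diag extracts the diagonal of both dense and sparse matrices:

function diagonalSubstitution(D,b)
    # input:    D is a diagonal (nonsingular) matrix
    #           b vector
    # output:   x = D\b
    return b ./ diag(D)
end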

Q1.d

You will implement the Jacobi, Gauss-Seidel and SOR methods and you will use them to solve a generic system of the form $\mathbf{A}\mathbf{x} = \mathbf{b}$. Each function will take as inputs

  • $A$ a square matrix,
  • $b$ a vector

Each function will also have the following optional parameters

  • $Nmax$ maximum number of iterations, by default set to 30
  • $\epsilon$ the tolerance, by default set to 1e-6
  • history, a boolean that indicates whether to return all the successive approximations

Given the optimized nature of the sparse triangular solvers, you are encouraged to use the backslash operator to perform the system solves within the body of the functions.

The output of your function is the final approximation $x$. If history is true, it will instead output a matrix of size $n \times \mbox{number of iterations to converge}$ containing all the intermediate approximations.

If the method converges within the specified number of iterations, you will print (using println) the number of iterations needed to converge.

Your function will raise an error if it does not converge within $Nmax$ iterations.


In [1]:
# Implement Jacobi
function JacobiIt(A,b; Nmax = 30, ϵ=1e-6, history = false)
    # check for the sizes
    (n,m) = size(A)
    n != m && error("Dimension mismatch")
    # initial guess
    x0 = zeros(b)
    history && (xHist = x0)
    # D (this needs to be sparse)
    D = spdiagm(diag(A),0)
    # L+U
    M = triu(A,1) + tril(A,-1)
    for i = 1:Nmax
        x = D\(b - M*x0)
        history && (xHist = hcat(xHist,x))
        if norm(x-x0) < ϵ
            println("The Jacobi iteration converged in ", i, " iterations")
            history ? (return xHist[:,2:end]) : (return x)
        end
        x0 = x
    end
    error("Tolerance not achieved within the maximum number of iterations")
end


Out[1]:
JacobiIt (generic function with 1 method)

In [ ]:
# Implement Gauss-Seidel
function GaussSeidelIt(A,b; Nmax = 30, ϵ=1e-6, history = false)
    
end
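A minimal sketch mirroring the structure of JacobiIt above; it uses the splitting $A = (D+L) + U$, so each iteration is one sparse lower-triangular solve (done here with backslash, as encouraged above):

function GaussSeidelIt(A,b; Nmax = 30, ϵ=1e-6, history = false)
    (n,m) = size(A)
    n != m && error("Dimension mismatch")
    x0 = zeros(b)
    history && (xHist = x0)
    DL = tril(A)     # D + L
    U  = triu(A,1)
    for i = 1:Nmax
        x = DL\(b - U*x0)
        history && (xHist = hcat(xHist,x))
        if norm(x-x0) < ϵ
            println("The Gauss-Seidel iteration converged in ", i, " iterations")
            history ? (return xHist[:,2:end]) : (return x)
        end
        x0 = x
    end
    error("Tolerance not achieved within the maximum number of iterations")
end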

In [ ]:
# Implement SOR
function SORIt(A,b, omega; Nmax = 30, ϵ=1e-6, history = false)
    
end
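A sketch of SOR, using the standard splitting $(D + \omega L)\, x^{k+1} = \omega b - (\omega U + (\omega - 1) D)\, x^{k}$; note that for omega = 1 it reduces to Gauss-Seidel:

function SORIt(A,b, omega; Nmax = 30, ϵ=1e-6, history = false)
    (n,m) = size(A)
    n != m && error("Dimension mismatch")
    x0 = zeros(b)
    history && (xHist = x0)
    D = spdiagm(diag(A),0)
    L = tril(A,-1)
    U = triu(A,1)
    M = D + omega*L    # lower triangular, so backslash is a forward solve
    for i = 1:Nmax
        x = M\(omega*b - (omega*U + (omega-1)*D)*x0)
        history && (xHist = hcat(xHist,x))
        if norm(x-x0) < ϵ
            println("The SOR iteration converged in ", i, " iterations")
            history ? (return xHist[:,2:end]) : (return x)
        end
        x0 = x
    end
    error("Tolerance not achieved within the maximum number of iterations")
end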

Q1.e

You will use your iterative solvers to solve $\mathbf{A}\mathbf{u} = \mathbf{f}$ as specified at the beginning of this question, where $f(x) = \sin(2\pi x)$, for $n = 100, 200, 400$. Moreover, you will answer:

  • Do they all converge? Why? (you need to rigorously show why) (See ex 16 and 17 in ex set 7.3)
  • Which method converges faster and why? (see Theorem 7.26)
  • How does the number of iterations depend on $n$? (you may need to run your algorithm for bigger $n$)
  • What would be the complexity, with respect to $n$, of each method in this case? (remember that you are using sparse matrices to perform most of the operations, so you need to be careful when counting the number of operations) How does the choice of "omega" affect the asymptotic complexity of the SOR iteration?

Hint: you may need to compute the spectral radius of the iteration matrices in this case. If you compute the spectral radius to answer the question, you need to specify how the spectral radius is related to the speed of convergence. If you are still working with sparse matrices you can use eigs() to compute the leading eigenvalues. If you get strange results with eigs, you can increase the number of iterations using the optional parameter maxiter=1000, or you can use full() to convert a sparse matrix into a dense matrix.


In [ ]:


In [ ]:
# write your scripts in here
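One possible shape for the driver script, assuming the functions defined above; note that the default Nmax = 30 is far too small for these values of $n$, and even larger values may be needed as $n$ grows:

f(x) = sin(2*pi*x)
for n in [100, 200, 400]
    A = stiffnessMatrix1D(n)
    b = formRHS(f,n)
    # compare the iteration counts of the three methods
    uJ  = JacobiIt(A, b, Nmax = 1000000)
    uGS = GaussSeidelIt(A, b, Nmax = 1000000)
    uS  = SORIt(A, b, 1.5, Nmax = 1000000)
end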

Answer:

Q1.f

In this question you will use the built-in LU factorization algorithm (via UMFPACK) to solve the system.
What is the complexity of the algorithm, with respect to $n$, for solving $\mathbf{A}\mathbf{u} = \mathbf{f}$, and why? You need to provide evidence for your complexity claim.

Hint: you may need to take a look at the sparsity of the LU factors. When you perform the LU factorization of a matrix like $\mathbf{A}$, are you introducing any non-zeros (i.e., fill-in)?
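A sketch of one way to gather evidence, assuming stiffnessMatrix1D and formRHS from above; for a sparse input, lufact dispatches to UMFPACK:

f(x) = sin(2*pi*x)
for n in [100, 200, 400, 800]
    A = stiffnessMatrix1D(n)
    b = formRHS(f,n)
    # time the factorization and solve as n grows
    @time u = lufact(A)\b
end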


In [ ]:

Question 2 (30 points)

Now that you have defined everything for the 1D case, you can use the same notation to solve Poisson's equation in 2D.

In this case we need to discretize the Laplacian, in particular $\partial_{xx}$ and $\partial_{yy}$, using finite differences:

\begin{equation} \partial_{xx} u(x_{i,j}) \approx \frac{ u(x_{i+1,j}) - 2u(x_{i,j}) + u(x_{i-1,j}) }{h^2}, \end{equation}

and \begin{equation} \partial_{yy} u(x_{i,j}) \approx \frac{ u(x_{i,j+1}) - 2u(x_{i,j}) + u(x_{i,j-1}) }{h^2}. \end{equation}

Analogously to the 1D case, we define $\mathbf{x}_{i,j} = (ih,jh)$ and $\mathbf{u}_{i,j} = u(\mathbf{x}_{i,j})$.

In this case, the system to be solved is slightly more complicated, and it is given by \begin{equation} \frac{ \mathbf{u}_{i+1,j} + \mathbf{u}_{i,j+1} - 4\mathbf{u}_{i,j} + \mathbf{u}_{i-1,j} + \mathbf{u}_{i,j-1} }{h^2} = \mathbf{f}_{i,j}, \qquad \mbox{for } i,j=1,..., n; \end{equation} plus the boundary conditions \begin{eqnarray} \mathbf{u}_{i,0} &=& 0, \qquad \mbox{for } i = 0,..,n+1, \\ \mathbf{u}_{i,n+1} &=& 0, \qquad\mbox{for } i = 0,..,n+1, \\ \mathbf{u}_{0,j} &=& 0, \qquad \mbox{for } j = 0,..,n+1, \\ \mathbf{u}_{n+1,j} &=& 0, \qquad \mbox{for } j = 0,..,n+1; \end{eqnarray}

or in its compact form \begin{equation} \mathbf{S}\mathbf{u} = \mathbf{f}. \end{equation}

Q2.a

Using stiffnessMatrix1D and the kron function in Julia, write a function that generates the matrix $\mathbf{S}$ for the 2D discretization of the system.

Remember that \begin{equation} \triangle = \partial_{xx} \otimes I + I \otimes \partial_{yy}, \end{equation} where $\otimes$ is the Kronecker product.

Or in this case \begin{equation} S = A \otimes I_n + I_n \otimes A \end{equation} where $I_n$ is an identity matrix of size $n$, and $A$ is the stiffness matrix in 1D.

Hint: use ? to read the documentation of the function kron.


In [4]:
function stiffnessMatrix2D(n)
    
end


Out[4]:
stiffnessMatrix2D (generic function with 1 method)
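A minimal sketch using kron and speye, transcribing $S = A \otimes I_n + I_n \otimes A$ directly (it assumes stiffnessMatrix1D from Q1.a):

function stiffnessMatrix2D(n)
    A  = stiffnessMatrix1D(n)
    Id = speye(n)
    # S acts on the vectorized n-by-n grid, so S is N-by-N with N = n^2
    return kron(A, Id) + kron(Id, A)
end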

Q2.b

Write a function that forms the right-hand side for the 2D case. In this case the ordering is important, so you will form an $n \times n$ matrix $\mathbf{F}$ such that $\mathbf{F}_{i,j} = f(\mathbf{x}_{i,j}) = f(ih,jh)$. You will then return the vectorized form of $\mathbf{F}$ using [:].

The inputs of your function will be

  • n, the number of interior points per dimension (in this case $N = n^2$)
  • f, a function handle that accepts two inputs $f(x_i,y_j)$

In [ ]:
function formRHS2D(f,n)
    
end
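A minimal sketch; the two-index comprehension fills $\mathbf{F}_{i,j} = f(ih,jh)$ at the interior points, and [:] vectorizes the matrix column by column:

function formRHS2D(f,n)
    h = 1/(n+1)
    F = [f(i*h, j*h) for i = 1:n, j = 1:n]
    return F[:]
end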

Q2.c

You will use your iterative solvers to solve $\mathbf{S}\mathbf{u} = \mathbf{f}$, where $f(x,y) = \sin(2\pi x)\sin(2\pi y)$, for $n = 50, 100, 200$, and you will answer:

  • Do they all converge? Why? (you need to rigorously show why) (see if you can extend what you showed in Q1.e)
  • Which method converges faster and why?
  • How does the number of iterations depend on $N$?
  • What would be the complexity, with respect to $N$, of each method in this case?

Hint: you may need to compute the spectral radius of the iteration matrices in this case. If you compute the spectral radius to answer the question, you need to specify how the spectral radius is related to the speed of convergence.
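The driver is analogous to the one in Q1.e, now with the 2D assembly (a sketch assuming the functions above; the systems are of size $N = n^2$, so expect much longer runs):

f(x,y) = sin(2*pi*x)*sin(2*pi*y)
for n in [50, 100, 200]
    S = stiffnessMatrix2D(n)
    b = formRHS2D(f,n)
    u = GaussSeidelIt(S, b, Nmax = 1000000)
end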


In [ ]:

Q2.d

In this question you will use the built-in LU factorization algorithm (via UMFPACK) to solve your system.
What is the complexity of the algorithm, with respect to $n$, for solving $\mathbf{S}\mathbf{u} = \mathbf{f}$, and why? You need to provide evidence for your complexity claim.

Hint: you may need to take a look at the sparsity of the LU factors. When you perform the LU factorization of a matrix like $\mathbf{S}$, are you introducing any non-zeros (i.e., fill-in)?
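A sketch for inspecting fill-in, assuming stiffnessMatrix2D from Q2.a; the factors of the UMFPACK factorization can be extracted with F[:L] and F[:U]:

S = stiffnessMatrix2D(50)
F = lufact(S)
# compare the number of nonzeros before and after factorization
println("nnz(S)          = ", nnz(S))
println("nnz(L) + nnz(U) = ", nnz(F[:L]) + nnz(F[:U]))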


In [ ]: