Problem 1

30 pts

  • Prove that $\mathrm{vec}(AXB) = (B^\top \otimes A)\, \mathrm{vec}(X)$, where $\mathrm{vec}(X)$ is a columnwise reshape of a matrix into a long vector. What changes if the reshape is rowwise? Note: to make a columnwise reshape in Python use np.reshape(X, -1, order='F'), where 'F' stands for the Fortran (column-major) ordering.
  • Let $A$ and $B$ be dense $n\times n$ matrices. Suggest an algorithm that computes the matrix-vector product $(A\otimes B)x$ in fewer than $\mathcal{O}(n^4)$ operations.
  • Let matrices $A$ and $B$ have eigendecompositions $A = S_A\Lambda_A S_A^{-1}$ and $B = S_B\Lambda_B S^{-1}_B$. Find eigenvectors and eigenvalues of the matrix $A\otimes I + I \otimes B$.
  • Let $x_{k+1} = x_{k} - \tau_k (Ax_k - f)$, where $x_0$ is the zero vector. Does $x_{k}$ belong to the Krylov subspace $\mathcal{K}_{k}(A,f)$? Can CG be represented in this form, i.e., does there exist $\tau_k$ such that the $x_{k+1}$ generated by the sequence above and by the CG method coincide for every $k$?
  • Let $A = \mathrm{diag}\left(\frac{1}{1000},\frac{1}{999},\dots, \frac{1}{2}, 1, 1000 \right)$. Estimate analytically the number of iterations required to solve a linear system with $A$ to relative accuracy $10^{-4}$ using
    • Richardson iteration with the optimal choice of parameter
    • Chebyshev iteration
    • Conjugate gradient method
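As a numerical sanity check (not a proof), the columnwise-vec identity from the first bullet, and the fast Kronecker matvec it suggests, can be verified directly; the matrix sizes below are arbitrary small placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
vec = lambda M: np.reshape(M, -1, order='F')      # columnwise (Fortran) reshape

# Check vec(A X B) = (B^T kron A) vec(X) on random rectangular matrices.
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))                      # True

# The same identity yields an O(n^3) matvec with A kron B:
# (A kron B) vec(X) = vec(B X A^T), never forming the n^2-by-n^2 matrix.
n = 6
A2, B2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
x = rng.standard_normal(n * n)
fast = vec(B2 @ np.reshape(x, (n, n), order='F') @ A2.T)
print(np.allclose(fast, np.kron(A2, B2) @ x))     # True
```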

In [ ]:

Problem 2

40 pts

Sparse systems

Consider the 2D Poisson equation in $\Omega = [0,1]^2$ $$ \Delta u \equiv \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x,y), \quad (x,y)\in \Omega $$ with zero Dirichlet boundary conditions $$ u\big|_{\partial \Omega} = 0, $$ where $f(x,y)$ is known and $u(x,y)$ is unknown.

To find the solution of the Poisson equation we use the finite difference method. A standard second-order finite difference discretization on the uniform grid $(x_i, y_j) = (ih, jh)$, $i,j = 0,\dots, N$, $h = \frac{1}{N}$, leads to the following system of equations: $$ \begin{split} &\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h^2} = f(ih, jh) \\ &u_{0,j} = u_{i,0} = u_{N,j} = u_{i,N} = 0, \quad i,j = 0,\dots,N \end{split} $$

  • Write the system above as a matrix equation $BU_h + U_h C = F_h$ with matrices $U_h = \begin{bmatrix}u_{1,1} & \dots & u_{1,N-1} \\ \vdots & \ddots & \vdots \\ u_{N-1,1} & \dots & u_{N-1,N-1} \end{bmatrix}$, $F_h = \begin{bmatrix}f_{1,1} & \dots & f_{1,N-1} \\ \vdots & \ddots & \vdots \\ f_{N-1,1} & \dots & f_{N-1,N-1} \end{bmatrix}$. What are matrices $B$ and $C$? Show that they are negative definite.
  • Using Kronecker product properties rewrite $ BU_h + U_h C = F_h$ as $A_h \mathrm{vec}(U_h) = \mathrm{vec}(F_h)$, where $\mathrm{vec}(\cdot)$ is a columnwise reshape. What is matrix $A_h$?
  • Choose $f(x,y)$ such that $u(x, y) = \sin\pi x \sin \pi y$ is a solution (just substitute $u$ into the Poisson equation and find $f$, then pretend you do not know the solution $u$). Solve the system with the found $f$ using scipy.sparse.linalg.spsolve, which is a direct sparse solver. Use the pandas library to print a table containing $N$, the runtime, and the relative error between the analytic and computed solutions for $N=128,256,512$. The matrices $B$, $C$, and $A_h$ should be assembled in the CSR format using functions from the scipy.sparse package (scipy.sparse.kron and scipy.sparse.spdiags will be helpful). Do not use dense matrices! Use only sparse arithmetic.
  • Which of the iterative methods CG, MINRES, GMRES, and BiCGStab is the method of choice here? Explain why.
  • Run the method from the previous task with and without the ILU(0) preconditioner for $N=256$. Plot the relative error versus the iteration number for both cases on one plot.
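A minimal sketch of the sparse machinery named above (scipy.sparse.spdiags, scipy.sparse.kron, CSR assembly, an incomplete-LU preconditioner wrapped as a LinearOperator). The grid size $N$ and the right-hand side are placeholders, and scipy's spilu is a threshold ILU used here as a stand-in for ILU(0):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 32                                   # placeholder; the task uses N = 256
h = 1.0 / N
e = np.ones(N - 1)

# 1D second-difference matrix (stencil (1, -2, 1)/h^2) in CSR format.
L1 = sp.spdiags([e, -2 * e, e], [-1, 0, 1], N - 1, N - 1, format='csr') / h**2
I = sp.identity(N - 1, format='csr')
A_h = sp.kron(L1, I, format='csr') + sp.kron(I, L1, format='csr')

# -A_h is symmetric positive definite, so CG is applied to -A_h u = -f.
rhs = np.ones((N - 1) ** 2)              # placeholder right-hand side
ilu = spla.spilu((-A_h).tocsc())         # threshold ILU, a stand-in for ILU(0)
M = spla.LinearOperator(A_h.shape, matvec=ilu.solve)
u, info = spla.cg(-A_h, -rhs, M=M)
print(info)                              # 0 signals convergence
```

Only sparse formats are touched throughout; the dense $n^2 \times n^2$ matrix is never formed.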

Eigenvalues

  • Find the $3$ smallest eigenvalues of the matrices $B$ and $C$ using scipy.sparse.linalg.eigs (implicitly restarted Arnoldi method) or, if $B$ and $C$ are Hermitian, scipy.sparse.linalg.eigsh (implicitly restarted Lanczos method). Print them.
  • What are the $3$ smallest distinct eigenvalues of $A_h$ in terms of the eigenvalues of $B$ and $C$? What are their multiplicities (explain your answer)? Find these eigenvalues numerically using scipy.sparse.linalg.eigsh and compare them with the values obtained from the eigenvalues of $B$ and $C$.
  • Bonus: find the eigenvalues of the matrix $A_h$ analytically and prove that $\text{cond}( A_h )= \mathcal{O}\left(\frac{1}{h^2}\right)$.
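A usage sketch for scipy.sparse.linalg.eigsh: asking for which='SM' directly tends to converge slowly, so the eigenvalues nearest zero are usually targeted with shift-invert (sigma=0, which='LM'). The matrix below is a small 1D Laplacian placeholder with a known spectrum, not the assignment's $B$ or $C$:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 64
h = 1.0 / N
e = np.ones(N - 1)
L1 = sp.spdiags([e, -2 * e, e], [-1, 0, 1], N - 1, N - 1, format='csr') / h**2

# -L1 is symmetric positive definite; target its smallest eigenvalues with
# shift-invert around sigma = 0 (which='LM' then selects those nearest zero).
vals = spla.eigsh((-L1).tocsc(), k=3, sigma=0, which='LM',
                  return_eigenvectors=False)
vals = np.sort(vals)

# Known spectrum of the 1D Dirichlet Laplacian, for comparison.
k = np.arange(1, 4)
exact = (4 / h**2) * np.sin(np.pi * k * h / 2) ** 2
print(np.allclose(vals, exact))           # True
```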

In [ ]:

Problem 3

30 pts

Structured matrices

  • Find the convolution of the $n\times n$ Lena image, $n=512$, with the following filter $$ T_{i_1j_1,i_2j_2} \equiv T_{i_1-j_1,i_2-j_2} = \frac{\alpha}{\pi} e^{-\alpha \left[(i_1 - j_1)^2 + (i_2 - j_2)^2 \right]}, \quad i_1,j_1, i_2, j_2 = 1,\dots, n, \quad \alpha = \frac{1}{100} $$ using the FFT. What is the complexity of this operation? Plot the result as an image.
  • Write a matvec function that multiplies $T$ by a given vector $x$. Use scipy.sparse.linalg.LinearOperator to create an object with a .dot() method (this object will be used later in the iterative process). Note that the .dot() input and output must be 1D vectors, so do not forget to reshape.
  • Run an appropriate Krylov method with the obtained LinearOperator and try to reconstruct Lena from the right-hand side computed in the first bullet (the smoothed Lena). On one figure, plot the relative error versus the number of iterations for $\alpha=\frac{1}{50},\frac{1}{100},\frac{1}{200}$ with the corresponding right-hand sides. Comment on the results.
  • Bonus: let $x\in\mathbb{R}^{n^2}$ be such that its $n\times n$ reshape has rank $r$. Let also $T_{i_1j_1,i_2j_2} \equiv \widetilde T_{i_1-j_1,i_2-j_2}$ be such that the $n\times n$ matrix $\widetilde T_{i_1,i_2}$ has rank $R$. Propose an algorithm that computes $Tx$ in $\mathcal{O}((r+R)n\log n + rRn)$ operations.
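A sketch of the FFT convolution and its LinearOperator wrapper, assuming circulant embedding of the separable Gaussian kernel into a $2n$-periodic grid; a random placeholder image and a reduced $n$ stand in for Lena and $n = 512$:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

n = 64                # placeholder size; the assignment uses n = 512
alpha = 1.0 / 100

# 1D factor of the separable kernel (alpha/pi) exp(-alpha (di^2 + dj^2)),
# tabulated on a 2n-periodic difference grid (circulant embedding).
d = np.arange(2 * n)
d = np.minimum(d, 2 * n - d)                      # |difference|, wrapped
t1 = np.sqrt(alpha / np.pi) * np.exp(-alpha * d.astype(float) ** 2)
K_hat = np.fft.fft2(np.outer(t1, t1))             # kernel spectrum, computed once

def matvec(x):
    """Apply T to a flattened n-by-n image via zero-padded 2D FFT convolution."""
    Xp = np.zeros((2 * n, 2 * n))
    Xp[:n, :n] = x.reshape(n, n)
    Y = np.fft.ifft2(np.fft.fft2(Xp) * K_hat).real
    return Y[:n, :n].reshape(-1)                  # crop back and flatten

T = LinearOperator((n * n, n * n), matvec=matvec, dtype=float)
blurred = T @ np.random.default_rng(0).random(n * n)   # placeholder "image"
```

Each application costs a constant number of $(2n)\times(2n)$ FFTs, i.e. $\mathcal{O}(n^2 \log n)$ operations.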

In [ ]: