This is functionally similar to the the other notebook. All the operations here have been vectorized. This results in much much faster code, but is also much unreadable. The vectorization also necessitated the replacement of the Gauss-Seidel smoother with under-relaxed Jacobi. That change has had some effect since GS is "twice as better" as Jacobi.

The Making of a Preconditioner ---Vectorized Version

This is a demonstration of a multigrid preconditioned krylov solver in python3. The code and more examples are present on github here. The problem solved is a Poisson equation on a rectangular domain with homogenous dirichlet boundary conditions. Finite difference with cell-centered discretization is used to get a second order accurate solution, that is further improved to 4th order using deferred correction.

The first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver.

1. Multigrid algorithm

We need some terminology before going further.

  • Approximation:
  • Residual:
  • Exact solution (of the discrete problem)
  • Correction

This is a geometric multigrid algorithm, where a series of nested grids are used. There are four parts to a multigrid algorithm

  • Smoothing Operator (a.k.a Relaxation)
  • Restriction Operator
  • Interpolation Operator (a.k.a Prolongation Operator)
  • Bottom solver

We will define each of these in sequence. These operators act of different quantities that are stored at the cell center. We will get to exactly what later on. To begin import numpy.


In [98]:
import numpy as np

1.1 Smoothing operator

This can be a certain number of Jacobi or a Gauss-Seidel iterations. Below is defined smoother that does under-relaxed Jacobi sweeps and returns the result along with the residual.


In [99]:
def Jacrelax(nx,ny,u,f,iters=1):
  '''
  under-relaxed Jacobi iteration
  '''
  dx=1.0/nx; dy=1.0/ny
  Ax=1.0/dx**2; Ay=1.0/dy**2
  Ap=1.0/(2.0*(Ax+Ay))

  #Dirichlet BC
  u[ 0,:] = -u[ 1,:]
  u[-1,:] = -u[-2,:]
  u[:, 0] = -u[:, 1]
  u[:,-1] = -u[:,-2]

  for it in range(iters):
    u[1:nx+1,1:ny+1] = 0.8*Ap*(Ax*(u[2:nx+2,1:ny+1] + u[0:nx,1:ny+1])
                             + Ay*(u[1:nx+1,2:ny+2] + u[1:nx+1,0:ny])
                             - f[1:nx+1,1:ny+1])+0.2*u[1:nx+1,1:ny+1]
    #Dirichlet BC
    u[ 0,:] = -u[ 1,:]
    u[-1,:] = -u[-2,:]
    u[:, 0] = -u[:, 1]
    u[:,-1] = -u[:,-2]

  res=np.zeros([nx+2,ny+2])
  res[1:nx+1,1:ny+1]=f[1:nx+1,1:ny+1]-(( Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
                                       + Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
                                       - 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1]))
  return u,res

1.2 Interpolation Operator

This operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose. 'v' is on a coarse grid and we want to interpolate it on a fine grid and store it in v_f.


In [100]:
def prolong(nx,ny,v):
  '''
  interpolate 'v' to the fine grid
  '''
  v_f=np.zeros([2*nx+2,2*ny+2])
  v_f[1:2*nx:2  ,1:2*ny:2  ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx  ,1:ny+1]+v[1:nx+1,0:ny]  )+0.0625*v[0:nx  ,0:ny  ]
  v_f[2:2*nx+1:2,1:2*ny:2  ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,0:ny]  )+0.0625*v[2:nx+2,0:ny  ]
  v_f[1:2*nx:2  ,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx  ,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[0:nx  ,2:ny+2]
  v_f[2:2*nx+1:2,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[2:nx+2,2:ny+2]
  return v_f

1.3 Restriction

This is exactly the opposite of the interpolation. It takes values from the find grid and transfers them onto the coarse grid. It is kind of an averaging process. This is fundamentally different from interpolation. Each coarse grid point is surrounded by four fine grid points. So quite simply we take the value of the coarse point to be the average of 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity


In [101]:
def restrict(nx,ny,v):
  '''
  restrict 'v' to the coarser grid
  '''
  v_c=np.zeros([nx+2,ny+2])
  v_c[1:nx+1,1:ny+1]=0.25*(v[1:2*nx:2,1:2*ny:2]+v[1:2*nx:2,2:2*ny+1:2]+v[2:2*nx+1:2,1:2*ny:2]+v[2:2*nx+1:2,2:2*ny+1:2])
  return v_c

1.4 Bottom Solver

Note that we have looped over the coarse grid in both the cases above. It is easier to access the variables this way. The last part is the Bottom Solver. This must be something that gives us the exact/converged solution to what ever we feed it. What we feed to the bottom solver is the problem at the coarsest level. This has generally has very few points (e.g 2x2=4 in our case) and can be solved exactly by the smoother itself with few iterations. That is what we do here but, any other direct method can also be used. 50 Iterations are used here. If we coarsify to just one point, then just one iteration will solve it exactly.

1.5 V-cycle

Now that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is self explanatory. It is a recursive function ,i.e., it calls itself. It takes as input an initial guess 'u', the rhs 'f', the number of multigrid levels 'num_levels' among other things. At each level the V cycle calls another V-cycle. At the lowest level the solving is exact.


In [102]:
def V_cycle(nx,ny,num_levels,u,f,level=1):

  if(level==num_levels):#bottom solve
    u,res=Jacrelax(nx,ny,u,f,iters=50)
    return u,res

  #Step 1: Relax Au=f on this grid
  u,res=Jacrelax(nx,ny,u,f,iters=1)

  #Step 2: Restrict residual to coarse grid
  res_c=restrict(nx//2,ny//2,res)

  #Step 3:Solve A e_c=res_c on the coarse grid. (Recursively)
  e_c=np.zeros_like(res_c)
  e_c,res_c=V_cycle(nx//2,ny//2,num_levels,e_c,res_c,level+1)

  #Step 4: Interpolate(prolong) e_c to fine grid and add to u
  u+=prolong(nx//2,ny//2,e_c)
  
  #Step 5: Relax Au=f on this grid
  u,res=Jacrelax(nx,ny,u,f,iters=1)
  return u,res

Thats it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up a rhs for a problem with homogenous dirichlet BC on the unit square.


In [103]:
#analytical solution
def Uann(x,y):
   return (x**3-x)*(y**3-y)
#RHS corresponding to above
def source(x,y):
  return 6*x*y*(x**2+ y**2 - 2)

Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power of two function of the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy. The coarsest problem is going to have a 2-by-2 grid.


In [104]:
#input
max_cycles = 30
nlevels    = 6  
NX         = 2*2**(nlevels-1)
NY         = 2*2**(nlevels-1)
tol        = 1e-15

In [105]:
#the grid has one layer of ghost cellss
uann=np.zeros([NX+2,NY+2])#analytical solution
u   =np.zeros([NX+2,NY+2])#approximation
f   =np.zeros([NX+2,NY+2])#RHS

#calcualte the RHS and exact solution
DX=1.0/NX
DY=1.0/NY

xc=np.linspace(0.5*DX,1-0.5*DX,NX)
yc=np.linspace(0.5*DY,1-0.5*DY,NY)
XX,YY=np.meshgrid(xc,yc,indexing='ij')

uann[1:NX+1,1:NY+1]=Uann(XX,YY)
f[1:NX+1,1:NY+1]   =source(XX,YY)

Now we can call the solver


In [106]:
print('mgd2d.py solver:')
print('NX:',NX,', NY:',NY,', tol:',tol,'levels: ',nlevels)
for it in range(1,max_cycles+1):
  u,res=V_cycle(NX,NY,nlevels,u,f)
  rtol=np.max(np.max(np.abs(res)))
  if(rtol<tol):
    break
  error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
  print('  cycle: ',it,', L_inf(res.)= ',rtol,',L_inf(true error): ',np.max(np.max(np.abs(error))))

error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))


mgd2d.py solver:
NX: 64 , NY: 64 , tol: 1e-15 levels:  6
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.055859265039 0.0942025534415
-0.0339063119777 0.10539690032
-0.0179541634168 0.106242413489
-0.00922176485728 0.106876040375
-0.00467581448341 0.106954482548
  cycle:  1 , L_inf(res.)=  0.891977476345 ,L_inf(true error):  0.0411816115789
-0.00464994629977 0.106992648642
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.014286860452 0.0256719978527
-0.00823173483674 0.0291200941797
-0.00415124287141 0.0292203893998
-0.00205497881698 0.0294948267646
-0.00566035627608 0.136466068655
  cycle:  2 , L_inf(res.)=  0.257779410083 ,L_inf(true error):  0.0116506189761
-0.00565568937853 0.136477230591
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.00384601555839 0.0071683668273
-0.00215890746464 0.00821431432214
-0.00106987854255 0.00824530867788
-0.000528008278904 0.00831910567277
-0.00591647018214 0.144784400505
  cycle:  3 , L_inf(res.)=  0.0735673054651 ,L_inf(true error):  0.00330803624272
-0.00591583857471 0.144787135101
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.00105486667268 0.00202168925104
-0.000580809117689 0.00233699972319
-0.000284379784854 0.00235969181692
-0.00014026181722 0.00237646948722
-0.00598542995015 0.147160971807
  cycle:  4 , L_inf(res.)=  0.0208583793969 ,L_inf(true error):  0.000930219571437
-0.00598537320539 0.147161759972
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.000293164160102 0.000574345886137
-0.00015907649552 0.000669031293282
-7.72453387279e-05 0.000679203948744
-3.81199541727e-05 0.000682962993154
-0.006004351328 0.147841855119
  cycle:  5 , L_inf(res.)=  0.00588946434527 ,L_inf(true error):  0.000247798905275
-0.00600435527906 0.147842085711
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-8.22811451446e-05 0.00016409915236
-4.4147596327e-05 0.000192463083358
-2.13202648709e-05 0.000196383349896
-1.05357003484e-05 0.000197201195914
-0.00600961348044 0.148037881463
  cycle:  6 , L_inf(res.)=  0.00171344338378 ,L_inf(true error):  6.7168536506e-05
-0.00600961761373 0.148037949807
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.32685067969e-05 4.71037757452e-05
-1.23730195738e-05 5.55881476304e-05
-5.95449691649e-06 5.69944456368e-05
-2.94807481849e-06 5.71628430068e-05
-0.00601109156916 0.14809454362
  cycle:  7 , L_inf(res.)=  0.000523285391864 ,L_inf(true error):  6.86431869779e-05
-0.00601109311751 0.148094564101
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-6.61970337182e-06 1.35744581284e-05
-3.49374824379e-06 1.61105291962e-05
-1.67778317099e-06 1.65951176623e-05
-8.32525182507e-07 1.66731832337e-05
-0.0060115098979 0.148110978472
  cycle:  8 , L_inf(res.)=  0.000161594333349 ,L_inf(true error):  6.90600429546e-05
-0.00601151035852 0.148110984667
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.89260395712e-06 3.92569838494e-06
-9.92333453181e-07 4.68364919752e-06
-4.75958828462e-07 4.84657470149e-06
-2.36749005283e-07 4.87941211232e-06
-0.00601162899657 0.148115760208
  cycle:  9 , L_inf(res.)=  5.09588276145e-05 ,L_inf(true error):  6.91786495929e-05
-0.00601162911815 0.148115762096
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-5.43424331763e-07 1.1390059324e-06
-2.83207630714e-07 1.36560841986e-06
-1.35749566405e-07 1.41953423258e-06
-6.76971847838e-08 1.43214291767e-06
-0.00601166306851 0.148117155521
  cycle:  10 , L_inf(res.)=  1.62977007676e-05 ,L_inf(true error):  6.92125721805e-05
-0.00601166309772 0.1481171561
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.56636554386e-07 3.31506284532e-07
-8.11563808622e-08 3.9930337245e-07
-3.88899320461e-08 4.16972895522e-07
-1.94458902709e-08 4.21585933102e-07
-0.0060116728562 0.148117563831
  cycle:  11 , L_inf(res.)=  5.2736240832e-06 ,L_inf(true error):  6.92223164925e-05
-0.00601167286249 0.14811756401
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-4.53122901836e-08 9.67826356328e-08
-2.33404135148e-08 1.17089417157e-07
-1.11857686395e-08 1.22842614393e-07
-5.61276049402e-09 1.24483271065e-07
-0.00601167567823 0.148117683655
  cycle:  12 , L_inf(res.)=  1.72635327544e-06 ,L_inf(true error):  6.92251261566e-05
-0.00601167567935 0.148117683711
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.31540233088e-08 2.83438183515e-08
-6.73497174927e-09 3.44350381829e-08
-3.23134262559e-09 3.63014758366e-08
-1.62593400228e-09 3.68749691819e-08
-0.00601167649466 0.148117718922
  cycle:  13 , L_inf(res.)=  5.71547388972e-07 ,L_inf(true error):  6.92259390797e-05
-0.00601167649478 0.148117718939
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-3.83191001114e-09 8.32753410327e-09
-1.94953640899e-09 1.01579187592e-08
-9.3654412923e-10 1.07624245317e-08
-4.72636076333e-10 1.09607648554e-08
-0.00601167673164 0.148117729332
  cycle:  14 , L_inf(res.)=  1.91270373762e-07 ,L_inf(true error):  6.92261750469e-05
-0.00601167673161 0.148117729338
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.1202525199e-09 2.4549158983e-09
-5.66059617271e-10 3.00609378374e-09
-2.72309529835e-10 3.20185668672e-09
-1.37854980693e-10 3.27001911221e-09
-0.00601167680065 0.148117732416
  cycle:  15 , L_inf(res.)=  6.46573425911e-08 ,L_inf(true error):  6.92262437573e-05
-0.00601167680062 0.148117732417
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-3.28716302471e-10 7.26281963436e-10
-1.64864370564e-10 8.92656121592e-10
-7.94303395343e-11 9.56118962659e-10
-4.0346464195e-11 9.79464730663e-10
-0.0060116768208 0.148117733332
  cycle:  16 , L_inf(res.)=  2.20597939915e-08 ,L_inf(true error):  6.92262638279e-05
-0.00601167682079 0.148117733332
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-9.68319038243e-11 2.1568826598e-10
-4.81669151558e-11 2.6604448629e-10
-2.32449259947e-11 2.86659154555e-10
-1.18501417256e-11 2.9464463746e-10
-0.00601167682671 0.148117733605
  cycle:  17 , L_inf(res.)=  7.58882379159e-09 ,L_inf(true error):  6.92262697094e-05
-0.00601167682671 0.148117733605
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.86435097459e-11 6.43174402395e-11
-1.41182655907e-11 7.96045312089e-11
-6.82573035846e-12 8.63191770904e-11
-3.49348230884e-12 8.90509728155e-11
-0.00601167682845 0.148117733687
  cycle:  18 , L_inf(res.)=  2.62980393018e-09 ,L_inf(true error):  6.92262714386e-05
-0.00601167682845 0.148117733687
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-8.51098154814e-12 1.92640062197e-11
-4.15231658413e-12 2.39201408957e-11
-2.01152854838e-12 2.61145569505e-11
-1.0339713364e-12 2.70502759908e-11
-0.00601167682896 0.148117733711
  cycle:  19 , L_inf(res.)=  9.17680154089e-10 ,L_inf(true error):  6.92262719488e-05
-0.00601167682896 0.148117733711
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.5411380353e-12 5.79734174911e-12
-1.22560198379e-12 7.22050449084e-12
-5.95025675066e-13 7.94090488254e-12
-3.07341362772e-13 8.26137475268e-12
-0.00601167682912 0.148117733719
  cycle:  20 , L_inf(res.)=  3.21961124428e-10 ,L_inf(true error):  6.92262720999e-05
-0.00601167682912 0.148117733719
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-7.62791281632e-13 1.75387136183e-12
-3.63184926886e-13 2.19057560407e-12
-1.76760444858e-13 2.43220565445e-12
-9.19680258211e-14 2.53815467806e-12
-0.00601167682916 0.148117733721
  cycle:  21 , L_inf(res.)=  1.13232090371e-10 ,L_inf(true error):  6.92262721448e-05
-0.00601167682916 0.148117733721
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.30179938261e-13 5.33207561676e-13
-1.08043373777e-13 6.677146324e-13
-5.29569423367e-14 7.48947922008e-13
-2.76602920742e-14 7.84288440171e-13
-0.00601167682918 0.148117733722
  cycle:  22 , L_inf(res.)=  4.0017766878e-11 ,L_inf(true error):  6.92262721582e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-6.98843583907e-14 1.63105879338e-13
-3.22754676056e-14 2.04763805654e-13
-1.60515112333e-14 2.32023895271e-13
-8.36767267413e-15 2.44116099728e-13
-0.00601167682918 0.148117733722
  cycle:  23 , L_inf(res.)=  1.40971678775e-11 ,L_inf(true error):  6.92262721622e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.14159421815e-14 5.0279178253e-14
-9.74783354145e-15 6.32114405061e-14
-4.926632139e-15 7.24496308889e-14
-2.57939917723e-15 7.65714240972e-14
-0.00601167682918 0.148117733722
  cycle:  24 , L_inf(res.)=  5.45696821064e-12 ,L_inf(true error):  6.92262721634e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-6.63973397928e-15 1.56867336286e-14
-2.96779707869e-15 1.97433405394e-14
-1.50204371963e-15 2.28500314265e-14
-7.90865643647e-16 2.43215307824e-14
-0.00601167682918 0.148117733722
  cycle:  25 , L_inf(res.)=  1.81898940355e-12 ,L_inf(true error):  6.92262721638e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.1465275699e-15 4.90756311073e-15
-9.57049637689e-16 6.19558203597e-15
-4.85442135731e-16 7.22405406426e-15
-2.53938075393e-16 7.7006572632e-15
-0.00601167682918 0.148117733722
  cycle:  26 , L_inf(res.)=  9.09494701773e-13 ,L_inf(true error):  6.92262721639e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-6.59201772464e-16 1.46438201045e-15
-2.87693665001e-16 2.00565306692e-15
-1.63526969326e-16 2.41513231809e-15
-8.46924943119e-17 2.61972842679e-15
-0.00601167682918 0.148117733722
  cycle:  27 , L_inf(res.)=  9.09494701773e-13 ,L_inf(true error):  6.9226272164e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.51952731765e-16 3.02261296911e-16
-7.26039382988e-17 3.58568831478e-16
-8.4067971779e-17 3.99848818295e-16
-1.10105086692e-16 4.39107362701e-16
-0.00601167682918 0.148117733722
  cycle:  28 , L_inf(res.)=  9.09494701773e-13 ,L_inf(true error):  6.9226272164e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-8.29674903225e-17 8.29674903225e-17
-1.11481760272e-16 5.83476767392e-17
-1.30693081923e-16 6.43571990739e-17
-1.51020683983e-16 7.21246535049e-17
-0.00601167682918 0.148117733722
  cycle:  29 , L_inf(res.)=  9.09494701773e-13 ,L_inf(true error):  6.9226272164e-05
-0.00601167682918 0.148117733722
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.71158954364e-16 9.89255142511e-17
-2.17024084962e-16 5.34794154265e-17
-2.5004350098e-16 2.86486607842e-17
-2.66502578989e-16 1.75328088399e-17
-0.00601167682918 0.148117733722
  cycle:  30 , L_inf(res.)=  9.09494701773e-13 ,L_inf(true error):  6.9226272164e-05
L_inf (true error):  6.9226272164e-05

True error is the difference of the approximation with the analytical solution. It is largely the discretization error. This what would be present when we solve the discrete equation with a direct/exact method like gaussian elimination. We see that true error stops reducing at the 5th cycle. The approximation is not getting any better after this point. So we can stop after 5 cycles. But, in general we dont know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress the floating point round-off error limit is reached and the residual also stops decreasing.

This was the multigrid V cycle. We can use this as preconditioner to a Krylov solver. But before we get to that let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely.

1.6 Full Multi-Grid

We started with a zero initial guess for the V-cycle. Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly and interpolate it onto the fine grid and use that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid(FMG) Algorithm. Unlike the V-cycle which was an iterative procedure, FMG is a direct solver. There is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below.


In [107]:
def FMG(nx,ny,num_levels,f,nv=1,level=1):

  if(level==num_levels):#bottom solve
    u=np.zeros([nx+2,ny+2])  
    u,res=Jacrelax(nx,ny,u,f,iters=50)
    return u,res

  #Step 1: Restrict the rhs to a coarse grid
  f_c=restrict(nx//2,ny//2,f)

  #Step 2: Solve the coarse grid problem using FMG
  u_c,_=FMG(nx//2,ny//2,num_levels,f_c,nv,level+1)

  #Step 3: Interpolate u_c to the fine grid
  u=prolong(nx//2,ny//2,u_c)

  #step 4: Execute 'nv' V-cycles
  for _ in range(nv):
    u,res=V_cycle(nx,ny,num_levels-level,u,f)
  return u,res

Lets call the FMG solver for the same problem


In [108]:
print('mgd2d.py FMG solver:')
print('NX:',NX,', NY:',NY,', levels: ',nlevels)

u,res=FMG(NX,NY,nlevels,f,nv=1) 
rtol=np.max(np.max(np.abs(res)))

print(' FMG L_inf(res.)= ',rtol)
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))


mgd2d.py FMG solver:
NX: 64 , NY: 64 , levels:  6
0.0 0.0
-0.0622648000717 0.114029288292
-0.0425992258359 0.139042924248
0.0 0.0
-0.0468121523932 0.148359885109
-0.0230535858186 0.144041786055
0.0 0.0
0.0 0.0
-0.00132866700957 0.00348497106095
-0.0238153532804 0.147641136269
-0.0118862769467 0.147081094992
0.0 0.0
0.0 0.0
0.0 0.0
-0.000424413778161 0.0010911719915
-0.000226462805643 0.00112275666165
-0.0119979607232 0.148107916421
-0.00599312526188 0.147803924537
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.000130092312831 0.00032003894776
-6.96590398428e-05 0.000326326380715
-3.5199622127e-05 0.000332009013641
-0.00600899771525 0.148082042751
 FMG L_inf(res.)=  0.00659679691546
L_inf (true error):  6.66429014552e-05

It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the the size of the problem. In big-O notation, FMG is $\mathcal{O}(N)$. Where N is the number of unknowns. Exact methods (Gaussian Elimination, LU decomposition ) are typically $\mathcal{O}(N^3)$

2. Stationary iterative methods as preconditioners

A preconditioner reduces the condition number of the coefficient matrix, thereby making it easier to solve. We dont explicitly need a matrix because we dont access the elements by index, coefficient matrix or preconditioner. What we do need is the action of the matrix on a vector. That is, we need only the matrix-vector product. The coefficient matrix can be defined as a function that takes in a vector and returns the matrix vector product.

Any stationary method has an iteration matrix associated with it. This is easily seen for Jacobi or GS methods. This iteration matrix can be used as a preconditioner. But we dont explicitly need it. The stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want.

This allows us to use any blackbox stationary iterative method as a preconditioner

To repeat, if there is a stationary iterative method that you want to use as a preconditioner, set the initial guess to zero, set the RHS to the vector you want to multiply the preconditioner with and perform one iteration of the stationary method.

We can use the multigrid V-cycle as a preconditioner this way. We cant use FMG because it is not an iterative method.

The matrix as a function can be defined using LinearOperator from scipy.sparse.linalg. It gives us an object which works like a matrix in-so-far as the product with a vector is concerned. It can be used as a regular 2D numpy array in multiplication with a vector. This can be passed to CG(), GMRES() or BiCGStab() as a preconditioner.

Having a symmetric preconditioner would be nice because it will retain the symmetry if the original problem is symmetric and we can still use CG. If the preconditioner is not symmetric CG will not converge, and we would have to use a more general solver.

Below is the code for defining a V-Cycle preconditioner. The default is one V-cycle. In the V-cycle, the defaults are one pre-sweep, one post-sweep.


In [109]:
from scipy.sparse.linalg import LinearOperator,bicgstab,cg
def MGVP(nx,ny,num_levels):
  '''
  Multigrid Preconditioner. Returns a (scipy.sparse.linalg.) LinearOperator that can
  be passed to Krylov solvers as a preconditioner. 
  '''
  def pc_fn(v):
    u =np.zeros([nx+2,ny+2])
    f =np.zeros([nx+2,ny+2])
    f[1:nx+1,1:ny+1] =v.reshape([nx,ny]) #in practice this copying can be avoived
    #perform one V cycle
    u,res=V_cycle(nx,ny,num_levels,u,f)
    return u[1:nx+1,1:ny+1].reshape(v.shape)
  M=LinearOperator((nx*ny,nx*ny), matvec=pc_fn)
  return M

Let us define the Poisson matrix also as a LinearOperator


In [110]:
def Laplace(nx,ny):
  '''
  Action of the Laplace matrix on a vector v
  '''
  def mv(v):
    u =np.zeros([nx+2,ny+2])
  
    u[1:nx+1,1:ny+1]=v.reshape([nx,ny])
    dx=1.0/nx; dy=1.0/ny
    Ax=1.0/dx**2; Ay=1.0/dy**2
  
    #BCs. Needs to be generalized!
    u[ 0,:] = -u[ 1,:]
    u[-1,:] = -u[-2,:]
    u[:, 0] = -u[:, 1]
    u[:,-1] = -u[:,-2]

    ut = (Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
        + Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
        - 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1])
    return ut.reshape(v.shape)
  A = LinearOperator((nx*ny,nx*ny), matvec=mv)
  return A

The nested function is required because "matvec" in LinearOperator takes only one argument-- the vector. But we require the grid details and boundary condition information to create the Poisson matrix. Now will use these to solve a problem. Unlike earlier where we used an analytical solution and RHS, we will start with a random vector which will be our exact solution, and multiply it with the Poisson matrix to get the Rhs vector for the problem. There is no analytical equation associated with the matrix equation.

The scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations


In [111]:
def solve_sparse(solver,A, b,tol=1e-10,maxiter=500,M=None):
      num_iters = 0
      def callback(xk):
         nonlocal num_iters
         num_iters+=1
      x,status=solver(A, b,tol=tol,maxiter=maxiter,callback=callback,M=M)
      return x,status,num_iters

Lets look at what happens with and without the preconditioner.


In [112]:
A = Laplace(NX,NY)
#Exact solution and RHS
uex=np.random.rand(NX*NY,1)
b=A*uex

#Multigrid Preconditioner
M=MGVP(NX,NY,nlevels)

u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))

u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))


0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
Without preconditioning. status: 0 , Iters:  149
error : 0.999615872436
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.848911855627 0.848911855627
-0.90273051963 0.95328219692
-0.877138474584 1.11587876009
-0.95132779675 1.2195781945
-1.11704090371 1.5533689113
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.427496091572 0.404816863916
-0.480355717832 0.358496263708
-0.53699732111 0.300902889767
-0.558327790305 0.246273190066
-0.592501760886 0.188694613917
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.0519043277594 0.0519043277594
-0.0475007421038 0.0643715953007
-0.0388691392279 0.0657035795069
-0.0268081792613 0.0685819671081
-0.0300379334339 0.0768729703157
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.00405283759207 0.00394072225494
-0.00499758096328 0.00461171139026
-0.00568059878901 0.00393602532877
-0.00613203508723 0.0028098397556
-0.0071015682861 0.00271076165459
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.000909850098781 0.000909850098781
-0.000957304205983 0.00116566533357
-0.000758474096199 0.00135162838194
-0.000580364692157 0.00155091091016
-0.000609577609019 0.00174203095648
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.000121704342642 0.000121704342642
-0.000155378225821 0.000112352851339
-0.000165782655238 8.74024053247e-05
-0.00018311493947 6.5053511796e-05
-0.000202828359929 7.51479568808e-05
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.89515492471e-05 1.89515492471e-05
-1.52138377825e-05 2.91229730444e-05
-1.2279797903e-05 3.83907250423e-05
-1.74475933112e-05 4.34777921182e-05
-2.29189595915e-05 4.73349784464e-05
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.64421575558e-06 2.64421575558e-06
-3.87536942883e-06 2.38162598783e-06
-4.7413436269e-06 3.19400057702e-06
-5.41272203413e-06 3.86953211424e-06
-5.77117548341e-06 4.34038366244e-06
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-3.78977508691e-07 3.78977508691e-07
-3.98183899637e-07 6.39338110049e-07
-6.41027386581e-07 8.82416343627e-07
-7.69819777677e-07 1.00479584064e-06
-8.65974271608e-07 1.09930557056e-06
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-4.36887963937e-08 4.36887963937e-08
-6.81182660259e-08 5.73411368849e-08
-9.53245025564e-08 8.19920445328e-08
-1.16648324118e-07 9.37341315696e-08
-1.27452553217e-07 1.06970831005e-07
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-9.93210958946e-09 9.93210958946e-09
-1.71176303732e-08 1.33571819214e-08
-2.4249905154e-08 2.00170931441e-08
-2.85028094009e-08 2.29656139331e-08
-3.08636108596e-08 2.56174534467e-08
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-8.52985252376e-10 8.52985252376e-10
-1.09037674732e-09 1.49604463504e-09
-1.94019499407e-09 2.0805025681e-09
-2.55055221869e-09 2.37158675145e-09
-2.86979954774e-09 2.74566337469e-09
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.83238460007e-10 2.83238460007e-10
-4.58276731748e-10 2.04267785435e-10
-6.50599454234e-10 3.16204477122e-10
-7.49368281766e-10 3.9136263141e-10
-8.2212123446e-10 4.74124769884e-10
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.49539838954e-11 2.49539838954e-11
-2.38078065914e-11 3.45866902215e-11
-2.86557772966e-11 4.51451303257e-11
-4.18801839658e-11 5.3575189391e-11
-5.11396389124e-11 6.08479166095e-11
With preconditioning. status: 0 , Iters:  7
error : 0.999615867807

Without the preconditioner ~150 iterations were needed, where as with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG:


In [113]:
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))

u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))


Without preconditioning. status: 0 , Iters:  204
error : 0.999615868575
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.848911855627 0.848911855627
-0.90273051963 0.95328219692
-0.877138474584 1.11587876009
-0.95132779675 1.2195781945
-1.11704090371 1.5533689113
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.380181965246 0.359066706833
-0.429467251185 0.313016960977
-0.475174393272 0.261254494643
-0.500690189883 0.210204740553
-0.541163411478 0.173503433874
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.0323410380101 0.0323410380101
-0.0277584000933 0.0443153852172
-0.0216985594159 0.0465447834141
-0.0145224209452 0.052544761985
-0.0198057309604 0.0587733865331
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.0106716896486 0.0106716896486
-0.00954791180039 0.0130861201183
-0.00723129880436 0.0142749101545
-0.00536658866725 0.0152704739676
-0.00381512883243 0.0156470521613
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.00123903871409 0.00123903871409
-0.0016459263824 0.00112289401032
-0.00172012557176 0.000781778354458
-0.0018189414576 0.00056519693782
-0.00203336370179 0.000682816292207
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-0.000402693575954 0.000402693575954
-0.000533607125549 0.000321248481936
-0.000580114106883 0.000230427621258
-0.000617869512672 0.000150624863083
-0.000650649141066 0.000117061504423
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-4.58495467345e-05 4.58495467345e-05
-4.44777116854e-05 5.56490033957e-05
-3.38368771182e-05 6.800904168e-05
-2.88636836207e-05 7.3598922204e-05
-3.58729537461e-05 8.55654139546e-05
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.31267493438e-05 1.31267493438e-05
-1.09763019993e-05 1.71131999982e-05
-7.77540800516e-06 1.76670525593e-05
-4.17300564553e-06 1.8972047897e-05
-3.79836917906e-06 2.03424912361e-05
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.59808736316e-06 1.51719048431e-06
-2.01040885732e-06 1.7385828269e-06
-2.29296082789e-06 1.61582969305e-06
-2.44791093254e-06 1.86477721168e-06
-2.68336666905e-06 1.99955206447e-06
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-4.628316232e-07 4.49440507596e-07
-5.79950425906e-07 3.81961422133e-07
-6.46900116985e-07 2.63655632199e-07
-6.92356865531e-07 1.76912672504e-07
-7.56614639497e-07 2.21297439656e-07
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-4.25134536947e-08 4.25134536947e-08
-4.50450196417e-08 5.8040666397e-08
-5.52061686898e-08 6.90001027189e-08
-6.12760184336e-08 7.26787373702e-08
-6.98670975949e-08 9.01941024417e-08
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-1.49920991836e-08 1.49920991836e-08
-1.57719547321e-08 1.89949982683e-08
-1.81030300012e-08 2.21668272153e-08
-1.87569055665e-08 2.32449426899e-08
-2.08300458583e-08 2.59200984482e-08
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-2.52351459802e-09 2.52351459802e-09
-2.5030294466e-09 3.10369418349e-09
-1.97657145365e-09 4.15426466536e-09
-2.34085563974e-09 4.56537795914e-09
-2.75418826044e-09 5.14743477452e-09
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
0.0 0.0
-4.69829650164e-10 5.80913088499e-10
-6.23751085888e-10 7.58339814298e-10
-7.17027314985e-10 8.3094816717e-10
-7.97013118965e-10 9.02171651934e-10
-8.7724540152e-10 9.817371062e-10
With preconditioning. status: 0 , Iters:  14
error : 0.999615867894

There we have it. A Multigrid Preconditioned Krylov Solver. We did all this without even having to deal with an actual matrix. How great is that! I think the next step should be solving a non-linear problem without having to deal with an actual Jacobian (matrix).