In [2]:
# Initial import statements
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.pyplot import *
from numpy import *
from numpy.linalg import *
Write a Python program that solves $Ax = b$ using LU decomposition. Use the functions lu_factor and lu_solve from the scipy.linalg package.
$$ A = \begin{bmatrix} 1 & 4 & 1 \\ 1 & 6 & -1 \\ 2 & -1 & 2 \end{bmatrix} \qquad b = \begin{bmatrix} 7 \\ 13 \\ 5 \end{bmatrix}$$
We can use the functions lu_factor and lu_solve from scipy.linalg to solve the problem with LU decomposition. lu_factor returns lu, an (N, N) matrix containing U in its upper triangle and L in its lower triangle, together with piv, an (N,) array of pivot indices representing the permutation matrix P.
In the function lu_decomp1(A, b), the result of lu_factor is stored in a variable (PLU), which is then passed to lu_solve(PLU, b); that call returns the solution of $Ax = b$.
An alternative is the lu function, which unpacks the factors P, L and U into separate variables. This can be useful if you need to work with the factors directly or do not want to use lu_solve for the final result; a sketch of this approach follows the solution cell below.
The expected result is: $[5, 1, -2]$
In [3]:
from scipy.linalg import lu_factor, lu_solve

# Create a function which can be reused later if needed
def lu_decomp1(A, b):
    # Solve by using lu_factor and lu_solve
    PLU = lu_factor(A)
    x = lu_solve(PLU, b)
    return x

# Create variables
A = np.array(((1, 4, 1),
              (1, 6, -1),
              (2, -1, 2)))
b = np.array([7, 13, 5])

x = lu_decomp1(A, b)
# Cross-check against direct inversion
print(dot(inv(A), b))
print("Result = {}".format(x))
Invert the following matrices with any method $$ A = \begin{bmatrix} 5 & -3 & -1 & 0 \\ -2 & 1 & 1 & 1 \\ 3 & -5 & 1 & 2 \\ 0 & 8 & -4 & -3 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & 3 & -9 & 6 & 4 \\ 2 & -1 & 6 & 7 & 1 \\ 3 & 2 & -3 & 15 & 5 \\ 8 & -1 & 1 & 4 & 2 \\ 11 & 1 & -2 & 18 & 7 \end{bmatrix}$$ Comment on the reliability of the results.
Probably the simplest way to invert the given matrices is to use the inv() function from the numpy.linalg package. inv() returns the inverse of the matrix passed to it as an argument.
In [4]:
A = np.array([[5, -3, -1, 0],
              [-2, 1, 1, 1],
              [3, -5, 1, 2],
              [0, 8, -4, -3]])
B = np.array([[1, 3, -9, 6, 4],
              [2, -1, 6, 7, 1],
              [3, 2, -3, 15, 5],
              [8, -1, 1, 4, 2],
              [11, 1, -2, 18, 7]])

ainv = inv(A)
binv = inv(B)
print("Inverse of A:\n {}".format(ainv))
print("\nInverse of B:\n {}".format(binv))
In [5]:
print("Determinant of A: {}".format(np.linalg.det(A)))
print("Determinant of B: {}".format(np.linalg.det(B)))
The determinant of A is clearly nonzero, so its computed inverse can be trusted. The determinant of B, however, comes out as essentially zero: B is in fact exactly singular (row 1 + row 2 + row 3 + 2·row 4 equals 2·row 5), so it has no inverse, and whatever inv() returns for it is dominated by round-off error and is not reliable. In general, the closer the determinant (or, more usefully, the condition number) pushes the problem toward the limits of floating-point accuracy, the less reliable the computed inverse becomes.
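As a quick sketch of this reliability check (using the A, B, ainv and binv defined above, and assuming inv(B) returned a result at all rather than raising a singular-matrix error, which would itself answer the question), the condition numbers and the residuals $AA^{-1} - I$ and $BB^{-1} - I$ give a more direct picture than the determinant alone:
In [ ]:
# Condition numbers: a huge value means the matrix is (numerically) singular
print("cond(A) = {:.3e}".format(np.linalg.cond(A)))
print("cond(B) = {:.3e}".format(np.linalg.cond(B)))
# Residuals of the computed inverses: A @ ainv should be close to the identity
print("max |A @ ainv - I| = {:.3e}".format(np.max(np.abs(A @ ainv - np.eye(4)))))
print("max |B @ binv - I| = {:.3e}".format(np.max(np.abs(B @ binv - np.eye(5)))))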
Use the Gauss-Seidel method with relaxation to solve $Ax = b$, where $$A = \begin{bmatrix} 4 & -1 & 0 & 0 \\ -1 & 4 & -1 & 0 \\ 0 & -1 & 4 & -1 \\ 0 & 0 & -1 & 3 \end{bmatrix} \qquad b = \begin{bmatrix} 15 \\ 10 \\ 10 \\ 10 \end{bmatrix}$$
Take $x_i = b_i/A_{ii}$ as the starting vector, and use $\omega = 1.1$ for the relaxation factor.
We can use the sample code created during class as a baseline for this exercise. A couple of modifications are needed to take the relaxation factor omega into account, and we also want to stop iterating once sufficient accuracy has been reached; the required accuracy is defined in the tol variable.
The convergence check computes the Euclidean norm of the change between sweeps, sqrt(dot(x - xOld, x - xOld)), where xOld is the solution before the sweep and x the solution after it, and compares it against tol. If the change is smaller than tol, the value of x has essentially stopped changing, which indicates we have reached the level of accuracy we need.
Expected result is: [ 5. 5. 5. 5.]
In [7]:
def gaussSeidel(A, b):
    omega = 1.1
    # Maximum number of iterations
    p = 1000
    # Convergence tolerance
    tol = 1.0e-9
    n = len(b)
    x = np.zeros(n)
    # Generate starting vector x_i = b_i / A_ii
    for y in range(n):
        x[y] = b[y]/A[y, y]
    # Iterate at most p times
    for k in range(p):
        xOld = x.copy()
        for i in range(n):
            s = 0
            for j in range(n):
                if j != i:
                    s = s + A[i, j]*x[j]
            # Relaxed Gauss-Seidel update
            x[i] = omega/A[i, i]*(b[i] - s) + (1 - omega)*x[i]
        # Break execution if the change between sweeps is within the tolerance
        dx = np.sqrt(np.dot(x - xOld, x - xOld))
        if dx < tol:
            return x
    return x

A = np.array(([4.0, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]))
b = np.array([15.0, 10, 10, 10])

x = gaussSeidel(A, b)
print("Result = {}".format(x))