In this homework you will implement one of the most basic linear solvers: Gaussian elimination with backward substitution and row swaps. You can find the details in Alg. 6.1 in your textbook (or https://en.wikipedia.org/wiki/Gaussian_elimination).
Q1.a You will write the Gaussian elimination algorithm. We will proceed by writing the function rowSwap!. The inputs of the function will be a matrix A, and the two row indices $i$ and $j$ to be swapped.
The function will not have any output: the swap has to be done in place, in the matrix A itself. Finally, your function will raise an error if $i$ or $j$ is out of range.
In Julia, arguments are passed by sharing, so a mutable input such as an array can be modified inside the function, and the change is visible outside the function's scope. By convention, the name of a function that modifies one of its arguments ends with an exclamation mark.
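A minimal illustration of this convention (the function doubleFirst! is made up for this example only):

```julia
# doubleFirst! is a hypothetical helper, only for illustration:
# it mutates its argument, so by convention its name ends in `!`.
function doubleFirst!(v)
    v[1] *= 2        # modifies the caller's array in place
    return nothing
end

v = [1, 2, 3]
doubleFirst!(v)
# v is now [2, 2, 3]
```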
In [ ]:
function rowSwap!(A, i, j)
    # input: A matrix
    #        i, j the row indices to be swapped
end
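One possible sketch of such a function, assuming A is a 2-D array (this is only an illustration, not the unique solution):

```julia
# Sketch: swap rows i and j of A in place, erroring on out-of-range indices.
function rowSwap!(A, i, j)
    n = size(A, 1)
    (1 <= i <= n && 1 <= j <= n) || error("row index out of range")
    for k = 1:size(A, 2)
        # element-wise swap so no temporary row copy is needed
        A[i, k], A[j, k] = A[j, k], A[i, k]
    end
    return nothing
end
```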
Q1.b You will write a function that performs the Gaussian elimination. The function takes as input a square matrix A and a vector b.
Your function will create the augmented system, and perform the Gaussian elimination.
The output of the function will be the tuple (U,b1).
U is the upper triangular matrix resulting from the elimination, and b1 is the correspondingly transformed right-hand-side vector.
To obtain $U$ and $b1$ from your augmented matrix, slice it (i.e., use [:,1:n]).
You will use your rowSwap! function acting on the augmented matrix.
Finally, your function will raise an error if the dimensions of A and b are incompatible (e.g., A is not square, or the length of b does not match the number of rows of A).
Hints:
In [ ]:
function gaussianElimination(A, b)
    # input:  A square matrix
    #         b a vector
    # output: U upper triangular matrix
    #         b1 the resulting vector
    return (U, b1)
end
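One possible sketch, assuming partial pivoting with row swaps (the swap is written inline here so the cell is self-contained; in your solution it would be a call to your rowSwap!):

```julia
# Sketch of Gaussian elimination on the augmented system [A | b].
function gaussianElimination(A, b)
    n, m = size(A)
    n == m || error("A must be square")
    n == length(b) || error("dimension mismatch between A and b")
    M = [float.(A) float.(b)]            # augmented system [A | b]
    for k = 1:n-1
        # bring the largest pivot of column k into row k (partial pivoting)
        p = k - 1 + argmax(abs.(M[k:n, k]))
        if p != k
            M[k, :], M[p, :] = M[p, :], M[k, :]   # inline row swap
        end
        M[k, k] == 0 && error("matrix is singular")
        # eliminate the entries below the pivot
        for i = k+1:n
            M[i, k:end] .-= (M[i, k] / M[k, k]) .* M[k, k:end]
        end
    end
    return (M[:, 1:n], M[:, end])        # slice off U and b1
end
```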
Q1.c You will write a function that performs backward substitution.
The input of your function will be an upper triangular matrix U and a vector b.
The output will be the solution to the system $Ux = b$.
Your function needs to have safeguards against a size mismatch (i.e., the sizes of the matrix and of your vector are not compatible, or your matrix is not square).
Hint: if you need to run a for loop that goes from n-1 down to 1, you can use the syntax
for j = n-1:-1:1
In [ ]:
function backwardSubstitution(U, b)
    # input:  U upper triangular matrix
    #         b vector
    # output: x = U\b
    return x
end
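One possible sketch of the backward substitution, with the size safeguards mentioned above (an illustration, not the unique solution):

```julia
# Sketch: solve U*x = b for an upper triangular U by back substitution.
function backwardSubstitution(U, b)
    n, m = size(U)
    n == m || error("U must be square")
    n == length(b) || error("dimension mismatch between U and b")
    x = zeros(n)
    for i = n:-1:1                       # walk up from the last row
        U[i, i] == 0 && error("zero diagonal entry: U is singular")
        s = b[i]
        for j = i+1:n
            s -= U[i, j] * x[j]          # subtract the already-solved terms
        end
        x[i] = s / U[i, i]
    end
    return x
end
```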
You can test that your function is correct by running the following script:
In [ ]:
using LinearAlgebra
# size of the Matrix
m = 100
# creating an upper triangular Matrix
U = diagm(0 => m*ones(m), 1 => rand(m-1), 2 => rand(m-2))
# creating the rhs
b = rand(m)
@time x = backwardSubstitution(U, b)
print("Residual of the backward substitution is ", norm(U*x - b)/norm(b), "\n")
The residual should be extremely small (on the order of machine epsilon).
In [ ]:
function solveGauss(A, b)
    # input:  A square matrix
    #         b vector
    # output: x = A\b
    return x
end
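One way to assemble the pieces is to chain gaussianElimination and backwardSubstitution. The sketch below repeats their logic inline so the cell is self-contained (again, an illustration under the same partial-pivoting assumption, not the official solution):

```julia
# Sketch: solve A*x = b by elimination followed by back substitution.
function solveGauss(A, b)
    n, m = size(A)
    n == m || error("A must be square")
    n == length(b) || error("dimension mismatch between A and b")
    M = [float.(A) float.(b)]            # augmented system [A | b]
    # forward elimination with partial pivoting
    for k = 1:n-1
        p = k - 1 + argmax(abs.(M[k:n, k]))
        if p != k
            M[k, :], M[p, :] = M[p, :], M[k, :]
        end
        M[k, k] == 0 && error("matrix is singular")
        for i = k+1:n
            M[i, k:end] .-= (M[i, k] / M[k, k]) .* M[k, k:end]
        end
    end
    # backward substitution on the triangular system
    x = zeros(n)
    for i = n:-1:1
        s = M[i, end]
        for j = i+1:n
            s -= M[i, j] * x[j]
        end
        x[i] = s / M[i, i]
    end
    return x
end
```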
You can test your code by running the following script:
In [ ]:
using LinearAlgebra
# size of the Matrix
m = 100
# creating the Matrix
A = rand(m, m) + m*I
# creating the rhs
b = rand(m)
@time x = solveGauss(A, b)
print("Residual of the solver is ", norm(A*x - b)/norm(b), "\n")
The residual should be extremely small (on the order of machine epsilon).
You will perform a benchmark to obtain the asymptotic complexity of your algorithm with respect to the number of unknowns in the system.
You will run your algorithm to solve linear systems for matrices of different sizes; you will time each of those solves, and you will plot the runtime versus the size of the matrix on a log-log scale. From the plot you will infer the asymptotic complexity of your algorithm with respect to the number of unknowns in the linear system.
Q2.a You will run the following script to benchmark the Julia built-in linear system solver (which is an interface to LAPACK; for more information see https://en.wikipedia.org/wiki/LAPACK and http://www.netlib.org/lapack/).
In [ ]:
using LinearAlgebra
nSamples = 10
times = zeros(nSamples)
sizes = 2 .^ (1:nSamples)
for i = 1:nSamples
    m = sizes[i]
    # creating the Matrix
    A = rand(m, m) + m*I
    # creating the rhs
    b = rand(m)
    # timing the built-in solver
    times[i] = @elapsed x = A\b
end
You will plot the timings using the following script (you will use layer objects to plot two different data sets in the same graph):
In [ ]:
using Gadfly, Colors
plot(
    layer(x = sizes, y = times, Geom.point, Geom.line),
    layer(x = sizes, y = 1e-8 .* sizes.^3, Geom.point, Geom.line,
          Theme(default_color=colorant"red")),
    Scale.y_log10, Scale.x_log10,
    Guide.ylabel("Runtime [s]"),        # label for y-axis
    Guide.xlabel("Size of the Matrix"), # label for x-axis
)
In red you have the cubic scaling, and in light blue the measured runtimes.
What should you expect?
What are you seeing instead?
How can you explain this?
What would happen if you increase the size of the matrices?
Answer:
Q2.b You will modify the script from the question above in order to benchmark the algorithm you wrote.
In [ ]:
nSamples = 10
times2 = zeros(nSamples)
sizes2 = 2 .^ (1:nSamples)
for i = 1:nSamples
    # fill the details
end
You will plot the benchmark using Gadfly:
In [ ]:
using Gadfly, Colors
plot(
    layer(x = sizes2, y = times2, Geom.point, Geom.line),
    layer(x = sizes2, y = 1e-8 .* sizes2.^3, Geom.point, Geom.line,
          Theme(default_color=colorant"red")),
    Scale.y_log10, Scale.x_log10,
    Guide.ylabel("Runtime [s]"),        # label for y-axis
    Guide.xlabel("Size of the Matrix"), # label for x-axis
)
Based on the runtime scaling you obtained, what is the asymptotic complexity of your function solveGauss?
Answer: