The invertible matrix theorem is a statement about a number of equivalent conditions that we can impose on a matrix. Given a square matrix $A$ with shape $(n, n)$, the theorem says (among other things) that the following are all equivalent: $A$ has an inverse, $\det(A) \neq 0$, the columns of $A$ are linearly independent, and $0$ is not an eigenvalue of $A$.
These require some explanation.
In [1]:
import numpy as np
import numpy.linalg as la
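With NumPy loaded, here's a quick numerical sanity check (a sketch, not a proof; the matrix size and random seed are arbitrary choices of mine) that three of those conditions agree on a random matrix:
In [ ]:
# Sanity check: for a random matrix, "det(A) != 0", "no eigenvalue is 0",
# and "la.inv(A) succeeds" should all agree.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

det_nonzero = not np.isclose(la.det(M), 0.0)
eigs_nonzero = not np.any(np.isclose(la.eigvals(M), 0.0))
try:
    la.inv(M)
    invertible = True
except la.LinAlgError:
    invertible = False

print(det_nonzero, eigs_nonzero, invertible)  # Expect: True True True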
In [2]:
A = np.arange(1, 5).reshape(2, 2)
determinant_A = la.det(A)
print(A)
print("Determinant is: {}".format(determinant_A)) # Notice the rounding error.
Hmm.... is this related to anything??
In [3]:
# Let's check its eigenvalues.
eigenvalues = la.eigvals(A)
print("The matrix A has eigenvalues: {}".format(list(eigenvalues)))
print("And their product is: {}".format(np.prod(eigenvalues)))
# Compare the absolute difference so the test works regardless of sign.
abs(determinant_A - np.prod(eigenvalues)) < 10**-5
Out[3]:
In [4]:
# Let's define an epsilon and check if they are the same.
eps = 10 ** -5
def check_floats_equal(float_1, float_2, eps=eps):
    # Use the absolute difference so argument order doesn't matter.
    return abs(float_1 - float_2) < eps
check_floats_equal(np.prod(la.eigvals(A)), la.det(A))
Out[4]:
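For real code you don't need to roll this yourself: NumPy ships `np.isclose`, which also scales the tolerance with the magnitude of the numbers being compared.
In [ ]:
# np.isclose checks |a - b| <= atol + rtol * |b|, so it also handles
# comparisons between large numbers gracefully.
np.isclose(np.prod(la.eigvals(A)), la.det(A))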
Hmm... Interesting.
The determinant of a matrix is just the product of its eigenvalues.
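This holds in general: $\det(A) = \prod_i \lambda_i$. Here's a spot-check on a random $4 \times 4$ matrix (the seed and size are arbitrary choices):
In [ ]:
# For a real matrix the eigenvalues may be complex, but they come in
# conjugate pairs, so their product is (numerically) real.
rng = np.random.default_rng(42)
M = rng.standard_normal((4, 4))
np.isclose(la.det(M), np.prod(la.eigvals(M)))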
In [5]:
vals, vecs = la.eig(A)
print(vecs)
In [6]:
# Let's define a function that computes the L2 norm of each column of a matrix.
def l2_norm_cols(matrix):
    norms = []
    for col in matrix.transpose():  # Rows of the transpose are the columns of the original.
        squared = 0
        for val in col:
            squared += val ** 2
        norms.append(squared ** 0.5)  # Square root of the sum of squares.
    return np.array(norms)
l2_norm_cols(vecs)
Out[6]:
It looks like the eigenvectors are normalized to length 1.
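A more direct way to check this is NumPy's built-in norm, applied column-wise:
In [ ]:
# la.norm with axis=0 returns the L2 norm of each column in one call.
la.norm(vecs, axis=0)  # Should be [1., 1.] if the columns are unit vectors.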
Let's look at another matrix and its eigenvalues and eigenvectors. This one should be a bit simpler.
In [7]:
A = np.array([0, -1, 1, 0]).reshape(2, 2)
In [8]:
A
Out[8]:
In [9]:
np.linalg.det(A)
Out[9]:
In [10]:
vals, vecs = np.linalg.eig(A)
print("Values:\n{}".format(vals))
print("Vectors:\n{}".format(vecs))
Hmm... what is happening there? Why does it have imaginary eigenvalues and eigenvectors?
In [11]:
# First, a quick vector-making helper function.
def list_to_vector(items):
    # Reshape a flat list into an (n, 1) column vector.
    return np.array(items).reshape(len(items), 1)
In [12]:
np.matmul(A, list_to_vector([1,0]))
Out[12]:
In [13]:
np.matmul(A, list_to_vector([0,1]))
Out[13]:
This looks like a twist to me. It's sending $$ [1, 0]^T \mapsto [0,1]^T \\ [0, 1]^T \mapsto [-1,0]^T $$
If it's a twist... how can it have a direction along which it just stretches vectors in a straight line? We'd have to imagine really hard...
Oh.
That's why it has imaginary eigenvectors -- they don't really exist as directions in the real plane. But they do exist symbolically and mathematically.
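Even though we can't draw them, the complex eigenpairs still behave like eigenpairs. A quick check that $Av = \lambda v$ holds for the first pair:
In [ ]:
# Verify A v = lambda v for the first (complex) eigenpair.
vals, vecs = la.eig(A)
np.allclose(A @ vecs[:, 0], vals[0] * vecs[:, 0])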
Well... according to the invertible matrix theorem, the fact that $\det(A) = 1 \neq 0$ implies that it has an inverse matrix. Let's check that out.
In [14]:
A_inv = np.linalg.inv(A)
print("Here is A^-1\n{}".format(A_inv))
In [15]:
np.matmul(A_inv, list_to_vector([1,0]))
Out[15]:
In [16]:
np.matmul(A_inv, list_to_vector([0,1]))
Out[16]:
Hmm...
$$ [1,0]^T \mapsto [0,-1]^T \\ [0,1]^T \mapsto [1,0]^T $$
Oh!
It's a rotation in the opposite direction!! Neato gang!
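One way to make that concrete (a sketch, assuming the usual counterclockwise-positive angle convention): compare $A^{-1}$ to the standard rotation matrix evaluated at $-\pi/2$.
In [ ]:
# R(t) = [[cos t, -sin t], [sin t, cos t]] rotates counterclockwise by t.
t = -np.pi / 2
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
np.allclose(A_inv, R)  # A_inv should be a clockwise quarter turn.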
What happens if we multiply the two together...?
In [17]:
I_2 = np.matmul(A, A_inv)
print("{} \n* \n{} \n=\n{}".format(A, A_inv, I_2))
We just get the identity matrix back -- just like we wanted!!
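And since matrix inverses are two-sided, multiplying in the other order should give the identity as well:
In [ ]:
# The inverse works from both sides: A @ A_inv == A_inv @ A == I.
print(np.allclose(A @ A_inv, np.eye(2)), np.allclose(A_inv @ A, np.eye(2)))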
Let's take another example.
Fix a new matrix $A = 2 I_2$.
In [18]:
A = np.eye(2) * 2
print(A)
In [19]:
vals, vecs = np.linalg.eig(A)
print("Vals:\n{}".format(vals))
print("Vecs:\n{}".format(vecs))
What should the inverse of $A$ be?
In [20]:
A_inv = np.linalg.inv(A)
print("The inverse is:\n{}".format(A_inv))
Hmm... We have $$A = 2 I_2$$ $$A^{-1} = \frac{1}{2} I_2$$
It looks like it just stretches along the eigenvectors.
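If $A$ stretches by a factor of $\lambda$ along an eigenvector, $A^{-1}$ should shrink by $1/\lambda$ along the same direction. The eigenvalues bear that out:
In [ ]:
# The eigenvalues of A_inv are the reciprocals of the eigenvalues of A.
print(la.eigvals(A))      # [2. 2.]
print(la.eigvals(A_inv))  # [0.5 0.5]
np.allclose(la.eigvals(A_inv), 1 / la.eigvals(A))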