Lecture 7 - matrices and vectors

This lecture reviewed matrices and vectors, and viewed matrices as operators on vectors. It introduced the concept of a transformation (mapping) and presented the basics of matrix algebra.

Vector operations

The notebook introduces the use of NumPy (http://www.numpy.org/). NumPy is a Python package for numerical computing, and in particular for vector and matrix operations. Below are some basic operations.

Creating vectors and matrices

We first create some vectors (arrays) and a matrix.


In [1]:
import numpy as np

# Create two vectors
u = np.array([7, 3, 2, -4])
v = np.array([1, 1, -3, 2])
print("u={}, v={}".format(u, v))

# Create a matrix
A = np.matrix([[3, 4, 5, 4], [2, 2, 2, 9], [-2, 2, 7, 1], [-2, 6, 4, 4]])
print("A={}".format(A))


u=[ 7  3  2 -4], v=[ 1  1 -3  2]
A=[[ 3  4  5  4]
 [ 2  2  2  9]
 [-2  2  7  1]
 [-2  6  4  4]]
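
Note that A is created with np.matrix, so that the * operator used later performs matrix multiplication. With plain np.array objects (the type NumPy now recommends), the same products can be written with the @ operator. A minimal sketch, using an illustrative array B:

# The same matrix as A above, but stored as a plain ndarray
B = np.array([[3, 4, 5, 4], [2, 2, 2, 9], [-2, 2, 7, 1], [-2, 6, 4, 4]])
w = np.array([7, 3, 2, -4])

# For ndarrays, '@' is matrix multiplication and '*' is elementwise
print("B @ w: {}".format(B @ w))
print("B @ B:\n{}".format(B @ B))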

We can perform some basic operations, such as the dot product, matrix-vector multiplication and taking the transpose.


In [2]:
# Dot product between two vectors
x = u.dot(v)
print("Dot product (u.v): {}".format(x))

# Product Au
x = A.dot(u)
print("Product Au: {}".format(x))

# Product A*A
x = A.dot(A)
print("Product AA: {}".format(x))

# Transpose A^T
At = np.transpose(A)
print("A^T: {}".format(At))


Dot product (u.v): -4
Product Au: [[ 27 -12   2  -4]]
Product AA: [[ -1  54  74  69]
 [-12  70  64  64]
 [-18  16  47  21]
 [-10  36  46  66]]
A^T: [[ 3  2 -2 -2]
 [ 4  2  2  6]
 [ 5  2  7  4]
 [ 4  9  1  4]]
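
As a sanity check (not part of the original cell), the dot product can also be computed as the sum of the elementwise products, and transposing twice recovers the original matrix:

# Dot product as a sum of elementwise products: u.v = sum_i u_i v_i
print("sum(u*v): {}".format(np.sum(u * v)))

# Transposing twice recovers A
print("(A^T)^T == A: {}".format(np.array_equal(np.transpose(At), A)))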

Inverse and determinant

It is easy to compute the determinant and the inverse of a square matrix:


In [3]:
# Compute determinant
detA = np.linalg.det(A)
print("Determinant of A: {}".format(detA))

# Compute inverse
Ainv = np.linalg.inv(A)
print("Inverse of A")
print(Ainv)

# Check that inverse is correct
print("A*A^-1: {}".format(A*Ainv))


Determinant of A: -1137.999999999999
Inverse of A
[[ 0.23022847 -0.04393673 -0.08963093 -0.10896309]
 [ 0.10017575 -0.11072056 -0.14586995  0.18541301]
 [ 0.04920914 -0.00175747  0.15641476 -0.08435852]
 [-0.08435852  0.14586995  0.01757469  0.00175747]]
A*A^-1: [[  1.00000000e+00   0.00000000e+00   4.16333634e-17   1.73472348e-18]
 [ -1.11022302e-16   1.00000000e+00  -2.77555756e-17   1.73472348e-17]
 [ -5.55111512e-17  -8.32667268e-17   1.00000000e+00   1.43114687e-17]
 [ -5.55111512e-17  -2.22044605e-16  -6.93889390e-17   1.00000000e+00]]

Note that the computations are being done in floating point arithmetic and not symbolically, hence the off-diagonal terms in $\boldsymbol{A} \boldsymbol{A}^{-1}$ are not exactly zero (but they are very small).
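
A convenient numerical check is np.allclose, which tests whether two arrays agree to within a small tolerance. A short sketch using the objects defined above:

# Verify A*A^-1 against the 4x4 identity to within floating point tolerance
print("A*Ainv close to I: {}".format(np.allclose(A*Ainv, np.eye(4))))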

Transformations

We can examine the effect of multiplying a vector by a matrix by visualising the transformation of a cube. Below we plot a unit cube:


In [4]:
%matplotlib inline

# Set up plotting environment
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
from itertools import product, combinations

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect("equal")

# Draw cube
r = [0, 1]
for s, e in combinations(np.array(list(product(r, r, r))), 2):
    # Two vertices form an edge if they differ in exactly one coordinate
    if np.sum(np.abs(s - e)) == r[1] - r[0]:
        ax.plot3D(*zip(s, e), color="b", marker="o")


Now, we consider the transformation induced by a diagonal matrix

$$ \boldsymbol{A} = \begin{bmatrix} 0.8 & 0 & 0 \\ 0 & 1.1 & 0 \\ 0 & 0 & 1.7 \end{bmatrix} $$

We first define the matrix, and compute the determinant (recall that the determinant is the 'scaling' factor):


In [5]:
# Create a transformation matrix (diagonal)
A = np.array([[0.8, 0.0, 0], [0.0, 1.1, 0.0], [0.0, 0.0, 1.7]])

# Check determinant
print("Det A: {}".format(np.linalg.det(A)))


Det A: 1.4960000000000002
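
This agrees with the hand computation: for a diagonal matrix the determinant is simply the product of the diagonal entries,

$$ \det \boldsymbol{A} = 0.8 \times 1.1 \times 1.7 = 1.496. $$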

The determinant is greater than one, so we expect the volume of the transformed polyhedron (red lines) to be larger than that of the unit cube. Applying $\boldsymbol{A}$ to the coordinate vector of each vertex of the unit cube, we see the effect of the transformation.


In [6]:
# Draw original cube and transformed shape
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect("equal")
r = [0, 1]
for s, e in combinations(np.array(list(product(r, r, r))), 2):
    if np.sum(np.abs(s - e)) == r[1] - r[0]:
        ax.plot3D(*zip(s, e), color="b", marker="o")
        
        s = A.dot(s)
        e = A.dot(e)
        ax.plot3D(*zip(s, e), color="r", marker="o")
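
As an aside (not in the original notebook), the vertex transformation can also be done in a single vectorised step, with each row of the vertex array mapped by $\boldsymbol{A}$:

# Transform all eight cube vertices at once: each row of 'vertices' is a
# vertex, and right-multiplying by A^T applies A to every row
vertices = np.array(list(product(r, r, r)))
print(vertices @ A.T)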


Note that the diagonal transformation matrix has simply stretched or compressed the cube along the $x_{i}$ axes.

If we pick a transformation matrix $\boldsymbol{A}$ that is singular ($\det \boldsymbol{A} = 0$), the cube will collapse onto a plane:


In [7]:
# Create a singular transformation matrix (diagonal)
A = np.array([[0.8, 0.0, 0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.7]])

# Check determinant
print("Det A: {}".format(np.linalg.det(A)))

# Draw original cube and transformed shape
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect("equal")
r = [0, 1]
for s, e in combinations(np.array(list(product(r, r, r))), 2):
    if np.sum(np.abs(s - e)) == r[1] - r[0]:
        ax.plot3D(*zip(s, e), color="b", marker="o")
        s = A.dot(s)
        e = A.dot(e)
        ax.plot3D(*zip(s, e), color="r", marker="o")


Det A: 0.0
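
Consistent with the collapse onto a plane, the singular matrix maps $\mathbb{R}^{3}$ onto a two-dimensional subspace. A quick check (not in the original cell):

# The rank gives the dimension of the image of the map x -> Ax
print("Rank of A: {}".format(np.linalg.matrix_rank(A)))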

If we consider a transformation matrix that is not diagonal, the transformed cube will no longer be a rectangular prism:


In [8]:
# Create a transformation matrix (non-diagonal)
A = np.array([[0.8, 0.8, 0.8], [0.6, 1.0, 0.0], [-1.1, 0.0, 0.7]])

# Check determinant
print("Det A: {}".format(np.linalg.det(A)))

# Draw original cube and transformed shape
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect("equal")
r = [0, 1]
for s, e in combinations(np.array(list(product(r, r, r))), 2):
    if np.sum(np.abs(s - e)) == r[1] - r[0]:
        ax.plot3D(*zip(s, e), color="b", marker="o")
        
        s = A.dot(s)
        e = A.dot(e)
        ax.plot3D(*zip(s, e), color="r", marker="o")


Det A: 1.104
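
The determinant also equals the signed volume of the parallelepiped spanned by the columns of $\boldsymbol{A}$, which we can verify with the scalar triple product (a short sketch, not in the original cell):

# Scalar triple product of the columns of A: a0 . (a1 x a2)
a0, a1, a2 = A[:, 0], A[:, 1], A[:, 2]
print("Triple product: {}".format(np.dot(a0, np.cross(a1, a2))))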

We now apply a diagonal matrix that stretches the cube along all three axes:


In [9]:
# Create a transformation matrix (diagonal)
A = np.array([[2.0, 0.0, 0.0], [0.0, 1.1, 0.0], [0.0, 0.0, 1.1]])

# Check determinant
print("Det A: {}".format(np.linalg.det(A)))

# Draw original cube and transformed shape
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect("equal")
r = [0, 1]
for s, e in combinations(np.array(list(product(r, r, r))), 2):
    if np.sum(np.abs(s - e)) == r[1] - r[0]:
        ax.plot3D(*zip(s, e), color="b", marker="o")
        
        s = A.dot(s)
        e = A.dot(e)
        ax.plot3D(*zip(s, e), color="r", marker="o")


Det A: 2.4200000000000004

The transformation matrices so far have had the property $\det \boldsymbol{A} \ge 0$. If we take the above transformation matrix and multiply it by $-1$, the determinant will be negative. We can then see what effect this has on the transformation:


In [10]:
# Multiply A by -1 and print determinant
A = -A
print("Det A: {}".format(np.linalg.det(A)))

# Draw original cube and transformed shape
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect("equal")
r = [0, 1]
for s, e in combinations(np.array(list(product(r, r, r))), 2):
    if np.sum(np.abs(s - e)) == r[1] - r[0]:
        ax.plot3D(*zip(s, e), color="b", marker="o")
        
        s = A.dot(s)
        e = A.dot(e)
        ax.plot3D(*zip(s, e), color="r", marker="o")


Det A: -2.4200000000000004

What we see now is that a negative determinant involves not only stretching and/or rotation, but also reflection (in a sense, negative stretching).
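
A pure reflection makes this concrete: flipping a single axis leaves volumes unchanged but reverses orientation, giving determinant $-1$. A short sketch (not from the lecture):

# Reflection in the x2-x3 plane: flips the x1 axis only
R = np.array([[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print("Det R: {}".format(np.linalg.det(R)))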

TODO items

  • Add a function for plotting cubes to improve code reuse (a first sketch is given below)
  • Label vertices to see where mapped points go
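
A possible starting point for the first TODO item; this is a sketch only, and the function name and signature are suggestions rather than part of the lecture. It reuses the imports (numpy, matplotlib, itertools) from the cells above:

def plot_cube(A=None):
    """Plot the unit cube (blue) and, if a matrix A is given, its
    image under the map x -> Ax (red)."""
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.set_aspect("equal")
    r = [0, 1]
    for s, e in combinations(np.array(list(product(r, r, r))), 2):
        # Two vertices form an edge if they differ in exactly one coordinate
        if np.sum(np.abs(s - e)) == r[1] - r[0]:
            ax.plot3D(*zip(s, e), color="b", marker="o")
            if A is not None:
                ax.plot3D(*zip(A.dot(s), A.dot(e)), color="r", marker="o")

For example, plot_cube(np.array([[0.8, 0, 0], [0, 1.1, 0], [0, 0, 1.7]])) would reproduce the first transformation plot above.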