Determinant formula from Cavalieri's principle


In [2]:
# setup SymPy
from sympy import *
init_printing()
Vector = Matrix

# setup plotting
#%matplotlib inline
%matplotlib notebook
import matplotlib.pyplot as mpl
from util.plot_helpers import plot_vec, plot_vecs, plot_line, plot_plane, autoscale_arrows

Two dimensions


In [2]:
a,b, c,d = symbols('a b  c d')

# Consider the area of the parallelogram with sides:
u1 = Vector([a,b])
u2 = Vector([c,d])

# We can compute the area of the parallelogram by computing the determinant of
A = Matrix([[a,b],
            [c,d]])
A.det()


Out[2]:
$$a d - b c$$

Cavalieri's principle

Cavalieri's principle says that shearing $\vec{u}_1$ along the direction of $\vec{u}_2$ does not change the area of the parallelogram. Mathematically, we have

$$ D(\vec{u}_1, \ \vec{u}_2) = D(\vec{u}_1 - \alpha \vec{u}_2, \ \vec{u}_2). $$
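
As a quick sanity check (my addition, not part of the original derivation), we can verify this shear invariance symbolically: subtracting any multiple $\alpha$ of $\vec{u}_2$ from $\vec{u}_1$ leaves the determinant unchanged.

In [ ]:
# shear u1 by a generic multiple of u2 and check the determinant is unchanged
alpha = symbols('alpha')
A_sheared = Matrix([[a - alpha*c, b - alpha*d],
                    [c,           d          ]])
expand(A_sheared.det())   # -> a*d - b*c, independent of alpha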

In [3]:
# choose alpha so A's top right entry will be zero
alpha = symbols('alpha')

alpha = b/d
A[0,:] = A[0,:] - alpha*A[1,:]
A


Out[3]:
$$\left[\begin{matrix}a - \frac{b c}{d} & 0\\c & d\end{matrix}\right]$$

Computing the area is now the same as computing the product of the entries on the diagonal:


In [4]:
A[0,0]*A[1,1]


Out[4]:
$$d \left(a - \frac{b c}{d}\right)$$

In [5]:
simplify( A[0,0]*A[1,1] )


Out[5]:
$$a d - b c$$

Intuition: the coefficient $\alpha$ encodes something very important about the "alternating multilinear" structure that determinants and cross products embody. In words, the choice of $\alpha$ and the resulting expression $a-\frac{bc}{d}$ correspond to what happens to the first component of $\vec{u}_1$ when we make its second component zero.

(Yeah, I know, handwavy like crazy, but better than nothing.)
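
For completeness, here is a small symbolic check (my addition, using nothing beyond the SymPy setup above) of the two properties the phrase "alternating multilinear" refers to: swapping the rows flips the sign of the determinant, and the determinant is linear in each row separately.

In [ ]:
# alternating: swapping the two rows flips the sign of the determinant
M  = Matrix([[a, b], [c, d]])
Ms = Matrix([[c, d], [a, b]])
print(expand(M.det() + Ms.det()))     # -> 0

# multilinear: scaling one row scales the determinant by the same factor
t = symbols('t')
Mt = Matrix([[t*a, t*b], [c, d]])
print(expand(Mt.det() - t*M.det()))   # -> 0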

Three dimensions


In [6]:
a,b,c, d,e,f, g,h,i = symbols('a b c  d e f  g h i')

# Consider the volume of the parallelepiped with sides:
u1 = Vector([a,b,c])
u2 = Vector([d,e,f])
u3 = Vector([g,h,i])

# We can compute the volume of the parallelepiped by computing the determinant of
A = Matrix([[a,b,c],
            [d,e,f],
            [g,h,i]])
A.det()


Out[6]:
$$a e i - a f h - b d i + b f g + c d h - c e g$$

Cavalieri's principle

This principle leads to the following property of the determinant:

$$ D(\vec{u}_1, \ \vec{u}_2, \ \vec{u}_3) = D(\vec{u}_1, \ \vec{u}_2- k \vec{u}_3, \ \vec{u}_3) = D(\vec{u}_1 -\ell\vec{u}_2 - m\vec{u}_3 , \ \vec{u}_2-k \vec{u}_3, \ \vec{u}_3). $$

In particular, we make the following choice of coefficients:

$$ D(\vec{u}_1, \vec{u}_2, \vec{u}_3) = D(\vec{u}_1 - \beta \vec{u}_3 - \gamma(\vec{u}_2 - \alpha\vec{u}_3), \ \vec{u}_2 - \alpha\vec{u}_3, \ \vec{u}_3). $$

Choosing the right coefficients $\alpha$, $\beta$, and $\gamma$ can transform the matrix $A$ into a lower triangular form, which will make computing the determinant easier.
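
Before committing to specific values, we can check the identity above for arbitrary coefficients (a sketch of my own, using temporary names A0 and B so as not to disturb A):

In [ ]:
# generic shear operations on the rows do not change the determinant
alpha, beta, gamma = symbols('alpha beta gamma')
A0 = Matrix([[a,b,c],
             [d,e,f],
             [g,h,i]])
B = A0.copy()
B[1,:] = B[1,:] - alpha*B[2,:]                 # u2 -> u2 - alpha*u3
B[0,:] = B[0,:] - beta*B[2,:] - gamma*B[1,:]   # u1 -> u1 - beta*u3 - gamma*(u2 - alpha*u3)
simplify(B.det() - A0.det())                   # -> 0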


In [7]:
alpha, beta, gamma = symbols('alpha beta gamma')

A


Out[7]:
$$\left[\begin{matrix}a & b & c\\d & e & f\\g & h & i\end{matrix}\right]$$

In [8]:
# first, get rid of f by subtracting alpha times the third row from the second row
alpha = f/i
A[1,:]= A[1,:] - alpha*A[2,:]
A


Out[8]:
$$\left[\begin{matrix}a & b & c\\d - \frac{f g}{i} & e - \frac{f h}{i} & 0\\g & h & i\end{matrix}\right]$$

In [9]:
# second, get rid of c by subtracting beta times the third row from the first row
beta = c/i
A[0,:]= A[0,:] - beta*A[2,:]
A


Out[9]:
$$\left[\begin{matrix}a - \frac{c g}{i} & b - \frac{c h}{i} & 0\\d - \frac{f g}{i} & e - \frac{f h}{i} & 0\\g & h & i\end{matrix}\right]$$

In [10]:
# third, get rid of b - ch/i by subtracting gamma times the second row from the first row
gamma = A[0,1]/A[1,1]
A[0,:]= A[0,:] - gamma*A[1,:]
A


Out[10]:
$$\left[\begin{matrix}a - \frac{c g}{i} - \frac{\left(b - \frac{c h}{i}\right) \left(d - \frac{f g}{i}\right)}{e - \frac{f h}{i}} & 0 & 0\\d - \frac{f g}{i} & e - \frac{f h}{i} & 0\\g & h & i\end{matrix}\right]$$

Starting from the first row: the matrix is now lower triangular, so the first row vector points along the $x$-axis and its length is A[0,0]. The area of the parallelogram formed by the first two rows is A[0,0]*A[1,1] (the entry A[1,0] only shears along the $x$-axis, so we can ignore it), and the overall volume is the product of the three diagonal entries:


In [11]:
A[0,0]*A[1,1]*A[2,2]


Out[11]:
$$i \left(e - \frac{f h}{i}\right) \left(a - \frac{c g}{i} - \frac{\left(b - \frac{c h}{i}\right) \left(d - \frac{f g}{i}\right)}{e - \frac{f h}{i}}\right)$$

In [12]:
simplify( A[0,0]*A[1,1]*A[2,2] )


Out[12]:
$$a e i - a f h - b d i + b f g + c d h - c e g$$
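
As an aside (my addition), SymPy can carry out an equivalent sequence of row operations for us: the LU decomposition yields an upper triangular factor $U$, and for a matrix with generic symbolic entries (no row swaps needed) the product of $U$'s diagonal entries is again the determinant.

In [ ]:
# equivalent route via LU decomposition: det = product of U's diagonal entries
A0 = Matrix([[a,b,c],
             [d,e,f],
             [g,h,i]])
L, U, perm = A0.LUdecomposition()
simplify(U[0,0]*U[1,1]*U[2,2] - A0.det())   # -> 0 (no row swaps for generic entries)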

In [13]:
# I still don't know how to motivate the recursive formula except to say it turns out that way...

# I tried to think about decomposing the problem into subparts,
#  but I cannot motivate det(Ai + Aj + Ak) != det(Ai) + det(Aj) + det(Ak),
#  where Ai is the same as A but with A[0,1] and A[0,2] set to zero.
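
One way to connect this to the recursive (cofactor) formula, sketched here as my own addition rather than part of the original notes: the determinant is linear in each row, so if we split only the first row of A as [a,b,c] = [a,0,0] + [0,b,0] + [0,0,c] (keeping the other rows fixed, rather than adding whole matrices), the determinant splits into three pieces.

In [ ]:
# multilinearity in the first row: det(A) = det(Ai) + det(Aj) + det(Ak)
A0 = Matrix([[a,b,c],
             [d,e,f],
             [g,h,i]])
Ai = A0.copy(); Ai[0,1] = 0; Ai[0,2] = 0    # first row [a,0,0]
Aj = A0.copy(); Aj[0,0] = 0; Aj[0,2] = 0    # first row [0,b,0]
Ak = A0.copy(); Ak[0,0] = 0; Ak[0,1] = 0    # first row [0,0,c]
expand(Ai.det() + Aj.det() + Ak.det() - A0.det())   # -> 0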

In [3]:
# draw the parallelepiped spanned by u1, u2, and u3 by plotting all twelve of its edges
u1 = Vector([3,0,0])
u2 = Vector([2,2,0])
u3 = Vector([3,3,3])

# the three edges that start at the origin
plot_vecs(u1,u2,u3)

# the nine remaining edges, translated to the other vertices
plot_vec(u1, at=u2, color='k')
plot_vec(u2, at=u1, color='b')
plot_vec(u1, at=u2+u3, color='k')
plot_vec(u2, at=u1+u3, color='b')
plot_vec(u1, at=u3, color='k')
plot_vec(u2, at=u3, color='b')
plot_vec(u3, at=u1, color='g')
plot_vec(u3, at=u2, color='g')
plot_vec(u3, at=u1+u2, color='g')

autoscale_arrows()



In [ ]: