# Annotating Public Resources Using IPython Notebook

## A Linear Algebra Example

*(Embedded YouTube video: Khan Academy on finding eigenvectors of a matrix.)*

Yes, you may embed YouTube videos in your IPython Notebooks, meaning you may follow up a presentation with some interactive example code (or static code for display purposes).

Consider the Khan Academy video above. He's looking for eigenvectors of a matrix and follows some time-worn and trusted algebraic techniques.

NumPy and SciPy come with their own linear algebra components. NumPy's matrix object will transpose, for example, and below we test the example Hermitian matrix from Wikipedia, showing it obeys the definition of Hermitian in equalling its own conjugate transpose.

A and A.H may not look the same at first glance, but remember the signed zero terms (e.g. `-0.j` versus `0.j`) don't matter.

``````

In :

import numpy as np
from scipy import linalg

#  https://en.wikipedia.org/wiki/Hermitian_matrix
A = np.matrix('2, 2+1j, 4; 2-1j, 3, 1j; 4, -1j, 1')
assert (A == A.H).all()  # expect True
print("A", A, sep='\n')
print("A.H", A.H, sep='\n')

``````
``````

A
[[ 2.+0.j  2.+1.j  4.+0.j]
[ 2.-1.j  3.+0.j  0.+1.j]
[ 4.+0.j -0.-1.j  1.+0.j]]
A.H
[[ 2.-0.j  2.+1.j  4.-0.j]
[ 2.-1.j  3.-0.j -0.+1.j]
[ 4.-0.j  0.-1.j  1.-0.j]]

``````
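
A useful aside (not in the original notebook): a Hermitian matrix always has real eigenvalues, and SciPy's `linalg.eigvalsh` is specialized for Hermitian input, returning those eigenvalues as real numbers directly. A quick sketch using the same Wikipedia matrix:

```python
import numpy as np
from scipy import linalg

# The same Hermitian matrix from the Wikipedia article
A = np.array([[2, 2 + 1j, 4],
              [2 - 1j, 3, 1j],
              [4, -1j, 1]])

# eigvalsh assumes Hermitian input and returns real eigenvalues
evals = linalg.eigvalsh(A)
print(evals)

# Sanity checks: the result is real, and eigenvalues sum to the trace (2 + 3 + 1 = 6)
assert np.isrealobj(evals)
assert np.isclose(evals.sum(), 6.0)
```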

Now let's return to Khan's example. He actually starts his solution in an earlier video, defining matrix A and seeking eigenvalues as a first step...

*(Embedded YouTube video: Khan Academy's earlier video defining matrix A and solving for its eigenvalues.)*
``````

In :

A = np.array(
    [[-1, 2, 2],
     [2, 2, -1],
     [2, -1, 2]])      # ordinary numpy array
M_A = np.matrix(A)     # special matrix version
la, v = linalg.eig(A)  # get eigenvalues la and eigenvectors v
l1, l2, l3 = list(map(lambda c: c.real, la))
print("Eigenvalues  :", l1, l2, l3)
print("Eigenvector 1:", v[:,0])
print("Eigenvector 2:", v[:,1])
print("Eigenvector 3:", v[:,2])

``````
``````

Eigenvalues  : -3.0 3.0 3.0
Eigenvector 1: [-0.81649658  0.40824829  0.40824829]
Eigenvector 2: [ 0.57735027  0.57735027  0.57735027]
Eigenvector 3: [-0.27602622 -0.89708523  0.34503278]

``````

Of course the SciPy docs come with their own documentation on how the eigenvalues and eigenvectors are found.
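
Khan's algebraic route — finding the roots of the characteristic polynomial det(λI - A) = 0 — can also be mimicked numerically. The sketch below is an aside, not part of the original notebook, and uses `np.poly` and `np.roots`; in practice `linalg.eig` is preferred, since polynomial root-finding is numerically fragile for larger matrices.

```python
import numpy as np

A = np.array([[-1, 2, 2],
              [2, 2, -1],
              [2, -1, 2]])

# np.poly returns the characteristic polynomial's coefficients,
# here lambda^3 - 3*lambda^2 - 9*lambda + 27, i.e. (lambda + 3)(lambda - 3)^2
coeffs = np.poly(A)
print(coeffs)

# Its roots are the eigenvalues: -3, and 3 with multiplicity two
eigenvalues = np.sort(np.roots(coeffs).real)
print(eigenvalues)
```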

Are the above solutions and Khan's really the same?

We may show that the 2nd and 3rd solutions obey the rule:

{a [1/2, 0, 1] + b [1/2, 1, 0], with a, b both real}

per Khan's algebraic solution.

To show this, divide Eigenvector 3 = [x, y, z] through by x to get [1.0, 3.2500000543426637, -1.2500000181142212], i.e. the ratios [4.0, 13.0, -5.0]. So a = -5, b = 13 in Khan's equation of the eigenspace (back to top video). Likewise [1, 1, 1] (the same ratios as Eigenvector 2) is obtained with a = b = 1.
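
The arithmetic above is easy to check numerically. This sketch (an addition, not from the original notebook) builds Khan's combination with a = -5, b = 13 and confirms it lands in the λ = 3 eigenspace, along with the a = b = 1 case:

```python
import numpy as np

A = np.array([[-1, 2, 2],
              [2, 2, -1],
              [2, -1, 2]])

# Khan's eigenspace for lambda = 3: a*[1/2, 0, 1] + b*[1/2, 1, 0]
a, b = -5, 13
khan = a * np.array([0.5, 0.0, 1.0]) + b * np.array([0.5, 1.0, 0.0])
print(khan)  # [ 4. 13. -5.] -- the ratios found above

# Membership in the lambda = 3 eigenspace means A @ khan == 3 * khan
assert np.allclose(A @ khan, 3 * khan)

# a = b = 1 gives [1, 1, 1], the same ratios as Eigenvector 2
ones = np.ones(3)
assert np.allclose(A @ ones, 3 * ones)
```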

Now say you want to verify that the original matrix, applied to any of the above eigenvectors, simply scales each one by a constant factor, the corresponding eigenvalue (the defining property of an eigenvector):

``````

In :

eigen1 = v[:,0].reshape(3, 1)
print("Scaling E1", (M_A * eigen1)/eigen1,  sep="\n")  # show the scale factor
eigen2 = v[:,1].reshape(3, 1)
print("Scaling E2", (M_A * eigen2)/eigen2,  sep="\n")  # show the scale factor
eigen3 = v[:,2].reshape(3, 1)
print("Scaling E3", (M_A * eigen3)/eigen3,  sep="\n")  # show the scale factor

``````
``````

Scaling E1
[[-3.]
[-3.]
[-3.]]
Scaling E2
[[ 3.]
[ 3.]
[ 3.]]
Scaling E3
[[ 3.]
[ 3.]
[ 3.]]

``````
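
The component-wise division above works here because none of the eigenvector entries happen to be zero. A more robust variant of the same check (a suggested alternative, not from the original notebook) compares A applied to each eigenvector against the eigenvalue times that eigenvector with `np.allclose`:

```python
import numpy as np
from scipy import linalg

A = np.array([[-1, 2, 2],
              [2, 2, -1],
              [2, -1, 2]])
la, v = linalg.eig(A)

# Each column v_i of v should satisfy A v_i = lambda_i v_i
for lam, vec in zip(la, v.T):
    assert np.allclose(A @ vec, lam * vec)

print("All three eigenpairs satisfy A v = lambda v")
```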