In [96]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
In [97]:
M = np.array([[3, -2], [1, 0]])
w, V = np.linalg.eig(M)
In [98]:
w
Out[98]:
In [99]:
V
Out[99]:
Verify the property that $$\mathbf{M} \mathbf{v} = \lambda \mathbf{v} $$
In [100]:
V[:, 0]
Out[100]:
In [101]:
3 * V[:, 0]  # a scalar multiple of an eigenvector is still an eigenvector
Out[101]:
In [102]:
for i in range(2):
    print('i = {}'.format(i+1))
    print('Left = ', (M @ V[:, i]).reshape(1, 2))
    print('Right = ', (w[i] * V[:, i]).reshape(1, 2))
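The same check in matrix form (a small added cell): the eigen-decomposition satisfies $\mathbf{M} \mathbf{V} = \mathbf{V} \, \mathrm{diag}(\mathbf{w})$.
In [ ]:
# added check: verify M V = V diag(w) in one shot
np.allclose(M @ V, V @ np.diag(w))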
In [103]:
U = np.array([[1, 0], [0, 1]])
V = M @ U
V
Out[103]:
In [104]:
# columns are the 9 points of a 3x3 grid with coordinates in {-1, 0, 1}
U2 = np.hstack([np.array([i-1, j-1]).reshape(2, 1) for i in range(3) for j in range(3)])
U2
Out[104]:
In [105]:
V2 = M @ U2
V2
Out[105]:
In [106]:
plt.plot(U2[0,:], U2[1,:], 'bx')
plt.title('original space')
plt.draw()
plt.show()
In [107]:
plt.plot(V2[0,:], V2[1,:], 'r+')
plt.title('new space')
plt.draw()
plt.show()
In [108]:
plt.plot(U2[0,:], U2[1,:], 'bx')
plt.plot(V2[0,:], V2[1,:], 'r+')
plt.title('plot together')
plt.draw()
plt.show()
With some more effort, one can draw the original coordinate system and the transformed one in a more visual way; a sketch of one approach follows. This will hopefully reinforce the idea that matrix multiplication represents a linear transformation.
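As a small sketch of one such visualization (not part of the original notebook): draw a few grid lines of the original coordinate system and their images under $\mathbf{M}$ side by side.
In [ ]:
# sketch: grid lines with coordinates in [-1, 1] and their images under M
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for t in np.linspace(-1, 1, 3):
    h = np.vstack([np.linspace(-1, 1, 20), np.full(20, t)])  # horizontal line
    v = np.vstack([np.full(20, t), np.linspace(-1, 1, 20)])  # vertical line
    for line in (h, v):
        axes[0].plot(line[0], line[1], 'b-', lw=0.5)
        img = M @ line  # image of the grid line under the transformation
        axes[1].plot(img[0], img[1], 'r-', lw=0.5)
axes[0].set_title('original grid')
axes[1].set_title('transformed grid')
plt.show()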
In [109]:
from scipy import linalg
In [110]:
A = np.array([ [5, 3], [0, -1], [12, 1], [4, 2] ])
A
Out[110]:
We perform only a reduced SVD: $$\mathbf{A} = \mathbf{U} \mathbf{S} \mathbf{V}^{\top}, $$ so that $\mathbf{U}$ has the same shape as $\mathbf{A}$.
In [111]:
U, s, Vt = linalg.svd(A, full_matrices=False)
S = np.diag(s)
V = Vt.T
In [112]:
U
Out[112]:
In [113]:
s
Out[113]:
In [114]:
S
Out[114]:
In [115]:
Vt
Out[115]:
In [116]:
V
Out[116]:
We demonstrate a few useful properties of the SVD:
In [117]:
print(U.shape, S.shape, Vt.shape)
In [118]:
reconA = np.round(U @ S @ Vt, 5)
reconA
Out[118]:
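Another defining property worth a quick check (a small added cell): the columns of $\mathbf{U}$ and $\mathbf{V}$ are orthonormal.
In [ ]:
# added check: U and V have orthonormal columns
print(np.round(U.T @ U, 5))  # should be the 2x2 identity
print(np.round(V.T @ V, 5))  # should be the 2x2 identity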
Another view of the SVD is as a decomposition of the original matrix into a sum of rank-1 matrices of decreasing "importance", i.e., $$ \mathbf{A} = \sum_{i} \sigma_i \left( \mathbf{u}_i \otimes \mathbf{v}_i \right), $$ where $\mathbf{u}_i$ and $\mathbf{v}_i$ are the $i$-th columns of $\mathbf{U}$ and $\mathbf{V}$, and $\otimes$ is the vector outer product, i.e., $$ \mathbf{u} \otimes \mathbf{v} = \mathbf{u} \mathbf{v}^{\top} $$
In the following, we will approximate $\mathbf{A}$ by only one summand.
In [121]:
U1 = U[:, 0]
U1
Out[121]:
In [127]:
s1 = s[0]
s1
Out[127]:
In [125]:
Vt1 = Vt[0, :]
Vt1
Out[125]:
In [129]:
reconA1 = np.round(s1 * np.outer(U1, Vt1), 5)
reconA1
Out[129]:
In [130]:
A
Out[130]:
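As an added sanity check (using the variables from above), summing all the rank-1 terms recovers $\mathbf{A}$ exactly:
In [ ]:
# added check: the full sum of rank-1 outer products reconstructs A
np.round(sum(s[i] * np.outer(U[:, i], V[:, i]) for i in range(len(s))), 5)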
We manually create a matrix $\mathbf{A}$ whose columns, except the last, are heavily correlated.
In [159]:
a = np.random.rand(8)
b = np.random.rand(8)
sigma = 0.01
# four noisy copies of a (heavily correlated), plus b as the last row
At = np.array([a + sigma * np.random.randn(8) for _ in range(4)])
At = np.vstack((At, b))
A = At.T  # transpose so the correlated vectors become columns
print(a)
print(b)
print()
print(A)
In [172]:
def approximate_one(A):
    U, s, Vt = linalg.svd(A, full_matrices=False)
    print('singular values are: \n', s)
    # keep only the leading singular triplet: the best rank-1 approximation
    U1 = U[:, 0]
    s1 = s[0]
    Vt1 = Vt[0, :]
    A1 = s1 * np.outer(U1, Vt1)
    return A1
In [173]:
A1 = approximate_one(A)
A1
Out[173]:
In [174]:
print(A1 - A)
In [175]:
[ np.sum( (A1-A)[:, i] ) for i in range(A.shape[1]) ]
Out[175]:
As we can see, and as expected, the error is small everywhere except in the last column.
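Since signed sums can cancel, a column-wise norm (a small added check) is a more faithful error measure; it tells the same story.
In [ ]:
# added check: per-column error norms instead of signed sums
[np.round(np.linalg.norm((A1 - A)[:, i]), 5) for i in range(A.shape[1])]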
Notice that scipy has two versions of SVD: scipy.linalg.svd and scipy.sparse.linalg.svds. See http://fa.bianp.net/blog/2012/singular-value-decomposition-in-scipy/ for a detailed discussion of the differences.
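A minimal sketch of the sparse variant, reusing the $\mathbf{A}$ from above: scipy.sparse.linalg.svds computes only the k largest singular triplets, which is useful for large (sparse) matrices.
In [ ]:
# sketch: truncated SVD via the sparse interface (k largest triplets)
from scipy.sparse.linalg import svds
Uk, sk, Vtk = svds(A, k=1)
print(sk)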