PageRank

Ways to think about SVD

  • Data compression
  • SVD trades a large number of features for a smaller set of better features
  • Every matrix is diagonal, if you use the right change of basis on the domain and range (see the sketch below)
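
A quick numerical illustration of the last point (a minimal sketch; the matrix M and the names U, s, Vt below are just local examples): with the SVD $M = U \Sigma V^T$, changing basis by U on the range and V on the domain turns M into the diagonal matrix of singular values.

In [0]:
import numpy as np

# "Every matrix is diagonal in the right bases": U and V come from the SVD
M = np.random.rand(4, 3)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
np.allclose(U.T @ M @ Vt.T, np.diag(s))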

Relationship between SVD and Eigen Decomposition: the left-singular vectors of A are the eigenvectors of $AA^T$. The right-singular vectors of A are the eigenvectors of $A^T A$. The non-zero singular values of A are the square roots of the eigenvalues of $A^T A$ (and $A A^T$).

SVD is a generalization of eigen decomposition. Not all matrices have eigenvalues (only square matrices do), but ALL matrices have singular values.
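
A quick check of this relationship on a random matrix (a minimal numpy sketch):

In [0]:
M = np.random.rand(5, 3)
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# The nonzero singular values squared should equal the eigenvalues of M^T M
print(np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(M.T @ M))))

# Each right-singular vector is an eigenvector of M^T M with eigenvalue sigma^2
V = Vt.T
print(np.allclose((M.T @ M) @ V, V * s**2))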

A Hermitian matrix is one that is equal to its own conjugate transpose. In the case of real-valued matrices (which is all we are considering in this course), Hermitian means the same as symmetric.
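
For real matrices the conjugate transpose is just the transpose, so the two conditions coincide; a tiny check (the matrix is made up for illustration):

In [0]:
Sym = np.array([[2., 1.], [1., 3.]])
np.allclose(Sym, Sym.conj().T), np.allclose(Sym, Sym.T)   # Hermitian and symmetric agree for real matrices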

Relevant Theorems:

  • If A is symmetric, then the eigenvalues of A are real and $A = Q \Lambda Q^T$ (a quick numerical check follows this list)
  • If A is triangular, then its eigenvalues are equal to its diagonal entries
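
A minimal numerical check of the first theorem (np.linalg.eigh handles the symmetric case; the matrix below is random, purely for illustration):

In [0]:
B = np.random.rand(4, 4)
Sym = B + B.T                        # force symmetry
lam, Qs = np.linalg.eigh(Sym)
print(np.all(np.isreal(lam)))                        # eigenvalues are real
print(np.allclose(Sym, Qs @ np.diag(lam) @ Qs.T))    # A = Q Lambda Q^T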

  • A classic way to determine the relative importance of the vertices of a graph is to compute the principal eigenvector of its adjacency matrix, assigning each vertex the value of its component in that first eigenvector as a centrality score (a small sketch follows the references below)

  1. Wikipedia principal eigenvector - scikit-learn 0.19.1 documentation
  2. Eigenvector centrality - Wikipedia
  3. Power iteration - Wikipedia
  4. Katz centrality - Wikipedia
  5. PageRank - Wikipedia
  6. The PageRank algorithm: from theory to implementation - CSDN blog
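
A small sketch of this idea on a toy graph (the adjacency matrix below is invented purely for illustration):

In [0]:
# Toy undirected graph: vertex 0 is connected to every other vertex
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)

vals, vecs = np.linalg.eigh(adj)                 # symmetric, so eigh applies
centrality = np.abs(vecs[:, np.argmax(vals)])    # principal eigenvector components = scores
centrality                                       # vertex 0 should score highest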

In [0]:
#@title Power iteration
import numpy as np

def power_iteration(A, num_simulations):
    # Start from a random vector to reduce the chance
    # that it is orthogonal to the dominant eigenvector
    b_k = np.random.rand(A.shape[0])

    for _ in range(num_simulations):
        # calculate the matrix-by-vector product Ab
        b_k1 = np.dot(A, b_k)

        # calculate the norm
        b_k1_norm = np.linalg.norm(b_k1)

        # re-normalize the vector
        b_k = b_k1 / b_k1_norm

    return b_k

power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 100)
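
A quick sanity check of the result against numpy's dense eigensolver (a minimal sketch; eigenvectors are only determined up to sign):

In [0]:
P = np.array([[0.5, 0.5], [0.2, 0.8]])
b = power_iteration(P, 100)
vals, vecs = np.linalg.eig(P)
v = vecs[:, np.argmax(np.abs(vals))]       # dominant eigenvector from the dense solver
np.allclose(b, v) or np.allclose(b, -v)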

In [0]:
# Sparse matrix power method
import numpy as np
from scipy import sparse
def power_method(A, max_iter=100):
    n = A.shape[1]
    A = A.copy()   # work on a copy so the caller's matrix is not modified
    # Normalize each column to sum to 1 (A.indices holds the column index of each stored value)
    A.data /= np.take(A.sum(axis=0).A1, A.indices)

    scores = np.ones(n, dtype=np.float32) * np.sqrt(A.sum()/(n*n)) # initial guess
    for i in range(max_iter):
        scores = A @ scores
        nrm = np.linalg.norm(scores)
        scores /= nrm
        print(nrm)

    return scores

In [0]:
x = np.matrix(np.arange(16).reshape((4,4)))   # power_method expects a square matrix
a = sparse.csr_matrix(x, dtype=np.float32)
power_method(a, max_iter=10)

In [0]:
np.random.randn(2, 4)

  • numpy.matrix.A1

Return self as a flattened ndarray. Equivalent to np.asarray(x).ravel()

  1. numpy.matrix.A1 — NumPy v1.14 Manual

In [0]:
x = np.matrix(np.arange(12).reshape((3,4)))

In [0]:
x

In [0]:
x.A1

In [0]:
x.ravel()

In [0]:
x.A1.shape, x.ravel().shape

How to normalize a sparse matrix


In [0]:
from scipy import sparse
S = sparse.csr_matrix(np.array([[1,2],[3,4]]))
S

In [0]:
Sr = S.sum(axis=0).A1
Sr

In [0]:
S.indices

In [0]:
S.data

In [0]:
S.data / np.take(Sr, S.indices)

In [0]:
np.take(Sr, S.indices)
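
Putting it together: dividing each stored value by the sum of its column (looked up through the column indices) makes every column of the normalized matrix sum to 1. A minimal sketch:

In [0]:
Sn = S.astype(np.float64)          # float copy so the in-place division works
Sn.data /= np.take(Sn.sum(axis=0).A1, Sn.indices)
Sn.todense(), Sn.sum(axis=0)       # each column now sums to 1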

QR decomposition


In [0]:
from numba import jit
@jit()
def pure_qr(A, max_iter=50000):
    Ak = np.copy(A)
    n = A.shape[0]
    QQ = np.eye(n)
    for k in range(max_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
        QQ = QQ @ Q
        if k % 100 == 0:
            print(Ak)
            print("\n")
    return Ak, QQ

In [0]:
n = 6
A = np.random.rand(n,n)
AT = A @ A.T

In [0]:
Ak, Q = pure_qr(A)

In [0]:
# Eigenvalues
np.linalg.eigvals(A)

In [0]:
# Q is orthogonal
np.allclose(np.eye(n), Q @ Q.T), np.allclose(np.eye(n), Q.T @ Q)
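
Each QR step is a similarity transform, so after the loop $A_k = Q^T A Q$ with Q the accumulated orthogonal factor. Real eigenvalues of A appear on the diagonal of $A_k$, while complex conjugate pairs show up as $2 \times 2$ blocks, so only rough agreement with `np.linalg.eigvals` is expected here:

In [0]:
# A should be (approximately) recovered from Q @ Ak @ Q.T
print(np.linalg.norm(A - Q @ Ak @ Q.T))
# Real eigenvalues of A appear on the diagonal of Ak
print(np.diag(Ak))
print(np.linalg.eigvals(A))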

The Arnoldi Iteration is two things:

  1. the basis of many of the iterative algorithms of numerical linear algebra
  2. a technique for finding eigenvalues of nonhermitian matrices (Trefethen, page 257)

How Arnoldi Locates Eigenvalues

  1. Carry out Arnoldi iteration
  2. Periodically calculate the eigenvalues (called Arnoldi estimates or Ritz values) of the Hessenberg H, using the QR algorithm
  3. Check whether these values are converging. If they are, they're probably eigenvalues of A (a Ritz-value check appears after the Arnoldi run below).

In [0]:
# Decompose square matrix A @ Q ~= Q @ H
def arnoldi(A):
    m, n = A.shape
    assert(n <= m)
    
    # Hessenberg matrix
    H = np.zeros([n+1,n]) #, dtype=np.float64)
    # Orthonormal columns
    Q = np.zeros([m,n+1]) #, dtype=np.float64)
    # 1st col of Q is a random column with unit norm
    b = np.random.rand(m)
    Q[:,0] = b / np.linalg.norm(b)
    for j in range(n):
        v = A @ Q[:,j]
        for i in range(j+1):
            #This comes from the formula for projection of v onto q.
            #Since columns q are orthonormal, q dot q = 1
            H[i,j] = np.dot(Q[:,i], v)
            v = v - (H[i,j] * Q[:,i])
        H[j+1,j] = np.linalg.norm(v)
        Q[:,j+1] = v / H[j+1,j]
        
        # printing this to see convergence, would be slow to use in practice
        print(np.linalg.norm(A @ Q[:,:-1] - Q @ H))
    return Q[:,:-1], H[:-1,:]

In [0]:
Q, H = arnoldi(A)

In [0]:
H

In [0]:
Q
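
Because the loop above ran the full n steps, H is a complete Hessenberg reduction of A, so its eigenvalues (the Ritz values) should essentially reproduce the eigenvalues of A; with fewer steps they would only approximate the extreme ones.

In [0]:
# Ritz values vs. the true eigenvalues of A
np.sort(np.linalg.eigvals(H)), np.sort(np.linalg.eigvals(A))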

In [0]:
n = 10
A0 = np.random.rand(n,n)
A = A0 @ A0.T

In [0]:
np.linalg.eigvals(A)
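
Since $A = A_0 A_0^T$ is symmetric positive semidefinite, its eigenvalues should all be real and non-negative (up to rounding); `eigvalsh` exploits the symmetry:

In [0]:
lam = np.linalg.eigvalsh(A)
np.all(lam >= 0), lam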