Latent Semantic Analysis (LSA) is a method for reducing the dimensionality of documents treated as bags of words. It is used for document classification, clustering, and retrieval; for example, LSA can be used to search for prior art given a new patent application. In this homework, we will implement a small library for simple latent semantic analysis as a practical application of the SVD. The ideas are very similar to PCA.
We will implement a toy example of LSA to get familiar with the ideas. If you want to use LSA or similar methods for statistical language analysis, the most efficient Python library is probably gensim; it also provides an online algorithm, i.e. the training information can be updated continuously. Other useful functions for processing natural language can be found in the Natural Language Toolkit (NLTK).
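As a taste of what gensim provides, a minimal LSI (gensim's name for LSA) sketch might look like the following; the two-document corpus and the choice of num_topics=2 are made up purely for illustration:
from gensim import corpora, models
texts = [doc.lower().split() for doc in ("human machine interface", "graph of trees")]  # toy corpus
dictionary = corpora.Dictionary(texts)                           # term <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]            # bag-of-words vectors
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)  # LSA with 2 latent dimensions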
Note: The SVD from scipy.linalg performs a full decomposition, which is inefficient since we only need the first k singular values. If the SVD from scipy.linalg is too slow, please use the sparsesvd function from the sparsesvd package to perform the SVD instead. You can install it in the usual way with
!pip install sparsesvd
Then import the following
from sparsesvd import sparsesvd
from scipy.sparse import csc_matrix
and use as follows
sparsesvd(csc_matrix(M), k=10)
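Note that sparsesvd returns the triple (ut, s, vt) with the left singular vectors already transposed (ut has shape k × m), so a rank-k reconstruction looks like
ut, s, vt = sparsesvd(csc_matrix(M), k=10)
M_k = ut.T.dot(np.diag(s)).dot(vt)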
In [1]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy.linalg as la
import scipy.stats as st
Exercise 1 (10 points). Calculating pairwise distance matrices.
Suppose we want to construct a distance matrix between the rows (or columns) of a matrix. For example, given the matrix
M = np.array([[1,2,3],[4,5,6]])
the distance matrix using Euclidean distance as the measure would be
[[ 0.000 1.414 2.828]
 [ 1.414 0.000 1.414]
 [ 2.828 1.414 0.000]]
if $M$ is treated as a collection of column vectors.
Write functions to calculate the pairwise-distance matrix given a matrix $M$ and an arbitrary distance function. Each function should have the following signature:
def func_name(M, distance_func):
pass
For the broadcast versions (3 and 4), try to avoid using transposition (but if you get stuck, there will be no penalty for using transposition). Check that all four functions give the same result when applied to the given matrix $M$.
In [2]:
# Questions 1.1 to 1.5
def squared_euclidean_norm(u, axis=-1):
return (u**2).sum(axis)
def euclidean_norm(u, axis=-1):
return np.sqrt(squared_euclidean_norm(u, axis))
def squared_euclidean_dist(u, v, axis=-1):
"""Returns squared Euclidean distance between two vectors."""
return squared_euclidean_norm(u-v, axis)
def euclidean_dist(u, v, axis=-1):
    """Returns Euclidean distance between two vectors."""
    return np.sqrt(squared_euclidean_dist(u, v, axis))
def cosine_dist(u, v, axis=-1):
    """Returns cosine distance (1 - cosine similarity) between two vectors."""
    # non-broadcasting version: 1 - np.dot(u, v)/(la.norm(u)*la.norm(v))
    return 1 - (u * v).sum(axis)/(euclidean_norm(u, axis) * euclidean_norm(v, axis))
def loop_row_pdist(M, f):
    """Returns pairwise-distance matrix assuming M consists of row vectors."""
    nrows, ncols = M.shape
    return np.array([[f(M[u,:], M[v,:]) for u in range(nrows)]
                     for v in range(nrows)])
def loop_col_pdist(M, f):
    """Returns pairwise-distance matrix assuming M consists of column vectors."""
    nrows, ncols = M.shape
    return np.array([[f(M[:,u], M[:,v]) for u in range(ncols)]
                     for v in range(ncols)])
def broadcast_row_pdist(M, f):
    """Returns pairwise-distance matrix assuming M consists of row vectors."""
    return f(M[None,:,:], M[:,None,:])
def broadcast_col_pdist(M, f):
    """Returns pairwise-distance matrix assuming M consists of column vectors."""
    return f(M[:,None,:], M[:,:,None], axis=0)
In [4]:
# Q1 checking results
M = np.array([[1,2,3],[4,5,6]])
for dist in (cosine_dist, euclidean_dist, squared_euclidean_dist):
print(loop_row_pdist(M, dist), '\n')
print(broadcast_row_pdist(M, dist), '\n')
print(loop_col_pdist(M, dist), '\n')
print(broadcast_col_pdist(M, dist))
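As an optional sanity check, the same distances can be obtained from scipy.spatial.distance.cdist (its 'cosine' metric is also $1 - \cos\theta$):
from scipy.spatial.distance import cdist
assert np.allclose(loop_row_pdist(M, euclidean_dist), cdist(M, M, metric='euclidean'))
assert np.allclose(loop_col_pdist(M, cosine_dist), cdist(M.T, M.T, metric='cosine'))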
Exercise 2 (20 points). Write 3 functions to calculate the term frequency (tf), the inverse document frequency (idf) and the product (tf-idf). Each function should take a single argument docs, which is a dictionary of (key=identifier, value=document text) pairs, and return an appropriately sized array. Convert '-' to ' ' (space), remove punctuation, convert text to lowercase and split on whitespace to generate a collection of terms from the document text.
Print the table of tf-idf values for the following document collection
s1 = "The quick brown fox"
s2 = "Brown fox jumps over the jumps jumps jumps"
s3 = "The the the lazy dog elephant."
s4 = "The the the the the dog peacock lion tiger elephant"
docs = {'s1': s1, 's2': s2, 's3': s3, 's4': s4}
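For instance, after preprocessing, s2 yields the terms ['brown', 'fox', 'jumps', 'over', 'the', 'jumps', 'jumps', 'jumps'], so its term-frequency column counts 'jumps' four times and each of the other terms once.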
In [11]:
# The tf() function is optional - it can also be coded directly into tfs()
# Question 2.1
def tf(doc):
    """Returns the number of times each term occurs in a document.
    We preprocess the document to strip punctuation and convert to lowercase.
    Terms are found by splitting on whitespace."""
    from collections import Counter
    from string import punctuation
    table = dict.fromkeys(map(ord, punctuation))
    terms = doc.lower().replace('-', ' ').translate(table).split()
    return Counter(terms)
def tfs(docs):
    """Create a term-frequency dataframe from a dictionary of documents."""
    df = pd.DataFrame({k: tf(v) for k, v in docs.items()}).fillna(0)
    return df
# Question 2.2
def idf(docs):
    """Find the inverse document frequency series from a dictionary of documents."""
    term_freq = tfs(docs)
    num_docs = len(docs)
    doc_freq = (term_freq > 0).sum(axis=1)
    return np.log(num_docs/(1 + doc_freq))
# Question 2.3
def tf_idf(docs):
    """Return the product of the term frequency and inverse document frequency."""
    return tfs(docs).mul(idf(docs), axis=0)
In [12]:
# Question 2.4
s1 = "The quick brown fox"
s2 = "Brown fox jumps over the jumps jumps jumps"
s3 = "The the the lazy dog elephant."
s4 = "The the the the the dog peacock lion tiger elephant"
docs = {'s1': s1, 's2': s2, 's3': s3, 's4': s4}
tf_idf(docs)
Out[12]:
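As an optional cross-check, scikit-learn's TfidfVectorizer builds a similar table. Its values will not match ours exactly, because scikit-learn uses a smoothed idf and l2-normalizes each document by default:
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer()
X = vec.fit_transform(docs.values())             # rows are documents, columns are terms
pd.DataFrame(X.toarray().T,                      # transpose so terms index the rows, as in tf_idf()
             index=vec.get_feature_names_out(),  # get_feature_names() on older scikit-learn versions
             columns=list(docs.keys()))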
Exercise 3 (20 points).
Write a function that takes a matrix $M$ and an integer $k$ as arguments, and reconstructs a reduced matrix using only the $k$ largest singular values. Use the scipy.linalg.svd function to perform the decomposition. This is the least squares approximation to the matrix $M$ in $k$ dimensions.
Apply the function you just wrote to the following term-frequency matrix for a set of $9$ documents using $k=2$ and print the reconstructed matrix $M'$.
M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 2, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1]])
Calculate the pairwise correlation matrix for the original matrix $M$ and the reconstructed matrix using $k=2$ singular values (you may use scipy.stats.spearmanr to do the calculations). Consider the first 5 documents as one group $G1$ and the last 4 as another group $G2$ (i.e. the first 5 and last 4 columns). What are the average within-group correlations for $G1$ and $G2$, and the average cross-group correlation for $G1$-$G2$, using either $M$ or $M'$? (Do not include self-correlations in the within-group calculations.)
In [13]:
# Question 3.1
def svd_projection(M, k):
    """Returns the matrix M reconstructed using only k singular values."""
U, s, V = la.svd(M, full_matrices=False)
s[k:] = 0
M_ = U.dot(np.diag(s).dot(V))
try:
return pd.DataFrame(M_, index=M.index, columns=M.columns)
except AttributeError:
return M_
In [14]:
# Question 3.2
M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 2, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1]])
Md = svd_projection(M, 2)
Md
Out[14]:
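By the Eckart-Young theorem the truncated SVD is the best rank-$k$ approximation in the Frobenius norm, so the reconstruction error should shrink monotonically as $k$ grows. A quick illustrative check (not part of the graded answer):
ks = range(1, 10)
errs = [la.norm(M - svd_projection(M, k)) for k in ks]  # Frobenius norm of the residual
plt.plot(list(ks), errs, '-o')
plt.xlabel('k')
plt.ylabel('reconstruction error');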
In [15]:
# Question 3.3
# Results for full-rank matrix (not graded - just here for comparison)
rho, pval = st.spearmanr(M)
# k=-1 keeps the strictly lower triangle, excluding the self-correlations on the diagonal
np.mean(rho[:5, :5][np.tril_indices_from(rho[:5, :5], -1)]), \
np.mean(rho[5:, 5:][np.tril_indices_from(rho[5:, 5:], -1)]), \
rho[5:, :5].mean()
Out[15]:
In [16]:
rho
Out[16]:
In [17]:
# Results after LSA (graded)
# G1/G1, G2/G2 and G1/G2 average correlation
rho, pval = st.spearmanr(Md)
np.mean(rho[:5, :5][np.tril_indices_from(rho[:5, :5], -1)]), \
np.mean(rho[5:, 5:][np.tril_indices_from(rho[5:, 5:], -1)]), \
rho[5:, :5].mean()
Out[17]:
In [18]:
rho
Out[18]:
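Since the same three averages are computed twice above, a small helper keeps the bookkeeping in one place (an optional refactoring sketch; the g1 argument is the size of the first group):
def group_corrs(rho, g1=5):
    """Return (G1 within, G2 within, G1-G2 cross) average correlations.
    k=-1 in tril_indices_from excludes the diagonal self-correlations."""
    within = lambda r: np.mean(r[np.tril_indices_from(r, -1)])
    return within(rho[:g1, :g1]), within(rho[g1:, g1:]), rho[g1:, :g1].mean()
group_corrs(st.spearmanr(Md)[0])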
Exercise 4 (40 points). Clustering with LSA
Begin by loading a PubMed database of selected article titles and abstracts using pickle:
import pickle
docs = pickle.load(open('pubmed.pic', 'rb'))
Create a tf-idf matrix for every term that appears at least once in any of the documents. What is the shape of the tf-idf matrix?
Perform SVD on the tf-idf matrix to obtain $U \Sigma V^T$ (often written as $T \Sigma D^T$ in this context with $T$ representing the terms and $D$ representing the documents). If we set all but the top $k$ singular values to 0, the reconstructed matrix is essentially $U_k \Sigma_k V_k^T$, where $U_k$ is $m \times k$, $\Sigma_k$ is $k \times k$ and $V_k^T$ is $k \times n$. Terms in this reduced space are represented by $U_k \Sigma_k$ and documents by $\Sigma_k V^T_k$. Reconstruct the matrix using the first $k=10$ singular values.
Use agglomerative hierarchical clustering with complete linkage to plot a dendrogram and comment on the likely number of document clusters with $k = 100$. Use the dendrogram function from SciPy.
Determine how similar each of the original documents is to the new document mystery.txt. Since $A = U \Sigma V^T$, transposing both sides gives $A^T = V \Sigma U^T$, and right-multiplying by $U \Sigma^{-1}$ (using the orthogonality of $U$) gives $V = A^T U \Sigma^{-1}$. This suggests that in order to map a new document into the same concept space, first find the tf-idf vector $v$ for the new document - this must contain all (and only) the terms present in the existing tf-idf matrix. Then the query vector $q$ is given by $v^T U_k \Sigma_k^{-1}$. Find the 10 documents most similar to the new document and the 10 most dissimilar.
In [25]:
# Question 4.1
import pickle
docs = pickle.load(open('pubmed.pic', 'rb'))
df = tf_idf(docs)
df.shape
Out[25]:
In [26]:
# Question 4.2
k = 10
T, s, D = la.svd(df)
print(T.shape, s.shape, D.shape, '\n')
df_10 = T[:,:k].dot(np.diag(s[:k]).dot(D[:k,:]))
assert(df.shape == df_10.shape)
df_10
Out[26]:
In [29]:
# Question 4.2 (alternative solution 1: setting unwanted singular values to zero)
T, s, D = la.svd(df, full_matrices=False)
print(T.shape, s.shape, D.shape, '\n')
s[10:] = 0
df_10 = T.dot(np.diag(s).dot(D))
assert(df.shape == df_10.shape)
df_10
Out[29]:
In [30]:
! pip install sparsesvd
In [32]:
# Question 4.2 (alternative solution 2 using sparsesvd)
from scipy.sparse import csc_matrix
from sparsesvd import sparsesvd
k = 10
T, s, D = sparsesvd(csc_matrix(df), k=k)
print(T.shape, s.shape, D.shape, '\n')
df_10 = T.T.dot(np.diag(s).dot(D))
assert(df.shape == df_10.shape)
print(df_10)
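All three versions of Question 4.2 should produce essentially the same matrix, since the rank-$k$ least squares approximation is unique when the singular values are distinct. For example, comparing against the function from Question 3.1:
df_10_dense = svd_projection(df, k)   # dense reconstruction from Question 3.1
np.allclose(df_10, df_10_dense)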
In [33]:
# Question 4.3
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist, squareform
plt.figure(figsize=(16,36))
T, s, D = sparsesvd(csc_matrix(df), k=100)
x = np.diag(s).dot(D).T
data_dist = pdist(x, metric='cosine')              # pairwise cosine distances between documents
data_link = linkage(data_dist, method='complete')  # complete linkage, as specified in the exercise
labels = [c[:40] for c in df.columns]
dendrogram(data_link, orientation='right', labels=labels);
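To turn the dendrogram into explicit cluster labels, SciPy's fcluster can cut the tree at a chosen number of flat clusters; the value 4 below is purely illustrative, so substitute whatever the dendrogram suggests:
from scipy.cluster.hierarchy import fcluster
cluster_labels = fcluster(data_link, t=4, criterion='maxclust')  # at most 4 flat clusters
np.bincount(cluster_labels)[1:]   # size of each cluster (labels start at 1)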
In [35]:
# Question 4.4
k = 100   # use the same k=100 concept space as the clustering step above
T, s, D = sparsesvd(csc_matrix(df), k=k)
doc = {'mystery': open('mystery.txt').read()}
terms = tf_idf(doc)
query_terms = df.join(terms).fillna(0)['mystery']
q = query_terms.T.dot(T.T.dot(np.diag(1.0/s)))           # map the query into the concept space: q = v^T U_k S_k^{-1}
ranked_docs = df.columns[np.argsort(cosine_dist(q, x))]  # ascending cosine distance: most similar first
print("Query article:", )
print(' '.join(line.strip() for line in doc['mystery'].splitlines()[:2]))
print()
print("Most similar")
print('='*80)
for i, title in enumerate(ranked_docs[:10]):
print('%03d' % i, title)
print()
print("Most dissimilar")
print('='*80)
for i, title in enumerate(ranked_docs[-10:]):
    print('%03d' % (len(docs) - 10 + i), title)
The PubMed documents were downloaded with the following script.
from Bio import Entrez, Medline
Entrez.email = "YOUR EMAIL HERE"
import pickle
try:
    docs = pickle.load(open('pubmed.pic', 'rb'))
except Exception as e:
    print(e)
    docs = {}

for term in ['plasmodium', 'diabetes', 'asthma', 'cytometry']:
    handle = Entrez.esearch(db="pubmed", term=term, retmax=50)
    result = Entrez.read(handle)
    handle.close()
    idlist = result["IdList"]
    handle2 = Entrez.efetch(db="pubmed", id=idlist, rettype="medline", retmode="text")
    result2 = Medline.parse(handle2)
    for record in result2:
        title = record.get("TI", None)
        abstract = record.get("AB", None)
        if title is None or abstract is None:
            continue
        docs[title] = '\n'.join([title, abstract])
        print(title)
    handle2.close()

pickle.dump(docs, open('pubmed.pic', 'wb'))