In [ ]:
%matplotlib inline
This is an example showing how scikit-learn can be used to cluster documents by topic using a bag-of-words approach. The example uses a scipy.sparse matrix to store the features instead of standard numpy arrays.
Two feature extraction methods can be used in this example:
TfidfVectorizer uses an in-memory vocabulary (a Python dict) to map the most frequent words to feature indices and hence compute a sparse word occurrence frequency matrix. The word frequencies are then reweighted using the Inverse Document Frequency (IDF) vector collected feature-wise over the corpus.
HashingVectorizer hashes word occurrences to a fixed-dimensional space, possibly with collisions. The word count vectors are then normalized so that each has an l2-norm equal to one (projected onto the Euclidean unit ball), which seems to be important for k-means to work in high-dimensional spaces.
HashingVectorizer does not provide IDF weighting, as it is a stateless model (its fit method does nothing). When IDF weighting is needed, it can be added by pipelining its output to a TfidfTransformer instance, as sketched below.
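As a rough sketch of that pipelining (the toy documents and the hashed_tfidf name below are made up for illustration; the non_negative parameter follows the older scikit-learn API used elsewhere in this notebook, which newer releases replace with alternate_sign=False):
In [ ]:
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.pipeline import make_pipeline

toy_docs = ["the cat sat on the mat", "the dog ate my homework"]

# HashingVectorizer is stateless (fit does nothing), so IDF weighting
# has to come from a downstream TfidfTransformer.
hashed_tfidf = make_pipeline(
    HashingVectorizer(n_features=2 ** 10, non_negative=True, norm=None),
    TfidfTransformer())
X_hashed = hashed_tfidf.fit_transform(toy_docs)
print(X_hashed.shape)  # (2, 1024)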
Two algorithms are demoed: ordinary k-means and its more scalable cousin minibatch k-means.
Additionally, latent semantic analysis (LSA) can be used to reduce dimensionality and discover latent patterns in the data.
Note that k-means (and minibatch k-means) is very sensitive to feature scaling, and that in this case the IDF weighting helps improve the quality of the clustering by quite a lot as measured against the "ground truth" provided by the class label assignments of the 20 newsgroups dataset.
This improvement is not visible in the Silhouette Coefficient, which is small for both, as this measure seems to suffer from the phenomenon called "Concentration of Measure" or "Curse of Dimensionality" for high-dimensional datasets such as text data. Other measures, such as V-measure and Adjusted Rand Index, are information-theoretic evaluation scores: because they are based only on cluster assignments rather than distances, they are not affected by the curse of dimensionality.
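A tiny illustration of that last point (the two label vectors below are made up): the Adjusted Rand Index and V-measure look only at how the points are grouped, not at cluster ids or distances, so a perfect grouping with permuted cluster ids still scores 1.0.
In [ ]:
from sklearn import metrics

truth     = [0, 0, 1, 1, 2, 2]
predicted = [2, 2, 0, 0, 1, 1]  # same grouping, different cluster ids

print(metrics.adjusted_rand_score(truth, predicted))  # 1.0
print(metrics.v_measure_score(truth, predicted))      # 1.0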
Note: as k-means is optimizing a non-convex objective function, it will likely end up in a local optimum. Several runs with independent random initializations might be necessary to get a good convergence.
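A minimal sketch of such a restart strategy (on made-up blob data rather than the newsgroups corpus; the clustering cell further down keeps n_init=1 to save time):
In [ ]:
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X_blobs, _ = make_blobs(n_samples=300, centers=4, random_state=42)

# n_init independent k-means++ initializations; scikit-learn keeps the
# run with the lowest inertia (within-cluster sum of squared distances).
km_restarts = KMeans(n_clusters=4, init='k-means++', n_init=10,
                     random_state=42).fit(X_blobs)
print(km_restarts.inertia_)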
In [ ]:
# Author: Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Lars Buitinck
# License: BSD 3 clause
from __future__ import print_function
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn import metrics
from sklearn.cluster import KMeans, MiniBatchKMeans
import sys
from time import time
import numpy as np
# Options mirroring the command-line flags of the original script
class options:
    def __init__(self):
        self.components = 100     # number of LSA components (0 disables the LSA step)
        self.use_idf = True       # enable IDF re-weighting
        self.use_hashing = False  # use HashingVectorizer instead of TfidfVectorizer
        self.features = 10000     # maximum number of features (dimensions)
        self.verbose = False      # print progress reports inside k-means

opts = options()
Load some categories from the training set
In [ ]:
categories = [
    'alt.atheism',
    'talk.religion.misc',
    'comp.graphics',
    'sci.space',
]
# Uncomment the following to do the analysis on all the categories
# categories = None

print("Loading 20 newsgroups dataset for categories:")
print(categories)

dataset = fetch_20newsgroups(subset='all', categories=categories,
                             shuffle=True, random_state=42)

print("%d documents" % len(dataset.data))
print("%d categories" % len(dataset.target_names))
print()

labels = dataset.target
true_k = np.unique(labels).shape[0]  # number of distinct classes, used as the number of clusters
print(true_k)
print(dataset.data[0])  # show one raw document
In [ ]:
print("Extracting features from the training dataset using a sparse vectorizer")
t0 = time()
if opts.use_hashing:
    if opts.use_idf:
        # Perform an IDF normalization on the output of HashingVectorizer
        hasher = HashingVectorizer(n_features=opts.features,
                                   stop_words='english', non_negative=True,
                                   norm=None, binary=False)
        vectorizer = make_pipeline(hasher, TfidfTransformer())
    else:
        vectorizer = HashingVectorizer(n_features=opts.features,
                                       stop_words='english',
                                       non_negative=False, norm='l2',
                                       binary=False)
else:
    vectorizer = TfidfVectorizer(max_df=0.5, max_features=opts.features,
                                 min_df=2, stop_words='english',
                                 use_idf=opts.use_idf)
X = vectorizer.fit_transform(dataset.data)
print("done in %fs" % (time() - t0))
print("n_samples: %d, n_features: %d" % X.shape)
In [ ]:
if opts.components:
    print("Performing dimensionality reduction using LSA")
    t0 = time()
    # Vectorizer results are normalized, which makes KMeans behave as
    # spherical k-means for better results. Since LSA/SVD results are
    # not normalized, we have to redo the normalization.
    svd = TruncatedSVD(opts.components)
    normalizer = Normalizer(copy=False)
    lsa = make_pipeline(svd, normalizer)

    X = lsa.fit_transform(X)

    print("done in %fs" % (time() - t0))

    explained_variance = svd.explained_variance_ratio_.sum()
    print("Explained variance of the SVD step: {}%".format(
        int(explained_variance * 100)))
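To get a feel for how much each LSA component contributes, the explained-variance ratios fitted above can be inspected directly (a sketch; it assumes the cell above ran with a non-zero opts.components):
In [ ]:
# Cumulative explained variance per LSA component; a flattening curve
# suggests that additional components add little extra information.
cumulative = np.cumsum(svd.explained_variance_ratio_)
print("first 10 components explain %.1f%% of the variance" % (100 * cumulative[9]))
print("all %d components explain %.1f%% of the variance"
      % (opts.components, 100 * cumulative[-1]))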
In [ ]:
km_mini = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1,
                          init_size=1000, batch_size=1000,
                          verbose=opts.verbose)
km = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1,
            verbose=opts.verbose)
print("Clustering sparse data with %s" % km)
t0 = time()
km.fit(X)
time_km = time() - t0
print("done in %0.3fs" % time_km)
print("Clustering sparse data with %s" % km_mini)
t0 = time()
km_mini.fit(X)
time_mini = time() - t0
print("done in %0.3fs" % time_mini)
In [ ]:
print("Metrics:==============")
print("metrics \t Kmeans \t Mini-kmeans")
print("Homogeneity: %0.3f %0.3f" % (metrics.homogeneity_score(labels, km.labels_), metrics.homogeneity_score(labels, km_mini.labels_)))
print("Completeness: %0.3f %0.3f" % (metrics.completeness_score(labels, km.labels_), metrics.completeness_score(labels, km_mini.labels_)))
print("V-measure: %0.3f %0.3f" % (metrics.v_measure_score(labels, km.labels_), metrics.v_measure_score(labels, km_mini.labels_)))
print("Adjusted Rand-Index: %.3f %0.3f"
% (metrics.adjusted_rand_score(labels, km.labels_),
metrics.adjusted_rand_score(labels, km_mini.labels_)))
print("Silhouette Coefficient: %0.3f %0.3f"
% (metrics.silhouette_score(X, km.labels_, sample_size=1000),
metrics.silhouette_score(X, km_mini.labels_, sample_size=1000)))
In [ ]:
print("output top terms per cluster: ================")
if not opts.use_hashing:
if opts.components:
original_space_centroids = svd.inverse_transform(km.cluster_centers_)
order_centroids = original_space_centroids.argsort()[:, ::-1]
else:
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
print(order_centroids.shape)
terms = vectorizer.get_feature_names()
for i in range(true_k):
print("Cluster %d:" % i, end='')
for ind in order_centroids[i, :10]:
print(' %s' % terms[ind], end='')
print()
In [ ]: