Clustering

Clustering is the task of gathering samples into groups of similar samples according to some predefined similarity or dissimilarity measure (such as the Euclidean distance). In this section we will explore a basic clustering task on the iris data.

By the end of this section you will

  • Know how to instantiate and train KMeans, an unsupervised clustering algorithm
  • Know several other interesting clustering algorithms within scikit-learn

Let's re-use the results of the 2D PCA of the iris dataset in order to explore clustering. First, we need to repeat some of the code from the previous notebook.


In [ ]:
# make sure ipython inline mode is activated
%pylab inline

In [ ]:
# all of this is copied from the previous notebook, '06_iris_dimensionality' 
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
import pylab as pl
from itertools import cycle

iris = load_iris()
X = iris.data
y = iris.target

pca = PCA(n_components=2, whiten=True).fit(X)
X_pca = pca.transform(X)

def plot_2D(data, target, target_names):
    colors = cycle('rgbcmykw')
    target_ids = range(len(target_names))
    pl.figure()
    for i, c, label in zip(target_ids, colors, target_names):
        pl.scatter(data[target == i, 0], data[target == i, 1],
                   c=c, label=label)
    pl.legend()

To remind ourselves what we're looking at, let's again plot the PCA components we defined in the last notebook:


In [ ]:
plot_2D(X_pca, iris.target, iris.target_names)

Now we will use one of the simplest clustering algorithms, K-means. This is an iterative algorithm which searches for three cluster centers such that the distance from each point to its cluster center is minimized. First, let's step back for a second, look at the above plot, and think about what this will do. The algorithm will look for three cluster centers and label the points according to which cluster center they're closest to.
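
Before running scikit-learn's implementation, here is a minimal NumPy sketch of a single K-means iteration, just to illustrate the idea: assign each point to its nearest center, then move each center to the mean of the points assigned to it. The kmeans_step helper and the starting centers below are illustrative assumptions, not part of scikit-learn, and the sketch ignores details like empty clusters and convergence checks that scikit-learn handles for us.


In [ ]:
import numpy as np

def kmeans_step(X, centers):
    # distance from every sample to every center: shape (n_samples, n_clusters)
    distances = np.linalg.norm(X[:, np.newaxis, :] - centers[np.newaxis, :, :], axis=2)
    # assignment step: each sample gets the label of its closest center
    labels = distances.argmin(axis=1)
    # update step: each center moves to the mean of the samples assigned to it
    new_centers = np.array([X[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, new_centers

# start from one (arbitrarily chosen) sample of each species and iterate a few times
centers = X_pca[[0, 50, 100]]
for _ in range(10):
    toy_labels, centers = kmeans_step(X_pca, centers)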

Question: what would you expect the output to look like?


In [ ]:
from sklearn.cluster import KMeans
from numpy.random import RandomState
rng = RandomState(42)

kmeans = KMeans(n_clusters=3, random_state=rng)
kmeans.fit(X_pca)

In [ ]:
import numpy as np
np.round(kmeans.cluster_centers_, decimals=2)

The labels_ attribute of the K-means estimator contains the ID of the cluster that each point is assigned to.


In [ ]:
kmeans.labels_

The K-means algorithm has been used to infer cluster labels for the points. Let's call the plot_2D function again, but color the points based on the cluster labels rather than the iris species.


In [ ]:
plot_2D(X_pca, kmeans.labels_, ["c0", "c1", "c2"])

Clustering comes with assumptions: a clustering algorithm finds clusters using a specific criterion that corresponds to a given set of assumptions. For K-means clustering, the model assumes that all clusters have equal, spherical variance. In the case of the iris dataset this assumption does not match the geometry of the classes, and so the clustering cannot recover the classes.
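
One way to quantify this mismatch is the adjusted Rand index, which compares two labelings and returns 1.0 for perfect agreement and values near 0.0 for chance-level agreement. Below is a small sketch, assuming the kmeans estimator fitted above.


In [ ]:
from sklearn.metrics import adjusted_rand_score
# compare the K-means cluster assignment with the true species labels
adjusted_rand_score(iris.target, kmeans.labels_)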

Gaussian Mixture Models: we can choose a different set of assumptions using a Gaussian Mixture Model (GMM). The GMM can be used to relax the assumptions of equal variance or sphericity. However, the fewer assumptions we make, the more ill-posed the problem becomes and the harder it is to learn. The covariance_type argument of the GMM controls these assumptions. For the iris dataset we will use the 'tied' mode, which imposes the same covariance on every class. This makes the covariance learning problem easier.


In [ ]:
from sklearn.mixture import GMM
gmm = GMM(n_components=3, covariance_type='tied')
gmm.fit(X_pca)

plot_2D(X_pca, gmm.predict(X_pca), ["c0", "c1", "c2"])
pl.title('GMM labels')

plot_2D(X_pca, iris.target, iris.target_names)
pl.title('True labels')

We see that the labels are now much closer to the ground truth.

In general, there is no guarantee that the structure found by a clustering algorithm has anything to do with the latent structure of the data.

Some Notable Clustering Routines

The following are some well-known clustering algorithms. Like most unsupervised learning models in scikit-learn, they expect the data to be clustered to have the shape (n_samples, n_features):

  • sklearn.cluster.KMeans:
    The simplest yet effective clustering algorithm. It needs to be provided with the number of clusters in advance and assumes that the input data is normalized (a PCA model can be used as a preprocessor for this).
  • sklearn.cluster.MeanShift:
    Can find better-looking clusters than KMeans but does not scale to a large number of samples.
  • sklearn.cluster.DBSCAN:
    Can detect irregularly shaped clusters based on density, i.e. sparse regions in the input space are likely to become inter-cluster boundaries. Can also detect outliers (samples that are not part of a cluster).
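
As a quick, untuned illustration of the last item, DBSCAN can be applied to the same 2D PCA projection; the eps and min_samples values below are arbitrary assumptions, and samples labelled -1 are the detected outliers.


In [ ]:
from sklearn.cluster import DBSCAN

db = DBSCAN(eps=0.5, min_samples=5)
db_labels = db.fit_predict(X_pca)
# -1 marks samples that DBSCAN considers noise / outliers
np.unique(db_labels)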

Other clustering algorithms do not work with a data array of shape (n_samples, n_features) but directly with a precomputed affinity matrix of shape (n_samples, n_samples):

  • sklearn.cluster.AffinityPropagation:
    Clustering algorithm based on message passing between data points.
  • sklearn.cluster.SpectralClustering:
    KMeans applied to a projection of the normalized graph Laplacian: finds normalized graph cuts if the affinity matrix is interpreted as an adjacency matrix of a graph.
  • sklearn.cluster.Ward:
    Ward implements hierarchical clustering based on the Ward algorithm, a variance-minimizing approach. At each step, it minimizes the sum of squared differences within all clusters (inertia criterion).
  • sklearn.cluster.DBSCAN:
    DBSCAN can work with either an array of samples or an affinity matrix.
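
As an example of the precomputed-affinity usage described above, SpectralClustering can be given an affinity matrix built with an RBF kernel. This is a minimal sketch with a recent scikit-learn; the gamma value is an arbitrary assumption.


In [ ]:
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

# build an (n_samples, n_samples) affinity matrix from the 2D PCA projection
affinity = rbf_kernel(X_pca, gamma=1.0)
spectral = SpectralClustering(n_clusters=3, affinity='precomputed')
spectral_labels = spectral.fit_predict(affinity)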

Some Applications of Clustering

Here are some common applications of clustering algorithms:

  • Compression, in a data reduction sense
  • Can be used as a preprocessing step for recommender systems
  • Similarly:
    • grouping related web news (e.g. Google News) and web search results
    • grouping related stock quotes for investment portfolio management
    • building customer profiles for market analysis
  • Building a code book of prototype samples for unsupervised feature extraction for supervised learning algorithms
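
As a sketch of the code-book idea just mentioned (assuming the kmeans estimator fitted earlier; in practice it would be fitted on training data only), the distances from each sample to the cluster centers can serve as a new feature representation.


In [ ]:
# kmeans.transform returns the distance from each sample to each cluster center,
# turning the 2D projection into a 3-dimensional "code book" representation
X_codebook = kmeans.transform(X_pca)
X_codebook.shape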

Exercise: digits clustering

Perform K-means clustering on the digits data, searching for ten clusters. Visualize the cluster centers as images (i.e. reshape each to 8x8 and use plt.imshow). Do the clusters seem to be correlated with particular digits?

Visualize the projected digits as in the last notebook, but this time use the cluster labels as the color. What do you notice?


In [ ]:
from sklearn.datasets import load_digits
digits = load_digits()
# ...

In [ ]:
%load solutions/08B_digits_clustering.py