Customer Segmentation using Clustering


This mini-project is based on this blog post by yhat. Please feel free to refer to the post for additional information, and solutions.


In [1]:
%matplotlib inline
import pandas as pd
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns

# Setup Seaborn
sns.set_style("whitegrid")
sns.set_context("poster")

Data

The dataset contains information on marketing newsletters/e-mail campaigns (e-mail offers sent to customers) and transaction level data from customers. The transactional data shows which offer customers responded to, and what the customer ended up buying. The data is presented as an Excel workbook containing two worksheets. Each worksheet contains a different dataset.


In [2]:
df_offers = pd.read_excel("./WineKMC.xlsx", sheetname=0)
df_offers.columns = ["offer_id", "campaign", "varietal", "min_qty", "discount", "origin", "past_peak"]
df_offers.head()


Out[2]:
offer_id campaign varietal min_qty discount origin past_peak
0 1 January Malbec 72 56 France False
1 2 January Pinot Noir 72 17 France False
2 3 February Espumante 144 32 Oregon True
3 4 February Champagne 72 48 France True
4 5 February Cabernet Sauvignon 144 44 New Zealand True

We see that the first dataset contains information about each offer such as the month it is in effect and several attributes about the wine that the offer refers to: the variety, minimum quantity, discount, country of origin and whether or not it is past peak. The second dataset in the second worksheet contains transactional data -- which offer each customer responded to.


In [3]:
df_transactions = pd.read_excel("./WineKMC.xlsx", sheetname=1)
df_transactions.columns = ["customer_name", "offer_id"]
df_transactions['n'] = 1
df_transactions.head()


Out[3]:
customer_name offer_id n
0 Smith 2 1
1 Smith 24 1
2 Johnson 17 1
3 Johnson 24 1
4 Johnson 26 1

Data wrangling

We're trying to learn more about how our customers behave, so we can use their behavior (whether or not they purchased something based on an offer) as a way to group similar minded customers together. We can then study those groups to look for patterns and trends which can help us formulate future offers.

The first thing we need is a way to compare customers. To do this, we're going to create a matrix that contains each customer and a 0/1 indicator for whether or not they responded to a given offer.

Checkup Exercise Set I

Exercise: Create a data frame where each row has the following columns (Use the pandas [`merge`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) and [`pivot_table`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html) functions for this purpose):

  • customer_name
  • One column for each offer, with a 1 if the customer responded to the offer

Make sure you also deal with any weird values such as `NaN`. Read the documentation to develop your solution.


In [14]:
#your turn
df_result = pd.merge(df_transactions,df_offers, on='offer_id')
table = pd.pivot_table(df_result, index = 'customer_name', columns ='offer_id', values='n')
table.fillna(0,inplace=True)
table.reset_index(inplace=True)
table.head()


Out[14]:
offer_id customer_name 1 2 3 4 5 6 7 8 9 ... 23 24 25 26 27 28 29 30 31 32
0 Adams 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0
1 Allen 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
2 Anderson 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
3 Bailey 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0
4 Baker 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0

5 rows × 33 columns

K-Means Clustering

Recall that in K-Means Clustering we want to maximize the distance between centroids and minimize the distance between data points and the respective centroid for the cluster they are in. True evaluation for unsupervised learning would require labeled data; however, we can use a variety of intuitive metrics to try to pick the number of clusters K. We will introduce two methods: the Elbow method, the Silhouette method and the gap statistic.

Choosing K: The Elbow Sum-of-Squares Method

The first method looks at the sum-of-squares error in each cluster against $K$. We compute the distance from each data point to the center of the cluster (centroid) to which the data point was assigned.

$$SS = \sum_k \sum_{x_i \in C_k} \sum_{x_j \in C_k} \left( x_i - x_j \right)^2 = \sum_k \sum_{x_i \in C_k} \left( x_i - \mu_k \right)^2$$

where $x_i$ is a point, $C_k$ represents cluster $k$ and $\mu_k$ is the centroid for cluster $k$. We can plot SS vs. $K$ and choose the elbow point in the plot as the best value for $K$. The elbow point is the point at which the plot starts descending much more slowly.

Checkup Exercise Set II

Exercise:

  • What values of $SS$ do you believe represent better clusterings? Why?
  • Create a numpy matrix `x_cols` with only the columns representing the offers (i.e. the 0/1 colums)
  • Write code that applies the [`KMeans`](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) clustering method from scikit-learn to this matrix.
  • Construct a plot showing $SS$ for each $K$ and pick $K$ using this plot. For simplicity, test $2 \le K \le 10$.
  • Make a bar chart showing the number of points in each cluster for k-means under the best $K$.
  • What challenges did you experience using the Elbow method to pick $K$?

Lower Sum-of-squares values represent better clusterings. It means that each cluster is more tightly defined.


In [26]:
# your turn
import numpy as np
x_cols = table.columns[2:]
x_cols = np.matrix(table[x_cols])
from sklearn.cluster import KMeans
ks = [2,3,4,5,6,7,8,9,10]
ss = []
for k in ks:
    kmean = KMeans(n_clusters=k)
    kmean.fit(x_cols)
    ss.append(kmean.inertia_)
ss


Out[26]:
[243.09027777777777,
 219.65277777777777,
 206.57169117647055,
 197.59649376992178,
 186.9083333333333,
 179.46072146807444,
 172.07731092436975,
 167.68814102564102,
 159.45579975579977]

In [31]:
sns.set_style('darkgrid')
plt.plot(ks,ss)
plt.xlabel('Number of Clusters (K)')
plt.ylabel('internal Sum of Squares')


Out[31]:
<matplotlib.text.Text at 0x7fcc943db4e0>

No obvious change in derivative along this axis. but if we zoom in, then it looks like K = 8 is the best choice. The challenge is that this is clearly more of an art than science. One needs to have an idea of where to look, or spend time using a range of cluster choices.


In [42]:
k = 8
table['cluster'] = KMeans(n_clusters = k).fit_predict(x_cols)
plt.bar(table.cluster.unique(),table.cluster.value_counts(), align='center')


Out[42]:
<Container object of 8 artists>

Choosing K: The Silhouette Method

There exists another method that measures how well each datapoint $x_i$ "fits" its assigned cluster and also how poorly it fits into other clusters. This is a different way of looking at the same objective. Denote $a_{x_i}$ as the average distance from $x_i$ to all other points within its own cluster $k$. The lower the value, the better. On the other hand $b_{x_i}$ is the minimum average distance from $x_i$ to points in a different cluster, minimized over clusters. That is, compute separately for each cluster the average distance from $x_i$ to the points within that cluster, and then take the minimum. The silhouette $s(x_i)$ is defined as

$$s(x_i) = \frac{b_{x_i} - a_{x_i}}{\max{\left( a_{x_i}, b_{x_i}\right)}}$$

The silhouette score is computed on every datapoint in every cluster. The silhouette score ranges from -1 (a poor clustering) to +1 (a very dense clustering) with 0 denoting the situation where clusters overlap. Some criteria for the silhouette coefficient is provided in the table below.


Range Interpretation
0.71 - 1.0 A strong structure has been found.
0.51 - 0.7 A reasonable structure has been found.
0.26 - 0.5 The structure is weak and could be artificial.
< 0.25 No substantial structure has been found.

</pre> Source: http://www.stat.berkeley.edu/~spector/s133/Clus.html

Fortunately, scikit-learn provides a function to compute this for us (phew!) called sklearn.metrics.silhouette_score. Take a look at this article on picking $K$ in scikit-learn, as it will help you in the next exercise set.

Checkup Exercise Set III

Exercise: Using the documentation for the `silhouette_score` function above, construct a series of silhouette plots like the ones in the article linked above.

Exercise: Compute the average silhouette score for each $K$ and plot it. What $K$ does the plot suggest we should choose? Does it differ from what we found using the Elbow method?


In [47]:
# Your turn.
from sklearn.metrics import silhouette_score
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.cm as cm
for n_clusters in ks:
    fig, (ax1, ax2) = plt.subplots(1, 2)
    fig.set_size_inches(18, 7)

    # The (n_clusters+1)*10 is for inserting blank space between silhouette
    # plots of individual clusters, to demarcate them clearly.
    ax1.set_ylim([0, len(x_cols) + (n_clusters + 1) * 10])

    # Initialize the clusterer with n_clusters value and a random generator
    # seed of 10 for reproducibility.
    clusterer = KMeans(n_clusters=n_clusters, random_state=10)
    cluster_labels = clusterer.fit_predict(x_cols)

    # The silhouette_score average
    silhouette_avg = silhouette_score(x_cols, cluster_labels)
    print("For n_clusters =", n_clusters,
          "The average silhouette_score is :", silhouette_avg)

    # Compute the silhouette scores for each sample
    sample_silhouette_values = silhouette_samples(x_cols, cluster_labels)

    y_lower = 10
    for i in range(n_clusters):
        # Aggregate the silhouette scores for samples belonging to
        # cluster i, and sort them
        ith_cluster_silhouette_values = sample_silhouette_values[cluster_labels == i]

        ith_cluster_silhouette_values.sort()

        size_cluster_i = ith_cluster_silhouette_values.shape[0]
        y_upper = y_lower + size_cluster_i

        color = cm.spectral(float(i) / n_clusters)
        ax1.fill_betweenx(np.arange(y_lower, y_upper),
                          0, ith_cluster_silhouette_values,
                          facecolor=color, edgecolor=color, alpha=0.7)

        # Label the silhouette plots with their cluster numbers at the middle
        ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))

        # Compute the new y_lower for next plot
        y_lower = y_upper + 10  # 10 for the 0 samples

    ax1.set_title("The silhouette plot for the various clusters.")
    ax1.set_xlabel("The silhouette coefficient values")
    ax1.set_ylabel("Cluster label")

    # The vertical line for average silhouette score of all the values
    ax1.axvline(x=silhouette_avg, color="red", linestyle="--")

    ax1.set_yticks([])  # Clear the yaxis labels / ticks
    ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])

    # 2nd Plot showing the actual clusters formed
    colors = cm.spectral(cluster_labels.astype(float) / n_clusters)
    ax2.scatter(x_cols[:, 0], x_cols[:, 1], marker='.', s=30, lw=0, alpha=0.7,
                c=colors)

    # Labeling the clusters
    centers = clusterer.cluster_centers_
    # Draw white circles at cluster centers
    ax2.scatter(centers[:, 0], centers[:, 1],
                marker='o', c="white", alpha=1, s=200)

    for i, c in enumerate(centers):
        ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50)

    ax2.set_title("The visualization of the clustered data.")
    ax2.set_xlabel("Feature space for the 1st feature")
    ax2.set_ylabel("Feature space for the 2nd feature")

    plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
                  "with n_clusters = %d" % n_clusters),
                 fontsize=14, fontweight='bold')

    plt.show()


For n_clusters = 2 The average silhouette_score is : 0.0946703988818
For n_clusters = 3 The average silhouette_score is : 0.119165854177
For n_clusters = 4 The average silhouette_score is : 0.135722567362
For n_clusters = 5 The average silhouette_score is : 0.143737045307
For n_clusters = 6 The average silhouette_score is : 0.143077016346
For n_clusters = 7 The average silhouette_score is : 0.124666278068
For n_clusters = 8 The average silhouette_score is : 0.127777206379
For n_clusters = 9 The average silhouette_score is : 0.117989414534
For n_clusters = 10 The average silhouette_score is : 0.142592932427

In [54]:
def avg_silscore(k):
    return silhouette_score(x_cols, KMeans(n_clusters=k).fit_predict(x_cols))



plt.plot(ks, [avg_silscore(k) for k in ks])
plt.xlabel('Number of clusters (K)')
plt.ylabel('Mean silhouette coefficient value')


Out[54]:
<matplotlib.text.Text at 0x7fcc8de3e438>

Repeated runs of the above two cells produce varrying results. Sometimes K=5 is the highest performing choice, while other times it is 9 or 10. Values from 4 to 10 were all found to produce the best results, although 5 seems to be the average best performer after repeated runs.

Choosing $K$: The Gap Statistic

There is one last method worth covering for picking $K$, the so-called Gap statistic. The computation for the gap statistic builds on the sum-of-squares established in the Elbow method discussion, and compares it to the sum-of-squares of a "null distribution," that is, a random set of points with no clustering. The estimate for the optimal number of clusters $K$ is the value for which $\log{SS}$ falls the farthest below that of the reference distribution:

$$G_k = E_n^*\{\log SS_k\} - \log SS_k$$

In other words a good clustering yields a much larger difference between the reference distribution and the clustered data. The reference distribution is a Monte Carlo (randomization) procedure that constructs $B$ random distributions of points within the bounding box (limits) of the original data and then applies K-means to this synthetic distribution of data points.. $E_n^*\{\log SS_k\}$ is just the average $SS_k$ over all $B$ replicates. We then compute the standard deviation $\sigma_{SS}$ of the values of $SS_k$ computed from the $B$ replicates of the reference distribution and compute

$$s_k = \sqrt{1+1/B}\sigma_{SS}$$

Finally, we choose $K=k$ such that $G_k \geq G_{k+1} - s_{k+1}$.

Aside: Choosing $K$ when we Have Labels

Unsupervised learning expects that we do not have the labels. In some situations, we may wish to cluster data that is labeled. Computing the optimal number of clusters is much easier if we have access to labels. There are several methods available. We will not go into the math or details since it is rare to have access to the labels, but we provide the names and references of these measures.

  • Adjusted Rand Index
  • Mutual Information
  • V-Measure
  • Fowlkes–Mallows index

See this article for more information about these metrics.

Visualizing Clusters using PCA

How do we visualize clusters? If we only had two features, we could likely plot the data as is. But we have 100 data points each containing 32 features (dimensions). Principal Component Analysis (PCA) will help us reduce the dimensionality of our data from 32 to something lower. For a visualization on the coordinate plane, we will use 2 dimensions. In this exercise, we're going to use it to transform our multi-dimensional dataset into a 2 dimensional dataset.

This is only one use of PCA for dimension reduction. We can also use PCA when we want to perform regression but we have a set of highly correlated variables. PCA untangles these correlations into a smaller number of features/predictors all of which are orthogonal (not correlated). PCA is also used to reduce a large set of variables into a much smaller one.

Checkup Exercise Set IV

Exercise: Use PCA to plot your clusters:

  • Use scikit-learn's [`PCA`](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) function to reduce the dimensionality of your clustering data to 2 components
  • Create a data frame with the following fields:
    • customer name
    • cluster id the customer belongs to
    • the two PCA components (label them `x` and `y`)
  • Plot a scatterplot of the `x` vs `y` columns
  • Color-code points differently based on cluster ID
  • How do the clusters look?
  • Based on what you see, what seems to be the best value for $K$? Moreover, which method of choosing $K$ seems to have produced the optimal result visually?

Exercise: Now look at both the original raw data about the offers and transactions and look at the fitted clusters. Tell a story about the clusters in context of the original data. For example, do the clusters correspond to wine variants or something else interesting?


In [108]:
#your turn


from sklearn.decomposition import PCA
pca = PCA(n_components=2)
x_reduced = pca.fit_transform(x_cols)
cluster_df = pd.DataFrame(x_reduced, table.index, ['principal component 1', 'principal component 2'])
cluster_df['cluster'] = KMeans(n_clusters=3).fit_predict(x_cols)

colors = ['#8dd3c7','#ffffb3','#bebada','#fb8072','#80b1d3','#fdb462','#b3de69','#fccde5','#d9d9d9','#bc80bd']
plt.scatter(cluster_df['principal component 1'], cluster_df['principal component 2'], c=[colors[c] for c in cluster_df.cluster])


Out[108]:
<matplotlib.collections.PathCollection at 0x7fcc8d4b3e80>

It looks like the best value for k is 4 or 3. Both methods of choosing K suggested that we should use k =5 , but k>4 seems to not visuallty give anything that looks nice, for any run.


In [118]:
df_result = pd.merge(df_transactions,df_offers, on='offer_id')
table = pd.pivot_table(df_result, index = 'customer_name', columns ='offer_id', values='n')
table.fillna(0,inplace=True)
table['cluster'] = cluster_df['cluster']
table_sum = table.groupby('cluster').sum()
table_merge = pd.merge(table_sum.T,df_offers, left_index = True, right_on= 'offer_id')

In [125]:
table_merge[[0,1,2,'varietal','origin','discount']].groupby(['origin','varietal','discount']).sum()


Out[125]:
cluster 0 1 2
origin varietal discount
Australia Pinot Noir 83 12.0 0.0 3.0
Prosecco 40 0.0 16.0 3.0
83 1.0 1.0 3.0
California Champagne 50 0.0 2.0 2.0
Merlot 88 1.0 0.0 4.0
Prosecco 52 1.0 2.0 4.0
Chile Chardonnay 57 0.0 0.0 10.0
Merlot 43 0.0 6.0 0.0
64 0.0 0.0 9.0
Prosecco 86 0.0 1.0 11.0
France Cabernet Sauvignon 56 0.0 1.0 5.0
Champagne 48 0.0 0.0 12.0
63 0.0 1.0 20.0
85 0.0 0.0 13.0
89 0.0 0.0 17.0
Malbec 54 0.0 16.0 6.0
56 1.0 0.0 9.0
Pinot Grigio 87 0.0 16.0 1.0
Pinot Noir 17 6.0 0.0 4.0
Germany Cabernet Sauvignon 45 0.0 0.0 4.0
Champagne 66 0.0 1.0 4.0
Pinot Noir 47 7.0 0.0 0.0
Italy Cabernet Sauvignon 19 0.0 0.0 6.0
82 0.0 0.0 6.0
Pinot Noir 34 12.0 0.0 0.0
New Zealand Cabernet Sauvignon 44 0.0 0.0 4.0
Champagne 88 1.0 1.0 7.0
Oregon Cabernet Sauvignon 59 0.0 0.0 6.0
Espumante 32 0.0 2.0 4.0
50 0.0 13.0 1.0
South Africa Chardonnay 39 1.0 0.0 4.0
Espumante 45 0.0 17.0 3.0

We see that cluster 0 looks predominatly for Pinot Noir grapes, regardless or origin, with a few outliers purchasing highly discounted non-pinor noir wines. Cluster 1 generally purchases Espumante and Prosecco, so sparkling wines, but NOT Champagne, as well as french wines. Cluster 2 seem to be biggest spenders in volume, generally preferring french champagne, prosecco from chile, and larger discounts.

Exercise Set V

As we saw earlier, PCA has a lot of other uses. Since we wanted to visualize our data in 2 dimensions, restricted the number of dimensions to 2 in PCA. But what is the true optimal number of dimensions?

Exercise: Using a new PCA object shown in the next cell, plot the `explained_variance_` field and look for the elbow point, the point where the curve's rate of descent seems to slow sharply. This value is one possible value for the optimal number of dimensions. What is it?


In [129]:
#your turn
# Initialize a new PCA model with a default number of components.
import sklearn.decomposition
pca = sklearn.decomposition.PCA()
pca.fit(x_cols)

# Do the rest on your own :)
dimension_range = range(1,len(pca.explained_variance_)+1)
plt.plot(dimension_range, pca.explained_variance_)
plt.xlabel('Number of dimensions')
plt.ylabel('Explained variance')


Out[129]:
<matplotlib.text.Text at 0x7fcc8d369a58>

D=4 seems to be the elbow point. This is where higher dimensions is no longer providing much of a gain in explained variance

Other Clustering Algorithms

k-means is only one of a ton of clustering algorithms. Below is a brief description of several clustering algorithms, and the table provides references to the other clustering algorithms in scikit-learn.

  • Affinity Propagation does not require the number of clusters $K$ to be known in advance! AP uses a "message passing" paradigm to cluster points based on their similarity.

  • Spectral Clustering uses the eigenvalues of a similarity matrix to reduce the dimensionality of the data before clustering in a lower dimensional space. This is tangentially similar to what we did to visualize k-means clusters using PCA. The number of clusters must be known a priori.

  • Ward's Method applies to hierarchical clustering. Hierarchical clustering algorithms take a set of data and successively divide the observations into more and more clusters at each layer of the hierarchy. Ward's method is used to determine when two clusters in the hierarchy should be combined into one. It is basically an extension of hierarchical clustering. Hierarchical clustering is divisive, that is, all observations are part of the same cluster at first, and at each successive iteration, the clusters are made smaller and smaller. With hierarchical clustering, a hierarchy is constructed, and there is not really the concept of "number of clusters." The number of clusters simply determines how low or how high in the hierarchy we reference and can be determined empirically or by looking at the dendogram.

  • Agglomerative Clustering is similar to hierarchical clustering but but is not divisive, it is agglomerative. That is, every observation is placed into its own cluster and at each iteration or level or the hierarchy, observations are merged into fewer and fewer clusters until convergence. Similar to hierarchical clustering, the constructed hierarchy contains all possible numbers of clusters and it is up to the analyst to pick the number by reviewing statistics or the dendogram.

  • DBSCAN is based on point density rather than distance. It groups together points with many nearby neighbors. DBSCAN is one of the most cited algorithms in the literature. It does not require knowing the number of clusters a priori, but does require specifying the neighborhood size.

Clustering Algorithms in Scikit-learn

</colgroup>

Method name

Parameters

Scalability

Use Case

Geometry (metric used) </tr> </thead>

K-Means

number of clusters

Very largen_samples, medium n_clusters with MiniBatch code

General-purpose, even cluster size, flat geometry, not too many clusters

Distances between points </tr>

Affinity propagation

damping, sample preference

Not scalable with n_samples

Many clusters, uneven cluster size, non-flat geometry

Graph distance (e.g. nearest-neighbor graph) </tr>

Mean-shift

bandwidth

Not scalable with n_samples

Many clusters, uneven cluster size, non-flat geometry

Distances between points </tr>

Spectral clustering

number of clusters

Medium n_samples, small n_clusters

Few clusters, even cluster size, non-flat geometry

Graph distance (e.g. nearest-neighbor graph) </tr>

Ward hierarchical clustering

number of clusters

Large n_samples and n_clusters

Many clusters, possibly connectivity constraints

Distances between points </tr>

Agglomerative clustering

number of clusters, linkage type, distance

Large n_samples and n_clusters

Many clusters, possibly connectivity constraints, non Euclidean distances

Any pairwise distance </tr>

DBSCAN

neighborhood size

Very large n_samples, medium n_clusters

Non-flat geometry, uneven cluster sizes

Distances between nearest points </tr>

Gaussian mixtures

many

Not scalable

Flat geometry, good for density estimation

Mahalanobis distances to centers </tr>

Birch

branching factor, threshold, optional global clusterer.

Large n_clusters and n_samples

Large dataset, outlier removal, data reduction.

Euclidean distance between points </tr> </tbody> </table> Source: http://scikit-learn.org/stable/modules/clustering.html

Exercise Set VI

Exercise: Try clustering using the following algorithms.

  1. Affinity propagation
  2. Spectral clustering
  3. Agglomerative clustering
  4. DBSCAN

How do their results compare? Which performs the best? Tell a story why you think it performs the best.


In [130]:
# Your turn
from sklearn.cluster import AffinityPropagation, SpectralClustering, AgglomerativeClustering, DBSCAN

In [ ]: