Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) across diverse product categories, in order to uncover internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded from the analysis, with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
In [1]:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
    data = pd.read_csv("customers.csv")
    data.drop(['Region', 'Channel'], axis = 1, inplace = True)
    print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape))
except:
    print("Dataset could not be loaded. Is the dataset missing?")
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
In [2]:
# Display a description of the dataset
display(data.describe())
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list, which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
In [3]:
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [17, 58, 200]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
Hint: Examples of establishments include places like markets, cafes, delis, wholesale retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant. You can use the mean values from the statistical description above as a reference when comparing your samples.
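One quick way to make that comparison is to look at each chosen sample's deviation from the feature means. This is a sketch, not part of the project template, reusing the data and samples variables from the cells above:

# Sketch: deviation of each chosen sample from the overall feature means
# (positive values mean above-average spending in that category)
display(samples - data.mean().round())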
Knowing this, how do your samples compare? Does that help in driving your insight into what kind of establishments they might be?
Answer:
For the first establishment, the disproportionately high Delicatessen purchases, moderate spending on Milk and Fresh, and little spending in the other categories indicate that it could be a deli.
Since the value for Fresh is well above the mean of 12000.2977, we expect the second establishment to buy large amounts of fresh food. This suggests a restaurant, a mass producer, or a fresh-food market.
For the third establishment, the Milk (13240) and Grocery (23127) values are well above the respective means for Milk (5796.2) and Grocery (7951.3), which suggests this establishment is most likely some type of hypermarket.
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.model_selection.train_test_split to split the dataset into training and testing sets, using the removed feature as the target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
In [4]:
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop(['Grocery'], axis=1, inplace = False)
# TODO: Split the data into training and testing sets(0.25) using the given feature as the target
# Set a random state.
X_train, X_test, y_train, y_test = train_test_split(new_data, data['Grocery'], test_size=0.25, random_state=88)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=88)
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print(score)
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. If you get a low score for a particular feature, that leads us to believe the feature is hard to predict using the other features, thereby making it an important feature to consider when judging relevance.
Answer:
I attempted to predict the feature 'Grocery', which reported a prediction score (R^2) of 79.12%. Since a reasonably high score was achieved, the other features correlate well with 'Grocery'; spending in this category is not independently informative about customers' spending habits, and the feature is thus not necessary for segmenting the customers.
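As an optional extension (a sketch, not required by the project template), the same check can be repeated for every feature to compare how predictable each one is from the others, reusing train_test_split and DecisionTreeRegressor imported above:

# Sketch: repeat the relevance check for each of the six features
for feature in data.columns:
    X = data.drop([feature], axis=1)
    y = data[feature]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=88)
    reg = DecisionTreeRegressor(random_state=88).fit(X_train, y_train)
    print("{}: R^2 = {:.4f}".format(feature, reg.score(X_test, y_test)))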
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
In [5]:
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
In [6]:
import seaborn as sns
# Visualize the pairwise feature correlations as an annotated heatmap
sns.heatmap(data.corr(), annot=True)
Hint: Is the data normally distributed? Where do most of the data points lie? You can use corr() to get the feature correlations and then visualize them using a heatmap (the data fed into the heatmap would be the correlation values, e.g. data.corr()) to gain further insight.
Answer:
The strongest correlation appears to be between Grocery and Detergents_Paper. Other pairs of features which exhibit some degree of correlation are (Milk, Grocery) and (Milk, Detergents_Paper). The Delicatessen feature does not correlate strongly with the other columns; this agrees with the earlier statement in Question 2 that the amount spent in the Delicatessen category would not be well predicted by the five remaining categories. The data does not appear to be normally distributed, and the data points are positively skewed.
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often a critical step in ensuring that the results you obtain from your analysis are significant and meaningful.
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling, particularly for financial data. One way to achieve this scaling is the Box-Cox transformation, which calculates the best power transformation of the data that reduces skewness. A simpler approach, which can work in most cases, is applying the natural logarithm.
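For comparison, here is a minimal sketch (not part of the project template) of how a Box-Cox transformation could be applied with scipy; the sample values below are made up:

# Sketch: Box-Cox power transformation with scipy (illustrative values)
from scipy import stats
sample = np.array([3.0, 12.0, 55.0, 230.0, 1100.0])
transformed, fitted_lambda = stats.boxcox(sample)
print("fitted lambda:", fitted_lambda)
# As lambda approaches 0, Box-Cox approaches log(x), which is why the
# simpler np.log scaling used below is a reasonable alternative.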
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
- Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
In [7]:
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
In [8]:
# Display the log-transformed sample data
display(log_samples)
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: an outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
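As a quick illustration with made-up numbers (not project data):

# Sketch: Tukey's method on illustrative quartiles in log space
Q1, Q3 = 6.0, 8.0
step = 1.5 * (Q3 - Q1)               # IQR = 2.0, so step = 3.0
lower, upper = Q1 - step, Q3 + step  # bounds: 3.0 and 11.0
# Any value below 3.0 or above 11.0 would be flagged as an outlier.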
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable good_data.
In [9]:
# OPTIONAL: Select the indices for data points you wish to remove
outliers = []
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
    # TODO: Calculate Q1 (25th percentile of the data) for the given feature
    Q1 = np.percentile(log_data[feature], 25)
    # TODO: Calculate Q3 (75th percentile of the data) for the given feature
    Q3 = np.percentile(log_data[feature], 75)
    # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
    step = 1.5 * (Q3 - Q1)
    # Display the outliers
    print("Data points considered outliers for the feature '{}':".format(feature))
    outlier = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]
    display(outlier)
    outliers.extend(outlier.index)
print ("Removing outliers: {}".format(np.sort(list(set(outliers)))))
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Hint: If you have data points that are outliers in multiple categories, think about why that may be and whether they warrant removal. Also note how K-means is affected by outliers and whether or not this plays a factor in your analysis of whether or not to remove them.
In [10]:
from collections import Counter
outliers_counter = Counter(outliers)
outliers_more_than_one = list({i for i, count in outliers_counter.items() if count > 1})
print ("Outliers with more than one feature: ", np.sort(outliers_more_than_one))
Answer:
Outliers can skew summary statistics of attribute values, such as the mean and standard deviation, and distort plots such as histograms and scatterplots, compressing the body of the data. However, outliers can also represent data instances that are relevant to the problem, such as anomalies in fraud detection or computer security. We are not concerned with such a problem here, so we can safely remove the outliers. Doing so helps normalize the dataset and improves the generalizability of predictions made on it.
In addition, these points should be added to outliers because K-means is susceptible to outliers: the centroid obtained may end up quite far from the true center of the cluster because of them. For instance, assume a group of points lies close together in some high-dimensional space along with an outlier that is very far away. Since the function to be minimized is the sum of squared distances between the points and their centroid, the algorithm may pull the centroid far toward the outlier, or even select the outlier itself as a centroid, which fails to capture the underlying distribution of the data.
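A small synthetic sketch (not part of the project template) of this centroid drag, using scikit-learn's KMeans:

# Sketch: one far-away point drags a K-means centroid off the cluster
from sklearn.cluster import KMeans
rng = np.random.RandomState(0)
points = rng.normal(loc=0.0, scale=1.0, size=(50, 2))  # bulk near (0, 0)
outlier_point = np.array([[100.0, 100.0]])             # single extreme point
km = KMeans(n_clusters=1, n_init=10, random_state=0)
km.fit(np.vstack([points, outlier_point]))
print(km.cluster_centers_)  # roughly (2, 2) instead of roughly (0, 0)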
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension: how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space; however, it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [11]:
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=len(good_data.columns)).fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.
In [12]:
print ("Explained variance ratio: ", np.cumsum(pca.explained_variance_ratio_))
Answer:
The first and second principal components explain 72.52% of the total variance, while the first four explain 92.79%.
The first principal component is dominated by Detergents_Paper and also carries sizable positive weights for Milk, Grocery, and Delicatessen. This could represent the supermarket spending category.
The second principal component correlates most strongly with spending on Fresh, Frozen, and Delicatessen. It shows a very small negative weight for Detergents_Paper and small positive weights for Milk and Grocery. This could represent restaurants and cafes.
The third principal component is mostly defined by a high positive weight on Delicatessen and a high negative weight on Fresh. This could represent small convenience shops.
The fourth principal component is defined by large negative weights on Frozen and Detergents_Paper, with a large positive weight on Delicatessen. This could represent buyers of Delicatessen goods, such as delis.
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
In [13]:
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data, in effect reducing the complexity of the problem. Dimensionality reduction comes at a cost: fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
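As a small sketch (not part of the project template), the number of dimensions can also be chosen programmatically from the cumulative explained variance of the six-dimensional pca fit above; the 90% threshold here is an arbitrary assumption:

# Sketch: smallest number of components explaining >= 90% of the variance
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.argmax(cum_var >= 0.90)) + 1
print("Components needed for 90% of the variance:", n_keep)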
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
- Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [14]:
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
In [15]:
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
In [16]:
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Hint: Think about the differences between hard clustering and soft clustering and which would be appropriate for our dataset.
Answer:
K-means clustering is fast, reliable, and simple to interpret. It does an excellent job if clusters are well defined and not mixed among themselves. Gaussian Mixture Models, on the other hand, are more flexible and allow for mixed membership of clusters. The GMM algorithm is a good choice for the classification of static postures and non-temporal pattern recognition. By using Gaussians, data points do not have to be assigned rigidly; points with lower probability can be associated with multiple clusters at once, and the model provides a density estimate for each cluster. K-means, in contrast, requires prior knowledge of the data to specify the expected number of clusters, and our data is not divided into clean, tight categories. Therefore, I will be using a Gaussian Mixture Model.
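A small synthetic sketch (not part of the project template) of the soft assignments a GMM provides, in contrast to the hard labels of K-means:

# Sketch: GMM soft cluster membership on synthetic two-cluster data
from sklearn.mixture import GaussianMixture
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
gmm_demo = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm_demo.predict(X)[:3])        # hard labels, as K-means would give
print(gmm_demo.predict_proba(X)[:3])  # per-cluster membership probabilities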
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
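Concretely, for a data point i, if a(i) is the mean distance from i to the other points in its own cluster and b(i) is the mean distance from i to the points in the nearest other cluster, the silhouette coefficient is s(i) = (b(i) - a(i)) / max(a(i), b(i)); the mean of s(i) over all points is the score reported below.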
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers and assign them to centers.
- Predict the cluster for each transformed sample data point in pca_samples and assign them to sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds. Assign the silhouette score to score and print the result.
In [17]:
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
def gmm(n, reduced_data, pca_samples):
    # TODO: Apply your clustering algorithm of choice to the reduced data
    clusterer = GaussianMixture(n_components=n, random_state=43)
    clusterer.fit(reduced_data)
    # TODO: Predict the cluster for each data point
    preds = clusterer.predict(reduced_data)
    # TODO: Find the cluster centers
    cluster_centers = clusterer.means_
    # TODO: Predict the cluster for each transformed sample data point
    sample_preds = clusterer.predict(pca_samples)
    score = silhouette_score(reduced_data, preds)
    print("Silhouette Score for {} clusters: {}".format(n, score))
    return (preds, cluster_centers, sample_preds, score)

for n in range(2, 10):
    preds, centers, sample_preds, score = gmm(n, reduced_data, pca_samples)
Answer:
Two clusters perform the best, with a silhouette score of approximately 0.4468.
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
In [22]:
# Display the results of the clustering from implementation
preds, centers, sample_preds, score = gmm(2, reduced_data, pca_samples)
vs.cluster_results(reduced_data, preds, centers, pca_samples)
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
In [23]:
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'. Think about what each segment represents in terms of their values for the feature points chosen. Reference these values with the mean values to get some perspective into what kind of establishment they represent.
Answer:
For Segment 0, the total purchase cost is rather high for each product category except Detergents_Paper and Delicatessen. This indicates that the establishment is likely to be a restaurant. Segment 1, on the other hand, shows strong weight across all categories; in particular, the higher cost for Detergents_Paper indicates that the establishment is likely to be a supermarket.
In [26]:
# Display the predictions
for i, pred in enumerate(sample_preds):
    print("Sample point", i, "predicted to be in Cluster", pred)
display(samples)
Answer:
As can be seen from the table, the first two sample points have a very low purchase cost for Detergents_Paper and higher purchase costs for the other product categories; intuitively, I would expect these to be establishments related to food and beverages. The third sample point is likely to be a supermarket. The predictions for each sample point appear to be consistent, since the model correctly predicted the cluster to which each sample belongs.
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively.
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer:
Changing the delivery frequency would most affect customers who are concerned with the freshness of the product. Restaurants, which buy large amounts of fresh food, would particularly value frequent delivery from this perspective. Retailers, on the other hand, are expected to be less concerned with freshness, though they might still want frequent delivery if they keep little stock on hand and rely on just-in-time inventory. We can validate this hypothesis by running an A/B test: first, randomly select a small sample of all customers, switch that sample to 3-day-per-week delivery, and monitor their purchases. By analyzing the change in their purchases with respect to the customer segment they belong to, we can determine the impact of changing the delivery frequency for each segment of customers.
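A minimal sketch of selecting such a trial group (not part of the project template), reusing train_test_split imported earlier and assuming the preds segment labels from the clustering cells above; the 20% trial size is an arbitrary assumption:

# Sketch: stratified trial group so each segment is represented
customer_ids = np.arange(len(reduced_data))
control_ids, trial_ids = train_test_split(customer_ids, test_size=0.2, stratify=preds, random_state=88)
print(len(trial_ids), "customers would receive 3-day-per-week delivery")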
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
Answer:
By using a supervised learning classifier, we can predict an appropriate customer segment for each of the ten new customers. The model would be trained on the existing customers' spending data, with each customer's cluster assignment (customer segment) as the target variable. Given their estimated product spending, we can then generate predictions for the appropriate customer segment of the new customers.
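A minimal sketch of this idea (not part of the project template), assuming good_data and the preds cluster labels from earlier cells; new_customers is a hypothetical DataFrame of the ten new customers' spending estimates:

# Sketch: train a classifier with cluster assignments as the target
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=88)
clf.fit(good_data, preds)  # features: log-scaled spending; target: segment
# new_segments = clf.predict(np.log(new_customers))  # hypothetical new data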
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
In [28]:
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
Answer:
These classifications are fairly consistent with my previous clustering into two customer segments. However, as can be seen from the graph, the two clusters are not fully distinct: because there is some overlap between them, no customer segment would be classified as purely 'Retailers' or purely 'Hotels/Restaurants/Cafes'. Having said that, there is a fairly clear delineation, and very few Retailers fall within the Hotels/Restaurants/Cafes cluster.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.