Machine Learning Engineer Nanodegree

Supervised Learning

Project: Finding Donors for CharityML

Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.

Getting Started

In this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features.

The dataset for this project originates from the UCI Machine Learning Repository. The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article "Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid". You can find the article by Ron Kohavi online. The data we investigate here consists of small changes to the original dataset, such as removing the 'fnlwgt' feature and records with missing or ill-formatted entries.


Exploring the Data

Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, 'income', will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.


In [1]:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames

# Import supplementary visualization code visuals.py
import visuals as vs

# Pretty display for notebooks
%matplotlib inline

# Load the Census dataset
data = pd.read_csv("census.csv")

# Success - Display the first record
display(data.head(n=1))


age workclass education_level education-num marital-status occupation relationship race sex capital-gain capital-loss hours-per-week native-country income
0 39 State-gov Bachelors 13.0 Never-married Adm-clerical Not-in-family White Male 2174.0 0.0 40.0 United-States <=50K

Implementation: Data Exploration

A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, you will need to compute the following:

  • The total number of records, 'n_records'
  • The number of individuals making more than \$50,000 annually, 'n_greater_50k'.
  • The number of individuals making at most \$50,000 annually, 'n_at_most_50k'.
  • The percentage of individuals making more than \$50,000 annually, 'greater_percent'.

Hint: You may need to look at the table above to understand how the 'income' entries are formatted.


In [2]:
# TODO: Total number of records
n_records = len(data)

# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = np.sum(data["income"]==">50K")

# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = np.sum(data["income"]=="<=50K")

# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = np.mean(data["income"]==">50K")*100.0

# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)


Total number of records: 45222
Individuals making more than $50,000: 11208
Individuals making at most $50,000: 34014
Percentage of individuals making more than $50,000: 24.78%

Preparing the Data

Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as preprocessing. Fortunately, for this dataset, there are no invalid or missing entries we must deal with; however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.

Transforming Skewed Continuous Features

A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset two features fit this description: 'capital-gain' and 'capital-loss'.
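Before plotting, a quick numeric check can confirm this claim (a minimal sketch only; it assumes the data DataFrame loaded above and uses pandas' built-in describe() and skew()):

# Sketch: quantify how skewed 'capital-gain' and 'capital-loss' are before plotting.
# Assumes the `data` DataFrame from the loading cell above.
print data[['capital-gain', 'capital-loss']].describe()
print data[['capital-gain', 'capital-loss']].skew()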

Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.


In [3]:
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)

# Visualize skewed continuous features of original data
vs.distribution(data)


For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a logarithmic transformation on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation, however: the logarithm of 0 is undefined, so we must translate the values by a small amount above 0 to apply the logarithm successfully.

Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.


In [4]:
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))

# Visualize the new log distributions
vs.distribution(features_raw, transformed = True)


Normalizing Numerical Features

In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as 'capital-gain' or 'capital-loss' above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as shown in the example below.

Run the code cell below to normalize each numerical feature. We will use sklearn.preprocessing.MinMaxScaler for this.


In [5]:
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler

# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])

# Show an example of a record with scaling applied
display(features_raw.head(n = 1))


age workclass education_level education-num marital-status occupation relationship race sex capital-gain capital-loss hours-per-week native-country
0 0.30137 State-gov Bachelors 0.8 Never-married Adm-clerical Not-in-family White Male 0.02174 0.0 0.397959 United-States

Implementation: Data Preprocessing

From the table in Exploring the Data above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called categorical variables) be converted. One popular way to convert categorical variables is by using the one-hot encoding scheme. One-hot encoding creates a "dummy" variable for each possible category of each non-numeric feature. For example, assume someFeature has three possible entries: A, B, or C. We then encode this feature into someFeature_A, someFeature_B and someFeature_C.

   someFeature                              someFeature_A  someFeature_B  someFeature_C
0  B                                        0              1              0
1  C          ----> one-hot encode ---->    0              0              1
2  A                                        1              0              0

Additionally, as with the non-numeric features, we need to convert the non-numeric target label, 'income', to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as 0 and 1, respectively. In the code cell below, you will need to implement the following:

  • Use pandas.get_dummies() to perform one-hot encoding on the 'features_raw' data.
  • Convert the target label 'income_raw' to numerical entries.
    • Set records with "<=50K" to 0 and records with ">50K" to 1.

In [6]:
# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)

# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.apply(lambda x: 1 if x == '>50K' else 0)

# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))

# Uncomment the following line to see the encoded feature names
#print encoded


103 total features after one-hot encoding.

Shuffle and Split Data

Now all categorical variables have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.

Run the code cell below to perform this split.


In [7]:
# Import train_test_split
from sklearn.cross_validation import train_test_split

# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)

# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])


Training set has 36177 samples.
Testing set has 9045 samples.

Evaluating Model Performance

In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a naive predictor.

Metrics and the Naive Predictor

CharityML, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, CharityML is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using accuracy as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that does not make more than \$50,000 as someone who does would be detrimental to CharityML, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is more important than the model's ability to recall those individuals. We can use the F-beta score as a metric that considers both precision and recall:

$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$

In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the F$_{0.5}$ score (or F-score for simplicity).
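As a quick sanity check of this formula, a hand computation on a small hypothetical label vector should agree with sklearn.metrics.fbeta_score (a sketch only; the tiny arrays below are made up for illustration):

# Sketch: verify the F-beta formula against sklearn on a tiny hypothetical example
import numpy as np
from sklearn.metrics import fbeta_score

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])   # 2 TP, 1 FP, 1 FN

precision = 2 / float(2 + 1)                  # TP / (TP + FP)
recall    = 2 / float(2 + 1)                  # TP / (TP + FN)
beta = 0.5
f_manual = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print "Manual F_0.5:  {:.4f}".format(f_manual)
print "sklearn F_0.5: {:.4f}".format(fbeta_score(y_true, y_pred, beta=0.5))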

Looking at the distribution of classes (those who make at most \$50,000, and those who make more), it's clear most individuals do not make more than \$50,000. This can greatly affect accuracy, since we could simply say "this person does not make more than \$50,000" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: if we predicted all people made at most \$50,000, CharityML would identify no one as donors.

Question 1 - Naive Predictor Performance

If we chose a model that always predicted an individual made more than \$50,000, what would that model's accuracy and F-score be on this dataset?
Note: You must use the code cell below and assign your results to 'accuracy' and 'fscore' to be used later.


In [8]:
# TODO: Calculate accuracy
naiveprediction = np.ones(len(income.values))
accuracy = np.mean(income.values == naiveprediction)

# TODO: Calculate F-score using the formula above for beta = 0.5
TP = np.sum(naiveprediction[income.values==1]==1)
FP = np.sum(naiveprediction[income.values==0]==1)
FN = np.sum(naiveprediction[income.values==1]==0)
precision = TP / float(TP + FP)
recall = TP / float(TP + FN)
beta_value = 0.5

fscore = (1 + beta_value**2)*precision*recall / (precision * beta_value**2 + recall)

# Print the results 
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)


Naive Predictor: [Accuracy score: 0.2478, F-score: 0.2917]

Supervised Learning Models

The following supervised learning models are currently available in scikit-learn that you may choose from:

  • Gaussian Naive Bayes (GaussianNB)
  • Decision Trees
  • Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
  • K-Nearest Neighbors (KNeighbors)
  • Stochastic Gradient Descent Classifier (SGDC)
  • Support Vector Machines (SVM)
  • Logistic Regression

Question 2 - Model Application

List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen:

  • Describe one real-world application in industry where the model can be applied. (You may need to do research for this — give references!)
  • What are the strengths of the model; when does it perform well?
  • What are the weaknesses of the model; when does it perform poorly?
  • What makes this model a good candidate for the problem, given what you know about the data?

Answer:

Random Forest:

  • An example of Random Forests being used in industry is in the prediction of crop yields. Often the default model to predict crop yields is a (multi-dimensional) linear regression on the input variables, which include weather data in combination with "ground-based data". Advantages of these linear models are their simplicity, the obvious interpretation of the weights assigned to the various input variables, and their ability to extrapolate beyond the previously seen data. Despite Random Forests being primarily designed for classification rather than regression, they were used to obtain predictions which significantly outperformed the older linear regressions: they achieved lower root-mean-squared error, as well as improved "Nash-Sutcliffe model efficiency" and "Willmott's $d$ index of agreement".
  • Advantages of Random Forests include the ability to handle very high-dimensional data; this is due to the space being segmented by many very simple, "stunted" decision trees. Random Forests are also computationally fast ensemble methods, and obtain good scores even on relatively low amounts of data. They are also good at estimating the relative importance of the various input features, thus giving a quick handle on dimensionality reduction. Moreover, random forests are not very prone to overfitting, provided that the decision trees forming the forest are simple. Finally, when used in classification problems, random forests naturally provide a probability estimate of the various classes (however, this probability estimate tends to be underconfident).
  • Disadvantages include prediction speed compared to simpler algorithms: because the model is an ensemble of trees, predictions are made by running the data through all of the (potentially hundreds of) trees. This is clearly slower than a linear regression, and in some cases, e.g. moving vehicles, time is extremely important. When used for regression, a Random Forest will have a hard time extrapolating beyond the training data, since the final prediction is a vote (or average) over values seen in the training data. Random Forests also have a tough time with one-hot encoded categorical variables when there are many categories. In these cases it is often advantageous, although theoretically awkward, to encode categorical variables with integers from 1 to N_categories, and in this way falsely introduce a sense of ordering and scale amongst the categories; nonetheless, this can lead to a boost in performance compared to the one-hot encoded variant. Finally, Random Forests do not provide much intuition for the relationship between input variables and output (beyond their feature importances ranking).
  • Although one-hot encoding makes this dataset high-dimensional, I suspect some of those variables are not very relevant and will effectively be ignored by the forest. Moreover, Random Forests have such excellent performance in a wide variety of settings that they make a good benchmark for other algorithms to beat, and in many cases are indeed the best model to use. The high dimensionality of this dataset and its rather large number of datapoints make it well suited for Random Forests with many trees (a short sketch of the forest's probability estimates and feature importances follows this list).
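A minimal sketch of the two conveniences mentioned above (class probabilities and feature importances), assuming the preprocessed X_train, y_train and X_test from the cells above; the small n_estimators value is chosen only to keep the sketch fast:

# Sketch: a Random Forest's built-in probability estimates and feature importances.
# Assumes X_train, y_train, X_test from the preprocessing cells above.
from sklearn.ensemble import RandomForestClassifier

rf_sketch = RandomForestClassifier(n_estimators=30, random_state=42)
rf_sketch.fit(X_train, y_train)

# Estimated probability of earning more than $50,000 for the first few test individuals
print rf_sketch.predict_proba(X_test[:5])[:, 1]

# The five most important (one-hot encoded) features according to the forest
top5 = sorted(zip(rf_sketch.feature_importances_, X_train.columns), reverse=True)[:5]
for importance, name in top5:
    print "{:<35} {:.3f}".format(name, importance)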

GaussianNB:

  • Gaussian Naive Bayes is very popular in text analysis. For the sake of diversity, I will discuss a less common use case: the identification of DNA-binding proteins. Here it is possible to take the various features of the proteins, such as their amino-acid composition, their interaction with various other chemicals or their structure, and predict with good accuracy (scoring just under 80%, with the ROC curve having an area-under-the-curve of just under 0.8) how they will bind to DNA. In particular, GaussianNB "outperformed five other classifiers, including decision tree, logistic regression, k-nearest neighbor, support vector machine with polynomial kernel, and support vector machine with radial basis function" (quoted from article abstract). Here the properties of the proteins are assumed to be independent of each other; while they are not completely independent, they do not correlate with each other so strongly, and are hence very useful for this sort of problem.
  • Gaussian Naive Bayes assumes all variables to be independent, and that (for each class) the variables are distributed in a Gaussian way. For very noisy processes with potentially many hidden variables, the assumption of Gaussianity is often a reasonable approximation. Moreover, GaussianNB is very quick and easy to train: it is a simple matter of finding the mean and variance of each of the variables for each given class (sketched after this list). This means that even with very small amounts of data, it is possible to generate reasonable predictions which encode much of the variance of the data. Finally, the Gaussian has extremely nice properties which are highly useful in Bayesian statistics; for example, it is a self-conjugate prior, so if we want a distribution for the possible probabilities of a protein being of a certain type, we not only get a number, but also a Gaussian distribution over those possible numbers, where the variance of that distribution depends on the data we have observed.
  • A risk in Naive Bayes is that one of the probabilities is equal to zero, due to scarcity of that event, thus setting the entire probability to zero. To avoid this, one should actually view each factor in P(x1, x2, ... | y) = P(x1|y) P(x2|y) ... as a distribution, and go through Bayes' Theorem using these distributions rather than single values. This "smoothing out" of the probabilities overcomes the zero problem, but is an added layer of complication. Naive Bayes is also poor at regression tasks, and requires some sort of binning of the output variable in order to generate any regression value. When the variables are correlated with each other, the fundamental assumption of Naive Bayes breaks down. In many cases the errors introduced by this either cancel each other out or at least preserve the ranking of the predicted probabilities, so it is often not a problem in practice; but it may also be that the correlations between the inputs invalidate the model sufficiently to render it of limited utility.
  • Much of the randomness of the various input variables in our dataset should be approximately Gaussian. While our variables do depend on each other, the correlation between them is probably not very strong; in these cases, GaussianNB has performed well in the past and is a plausible algorithm to try out. It is fast and simple, and is a useful alternative to Random Forests. Moreover, as a generative model, we can actually interpret the means and variances of the various Gaussians to say something interesting about society.
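A minimal sketch of what "training" a Gaussian Naive Bayes model essentially amounts to, namely computing per-class means and variances of each numerical feature (assuming the data DataFrame from the exploration cells above):

# Sketch: the per-class means and variances that Gaussian Naive Bayes estimates during training.
# Assumes the `data` DataFrame from the cells above.
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']

print "Per-class means:"
print data.groupby('income')[numerical].mean()
print "\nPer-class variances:"
print data.groupby('income')[numerical].var()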

K-Nearest Neighbors:

  • A natural use-case for KNN is recommender systems, but I also discovered a use-case in fault detection. Here it is possible to detect early when machines will fail based on whether a similar set of parameters has often led to a machine failure in the past. Because the interactions between the variables and their causality with regards to failure can be very complex, it is simpler to just "remember" which set of parameters was dangerous, and to trigger warnings when similar conditions are detected.
  • KNN requires virtually no training time. It is also convenient when we do not understand the process, and wish to still make meaningful predictions; in such cases, it is easiest to base predictions on similarity to past events. Finally, it often does quite well at detecting various "types" of data, i.e. if the data does cluster, KNN will take advantage of that very well.
  • Although predictions are generated, there is no true underlying model that explains what the relationship between input variables and output quantities is; in other words, KNN can offer no real explanation for why it makes a prediction beyond the fact that the data is "similar" to other data it saw in the past. KNN is also slow at predicting for large datasets. You also have to pick $k$ by hand, which may be challenging to do (a simple validation sweep, sketched after this list, is one way to choose it). KNN is also sensitive to outliers and noisy labels, since a prediction is determined entirely by the nearest training points, and it implicitly assumes that nearby points share the same label, which may not hold. Finally, since KNN compares current data to past data without understanding the relationship, it may be particularly prone to making mistakes when previous data ceases to be of much relevance to today's problem (e.g. recommending flared jeans in 2017 based on their popularity with crop-tops in the '90s).
  • The relationship between demographic information and salary can be very obscure, and it may be that instance-based learning will outperform other algorithms that base their responses on some underlying model which makes assumptions on the causality and distribution of the data. Moreover, KNN is a fundamentally different approach from the other two models above, and for the sake of completeness should be tried (it would make no sense to attempt both Random Forests and Decision Trees, as these models operate with essentially the same principles).
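A minimal sketch of choosing $k$ with a simple sweep (for illustration only: it scores on the test set and subsamples the training data to stay fast, whereas in practice a separate validation set or cross-validation should be used). It assumes X_train, y_train, X_test, y_test from the split above:

# Sketch: pick k by sweeping a few values and comparing F_0.5 scores.
# Assumes X_train, y_train, X_test, y_test from the split above.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import fbeta_score

for k in [1, 3, 5, 10, 20]:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train[:3000], y_train[:3000])   # subsample only to keep the sketch fast
    score = fbeta_score(y_test, knn.predict(X_test), beta=0.5)
    print "k = {:>2}: F_0.5 = {:.4f}".format(k, score)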

Implementation: Creating a Training and Predicting Pipeline

To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section. In the code block below, you will need to implement the following:

  • Import fbeta_score and accuracy_score from sklearn.metrics.
  • Fit the learner to the sampled training data and record the training time.
  • Perform predictions on the test data X_test, and also on the first 300 training points X_train[:300].
    • Record the total prediction time.
  • Calculate the accuracy score for both the training subset and testing set.
  • Calculate the F-score for both the training subset and testing set.
    • Make sure that you set the beta parameter!

In [9]:
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import accuracy_score, fbeta_score

def train_predict(learner, sample_size, X_train, y_train, X_test, y_test): 
    '''
    inputs:
       - learner: the learning algorithm to be trained and predicted on
       - sample_size: the size of samples (number) to be drawn from training set
       - X_train: features training set
       - y_train: income training set
       - X_test: features testing set
       - y_test: income testing set
    '''
    
    results = {}
    
    # TODO: Fit the learner to the training data using slicing with 'sample_size'
    start = time() # Get start time
    learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
    end = time() # Get end time
    
    # TODO: Calculate the training time
    results['train_time'] = end - start
        
    # TODO: Get the predictions on the test set,
    #       then get predictions on the first 300 training samples
    start = time() # Get start time
    predictions_test = learner.predict(X_test)
    predictions_train = learner.predict(X_train[:300])
    end = time() # Get end time
    
    # TODO: Calculate the total prediction time
    results['pred_time'] = end - start
            
    # TODO: Compute accuracy on the first 300 training samples
    results['acc_train'] = accuracy_score(y_train[:300], predictions_train)

    # TODO: Compute accuracy on test set
    results['acc_test'] = accuracy_score(y_test, predictions_test)
    
    # TODO: Compute F-score on the first 300 training samples
    results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=0.5)
        
    # TODO: Compute F-score on the test set
    results['f_test'] = fbeta_score(y_test, predictions_test, beta=0.5)
       
    # Success
    print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
        
    # Return the results
    return results

Implementation: Initial Model Evaluation

In the code cell below, you will need to implement the following:

  • Import the three supervised learning models you've discussed in the previous section.
  • Initialize the three models and store them in 'clf_A', 'clf_B', and 'clf_C'.
    • Use a 'random_state' for each model you use, if provided.
    • Note: Use the default settings for each model — you will tune one specific model in a later section.
  • Calculate the number of records equal to 1%, 10%, and 100% of the training data.
    • Store those values in 'samples_1', 'samples_10', and 'samples_100' respectively.

Note: Depending on which algorithms you chose, the following implementation may take some time to run!


In [10]:
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# TODO: Initialize the three models
clf_A = RandomForestClassifier(random_state=42)
clf_B = GaussianNB()
clf_C = KNeighborsClassifier()

# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = int(len(X_train)*0.01)
samples_10 = int(len(X_train)*0.1)
samples_100 = len(X_train)

# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
    clf_name = clf.__class__.__name__
    results[clf_name] = {}
    for i, samples in enumerate([samples_1, samples_10, samples_100]):
        results[clf_name][i] = \
        train_predict(clf, samples, X_train, y_train, X_test, y_test)

# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)


RandomForestClassifier trained on 361 samples.
RandomForestClassifier trained on 3617 samples.
RandomForestClassifier trained on 36177 samples.
GaussianNB trained on 361 samples.
GaussianNB trained on 3617 samples.
GaussianNB trained on 36177 samples.
KNeighborsClassifier trained on 361 samples.
KNeighborsClassifier trained on 3617 samples.
KNeighborsClassifier trained on 36177 samples.

Improving Results

In this final section, you will choose from the three supervised learning models the best model to use on the census data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F-score.

Question 3 - Choosing the Best Model

Based on the evaluation you performed earlier, in one to two paragraphs, explain to CharityML which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000.
Hint: Your answer should include discussion of the metrics, prediction/training time, and the algorithm's suitability for the data.

Answer:

I would recommend using Random Forests. It is a model which often performs very well, and in this case was the one that performed best, even though KNN was a close second in terms of accuracy and F-score. Both Random Forests and KNN performed significantly better than the naive benchmark predictor. Random Forests offer many considerable advantages over KNN: they are dramatically faster at prediction, they offer feature importances out of the box, and they can compute the probabilities of the classifications. This is even before any dimensionality reduction! Finally, Random Forests are very easy to parallelize, and hence can be made even faster very easily.

This type of data has high dimensionality and many subtle correlations which are not well understood; Random Forests make extremely few assumptions on how the data should correlate with the output classification. This agnosticism is advantageous for us in this particular problem, and allows us to do well.

Question 4 - Describing the Model in Layman's Terms

In one to two paragraphs, explain to CharityML, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.

Answer:

We train the model by constructing many datasets from our original dataset; each of these datasets is made by selecting many random datapoints from the original set, where we are allowed to select datapoints multiple times. So in this way we have created many different collections of individuals from our original dataset of individuals.

For each of these datasets, we make a "decision tree", which is essentially a way of asking each datapoint, i.e. each person in that set, a series of yes-or-no questions, so that for each person we can correctly predict which income bracket that person will be in. This is very much like playing "20 Questions" to work out which income bracket each person is in. But because the various datasets all differ from each other, the actual way in which we ask the questions varies: each dataset will have its own decision tree which is good at "playing 20 Questions" precisely with that dataset only.

When we want to predict the income bracket of a new person we have never seen before, we let the first decision tree ask its yes-or-no questions, and it will decide which income bracket it thinks that person will be in. Then we do the same for the next decision tree, and so on, until all decision trees have made a decision on which income bracket that person will be in. Usually not all trees agree, so we simply take a vote and majority wins. That is how we predict the income bracket of that individual.
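For the more technically inclined reader, here is a minimal sketch of this "many bootstrapped trees plus a majority vote" idea (not the exact scikit-learn implementation; it assumes X_train, y_train and X_test from the cells above):

# Sketch: bootstrap sampling + several decision trees + majority vote,
# i.e. the core idea described above (not the exact scikit-learn implementation).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

np.random.seed(42)
n_trees = 11
trees = []

for _ in range(n_trees):
    # Draw a bootstrap sample: the same person can be picked more than once
    idx = np.random.choice(len(X_train), size=len(X_train), replace=True)
    tree = DecisionTreeClassifier(max_depth=10)
    tree.fit(X_train.iloc[idx], y_train.iloc[idx])
    trees.append(tree)

# Each tree "votes" on every test individual; the majority wins
votes = np.array([tree.predict(X_test) for tree in trees])
majority_vote = (votes.mean(axis=0) > 0.5).astype(int)
print majority_vote[:10]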

Implementation: Model Tuning

Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:

  • Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.
  • Initialize the classifier you've chosen and store it in clf.
    • Set a random_state if one is available to the same state you set before.
  • Create a dictionary of parameters you wish to tune for the chosen model.
    • Example: parameters = {'parameter' : [list of values]}.
    • Note: Avoid tuning the max_features parameter of your learner if that parameter is available!
  • Use make_scorer to create an fbeta_score scoring object (with $\beta = 0.5$).
  • Perform grid search on the classifier clf using the 'scorer', and store it in grid_obj.
  • Fit the grid search object to the training data (X_train, y_train), and store it in grid_fit.

Note: Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!


In [11]:
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer

# TODO: Initialize the classifier
clf = RandomForestClassifier(random_state=42)

# TODO: Create the parameters list you wish to tune
parameters = {"n_estimators": [10, 30, 100], "criterion": ["gini", "entropy"], 
              "max_depth": [2, 10, 15, None], "random_state": [42]}

# TODO: Make an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=0.5)

# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf, parameters, scoring=scorer, cv=5, n_jobs=3)

# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train, y_train)

# Get the estimator
best_clf = grid_fit.best_estimator_

# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)

# Report the before-and-after scores
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))


Unoptimized model
------
Accuracy score on testing data: 0.8431
F-score on testing data: 0.6843

Optimized Model
------
Final accuracy score on the testing data: 0.8584
Final F-score on the testing data: 0.7315

Question 5 - Final Model Evaluation

What is your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in Question 1?
Note: Fill in the table below with your results, and then provide discussion in the Answer box.

Results:

Metric           Benchmark Predictor   Unoptimized Model   Optimized Model
Accuracy Score   0.2478                0.8431              0.8584
F-score          0.2917                0.6843              0.7315

Answer:

The accuracy score improves only modestly between the unoptimized and optimized models, and both are very much higher than the benchmark prediction. The F-score, on the other hand, improves significantly for the optimized model; this is unsurprising, since we performed the grid search precisely in order to optimize this F-score. Since accuracy barely changed while the F-score rose, the mix of errors must have shifted: we are likely trading one kind of mistake for another, for example catching a larger proportion of wealthy individuals while also accidentally sending letters to a somewhat higher proportion of non-high earners. The confusion matrices sketched below would make this trade-off explicit.
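A minimal sketch of that check, assuming predictions and best_predictions from the tuning cell above:

# Sketch: compare the error types of the unoptimized and optimized models.
# Rows are the true labels (<=50K, >50K); columns are the predicted labels.
from sklearn.metrics import confusion_matrix

print "Unoptimized model:"
print confusion_matrix(y_test, predictions)
print "\nOptimized model:"
print confusion_matrix(y_test, best_predictions)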


Feature Importance

An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.

Choose a scikit-learn classifier (e.g., AdaBoost, Random Forests) that has a feature_importances_ attribute, which ranks the importance of features according to the chosen classifier. In the next Python cell, fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset.

Question 6 - Feature Relevance Observation

When Exploring the Data, it was shown there are thirteen available features for each individual on record in the census data.
Of these thirteen features, which five do you believe to be most important for prediction, and in what order would you rank them and why?

Answer:

Before looking at the answer from the random forest, I would rank the features as follows:

  1. education-num
  2. capital-gain
  3. age
  4. marital-status
  5. sex

This is in part based on intuition, e.g. it seems very likely to me that one's education, occupation, age, and capital-gain should influence a person's salary, but is largely based on the graphs I plotted in the next cell. There I look at the fraction of high earners for each category of the various features, and I chose the 5 that seemed to be able to discriminate most powerfully. I only look at categories for which there is a significant amount of data (I set the minimum threshold to 0.2% of the total amount of data).


In [12]:
from copy import deepcopy
import matplotlib.pyplot as plt
%matplotlib inline

original_data = deepcopy(data)
# Turn income into 0's and 1's
original_data["income"] = (original_data["income"]==">50K").astype(int)
# Give all datapoints a 1, so that we can count them more easily
original_data["all"] = 1
#original_data["capital-gain"] = original_data["capital-gain"].apply(lambda x: np.log(x + 1))

# Pick which columns we will plot
chosencolumns = ["education-num", "capital-gain", "age", "marital-status", "sex"]
# If there aren't very many datapoints we will be misled. Hence, choose a minimum number of datapoints required
min_datapoints = 0.002*len(original_data)

# Construct a figure with subplots
numberofrows = 3
fig, axes = plt.subplots(nrows=numberofrows, ncols=2, figsize=(12,5*numberofrows), tight_layout=True)

for col, currentax in zip(chosencolumns, axes.ravel()):
    num_cases_each_category = original_data.groupby(col)["all"].sum()
    indices_of_interest = num_cases_each_category[num_cases_each_category > min_datapoints].index
    average_high_earners = original_data.groupby(col)["income"].mean()
    average_high_earners[indices_of_interest].plot.bar(ax=currentax)
    currentax.set_ylabel("Average number of high earners")
    currentax.set_ylim([0,1])

axes.ravel()[-1].axis("off");


From the above it is clear that education is extremely important. Moreover, virtually everyone with a non-zero capital gain is a high earner. When it comes to age, we see a natural career progression where people earn more as they age until they retire, after which they earn less. For marital status, we see that knowing someone is married is by itself enough to indicate a much higher chance of being a high earner. Finally, we clearly see that males are far more likely than females to be high earners.

Implementation: Extracting Feature Importance

Choose a scikit-learn supervised learning algorithm that has a feature_importances_ attribute available for it. This attribute ranks the importance of each feature when making predictions based on the chosen algorithm.

In the code cell below, you will need to implement the following:

  • Import a supervised learning model from sklearn if it is different from the three used earlier.
  • Train the supervised model on the entire training set.
  • Extract the feature importances using '.feature_importances_'.

In [13]:
# TODO: Import a supervised learning model that has 'feature_importances_'

# TODO: Train the supervised model on the training set 
model = deepcopy(best_clf)
model.fit(X_train, y_train)
print "Chosen parameters for RandomForestClassifier:"
print grid_fit.best_params_

# TODO: Extract the feature importances
importances = model.feature_importances_

# Plot
vs.feature_plot(importances, X_train, y_train)


Chosen parameters for RandomForestClassifier:
{'n_estimators': 100, 'random_state': 42, 'criterion': 'entropy', 'max_depth': 15}

Question 7 - Extracting Feature Importance

Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000.
How do these five features compare to the five features you discussed in Question 6? If you were close to the same answer, how does this visualization confirm your thoughts? If you were not close, why do you think these features are more relevant?

Answer:

The random forest seems to agree with my guess very well, with slight discrepancies on the actual order of the various importances:

  1. It ranks capital-gains as the best feature. I chose this as my second-best feature.
  2. It ranks marital status next. I instead ranked this fourth, but for the same reasons: it's clear married people earn much more than anyone else.
  3. It ranks education-num third. This was my first pick.
  4. Age ranks fourth, whereas I ranked it third.
  5. "Husband" is the final ranking. I chose Age. The two are basically the same thing; I chose age because it is more independent from marital status.

The actual discrepancy in rankings probably has to do with the prevalence of the various features. For example, I might have noticed that everyone with non-zero capital gains is very wealthy, and hence given this feature a high importance. But there may be only a handful of individuals in the entire dataset with a non-zero capital gains. Hence, this feature is unlikely to be very useful in practice. The random forest is able to weight the discriminative power of the various features by their prevalence in the dataset (in an automatic way, i.e. by construction), and so is a better estimator for feature importances.

Feature Selection

How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of all features present in the data. This hints that we can attempt to reduce the feature space and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set with only the top five important features.


In [14]:
# Import functionality for cloning a model
from sklearn.base import clone

# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]

# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)

# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)

# Report scores from the final model using both versions of data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))


Final Model trained on full data
------
Accuracy on testing data: 0.8584
F-score on testing data: 0.7315

Final Model trained on reduced data
------
Accuracy on testing data: 0.8447
F-score on testing data: 0.6901

Question 8 - Effects of Feature Selection

How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?
If training time was a factor, would you consider using the reduced data as your training set?

Answer:

The accuracy drops only slightly, but the F-score does decrease noticeably: with fewer features, the model makes a somewhat different mix of mistakes, for example erroneously sending out letters to some people who are not wealthy (false positives) or overlooking some wealthy people and not sending them a letter at all (false negatives).

To decide whether this is worth it, I would need to quantify the cost to my company in having a longer training time and compare it to the cost of overlooking certain wealthy individuals. This would allow me to pick the best strategy.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.