Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been written. You will need to implement additional functionality to successfully answer all of the questions for this project. Unless it is requested, do not modify any of the code that has already been included. In this template code, there are four sections which you must complete to successfully produce a prediction with your model. Each section where you will write code is preceded by a STEP X header with comments describing what must be done. Please read the instructions carefully!
In addition to implementing code, there will be questions that you must answer that relate to the project and your implementation. Each section where you will answer a question is preceded by a QUESTION X header. Be sure that you have carefully read each question and provide thorough answers in the text boxes that begin with "Answer:". Your project submission will be evaluated based on your answers to each of the questions.
A description of the dataset can be found here, which is provided by the UCI Machine Learning Repository.
To familiarize yourself with an iPython Notebook, try double clicking on this cell. You will notice that the text changes so that all the formatting is removed. This allows you to make edits to the block of text you see here. This block of text (and mostly anything that's not code) is written using Markdown, which is a way to format text using headers, links, italics, and many other options! Whether you're editing a Markdown text block or a code block (like the one below), you can use the keyboard shortcut Shift + Enter or Shift + Return to execute the code or text block. In this case, it will show the formatted text.
Let's start by setting up some code we will need to get the rest of the project up and running. Use the keyboard shortcut mentioned above on the following code block to execute it. Alternatively, depending on your iPython Notebook program, you can press the Play button in the hotbar. You'll know the code block executes successfully if the message "Boston Housing dataset loaded successfully!" is printed.
In [1]:
# Importing a few necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as pl
from sklearn import datasets
import seaborn
from sklearn.tree import DecisionTreeRegressor
# Make matplotlib show our plots inline (nicely formatted in the notebook)
%matplotlib inline
# Create our client's feature set for which we will be predicting a selling price
CLIENT_FEATURES = [[11.95, 0.00, 18.100, 0, 0.6590, 5.6090, 90.00, 1.385, 24, 680.0, 20.20, 332.09, 12.13]]
# Load the Boston Housing dataset into the city_data variable
city_data = datasets.load_boston()
# Initialize the housing prices and housing features
housing_prices = city_data.target
housing_features = city_data.data
print("Boston Housing dataset loaded successfully!")
In this first section of the project, you will quickly investigate a few basic statistics about the dataset you are working with. In addition, you'll look at the client's feature set in CLIENT_FEATURES and see how this particular sample relates to the features of the dataset. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand your results.
In the code block below, use the imported numpy library to calculate the requested statistics. You will need to replace each None you find with the appropriate numpy coding for the proper statistic to be printed. Be sure to execute the code block each time to test if your implementation is working successfully. The print statements will show the statistics you calculate!
In [2]:
# X -> features; y -> target (house value)
X = housing_features
y = housing_prices
# Number of houses in the dataset
total_houses = X.shape[0]
# Number of features in the dataset
total_features = X.shape[1]
# Minimum housing value in the dataset
minimum_price = np.min(y)
# Maximum housing value in the dataset
maximum_price = np.max(y)
# Mean house value of the dataset
mean_price = np.mean(y)
# Median house value of the dataset
median_price = np.median(y)
# Standard deviation of housing values of the dataset
std_dev = np.std(y, ddof=1)
# Show the calculated statistics
print "Boston Housing dataset statistics (in $1000's):\n"
print "Total number of houses:", total_houses
print "Total number of features:", total_features
print "Minimum house price:", minimum_price
print "Maximum house price:", maximum_price
print "Mean house price: {0:.3f}".format(mean_price)
print "Median house price:", median_price
print "Standard deviation of house price: {0:.3f}".format(std_dev)
As a reminder, you can view a description of the Boston Housing dataset here, where you can find the different features under Attribute Information. The MEDV attribute relates to the values stored in our housing_prices variable, so we do not consider that a feature of the data.
Of the features available for each data point, choose three that you feel are significant and give a brief description for each of what they measure.
Remember, you can double click the text box below to add your answer!
Answer:
After visualizing the housing dataset with seaborn.pairplot, I believe "DIS" (weighted distances to the five Boston employment centres), "RM" (average number of rooms per dwelling) and "LSTAT" (percentage of the population with lower socioeconomic status) would be among the most significant features.
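For reference, a minimal sketch of the kind of pairplot used to eyeball those relationships (the column subset is illustrative; it assumes the seaborn import and city_data variable from the cell above):
# Combine the features and target in one DataFrame so seaborn can plot them together
housing_df = pd.DataFrame(housing_features, columns=city_data.feature_names)
housing_df['MEDV'] = housing_prices
# Pairwise scatter plots of a few candidate features against the target
seaborn.pairplot(housing_df, vars=['DIS', 'RM', 'LSTAT', 'MEDV'])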
In [3]:
print CLIENT_FEATURES
pd.set_option('precision', 3)
client_df = pd.DataFrame(CLIENT_FEATURES, columns=city_data.feature_names)
client_df
Out[3]:
Answer:
For the CLIENT_FEATURES sample, the values of the chosen features are DIS = 1.385, RM = 5.609 and LSTAT = 12.13.
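To see where these values sit relative to the rest of the dataset, here is a small sketch comparing them against the dataset's summary statistics (assuming the client_df and city_data variables defined above):
# Summary statistics of the chosen features across all homes in the dataset
housing_df = pd.DataFrame(housing_features, columns=city_data.feature_names)
print(housing_df[['DIS', 'RM', 'LSTAT']].describe())
# The client's values for the same features
print(client_df[['DIS', 'RM', 'LSTAT']])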
In the code block below, you will need to implement code so that the shuffle_split_data function does the following:
Randomly shuffle the input data X and target labels (housing values) y.
Split the data into training and testing subsets, with 70% of the data used for training and 30% for testing.
If you use any functions not already accessible from the imported libraries above, remember to include your import statement below as well!
Ensure that you have executed the code block once you are done. You'll know the shuffle_split_data function is working if the statement "Successfully shuffled and split the data!" is printed.
In [4]:
# Put any import statements you need for this code block here
def shuffle_split_data(X, y):
""" Shuffles and splits data into 70% training and 30% testing subsets,
then returns the training and testing subsets. """
# create dataframe on X
X_df = pd.DataFrame(X)
# Shuffle the data
np.random.seed(123)
indices_shuffled = np.random.permutation(X_df.index)
X_df = X_df.reindex(indices_shuffled)
y = y[indices_shuffled]
# Split the data
percent_train = 0.7
index_split = np.floor(X_df.shape[0] * percent_train).astype(int)
X_train = X_df.values[:index_split, :]
y_train = y[:index_split]
X_test = X_df.values[index_split:, :]
y_test = y[index_split:]
# Return the training and testing data subsets
return X_train, y_train, X_test, y_test
# Test shuffle_split_data
try:
X_train, y_train, X_test, y_test = shuffle_split_data(housing_features, housing_prices)
print "Successfully shuffled and split the data!"
except:
print "Something went wrong with shuffling and splitting the data."
Answer:
After splitting the data, the training set can be used for fitting while the testing set can be used independently to verify model performance with suitable metrics.
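A quick sanity check of the split produced above (a sketch; with 506 samples, a 70/30 split should yield 354 training and 152 testing rows):
# Verify the 70/30 split proportions returned by shuffle_split_data
print("Training set: {} samples".format(X_train.shape[0]))
print("Testing set: {} samples".format(X_test.shape[0]))
print("Training fraction: {:.2f}".format(float(X_train.shape[0]) / (X_train.shape[0] + X_test.shape[0])))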
In the code block below, you will need to implement code so that the performance_metric function does the following:
Perform a total error calculation between the true values of the y labels y_true and the predicted values of the y labels y_predict.
You will need to first choose an appropriate performance metric for this problem. See the sklearn metrics documentation to view a list of available metric functions. Hint: Look at the question below to see a list of the metrics that were covered in the supporting course for this project.
Once you have determined which metric you will use, remember to include the necessary import statement as well!
Ensure that you have executed the code block once you are done. You'll know the performance_metric function is working if the statement "Successfully performed a metric calculation!" is printed.
In [5]:
# Put any import statements you need for this code block here
from sklearn.metrics import mean_squared_error
def performance_metric(y_true, y_predict):
""" Calculates and returns the total error between true and predicted values
based on a performance metric chosen by the student. """
error = mean_squared_error(y_true=y_true, y_pred=y_predict)
return error
# Test performance_metric
try:
total_error = performance_metric(y_train, y_train)
print "Successfully performed a metric calculation!"
except:
print "Something went wrong with performing a metric calculation."
Answer:
Since predicting the housing price is a regression problem, the relevant metrics are MSE and MAE. Both are valid measures of total error. In this case, MSE is preferable because samples further away from the model's predictions receive quadratic weight, and a regressor minimizing MSE leads to a unique solution, as opposed to the many possible solutions with MAE. Incidentally, sklearn's DecisionTreeRegressor used in subsequent problems also uses MSE as its split criterion.
Accuracy, precision, recall and F1 score would be useful when evaluating models performing classification tasks.
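A small worked example of the difference (a sketch using sklearn's metric functions): a single large residual dominates MSE because errors are squared, while MAE weights all residuals linearly.
from sklearn.metrics import mean_squared_error, mean_absolute_error
y_true = [20.0, 20.0, 20.0, 20.0]
y_pred_small = [19.0, 21.0, 19.0, 21.0]   # four errors of size 1
y_pred_large = [20.0, 20.0, 20.0, 16.0]   # one error of size 4
# MAE is identical for both predictions (1.0), but MSE jumps from 1.0 to 4.0 for the large residual
print("MAE small / large: {:.2f} / {:.2f}".format(
    mean_absolute_error(y_true, y_pred_small), mean_absolute_error(y_true, y_pred_large)))
print("MSE small / large: {:.2f} / {:.2f}".format(
    mean_squared_error(y_true, y_pred_small), mean_squared_error(y_true, y_pred_large)))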
In the code block below, you will need to implement code so that the fit_model function does the following:
Create a scoring function using the performance metric from Step 2. See the sklearn make_scorer documentation.
Build a GridSearchCV object using regressor, parameters, and scoring_function. See the sklearn documentation on GridSearchCV.
When building the scoring function and GridSearchCV object, be sure that you read the parameters documentation thoroughly. It is not always the case that a default parameter for a function is the appropriate setting for the problem you are working on.
Since you are using sklearn functions, remember to include the necessary import statements below as well!
Ensure that you have executed the code block once you are done. You'll know the fit_model function is working if the statement "Successfully fit a model to the data!" is printed.
In [6]:
# Put any import statements you need for this code block
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
def fit_model(X, y):
""" Tunes a decision tree regressor model using GridSearchCV on the input data X
and target labels y and returns this optimal model. """
# Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# Set up the parameters we wish to tune
parameters = {'max_depth': (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15),
'min_samples_split': (2, 3, 4, 5, 6, 7, 8, 9, 10),
'max_features': ['auto', 'sqrt', 'log2']}
# Make an appropriate scoring function
scoring_function = make_scorer(score_func=performance_metric, greater_is_better=False)
# Make the GridSearchCV object
reg = GridSearchCV(estimator=regressor,
param_grid=parameters,
scoring=scoring_function)
# Fit the learner to the data to obtain the optimal model with tuned parameters
reg.fit(X, y)
# Return the optimal model
return reg.best_estimator_
# Test fit_model on entire dataset
try:
reg = fit_model(housing_features, housing_prices)
print "Successfully fit a model!"
except:
print "Something went wrong with fitting a model."
Answer:
Grid search systematically works through every combination of the specified hyperparameter values and selects the combination that achieves the best score according to the chosen evaluation metric. It is applicable whenever a model has hyperparameters that must be supplied by the user rather than learned from the data, such as the max_depth parameter of a decision tree regressor.
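For a sense of scale, a small sketch of how many candidate models the grid defined in fit_model above enumerates (the counts are read off the parameters dictionary; the 3-fold figure assumes GridSearchCV's default cross-validation in this sklearn version):
# Number of hyperparameter combinations enumerated by the grid above
n_max_depth = 15          # max_depth values 1..15
n_min_samples_split = 9   # min_samples_split values 2..10
n_max_features = 3        # 'auto', 'sqrt', 'log2'
n_combinations = n_max_depth * n_min_samples_split * n_max_features
print("Grid search evaluates {} parameter combinations".format(n_combinations))
print("With 3-fold cross-validation, that is {} model fits".format(n_combinations * 3))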
Answer:
Cross-validation is a strategy to assess how well performance metrics such as MSE or accuracy will generalize to an unseen testing set. A popular choice is k-fold cross-validation, in which the data is split into k equally sized folds; each fold is held out once as a validation set while the model is trained on the remaining k-1 folds, and the k validation scores are averaged.
During grid search, there is a chance that the chosen hyperparameters minimize the error on the training set but fail to generalize to unseen data. Cross-validation makes the reported performance metrics more representative of performance on unseen data, and thus helps select hyperparameters that reduce overfitting.
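To make the procedure concrete, here is a minimal hand-rolled sketch of k-fold cross-validation (assuming k = 5, the X_train/y_train arrays and the performance_metric function defined above; the depth-5 tree is just an illustrative choice):
# Manual 5-fold cross-validation of a depth-5 decision tree on the training data
k = 5
fold_indices = np.array_split(np.arange(len(X_train)), k)
fold_errors = []
for i in range(k):
    valid_idx = fold_indices[i]
    train_idx = np.hstack([fold_indices[j] for j in range(k) if j != i])
    model = DecisionTreeRegressor(max_depth=5)
    model.fit(X_train[train_idx], y_train[train_idx])
    fold_errors.append(performance_metric(y_train[valid_idx], model.predict(X_train[valid_idx])))
print("Mean cross-validation MSE over {} folds: {:.3f}".format(k, np.mean(fold_errors)))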
You have now successfully completed your last code implementation section. Pat yourself on the back! All of your functions written above will be executed in the remaining sections below, and questions will be asked about various results for you to analyze. To prepare the Analysis and Prediction sections, you will need to initialize the two functions below. Remember, there's no need to implement any more code, so sit back and execute the code blocks! Some code comments are provided if you find yourself interested in the functionality.
In [7]:
def learning_curves(X_train, y_train, X_test, y_test):
""" Calculates the performance of several models with varying sizes of training data.
The learning and testing error rates for each model are then plotted. """
print "Creating learning curve graphs for max_depths of 1, 3, 6, and 10. . ."
# Create the figure window
fig = pl.figure(figsize=(10,8))
# We will vary the training set size so that we have 50 different sizes
sizes = np.rint(np.linspace(1, len(X_train), 50)).astype(int)
train_err = np.zeros(len(sizes))
test_err = np.zeros(len(sizes))
# Create four different models based on max_depth
for k, depth in enumerate([1,3,6,10]):
for i, s in enumerate(sizes):
# Setup a decision tree regressor so that it learns a tree with max_depth = depth
regressor = DecisionTreeRegressor(max_depth = depth)
# Fit the learner to the training data
regressor.fit(X_train[:s], y_train[:s])
# Find the performance on the training set
train_err[i] = performance_metric(y_train[:s], regressor.predict(X_train[:s]))
# Find the performance on the testing set
test_err[i] = performance_metric(y_test, regressor.predict(X_test))
# Subplot the learning curve graph
ax = fig.add_subplot(2, 2, k+1)
ax.plot(sizes, test_err, lw = 2, label = 'Testing Error')
ax.plot(sizes, train_err, lw = 2, label = 'Training Error')
ax.legend()
ax.set_title('max_depth = %s'%(depth))
ax.set_xlabel('Number of Data Points in Training Set')
ax.set_ylabel('Total Error')
ax.set_xlim([0, len(X_train)])
# Visual aesthetics
fig.suptitle('Decision Tree Regressor Learning Performances', fontsize=18, y=1.03)
fig.tight_layout()
# fig.show()
In [8]:
def model_complexity(X_train, y_train, X_test, y_test):
""" Calculates the performance of the model as model complexity increases.
The learning and testing errors rates are then plotted. """
print "Creating a model complexity graph. . . "
# We will vary the max_depth of a decision tree model from 1 to 14
max_depth = np.arange(1, 14)
train_err = np.zeros(len(max_depth))
test_err = np.zeros(len(max_depth))
for i, d in enumerate(max_depth):
# Setup a Decision Tree Regressor so that it learns a tree with depth d
regressor = DecisionTreeRegressor(max_depth = d)
# Fit the learner to the training data
regressor.fit(X_train, y_train)
# Find the performance on the training set
train_err[i] = performance_metric(y_train, regressor.predict(X_train))
# Find the performance on the testing set
test_err[i] = performance_metric(y_test, regressor.predict(X_test))
# Plot the model complexity graph
pl.figure(figsize=(7, 5))
pl.title('Decision Tree Regressor Complexity Performance')
pl.plot(max_depth, test_err, lw=2, label = 'Testing Error')
pl.plot(max_depth, train_err, lw=2, label = 'Training Error')
pl.legend()
pl.xlabel('Maximum Depth')
pl.ylabel('Total Error')
#pl.show()
In this third section of the project, you'll take a look at several models' learning and testing error rates on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing max_depth parameter on the full training set to observe how model complexity affects learning and testing errors. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
In [9]:
learning_curves(X_train, y_train, X_test, y_test)
Answer:
For the upper-left graph, max_depth is 1. As the training set grows, the training error rises from 0, fluctuates a bit in the beginning, but eventually settles to a relatively stable value. For the testing set, the error starts out high and settles to a relatively stable value as the number of data points in the training set increases.
Answer:
When max_depth = 1, the model suffers from high bias. The error levels for both the training and testing sets are high and similar, indicating underfitting.
When max_depth = 10, the model suffers from high variance: the training error is low while the testing error remains high, and this discrepancy indicates overfitting.
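To put rough numbers behind the visual impression, a quick sketch comparing training and testing errors at the two depths (exact values depend on the shuffle performed earlier):
# Compare training vs testing error for an underfit (depth 1) and an overfit (depth 10) tree
for depth in (1, 10):
    model = DecisionTreeRegressor(max_depth=depth)
    model.fit(X_train, y_train)
    train_error = performance_metric(y_train, model.predict(X_train))
    test_error = performance_metric(y_test, model.predict(X_test))
    print("max_depth={}: training error {:.2f}, testing error {:.2f}".format(depth, train_error, test_error))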
In [10]:
model_complexity(X_train, y_train, X_test, y_test)
Answer:
The training error decreases monotonically as the max depth increases. The testing error first decreases as the max depth increases and then for max depth > 5, it starts to go up and fluctuate.
A model that generalizes best would have the smallest total error at the least complexity. From visual inspection of the model complexity graph, max_depth = 5 results in the model that best generalizes the dataset, even though the resulting model still suffers from relatively high bias.
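A quick programmatic check of that visual reading (a sketch; the selected depth can vary from run to run because these trees are not seeded):
# Find the max_depth with the lowest testing error over depths 1..13
depths = np.arange(1, 14)
test_errors = []
for d in depths:
    model = DecisionTreeRegressor(max_depth=d)
    model.fit(X_train, y_train)
    test_errors.append(performance_metric(y_test, model.predict(X_test)))
print("Depth with lowest testing error: {}".format(depths[np.argmin(test_errors)]))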
In this final section of the project, you will make a prediction on the client's feature set using an optimized model from fit_model. When applying grid search along with cross-validation to optimize your model, it would typically be performed and validated on a training set and subsequently evaluated on a dedicated test set. In this project, the optimization below is performed on the entire dataset (as opposed to the training set you made above) due to the many outliers in the data. Using the entire dataset for training provides for a less volatile prediction at the expense of not testing your model's performance.
To answer the following questions, it is recommended that you run the code blocks several times and use the median or mean value of the results.
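One way to collect results over repeated runs and take the median, as suggested above (a sketch; note that fit_model seeds its regressor with random_state=0, so repeated runs of the code as written should return identical results, and variation only appears if that seed is removed):
# Fit the model several times and summarize the predicted price and chosen depth
predicted_prices = []
chosen_depths = []
for _ in range(5):
    model = fit_model(housing_features, housing_prices)
    predicted_prices.append(model.predict(CLIENT_FEATURES)[0])
    chosen_depths.append(model.get_params()['max_depth'])
print("Median predicted price: {:.3f}".format(np.median(predicted_prices)))
print("Chosen max_depth values: {}".format(chosen_depths))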
In [11]:
print "Final model has an optimal max_depth parameter of", reg.get_params()['max_depth']
reg.get_params()
Out[11]:
Answer:
The optimal max_depth for my model is 6, which is close to the expected value of max_depth = 5. The difference may be caused by GridSearchCV selecting max_features = 'sqrt', which restricts the number of features considered at each split.
In [12]:
sale_price = reg.predict(CLIENT_FEATURES)
print "Predicted value of client's home: {0:.3f}".format(sale_price[0])
print("Compared to mean: %.3f\nCompared to median: %.3f\nStd of boston housing price data: %.3f"
% ((sale_price[0]-mean_price)/mean_price,
(sale_price[0]-median_price)/median_price,
std_dev))
Answer:
My model predicts 20.51 thousand dollars as the best selling price for the client's home, which is around 9% less than the mean price and 3.2% less than the median price, and falls within one standard deviation of the mean price.
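As an additional sanity check, a sketch that looks up the homes in the dataset most similar to the client's and compares their actual prices to the prediction (NearestNeighbors from sklearn is not used elsewhere in this notebook; the choice of 10 neighbors is illustrative):
from sklearn.neighbors import NearestNeighbors
# Find the 10 homes with feature vectors closest to the client's and average their prices
neighbors = NearestNeighbors(n_neighbors=10)
neighbors.fit(housing_features)
distances, indices = neighbors.kneighbors(CLIENT_FEATURES)
print("Mean price of the 10 most similar homes: {:.3f}".format(np.mean(housing_prices[indices])))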
Answer:
Models based on regressors other than decision trees should be tested and compared before a final choice is made. Furthermore, feature selection and extraction algorithms could be used to potentially improve model performance.
As a result, I would not use this model to predict the selling price of future clients' homes.