Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.

In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.

The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978, and each of the 506 entries represents aggregated data about 14 features for homes from various suburbs of Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:

- 16 data points have an `'MDEV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.
- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.
- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MDEV'` are essential. The remaining **non-relevant features** have been excluded.
- The feature `'MDEV'` has been **multiplicatively scaled** to account for 35 years of market inflation.

Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.

In [9]:

```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.cross_validation import ShuffleSplit

# Pretty display for notebooks
%matplotlib inline

# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MDEV']
features = data.drop('MDEV', axis = 1)

# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
```

In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
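For instance, a quick first look at the raw table might use standard `pandas` inspection methods on the `data` frame loaded above (a minimal sketch, not part of the required implementation):

```
# Preview the first few rows and per-column summary statistics
print data.head()
print data.describe()
```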

Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MDEV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively.

For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.

In the code cell below, you will need to implement the following:

- Calculate the minimum, maximum, mean, median, and standard deviation of `'MDEV'`, which is stored in `prices`.
- Store each calculation in their respective variable.

In [10]:

```
# TODO: Minimum price of the data
minimum_price = np.amin(prices)

# TODO: Maximum price of the data
maximum_price = np.amax(prices)

# TODO: Mean price of the data
mean_price = np.mean(prices)

# TODO: Median price of the data
median_price = np.median(prices)

# TODO: Standard deviation of prices of the data
std_price = np.std(prices)

# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price: ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
```

As a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):

- `'RM'` is the average number of rooms among homes in the neighborhood.
- `'LSTAT'` is the percentage of all Boston homeowners who have a greater net worth than homeowners in the neighborhood.
- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.

*Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MDEV' or a decrease in the value of 'MDEV'? Justify your answer for each.*

**Hint:** Would you expect a home that has an `'RM'` value of 6 to be worth more or less than a home that has an `'RM'` value of 7?

**Answer:**

- If `RM` increases, `MDEV` will also increase: houses with more rooms are bigger, which is why their price is higher.
- If `LSTAT` increases, that means homeowners in our particular neighborhood have lower net worth compared to the rest of Boston, so `MDEV` will decrease.
- A low `PTRATIO` means that schools have more money and can afford more teachers, so an increase in this feature will decrease the `MDEV` value.

In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.

It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the *coefficient of determination*, R², to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.

The values for R² range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an R² of 0 always fails to predict the target variable, whereas a model with an R² of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. *A model can be given a negative R² as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.*
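To make these boundary cases concrete, here is a minimal sketch (using `r2_score` from `sklearn.metrics` on made-up values) showing a perfect prediction, a mean-only prediction, and a worse-than-mean prediction:

```
from sklearn.metrics import r2_score

y_true = [3.0, -0.5, 2.0, 7.0]

# A perfect prediction gives R^2 = 1
print r2_score(y_true, [3.0, -0.5, 2.0, 7.0])    # 1.0

# Always predicting the mean of 'y_true' gives R^2 = 0
mean = sum(y_true) / len(y_true)
print r2_score(y_true, [mean] * len(y_true))     # 0.0

# Predictions worse than the mean give a negative R^2
print r2_score(y_true, [8.0, 8.0, 8.0, 8.0])     # about -3.6
```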

For the `performance_metric` function in the code cell below, you will need to implement the following:

- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.
- Assign the performance score to the `score` variable.

In [11]:

```
from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
        true and predicted values based on the metric chosen. """

    # Calculate the performance score between 'y_true' and 'y_predict'
    score = r2_score(y_true, y_predict)

    # Return the score
    return score
```

Assume that a dataset contains five data points and a model made the following predictions for the target variable:

| True Value | Prediction |
|---|---|
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |

*Would you consider this model to have successfully captured the variation of the target variable? Why or why not?*

Run the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination.

In [12]:

```
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
```

**Answer:** I consider this model to be successful: with a coefficient of determination of 0.923, over 92% of the variance in the target variable is explained by the model's predictions.

Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.

For the code cell below, you will need to implement the following:

- Use `train_test_split` from `sklearn.cross_validation` to shuffle and split the `features` and `prices` data into training and testing sets.
  - Split the data into 80% training and 20% testing.
  - Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.

In [17]:

```
# Import 'train_test_split'
from sklearn import cross_validation

# Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    features, prices, test_size = 0.2, random_state = 0)

# Success
print "Training and testing split was successful."
```

**Answer:** First of all, it gives us an estimate of performance on an independent dataset; secondly, it serves as a check against overfitting. Without a testing set, we can't be confident in our results at all.
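As an illustration of the overfitting check this split enables, here is a minimal sketch using an unconstrained decision tree (introduced formally later in this project; exact scores depend on the split):

```
from sklearn.tree import DecisionTreeRegressor

# An unconstrained tree can memorize the training split...
deep_tree = DecisionTreeRegressor(random_state = 0).fit(X_train, y_train)
print "Training R^2: {:.3f}".format(performance_metric(y_train, deep_tree.predict(X_train)))

# ...while the held-out testing split exposes the overfit
print "Testing R^2:  {:.3f}".format(performance_metric(y_test, deep_tree.predict(X_test)))
```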

In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.

The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R², the coefficient of determination.

Run the code cell below and use these graphs to answer the following question.

In [18]:

```
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
```
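For reference, `vs.ModelLearning` is provided in the supplementary `visuals.py`. A minimal sketch of how comparable numbers could be computed directly, assuming the old-style `sklearn.learning_curve` module available in this environment (the depth and fold count below are arbitrary choices), might look like this:

```
from sklearn.learning_curve import learning_curve
from sklearn.tree import DecisionTreeRegressor
import numpy as np

# R^2 over increasing training-set sizes for a depth-3 tree
sizes, train_scores, test_scores = learning_curve(
    DecisionTreeRegressor(max_depth = 3), features, prices,
    cv = 10, scoring = 'r2', train_sizes = np.linspace(0.1, 1.0, 9))

# The shaded regions in the plots correspond to one standard deviation
print "Mean testing scores:", np.mean(test_scores, axis = 1)
print "Std of testing scores:", np.std(test_scores, axis = 1)
```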

*Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?*

**Hint:** Are the learning curves converging to particular scores?

**Answer:**
I chose the second graph, with a maximum depth of 3. As more data is added, both curves converge to a score of ~0.8. I don't think more training data would improve the model dramatically, so it isn't needed.

In each of the graphs, the testing score grows rapidly and then stays stable after only ~50 data points are added. The training score, on the other hand, decreases as more data is added, but also becomes stable after only ~50 data points, except in the graphs with a max_depth over 6 (where the model is overfitted).

The first graph, with a depth of 1, shows high bias: both the training and testing curves converge to a very low score. Adding more data points doesn't improve the model because it is too simple.

The second graph scores considerably better than the others. Both curves converge to a score of around ~0.8 and do not improve with more data points after ~150. This is the graph I've chosen as the best among the results.

The third graph shows an overfitting problem. The variance is considerably high, and the overall prediction score plateaus at around ~0.7.

The last graph has a high training score but a testing score of only around ~0.6 after ~50 data points; the model is also overfitted and can't be used reliably.

The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function.

Run the code cell below and use this graph to answer the following two questions.

In [19]:

```
vs.ModelComplexity(X_train, y_train)
```
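As with the learning curves, `vs.ModelComplexity` lives in the supplementary `visuals.py`. A minimal sketch of how comparable complexity curves could be computed, assuming the same old-style `sklearn.learning_curve` module:

```
from sklearn.learning_curve import validation_curve
from sklearn.tree import DecisionTreeRegressor
import numpy as np

# Score a decision tree at every 'max_depth' from 1 to 10
train_scores, valid_scores = validation_curve(
    DecisionTreeRegressor(), X_train, y_train,
    param_name = 'max_depth', param_range = range(1, 11),
    cv = 10, scoring = 'r2')

# A widening gap between the two mean curves signals growing variance
print "Mean training scores:  ", np.mean(train_scores, axis = 1)
print "Mean validation scores:", np.mean(valid_scores, axis = 1)
```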

*When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?*

**Hint:** How do you know when a model is suffering from high bias or high variance?

**Answer:**
With a max depth of 1, the model suffers from high bias: the model is not complex enough, which leads to low accuracy.
With a max depth of 10, the model suffers from high variance: the model has learned too many details of the training set and overfits them when applied to the whole dataset.
When we have a bias problem, the training and validation curves are drawn close together, but at a very low accuracy score. High variance, on the other hand, looks like the curves at a max depth of 10: a high score on the training data and a low score on the validation data due to overfitting.

**Answer:**
A model with a max_depth of 3 is the best among these, because it scores consistently on both the training and validation data while also achieving the highest validation score (R² of around 0.8). The other models, especially the first and the last, are much less useful, as they can't predict the data correctly at all (the first has huge bias, the last has huge variance).

**Answer:**
Grid search is a way of systematically working through multiple combinations of parameter values, cross-validating as it goes to determine which combination gives the best performance.

**Answer:**
The k-fold cross-validation technique divides the data into k equal-sized bins (folds) to estimate the performance of a model: each fold serves as the validation set exactly once while the model trains on the remaining folds, and the k scores are averaged. K-fold helps grid search evaluate the performance of a family of models, and it effectively lets us use all our data for both training and validation.

The biggest advantage of k-fold cross-validation is that every data point is used for both training and validation, which maximizes both the learning result and the quality of the validation. If we didn't use cross-validation while running grid search, we could get misleading results, since a single split gives a much noisier estimate of each model's accuracy than the average over the k folds. K-fold CV is computationally expensive, but it is great when the number of samples is low.
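To illustrate the mechanics, here is a minimal sketch using `KFold` from the same `sklearn.cross_validation` module imported earlier (the sample and fold counts are arbitrary):

```
from sklearn.cross_validation import KFold

# Split 10 samples into k = 5 folds; each fold is held out for
# validation exactly once while the remaining folds train the model
for train_idx, valid_idx in KFold(10, n_folds = 5, shuffle = True, random_state = 0):
    print "Train on {}, validate on {}".format(train_idx, valid_idx)
```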

Your final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.

For the `fit_model` function in the code cell below, you will need to implement the following:

- Use `DecisionTreeRegressor` from `sklearn.tree` to create a decision tree regressor object.
  - Assign this object to the `'regressor'` variable.
- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.
- Use `make_scorer` from `sklearn.metrics` to create a scoring function object.
  - Pass the `performance_metric` function as a parameter to the object.
  - Assign this scoring function to the `'scoring_fnc'` variable.
- Use `GridSearchCV` from `sklearn.grid_search` to create a grid search object.
  - Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object.
  - Assign the `GridSearchCV` object to the `'grid'` variable.

In [46]:

```
# Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeRegressor
from sklearn import grid_search

def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
        decision tree regressor trained on the input data [X, y]. """

    # Create cross-validation sets from the training data
    cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)

    # Create a decision tree regressor object
    regressor = DecisionTreeRegressor()

    # Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}

    # Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)

    # Create the grid search object
    grid = grid_search.GridSearchCV(regressor, params, cv = cv_sets, scoring = scoring_fnc)

    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)

    # Return the optimal model after fitting the data
    return grid.best_estimator_
```

Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.

In [59]:

```
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)

# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
```

**Answer:** The maximum depth for the optimal model is 4. Using the graphs above, I had chosen the closest value, which was 3.

Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:

| Feature | Client 1 | Client 2 | Client 3 |
|---|---|---|---|
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Household net worth (income) | Top 34th percent | Bottom 45th percent | Top 7th percent |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |

*What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?*

**Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response.

Run the code block below to have your optimized model make predictions for each client's home.

In [60]:

```
# Produce a matrix for client data
client_data = [[5, 34, 15], # Client 1
               [4, 55, 22], # Client 2
               [8, 7, 12]]  # Client 3

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
```

**Answer:** I think these prices are reasonable.

- Client 1: a small house in a good neighbourhood. This price is perhaps a little lower than I would have expected, but not by a big margin.
- Client 2: this price seems completely reasonable to me: a small house, and not the best schools in the area.
- Client 3: very close to the top price in the dataset. I think this price is absolutely correct.
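One way to sanity-check these predictions against the Data Exploration statistics, as the hint suggests (a small sketch reusing the `mean_price` and `std_price` variables computed earlier):

```
# Express each predicted price as an offset from the dataset mean,
# in units of standard deviation
for i, price in enumerate(reg.predict(client_data)):
    offset = (price - mean_price) / std_price
    print "Client {}: {:+.2f} standard deviations from the mean price".format(i + 1, offset)
```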

An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.

In [52]:

```
vs.PredictTrials(features, prices, fit_model, client_data)
```
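For context, the sketch below shows roughly what such a trial loop could look like; this is an assumption about the supplementary code's behavior, not the actual implementation of `vs.PredictTrials`:

```
from sklearn.cross_validation import train_test_split

# Refit the model on ten different random splits and track how the
# prediction for Client 1's home moves around
trial_prices = []
for k in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, prices, test_size = 0.2, random_state = k)
    reg_k = fit_model(X_tr, y_tr)
    trial_prices.append(reg_k.predict([client_data[0]])[0])

print "Range in predictions: ${:,.2f}".format(max(trial_prices) - min(trial_prices))
```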

*In a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.*

**Hint:** Some questions to answer:

- *How relevant today is data that was collected from 1978?*
- *Are the features present in the data sufficient to describe a home?*
- *Is the model robust enough to make consistent predictions?*
- *Would data collected in an urban city like Boston be applicable in a rural city?*

**Answer:**
Considering the fact that we don't have real data, only adjusted data, I think this model can't be used to predict the real values of houses in Boston. There are many more features that would need to be taken into account to predict the price of a house than the three we used in the prediction model.
The data is absolutely irrelevant today. The model is not robust enough either, because the spread in its predictions is very high (about 10% of the maximum price, which could be a real deal-breaker for many potential sellers or buyers).
And of course, data from an urban city can't be used to predict prices in a rural city.