In [1]:
import numpy as np
import pandas as pd
%matplotlib inline
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
We'll use one of the toy datasets that scikit-learn has available, since the focus of this example is to show how to use the validation tools, not how to deal with a "raw" dataset. We'll use the Boston House Prices dataset, which has the median value of owner-occupied homes as target and 13 attributes, ranging from the per capita crime rate to the pupil-teacher ratio. The first thing we will do is load the dataset and check its shape and the values of one of the samples.
In [2]:
boston = datasets.load_boston()
boston.data.shape, boston.target.shape
Out[2]:
((506, 13), (506,))
In [3]:
print(boston.data[0])
print(boston.target[0])
As we can see, we have 506 samples, with a part named data, which holds the 13 attributes, and a part named target, which holds the target price for each of those sets of 13 attributes.
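For reference, the names of the 13 attributes are stored in the dataset object as well, so we can list them to get a quick feel for what the models will work with:

# The attribute names corresponding to the 13 columns of boston.data
print(boston.feature_names)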
We will now proceed to train some regression models in order to predict the price of a house given the 13 attributes, and introduce methods of validating the results in order to obtain a model that generalizes to unseen data.
The logic of simple cross-validation is to train several models (both different algorithms and different parameter settings) and in the end choose the one that yields the best score on a held-out test set.
We will start to tackle this problem by splitting the dataset into four different arrays. X_train and y_train hold the attributes and target prices for the subset of samples used to train the model, while X_test and y_test hold the attributes and target prices for the remaining samples, used to verify the performance of the model. This can be done using the train_test_split function in scikit-learn. In this example we will use 30% of the dataset as the test set.
In [4]:
X_train, X_test, y_train, y_test = train_test_split(
boston.data, boston.target, test_size=0.3, random_state=0)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
Next we will train models to solve this problem. There are several models that could be used, and in the beginning it is hard to have a feel for which ones suit each task. A good way to start is by following the scikit-learn algorithm cheat-sheet. From what we know about our dataset (predicting a quantity, fewer than 100k samples, and 13 features that intuitively seem relevant), it's a good idea to start with a Support Vector Regressor.
Once again, since the focus of this notebook is on the validation part, we will train this model in a very straightforward way, without the attention to parameter tuning it would normally deserve, and evaluate it only on a single test set, to highlight later the importance of using proper cross-validation.
In [5]:
clf = SVR(kernel='linear')
clf.fit(X=X_train, y=y_train)
Out[5]:
In [6]:
print("SVRegressor score is: {}".format(clf.score(X_test, y_test)))
So we obtained a score of roughly 0.62 with this model; note that for regressors, score returns the coefficient of determination (R²), not a fraction of correct predictions. It is not an awful result, but we can do better, so let's try an Ensemble Regressor, as the cheat-sheet suggests. In this case, we're going to use a Gradient Boosting Regressor.
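As a quick sanity check that score really is R², we can compute it explicitly on the test predictions before moving on:

from sklearn.metrics import r2_score
# clf.score(X_test, y_test) for a regressor is R^2 on the test set;
# computing it directly from the predictions gives the same value.
print(r2_score(y_test, clf.predict(X_test)))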
In [7]:
clg = GradientBoostingRegressor(random_state=0)
clg.fit(X=X_train, y=y_train)
Out[7]:
In [8]:
print("Gradient Boosting Regressor score is: {}".format(clg.score(X_test, y_test)))
The base model was able to achieve a score of roughly 0.85, which we can consider a good result. Let's now modify some parameters to see whether the results improve.
In [9]:
clg = GradientBoostingRegressor(max_depth=2, random_state=0)
clg.fit(X=X_train, y=y_train)
print("Gradient Boosting Regressor score is: {}".format(clg.score(X_test, y_test)))
In [10]:
clg = GradientBoostingRegressor(learning_rate=0.2, random_state=0)
clg.fit(X=X_train, y=y_train)
print("Gradient Boosting Regressor score is: {}".format(clg.score(X_test, y_test)))
So in the end we trained four models and validated them on the same test set. Based on this simple cross-validation method, the model we would choose is the Gradient Boosting Regressor with learning_rate set to 0.2, which obtained the highest test score.
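A compact way to run this comparison is to score every candidate in a single loop; the sketch below just mirrors the cells above, reusing the train/test split we already created:

# Refit each candidate on the training split, score it on the test
# split, then sort by test score to pick the winner.
candidates = {
    'SVR (linear)': SVR(kernel='linear'),
    'GBR (defaults)': GradientBoostingRegressor(random_state=0),
    'GBR (max_depth=2)': GradientBoostingRegressor(max_depth=2, random_state=0),
    'GBR (learning_rate=0.2)': GradientBoostingRegressor(learning_rate=0.2, random_state=0),
}
results = {name: model.fit(X_train, y_train).score(X_test, y_test)
           for name, model in candidates.items()}
for name, score in sorted(results.items(), key=lambda kv: -kv[1]):
    print("{}: {:.3f}".format(name, score))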
Notice that to validate our models we had to set aside 30% of the available data. In a world in which we can never have enough data to train on, this kind of method can be costly.
Using a technique called k-fold cross-validation, we can reach the same kind of conclusion without permanently sacrificing samples from the training split. The procedure is to split the dataset into k folds, and iteratively use k-1 folds to train a model and the remaining fold to validate it. The average of those scores is then the performance measure of the model we have trained. In scikit-learn this is done with the function cross_val_score. In this case we will use 5-fold validation on the same models we trained in the simple cross-validation section.
After this procedure we can average the scores and also attach a confidence interval to them. The model that yields the highest average score will be the chosen one.
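To make the mechanics concrete, here is a rough sketch of what cross_val_score does for a regressor with cv=5: a plain KFold loop in which each fold takes one turn as the validation set (reusing boston and the imports from above):

from sklearn.model_selection import KFold

# Manual 5-fold loop: train on 4 folds, score on the held-out 5th,
# and repeat so that every fold is used for validation exactly once.
kf = KFold(n_splits=5)
fold_scores = []
for train_idx, test_idx in kf.split(boston.data):
    model = GradientBoostingRegressor(random_state=0)
    model.fit(boston.data[train_idx], boston.target[train_idx])
    fold_scores.append(model.score(boston.data[test_idx], boston.target[test_idx]))
print(np.mean(fold_scores))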
In [11]:
from sklearn.model_selection import cross_val_score
In [12]:
clf = SVR(kernel='linear')
scores = cross_val_score(clf, boston.data, boston.target, cv=5)
We can then print the scores obtained, as well as their mean and an approximate 95% confidence interval around it (two standard deviations on either side of the mean).
In [13]:
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
In [14]:
clg = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(clg, boston.data, boston.target, cv=5)
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
In [15]:
clg = GradientBoostingRegressor(max_depth=2, random_state=0)
scores = cross_val_score(clg, boston.data, boston.target, cv=5)
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
In [16]:
clg = GradientBoostingRegressor(learning_rate=0.2, random_state=0)
scores = cross_val_score(clg, boston.data, boston.target, cv=5)
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
As we can see, the picture is now quite different. Looking at the individual scores, it is easy to see that the particular test set chosen can strongly influence the final score. A method such as k-fold cross-validation is much more robust to this kind of variability.
Comparing all the results and checking which model did best: according to k-fold cross-validation, the Gradient Boosting Regressor with max_depth set to 2 is the best model (highest score with lowest uncertainty) among the ones we tested.
Another cross-validation strategy is to use random splits instead of fixed folds, using the ShuffleSplit class. When using random splits (in which the test samples are drawn at random from the dataset every time a split is created), it is not guaranteed that all the splits will be different from each other, although this can safely be assumed if the dataset is big enough; note also that, unlike k-fold, the random test sets may overlap.
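We can see that last point on a tiny toy example: drawing two random splits over 20 indices (hypothetical data, just to illustrate), the two test sets are allowed to share elements:

from sklearn.model_selection import ShuffleSplit
import numpy as np

# Two test sets drawn by ShuffleSplit may share indices, unlike the
# disjoint test folds produced by KFold.
ss = ShuffleSplit(n_splits=2, test_size=0.3, random_state=1)
test_sets = [set(test_idx) for _, test_idx in ss.split(np.arange(20))]
print(test_sets[0] & test_sets[1])  # any overlap shows up here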
Let's see how this influences the results on the models we have been trying.
In [17]:
from sklearn.model_selection import ShuffleSplit
In [18]:
clf = SVR(kernel='linear')
cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=1)
scores = cross_val_score(clf, boston.data, boston.target, cv=cv)
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
In [19]:
clg = GradientBoostingRegressor(random_state=0)
cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=1)
scores = cross_val_score(clg, boston.data, boston.target, cv=cv)
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
In [20]:
clg = GradientBoostingRegressor(max_depth=2, random_state=0)
cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=1)
scores = cross_val_score(clg, boston.data, boston.target, cv=cv)
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
In [21]:
clg = GradientBoostingRegressor(learning_rate=0.2, random_state=0)
cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=1)
scores = cross_val_score(clg, boston.data, boston.target, cv=cv)
print("Array of scores is :{}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
The scores obtained for the Gradient Boosting Regressor are much higher than with the fixed splits. This may mean that the data contains some fixed portions on which the model does not work well, and those portions kept lowering the scores we obtained before. Shuffling the data gives a more varied distribution of samples in each fold, letting each fold more closely resemble the dataset as a whole.
In this case the results are similar for the three Gradient Boosting variants tested, and any of them would be a good choice according to these values.
When training and validating a model it is important to make sure that the same preprocessing and data transformations are applied to both the training data and the test/validation data. In scikit-learn, a Pipeline is an easy way to take care of this step.
In [22]:
from sklearn import preprocessing
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, test_size=0.3, random_state=0)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_transformed = scaler.transform(X_train)
clg = GradientBoostingRegressor(random_state=0).fit(X_train_transformed, y_train)
X_test_transformed = scaler.transform(X_test)
print("Accuracy: {}".format(clg.score(X_test_transformed, y_test)))
Now, if we want to extend this to k-fold cross-validation, instead of applying the preprocessing "manually" for each split, we can use make_pipeline to extend the preprocessing to all the evaluations being done.
In [23]:
from sklearn.pipeline import make_pipeline
clg = make_pipeline(preprocessing.StandardScaler(), GradientBoostingRegressor(random_state=0))
scores = cross_val_score(clg, boston.data, boston.target, cv=5)
print("Array of scores is: {}\n".format(scores))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))