Measuring prediction performance

Here we will discuss how to use validation sets to get a better measure of performance for a classifier.

Using the K-neighbors classifier

Here we'll continue to look at the digits data, but we'll switch to the K-neighbors classifier. The K-neighbors classifier is an instance-based classifier: rather than learning an explicit model, it predicts the label of an unknown point from the labels of the K nearest points in feature space.
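
To make this concrete, here is a minimal sketch of what a 1-nearest-neighbor prediction does under the hood, using a tiny made-up data set (the arrays below are purely illustrative): compute the distance from the query point to every stored training point and return the label of the closest one.

In [ ]:
import numpy as np

# Toy training set: four 2-D points with known labels (made-up data)
X_toy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y_toy = np.array([0, 0, 1, 1])

# Query point whose label we want to predict
query = np.array([0.9, 0.2])

# 1-NN "by hand": Euclidean distance to every training point,
# then the label of the closest one
distances = np.sqrt(((X_toy - query) ** 2).sum(axis=1))
print(y_toy[distances.argmin()])  # closest point is [1, 0], so this prints 1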


In [ ]:
# Get the data
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target

In [ ]:
# Instantiate and train the classifier
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)

In [ ]:
# Check the results using metrics
from sklearn import metrics
y_pred = clf.predict(X)

In [ ]:
print(metrics.confusion_matrix(y, y_pred))

Apparently, we've found a perfect classifier! But this is misleading for the reasons we saw before: the classifier essentially "memorizes" all the samples it has already seen. To really test how well this algorithm does, we need to try some samples it hasn't yet seen.
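
As a single-number summary of the same effect, the accuracy of the classifier on the very data it was trained on should come out at (or essentially at) 1.0:

In [ ]:
# Accuracy on the training data itself -- an over-optimistic estimate
print(metrics.accuracy_score(y, y_pred))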

This problem can also occur with regression models. In the following we fit another high-variance model, a decision tree regressor, to the Boston Housing price dataset we introduced previously:


In [ ]:
%matplotlib inline
import matplotlib.pyplot as plt

In [ ]:
from sklearn.datasets import load_boston
from sklearn.tree import DecisionTreeRegressor

data = load_boston()
clf = DecisionTreeRegressor().fit(data.data, data.target)
predicted = clf.predict(data.data)
expected = data.target

plt.scatter(expected, predicted)
plt.plot([0, 50], [0, 50], '--k')
plt.axis('tight')
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)')

Here again the predictions look perfect, because the model was able to memorize the entire training set.
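
We can put a number on "perfect": the R^2 score of the predictions against the true prices on the training data should be 1.0 (or essentially so), since an unconstrained decision tree can fit its training set exactly:

In [ ]:
# R^2 of the tree's predictions on its own training data
print(metrics.r2_score(expected, predicted))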

A Better Approach: Using a validation set

Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that simply repeated the labels of the samples it had just seen would get a perfect score, yet it would fail to predict anything useful on yet-unseen data.

To avoid over-fitting, we have to define two different sets:

  • a training set X_train, y_train which is used for learning the parameters of a predictive model
  • a testing set X_test, y_test which is used for evaluating the fitted predictive model

In scikit-learn such a random split can be quickly computed with the train_test_split helper function. It can be used this way:


In [ ]:
from sklearn.model_selection import train_test_split
X = digits.data
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

print(X.shape, X_train.shape, X_test.shape)

Now we train on the training data, and test on the testing data:


In [ ]:
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
y_pred = clf.predict(X_test)

In [ ]:
print(metrics.confusion_matrix(y_test, y_pred))

In [ ]:
print(metrics.classification_report(y_test, y_pred))

The weighted-average f1-score is often used as a convenient summary of the overall performance of an algorithm. It appears in the averaged rows at the bottom of the classification report, and it can also be computed directly:


In [ ]:
metrics.f1_score(y_test, y_pred, average="weighted")
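
For reference, passing average=None returns the per-class f1-scores (one for each of the ten digits) that go into this average:

In [ ]:
# Per-class f1-scores, one entry per digit class
metrics.f1_score(y_test, y_pred, average=None)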

The over-fitting we saw previously can be quantified by computing the f1-score on the training data itself:


In [ ]:
metrics.f1_score(y_train, clf.predict(X_train), average="weighted")

Validation with a Regression Model

These validation metrics also work for regression models. Here we'll use a gradient-boosted regression tree, a meta-estimator that combines many DecisionTreeRegressor estimators like the one we showed above. We'll start by doing the train-test split as we did in the classification case:


In [ ]:
data = load_boston()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

print(X.shape, X_train.shape, X_test.shape)

Next we'll compute the training and testing error using the Decision Tree that we saw before:


In [ ]:
est = DecisionTreeRegressor().fit(X_train, y_train)

print "validation:", metrics.explained_variance_score(y_test, est.predict(X_test))
print "training:", metrics.explained_variance_score(y_train, est.predict(X_train))

This large spread between validation and training error is characteristic of a high variance model. Decision trees are not entirely useless, however: by combining many individual decision trees within ensemble estimators such as Gradient Boosted Trees or Random Forests, we can get much better performance:


In [ ]:
from sklearn.ensemble import GradientBoostingRegressor
est = GradientBoostingRegressor().fit(X_train, y_train)

print "validation:", metrics.explained_variance_score(y_test, est.predict(X_test))
print "training:", metrics.explained_variance_score(y_train, est.predict(X_train))

This model is still over-fitting the data, but not by as much as the single tree.
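
For comparison, a Random Forest (the other ensemble mentioned above) can be evaluated in exactly the same way; the exact scores will vary from run to run unless a random_state is set:

In [ ]:
# Same train/validation check with a random forest ensemble
from sklearn.ensemble import RandomForestRegressor
est = RandomForestRegressor().fit(X_train, y_train)

print("validation:", metrics.explained_variance_score(y_test, est.predict(X_test)))
print("training:", metrics.explained_variance_score(y_train, est.predict(X_train)))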

Exercise: Model Selection via Validation

In the previous notebook, we saw Gaussian Naive Bayes classification of the digits. Here we saw K-neighbors classification of the digits. We've also seen support vector machine classification of digits. Now that we have these validation tools in place, we can ask quantitatively which of the three estimators works best for the digits dataset.

Take a moment and determine the answers to these questions for the digits dataset:

  • With the default hyper-parameters for each estimator, which gives the best f1 score on the validation set? Recall that hyperparameters are the parameters set when you instantiate the classifier: for example, the n_neighbors in

        clf = KNeighborsClassifier(n_neighbors=1)
    
    

    To use the default values, simply leave them unspecified. (One possible skeleton for this comparison is sketched after the import cell below.)

  • For each classifier, which value of the hyperparameters gives the best results for the digits data? For LinearSVC, compare loss='hinge' and loss='squared_hinge' (named 'l1' and 'l2', respectively, in older scikit-learn releases). For KNeighborsClassifier, use n_neighbors between 1 and 10. Note that GaussianNB does not have any adjustable hyperparameters.
  • Bonus: do the same exercise on the Iris data rather than the Digits data. Does the same classifier/hyperparameter combination win out in this case?

In [ ]:
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
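
If you would like a starting point, one possible way to structure the first comparison is sketched below. It is only a sketch (not necessarily the provided solution) and reuses digits, metrics, and train_test_split from earlier in the notebook:

In [ ]:
# A possible skeleton: compare the three estimators with their default
# hyper-parameters on a shared train/validation split.
# (LinearSVC may warn that it did not fully converge on the raw pixel values.)
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

for Model in [GaussianNB, KNeighborsClassifier, LinearSVC]:
    clf = Model().fit(X_train, y_train)
    print(Model.__name__,
          metrics.f1_score(y_test, clf.predict(X_test), average="weighted"))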

In [ ]:

Solution


In [ ]:
%load solutions/04C_validation_exercise.py