Supervised learning, in which the data comes with additional attributes that we want to predict. The problem can be either classification or regression; here we work on a classification task.
MNIST dataset - a set of 70,000 small images of handwritten digits. You can read more via The MNIST Database.
In [1]:
# Note: fetch_mldata was removed from recent scikit-learn releases; with newer
# versions, use fetch_openml('mnist_784', version=1, as_frame=False) instead
# and cast the string labels to integers.
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
In [2]:
mnist
Out[2]:
In [3]:
len(mnist['data'])
Out[3]:
In [4]:
X, y = mnist['data'], mnist['target']
In [5]:
X
Out[5]:
In [6]:
y
Out[6]:
In [7]:
X[69999]
Out[7]:
In [8]:
y[69999]
Out[8]:
In [9]:
X.shape
Out[9]:
In [10]:
y.shape
Out[10]:
In [11]:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
In [12]:
some_digit = X[1000]
some_digit_image = some_digit.reshape(28, 28)  # each MNIST image is 28x28 pixels
plt.imshow(some_digit_image);
In [13]:
y[1000]
Out[13]:
In [14]:
num_split = 60000
X_train, X_test, y_train, y_test = X[:num_split], X[num_split:], y[:num_split], y[num_split:]
Tip: we typically shuffle the training set. This ensures the training set is randomised, so the folds used later are not biased by the ordering of the examples and the data distribution stays consistent. However, shuffling is a bad idea for time-series data.
In [15]:
import numpy as np
In [16]:
shuffle_index = np.random.permutation(num_split)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
To simplify the problem, we will make this an exercise of "zero" versus "non-zero", turning it into a two-class problem.
We first need to convert the target into a boolean that is True for zeros and False for everything else.
In [17]:
y_train_0 = (y_train == 0)
In [18]:
y_train_0
Out[18]:
In [19]:
y_test_0 = (y_test == 0)
In [20]:
y_test_0
Out[20]:
At this point we can pick any classifier and train it. This is the iterative part: choosing and testing classifiers and tuning their hyperparameters.
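As a rough illustration of that tuning loop (not part of the original notebook), a grid search over SGDClassifier's regularisation strength might look like the sketch below; the alpha grid is hypothetical.
# Hypothetical hyperparameter-search sketch; the grid values are illustrative only.
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {'alpha': [1e-5, 1e-4, 1e-3]}
search = GridSearchCV(SGDClassifier(random_state=0), param_grid,
                      cv=3, scoring='accuracy')
# search.fit(X_train, y_train_0)   # then inspect search.best_params_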
In [21]:
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier(random_state = 0)
clf.fit(X_train, y_train_0)
Out[21]:
In [22]:
clf.predict(X[1000].reshape(1, -1))
Out[22]:
Let's try StratifiedKFold, which uses stratified sampling to create multiple folds. At each iteration, the classifier is cloned, trained on the training folds, and makes predictions on the test fold.
In [23]:
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
clf = SGDClassifier(random_state=0)
In [24]:
skfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=100)  # random_state only takes effect when shuffle=True
In [25]:
for train_index, test_index in skfolds.split(X_train, y_train_0):
    clone_clf = clone(clf)
    X_train_fold = X_train[train_index]
    y_train_folds = y_train_0[train_index]
    X_test_fold = X_train[test_index]
    y_test_fold = y_train_0[test_index]
    clone_clf.fit(X_train_fold, y_train_folds)
    y_pred = clone_clf.predict(X_test_fold)
    n_correct = sum(y_pred == y_test_fold)
    print("{0:.4f}".format(n_correct / len(y_pred)))
K-fold cross-validation splits the training set into K folds, then makes predictions on each fold using a model trained on the remaining folds and evaluates them.
In [26]:
from sklearn.model_selection import cross_val_score
In [27]:
cross_val_score(clf, X_train, y_train_0, cv=3, scoring='accuracy')
Out[27]:
Let's check against a dumb classifier
In [28]:
1 - sum(y_train_0) / len(y_train_0)
Out[28]:
A simple check shows that about 90.13% of the images are not zero, so any time you guess that an image is not zero, you will be right about 90.13% of the time.
Bear this in mind when you are dealing with skewed datasets. Because of this, accuracy is generally not the preferred performance measure for classifiers.
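To make the comparison concrete, here is a small sketch (not in the original notebook) using scikit-learn's DummyClassifier, which with strategy='most_frequent' always predicts the majority class ("not zero") and should score roughly that same ~90% accuracy:
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

dumb_clf = DummyClassifier(strategy='most_frequent')  # always predicts "not zero"
cross_val_score(dumb_clf, X_train, y_train_0, cv=3, scoring='accuracy')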
In [29]:
from sklearn.model_selection import cross_val_predict
In [30]:
y_train_pred = cross_val_predict(clf, X_train, y_train_0, cv=3)
In [31]:
from sklearn.metrics import confusion_matrix
In [32]:
confusion_matrix(y_train_0, y_train_pred)
Out[32]:
In the confusion matrix, each row represents an actual class and each column a predicted class.
First row: the non-zero images (the negative class) - true negatives on the left, false positives on the right.
Second row: the images of zeros (the positive class) - false negatives on the left, true positives on the right.
In [33]:
from sklearn.metrics import precision_score, recall_score
In [34]:
precision_score(y_train_0, y_train_pred) # 5528 / (717 + 5528)
Out[34]:
In [35]:
5528 / (717+5528)
Out[35]:
In [36]:
recall_score(y_train_0, y_train_pred) # 5528 / (395 + 5528)
Out[36]:
In [37]:
5528 / (395 + 5528)
Out[37]:
The $F_1$ score is the harmonic mean of precision and recall. A regular mean gives equal weight to all values; the harmonic mean gives more weight to low values.
$$F_1=\frac{2}{\frac{1}{\textrm{precision}}+\frac{1}{\textrm{recall}}}=2\times \frac{\textrm{precision}\times \textrm{recall}}{\textrm{precision}+\textrm{recall}}=\frac{TP}{TP+\frac{FN+FP}{2}}$$
The $F_1$ score favours classifiers that have similar precision and recall.
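As a quick sanity check, we can recompute $F_1$ by hand from the counts used in the comments above (TP = 5528, FP = 717, FN = 395); the result should agree with f1_score below.
tp, fp, fn = 5528, 717, 395
precision = tp / (tp + fp)   # ~0.885
recall = tp / (tp + fn)      # ~0.933
2 * precision * recall / (precision + recall)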
In [38]:
from sklearn.metrics import f1_score
In [39]:
f1_score(y_train_0, y_train_pred)
Out[39]:
Our classifier is designed to pick up zeros.
Figure: 12 observations ranked by decision score (6 of them zeros), with three candidate thresholds marked by arrows.
Central Arrow
Suppose the decision threshold is positioned at the central arrow: at this threshold, the precision is $\frac{4}{5}=80\%$.
However, out of the 6 zeros, the classifier only picked up 4, so the recall is $\frac{4}{6}\approx 67\%$.
Right Arrow
At this threshold, the precision is $\frac{3}{3}=100\%$. However, out of the 6 zeros, the classifier only picked up 3, so the recall is $\frac{3}{6}=50\%$.
Left Arrow
At this threshold, the precision is $\frac{6}{8}=75\%$. Out of the 6 zeros, the classifier picked up all 6, so the recall is $\frac{6}{6}=100\%$.
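These numbers can be reproduced with a small toy example; the labels, scores and cut-off values below are hypothetical, constructed only to match the 12 observations described above.
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical labels for 12 observations sorted from highest to lowest score
# (1 = zero, the positive class); the scores themselves are made up.
y_true = np.array([1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0])
scores = np.arange(12, 0, -1)                  # 12, 11, ..., 1
for thr in (9, 7, 4):                          # right, central and left arrows
    y_pred = scores > thr
    print(thr, precision_score(y_true, y_pred), recall_score(y_true, y_pred))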
In [40]:
clf = SGDClassifier(random_state=0)
clf.fit(X_train, y_train_0)
Out[40]:
In [41]:
y[1000]
Out[41]:
In [42]:
y_scores = clf.decision_function(X[1000].reshape(1, -1))
y_scores
Out[42]:
In [43]:
threshold = 0
In [44]:
y_some_digits_pred = (y_scores > threshold)
In [45]:
y_some_digits_pred
Out[45]:
In [46]:
threshold = 40000
y_some_digits_pred = (y_scores > threshold)
y_some_digits_pred
Out[46]:
In [47]:
y_scores = cross_val_predict(clf, X_train, y_train_0, cv=3, method='decision_function')
In [48]:
plt.figure(figsize=(12,8)); plt.hist(y_scores, bins=100);
With the decision scores, we can compute precision and recall for all possible thresholds using the precision_recall_curve() function:
In [49]:
from sklearn.metrics import precision_recall_curve
In [50]:
precisions, recalls, thresholds = precision_recall_curve(y_train_0, y_scores)
In [51]:
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
    plt.plot(thresholds, precisions[:-1], "b--", label="Precision")
    plt.plot(thresholds, recalls[:-1], "g--", label="Recall")
    plt.xlabel("Threshold")
    plt.legend(loc="upper left")
    plt.ylim([-0.5, 1.5])
In [52]:
plt.figure(figsize=(12,8));
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.show()
With this chart, you can select the threshold value that gives you the best precision/recall tradeoff for your task.
Some tasks may call for higher precision (accuracy of the positive predictions), such as a classifier that flags adult content to protect kids. This requires the classifier to set a high bar before allowing any content through to children.
Some tasks may call for higher recall (ratio of positive instances that are correctly detected by the classifier), such as detecting shoplifters/intruders on surveillance images - you want anything that remotely resembles a "positive" instance to be picked up.
One can also plot precision against recall to assist with the threshold selection:
In [53]:
plt.figure(figsize=(12,8));
plt.plot(recalls, precisions);   # recall on the x-axis, precision on the y-axis
plt.xlabel('recall');
plt.ylabel('precision');
plt.title('PR Curve: precision/recall tradeoff');
In [54]:
len(precisions)
Out[54]:
In [55]:
len(thresholds)
Out[55]:
In [56]:
plt.figure(figsize=(12,8));
plt.plot(thresholds, precisions[:-1]);  # precisions has one extra element (final value 1.0) than thresholds
In [57]:
idx = len(precisions[precisions < 0.9])  # index of the first precision value that reaches 0.9
In [58]:
thresholds[idx]
Out[58]:
In [59]:
y_train_pred_90 = (y_scores > 21454)  # the threshold found above for ~90% precision
In [60]:
precision_score(y_train_0, y_train_pred_90)
Out[60]:
In [61]:
recall_score(y_train_0, y_train_pred_90)
Out[61]:
In [62]:
idx = len(precisions[precisions < 0.99])  # index of the first precision value that reaches 0.99
In [63]:
thresholds[idx]
Out[63]:
In [64]:
y_train_pred_99 = (y_scores > thresholds[idx])
In [65]:
precision_score(y_train_0, y_train_pred_99)
Out[65]:
In [66]:
recall_score(y_train_0, y_train_pred_99)
Out[66]:
Instead of plotting precision versus recall, the ROC curve plots the true positive rate (another name for recall) against the false positive rate. The false positive rate (FPR) is the ratio of negative instances that are incorrectly classified as positive. It is equal to one minus the true negative rate, which is the ratio of negative instances that are correctly classified as negative. The TNR is also called specificity. Hence the ROC curve plots sensitivity (recall) versus 1 - specificity.
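In symbols, with TP, FP, TN, FN as in the confusion matrix above:
$$\textrm{TPR}=\frac{TP}{TP+FN},\qquad \textrm{FPR}=\frac{FP}{FP+TN}=1-\textrm{TNR},\qquad \textrm{TNR}=\frac{TN}{TN+FP}$$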
In [67]:
from sklearn.metrics import roc_curve
In [68]:
fpr, tpr, thresholds = roc_curve(y_train_0, y_scores)
In [69]:
def plot_roc_curve(fpr, tpr, label=None):
    plt.plot(fpr, tpr, linewidth=2, label=label)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.axis([0, 1, 0, 1])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('ROC Curve')
In [70]:
plt.figure(figsize=(12,8));
plot_roc_curve(fpr, tpr)
plt.show();
In [71]:
from sklearn.metrics import roc_auc_score
In [72]:
roc_auc_score(y_train_0, y_scores)
Out[72]:
Use the PR curve whenever the positive class is rare or when you care more about the false positives than the false negatives.
Use the ROC curve whenever the negative class is rare or when you care more about the false negatives than the false positives.
In the example above, the ROC curve seems to suggest that the classifier is good. However, when you look at the PR curve, you can see that there is room for improvement.
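One way to quantify this (a hedged aside, not in the original notebook) is the area under the PR curve, available as average_precision_score, the PR analogue of roc_auc_score:
from sklearn.metrics import average_precision_score
average_precision_score(y_train_0, y_scores)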
In [73]:
from sklearn.ensemble import RandomForestClassifier
In [74]:
f_clf = RandomForestClassifier(random_state=0)
In [75]:
# RandomForestClassifier has no decision_function, so we use class probabilities instead
y_probas_forest = cross_val_predict(f_clf, X_train, y_train_0,
                                    cv=3, method='predict_proba')
In [76]:
y_scores_forest = y_probas_forest[:, 1]  # probability of the positive class serves as the score
fpr_forest, tpr_forest, threshold_forest = roc_curve(y_train_0, y_scores_forest)
In [77]:
plt.figure(figsize=(12,8));
plt.plot(fpr, tpr, "b:", label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.legend(loc="lower right")
plt.show();
In [78]:
roc_auc_score(y_train_0, y_scores_forest)
Out[78]:
In [79]:
f_clf.fit(X_train, y_train_0)
Out[79]:
In [80]:
y_train_rf = cross_val_predict(f_clf, X_train, y_train_0, cv=3)
In [81]:
precision_score(y_train_0, y_train_rf)
Out[81]:
In [82]:
recall_score(y_train_0, y_train_rf)
Out[82]:
In [83]:
confusion_matrix(y_train_0, y_train_rf)
Out[83]: