We start off by collecting the dataset. It can be found both online and (in a slightly nicer form) in our GitHub repository, so we just fetch it via `wget` (note: make sure you first run `pip install wget` in your terminal, since `wget` is not a preinstalled Python library). This will download a copy of the dataset to your current working directory.

In [1]:
```
import wget
import pandas as pd

# Download the dataset and load it into a dataframe
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/spam/spam_dataset.csv'
dataset_path = wget.download(data_url)
dataset = pd.read_csv(dataset_path, sep=",")

# Take a peek at the data
dataset.head()
```
In [ ]:
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.cross_validation import train_test_split, cross_val_score
from sklearn.grid_search import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Split the data (seed 1 for reproducibility)
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)

# Compare a low and a high number of neighbours with 5-fold CV
knn3 = KNeighborsClassifier(n_neighbors=3)
knn99 = KNeighborsClassifier(n_neighbors=99)
knn3scores = cross_val_score(knn3, XTrain, yTrain, cv=5)
print(knn3scores)
print("Mean of scores KNN3:", knn3scores.mean())
knn99scores = cross_val_score(knn99, XTrain, yTrain, cv=5)
print(knn99scores)
print("Mean of scores KNN99:", knn99scores.mean())

# Grid search over odd values of n_neighbors with 10-fold CV
knn = KNeighborsClassifier()
n_neighbors = np.arange(3, 151, 2)
grid = GridSearchCV(knn, [{'n_neighbors': n_neighbors}], cv=10)
grid.fit(XTrain, yTrain)
cv_scores = [x[1] for x in grid.grid_scores_]  # mean CV score per value

# Training and test accuracy for each value of n_neighbors
train_scores = []
test_scores = []
for n in n_neighbors:
    knn.n_neighbors = n
    knn.fit(XTrain, yTrain)
    train_scores.append(metrics.accuracy_score(yTrain, knn.predict(XTrain)))
    test_scores.append(metrics.accuracy_score(yTest, knn.predict(XTest)))

# Plot training, test and CV accuracy against n_neighbors
plt.plot(n_neighbors, train_scores, c="blue", label="Training Scores")
plt.plot(n_neighbors, test_scores, c="brown", label="Test Scores")
plt.plot(n_neighbors, cv_scores, c="black", label="CV Scores")
plt.xlabel('Number of K nearest neighbors')
plt.ylabel('Classification Accuracy')
plt.gca().invert_xaxis()
plt.legend(loc="upper left")
plt.show()
```
In [206]:
```
# Examine shape of dataset and some column names
print(dataset.shape)
print(dataset.columns.values)

# Summarise feature values
dataset.describe()
```
In [208]:
```
import numpy as np

# Convert dataframe to numpy array and split the
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:, :-1].astype(float)
y = npArray[:, -1]
```
In [209]:
```
from sklearn.cross_validation import train_test_split

# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
```
We are first going to try to predict spam emails with a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to the theory behind random forests. Briefly, random forests build a collection of classification trees, each of which tries to predict classes by recursively splitting the data on the features (and feature values) that separate the classes best. Each tree is trained on bootstrapped data, and each split is only allowed to consider a random subset of the variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles these base learners together.

Out of the box, scikit's random forest classifier already performs quite well on the spam dataset:

In [211]:
```
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

# Fit a random forest with default hyperparameters
rf = RandomForestClassifier()
rf.fit(XTrain, yTrain)

# Evaluate on the test set
rf_predictions = rf.predict(XTest)
print(metrics.classification_report(yTest, rf_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions), 2))
```

In order to build the best possible model - one that does a good job of describing the underlying trends in a dataset - we need to pick the right hyperparameter (HP) values. In the following examples, we will introduce different strategies for searching for the set of HPs that defines the best model, but first we need to make a slight detour to explain how to avoid a major pitfall when it comes to tuning models - **overfitting**.

The hallmark of overfitting is good training performance and bad testing performance.

As we mentioned above, HPs are not optimised while a learning algorithm is learning. Hence, we need other strategies to optimise them. The most basic way would be just to test different possible values for the HPs and see how the model performs. In a random forest, some hyperparameters we can optimise are `n_estimators` and `max_features`. `n_estimators` controls the number of trees in the forest - the more the better, but more trees come at the expense of longer training time. `max_features` controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.

Let's try out some HP values.

In [212]:
```
# Candidate hyperparameter values to try
n_estimators = np.array([5, 100])
max_features = np.array([10, 50])
```
In [117]:
```
import itertools

# Get grid of all possible combinations of HP values
hp_combinations = list(itertools.product(n_estimators, max_features))
for hp_combo in hp_combinations:
    print(hp_combo)
    # Train and output accuracies
    rf = RandomForestClassifier(n_estimators=hp_combo[0],
                                max_features=hp_combo[1])
    rf.fit(XTrain, yTrain)
    RF_predictions = rf.predict(XTest)
    print("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions), 2))
```

The higher value of `n_estimators` and the lower value of `max_features` did better. However, manually searching for the best HPs in this way is not efficient, and could potentially lead to models that perform well on the training data but do not generalise well to a new dataset, which is what we are actually interested in. This phenomenon of building models that do not generalise well, or that fit too closely to the training dataset, is called **overfitting**. This is a very important concept in machine learning, and it is very much worth getting a better understanding of what it is. Let's briefly discuss the **bias-variance tradeoff**.

When we train machine learning algorithms on a dataset, what we are really interested in is how our model will perform on an independent data set. It is not enough to predict whether emails in our training set are spam - how well would the model fare when predicting if a completely new, previously unseen datapoint is spam or not?

This is the idea behind splitting your dataset into a **training set** (on which models can be trained) and a **test set** (which is held out until the very end of your analysis, and provides an accurate measure of model performance). Essentially, we are only interested in building models that are **generalizable** - getting 100% accuracy on the training set is not impressive, and is simply an indicator of **overfitting**. Overfitting is the situation in which we have fitted our model too closely to the data, and have tuned to the noise instead of just to the signal.
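To see what overfitting looks like in practice, here is a minimal sketch on synthetic stand-in data (not the spam dataset, and using the newer `sklearn.model_selection` module path): a 1-nearest-neighbour classifier memorises its training set perfectly but fares worse on held-out data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic data with noisy labels, standing in for the spam dataset
rng = np.random.RandomState(1)
X = rng.rand(300, 5)
y = (X[:, 0] + rng.normal(scale=0.3, size=300) > 0.5).astype(int)

XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)

# A 1-NN classifier predicts each training point from itself,
# so it scores perfectly on the training set - it has fit the noise
knn1 = KNeighborsClassifier(n_neighbors=1).fit(XTrain, yTrain)
train_acc = accuracy_score(yTrain, knn1.predict(XTrain))
test_acc = accuracy_score(yTest, knn1.predict(XTest))
print("Training accuracy:", train_acc)        # 1.0
print("Test accuracy:", round(test_acc, 2))   # noticeably lower
```

The perfect training score is exactly the "not impressive" 100% mentioned above: it measures memorisation, not generalisation.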

In fact, this concept of wanting to fit algorithms to the training data well, but not so tightly that the model doesn't generalize, is a pervasive problem in machine learning. A common term for this balancing act is **the bias-variance trade-off**. Here is a nice introductory article on the topic that goes into more depth.

Have a look at how underfitting (high bias, low variance), properly fitting, and overfitting (low bias, high variance) models fare on the training compared to the test sets.

"Bias" and "variance" have got to be some of the least helpful terms in machine learning. One way to think of them is: a model that underfits (e.g. the straight line) is quite a bit wrong - it models the underlying generative process of the data as something too simple, and this is highly biased away from the truth. But, the straight line fit is not going to change very much across different datasets. The opposite trend applies to overfitted models.

Hence, we never try to optimize the model's performance on the training data, because it is a misguided and fruitless endeavour. But wait, didn't we also say that the test set is not meant to be touched until we are completely done training our model? How are we meant to optimize our hyperparameters then?

Enter **k-fold cross-validation**, which is a handy technique for measuring a model's performance using *only* the training set. k-fold CV is a general method, not specific to hyperparameter optimization, but it is very useful for that purpose. Say that we want to do 10-fold cross-validation. The process is as follows: we randomly partition the training set into 10 equal sections. Then, we train an algorithm on 9/10ths (i.e. 9 out of the 10 sections) of that training set and evaluate its performance on the remaining section. This gives us some measure of the model's performance (e.g. overall accuracy). We then train the algorithm on a *different* 9/10ths of the training set, and evaluate on the (different from before) remaining section. We continue the process 10 times, get 10 different measures of model performance, and average these values to get an overall measure of performance. Of course, we could have chosen some number other than 10. To continue with the example, the process behind 10-fold CV looks like this:
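The partitioning just described can be sketched directly with scikit-learn's `KFold` splitter (a minimal sketch on synthetic stand-in data; the newer `sklearn.model_selection` module path is assumed):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the spam training set
rng = np.random.RandomState(1)
X = rng.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# 10 rounds: train on 9/10ths, evaluate on the held-out 1/10th
fold_scores = []
kf = KFold(n_splits=10, shuffle=True, random_state=1)
for train_idx, val_idx in kf.split(X):
    model = KNeighborsClassifier()
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict(X[val_idx])
    fold_scores.append(accuracy_score(y[val_idx], preds))

print(len(fold_scores))                 # 10 per-fold accuracies
print(round(np.mean(fold_scores), 2))   # averaged into one overall estimate
```

In day-to-day use, `cross_val_score` wraps this whole loop into one call, as in the kNN cell earlier.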

We can use k-fold cross-validation to optimize HPs. Say we are deciding whether to use 1, 3 or 5 nearest neighbours in our nearest-neighbours classifier. We can start by setting the `n_neighbors` HP in our classifier object to 1, running 10-fold CV, and getting a measurement of the model's performance. Repeating the process with the other HP values will lead to different levels of performance, and we can simply choose the `n_neighbors` value that worked best.
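As a sketch of that procedure (again on synthetic stand-in data, with the newer `sklearn.model_selection` module path), `cross_val_score` is run once per candidate value and the means compared:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the spam training set
rng = np.random.RandomState(1)
X = rng.rand(300, 5)
y = (X[:, 0] > 0.5).astype(int)

# 10-fold CV for each candidate n_neighbors value
cv_means = {}
for k in (1, 3, 5):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=10)
    cv_means[k] = scores.mean()

# Pick the value with the best mean CV accuracy
best_k = max(cv_means, key=cv_means.get)
print(best_k, round(cv_means[best_k], 2))
```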

In the context of HP optimization, we perform k-fold cross validation together with **grid search** or **randomized search** to get a more robust estimate of the model performance associated with specific HP values.
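Scikit-learn's `GridSearchCV` wraps exactly this combination: it exhaustively tries every HP combination in a grid and scores each one with k-fold CV. A minimal sketch on synthetic stand-in data (newer `sklearn.model_selection` module path assumed):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the spam training set
rng = np.random.RandomState(1)
X = rng.rand(200, 8)
y = (X[:, 0] > 0.5).astype(int)

# Every combination in the grid is scored with 5-fold CV
param_grid = {'n_estimators': [5, 50], 'max_features': [2, 4]}
grid = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)   # the HP combination with the best mean CV score
```

`RandomizedSearchCV` has the same interface but samples HP combinations instead of trying them all, which scales better to large grids.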

In this post, we started with the motivation for tuning machine learning algorithms (i.e. nicer, bigger numbers in your models' performance reports!). We used different methods of searching over hyperparameter space, and evaluated candidate models at different points using k-fold cross validation. Tuned models were then run on the test set. Note that the concepts of training/test sets, cross-validation, and overfitting extend beyond the topic of tuning hyperparameters, though it is a nice application to demonstrate these ideas.

In this post, we tried to maximize the accuracy of our models, but there are problems for which you might want to maximize something else, like the model's specificity or sensitivity. For example, if we were doing medical diagnostics and trying to detect a deadly illness, it would be very bad to accidentally label a sick person as healthy (this would be called a "false negative" in the lingo). Maybe it's not so bad if we misclassify healthy people as sick ("false positive"), since in the worst case we would just annoy people by having them retake the diagnostic test. Hence, we might want our diagnostic model to be weighted towards optimizing sensitivity. Here is a good introduction to sensitivity and specificity which continues with the example of diagnostic tests.

Arguably, in spam detection, it is worse to misclassify real email as spam (false positive) than to let a few spam emails pass through your filter (false negative) and show up in people's mailboxes. In this case, we might aim to maximize specificity. Of course, we cannot be so focused on improving the specificity of our classifier that we completely bomb our sensitivity. There is a natural trade-off between these quantities (see this primer on ROC curves), and part of our job as statistical modellers is to practice the dark art of deciding where to draw the line.
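Both quantities fall straight out of the confusion matrix. A minimal sketch, with hypothetical predictions (1 = spam, 0 = not spam; the numbers here are made up for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and classifier predictions
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1])

# For binary labels, ravel() unpacks the matrix as tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # fraction of spam correctly caught
specificity = tn / (tn + fp)   # fraction of real mail correctly let through
print(sensitivity)             # 0.75
print(round(specificity, 2))   # 0.83
```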

Sometimes there is no tuning to be done. For example, a Naive Bayes (NB) classifier just operates by calculating conditional probabilities, and there is no real hyperparameter optimization stage. NB is actually a very interesting algorithm that is famous for classifying text documents (and the `spam` dataset in particular), so if you have time, check out a great overview and Python implementation here. It's a "naive" classifier because it rests on the assumption that the features in your dataset are independent, which is often not strictly true. In our spam dataset, you can imagine that the occurrence of the strings "win", "money", and "!!!" is probably not independent. Despite this, NB often still does a decent job at classification tasks.
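As a sketch of that idea, scikit-learn's `MultinomialNB` classifies a tiny hypothetical corpus with no tuning stage at all (the example emails and labels below are made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up corpus: 1 = spam, 0 = not spam
emails = ["win money now !!!", "win a free prize", "meeting agenda attached",
          "lunch tomorrow ?", "claim your money prize", "project report attached"]
labels = [1, 1, 0, 0, 1, 0]

# Turn each email into word counts, then fit NB's conditional probabilities
vec = CountVectorizer()
X_counts = vec.fit_transform(emails)
nb = MultinomialNB()   # no hyperparameter search needed
nb.fit(X_counts, labels)

prediction = nb.predict(vec.transform(["free money !!!"]))[0]
print(prediction)      # 1 - classified as spam
```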

In our next post, we will take these different tuned models and build them up into an ensemble model to increase our predictive performance even more. Stay... tuned! *Cue groans*.