by Alejandro Correa Bahnsen and Jesus Solano
version 1.5, February 2019
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
Why are we learning about ensembling?
Students will be able to:
Ensemble learning is a widely studied topic in the machine learning community. The main idea behind the ensemble methodology is to combine several individual base classifiers in order to have a classifier that outperforms each of them.
Nowadays, ensemble methods are among the most popular and well-studied machine learning techniques; since 2009, all of the first-place and second-place winners of the KDD-Cup https://www.sigkdd.org/kddcup/ have used ensemble methods. The core principle in ensemble learning is to induce random perturbations into the learning procedure in order to produce several different base classifiers from a single training set, and then to combine those base classifiers to make the final prediction. Several methods have been proposed to induce these random perturbations and thereby create the different base classifiers, in particular:
Finally, after the base classifiers are trained, they are typically combined using either:
There are three main reasons why ensemble methods perform better than single models: statistical, computational and representational. First, from a statistical point of view, when the learning set is too small, an algorithm can find several good models within the search space that achieve the same performance on the training set $\mathcal{S}$. Nevertheless, without a validation set, there is a risk of choosing the wrong model. The second reason is computational; in general, algorithms rely on some local search optimization and may get stuck in a local optimum. An ensemble may mitigate this by training the base models from different starting points or on different regions of the training set. The last reason is representational. In most cases, for a learning set of finite size, the true function $f$ cannot be represented by any of the candidate models. By combining several models in an ensemble, it may be possible to obtain a model with larger coverage of the space of representable functions.
Let's pretend that instead of building a single model to solve a binary classification problem, you created five independent models, and each model was correct about 70% of the time. If you combined these models into an "ensemble" and used their majority vote as a prediction, how often would the ensemble be correct?
In [1]:
import numpy as np
# set a seed for reproducibility
np.random.seed(1234)
# generate 1000 random numbers (between 0 and 1) for each model, representing 1000 observations
mod1 = np.random.rand(1000)
mod2 = np.random.rand(1000)
mod3 = np.random.rand(1000)
mod4 = np.random.rand(1000)
mod5 = np.random.rand(1000)
# each model independently predicts 1 (the "correct response") if its random number is greater than 0.3, which happens ~70% of the time
preds1 = np.where(mod1 > 0.3, 1, 0)
preds2 = np.where(mod2 > 0.3, 1, 0)
preds3 = np.where(mod3 > 0.3, 1, 0)
preds4 = np.where(mod4 > 0.3, 1, 0)
preds5 = np.where(mod5 > 0.3, 1, 0)
# print the first 20 predictions from each model
print(preds1[:20])
print(preds2[:20])
print(preds3[:20])
print(preds4[:20])
print(preds5[:20])
In [2]:
# average the predictions and then round to 0 or 1
ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5)/5.0).astype(int)
# print the ensemble's first 20 predictions
print(ensemble_preds[:20])
In [3]:
# how accurate was each individual model?
print(preds1.mean())
print(preds2.mean())
print(preds3.mean())
print(preds4.mean())
print(preds5.mean())
In [4]:
# how accurate was the ensemble?
print(ensemble_preds.mean())
Note: As you add more independent, better-than-random models to the voting process, the probability that the majority vote is correct increases; this is known as Condorcet's Jury Theorem.
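To see this empirically, here is a minimal sketch (using the same setup as above: independent models that are each correct about 70% of the time) that grows the number of voting models and tracks the majority vote's accuracy:

In [ ]:
# sketch: majority-vote accuracy as the number of independent ~70%-accurate models grows
np.random.seed(1234)
n_obs = 1000
for T in [1, 5, 25, 101]:
    # each of the T models is correct on ~70% of the observations, independently of the others
    votes = (np.random.rand(T, n_obs) > 0.3).astype(int)
    # the "correct" label is 1 for every observation, so the mean of the majority vote is its accuracy
    majority = (votes.sum(axis=0) > T / 2).astype(int)
    print(T, majority.mean())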
Ensemble learning (or "ensembling") is the process of combining several predictive models in order to produce a combined model that is more accurate than any individual model.
For ensembling to work well, the models must have the following characteristics:
The big idea: If you have a collection of individually imperfect (and independent) models, the "one-off" mistakes made by each model are probably not going to be made by the rest of the models, and thus the mistakes will be discarded when averaging the models.
There are two basic methods for ensembling:
If we assume that each one of the $T$ base classifiers has a probability $\rho$ of being correct, the probability of an ensemble making the correct decision, assuming independence, denoted by $P_c$, can be calculated using the binomial distribution
$$P_c = \sum_{j>T/2}^{T} \binom{T}{j} \rho^j(1-\rho)^{T-j}.$$

Furthermore, it can be shown that if $T\ge3$:

$$ \lim_{T \to \infty} P_c= \begin{cases} 1 &\mbox{if } \rho>0.5 \\ 0 &\mbox{if } \rho<0.5 \\ 0.5 &\mbox{if } \rho=0.5 , \end{cases} $$

leading to the conclusion that
$$
\rho \ge 0.5 \quad \text{and} \quad T\ge3 \quad \Rightarrow \quad P_c\ge \rho.
$$
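As a quick sanity check, the formula can be evaluated directly (a sketch; it assumes scipy is available). For $\rho=0.7$ and $T=5$ it gives $P_c \approx 0.84$, consistent with the simulated ensemble above:

In [ ]:
# evaluate P_c in closed form for a few (odd) ensemble sizes T, with rho = 0.7
from scipy.stats import binom
rho = 0.7
for T in [1, 5, 25, 101]:
    # P(majority correct) = P(more than T/2 of the T votes are correct) = 1 - CDF at floor(T/2)
    p_c = 1 - binom.cdf(T // 2, T, rho)
    print(T, round(p_c, 4))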
Machine learning flowchart created by the winner of Kaggle's CrowdFlower competition
In [5]:
# read in and prepare the vehicle training data
import pandas as pd
url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/vehicles_train.csv'
train = pd.read_csv(url)
train['vtype'] = train.vtype.map({'car':0, 'truck':1})
# read in and prepare the vehicle testing data
url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/vehicles_test.csv'
test = pd.read_csv(url)
test['vtype'] = test.vtype.map({'car':0, 'truck':1})
In [6]:
train.head()
Out[6]:
In [7]:
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor
# four different model families (note: GaussianNB is a classifier; the others are regressors)
models = {'lr': LinearRegression(),
          'dt': DecisionTreeRegressor(),
          'nb': GaussianNB(),
          'kn': KNeighborsRegressor()}
In [8]:
# Train all the models
X_train = train.iloc[:, 1:]
X_test = test.iloc[:, 1:]
y_train = train.price
y_test = test.price
for model in models.keys():
    models[model].fit(X_train, y_train)
In [9]:
# predict test for each model
y_pred = pd.DataFrame(index=test.index, columns=models.keys())
for model in models.keys():
    y_pred[model] = models[model].predict(X_test)
In [10]:
# Evaluate each model
from sklearn.metrics import mean_squared_error
for model in models.keys():
    print(model, np.sqrt(mean_squared_error(y_pred[model], y_test)))
In [11]:
np.sqrt(mean_squared_error(y_pred.mean(axis=1), y_test))
Out[11]:
Advantages of manual ensembling:
Disadvantages of manual ensembling:
The primary weakness of decision trees is that they don't tend to have the best predictive accuracy. This is partially due to high variance, meaning that different splits in the training data can lead to very different trees.
Bagging is a general purpose procedure for reducing the variance of a machine learning method, but is particularly useful for decision trees. Bagging is short for bootstrap aggregation, meaning the aggregation of bootstrap samples.
What is a bootstrap sample? A random sample with replacement:
In [12]:
# set a seed for reproducibility
np.random.seed(1)
# create an array of 1 through 20
nums = np.arange(1, 21)
print(nums)
# sample that array 20 times with replacement
print(np.random.choice(a=nums, size=20, replace=True))
How does bagging work (for decision trees)?
Notes:
Bagging increases predictive accuracy by reducing the variance, similar to how cross-validation reduces the variance associated with a single train/test split (for estimating out-of-sample error) by splitting many times and averaging the results.
In [13]:
# set a seed for reproducibility
np.random.seed(123)
n_samples = train.shape[0]
n_B = 10
# create ten bootstrap samples (will be used to select rows from the DataFrame)
samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(n_B)]
samples
Out[13]:
In [14]:
# show the rows for the first decision tree
train.iloc[samples[0], :]
Out[14]:
Build one tree for each sample
In [15]:
from sklearn.tree import DecisionTreeRegressor
# grow each tree deep
treereg = DecisionTreeRegressor(max_depth=None, random_state=123)
# DataFrame for storing predicted price from each tree
y_pred = pd.DataFrame(index=test.index, columns=list(range(n_B)))
# grow one tree for each bootstrap sample and make predictions on testing data
for i, sample in enumerate(samples):
    X_train = train.iloc[sample, 1:]
    y_train = train.iloc[sample, 0]
    treereg.fit(X_train, y_train)
    y_pred[i] = treereg.predict(X_test)
In [16]:
y_pred
Out[16]:
Results of each tree
In [17]:
for i in range(n_B):
    print(i, np.sqrt(mean_squared_error(y_pred[i], y_test)))
Results of the ensemble
In [18]:
y_pred.mean(axis=1)
Out[18]:
In [19]:
np.sqrt(mean_squared_error(y_test, y_pred.mean(axis=1)))
Out[19]:
In [20]:
# define the training and testing sets
X_train = train.iloc[:, 1:]
y_train = train.iloc[:, 0]
X_test = test.iloc[:, 1:]
y_test = test.iloc[:, 0]
In [21]:
# instruct BaggingRegressor to use DecisionTreeRegressor as the "base estimator"
from sklearn.ensemble import BaggingRegressor
bagreg = BaggingRegressor(DecisionTreeRegressor(), n_estimators=500,
                          bootstrap=True, oob_score=True, random_state=1)
In [22]:
# fit and predict
bagreg.fit(X_train, y_train)
y_pred = bagreg.predict(X_test)
y_pred
Out[22]:
In [23]:
# calculate RMSE
np.sqrt(mean_squared_error(y_test, y_pred))
Out[23]:
In [24]:
# show the first bootstrap sample
samples[0]
Out[24]:
In [25]:
# show the "in-bag" observations for each sample
for sample in samples:
    print(set(sample))
In [26]:
# show the "out-of-bag" observations for each sample
for sample in samples:
    print(sorted(set(range(n_samples)) - set(sample)))
How to calculate "out-of-bag error":
When B is sufficiently large, the out-of-bag error is an accurate estimate of out-of-sample error.
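As an illustration, here is a minimal sketch of the calculation done by hand with the ten bootstrap samples created above (B=10 is small, so the estimate is noisy and a few observations may end up without any out-of-bag prediction):

In [ ]:
# out-of-bag RMSE computed manually: each tree predicts only the observations it never saw
oob_preds = pd.DataFrame(index=train.index, columns=range(n_B), dtype=float)
for i, sample in enumerate(samples):
    tree = DecisionTreeRegressor(max_depth=None, random_state=123)
    tree.fit(train.iloc[sample, 1:], train.iloc[sample, 0])
    oob_rows = sorted(set(range(n_samples)) - set(sample))
    oob_preds.iloc[oob_rows, i] = tree.predict(train.iloc[oob_rows, 1:])
# average each observation's out-of-bag predictions and compare with the true price
oob_estimate = oob_preds.mean(axis=1)
has_oob = oob_estimate.notnull()
np.sqrt(mean_squared_error(train.price[has_oob], oob_estimate[has_oob]))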
In [27]:
# compute the out-of-bag R-squared score (not MSE, unfortunately!) for B=500
bagreg.oob_score_
Out[27]:
Bagging increases predictive accuracy, but decreases model interpretability because it's no longer possible to visualize the tree to understand the importance of each feature.
However, we can still obtain an overall summary of feature importance from bagged models:
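For example (a sketch using the impurity-based importances of the individual trees): the fitted BaggingRegressor exposes its trees through estimators_, so we can average each tree's feature_importances_ to get an overall ranking:

In [ ]:
# average the impurity-based feature importances of the 500 bagged trees
importances = np.mean([tree.feature_importances_ for tree in bagreg.estimators_], axis=0)
pd.Series(importances, index=X_train.columns).sort_values(ascending=False)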
The most typical form of an ensemble is made by combining $T$ different base classifiers.
Each base classifier $M(\mathcal{S}_j)$ is trained by applying algorithm $M$ to a random subset
$\mathcal{S}_j$ of the training set $\mathcal{S}$.
For simplicity we define $M_j \equiv M(\mathcal{S}_j)$ for $j=1,\dots,T$, and
$\mathcal{M}=\{M_j\}_{j=1}^{T}$ a set of base classifiers.
Then, these models are combined using majority voting to create the ensemble $H$ as follows
$$
f_{mv}(\mathcal{S},\mathcal{M}) = \arg\max_{c \in \{0,1\}} \sum_{j=1}^T
\mathbf{1}_c(M_j(\mathcal{S})).
$$
In [28]:
# read in and prepare the churn data
# Download the dataset
import pandas as pd
import numpy as np
url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/churn.csv'
data = pd.read_csv(url)
# Create X and y
# Select only the numeric features
X = data.iloc[:, [1,2,6,7,8,9,10]].astype(float)
# Convert bools to floats
X = X.join((data.iloc[:, [4,5]] == 'no').astype(float))
y = (data.iloc[:, -1] == 'True.').astype(int)
In [29]:
X.head()
Out[29]:
In [30]:
y.value_counts().to_frame('count').assign(percentage = lambda x: x/x.sum())
Out[30]:
In [31]:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Create 100 decision trees
In [32]:
n_estimators = 100
# set a seed for reproducibility
np.random.seed(123)
n_samples = X_train.shape[0]
# create bootstrap samples (will be used to select rows from the DataFrame)
samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(n_estimators)]
In [33]:
from sklearn.tree import DecisionTreeClassifier
np.random.seed(123)
seeds = np.random.randint(1, 10000, size=n_estimators)
trees = {}
for i in range(n_estimators):
    trees[i] = DecisionTreeClassifier(max_features="sqrt", max_depth=None, random_state=seeds[i])
    trees[i].fit(X_train.iloc[samples[i]], y_train.iloc[samples[i]])
In [34]:
# Predict
y_pred_df = pd.DataFrame(index=X_test.index, columns=list(range(n_estimators)))
for i in range(n_estimators):
    y_pred_df.iloc[:, i] = trees[i].predict(X_test)
y_pred_df.head()
Out[34]:
Predict using majority voting
In [35]:
y_pred_df.sum(axis=1)[:10]
Out[35]:
In [36]:
y_pred = (y_pred_df.sum(axis=1) >= (n_estimators / 2)).astype(int)
from sklearn import metrics
metrics.f1_score(y_pred, y_test)
Out[36]:
In [37]:
metrics.accuracy_score(y_pred, y_test)
Out[37]:
In [38]:
from sklearn.ensemble import BaggingClassifier
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,
                        random_state=42, n_jobs=-1, oob_score=True)
In [39]:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Out[39]:
The majority voting approach gives the same weight to each classifier regardless of its performance. Why not take the out-of-bag (oob) performance of each classifier into account?
In the traditional weighted-voting approach, the votes of the base classifiers are compared as before, but each classifier $M_j$ is given a weight $\alpha_j$ during the voting phase $$ f_{wv}(\mathcal{S},\mathcal{M}, \alpha) =\arg\max_{c \in \{0,1\}} \sum_{j=1}^T \alpha_j \mathbf{1}_c(M_j(\mathcal{S})), $$ where $\alpha=\{\alpha_j\}_{j=1}^T$. The weight $\alpha_j$ reflects the performance of classifier $M_j$. It is usually computed from the misclassification error $\epsilon$ of the base classifier $M_j$ on its out-of-bag set $\mathcal{S}_j^{oob}=\mathcal{S}-\mathcal{S}_j$, normalized so that the weights sum to one: \begin{equation} \alpha_j=\frac{1-\epsilon(M_j(\mathcal{S}_j^{oob}))}{\sum_{j_1=1}^T \left(1-\epsilon(M_{j_1}(\mathcal{S}_{j_1}^{oob}))\right)}. \end{equation}
Select each oob sample
In [40]:
samples_oob = []
# show the "out-of-bag" observations for each sample
for sample in samples:
    samples_oob.append(sorted(set(range(n_samples)) - set(sample)))
Estimate the oob error of each classifier
In [41]:
errors = np.zeros(n_estimators)
for i in range(n_estimators):
    y_pred_ = trees[i].predict(X_train.iloc[samples_oob[i]])
    errors[i] = 1 - metrics.accuracy_score(y_train.iloc[samples_oob[i]], y_pred_)
In [42]:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
plt.scatter(range(n_estimators), errors)
plt.xlim([0, n_estimators])
plt.title('OOB error of each tree')
Out[42]:
Estimate $\alpha$
In [43]:
alpha = (1 - errors) / (1 - errors).sum()
In [44]:
weighted_sum_1 = ((y_pred_df) * alpha).sum(axis=1)
In [45]:
weighted_sum_1.head(20)
Out[45]:
In [46]:
y_pred = (weighted_sum_1 >= 0.5).astype(int)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Out[46]:
In [47]:
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,
                        random_state=42, n_jobs=-1, oob_score=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Out[47]:
In [48]:
errors = np.zeros(clf.n_estimators)
y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))
for i in range(clf.n_estimators):
    # estimators_samples_[i] is a boolean in-bag mask in this scikit-learn version; negate it to get the oob rows
    oob_sample = ~clf.estimators_samples_[i]
    y_pred_ = clf.estimators_[i].predict(X_train.values[oob_sample])
    # store the oob misclassification error of tree i (1 - accuracy, as in the manual version above)
    errors[i] = 1 - metrics.accuracy_score(y_train.values[oob_sample], y_pred_)
    y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)
alpha = (1 - errors) / (1 - errors).sum()
y_pred = (np.sum(y_pred_all_ * alpha, axis=1) >= 0.5).astype(int)
In [49]:
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Out[49]:
The stacking method consists of combining the different base classifiers by learning a second-level algorithm on top of them. In this framework, once the base classifiers are constructed using the training set $\mathcal{S}$, a new set is constructed in which the outputs of the base classifiers are treated as the features, while keeping the class labels.
Even though there is no restriction on which algorithm can be used as a second-level learner, it is common to use a linear model, such as $$ f_s(\mathcal{S},\mathcal{M},\beta) = g \left( \sum_{j=1}^T \beta_j M_j(\mathcal{S}) \right), $$ where $\beta=\{\beta_j\}_{j=1}^T$, and $g(\cdot)$ is the sign function, $g(z)=\operatorname{sign}(z)$, in the case of a linear regression, or the sigmoid function, defined as $g(z)=1/(1+e^{-z})$, in the case of a logistic regression.
Let's first build a new training set consisting of the outputs of every classifier
In [50]:
X_train_2 = pd.DataFrame(index=X_train.index, columns=list(range(n_estimators)))
for i in range(n_estimators):
    X_train_2[i] = trees[i].predict(X_train)
In [51]:
X_train_2.head()
Out[51]:
In [52]:
from sklearn.linear_model import LogisticRegressionCV
In [53]:
lr = LogisticRegressionCV(cv=5)
lr.fit(X_train_2, y_train)
Out[53]:
In [54]:
lr.coef_
Out[54]:
In [55]:
# y_pred_df already holds each tree's predictions on X_test, i.e. the second-level features for the test set
y_pred = lr.predict(y_pred_df)
In [56]:
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Out[56]:
In [57]:
y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))
X_train_3 = np.zeros((X_train.shape[0], clf.n_estimators))
for i in range(clf.n_estimators):
    X_train_3[:, i] = clf.estimators_[i].predict(X_train)
    y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)
lr = LogisticRegressionCV(cv=5)
lr.fit(X_train_3, y_train)
y_pred = lr.predict(y_pred_all_)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Out[57]:
Versus using only a single decision tree
In [58]:
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
y_pred = dt.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Out[58]: