12 - Ensemble Methods - Boosting

by Alejandro Correa Bahnsen and Jesus Solano

version 1.5, February 2019

Part of the class Practical Machine Learning

This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks go to Kevin Markham.

Why are we learning about ensembling?

  • Very popular method for improving the predictive performance of machine learning models
  • Provides a foundation for understanding more sophisticated models

Part 5: Boosting

While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are typically weighted in some way that is usually related to the weak learners' accuracy. After a weak learner is added, the data is reweighted: examples that are misclassified gain weight and examples that are classified correctly lose weight (some boosting algorithms actually decrease the weight of repeatedly misclassified examples, e.g., boost by majority and BrownBoost). Thus, future weak learners focus more on the examples that previous weak learners misclassified. (Wikipedia)
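
In symbols, the strong classifier is a weighted vote of the weak learners; for labels in $\{-1, +1\}$ it takes the form

$$H(x) = \mathrm{sign}\left(\sum_{t=1}^{T} \alpha_t h_t(x)\right)$$

where each weight $\alpha_t$ reflects the accuracy of the weak learner $h_t$.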


In [32]:
from IPython.display import Image
Image(url= "http://vision.cs.chubu.ac.jp/wp/wp-content/uploads/2013/07/OurMethodv81.png", width=900)


Out[32]:

AdaBoost

AdaBoost (adaptive boosting) is an ensemble learning algorithm that can be used for classification or regression. Although AdaBoost is more resistant to overfitting than many machine learning algorithms, it is often sensitive to noisy data and outliers.

AdaBoost is called adaptive because it uses multiple iterations to generate a single composite strong learner. AdaBoost creates the strong learner (a classifier that is well-correlated to the true classifier) by iteratively adding weak learners (a classifier that is only slightly correlated to the true classifier). During each round of training, a new weak learner is added to the ensemble and a weighting vector is adjusted to focus on examples that were misclassified in previous rounds. The result is a classifier that has higher accuracy than the weak learners’ classifiers.

Algorithm:

  • Initialize all weights ($w_i$) to 1 / n_samples
  • Train a weak classifier $h_t$ using the current weights
  • Estimate its training error $e_t$
  • Set $\alpha_t = \log\left(\frac{1-e_t}{e_t}\right)$
  • Update weights $$w_i^{t+1} = w_i^{t}e^{\left(\alpha_t \mathbf{I}\left(y_i \ne h_t(x_i)\right)\right)}$$
  • Repeat while $e_t<0.5$ and $t<T$ (a compact sketch of this loop follows below)
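
A minimal sketch of this loop (the name adaboost_fit and the variable names are illustrative, not part of the notebook). It assumes binary labels in {0, 1} and decision stumps as the weak learner, and it uses the weighted training error, whereas the step-by-step cells below use the plain misclassification rate:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=10):
    y = np.asarray(y)
    n = X.shape[0]
    w = np.ones(n) / n                      # initialize all weights to 1 / n_samples
    learners, alphas = [], []
    for t in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)    # train a weak classifier using the current weights
        miss = stump.predict(X) != y
        e = np.average(miss, weights=w)     # weighted training error
        if e >= 0.5:                        # stop once the weak learner is no better than chance
            break
        alpha = np.log((1 - e) / e)
        w = w * np.exp(alpha * miss)        # up-weight the misclassified examples
        w = w / w.sum()                     # normalize
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas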

In [33]:
# read in and prepare the churn data
# Download the dataset
import pandas as pd
import numpy as np

url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/churn.csv'
data = pd.read_csv(url)

# Create X and y

# Select only the numeric features
X = data.iloc[:, [1,2,6,7,8,9,10]].astype(float)
# Convert bools to floats
X = X.join((data.iloc[:, [4,5]] == 'no').astype(float))

y = (data.iloc[:, -1] == 'True.').astype(int)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
n_samples = X_train.shape[0]

In [34]:
n_estimators = 10
weights = pd.DataFrame(index=X_train.index, columns=list(range(n_estimators)))

In [35]:
t = 0
weights[t] = 1 / n_samples

Train the first weak classifier (a decision stump)


In [36]:
from sklearn.tree import DecisionTreeClassifier
trees = []
trees.append(DecisionTreeClassifier(max_depth=1))
trees[t].fit(X_train, y_train, sample_weight=weights[t].values)


Out[36]:
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=1,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best')

Estimate error


In [37]:
from sklearn import metrics

y_pred_ = trees[t].predict(X_train)
error = []
error.append(1 - metrics.accuracy_score(y_pred_, y_train))
error[t]


Out[37]:
0.13613972234661886

In [38]:
alpha = []
alpha.append(np.log((1 - error[t]) / error[t]))
alpha[t]


Out[38]:
1.8477293114995077
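
As a quick check, plugging the error above into the formula gives $\alpha_0 = \log\left(\frac{1 - 0.1361}{0.1361}\right) \approx 1.848$, matching the value returned.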

Update weights


In [39]:
weights[t + 1] = weights[t]
filter_ = y_pred_ != y_train

In [40]:
weights.loc[filter_, t + 1] = weights.loc[filter_, t] * np.exp(alpha[t])

Normalize weights


In [41]:
weights[t + 1] = weights[t + 1] / weights[t + 1].sum()

Iterations 2 through n_estimators


In [42]:
for t in range(1, n_estimators):
    trees.append(DecisionTreeClassifier(max_depth=1))
    trees[t].fit(X_train, y_train, sample_weight=weights[t].values)
    y_pred_ = trees[t].predict(X_train)
    error.append(1 - metrics.accuracy_score(y_pred_, y_train))
    alpha.append(np.log((1 - error[t]) / error[t]))
    weights[t + 1] = weights[t]
    filter_ = y_pred_ != y_train
    weights.loc[filter_, t + 1] = weights.loc[filter_, t] * np.exp(alpha[t])
    weights[t + 1] = weights[t + 1] / weights[t + 1].sum()

In [43]:
error


Out[43]:
[0.13613972234661886,
 0.15629198387819077,
 0.8437080161218092,
 0.8437080161218092,
 0.8437080161218092,
 0.8437080161218092,
 0.8437080161218092,
 0.8437080161218092,
 0.8437080161218092,
 0.8437080161218092]

Create the final classification

Use only the classifiers whose training error is below 0.5


In [44]:
new_n_estimators = np.sum([x<0.5 for x in error])

In [45]:
y_pred_all = np.zeros((X_test.shape[0], new_n_estimators))
for t in range(new_n_estimators):
    y_pred_all[:, t] = trees[t].predict(X_test)

In [46]:
y_pred = (np.sum(y_pred_all * alpha[:new_n_estimators], axis=1) >= 1).astype(int)  # weighted vote of the kept classifiers

In [47]:
metrics.f1_score(y_pred, y_test.values), metrics.accuracy_score(y_pred, y_test.values)


Out[47]:
(0.5105105105105104, 0.8518181818181818)

Using sklearn


In [48]:
from sklearn.ensemble import AdaBoostClassifier

In [49]:
clf = AdaBoostClassifier()
clf


Out[49]:
AdaBoostClassifier(algorithm='SAMME.R', base_estimator=None,
          learning_rate=1.0, n_estimators=50, random_state=None)

In [50]:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test.values), metrics.accuracy_score(y_pred, y_test.values)


Out[50]:
(0.29107981220657275, 0.8627272727272727)
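
The defaults can be adjusted. A minimal sketch with illustrative hyperparameter values only, assuming the older scikit-learn API shown in the output above, where the weak learner is passed as base_estimator (newer versions rename it estimator):

from sklearn.tree import DecisionTreeClassifier

clf = AdaBoostClassifier(
    base_estimator=DecisionTreeClassifier(max_depth=2),  # slightly deeper weak learner
    n_estimators=200,                                     # more boosting rounds
    learning_rate=0.5,                                    # shrink each learner's contribution
    random_state=42,
)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test.values), metrics.accuracy_score(y_pred, y_test.values)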

Gradient Boosting


In [51]:
from sklearn.ensemble import GradientBoostingClassifier

clf = GradientBoostingClassifier()
clf


Out[51]:
GradientBoostingClassifier(criterion='friedman_mse', init=None,
              learning_rate=0.1, loss='deviance', max_depth=3,
              max_features=None, max_leaf_nodes=None,
              min_impurity_decrease=0.0, min_impurity_split=None,
              min_samples_leaf=1, min_samples_split=2,
              min_weight_fraction_leaf=0.0, n_estimators=100,
              n_iter_no_change=None, presort='auto', random_state=None,
              subsample=1.0, tol=0.0001, validation_fraction=0.1,
              verbose=0, warm_start=False)

In [52]:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test.values), metrics.accuracy_score(y_pred, y_test.values)


Out[52]:
(0.5289256198347108, 0.8963636363636364)
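
The key hyperparameters here are the number of boosting rounds, the learning rate (shrinkage), and the depth of each tree. A minimal sketch with illustrative values only, using staged_predict to see how test accuracy evolves over the boosting rounds:

clf = GradientBoostingClassifier(
    n_estimators=200,      # more boosting rounds...
    learning_rate=0.05,    # ...with a smaller step size (shrinkage)
    max_depth=3,
    subsample=0.8,         # stochastic gradient boosting: each tree sees 80% of the rows
    random_state=42,
)
clf.fit(X_train, y_train)

# staged_predict yields the ensemble's prediction after each boosting round
test_accuracy = [metrics.accuracy_score(y_test, y_stage) for y_stage in clf.staged_predict(X_test)]
int(np.argmax(test_accuracy)) + 1, max(test_accuracy)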