The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Let's get started!
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
In [1]:
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
%matplotlib inline
We will be using the same LendingClub dataset as in the previous assignment.
In [2]:
loans = pd.read_csv('lending-club-data.csv')
loans.head(2)
Out[2]:
In [3]:
loans.columns
Out[3]:
We will now repeat some of the feature processing steps that we saw in the previous assignment:
First, we re-assign the target to have +1 as a safe (good) loan, and -1 as a risky (bad) loan.
Next, we select four categorical features:
In [4]:
features = ['grade',          # grade of the loan
            'term',           # the term of the loan
            'home_ownership', # home ownership status: own, mortgage or rent
            'emp_length',     # number of years of employment
           ]
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x == 0 else -1)
loans = loans.drop('bad_loans', axis=1)
target = 'safe_loans'
loans = loans[features + [target]]
In [5]:
print loans.shape
In this assignment, we will work with binary decision trees. Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding.
We can do so with the following code block (see the earlier assignments for more details):
In [6]:
categorical_variables = []
for feat_name, feat_type in zip(loans.columns, loans.dtypes):
    if feat_type == object:
        categorical_variables.append(feat_name)

for feature in categorical_variables:
    loans_one_hot_encoded = pd.get_dummies(loans[feature], prefix=feature)
    loans_one_hot_encoded = loans_one_hot_encoded.fillna(0)  # fillna returns a copy; assign it back
    loans = loans.drop(feature, axis=1)
    for col in loans_one_hot_encoded.columns:
        loans[col] = loans_one_hot_encoded[col]

print loans.head(2)
print loans.columns
The categorical features have now been expanded into binary columns. Next, we load the indices for the train/test split provided with the assignment:
In [7]:
with open('module-8-assignment-2-train-idx.json') as train_data_file:
    train_idx = json.load(train_data_file)
with open('module-8-assignment-2-test-idx.json') as test_data_file:
    test_idx = json.load(test_data_file)

print train_idx[:3]
print test_idx[:3]
In [8]:
print len(train_idx)
print len(test_idx)
In [9]:
train_data = loans.iloc[train_idx]
test_data = loans.iloc[test_idx]
In [10]:
print len(train_data.dtypes)
print len(loans.dtypes)
In [11]:
features = list(train_data.columns)
features.remove('safe_loans')
print list(train_data.columns)
print features
In [12]:
print len(features)
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Consider a model with $n$ data points with predictions $\hat{y}_1 \ldots \hat{y}_n$, targets $y_1 \ldots y_n$, and data point weights $\alpha_1 \ldots \alpha_n$.
Then the weighted error is defined by: $$ \mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} $$ where $1[y_i \neq \hat{y_i}]$ is an indicator function that is set to $1$ if $y_i \neq \hat{y_i}$.
Write a function that calculates the weight of mistakes for making the "weighted-majority" predictions for a dataset. The function accepts two inputs:
labels_in_node: Targets $y_1 \ldots y_n$
data_weights: Data point weights $\alpha_1 \ldots \alpha_n$
We are interested in computing the (total) weight of mistakes, i.e. $$ \mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]. $$ This quantity is analogous to the number of mistakes, except that each mistake now carries a different weight. It is related to the weighted error in the following way: $$ \mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i} $$
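As a concrete numeric illustration (a toy example of our own, not part of the assignment data): suppose three data points have labels $y = [+1, -1, +1]$, all predictions are $+1$, and the weights are $\alpha = [0.5, 2.0, 1.0]$. Only the second point is misclassified:
# Toy example: compute WM and E by hand.
y     = np.array([+1, -1, +1])      # true labels
y_hat = np.array([+1, +1, +1])      # predictions (all +1)
alpha = np.array([0.5, 2.0, 1.0])   # data point weights

WM = np.sum(alpha * (y != y_hat))   # 2.0 -- only the second point is wrong
E  = WM / np.sum(alpha)             # 2.0 / 3.5 ~= 0.571
print WM, E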
The function intermediate_node_weighted_mistakes should first compute two weights:
$\mathrm{WM}_{-1}$: weight of mistakes when all predictions are $\hat{y}_i = -1$, i.e. $\mathrm{WM}(\mathbf{\alpha}, \mathbf{-1})$
$\mathrm{WM}_{+1}$: weight of mistakes when all predictions are $\hat{y}_i = +1$, i.e. $\mathrm{WM}(\mathbf{\alpha}, \mathbf{+1})$
where $\mathbf{-1}$ and $\mathbf{+1}$ are vectors where all values are -1 and +1 respectively.
After computing $\mathrm{WM}_{-1}$ and $\mathrm{WM}_{+1}$, the function intermediate_node_weighted_mistakes should return the lower of the two weights of mistakes, along with the class associated with that weight. We have provided a skeleton for you, with YOUR CODE HERE to be filled in at several places.
In [94]:
def intermediate_node_weighted_mistakes(labels_in_node, data_weights):
    labels_in_node = np.array(labels_in_node)
    data_weights = np.array(data_weights)

    # Sum the weights of all entries with label +1
    total_weight_positive = np.sum(data_weights[labels_in_node == +1])
    # Weight of mistakes for predicting all -1's is equal to the sum above
    weighted_mistakes_all_negative = total_weight_positive

    # Sum the weights of all entries with label -1
    total_weight_negative = np.sum(data_weights[labels_in_node == -1])
    # Weight of mistakes for predicting all +1's is equal to the sum above
    weighted_mistakes_all_positive = total_weight_negative

    # Return the tuple (weight, class_label) representing the lower of the two weights.
    # class_label should be an integer of value +1 or -1.
    # If the two weights are identical, return (weighted_mistakes_all_positive, +1).
    if weighted_mistakes_all_positive <= weighted_mistakes_all_negative:
        return (weighted_mistakes_all_positive, +1)
    else:
        return (weighted_mistakes_all_negative, -1)
Checkpoint: To test your intermediate_node_weighted_mistakes function, run the following cell:
In [95]:
example_labels = np.array([-1, -1, 1, 1, 1])
example_data_weights = np.array([1., 2., .5, 1., 1.])
if intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):
    print 'Test passed!'
else:
    print 'Test failed... try again!'
Recall that the classification error is defined as follows: $$ \mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}} $$
Quiz Question: If we set the weights $\mathbf{\alpha} = 1$ for all data points, how is the weight of mistakes $\mbox{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ related to the classification error?
Answer: They are equal once normalized: with all weights set to 1, $\mbox{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ is exactly the number of mistakes, so dividing it by the number of data points gives the classification error.
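We can sanity-check this with the checkpoint arrays above (a quick sketch of our own, not part of the assignment):
# With unit weights, the weight of mistakes returned for the weighted-majority
# prediction is just the number of mistakes, so dividing by n gives the
# classification error of that majority prediction.
unit_weights = np.array([1.] * len(example_labels))
wm, majority_class = intermediate_node_weighted_mistakes(example_labels, unit_weights)
print wm / float(len(example_labels))   # 0.4 -- two mistakes out of five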
We continue modifying our decision tree code from the earlier assignment to incorporate weighting of individual data points. The next step is to pick the best feature to split on.
The best_splitting_feature function is similar to the one from the earlier assignment, with two minor modifications: it accepts an extra parameter data_weights to take account of the weights of data points, and it computes the weighted error (rather than the plain classification error) when evaluating each split. Complete the following function. Comments starting with DIFFERENT HERE mark the sections where the weighted version differs from the original implementation.
In [104]:
# If the data is identical in each feature, this function should return None
def best_splitting_feature(data, features, target, data_weights):
    # These variables will keep track of the best feature and the corresponding error
    best_feature = None
    best_error = float('+inf')
    num_points = float(len(data))

    # Attach the weights as a column so each split carries its weights along
    data['data_weights'] = data_weights

    # Loop through each feature to consider splitting on that feature
    for feature in features:
        # The left split will have all data points where the feature value is 0
        # The right split will have all data points where the feature value is 1
        left_split = data[data[feature] == 0]
        right_split = data[data[feature] == 1]

        # Apply the same filtering to data_weights to create left_data_weights, right_data_weights
        left_data_weights = left_split['data_weights']
        right_data_weights = right_split['data_weights']

        # DIFFERENT HERE
        # Calculate the weight of mistakes for the left and right sides
        left_weighted_mistakes, left_class = intermediate_node_weighted_mistakes(
            np.array(left_split[target]), np.array(left_data_weights))
        right_weighted_mistakes, right_class = intermediate_node_weighted_mistakes(
            np.array(right_split[target]), np.array(right_data_weights))

        # DIFFERENT HERE
        # Compute weighted error:
        # ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]
        error = (left_weighted_mistakes + right_weighted_mistakes) / np.sum(data_weights)

        # If this is the best error we have found so far, store the feature and the error
        if error < best_error:
            best_feature = feature
            best_error = error

    # Return the best feature we found
    return best_feature
Checkpoint: Now, we have another checkpoint to make sure you are on the right track.
In [105]:
example_data_weights = np.array(len(train_data) * [1.5])
if best_splitting_feature(train_data, features, target, example_data_weights) == 'term_ 36 months':
    print 'Test passed!'
else:
    print 'Test failed... try again!'
Note. If you get an exception along the lines of "the logical filter has different size than the array", try upgrading your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so $$ \mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}. $$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split: $$ \mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}] = \sum_{\mathrm{left}} \alpha_i \times 1[y_i \neq \hat{y_i}] + \sum_{\mathrm{right}} \alpha_i \times 1[y_i \neq \hat{y_i}] = \mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}}) $$ Dividing by the total weight of the data points then gives the weighted error: $$ \mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})}{\sum_{i=1}^{n} \alpha_i} $$
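We can verify this decomposition numerically with a toy split (our own numbers, reusing intermediate_node_weighted_mistakes from above):
# Toy check: the weighted error of a split equals the sum of the two
# sides' weights of mistakes divided by the total weight.
left_labels   = np.array([-1, -1, +1])
left_weights  = np.array([1., 2., .5])
right_labels  = np.array([+1, +1])
right_weights = np.array([1., 1.])

wm_left,  _ = intermediate_node_weighted_mistakes(left_labels, left_weights)   # returns (0.5, -1)
wm_right, _ = intermediate_node_weighted_mistakes(right_labels, right_weights) # returns (0.0, +1)
total_weight = np.sum(left_weights) + np.sum(right_weights)
print (wm_left + wm_right) / total_weight   # 0.5 / 5.5 ~= 0.091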
With the above functions implemented correctly, we are now ready to build our decision tree. Recall from the previous assignments that each node in the decision tree is represented as a dictionary which contains the following keys:
{
   'is_leaf'           : True/False.
   'prediction'        : Prediction at the leaf node.
   'left'              : (dictionary corresponding to the left tree).
   'right'             : (dictionary corresponding to the right tree).
   'splitting_feature' : The feature that this node splits on.
}
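For example, a depth-1 stump that splits on a single binary feature would be represented as follows (an illustrative dictionary of our own; 'grade_A' is just a sample feature name):
example_stump = {'is_leaf'           : False,
                 'prediction'        : None,
                 'splitting_feature' : 'grade_A',  # sample feature name
                 'left'  : {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
                 'right' : {'is_leaf': True, 'prediction': +1, 'splitting_feature': None}}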
Let us start with a function that creates a leaf node given a set of target values:
In [98]:
def create_leaf(target_values, data_weights):
    # Create a leaf node
    leaf = {'splitting_feature' : None,
            'is_leaf': True}

    # Compute the weight of mistakes and the weighted-majority class
    weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)

    # Store the predicted class (+1 or -1) in leaf['prediction']
    leaf['prediction'] = best_class
    return leaf
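As a quick sketch (our own check, reusing the checkpoint arrays defined earlier): the weighted-majority class for example_labels is -1 (total weight 3.0 for -1 versus 2.5 for +1), so the leaf should predict -1.
print create_leaf(example_labels, example_data_weights)
# {'splitting_feature': None, 'is_leaf': True, 'prediction': -1}  (key order may vary)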
We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions: (1) the weighted error is 0, (2) there are no more features to split on, and (3) the maximum depth is reached.
In [109]:
def weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):
    remaining_features = features[:] # Make a copy of the features.
    target_values = data[target]
    data['data_weights'] = data_weights
    print "--------------------------------------------------------------------"
    print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))

    # Stopping condition 1. Error is 0.
    if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:
        print "Stopping condition 1 reached."
        return create_leaf(target_values, data_weights)

    # Stopping condition 2. No more features.
    if remaining_features == []:
        print "Stopping condition 2 reached."
        return create_leaf(target_values, data_weights)

    # Additional stopping condition (limit tree depth)
    if current_depth > max_depth:
        print "Reached maximum depth. Stopping for now."
        return create_leaf(target_values, data_weights)

    # If all the data points are identical in every feature, splitting_feature
    # will be None. Create a leaf in that case.
    splitting_feature = best_splitting_feature(data, features, target, data_weights)
    if splitting_feature is None:
        return create_leaf(target_values, data_weights)
    remaining_features.remove(splitting_feature)

    left_split = data[data[splitting_feature] == 0]
    right_split = data[data[splitting_feature] == 1]

    # Each split carries its weights along in the 'data_weights' column
    left_data_weights = np.array(left_split['data_weights'])
    right_data_weights = np.array(right_split['data_weights'])

    print "Split on feature %s. (%s, %s)" % (\
        splitting_feature, len(left_split), len(right_split))

    # Create a leaf node if the split is "perfect"
    if len(left_split) == len(data):
        print "Creating leaf node."
        return create_leaf(left_split[target], data_weights)
    if len(right_split) == len(data):
        print "Creating leaf node."
        return create_leaf(right_split[target], data_weights)

    # Repeat (recurse) on left and right subtrees
    left_tree = weighted_decision_tree_create(
        left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)
    right_tree = weighted_decision_tree_create(
        right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)

    return {'is_leaf'          : False,
            'prediction'       : None,
            'splitting_feature': splitting_feature,
            'left'             : left_tree,
            'right'            : right_tree}
Here is a recursive function to count the nodes in your tree:
In [110]:
def count_nodes(tree):
    if tree['is_leaf']:
        return 1
    return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
In [111]:
example_data_weights = np.array([1.0 for i in range(len(train_data))])
small_data_decision_tree = weighted_decision_tree_create(train_data, features, target,
                                                         example_data_weights, max_depth=2)
if count_nodes(small_data_decision_tree) == 7:
    print 'Test passed!'
else:
    print 'Test failed... try again!'
    print 'Number of nodes found:', count_nodes(small_data_decision_tree)
    print 'Number of nodes that should be there: 7'
Let us take a quick look at what the trained tree is like. You should get something that looks like the following:
{'is_leaf': False,
 'left': {'is_leaf': False,
          'left': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
          'prediction': None,
          'right': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
          'splitting_feature': 'grade_A'
         },
 'prediction': None,
 'right': {'is_leaf': False,
           'left': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
           'prediction': None,
           'right': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
           'splitting_feature': 'grade_D'
          },
 'splitting_feature': 'term_ 36 months'
}
In [112]:
small_data_decision_tree
Out[112]:
We give you a function that classifies one data point. Pass annotate=True if you want to see the splits it takes along the way.
In [113]:
def classify(tree, x, annotate = False):
    # If the node is a leaf node.
    if tree['is_leaf']:
        if annotate:
            print "At leaf, predicting %s" % tree['prediction']
        return tree['prediction']
    else:
        # Split on feature.
        split_feature_value = x[tree['splitting_feature']]
        if annotate:
            print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
        if split_feature_value == 0:
            return classify(tree['left'], x, annotate)
        else:
            return classify(tree['right'], x, annotate)
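For example, we can trace the prediction path of the first test data point through the small tree trained above (a quick sketch of our own; the exact path depends on your tree):
print 'Predicted class: %s' % classify(small_data_decision_tree, test_data.iloc[0])
# Pass annotate=True to print every split along the way:
classify(small_data_decision_tree, test_data.iloc[0], annotate=True)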
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows: $$ \mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}} $$
The function called evaluate_classification_error takes in as input:
tree (as described above)
data (a DataFrame)
The function does not change because of adding data point weights.
In [118]:
def evaluate_classification_error(tree, data):
    # Apply classify(tree, x) to each row in the data
    prediction = data.apply(lambda x: classify(tree, x), axis=1)

    # Once you've made the predictions, calculate the classification error
    return (data[target] != np.array(prediction)).values.sum() / float(len(data))
In [119]:
evaluate_classification_error(small_data_decision_tree, test_data)
Out[119]:
In [120]:
evaluate_classification_error(small_data_decision_tree, train_data)
Out[120]:
To build intuition on how weighted data points affect the tree being built, consider the following:
Suppose we only care about making good predictions for the first 10 and last 10 items in train_data. We assign weights of 1 to those 20 points and weights of 0 to all other points.
Let us fit a weighted decision tree with max_depth = 2.
In [121]:
# Assign weights
example_data_weights = np.array([1.] * 10 + [0.] * (len(train_data) - 20) + [1.] * 10)

# Train a weighted decision tree model.
small_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,
                                                                   example_data_weights, max_depth=2)
Now, we will compute the classification error on subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
In [122]:
subset_20 = train_data.head(10).append(train_data.tail(10))
evaluate_classification_error(small_data_decision_tree_subset_20, subset_20)
Out[122]:
Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire training set train_data:
In [123]:
evaluate_classification_error(small_data_decision_tree_subset_20, train_data)
Out[123]:
The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
Quiz Question: Will you get the same model as small_data_decision_tree_subset_20 if you trained a decision tree with only the 20 data points with non-zero weights from the set of points in subset_20?
Answer: Yes. Data points with zero weight contribute nothing to the weighted error, so they have no influence on which splits are chosen. We verify this below.
In [124]:
# Assign unit weights to the 20 points in subset_20
sth_example_data_weights = np.array([1.] * 20)

# Train a weighted decision tree model on subset_20 only.
sth_test_model = weighted_decision_tree_create(subset_20, features, target,
                                               sth_example_data_weights, max_depth=2)
In [125]:
small_data_decision_tree_subset_20
Out[125]:
In [126]:
sth_test_model
Out[126]:
Now that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.
Recall from the lecture the procedure for Adaboost:
1. Start with unweighted data with $\alpha_j = 1$
2. For t = 1,...,T:
  - Learn $f_t(x)$ with data weights $\alpha_j$
  - Compute coefficient $\hat{w}_t$ from the weighted error
  - Recompute and normalize the data weights $\alpha_j$
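Concretely, the quantities computed inside the loop are the standard Adaboost updates (restated here to match the code below): the weighted error of the stump $$ \mbox{err}_t = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq f_t(\mathbf{x}_i)]}{\sum_{i=1}^{n} \alpha_i}, $$ the stump coefficient $$ \hat{w}_t = \frac{1}{2}\ln\left(\frac{1 - \mbox{err}_t}{\mbox{err}_t}\right), $$ and the data weight update $$ \alpha_i \leftarrow \begin{cases} \alpha_i e^{-\hat{w}_t} & \text{if } f_t(\mathbf{x}_i) = y_i \\ \alpha_i e^{\hat{w}_t} & \text{if } f_t(\mathbf{x}_i) \neq y_i \end{cases} $$ followed by normalizing the $\alpha_i$ so they sum to 1.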
Complete the skeleton for the following code to implement adaboost_with_tree_stumps. Fill in the places marked YOUR CODE HERE.
In [137]:
from math import log
from math import exp

def adaboost_with_tree_stumps(data, features, target, num_tree_stumps):
    # Start with unweighted data
    alpha = np.array([1.] * len(data))
    weights = []
    tree_stumps = []
    target_values = data[target]

    for t in xrange(num_tree_stumps):
        print '====================================================='
        print 'Adaboost Iteration %d' % t
        print '====================================================='

        # Learn a weighted decision tree stump. Use max_depth=1.
        tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)
        tree_stumps.append(tree_stump)

        # Make predictions
        predictions = data.apply(lambda x: classify(tree_stump, x), axis=1)

        # Produce a Boolean array indicating whether
        # each data point was correctly classified
        is_correct = predictions == target_values
        is_wrong   = predictions != target_values

        # Compute weighted error
        weighted_error = np.sum(np.array(is_wrong) * alpha) / np.sum(alpha)

        # Compute model coefficient using weighted error
        weight = 0.5 * log((1 - weighted_error) / weighted_error)
        weights.append(weight)

        # Adjust weights on data points: scale down the weights of correctly
        # classified points, scale up the weights of misclassified points
        adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))

        # Scale alpha by the adjustment, then normalize the data point weights
        alpha = alpha * np.array(adjustment)
        alpha = alpha / np.sum(alpha)

    return weights, tree_stumps
In [138]:
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)
In [141]:
def print_stump(tree):
    split_name = tree['splitting_feature'] # split_name is something like 'term_ 36 months'
    if split_name is None:
        print "(leaf, label: %s)" % tree['prediction']
        return None
    print '                       root'
    print '         |---------------|----------------|'
    print '         |                                |'
    print '         |                                |'
    print '         |                                |'
    print '  [{0} == 0]{1}[{0} == 1]    '.format(split_name, ' '*(27-len(split_name)))
    print '         |                                |'
    print '         |                                |'
    print '         |                                |'
    print '    (%s)                 (%s)' \
        % (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
           ('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
Here is what the first stump looks like:
In [142]:
print_stump(tree_stumps[0])
Here is what the next stump looks like:
In [143]:
print_stump(tree_stumps[1])
In [144]:
print stump_weights
If your Adaboost is correctly implemented, the following things should be true:
tree_stumps[0] should split on term_ 36 months, with the prediction -1 on the left and +1 on the right.
tree_stumps[1] should split on grade_A, with the prediction -1 on the left and +1 on the right.
stump_weights should be approximately [0.158, 0.177].
Reminders: Stump weights ($\mathbf{\hat{w}}$) and data point weights ($\mathbf{\alpha}$) are two different concepts. Stump weights tell you how important each stump is while making predictions with the entire boosted ensemble; data point weights tell you how important each data point is while training a decision stump.
Let us train an ensemble of 10 decision tree stumps with Adaboost. We run the adaboost_with_tree_stumps function with the following parameters:
train_data
features
target
num_tree_stumps = 10
In [145]:
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features,
target, num_tree_stumps=10)
Recall from the lecture that in order to make predictions, we use the following formula: $$ \hat{y} = \mbox{sign}\left(\sum_{t=1}^T \hat{w}_t f_t(x)\right) $$
We need to do the following things:
Compute the predictions $f_t(x)$ using the $t$-th decision tree stump.
Compute the weighted sum of the predictions, weighting each $f_t(x)$ by its stump weight $\hat{w}_t$ from stump_weights.
Return the sign of the weighted sum.
Complete the following skeleton for making predictions:
In [164]:
def predict_adaboost(stump_weights, tree_stumps, data):
    scores = np.array([0.] * len(data))

    for i, tree_stump in enumerate(tree_stumps):
        predictions = data.apply(lambda x: classify(tree_stump, x), axis=1)

        # Accumulate the weighted predictions on the scores array
        scores = scores + stump_weights[i] * np.array(predictions)

    # Return the sign of the scores (+1 if positive, -1 otherwise)
    return np.array(1 * (scores > 0) + (-1) * (scores <= 0))
In [167]:
traindata_predictions = predict_adaboost(stump_weights, tree_stumps, train_data)
train_accuracy = np.sum(np.array(train_data[target]) == traindata_predictions) / float(len(traindata_predictions))
print 'Training data accuracy of 10-component ensemble = %s' % train_accuracy
In [165]:
predictions = predict_adaboost(stump_weights, tree_stumps, test_data)
accuracy = np.sum(np.array(test_data[target]) == predictions) / float(len(predictions))
print 'Test data accuracy of 10-component ensemble = %s' % accuracy
Now, let us take a quick look at what the stump_weights look like at the end of each iteration of the 10-stump ensemble:
In [166]:
stump_weights
Out[166]:
In [148]:
plt.plot(stump_weights)
plt.show()
Quiz Question: Are the weights monotonically decreasing, monotonically increasing, or neither?
Answer: Neither.
Reminder: Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
In this section, we will try to reproduce some of the performance plots discussed in the lecture.
We will now train an ensemble with:
train_data
features
target
num_tree_stumps = 30
Once we are done with this, we will compute the classification error at the end of each iteration and plot how it evolves.
First, let's train the model.
In [168]:
# this may take a while...
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data,
features, target, num_tree_stumps=30)
In [169]:
error_all = []
for n in xrange(1, 31):
    predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)
    error = np.sum(np.array(train_data[target]) != predictions) / float(len(predictions))
    error_all.append(error)
    print "Iteration %s, training error = %s" % (n, error_all[n-1])
In [170]:
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size': 16})
Quiz Question: Which of the following best describes a general trend in accuracy as we add more and more components? Answer based on the 30 components learned so far.
Performing well on the training data is cheating, so let's make sure the model works well on test_data as well. Here, we will compute the classification error on test_data at the end of each iteration.
In [171]:
test_error_all = []
for n in xrange(1, 31):
    predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)
    error = np.sum(np.array(test_data[target]) != predictions) / float(len(predictions))
    test_error_all.append(error)
    print "Iteration %s, test error = %s" % (n, test_error_all[n-1])
In [172]:
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.rcParams.update({'font.size': 16})
plt.legend(loc='best', prop={'size':15})
plt.tight_layout()
Quiz Question: From this plot (with 30 trees), is there massive overfitting as the # of iterations increases?
Answer: No.