In this tutorial, we show how to build a well-tuned H2O GBM model for a supervised classification task. We specifically don't focus on feature engineering and use a small dataset to allow you to reproduce these results in a few minutes on a laptop. This script can be directly transferred to datasets that are hundreds of GBs large and H2O clusters with dozens of compute nodes.
You can download the source from H2O's GitHub repository.
Ports to R Markdown and Flow UI (now part of Example Flows) are available as well.
Either download H2O from H2O.ai's website or install the latest version of H2O into Python with the following set of commands:
Install dependencies from the command line (prepending with sudo if needed):
[sudo] pip install -U requests
[sudo] pip install -U tabulate
[sudo] pip install -U future
[sudo] pip install -U six
The following command removes the H2O module for Python.
[sudo] pip uninstall h2o
Next, use pip to install this version of the H2O Python module.
[sudo] pip install http://h2o-release.s3.amazonaws.com/h2o/rel-zahradnik/3/Python/h2o-3.30.0.3-py2.py3-none-any.whl
In [1]:
import h2o
import numpy as np
import math
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.grid.grid_search import H2OGridSearch
h2o.init(nthreads=-1, strict_version_check=True)
## optional: connect to a running H2O cluster
#h2o.init(ip="mycluster", port=55555)
Everything is scalable and distributed from now on. All processing is done on the fully multi-threaded and distributed H2O Java-based backend and can be scaled to large datasets on large compute clusters. Here, we use a small public dataset (Titanic), but you can use datasets that are hundreds of GBs large.
In [2]:
## 'path' can point to a local file, hdfs, s3, nfs, Hive, directories, etc.
df = h2o.import_file(path = "http://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv")
print(df.dim)
print(df.head())
print(df.tail())
df.describe()
## pick a response for the supervised problem
response = "survived"
## the response variable is an integer, we will turn it into a categorical/factor for binary classification
df[response] = df[response].asfactor()
## use all other columns (except for the name & the response column ("survived")) as predictors
predictors = df.columns
del predictors[1:3]
print(predictors)
From now on, everything is generic and directly applies to most datasets. We assume that all feature engineering is done at this stage and focus on model tuning. For multi-class problems, you can use h2o.logloss() or h2o.confusion_matrix() instead of h2o.auc(), and for regression problems, you can use h2o.mean_residual_deviance() or h2o.mse().
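For reference, here is a hedged sketch of how those metrics are read off a performance object in the Python API (the method names below are the current H2O Python equivalents; availability depends on the problem type, and some_model stands in for any trained model):
## illustrative only, not a cell to run at this point in the tutorial
# perf = some_model.model_performance(valid)
# perf.auc()                     # binary classification
# perf.logloss()                 # classification (binary or multi-class)
# perf.confusion_matrix()        # classification
# perf.mse()                     # any problem type
# perf.mean_residual_deviance()  # regression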
We split the data into three pieces: 60% for training, 20% for validation, 20% for final testing. Here, we use random splitting, but this assumes i.i.d. data. If this is not the case (e.g., when events span across multiple rows or data has a time structure), you'll have to sample your data non-randomly.
In [3]:
train, valid, test = df.split_frame(
ratios=[0.6,0.2],
seed=1234,
destination_frames=['train.hex','valid.hex','test.hex']
)
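If your data did have a time structure, a non-random split along that structure would be the safer choice. Here is a hedged sketch, assuming a hypothetical numeric year column (the Titanic frame has no such column):
## illustrative only -- "year" is a hypothetical column, not part of the Titanic data
# train = df[df["year"] <  1910, :]
# valid = df[(df["year"] >= 1910) & (df["year"] < 1911), :]
# test  = df[df["year"] >= 1911, :]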
As the first step, we'll build some default models to see what accuracy we can expect. Let's use the AUC metric for this demo, but you can use h2o.logloss() and stopping_metric="logloss" as well. AUC ranges from 0.5 for random models to 1 for perfect models.
The first model is a default GBM, trained on the 60% training split.
In [4]:
#We only provide the required parameters, everything else is default
gbm = H2OGradientBoostingEstimator()
gbm.train(x=predictors, y=response, training_frame=train)
## Show a detailed model summary
print(gbm)
In [5]:
## Get the AUC on the validation set
perf = gbm.model_performance(valid)
print(perf.auc())
The AUC is 95%, so this model is highly predictive!
The second model is another default GBM, but trained on 80% of the data (here, we combine the training and validation splits to get more training data), and cross-validated using 4 folds. Note that cross-validation takes longer and is not usually done for really large datasets.
In [6]:
## rbind() makes a copy here, so it's better to use split_frame with `ratios = c(0.8)` instead above
cv_gbm = H2OGradientBoostingEstimator(nfolds = 4, seed = 0xDECAF)
cv_gbm.train(x = predictors, y = response, training_frame = train.rbind(valid))
We see that the cross-validated performance is similar to the validation set performance:
In [7]:
## Show a detailed summary of the cross validation metrics
## This gives you an idea of the variance between the folds
cv_summary = cv_gbm.cross_validation_metrics_summary().as_data_frame()
#print(cv_summary) ## Full summary of all metrics
#print(cv_summary.iloc[4]) ## get the row with just the AUCs
## Get the cross-validated AUC by scoring the combined holdout predictions.
## (Instead of taking the average of the metrics across the folds)
perf_cv = cv_gbm.model_performance(xval=True)
print(perf_cv.auc())
Next, we train a GBM with "I feel lucky" parameters. We'll use early stopping to automatically tune the number of trees using the validation AUC. We'll use a lower learning rate (lower is always better, just takes more trees to converge). We'll also use stochastic sampling of rows and columns to (hopefully) improve generalization.
In [8]:
gbm_lucky = H2OGradientBoostingEstimator(
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees = 10000,
## smaller learning rate is better (this is a good value for most datasets, but see below for annealing)
learn_rate = 0.01,
## early stopping once the validation AUC doesn't improve by at least 0.01% for 5 consecutive scoring events
stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = "AUC",
## sample 80% of rows per tree
sample_rate = 0.8,
## sample 80% of columns per split
col_sample_rate = 0.8,
## fix a random number generator seed for reproducibility
seed = 1234,
## score every 10 trees to make early stopping reproducible (it depends on the scoring interval)
score_tree_interval = 10)
gbm_lucky.train(x=predictors, y=response, training_frame=train, validation_frame=valid)
This model doesn't seem to be better than the previous models:
In [9]:
perf_lucky = gbm_lucky.model_performance(valid)
print(perf_lucky.auc())
For this small dataset, dropping 20% of observations per tree seems too aggressive a form of regularization. For larger datasets, this is usually not a bad idea. But we'll let this parameter be tuned properly in the grid search below, so no worries.
Next, we'll do real hyper-parameter optimization to see if we can beat the best AUC so far (around 94%).
The key here is to start with the parameters we expect to have the biggest impact on the results. From experience with gradient boosted trees across many datasets, we can state the following "rules":
- Build as many trees (ntrees) as it takes until the validation set error starts increasing.
- A lower learning rate (learn_rate) is generally better, but will require more trees. Using learn_rate=0.02 and learn_rate_annealing=0.995 (reduction of learning rate with each additional tree) can help speed up convergence without sacrificing accuracy too much, and is great for hyper-parameter searches. For faster scans, use values of 0.05 and 0.99 instead.
- The optimal maximum tree depth (max_depth) is data dependent; deeper trees take longer to train, especially at depths greater than 10.
- Row and column sampling (sample_rate and col_sample_rate) can improve generalization and lead to lower validation and test set errors. Good general values for large datasets are around 0.7 to 0.8 (sampling 70-80 percent of the data) for both parameters. Column sampling per tree (col_sample_rate_per_tree) can also be tuned. Note that it is multiplicative with col_sample_rate, so setting both parameters to 0.8 results in about 64% of columns being considered at any given split (see the arithmetic sketch after this list).
- For imbalanced classification problems, row sampling can also be stratified by response class with sample_rate_per_class (an array of ratios, one per response class in lexicographic order).
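To make the multiplicative effect of the two column-sampling parameters concrete, here is a tiny arithmetic sketch (plain Python, no H2O call involved):
col_sample_rate = 0.8           # fraction of columns sampled per split
col_sample_rate_per_tree = 0.8  # fraction of columns sampled per tree
print(col_sample_rate * col_sample_rate_per_tree)  # about 0.64, i.e. ~64% of columns considered per split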
First we want to know what value of max_depth to use, because it has a big impact on the model training time and optimal values depend strongly on the dataset.
We'll do a quick Cartesian grid search to get a rough idea of good candidate max_depth
values. Each model in the grid search will use early stopping to tune the number of trees using the validation set AUC, as before.
We'll use learning rate annealing to speed up convergence without sacrificing too much accuracy.
In [10]:
## Depth 10 is usually plenty of depth for most datasets, but you never know
hyper_params = {'max_depth' : list(range(1,30,2))}
#hyper_params = {'max_depth' : [4,6,8,12,16,20]} ##faster for larger datasets
#Build initial GBM Model
gbm_grid = H2OGradientBoostingEstimator(
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees=10000,
## smaller learning rate is better
## since we have learning_rate_annealing, we can afford to start with a
#bigger learning rate
learn_rate=0.05,
## learning rate annealing: learning_rate shrinks by 1% after every tree
## (use 1.00 to disable, but then lower the learning_rate)
learn_rate_annealing = 0.99,
## sample 80% of rows per tree
sample_rate = 0.8,
## sample 80% of columns per split
col_sample_rate = 0.8,
## fix a random number generator seed for reproducibility
seed = 1234,
## score every 10 trees to make early stopping reproducible
#(it depends on the scoring interval)
score_tree_interval = 10,
## early stopping once the validation AUC doesn't improve by at least 0.01% for
#5 consecutive scoring events
stopping_rounds = 5,
stopping_metric = "AUC",
stopping_tolerance = 1e-4)
#Build grid search with previously made GBM and hyper parameters
grid = H2OGridSearch(gbm_grid,hyper_params,
grid_id = 'depth_grid',
search_criteria = {'strategy': "Cartesian"})
#Train grid search
grid.train(x=predictors,
y=response,
training_frame = train,
validation_frame = valid)
In [11]:
## by default, display the grid search results sorted by increasing logloss (since this is a classification task)
print(grid)
In [12]:
## sort the grid models by decreasing AUC
sorted_grid = grid.get_grid(sort_by='auc',decreasing=True)
print(sorted_grid)
It appears that max_depth values of 5 to 13 are best suited for this dataset, which is unusually deep!
In [13]:
max_depths = sorted_grid.sorted_metric_table()['max_depth'][0:5]
new_max = int(max(max_depths, key=int))
new_min = int(min(max_depths, key=int))
print("MaxDepth", new_max)
print("MinDepth", new_min)
Now that we know a good range for max_depth, we can tune all other parameters in more detail. Since we don't know what combinations of hyper-parameters will result in the best model, we'll use random hyper-parameter search to "let the machine get luckier than a best guess of any human".
In [14]:
# create hyper-parameter and search criteria lists (ranges are inclusive..exclusive)
hyper_params_tune = {'max_depth' : list(range(new_min,new_max+1,1)),
'sample_rate': [x/100. for x in range(20,101)],
'col_sample_rate' : [x/100. for x in range(20,101)],
'col_sample_rate_per_tree': [x/100. for x in range(20,101)],
'col_sample_rate_change_per_level': [x/100. for x in range(90,111)],
'min_rows': [2**x for x in range(0,int(math.log(train.nrow,2)-1)+1)],
'nbins': [2**x for x in range(4,11)],
'nbins_cats': [2**x for x in range(4,13)],
'min_split_improvement': [0,1e-8,1e-6,1e-4],
'histogram_type': ["UniformAdaptive","QuantilesGlobal","RoundRobin"]}
search_criteria_tune = {'strategy': "RandomDiscrete",
'max_runtime_secs': 3600, ## limit the runtime to 60 minutes
'max_models': 100, ## build no more than 100 models
'seed' : 1234,
'stopping_rounds' : 5,
'stopping_metric' : "AUC",
'stopping_tolerance': 1e-3
}
In [15]:
gbm_final_grid = H2OGradientBoostingEstimator(distribution='bernoulli',
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees=10000,
## smaller learning rate is better
## since we have learning_rate_annealing, we can afford to start with a
#bigger learning rate
learn_rate=0.05,
## learning rate annealing: learning_rate shrinks by 1% after every tree
## (use 1.00 to disable, but then lower the learning_rate)
learn_rate_annealing = 0.99,
## score every 10 trees to make early stopping reproducible
#(it depends on the scoring interval)
score_tree_interval = 10,
## fix a random number generator seed for reproducibility
seed = 1234,
## early stopping once the validation AUC doesn't improve by at least 0.01% for
#5 consecutive scoring events
stopping_rounds = 5,
stopping_metric = "AUC",
stopping_tolerance = 1e-4)
#Build grid search with previously made GBM and hyper parameters
final_grid = H2OGridSearch(gbm_final_grid, hyper_params = hyper_params_tune,
grid_id = 'final_grid',
search_criteria = search_criteria_tune)
#Train grid search
final_grid.train(x=predictors,
y=response,
## early stopping based on timeout (no model should take more than 1 hour - modify as needed)
max_runtime_secs = 3600,
training_frame = train,
validation_frame = valid)
print(final_grid)
We can see that the best models have even better validation AUCs than our previous best models, so the random grid search was successful!
In [16]:
## Sort the grid models by AUC
sorted_final_grid = final_grid.get_grid(sort_by='auc',decreasing=True)
print(sorted_final_grid)
You can also see the results of the grid search in Flow:
In [17]:
#Get the best model from the list (the model name listed at the top of the table)
best_model = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
performance_best_model = best_model.model_performance(test)
print(performance_best_model.auc())
Good news: it does as well on the test set as on the validation set, so our best GBM model appears to generalize well to the unseen test set.
We can inspect the winning model's parameters:
In [18]:
params_list = []
for key, value in best_model.params.items():
    params_list.append(str(key) + " = " + str(value['actual']))
params_list
Out[18]:
Now we can confirm that these parameters are generally sound, by building a GBM model on the whole dataset (instead of the 60%) and using internal 5-fold cross-validation (re-using all other parameters including the seed):
In [19]:
gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
#get the parameters from the Random grid search model and modify them slightly
params = gbm.params
new_params = {"nfolds":5, "model_id":None, "training_frame":None, "validation_frame":None,
"response_column":None, "ignored_columns":None}
for key in new_params.keys():
    params[key]['actual'] = new_params[key]
gbm_best = H2OGradientBoostingEstimator()
for key in params.keys():
    if key in dir(gbm_best) and getattr(gbm_best, key) != params[key]['actual']:
        setattr(gbm_best, key, params[key]['actual'])
In [20]:
gbm_best.train(x=predictors, y=response, training_frame=df)
In [21]:
print(gbm_best.cross_validation_metrics_summary())
It looks like the winning model performs slightly better on the validation and test sets than during cross-validation on the training set: the mean AUC across the 5 folds is estimated to be only 97.4%, with a fairly large standard deviation of 0.9%. For small datasets, such a large variance is not unusual. To get a better estimate of model performance, the Random hyper-parameter search could have used nfolds=5 (or 10, or similar) in combination with 80% of the data for training (i.e., not holding out a validation set, but only the final test set). However, this would take more time, as nfolds+1 models are built for every set of parameters.
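A hedged sketch of that alternative setup (not run in this tutorial): keep the same estimator settings and search criteria, but train on the combined training and validation data with 5-fold cross-validation instead of a validation frame.
## sketch only -- builds nfolds+1 models per parameter set, so it takes considerably longer
# cv_search_gbm = H2OGradientBoostingEstimator(
#     ntrees = 10000, learn_rate = 0.05, learn_rate_annealing = 0.99,
#     nfolds = 5, seed = 1234, score_tree_interval = 10,
#     stopping_rounds = 5, stopping_metric = "AUC", stopping_tolerance = 1e-4)
# cv_final_grid = H2OGridSearch(cv_search_gbm, hyper_params = hyper_params_tune,
#                               grid_id = 'cv_final_grid',
#                               search_criteria = search_criteria_tune)
# cv_final_grid.train(x = predictors, y = response, training_frame = train.rbind(valid))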
Instead, to save time, let's just scan through the top 5 models and cross-validate their parameters with nfolds=5 on the entire dataset:
In [22]:
for i in range(5):
    gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    #get the parameters from the Random grid search model and modify them slightly
    params = gbm.params
    new_params = {"nfolds":5, "model_id":None, "training_frame":None, "validation_frame":None,
                  "response_column":None, "ignored_columns":None}
    for key in new_params.keys():
        params[key]['actual'] = new_params[key]
    new_model = H2OGradientBoostingEstimator()
    for key in params.keys():
        if key in dir(new_model) and getattr(new_model, key) != params[key]['actual']:
            setattr(new_model, key, params[key]['actual'])
    new_model.train(x = predictors, y = response, training_frame = df)
    cv_summary = new_model.cross_validation_metrics_summary().as_data_frame()
    print(gbm.model_id)
    print(cv_summary.iloc[1]) ## AUC
The avid reader might have noticed that we just implicitly did further parameter tuning using the "final" test set (which is part of the entire dataset df), which is not good practice: one is not supposed to use the "final" test set more than once. Hence, we're not going to pick a different "best" model; we're just learning about the variance in AUCs. It turns out that, for this tiny dataset, the variance is rather large, which is not surprising.
Keeping the same "best" model, we can make test set predictions as follows:
In [23]:
preds = best_model.predict(test)
preds.head()
Out[23]:
Note that the label (survived or not) is predicted as well (in the first predict column), and it uses the threshold with the highest F1 score (here: 0.528098) to make labels from the probabilities for survival (p1). The probability for death (p0) is given for convenience, as it is just 1-p1.
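As a quick illustration of how labels are derived from p1, here is a hedged sketch that re-applies a cutoff in pandas; the 0.5 threshold is an arbitrary stand-in for the F1-optimal threshold H2O chose above:
## bring p1 into pandas and threshold it manually (0.5 is illustrative, not the F1-optimal cutoff)
p1 = preds["p1"].as_data_frame()["p1"]
manual_labels = (p1 > 0.5).astype(int)
print(manual_labels.head())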
In [24]:
best_model.model_performance(valid)
Out[24]:
In [25]:
# Key of best model:
best_model.key
Out[25]:
You can also see the "best" model in more detail in Flow:
The model and the predictions can be saved to file as follows:
In [26]:
# uncomment if you want to export the best model
# h2o.save_model(best_model, "/tmp/bestModel.csv", force=True)
# h2o.export_file(preds, "/tmp/bestPreds.csv", force=True)
In [27]:
# print pojo to screen, or provide path to download location
# h2o.download_pojo(best_model)
The model can also be exported as a plain old Java object (POJO) for H2O-independent (standalone/Storm/Kafka/UDF) scoring in any Java environment.
/*
Licensed under the Apache License, Version 2.0
http://www.apache.org/licenses/LICENSE-2.0.html
AUTOGENERATED BY H2O at 2016-07-17T18:38:50.337-07:00
3.8.3.3
Standalone prediction code with sample test data for GBMModel named final_grid_model_45
How to download, compile and execute:
mkdir tmpdir
cd tmpdir
curl http://127.0.0.1:54321/3/h2o-genmodel.jar > h2o-genmodel.jar
curl http://127.0.0.1:54321/3/Models.java/final_grid_model_45 > final_grid_model_45.java
javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m final_grid_model_45.java
(Note: Try java argument -XX:+PrintCompilation to show runtime JIT compiler behavior.)
*/
import java.util.Map;
import hex.genmodel.GenModel;
import hex.genmodel.annotations.ModelPojo;
...
class final_grid_model_45_Tree_0_class_0 {
static final double score0(double[] data) {
double pred = (Double.isNaN(data[1]) || !GenModel.bitSetContains(GRPSPLIT0, 0, data[1 /* sex */]) ?
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT1, 13, data[7 /* cabin */]) ?
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT2, 9, data[7 /* cabin */]) ?
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT3, 9, data[7 /* cabin */]) ?
(data[2 /* age */] <1.4174492f ?
0.13087687f :
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT4, 9, data[7 /* cabin */]) ?
(Double.isNaN(data[3]) || data[3 /* sibsp */] <1.000313f ?
(data[6 /* fare */] <7.91251f ?
(Double.isNaN(data[5]) || data[5 /* ticket */] <368744.5f ?
-0.08224204f :
(Double.isNaN(data[2]) || data[2 /* age */] <13.0f ?
-0.028962314f :
-0.08224204f)) :
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT5, 9, data[7 /* cabin */]) ?
(data[6 /* fare */] <7.989957f ?
(Double.isNaN(data[3]) || data[3 /* sibsp */] <0.0017434144f ?
0.07759714f :
0.13087687f) :
(data[6 /* fare */] <12.546303f ?
-0.07371729f :
(Double.isNaN(data[4]) || data[4 /* parch */] <1.0020853f ?
-0.037374903f :
-0.08224204f))) :
0.0f)) :
-0.08224204f) :
0.0f)) :
0.0f) :
-0.08224204f) :
-0.08224204f) :
...
After learning above that the variance of the test set AUC of the top few models was rather large, we might be able to turn this to our advantage by using ensembling techniques. The simplest one is taking the average of the predictions (survival probabilities) of the top k grid search models (here, we use k=10):
In [28]:
prob = None
k = 10
for i in range(0, k):
    gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    if prob is None:
        prob = gbm.predict(test)["p1"]
    else:
        prob = prob + gbm.predict(test)["p1"]
prob = prob / k
We now have a blended probability of survival for each person on the Titanic.
In [29]:
prob.head()
Out[29]:
We can bring those ensemble predictions to our Python session's memory space and use other Python packages.
In [30]:
from sklearn.metrics import roc_auc_score
# convert prob and test[response] H2OFrames to pandas DataFrames, then to numpy arrays
prob_in_py = prob.as_data_frame().values
label_in_py = test[response].as_data_frame().values
# compare true labels (test[response]) to predicted probabilities (prob)
roc_auc_score(label_in_py, prob_in_py)
Out[30]:
This simple blended ensemble's test set prediction has an even higher AUC than the best single model, but we would need to do more validation studies, ideally using cross-validation. We leave this as an exercise for the reader: take the parameters of the top 10 models, retrain them with nfolds=5 on the full dataset, set keep_cross_validation_predictions=True, sum up their predicted probabilities, and then score that blend with sklearn's roc_auc_score as shown above.
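For readers who want a starting point, here is a hedged sketch of that exercise. It reuses the parameter-copying loop from above, relies on keep_cross_validation_predictions and cross_validation_holdout_predictions() to collect out-of-fold probabilities, and scores the blend on the full frame (expect a long runtime, since nfolds+1 models are built per parameter set):
## hedged sketch of the suggested exercise -- not run in this tutorial
from sklearn.metrics import roc_auc_score
blend = None
top_k = 10
for i in range(top_k):
    top_model = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    params = top_model.params
    ## drop frame/column bindings and switch to 5-fold CV with kept holdout predictions
    overrides = {"nfolds": 5, "keep_cross_validation_predictions": True,
                 "model_id": None, "training_frame": None, "validation_frame": None,
                 "response_column": None, "ignored_columns": None}
    for key, value in overrides.items():
        params[key] = {'actual': value}
    cv_model = H2OGradientBoostingEstimator()
    for key in params.keys():
        if key in dir(cv_model) and getattr(cv_model, key) != params[key]['actual']:
            setattr(cv_model, key, params[key]['actual'])
    cv_model.train(x = predictors, y = response, training_frame = df)
    p1 = cv_model.cross_validation_holdout_predictions()["p1"]  # out-of-fold predictions, aligned with df rows
    blend = p1 if blend is None else blend + p1
blend = blend / top_k
print(roc_auc_score(df[response].as_data_frame().values, blend.as_data_frame().values))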
For more sophisticated ensembling approaches, such as stacking via a superlearner, we refer to the H2O Ensemble GitHub page.
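Newer H2O releases also expose stacking directly in the Python API via H2OStackedEnsembleEstimator. A minimal sketch, assuming base models trained on the same frame with identical nfolds, fold assignment, and keep_cross_validation_predictions=True, might look roughly like this:
## hedged sketch -- base_model_1/2 are placeholders for models trained with matching CV settings
# from h2o.estimators import H2OStackedEnsembleEstimator
# ensemble = H2OStackedEnsembleEstimator(base_models = [base_model_1, base_model_2],
#                                        metalearner_algorithm = "glm")
# ensemble.train(x = predictors, y = response, training_frame = train)
# print(ensemble.model_performance(test).auc())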
We learned how to build H2O GBM models for a binary classification task on a small but realistic dataset with numerical and categorical variables, with the goal of maximizing the AUC (which ranges from 0.5 to 1). We first established a baseline with the default model, then carefully tuned the remaining hyper-parameters without "too much" human guess-work. We used both Cartesian and Random hyper-parameter searches to find good models. We were able to get the AUC on a holdout test set from the 95% range with the default model to the 97% range after tuning, and to above 98% with a simple ensembling technique known as blending. We performed a simple cross-validation variance analysis to learn that the results were slightly "lucky" due to the specific train/valid/test split, and settled on expecting AUCs around 97% instead.
Note that this script and the findings therein are directly transferable to large datasets on distributed clusters, including Spark/Hadoop environments.
More information can be found at http://www.h2o.ai/docs/.