H2O GBM Tuning Tutorial for Python

Arno Candel, PhD, Chief Architect, H2O.ai

Ported to Python by Navdeep Gill, M.S., Hacker/Data Scientist, H2O.ai

In this tutorial, we show how to build a well-tuned H2O GBM model for a supervised classification task. We specifically don't focus on feature engineering and use a small dataset to allow you to reproduce these results in a few minutes on a laptop. This script can be directly transferred to datasets that are hundreds of GBs large and H2O clusters with dozens of compute nodes.

You can download the source from H2O's github repository.

Ports to R Markdown and Flow UI (now part of Example Flows) are available as well.

Installation of the H2O Python Package

Either download H2O from H2O.ai's website or install the latest version of H2O into Python with the following set of commands:

Install dependencies from command line (prepending with sudo if needed):

[sudo] pip install -U requests
[sudo] pip install -U tabulate
[sudo] pip install -U future
[sudo] pip install -U six

The following command removes the H2O module for Python.

[sudo] pip uninstall h2o

Next, use pip to install this version of the H2O Python module.

[sudo] pip install http://h2o-release.s3.amazonaws.com/h2o/rel-turchin/8/Python/h2o-3.8.2.8-py2.py3-none-any.whl

Launch an H2O cluster on localhost


In [1]:
import h2o
import numpy as np
import math
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.grid.grid_search import H2OGridSearch
h2o.init(nthreads=-1, strict_version_check=True)
## optional: connect to a running H2O cluster
#h2o.init(ip="mycluster", port=55555)



No instance found at ip and port: localhost:54321. Trying to start local jar...


JVM stdout: /var/folders/k_/kpp1czqs3957vq2pr5qngck00000gn/T/tmp4QnSj1/h2o_laurend_started_from_python.out
JVM stderr: /var/folders/k_/kpp1czqs3957vq2pr5qngck00000gn/T/tmp12e2TZ/h2o_laurend_started_from_python.err
Using ice_root: /var/folders/k_/kpp1czqs3957vq2pr5qngck00000gn/T/tmpNoGLeV


Java Version: java version "1.8.0_73"
Java(TM) SE Runtime Environment (build 1.8.0_73-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.73-b02, mixed mode)


Starting H2O JVM and connecting: ............... Connection successful!
H2O cluster uptime: 1 seconds 591 milliseconds
H2O cluster version: 3.8.3.3
H2O cluster name: H2O_started_from_python_laurend_jcu391
H2O cluster total nodes: 1
H2O cluster total free memory: 3.56 GB
H2O cluster total cores: 8
H2O cluster allowed cores: 8
H2O cluster healthy: True
H2O Connection ip: 127.0.0.1
H2O Connection port: 54321
H2O Connection proxy: None
Python Version: 2.7.11

Import the data into H2O

Everything is scalable and distributed from now on. All processing is done on the fully multi-threaded and distributed H2O Java-based backend and can be scaled to large datasets on large compute clusters. Here, we use a small public dataset (Titanic), but you can use datasets that are hundreds of GBs large.


In [2]:
## 'path' can point to a local file, hdfs, s3, nfs, Hive, directories, etc.
df = h2o.import_file(path = "http://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv")
print df.dim
print df.head
print df.tail
print df.describe

## pick a response for the supervised problem
response = "survived"

## the response variable is an integer, we will turn it into a categorical/factor for binary classification
df[response] = df[response].asfactor()           

## use all other columns (except for the name & the response column ("survived")) as predictors
predictors = df.columns
del predictors[1:3]
print predictors


Parse Progress: [##################################################] 100%
[1309, 14]
pclass survivedname sex age sibsp parch ticket farecabin embarked boat bodyhome.dest
1 1Allen Miss. Elisabeth Walton female29 0 0 24160211.338 B5 S 2 nanSt Louis MO
1 1Allison Master. Hudson Trevor male 0.9167 1 2 113781151.55 C22 C26S 11 nanMontreal PQ / Chesterville ON
1 0Allison Miss. Helen Loraine female 2 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 0Allison Mr. Hudson Joshua Creighton male 30 1 2 113781151.55 C22 C26S nan 135Montreal PQ / Chesterville ON
1 0Allison Mrs. Hudson J C (Bessie Waldo Daniels)female25 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 1Anderson Mr. Harry male 48 0 0 19952 26.55 E12 S 3 nanNew York NY
1 1Andrews Miss. Kornelia Theodosia female63 1 0 13502 77.9583D7 S 10 nanHudson NY
1 0Andrews Mr. Thomas Jr male 39 0 0 112050 0 A36 S nan nanBelfast NI
1 1Appleton Mrs. Edward Dale (Charlotte Lamson) female53 2 0 11769 51.4792C101 S nan nanBayside Queens NY
1 0Artagaveytia Mr. Ramon male 71 0 0 nan 49.5042 C nan 22Montevideo Uruguay
<bound method H2OFrame.head of >
pclass survivedname sex age sibsp parch ticket farecabin embarked boat bodyhome.dest
1 1Allen Miss. Elisabeth Walton female29 0 0 24160211.338 B5 S 2 nanSt Louis MO
1 1Allison Master. Hudson Trevor male 0.9167 1 2 113781151.55 C22 C26S 11 nanMontreal PQ / Chesterville ON
1 0Allison Miss. Helen Loraine female 2 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 0Allison Mr. Hudson Joshua Creighton male 30 1 2 113781151.55 C22 C26S nan 135Montreal PQ / Chesterville ON
1 0Allison Mrs. Hudson J C (Bessie Waldo Daniels)female25 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 1Anderson Mr. Harry male 48 0 0 19952 26.55 E12 S 3 nanNew York NY
1 1Andrews Miss. Kornelia Theodosia female63 1 0 13502 77.9583D7 S 10 nanHudson NY
1 0Andrews Mr. Thomas Jr male 39 0 0 112050 0 A36 S nan nanBelfast NI
1 1Appleton Mrs. Edward Dale (Charlotte Lamson) female53 2 0 11769 51.4792C101 S nan nanBayside Queens NY
1 0Artagaveytia Mr. Ramon male 71 0 0 nan 49.5042 C nan 22Montevideo Uruguay
<bound method H2OFrame.tail of >
pclass survivedname sex age sibsp parch ticket farecabin embarked boat bodyhome.dest
1 1Allen Miss. Elisabeth Walton female29 0 0 24160211.338 B5 S 2 nanSt Louis MO
1 1Allison Master. Hudson Trevor male 0.9167 1 2 113781151.55 C22 C26S 11 nanMontreal PQ / Chesterville ON
1 0Allison Miss. Helen Loraine female 2 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 0Allison Mr. Hudson Joshua Creighton male 30 1 2 113781151.55 C22 C26S nan 135Montreal PQ / Chesterville ON
1 0Allison Mrs. Hudson J C (Bessie Waldo Daniels)female25 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 1Anderson Mr. Harry male 48 0 0 19952 26.55 E12 S 3 nanNew York NY
1 1Andrews Miss. Kornelia Theodosia female63 1 0 13502 77.9583D7 S 10 nanHudson NY
1 0Andrews Mr. Thomas Jr male 39 0 0 112050 0 A36 S nan nanBelfast NI
1 1Appleton Mrs. Edward Dale (Charlotte Lamson) female53 2 0 11769 51.4792C101 S nan nanBayside Queens NY
1 0Artagaveytia Mr. Ramon male 71 0 0 nan 49.5042 C nan 22Montevideo Uruguay
<bound method H2OFrame.describe of >
[u'pclass', u'sex', u'age', u'sibsp', u'parch', u'ticket', u'fare', u'cabin', u'embarked', u'boat', u'body', u'home.dest']

From now on, everything is generic and directly applies to most datasets. We assume that all feature engineering is done at this stage and focus on model tuning. For multi-class problems, you can use h2o.logloss() or h2o.confusion_matrix() instead of h2o.auc() and for regression problems, you can use h2o.mean_residual_deviance() or h2o.mse().

Split the data for Machine Learning

We split the data into three pieces: 60% for training, 20% for validation, 20% for final testing. Here, we use random splitting, but this assumes i.i.d. data. If this is not the case (e.g., when events span across multiple rows or data has a time structure), you'll have to sample your data non-randomly.


In [3]:
train, valid, test = df.split_frame(
    ratios=[0.6,0.2], 
    seed=1234, 
    destination_frames=['train.hex','valid.hex','test.hex']
)

Establish baseline performance

As the first step, we'll build some default models to see what accuracy we can expect. Let's use the AUC metric for this demo, but you can use h2o.logloss() and stopping_metric="logloss" as well. It ranges from 0.5 for random models to 1 for perfect models.

The first model is a default GBM, trained on the 60% training split


In [4]:
#We only provide the required parameters, everything else is default
gbm = H2OGradientBoostingEstimator()
gbm.train(x=predictors, y=response, training_frame=train)

## Show a detailed model summary
print gbm


gbm Model Build Progress: [##################################################] 100%
Model Details
=============
H2OGradientBoostingEstimator :  Gradient Boosting Method
Model Key:  GBM_model_python_1470090404952_1

Model Summary: 
number_of_trees number_of_internal_trees model_size_in_bytes min_depth max_depth mean_depth min_leaves max_leaves mean_leaves
50.0 50.0 17180.0 2.0 5.0 4.92 3.0 21.0 9.14

ModelMetricsBinomial: gbm
** Reported on train data. **

MSE: 0.0406797359799
R^2: 0.828341496541
LogLoss: 0.168135896237
Mean Per-Class Error: 0.0438517398512
AUC: 0.988222972832
Gini: 0.976445945665

Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.40045091311: 
0 1 Error Rate
0 472.0 7.0 0.0146 (7.0/479.0)
1 22.0 279.0 0.0731 (22.0/301.0)
Total 494.0 286.0 0.0372 (29.0/780.0)
Maximum Metrics: Maximum metrics at their respective thresholds

metric threshold value idx
max f1 0.4004509 0.9505963 192.0
max f2 0.1995825 0.9455959 205.0
max f0point5 0.4571007 0.9687722 189.0
max accuracy 0.4339709 0.9628205 191.0
max precision 0.9799591 1.0 0.0
max recall 0.0829058 1.0 251.0
max specificity 0.9799591 1.0 0.0
max absolute_MCC 0.4339709 0.9217198 191.0
max min_per_class_accuracy 0.2099932 0.9394572 203.0
max mean_per_class_accuracy 0.4004509 0.9561483 192.0
Gains/Lift Table: Avg response rate: 38.59 %

group cumulative_data_fraction lower_threshold lift cumulative_lift response_rate cumulative_response_rate capture_rate cumulative_capture_rate gain cumulative_gain
1 0.0102564 0.9630355 2.5913621 2.5913621 1.0 1.0 0.0265781 0.0265781 159.1362126 159.1362126
2 0.0217949 0.9568772 2.5913621 2.5913621 1.0 1.0 0.0299003 0.0564784 159.1362126 159.1362126
3 0.0307692 0.9543191 2.5913621 2.5913621 1.0 1.0 0.0232558 0.0797342 159.1362126 159.1362126
4 0.0410256 0.9519030 2.5913621 2.5913621 1.0 1.0 0.0265781 0.1063123 159.1362126 159.1362126
5 0.05 0.9511906 2.5913621 2.5913621 1.0 1.0 0.0232558 0.1295681 159.1362126 159.1362126
6 0.1076923 0.9477912 2.5913621 2.5913621 1.0 1.0 0.1495017 0.2790698 159.1362126 159.1362126
7 0.15 0.9460051 2.5913621 2.5913621 1.0 1.0 0.1096346 0.3887043 159.1362126 159.1362126
8 0.2 0.9431959 2.5913621 2.5913621 1.0 1.0 0.1295681 0.5182724 159.1362126 159.1362126
9 0.3012821 0.8373551 2.5913621 2.5913621 1.0 1.0 0.2624585 0.7807309 159.1362126 159.1362126
10 0.4 0.2041091 1.6153946 2.3504983 0.6233766 0.9070513 0.1594684 0.9401993 61.5394572 135.0498339
11 0.5012821 0.1280484 0.3936246 1.9551198 0.1518987 0.7544757 0.0398671 0.9800664 -60.6375373 95.5119763
12 0.6012821 0.0844031 0.0996678 1.6465371 0.0384615 0.6353945 0.0099668 0.9900332 -90.0332226 64.6537129
13 0.7 0.0779819 0.1009622 1.4285714 0.0389610 0.5512821 0.0099668 1.0 -89.9037839 42.8571429
14 0.8 0.0647129 0.0 1.25 0.0 0.4823718 0.0 1.0 -100.0 25.0
15 0.9782051 0.0520278 0.0 1.0222805 0.0 0.3944954 0.0 1.0 -100.0 2.2280472
16 1.0 0.0490107 0.0 1.0 0.0 0.3858974 0.0 1.0 -100.0 0.0

Scoring History: 
timestamp duration number_of_trees training_MSE training_logloss training_AUC training_lift training_classification_error
2016-08-01 15:27:00 0.013 sec 0.0 0.2369806 0.6668775 0.5 1.0 0.6141026
2016-08-01 15:27:00 0.117 sec 1.0 0.2064855 0.6033605 0.8851116 2.5913621 0.0897436
2016-08-01 15:27:00 0.157 sec 2.0 0.1821387 0.5530978 0.8851428 2.5913621 0.0884615
2016-08-01 15:27:00 0.185 sec 3.0 0.1625714 0.5122612 0.8851428 2.5913621 0.0884615
2016-08-01 15:27:00 0.210 sec 4.0 0.1467807 0.4785030 0.8851428 2.5913621 0.0884615
--- --- --- --- --- --- --- --- ---
2016-08-01 15:27:01 1.024 sec 46.0 0.0406763 0.1681376 0.9882230 2.5913621 0.0371795
2016-08-01 15:27:01 1.042 sec 47.0 0.0406773 0.1681370 0.9882230 2.5913621 0.0371795
2016-08-01 15:27:01 1.061 sec 48.0 0.0406782 0.1681366 0.9882230 2.5913621 0.0371795
2016-08-01 15:27:01 1.078 sec 49.0 0.0406790 0.1681362 0.9882230 2.5913621 0.0371795
2016-08-01 15:27:01 1.095 sec 50.0 0.0406797 0.1681359 0.9882230 2.5913621 0.0371795
See the whole table with table.as_data_frame()

Variable Importances: 
variable relative_importance scaled_importance percentage
boat 628.0942993 1.0 0.6422850
home.dest 147.9191895 0.2355047 0.1512612
cabin 129.4477234 0.2060960 0.1323724
sex 50.2480087 0.0800007 0.0513833
ticket 11.9593115 0.0190406 0.0122295
age 5.8832302 0.0093668 0.0060162
body 1.7419853 0.0027734 0.0017813
embarked 1.0638593 0.0016938 0.0010879
parch 0.8865532 0.0014115 0.0009066
fare 0.5306281 0.0008448 0.0005426
sibsp 0.1311035 0.0002087 0.0001341
pclass 0.0 0.0 0.0


In [5]:
## Get the AUC on the validation set
perf = gbm.model_performance(valid)
print perf.auc()


0.948013524937

The AUC is over 94%, so this model is highly predictive!

The second model is another default GBM, but trained on 80% of the data (here, we combine the training and validation splits to get more training data), and cross-validated using 4 folds. Note that cross-validation takes longer and is not usually done for really large datasets.


In [6]:
## rbind() makes a copy here, so it's better to use split_frame with `ratios = c(0.8)` instead above
cv_gbm = H2OGradientBoostingEstimator(nfolds = 4, seed = 0xDECAF)
cv_gbm.train(x = predictors, y = response, training_frame = train.rbind(valid))


gbm Model Build Progress: [##################################################] 100%

We see that the cross-validated performance is similar to the validation set performance:


In [7]:
## Show a detailed summary of the cross validation metrics
## This gives you an idea of the variance between the folds
cv_summary = cv_gbm.cross_validation_metrics_summary().as_data_frame()
#print(cv_summary) ## Full summary of all metrics
#print(cv_summary.iloc[4]) ## get the row with just the AUCs

## Get the cross-validated AUC by scoring the combined holdout predictions.
## (Instead of taking the average of the metrics across the folds)
perf_cv = cv_gbm.model_performance(xval=True)
print perf_cv.auc()


0.948595146871

Next, we train a GBM with "I feel lucky" parameters. We'll use early stopping to automatically tune the number of trees using the validation AUC. We'll use a lower learning rate (lower is always better, just takes more trees to converge). We'll also use stochastic sampling of rows and columns to (hopefully) improve generalization.


In [8]:
gbm_lucky = H2OGradientBoostingEstimator(
  ## more trees is better if the learning rate is small enough 
  ## here, use "more than enough" trees - we have early stopping
  ntrees = 10000,                                                            

  ## smaller learning rate is better (this is a good value for most datasets, but see below for annealing)
  learn_rate = 0.01,                                                         

  ## early stopping once the validation AUC doesn't improve by at least 0.01% for 5 consecutive scoring events
  stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = "AUC", 

  ## sample 80% of rows per tree
  sample_rate = 0.8,                                                       

  ## sample 80% of columns per split
  col_sample_rate = 0.8,                                                   

  ## fix a random number generator seed for reproducibility
  seed = 1234,                                                             

  ## score every 10 trees to make early stopping reproducible (it depends on the scoring interval)
  score_tree_interval = 10)

gbm_lucky.train(x=predictors, y=response, training_frame=train, validation_frame=valid)


gbm Model Build Progress: [##################################################] 100%

This model doesn't seem to be better than the previous models:


In [9]:
perf_lucky = gbm_lucky.model_performance(valid)
print perf_lucky.auc()


0.943195266272

For this small dataset, dropping 20% of observations per tree seems too aggressive in terms of adding regularization. For larger datasets, this is usually not a bad idea. But we'll let this parameter tune freshly below, so no worries.

Next, we'll do real hyper-parameter optimization to see if we can beat the best AUC so far (around 94%).

The key here is to start tuning some key parameters first (i.e., those that we expect to have the biggest impact on the results). From experience with gradient boosted trees across many datasets, we can state the following "rules":

  1. Build as many trees (ntrees) as it takes until the validation set error starts increasing.
  2. A lower learning rate (learn_rate) is generally better, but will require more trees. Using learn_rate=0.02and learn_rate_annealing=0.995 (reduction of learning rate with each additional tree) can help speed up convergence without sacrificing accuracy too much, and is great to hyper-parameter searches. For faster scans, use values of 0.05 and 0.99 instead.
  3. The optimum maximum allowed depth for the trees (max_depth) is data dependent, deeper trees take longer to train, especially at depths greater than 10.
  4. Row and column sampling (sample_rate and col_sample_rate) can improve generalization and lead to lower validation and test set errors. Good general values for large datasets are around 0.7 to 0.8 (sampling 70-80 percent of the data) for both parameters. Column sampling per tree (col_sample_rate_per_tree) can also be tuned. Note that it is multiplicative with col_sample_rate, so setting both parameters to 0.8 results in 64% of columns being considered at any given node to split.
  5. For highly imbalanced classification datasets (e.g., fewer buyers than non-buyers), stratified row sampling based on response class membership can help improve predictive accuracy. It is configured with sample_rate_per_class (array of ratios, one per response class in lexicographic order).
  6. Most other options only have a small impact on the model performance, but are worth tuning with a Random hyper-parameter search nonetheless, if highest performance is critical.

First we want to know what value of max_depth to use because it has a big impact on the model training time and optimal values depend strongly on the dataset. We'll do a quick Cartesian grid search to get a rough idea of good candidate max_depth values. Each model in the grid search will use early stopping to tune the number of trees using the validation set AUC, as before. We'll use learning rate annealing to speed up convergence without sacrificing too much accuracy.


In [10]:
## Depth 10 is usually plenty of depth for most datasets, but you never know
hyper_params = {'max_depth' : range(1,30,2)}
#hyper_params = {max_depth = [4,6,8,12,16,20]} ##faster for larger datasets

#Build initial GBM Model
gbm_grid = H2OGradientBoostingEstimator(
        ## more trees is better if the learning rate is small enough 
        ## here, use "more than enough" trees - we have early stopping
        ntrees=10000,
        ## smaller learning rate is better
        ## since we have learning_rate_annealing, we can afford to start with a 
        #bigger learning rate
        learn_rate=0.05,
        ## learning rate annealing: learning_rate shrinks by 1% after every tree 
        ## (use 1.00 to disable, but then lower the learning_rate)
        learn_rate_annealing = 0.99,
        ## sample 80% of rows per tree
        sample_rate = 0.8,
        ## sample 80% of columns per split
        col_sample_rate = 0.8,
        ## fix a random number generator seed for reproducibility
        seed = 1234,
        ## score every 10 trees to make early stopping reproducible 
        #(it depends on the scoring interval)
        score_tree_interval = 10, 
        ## early stopping once the validation AUC doesn't improve by at least 0.01% for 
        #5 consecutive scoring events
        stopping_rounds = 5,
        stopping_metric = "AUC",
        stopping_tolerance = 1e-4)

#Build grid search with previously made GBM and hyper parameters
grid = H2OGridSearch(gbm_grid,hyper_params,
                         grid_id = 'depth_grid',
                         search_criteria = {'strategy': "Cartesian"})


#Train grid search
grid.train(x=predictors, 
           y=response,
           training_frame = train,
           validation_frame = valid)


gbm Grid Build Progress: [##################################################] 100%

In [11]:
## by default, display the grid search results sorted by increasing logloss (since this is a classification task)
print grid


     max_depth            model_ids              logloss
0           17   depth_grid_model_8  0.20544094075930078
1           19   depth_grid_model_9  0.20584402503242194
2           27  depth_grid_model_13  0.20627418156921704
3           11   depth_grid_model_5   0.2069364255413584
4           13   depth_grid_model_6   0.2078569528636169
5           25  depth_grid_model_12  0.20834760530631993
6            9   depth_grid_model_4  0.20842232867415922
7           29  depth_grid_model_14  0.20904163538087436
8           15   depth_grid_model_7  0.20991531457742935
9           23  depth_grid_model_11   0.2104361858121492
10          21  depth_grid_model_10  0.21069590143686837
11           7   depth_grid_model_3  0.21127939637392396
12           5   depth_grid_model_2  0.21509420086032935
13           3   depth_grid_model_1  0.21854010261642962
14           1   depth_grid_model_0  0.23892331983893703


In [12]:
## sort the grid models by decreasing AUC
sorted_grid = grid.get_grid(sort_by='auc',decreasing=True)
print(sorted_grid)


     max_depth            model_ids                 auc
0           13   depth_grid_model_6  0.9552831783601015
1           27  depth_grid_model_13  0.9547196393350239
2           17   depth_grid_model_8  0.9543251620174698
3           11   depth_grid_model_5  0.9538743307974078
4            9   depth_grid_model_4  0.9534798534798535
5           19   depth_grid_model_9  0.9534234995773457
6           25  depth_grid_model_12  0.9529726683572838
7           29  depth_grid_model_14  0.9528036066497605
8           21  depth_grid_model_10  0.9526908988447449
9           15   depth_grid_model_7  0.9526345449422373
10           7   depth_grid_model_3   0.951789236404621
11          23  depth_grid_model_11  0.9505494505494505
12           3   depth_grid_model_1   0.949084249084249
13           5   depth_grid_model_2  0.9484361792054099
14           1   depth_grid_model_0  0.9478162862778248

It appears that max_depth values of 9 to 27 are best suited for this dataset, which is unusally deep!


In [13]:
max_depths = sorted_grid.sorted_metric_table()['max_depth'][0:5]
new_max = int(max(max_depths, key=int))
new_min = int(min(max_depths, key=int))

print "MaxDepth", new_max
print "MinDepth", new_min


MaxDepth 27
MinDepth 9

Now that we know a good range for max_depth, we can tune all other parameters in more detail. Since we don't know what combinations of hyper-parameters will result in the best model, we'll use random hyper-parameter search to "let the machine get luckier than a best guess of any human".


In [14]:
# create hyperameter and search criteria lists (ranges are inclusive..exclusive))
hyper_params_tune = {'max_depth' : list(range(new_min,new_max+1,1)),
                'sample_rate': [x/100. for x in range(20,101)],
                'col_sample_rate' : [x/100. for x in range(20,101)],
                'col_sample_rate_per_tree': [x/100. for x in range(20,101)],
                'col_sample_rate_change_per_level': [x/100. for x in range(90,111)],
                'min_rows': [2**x for x in range(0,int(math.log(train.nrow,2)-1)+1)],
                'nbins': [2**x for x in range(4,11)],
                'nbins_cats': [2**x for x in range(4,13)],
                'min_split_improvement': [0,1e-8,1e-6,1e-4],
                'histogram_type': ["UniformAdaptive","QuantilesGlobal","RoundRobin"]}
search_criteria_tune = {'strategy': "RandomDiscrete",
                   'max_runtime_secs': 3600,  ## limit the runtime to 60 minutes
                   'max_models': 100,  ## build no more than 100 models
                   'seed' : 1234,
                   'stopping_rounds' : 5,
                   'stopping_metric' : "AUC",
                   'stopping_tolerance': 1e-3
                   }

In [15]:
gbm_final_grid = H2OGradientBoostingEstimator(distribution='bernoulli',
                    ## more trees is better if the learning rate is small enough 
                    ## here, use "more than enough" trees - we have early stopping
                    ntrees=10000,
                    ## smaller learning rate is better
                    ## since we have learning_rate_annealing, we can afford to start with a 
                    #bigger learning rate
                    learn_rate=0.05,
                    ## learning rate annealing: learning_rate shrinks by 1% after every tree 
                    ## (use 1.00 to disable, but then lower the learning_rate)
                    learn_rate_annealing = 0.99,
                    ## score every 10 trees to make early stopping reproducible 
                    #(it depends on the scoring interval)
                    score_tree_interval = 10,
                    ## fix a random number generator seed for reproducibility
                    seed = 1234,
                    ## early stopping once the validation AUC doesn't improve by at least 0.01% for 
                    #5 consecutive scoring events
                    stopping_rounds = 5,
                    stopping_metric = "AUC",
                    stopping_tolerance = 1e-4)
            
#Build grid search with previously made GBM and hyper parameters
final_grid = H2OGridSearch(gbm_final_grid, hyper_params = hyper_params_tune,
                                    grid_id = 'final_grid',
                                    search_criteria = search_criteria_tune)
#Train grid search
final_grid.train(x=predictors, 
           y=response,
           ## early stopping based on timeout (no model should take more than 1 hour - modify as needed)
           max_runtime_secs = 3600, 
           training_frame = train,
           validation_frame = valid)

print final_grid


gbm Grid Build Progress: [##################################################] 100%
      col_sample_rate col_sample_rate_change_per_level  \
0                0.49                             1.04   
1                0.35                             1.09   
2                0.73                              0.9   
3                0.97                             0.96   
4                 0.5                             1.02   
5                 0.6                              1.0   
6                 0.5                             0.94   
7                0.81                             0.94   
8                0.63                              1.0   
9                 0.4                             1.01   
10               0.65                             1.08   
11                0.2                             0.96   
12               0.61                             1.04   
13               0.45                             1.03   
14               0.85                             1.07   
15               0.62                             0.96   
16                0.7                             1.02   
17               0.55                             1.05   
18               0.96                             1.04   
19               0.76                             0.97   
20                0.9                              1.1   
21                0.9                             1.01   
22               0.92                             0.93   
23               0.47                             0.98   
24               0.42                             1.08   
25               0.75                             0.99   
26               0.42                             0.98   
27               0.92                             1.06   
28               0.96                              1.1   
29               0.92                             1.04   
.. ..             ...                              ...   
70               0.88                             1.05   
71                0.2                             0.94   
72               0.48                              1.0   
73               0.52                              1.0   
74               0.25                             1.04   
75               0.74                             1.07   
76               0.99                              1.0   
77               0.44                             1.03   
78               0.33                             1.06   
79               0.45                             1.06   
80               0.25                             1.06   
81               0.37                             0.94   
82               0.67                             1.04   
83               0.46                             0.94   
84               0.31                             0.94   
85               0.66                             1.01   
86               0.22                             1.06   
87               0.57                              1.0   
88               0.22                             1.09   
89               0.76                             0.94   
90               0.27                             1.07   
91                0.2                             0.94   
92               0.57                              1.1   
93               0.92                             1.06   
94                0.7                             1.08   
95               0.96                             0.94   
96               0.61                             0.97   
97                0.5                             1.03   
98               0.87                              1.0   
99               0.24                             1.08   

   col_sample_rate_per_tree   histogram_type max_depth min_rows  \
0                      0.94  QuantilesGlobal        27      2.0   
1                      0.83  QuantilesGlobal        14      4.0   
2                       0.6  QuantilesGlobal        12      1.0   
3                      0.96  QuantilesGlobal         9      1.0   
4                      0.65       RoundRobin        13      2.0   
5                      0.89  UniformAdaptive        20      1.0   
6                      0.92       RoundRobin         9      2.0   
7                      0.89  QuantilesGlobal         9     16.0   
8                      0.85       RoundRobin        12      2.0   
9                      0.55  QuantilesGlobal        19      2.0   
10                     0.95  UniformAdaptive        11      4.0   
11                     0.96       RoundRobin        12      8.0   
12                     0.61  UniformAdaptive        23      1.0   
13                     0.79       RoundRobin        11     32.0   
14                     0.95  UniformAdaptive        17      8.0   
15                     0.68       RoundRobin        19     64.0   
16                     0.56  UniformAdaptive        25      2.0   
17                      1.0  UniformAdaptive        13      8.0   
18                     0.99  UniformAdaptive        22      1.0   
19                     0.91  UniformAdaptive        24      8.0   
20                     0.59       RoundRobin        26     32.0   
21                     0.71  QuantilesGlobal        25      4.0   
22                     0.56  QuantilesGlobal        13      4.0   
23                     0.68  UniformAdaptive        23      8.0   
24                     0.45       RoundRobin        13      1.0   
25                      0.8  UniformAdaptive        14     32.0   
26                     0.53  UniformAdaptive        11      1.0   
27                     0.73  UniformAdaptive         9      8.0   
28                     0.43       RoundRobin        27      2.0   
29                      1.0  QuantilesGlobal        26     64.0   
..                      ...              ...       ...      ...   
70                     0.31  UniformAdaptive        22    128.0   
71                     0.45  UniformAdaptive        19      1.0   
72                     0.35  QuantilesGlobal        23     64.0   
73                     0.42  UniformAdaptive        26     64.0   
74                     0.87  QuantilesGlobal        10    128.0   
75                      0.4       RoundRobin         9    128.0   
76                     0.22       RoundRobin        14     64.0   
77                     0.38  QuantilesGlobal        22    128.0   
78                     0.69       RoundRobin        21    128.0   
79                     0.28  UniformAdaptive         9     32.0   
80                     0.22  QuantilesGlobal        20      8.0   
81                     0.23  QuantilesGlobal        16      2.0   
82                     0.21  UniformAdaptive        11     16.0   
83                     0.24  UniformAdaptive        24      1.0   
84                      0.3  UniformAdaptive        15     64.0   
85                     0.22       RoundRobin        13      4.0   
86                     0.55       RoundRobin        20     64.0   
87                     0.23       RoundRobin        26     64.0   
88                     0.22  UniformAdaptive        20    128.0   
89                      0.2       RoundRobin         9    128.0   
90                     0.39       RoundRobin        10    128.0   
91                     0.49  QuantilesGlobal        27    128.0   
92                     0.68       RoundRobin        11    256.0   
93                     0.64       RoundRobin        13    256.0   
94                     0.99  QuantilesGlobal        26    256.0   
95                     0.62  QuantilesGlobal        11    256.0   
96                     0.36  QuantilesGlobal        24    256.0   
97                     0.45       RoundRobin        25    256.0   
98                      0.2       RoundRobin        17    256.0   
99                      0.3  UniformAdaptive        15    256.0   

   min_split_improvement nbins nbins_cats sample_rate            model_ids  \
0                    0.0    32        256        0.86  final_grid_model_68   
1                 1.0E-8    64        128        0.69  final_grid_model_38   
2                 1.0E-4  1024        256        0.29  final_grid_model_45   
3                 1.0E-4  1024         64        0.32  final_grid_model_75   
4                 1.0E-8   512        512        0.64   final_grid_model_0   
5                    0.0    16        128        0.64   final_grid_model_6   
6                    0.0   128       2048        0.61  final_grid_model_14   
7                 1.0E-8  1024         32        0.71  final_grid_model_69   
8                    0.0   128       4096        0.37   final_grid_model_5   
9                    0.0    64         32        0.42  final_grid_model_88   
10                1.0E-8    64        512         0.4  final_grid_model_33   
11                1.0E-8    64        256        0.93  final_grid_model_87   
12                1.0E-4    64         16        0.69  final_grid_model_81   
13                1.0E-4    64         16        0.55  final_grid_model_54   
14                1.0E-6    16        512        0.55  final_grid_model_97   
15                   0.0   256        128        0.96  final_grid_model_76   
16                1.0E-8    32        256        0.34  final_grid_model_22   
17                1.0E-4  1024       2048        0.86  final_grid_model_17   
18                1.0E-6  1024       2048        0.29  final_grid_model_39   
19                1.0E-4  1024       2048        0.43  final_grid_model_79   
20                1.0E-8   128        256        0.62  final_grid_model_91   
21                   0.0    32       4096        0.46  final_grid_model_46   
22                   0.0   128        128        0.93  final_grid_model_96   
23                1.0E-8   128        128        0.87   final_grid_model_2   
24                1.0E-6    16        256        0.35  final_grid_model_16   
25                1.0E-6  1024        512         0.5   final_grid_model_7   
26                1.0E-4    64         16        0.69  final_grid_model_55   
27                1.0E-8    32        128        0.28  final_grid_model_60   
28                1.0E-8   512       2048        0.48  final_grid_model_64   
29                   0.0   256         16        0.57  final_grid_model_48   
..                   ...   ...        ...         ...                  ...   
70                1.0E-6  1024       4096        0.37  final_grid_model_31   
71                1.0E-6    16       4096        0.39  final_grid_model_71   
72                1.0E-4    32        512         0.4  final_grid_model_56   
73                   0.0  1024         16        0.67  final_grid_model_59   
74                1.0E-8   256       4096        0.95  final_grid_model_72   
75                1.0E-4    16        128         0.3  final_grid_model_89   
76                   0.0   256        128         0.5  final_grid_model_57   
77                1.0E-6   256        512        0.55   final_grid_model_9   
78                1.0E-4    64        256        0.35  final_grid_model_18   
79                1.0E-8   256         64        0.28  final_grid_model_92   
80                   0.0   512       2048        0.23  final_grid_model_36   
81                1.0E-6   512       4096        0.47  final_grid_model_63   
82                   0.0    32         64         0.6  final_grid_model_10   
83                   0.0   128        512        0.65  final_grid_model_42   
84                1.0E-8   128        128         0.4  final_grid_model_47   
85                1.0E-6    16       4096        0.59  final_grid_model_19   
86                1.0E-4    32         16        0.28  final_grid_model_73   
87                1.0E-6    64         64        0.25  final_grid_model_35   
88                1.0E-8  1024       2048        0.59  final_grid_model_61   
89                   0.0   256       1024        0.62  final_grid_model_67   
90                1.0E-6    16         64        0.57   final_grid_model_1   
91                   0.0    16         32         0.4  final_grid_model_28   
92                   0.0    16       4096        0.58   final_grid_model_8   
93                   0.0    16         32        0.78  final_grid_model_70   
94                1.0E-4    32         16        0.49  final_grid_model_86   
95                1.0E-6    64       4096        0.57  final_grid_model_95   
96                1.0E-6   128       1024        0.65  final_grid_model_98   
97                1.0E-8   512         16        0.28  final_grid_model_58   
98                1.0E-6   512       1024        0.97  final_grid_model_51   
99                1.0E-4    32         64        0.97  final_grid_model_44   

                logloss  
0    0.1724195177377109  
1    0.1785402233722598  
2   0.18361800905283707  
3    0.1871222263287576  
4   0.18959285673409573  
5   0.19152584981364526  
6   0.19274618621918435  
7    0.1947930984585484  
8     0.196485575764478  
9    0.1968403959647829  
10   0.1999688912894301  
11  0.20201657250742558  
12  0.20266301087084315  
13  0.20516449122756517  
14   0.2067531116433615  
15   0.2087686550469798  
16  0.20877820607021177  
17  0.20900838036852132  
18  0.21087515188016223  
19   0.2115041324592003  
20  0.21208885921765247  
21  0.21216588823615523  
22  0.21265074880389523  
23  0.21299994256302826  
24  0.21362387535587823  
25   0.2149390445444144  
26  0.21510488500476962  
27  0.21564848516181673  
28  0.21625308206132732  
29  0.21802015810369804  
..                  ...  
70   0.2774771827112313  
71   0.2819876568815201  
72    0.286371071676637  
73  0.29041035952972577  
74  0.29823678827120337  
75   0.3016387593981173  
76   0.3118405801095551  
77  0.32686853593555837  
78  0.32716497253228505  
79   0.3285607327288318  
80  0.33296974353472003  
81   0.3350584463863175  
82   0.3377299690705004  
83  0.34081574296481054  
84   0.3556018858084106  
85  0.38840541246389937  
86   0.4125497858974529  
87  0.41411961317447754  
88  0.42004941940980145  
89   0.4205701620399925  
90   0.4416592685663139  
91   0.4534058856469468  
92   0.5093556450277551  
93   0.5118825801170095  
94   0.5142960257124096  
95   0.5245975877044032  
96   0.5403552362246625  
97   0.5440442492091082  
98   0.5487016151393015  
99   0.5827120934746949  

[100 rows x 13 columns]

We can see that the best models have even better validation AUCs than our previous best models, so the random grid search was successful!


In [16]:
## Sort the grid models by AUC
sorted_final_grid = final_grid.get_grid(sort_by='auc',decreasing=True)

print sorted_final_grid


      col_sample_rate col_sample_rate_change_per_level  \
0                0.73                              0.9   
1                0.49                             1.04   
2                0.92                             0.93   
3                 0.5                             1.02   
4                0.35                             1.09   
5                 0.7                             1.02   
6                0.81                             0.94   
7                0.61                             1.04   
8                 0.4                             1.01   
9                0.97                             0.96   
10               0.91                             0.96   
11                0.5                             0.94   
12               0.51                              1.1   
13               0.66                             0.96   
14               0.96                              1.1   
15               0.71                              0.9   
16               0.66                             1.03   
17                0.2                             0.96   
18               0.72                             1.08   
19               0.65                             1.08   
20               0.47                             0.98   
21               0.75                             0.94   
22               0.42                             0.98   
23               0.63                              1.0   
24               0.45                             1.03   
25                0.6                              1.0   
26               0.92                             1.06   
27               0.81                             1.06   
28               0.29                             1.02   
29               0.52                              1.1   
.. ..             ...                              ...   
70               0.25                             1.06   
71               0.54                             1.07   
72               0.91                             1.02   
73                0.8                              0.9   
74               0.31                             0.94   
75               0.33                             1.06   
76               0.88                             1.05   
77               0.32                             1.02   
78                0.5                             0.99   
79               0.25                             1.04   
80               0.44                             1.03   
81               0.66                             0.99   
82               0.69                             0.97   
83               0.37                             0.94   
84               0.46                             0.94   
85               0.66                             1.01   
86               0.57                              1.0   
87               0.22                             1.06   
88               0.76                             0.94   
89               0.22                             1.09   
90                0.2                             0.94   
91               0.27                             1.07   
92               0.61                             0.97   
93               0.57                              1.1   
94               0.92                             1.06   
95                0.7                             1.08   
96                0.5                             1.03   
97               0.87                              1.0   
98               0.24                             1.08   
99               0.96                             0.94   

   col_sample_rate_per_tree   histogram_type max_depth min_rows  \
0                       0.6  QuantilesGlobal        12      1.0   
1                      0.94  QuantilesGlobal        27      2.0   
2                      0.56  QuantilesGlobal        13      4.0   
3                      0.65       RoundRobin        13      2.0   
4                      0.83  QuantilesGlobal        14      4.0   
5                      0.56  UniformAdaptive        25      2.0   
6                      0.89  QuantilesGlobal         9     16.0   
7                      0.61  UniformAdaptive        23      1.0   
8                      0.55  QuantilesGlobal        19      2.0   
9                      0.96  QuantilesGlobal         9      1.0   
10                      0.4       RoundRobin        15      4.0   
11                     0.92       RoundRobin         9      2.0   
12                      0.5  UniformAdaptive        14      4.0   
13                     0.38  UniformAdaptive        23      8.0   
14                     0.43       RoundRobin        27      2.0   
15                     0.37  UniformAdaptive        14     16.0   
16                     0.42  QuantilesGlobal        12     16.0   
17                     0.96       RoundRobin        12      8.0   
18                     0.32  UniformAdaptive        23      4.0   
19                     0.95  UniformAdaptive        11      4.0   
20                     0.68  UniformAdaptive        23      8.0   
21                      0.6  QuantilesGlobal        12      1.0   
22                     0.53  UniformAdaptive        11      1.0   
23                     0.85       RoundRobin        12      2.0   
24                     0.79       RoundRobin        11     32.0   
25                     0.89  UniformAdaptive        20      1.0   
26                     0.73  UniformAdaptive         9      8.0   
27                     0.68  QuantilesGlobal         9      1.0   
28                     0.85  UniformAdaptive        21      1.0   
29                     0.33  QuantilesGlobal        25     32.0   
..                      ...              ...       ...      ...   
70                     0.22  QuantilesGlobal        20      8.0   
71                     0.58  UniformAdaptive        23    128.0   
72                     0.97  UniformAdaptive        18      2.0   
73                     0.57  UniformAdaptive        18    128.0   
74                      0.3  UniformAdaptive        15     64.0   
75                     0.69       RoundRobin        21    128.0   
76                     0.31  UniformAdaptive        22    128.0   
77                     0.88  UniformAdaptive        21     64.0   
78                     0.81  QuantilesGlobal        13    128.0   
79                     0.87  QuantilesGlobal        10    128.0   
80                     0.38  QuantilesGlobal        22    128.0   
81                     0.68  UniformAdaptive        14     64.0   
82                     0.98  UniformAdaptive        26    128.0   
83                     0.23  QuantilesGlobal        16      2.0   
84                     0.24  UniformAdaptive        24      1.0   
85                     0.22       RoundRobin        13      4.0   
86                     0.23       RoundRobin        26     64.0   
87                     0.55       RoundRobin        20     64.0   
88                      0.2       RoundRobin         9    128.0   
89                     0.22  UniformAdaptive        20    128.0   
90                     0.49  QuantilesGlobal        27    128.0   
91                     0.39       RoundRobin        10    128.0   
92                     0.36  QuantilesGlobal        24    256.0   
93                     0.68       RoundRobin        11    256.0   
94                     0.64       RoundRobin        13    256.0   
95                     0.99  QuantilesGlobal        26    256.0   
96                     0.45       RoundRobin        25    256.0   
97                      0.2       RoundRobin        17    256.0   
98                      0.3  UniformAdaptive        15    256.0   
99                     0.62  QuantilesGlobal        11    256.0   

   min_split_improvement nbins nbins_cats sample_rate            model_ids  \
0                 1.0E-4  1024        256        0.29  final_grid_model_45   
1                    0.0    32        256        0.86  final_grid_model_68   
2                    0.0   128        128        0.93  final_grid_model_96   
3                 1.0E-8   512        512        0.64   final_grid_model_0   
4                 1.0E-8    64        128        0.69  final_grid_model_38   
5                 1.0E-8    32        256        0.34  final_grid_model_22   
6                 1.0E-8  1024         32        0.71  final_grid_model_69   
7                 1.0E-4    64         16        0.69  final_grid_model_81   
8                    0.0    64         32        0.42  final_grid_model_88   
9                 1.0E-4  1024         64        0.32  final_grid_model_75   
10                1.0E-4    32       1024        0.67  final_grid_model_41   
11                   0.0   128       2048        0.61  final_grid_model_14   
12                   0.0   128       1024        0.46  final_grid_model_25   
13                1.0E-8   256         32        0.92  final_grid_model_50   
14                1.0E-8   512       2048        0.48  final_grid_model_64   
15                1.0E-4  1024         32        0.92  final_grid_model_37   
16                1.0E-4  1024        512        0.59  final_grid_model_21   
17                1.0E-8    64        256        0.93  final_grid_model_87   
18                   0.0   512         32        0.54  final_grid_model_77   
19                1.0E-8    64        512         0.4  final_grid_model_33   
20                1.0E-8   128        128        0.87   final_grid_model_2   
21                1.0E-4    16        512         0.5  final_grid_model_11   
22                1.0E-4    64         16        0.69  final_grid_model_55   
23                   0.0   128       4096        0.37   final_grid_model_5   
24                1.0E-4    64         16        0.55  final_grid_model_54   
25                   0.0    16        128        0.64   final_grid_model_6   
26                1.0E-8    32        128        0.28  final_grid_model_60   
27                   0.0   128       2048        0.77  final_grid_model_29   
28                1.0E-6    64       4096        0.75  final_grid_model_40   
29                   0.0    64         64        0.65  final_grid_model_20   
..                   ...   ...        ...         ...                  ...   
70                   0.0   512       2048        0.23  final_grid_model_36   
71                1.0E-4   128       2048        0.72  final_grid_model_85   
72                1.0E-6    64       2048        0.26   final_grid_model_4   
73                1.0E-6  1024       4096        0.37  final_grid_model_43   
74                1.0E-8   128        128         0.4  final_grid_model_47   
75                1.0E-4    64        256        0.35  final_grid_model_18   
76                1.0E-6  1024       4096        0.37  final_grid_model_31   
77                1.0E-8   512       1024        0.46  final_grid_model_13   
78                1.0E-6   256       4096        0.76   final_grid_model_3   
79                1.0E-8   256       4096        0.95  final_grid_model_72   
80                1.0E-6   256        512        0.55   final_grid_model_9   
81                1.0E-4  1024       1024        0.23  final_grid_model_27   
82                   0.0   256       4096        0.82  final_grid_model_93   
83                1.0E-6   512       4096        0.47  final_grid_model_63   
84                   0.0   128        512        0.65  final_grid_model_42   
85                1.0E-6    16       4096        0.59  final_grid_model_19   
86                1.0E-6    64         64        0.25  final_grid_model_35   
87                1.0E-4    32         16        0.28  final_grid_model_73   
88                   0.0   256       1024        0.62  final_grid_model_67   
89                1.0E-8  1024       2048        0.59  final_grid_model_61   
90                   0.0    16         32         0.4  final_grid_model_28   
91                1.0E-6    16         64        0.57   final_grid_model_1   
92                1.0E-6   128       1024        0.65  final_grid_model_98   
93                   0.0    16       4096        0.58   final_grid_model_8   
94                   0.0    16         32        0.78  final_grid_model_70   
95                1.0E-4    32         16        0.49  final_grid_model_86   
96                1.0E-8   512         16        0.28  final_grid_model_58   
97                1.0E-6   512       1024        0.97  final_grid_model_51   
98                1.0E-4    32         64        0.97  final_grid_model_44   
99                1.0E-6    64       4096        0.57  final_grid_model_95   

                   auc  
0   0.9723584108199492  
1   0.9714003944773175  
2   0.9711186249647789  
3   0.9710059171597633  
4   0.9707805015497324  
5   0.9699351930121162  
6   0.9684136376444069  
7   0.9681318681318682  
8   0.9680191603268526  
9   0.9653141730064807  
10  0.9649760495914341  
11  0.9644125105663568  
12  0.9642998027613412  
13  0.9638489715412792  
14  0.9636235559312482  
15  0.9636235559312482  
16  0.9635672020287406  
17  0.9628909551986475  
18  0.9627218934911242  
19  0.9627218934911242  
20  0.9626655395886166  
21  0.9625528317836011  
22  0.9624964778810933  
23  0.9623837700760778  
24  0.9623837700760778  
25  0.9622710622710622  
26  0.9621020005635391  
27  0.9617638771484927  
28  0.9617075232459849  
29  0.9607495069033531  
..                 ...  
70  0.9487461256692026  
71  0.9483516483516483  
72  0.9482389405466328  
73  0.9481544096928712  
74  0.9469709777402086  
75  0.9465201465201465  
76  0.9464919695688926  
77  0.9461538461538462  
78  0.9454212454212454  
79  0.9451676528599605  
80  0.9443223443223443  
81  0.9435333896872359  
82    0.94305438151592  
83  0.9426599041983657  
84   0.937644406875176  
85  0.9364609749225133  
86  0.9340095801634263  
87  0.9326852634544942  
88  0.9298957452803607  
89  0.9296703296703297  
90  0.9256410256410257  
91   0.913468582699352  
92  0.8084249084249083  
93  0.8080022541561004  
94  0.8048182586644125  
95  0.8014370245139476  
96  0.7997464074387151  
97  0.7940264863341787  
98  0.7854888701042547  
99  0.7838827838827839  

[100 rows x 13 columns]

You can also see the results of the grid search in Flow:

Model Inspection and Final Test Set Scoring

Let's see how well the best model of the grid search (as judged by validation set AUC) does on the held out test set:


In [17]:
#Get the best model from the list (the model name listed at the top of the table)
best_model = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
performance_best_model = best_model.model_performance(test)
print performance_best_model.auc()


0.97878948064

Good news. It does as well on the test set as on the validation set, so it looks like our best GBM model generalizes well to the unseen test set:

We can inspect the winning model's parameters:


In [18]:
params_list = []
for key, value in best_model.params.iteritems():
    params_list.append(str(key)+" = "+str(value['actual']))
params_list


Out[18]:
['learn_rate = 0.05',
 'fold_column = None',
 'col_sample_rate_per_tree = 0.6',
 'learn_rate_annealing = 0.99',
 'score_tree_interval = 10',
 'sample_rate_per_class = None',
 'seed = 1234',
 'keep_cross_validation_predictions = False',
 "model_id = {u'URL': u'/3/Models/final_grid_model_45', u'_exclude_fields': u'', u'type': u'Key<Model>', u'name': u'final_grid_model_45', u'__meta': {u'schema_name': u'ModelKeyV3', u'schema_version': 3, u'schema_type': u'Key<Model>'}}",
 'nfolds = 0',
 'max_abs_leafnode_pred = 1.79769313486e+308',
 'offset_column = None',
 'quantile_alpha = 0.5',
 'stopping_tolerance = 0.0001',
 'fold_assignment = AUTO',
 "training_frame = {u'URL': u'/3/Frames/train.hex', u'_exclude_fields': u'', u'type': u'Key<Frame>', u'name': u'train.hex', u'__meta': {u'schema_name': u'FrameKeyV3', u'schema_version': 3, u'schema_type': u'Key<Frame>'}}",
 'max_runtime_secs = 3519.479',
 'checkpoint = None',
 'balance_classes = False',
 'r2_stopping = 0.999999',
 "validation_frame = {u'URL': u'/3/Frames/valid.hex', u'_exclude_fields': u'', u'type': u'Key<Frame>', u'name': u'valid.hex', u'__meta': {u'schema_name': u'FrameKeyV3', u'schema_version': 3, u'schema_type': u'Key<Frame>'}}",
 'max_depth = 12',
 "response_column = {u'is_member_of_frames': None, u'_exclude_fields': u'', u'column_name': u'survived', u'__meta': {u'schema_name': u'ColSpecifierV3', u'schema_version': 3, u'schema_type': u'VecSpecifier'}}",
 'build_tree_one_node = False',
 'ntrees = 10000',
 'min_split_improvement = 0.0001',
 "ignored_columns = [u'name']",
 'tweedie_power = 1.5',
 'min_rows = 1.0',
 'max_confusion_matrix_size = 20',
 'score_each_iteration = False',
 'nbins_top_level = 1024',
 'max_after_balance_size = 5.0',
 'nbins = 1024',
 'histogram_type = QuantilesGlobal',
 'col_sample_rate = 0.73',
 'stopping_metric = AUC',
 'weights_column = None',
 'stopping_rounds = 5',
 'col_sample_rate_change_per_level = 0.9',
 'max_hit_ratio_k = 0',
 'nbins_cats = 256',
 'sample_rate = 0.29',
 'distribution = bernoulli',
 'class_sampling_factors = None',
 'ignore_const_cols = True',
 'keep_cross_validation_fold_assignment = False']

Now we can confirm that these parameters are generally sound, by building a GBM model on the whole dataset (instead of the 60%) and using internal 5-fold cross-validation (re-using all other parameters including the seed):


In [19]:
gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
#get the parameters from the Random grid search model and modify them slightly
params = gbm.params
new_params = {"nfolds":5, "model_id":None}
for key in new_params.keys():
    params[key]['actual'] = new_params[key] 
gbm_best = H2OGradientBoostingEstimator()
for key in params.keys():
    if key in dir(gbm_best) and getattr(gbm_best,key) != params[key]['actual']:
        setattr(gbm_best,key,params[key]['actual'])

In [20]:
gbm_best.train(x=predictors, y=response, training_frame=df)


gbm Model Build Progress: [##################################################] 100%

In [21]:
print gbm_best.cross_validation_metrics_summary()


Cross-Validation Metrics Summary: 
mean sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
accuracy 0.9443999 0.0064340 0.9400749 0.9335793 0.9379845 0.9566929 0.9536679
auc 0.9704713 0.0070059 0.9639977 0.9584524 0.9655612 0.9839788 0.9803662
err 0.0556001 0.0064340 0.0599251 0.0664207 0.0620155 0.0433071 0.0463320
err_count 14.6 1.8761663 16.0 18.0 16.0 11.0 12.0
f0point5 0.9483252 0.0102195 0.9469697 0.9210526 0.9562212 0.9619687 0.955414
f1 0.9237261 0.0100577 0.9259259 0.9032258 0.9120879 0.9398907 0.9375
f2 0.9005541 0.0133487 0.9057971 0.886076 0.8718488 0.9188034 0.9202454
lift_top_group 2.6258688 0.0998947 2.3839285 2.8229167 2.632653 2.6736841 2.6161616
logloss 0.1869302 0.0142185 0.2034029 0.2098996 0.1933566 0.1559504 0.1720417
max_per_class_error 0.11417 0.0161180 0.1071429 0.125 0.1530612 0.0947368 0.0909091
mcc 0.8823097 0.0141451 0.8774228 0.8537732 0.8707511 0.9077432 0.9018585
mean_per_class_accuracy 0.9331479 0.0080393 0.9335253 0.9203572 0.9203444 0.9463423 0.9451705
mean_per_class_error 0.0668521 0.0080393 0.0664747 0.0796429 0.0796556 0.0536577 0.0548295
mse 0.0501457 0.0042764 0.0544056 0.0572033 0.0529171 0.0411026 0.0451000
precision 0.9655963 0.0130557 0.9615384 0.9333333 0.9880952 0.9772728 0.9677419
r2 0.7870655 0.0187190 0.7765828 0.7499365 0.7753588 0.8244438 0.8090054
recall 0.88583 0.0161180 0.8928571 0.875 0.8469388 0.9052632 0.9090909
specificity 0.9804658 0.0069456 0.9741936 0.9657143 0.99375 0.9874214 0.98125

It looks like the winning model performs slightly better on the validation and test sets than during cross-validation on the training set as the mean AUC on the 5 folds is estimated to be only 97.04%, but with a fairly large standard deviation of 0.7%. For small datasets, such a large variance is not unusual. To get a better estimate of model performance, the Random hyper-parameter search could have used nfolds = 5 (or 10, or similar) in combination with 80% of the data for training (i.e., not holding out a validation set, but only the final test set). However, this would take more time, as nfolds+1 models will be built for every set of parameters.

Instead, to save time, let's just scan through the top 5 models and cross-validate their parameters with nfolds=5 on the entire dataset:


In [22]:
for i in range(5): 
    gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    #get the parameters from the Random grid search model and modify them slightly
    params = gbm.params
    new_params = {"nfolds":5, "model_id":None}
    for key in new_params.keys():
        params[key]['actual'] = new_params[key]
    new_model = H2OGradientBoostingEstimator()
    for key in params.keys():
        if key in dir(new_model) and getattr(new_model,key) != params[key]['actual']:
            setattr(new_model,key,params[key]['actual'])
    new_model.train(x = predictors, y = response, training_frame = df)  
    cv_summary = new_model.cross_validation_metrics_summary().as_data_frame()
    print(gbm.model_id)
    print(cv_summary.iloc[1]) ## AUC


gbm Model Build Progress: [##################################################] 100%
final_grid_model_45
                       auc
mean            0.97047126
sd            0.0070059407
cv_1_valid       0.9639977
cv_2_valid       0.9584524
cv_3_valid       0.9655612
cv_4_valid       0.9839788
cv_5_valid       0.9803662
Name: 1, dtype: object

gbm Model Build Progress: [##################################################] 100%
final_grid_model_68
                      auc
mean            0.9719337
sd            0.006935956
cv_1_valid      0.9698157
cv_2_valid     0.95497024
cv_3_valid      0.9716199
cv_4_valid     0.98106587
cv_5_valid       0.982197
Name: 1, dtype: object

gbm Model Build Progress: [##################################################] 100%
final_grid_model_96
                      auc
mean            0.9722442
sd            0.005797062
cv_1_valid      0.9639401
cv_2_valid     0.96068454
cv_3_valid       0.978125
cv_4_valid       0.978484
cv_5_valid      0.9799874
Name: 1, dtype: object

gbm Model Build Progress: [##################################################] 100%
final_grid_model_0
                      auc
mean           0.96713257
sd            0.008364663
cv_1_valid     0.96059906
cv_2_valid     0.94699407
cv_3_valid     0.97302294
cv_4_valid     0.97815293
cv_5_valid     0.97689396
Name: 1, dtype: object

gbm Model Build Progress: [##################################################] 100%
final_grid_model_38
                      auc
mean           0.97262114
sd            0.005390482
cv_1_valid     0.96797234
cv_2_valid      0.9598512
cv_3_valid      0.9776148
cv_4_valid     0.98020524
cv_5_valid      0.9774621
Name: 1, dtype: object

The avid reader might have noticed that we just implicitly did further parameter tuning using the "final" test set (which is part of the entire dataset df), which is not good practice - one is not supposed to use the "final" test set more than once. Hence, we're not going to pick a different "best" model, but we're just learning about the variance in AUCs. It turns out, for this tiny dataset, that the variance is rather large, which is not surprising.

Keeping the same "best" model, we can make test set predictions as follows:


In [23]:
preds = best_model.predict(test)
preds.head()


gbm prediction Progress: [##################################################] 100%
predict p0 p1
00.948394 0.0516064
00.940819 0.0591814
00.924897 0.0751026
10.01562490.984375
10.01161960.98838
00.854808 0.145192
10.04406460.955935
10.01159730.988403
10.05443650.945563
00.919649 0.0803507
Out[23]:

Note that the label (survived or not) is predicted as well (in the first predict column), and it uses the threshold with the highest F1 score (here: 0.528098) to make labels from the probabilities for survival (p1). The probability for death (p0) is given for convenience, as it is just 1-p1.


In [24]:
best_model.model_performance(valid)


ModelMetricsBinomial: gbm
** Reported on test data. **

MSE: 0.0490254194251
R^2: 0.792582001197
LogLoss: 0.183618009053
Mean Per-Class Error: 0.0666666666667
AUC: 0.97235841082
Gini: 0.94471682164

Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.492182327674: 
0 1 Error Rate
0 169.0 0.0 0.0 (0.0/169.0)
1 14.0 91.0 0.1333 (14.0/105.0)
Total 183.0 91.0 0.0511 (14.0/274.0)
Maximum Metrics: Maximum metrics at their respective thresholds

metric threshold value idx
max f1 0.4921823 0.9285714 90.0
max f2 0.1612587 0.9074074 119.0
max f0point5 0.4921823 0.9701493 90.0
max accuracy 0.4921823 0.9489051 90.0
max precision 0.9885634 1.0 0.0
max recall 0.0501451 1.0 199.0
max specificity 0.9885634 1.0 0.0
max absolute_MCC 0.4921823 0.8946308 90.0
max min_per_class_accuracy 0.2158661 0.9053254 111.0
max mean_per_class_accuracy 0.4921823 0.9333333 90.0
Gains/Lift Table: Avg response rate: 38.32 %

group cumulative_data_fraction lower_threshold lift cumulative_lift response_rate cumulative_response_rate capture_rate cumulative_capture_rate gain cumulative_gain
1 0.0109489 0.9878848 2.6095238 2.6095238 1.0 1.0 0.0285714 0.0285714 160.9523810 160.9523810
2 0.0218978 0.9871058 2.6095238 2.6095238 1.0 1.0 0.0285714 0.0571429 160.9523810 160.9523810
3 0.0328467 0.9866763 2.6095238 2.6095238 1.0 1.0 0.0285714 0.0857143 160.9523810 160.9523810
4 0.0401460 0.9858998 2.6095238 2.6095238 1.0 1.0 0.0190476 0.1047619 160.9523810 160.9523810
5 0.0510949 0.9856960 2.6095238 2.6095238 1.0 1.0 0.0285714 0.1333333 160.9523810 160.9523810
6 0.1021898 0.9798586 2.6095238 2.6095238 1.0 1.0 0.1333333 0.2666667 160.9523810 160.9523810
7 0.1496350 0.9731845 2.6095238 2.6095238 1.0 1.0 0.1238095 0.3904762 160.9523810 160.9523810
8 0.2007299 0.9548071 2.6095238 2.6095238 1.0 1.0 0.1333333 0.5238095 160.9523810 160.9523810
9 0.2992701 0.8924580 2.6095238 2.6095238 1.0 1.0 0.2571429 0.7809524 160.9523810 160.9523810
10 0.4014599 0.2327126 1.2115646 2.2536797 0.4642857 0.8636364 0.1238095 0.9047619 21.1564626 125.3679654
11 0.5 0.1163121 0.4832451 1.9047619 0.1851852 0.7299270 0.0476190 0.9523810 -51.6754850 90.4761905
12 0.5985401 0.0723806 0.1932981 1.6229965 0.0740741 0.6219512 0.0190476 0.9714286 -80.6701940 62.2996516
13 0.7007299 0.0526387 0.1863946 1.4134921 0.0714286 0.5416667 0.0190476 0.9904762 -81.3605442 41.3492063
14 0.7992701 0.0440208 0.0966490 1.2511416 0.0370370 0.4794521 0.0095238 1.0 -90.3350970 25.1141553
15 0.8978102 0.0369130 0.0 1.1138211 0.0 0.4268293 0.0 1.0 -100.0 11.3821138
16 1.0 0.0165575 0.0 1.0 0.0 0.3832117 0.0 1.0 -100.0 0.0

Out[24]:

You can also see the "best" model in more detail in Flow:

The model and the predictions can be saved to file as follows:


In [25]:
# uncomment if you want to export the best model
# h2o.save_model(best_model, "/tmp/bestModel.csv", force=True)
# h2o.export_file(preds, "/tmp/bestPreds.csv", force=True)

In [27]:
# print pojo to screen, or provide path to download location
# h2o.download_pojo(best_model)

The model can also be exported as a plain old Java object (POJO) for H2O-independent (standalone/Storm/Kafka/UDF) scoring in any Java environment.

/*
 Licensed under the Apache License, Version 2.0
    http://www.apache.org/licenses/LICENSE-2.0.html

  AUTOGENERATED BY H2O at 2016-07-17T18:38:50.337-07:00
  3.8.3.3

  Standalone prediction code with sample test data for GBMModel named final_grid_model_45

  How to download, compile and execute:
      mkdir tmpdir
      cd tmpdir
      curl http://127.0.0.1:54321/3/h2o-genmodel.jar > h2o-genmodel.jar
      curl http://127.0.0.1:54321/3/Models.java/final_grid_model_45 > final_grid_model_45.java
      javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m final_grid_model_45.java

     (Note:  Try java argument -XX:+PrintCompilation to show runtime JIT compiler behavior.)
*/
import java.util.Map;
import hex.genmodel.GenModel;
import hex.genmodel.annotations.ModelPojo;

...
class final_grid_model_45_Tree_0_class_0 {
  static final double score0(double[] data) {
    double pred =      (Double.isNaN(data[1]) || !GenModel.bitSetContains(GRPSPLIT0, 0, data[1 /* sex */]) ? 
         (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT1, 13, data[7 /* cabin */]) ? 
             (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT2, 9, data[7 /* cabin */]) ? 
                 (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT3, 9, data[7 /* cabin */]) ? 
                     (data[2 /* age */] <1.4174492f ? 
                        0.13087687f : 
                         (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT4, 9, data[7 /* cabin */]) ? 
                             (Double.isNaN(data[3]) || data[3 /* sibsp */] <1.000313f ? 
                                 (data[6 /* fare */] <7.91251f ? 
                                     (Double.isNaN(data[5]) || data[5 /* ticket */] <368744.5f ? 
                                        -0.08224204f : 
                                         (Double.isNaN(data[2]) || data[2 /* age */] <13.0f ? 
                                            -0.028962314f : 
                                            -0.08224204f)) : 
                                     (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT5, 9, data[7 /* cabin */]) ? 
                                         (data[6 /* fare */] <7.989957f ? 
                                             (Double.isNaN(data[3]) || data[3 /* sibsp */] <0.0017434144f ? 
                                                0.07759714f : 
                                                0.13087687f) : 
                                             (data[6 /* fare */] <12.546303f ? 
                                                -0.07371729f : 
                                                 (Double.isNaN(data[4]) || data[4 /* parch */] <1.0020853f ? 
                                                    -0.037374903f : 
                                                    -0.08224204f))) : 
                                        0.0f)) : 
                                -0.08224204f) : 
                            0.0f)) : 
                    0.0f) : 
                -0.08224204f) : 
            -0.08224204f) :  
...

Ensembling Techniques

After learning above that the variance of the test set AUC of the top few models was rather large, we might be able to turn this into our advantage by using ensembling techniques. The simplest one is taking the average of the predictions (survival probabilities) of the top k grid search model predictions (here, we use k=10):


In [28]:
prob = None
k=10
for i in range(0,k): 
    gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    if (prob is None):
        prob = gbm.predict(test)["p1"]
    else:
        prob = prob + gbm.predict(test)["p1"]
prob = prob/k


gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

gbm prediction Progress: [##################################################] 100%

We now have a blended probability of survival for each person on the Titanic.


In [29]:
prob.head()


p1
0.0596246
0.0511568
0.115645
0.977491
0.981366
0.210353
0.948326
0.976989
0.944669
0.107771
Out[29]:

We can bring those ensemble predictions to our Python session's memory space and use other Python packages.


In [30]:
from sklearn.metrics import roc_auc_score
# convert prob and test[response] h2oframes to pandas' frames and then convert them each to numpy array
np_array_prob = prob.as_data_frame().as_matrix()
np_array_test = test[response].as_data_frame().as_matrix()
probInPy = np_array_prob
labeInPy = np_array_test
# compare true scores (test[response]) to probability scores (prob)
roc_auc_score(labeInPy, probInPy)


Out[30]:
0.98202722347033167

This simple blended ensemble test set prediction has an even higher AUC than the best single model, but we need to do more validation studies, ideally using cross-validation. We leave this as an exercise for the reader - take the parameters of the top 10 models, retrain them with nfolds=5 on the full dataset, set keep_holdout_predictions=True and sum up their predicted probabilities, then score that with sklearn's roc_auc_score as shown above.

For more sophisticated ensembling approaches, such as stacking via a superlearner, we refer to the H2O Ensemble github page.

Summary

We learned how to build H2O GBM models for a binary classification task on a small but realistic dataset with numerical and categorical variables, with the goal to maximize the AUC (ranges from 0.5 to 1). We first established a baseline with the default model, then carefully tuned the remaining hyper-parameters without "too much" human guess-work. We used both Cartesian and Random hyper-parameter searches to find good models. We were able to get the AUC on a holdout test set from the low 94% range with the default model to the mid 97% range after tuning, and to above 98% with some simple ensembling technique known as blending. We performed simple cross-validation variance analysis to learn that results were slightly "lucky" due to the specific train/valid/test set splits, and settled to expect 97% AUCs instead.

Note that this script and the findings therein are directly transferrable to large datasets on distributed clusters including Spark/Hadoop environments.

More information can be found here http://www.h2o.ai/docs/.