Gradient Boosting Machines




Introduction

Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.

It is recommended that you read through the accompanying Classification and Regression Trees Tutorial for an overview of decision trees.

History

Boosting is one of the most powerful learning ideas introduced in the last twenty years. It was originally designed for classification problems, but it can be extended to regression as well. The motivation for boosting was a procedure that combines the outputs of many "weak" classifiers to produce a powerful "committee." A weak classifier (e.g. decision tree) is one whose error rate is only slightly better than random guessing.

AdaBoost, short for "Adaptive Boosting", is a machine learning meta-algorithm formulated by Yoav Freund and Robert Schapire in 1996; it is now considered a special case of Gradient Boosting. There are some differences between the AdaBoost algorithm and modern Gradient Boosting: in AdaBoost, the "shortcomings" of the existing weak learners are identified by high-weight data points, whereas in Gradient Boosting, the shortcomings are identified by gradients.

The idea of gradient boosting originated in the observation by Leo Breiman that boosting can be interpreted as an optimization algorithm on a suitable cost function. Explicit regression gradient boosting algorithms were subsequently developed by Jerome H. Friedman, simultaneously with the more general functional gradient boosting perspective of Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean. The latter two papers introduced the abstract view of boosting algorithms as iterative functional gradient descent algorithms; that is, algorithms that optimize a cost function over function space by iteratively choosing a function (weak hypothesis) that points in the negative gradient direction. This functional gradient view of boosting has led to the development of boosting algorithms in many areas of machine learning and statistics beyond regression and classification.

In general, in terms of model performance, we have the following hierarchy:

$$Boosting > Random \: Forest > Bagging > Single \: Tree$$

Boosting

As described in the introduction, gradient boosting produces an ensemble of weak prediction models (typically decision trees), built in a stage-wise fashion and generalized to allow optimization of an arbitrary differentiable loss function.

The purpose of boosting is to sequentially apply the weak classification algorithm to repeatedly modified versions of the data, thereby producing a sequence of weak classifiers $G_m(x)$, $m = 1, 2, ... , M$.

Stagewise Additive Modeling

Boosting builds an additive model:

$$F(x) = \sum_{m=1}^M \beta_m b(x; \gamma_m)$$

where $b(x; \gamma_m)$ is a tree and $\gamma_m$ parameterizes the splits. With boosting, the parameters $(\beta_m, \gamma_m)$ are fit in a stagewise fashion. This slows the learning process down, so the model overfits less quickly.
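To make the stagewise idea concrete, here is a minimal R sketch (on simulated data, using rpart as the weak learner; none of this comes from the tutorial dataset) that repeatedly fits a shallow tree to the residuals of the current model under squared-error loss and adds a shrunken copy of it to the ensemble:

library(rpart)

set.seed(1)
n <- 500
df <- data.frame(x = runif(n, 0, 2 * pi))
df$y <- sin(df$x) + rnorm(n, sd = 0.3)

M <- 100                      # number of boosting stages
lambda <- 0.1                 # shrinkage (learning rate)
F_hat <- rep(mean(df$y), n)   # stage 0: a constant model

for (m in 1:M) {
  df$r <- df$y - F_hat        # residuals = negative gradient of squared-error loss
  tree <- rpart(r ~ x, data = df,
                control = rpart.control(maxdepth = 2, cp = 0))  # weak learner ("shrub")
  F_hat <- F_hat + lambda * predict(tree, df)                   # stagewise additive update
}

mean((df$y - F_hat)^2)        # training MSE shrinks as stages are added

Each fitted tree plays the role of $b(x; \gamma_m)$ above, with the shrinkage $\lambda$ standing in for a small, fixed $\beta_m$.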

AdaBoost

  • AdaBoost builds an additive logistic regression model by stagewise fitting.
  • AdaBoost uses an exponential loss function of the form $L(y, F(x)) = \exp(-yF(x))$, similar to the negative binomial log-likelihood loss.
  • The principal attraction of the exponential loss in the context of additive modeling is computational: it leads to the simple, modular reweighting scheme that defines the AdaBoost algorithm.
  • Instead of fitting trees to residuals, the special form of the exponential loss function in AdaBoost leads to fitting trees to weighted versions of the original data (a small sketch of this reweighting loop follows the figures below).

(AdaBoost algorithm figures omitted. Source: Elements of Statistical Learning)
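To illustrate the reweighting, here is a small, self-contained R sketch of the discrete AdaBoost loop on simulated data, using rpart stumps as the weak learners. This is an illustration of the update rule only, not a reproduction of the ESL algorithm box, and the data and names are made up for the example:

library(rpart)

set.seed(1)
n <- 500
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
y <- ifelse(X$x1^2 + X$x2^2 > 2, 1, -1)       # labels coded as {-1, +1}

M <- 50
w <- rep(1 / n, n)                            # observation weights
alpha <- numeric(M)
stumps <- vector("list", M)

for (m in 1:M) {
  fit <- rpart(factor(y) ~ ., data = X, weights = w,
               control = rpart.control(maxdepth = 1, cp = 0))  # weak learner (stump)
  pred <- ifelse(predict(fit, X, type = "class") == "1", 1, -1)
  err <- sum(w * (pred != y)) / sum(w)        # weighted misclassification error
  alpha[m] <- log((1 - err) / err)            # weight of this classifier in the committee
  w <- w * exp(alpha[m] * (pred != y))        # upweight the points this stump got wrong
  w <- w / sum(w)
  stumps[[m]] <- fit
}

# Final prediction: sign of the weighted committee vote
vote <- Reduce(`+`, lapply(1:M, function(m) {
  alpha[m] * ifelse(predict(stumps[[m]], X, type = "class") == "1", 1, -1)
}))
mean(sign(vote) == y)                         # training accuracy of the committee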

Gradient Boosting Algorithm

  • As noted in the History section, gradient boosting grew out of Breiman's observation that boosting can be interpreted as an optimization algorithm on a suitable cost function, and out of the functional gradient descent perspective of Friedman, Mason, Baxter, Bartlett and Frean.
  • Under that view, each boosting iteration fits a weak learner (here, a regression tree) to the negative gradient of the loss function evaluated at the current model's predictions, and then adds it to the ensemble.

Friedman's Gradient Boosting Algorithm for a generic loss function, $L(y_i, \gamma)$:

(Algorithm figure omitted. Source: Elements of Statistical Learning)
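In outline, the generic gradient tree boosting algorithm shown in that figure proceeds as follows (a paraphrase of ESL's Algorithm 10.3):

1. Initialize $f_0(x) = \arg\min_{\gamma} \sum_{i=1}^N L(y_i, \gamma)$.
2. For $m = 1, \ldots, M$:
   a. Compute the pseudo-residuals $r_{im} = -\left[ \frac{\partial L(y_i, f(x_i))}{\partial f(x_i)} \right]_{f = f_{m-1}}$ for $i = 1, \ldots, N$.
   b. Fit a regression tree to the targets $r_{im}$, giving terminal regions $R_{jm}$, $j = 1, \ldots, J_m$.
   c. For each region, compute $\gamma_{jm} = \arg\min_{\gamma} \sum_{x_i \in R_{jm}} L(y_i, f_{m-1}(x_i) + \gamma)$.
   d. Update $f_m(x) = f_{m-1}(x) + \sum_{j=1}^{J_m} \gamma_{jm} I(x \in R_{jm})$.
3. Output $\hat{f}(x) = f_M(x)$.

In practice the update in step 2d is also multiplied by a learning rate (shrinkage) $\lambda$, discussed in the next section.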

Loss Functions and Gradients

(Loss function and gradient table omitted. Source: Elements of Statistical Learning)
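As a brief reconstruction of that table, the most common loss functions and their negative gradients (the working responses that each new tree is fit to) are:

$$
\begin{array}{lll}
\text{Setting} & \text{Loss } L(y, f(x)) & \text{Negative gradient } -\partial L / \partial f(x) \\
\text{Regression} & \tfrac{1}{2}\,(y - f(x))^2 & y - f(x) \\
\text{Regression} & |y - f(x)| & \mathrm{sign}\,(y - f(x)) \\
\text{Classification} & \text{binomial deviance} & y - p(x), \quad p(x) = 1/(1 + e^{-f(x)}),\ y \in \{0, 1\}
\end{array}
$$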

The optimal number of iterations, T, and the learning rate, λ, depend on each other: a smaller learning rate generally requires more iterations (trees) to reach the same level of fit.

Stochastic GBM

Friedman (2002) proposed the stochastic gradient boosting algorithm, which simply samples a subset of the training data uniformly, without replacement, before estimating each gradient step. He found that this additional step greatly improved performance.

Some implementations of Stochastic GBM have both column and row sampling (per split and per tree) for better generalization. XGBoost and H2O are two implementations that have per-tree column and row sampling. This is one reason that these implementations are popular among competitive data mining competitors (e.g. on Kaggle).
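For reference, the row and column sampling controls map onto the following arguments in the three implementations covered later in this tutorial (a quick, non-exhaustive summary):

# gbm:      bag.fraction                                (row sampling per tree; no column sampling)
# xgboost:  subsample, colsample_bytree, colsample_bylevel
# h2o.gbm:  sample_rate, col_sample_rate (per split), col_sample_rate_per_tree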

Practical Tips

  • It's common to grow shallower trees ("shrubs" or "stumps") in GBM than in Random Forest.
  • It's useful to try a variety of column sample (and column sample per tree) rates.
  • Don't assume that the set of optimal tuning parameters for one implementation of GBM will carry over and also be optimal in a different GBM implementation.

GBM Software in R

This is not a comprehensive list of GBM software in R; rather, we detail a few of the most popular implementations below: gbm, xgboost and h2o.

The CRAN Machine Learning Task View lists the following projects as well. The Hinge-loss is optimized by the boosting implementation in package bst. Package GAMBoost can be used to fit generalized additive models by a boosting algorithm. An extensible boosting framework for generalized linear, additive and nonparametric models is available in package mboost. Likelihood-based boosting for Cox models is implemented in CoxBoost and for mixed models in GMMBoost. GAMLSS models can be fitted using boosting by gamboostLSS.

gbm

Authors: Originally written by Greg Ridgeway, added to by various authors, currently maintained by Harry Southworth

Backend: C++

The gbm R package is an implementation of extensions to Freund and Schapire's AdaBoost algorithm and Friedman's gradient boosting machine. This is the original R implementation of GBM. A presentation by Mark Landry is available here.

Features:

  • Stochastic GBM.
  • Supports up to 1024 factor levels.
  • Supports classification and regression trees.
  • Includes regression methods for:
    • least squares
    • absolute loss
    • t-distribution loss
    • quantile regression
    • logistic
    • multinomial logistic
    • Poisson
    • Cox proportional hazards partial likelihood
    • AdaBoost exponential loss
    • Huberized hinge loss
    • Learning to Rank measures (LambdaMart)
  • Out-of-bag estimator for the optimal number of iterations is provided.
  • Easy to overfit since early stopping functionality is not automated in this package.
  • If internal cross-validation is used, this can be parallelized to all cores on the machine.
  • Currently undergoing a major refactoring & rewrite (and has been for some time).
  • GPL-2/3 License.

In [1]:
#install.packages("gbm")
#install.packages("cvAUC")
library(gbm)
library(cvAUC)


Loading required package: survival
Loading required package: lattice
Loading required package: splines
Loading required package: parallel
Loaded gbm 2.1.3
Loading required package: ROCR
Loading required package: gplots

Attaching package: ‘gplots’

The following object is masked from ‘package:stats’:

    lowess

Loading required package: data.table
 
cvAUC version: 1.1.0
Notice to cvAUC users: Major speed improvements in version 1.1.0
 

In [2]:
# Load 2-class HIGGS dataset
train <- read.csv("higgs_train_10k.csv")
test <- read.csv("higgs_test_5k.csv")

In [3]:
set.seed(1)
model <- gbm(formula = response ~ ., 
             distribution = "bernoulli",
             data = train,
             n.trees = 70,
             interaction.depth = 5,
             shrinkage = 0.3,
             bag.fraction = 0.5,
             train.fraction = 1.0,
             n.cores = NULL)  #will use all cores by default

In [4]:
print(model)


gbm(formula = response ~ ., distribution = "bernoulli", data = train, 
    n.trees = 70, interaction.depth = 5, shrinkage = 0.3, bag.fraction = 0.5, 
    train.fraction = 1, n.cores = NULL)
A gradient boosted model with bernoulli loss function.
70 iterations were performed.
There were 28 predictors of which 28 had non-zero influence.

In [5]:
# Generate predictions on test dataset
preds <- predict(model, newdata = test, n.trees = 70)
labels <- test[ , "response"]

# Compute AUC on the test set
cvAUC::AUC(predictions = preds, labels = labels)


0.774160905116416
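Since gbm does not stop early on its own, the out-of-bag estimator mentioned in the feature list can be used to choose the number of trees after training. A quick sketch (gbm will warn that the OOB estimate tends to be conservative):

# Estimate the optimal number of iterations from the out-of-bag improvement
best_iter <- gbm.perf(model, method = "OOB", plot.it = FALSE)
print(best_iter)

# Re-generate test set predictions using only the selected number of trees
preds_best <- predict(model, newdata = test, n.trees = best_iter)
cvAUC::AUC(predictions = preds_best, labels = labels)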

xgboost

Authors: Tianqi Chen, Tong He, Michael Benesty

Backend: C++

The xgboost R package provides an R API to "Extreme Gradient Boosting", which is an efficient implementation of the gradient boosting framework. A parameter tuning guide and more resources are available here. The xgboost package is quite popular on Kaggle for data mining competitions.

Features:

  • Stochastic GBM with column and row sampling (per split and per tree) for better generalization.
  • Includes efficient linear model solver and tree learning algorithms.
  • Parallel computation on a single machine.
  • Supports various objective functions, including regression, classification and ranking.
  • The package is made to be extensible, so that users are also allowed to define their own objectives easily (see the short sketch after this list).
  • Apache 2.0 License.
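To illustrate the extensibility point above, a custom objective in xgboost is just an R function that returns the gradient and Hessian of the loss with respect to the raw predictions. The sketch below re-implements the built-in logistic loss by hand on toy data; the names logregobj, Xm, yv and dtoy are illustrative only:

library(xgboost)

# Toy data purely for illustration
set.seed(1)
Xm <- matrix(rnorm(200 * 4), ncol = 4)
yv <- as.numeric(Xm[, 1] + rnorm(200) > 0)
dtoy <- xgb.DMatrix(Xm, label = yv)

# Hand-written logistic loss: gradient and hessian w.r.t. the raw scores (log-odds)
logregobj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  p <- 1 / (1 + exp(-preds))
  list(grad = p - labels, hess = p * (1 - p))
}

# Pass the custom objective via the `obj` argument in place of a built-in one
fit_custom <- xgb.train(params = list(eta = 0.3, max_depth = 3),
                        data = dtoy,
                        nrounds = 10,
                        obj = logregobj)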

In [6]:
#install.packages("xgboost")
#install.packages("cvAUC")
library(xgboost)
library(Matrix)
library(cvAUC)

In [7]:
# Load 2-class HIGGS dataset
train <- read.csv("higgs_train_10k.csv")
test <- read.csv("higgs_test_5k.csv")

In [8]:
# Set seed because we column-sample
set.seed(1)

y <- "response"
train.mx <- sparse.model.matrix(response ~ ., train)
test.mx <- sparse.model.matrix(response ~ ., test)
dtrain <- xgb.DMatrix(train.mx, label = train[ , y])
dtest <- xgb.DMatrix(test.mx, label = test[ , y])

train.gdbt <- xgb.train(params = list(objective = "binary:logistic",
                                      #num_class = 2,
                                      #eval_metric = "mlogloss",
                                      eta = 0.3,
                                      max_depth = 5,
                                      subsample = 1,
                                      colsample_bytree = 0.5),
                        data = dtrain,
                        nrounds = 70,
                        watchlist = list(train = dtrain, test = dtest))


[1]	train-error:0.410100	test-error:0.427000 
[2]	train-error:0.375900	test-error:0.405200 
[3]	train-error:0.383100	test-error:0.407000 
[4]	train-error:0.363500	test-error:0.400200 
[5]	train-error:0.359900	test-error:0.398600 
[6]	train-error:0.356000	test-error:0.398400 
[7]	train-error:0.348000	test-error:0.397600 
[8]	train-error:0.344700	test-error:0.394200 
[9]	train-error:0.337600	test-error:0.390800 
[10]	train-error:0.335300	test-error:0.391200 
[11]	train-error:0.328600	test-error:0.389600 
[12]	train-error:0.328500	test-error:0.386200 
[13]	train-error:0.322100	test-error:0.388400 
[14]	train-error:0.315200	test-error:0.388400 
[15]	train-error:0.314900	test-error:0.388600 
[16]	train-error:0.310500	test-error:0.388000 
[17]	train-error:0.309200	test-error:0.389400 
[18]	train-error:0.308000	test-error:0.389400 
[19]	train-error:0.306200	test-error:0.388800 
[20]	train-error:0.303800	test-error:0.388800 
[21]	train-error:0.300700	test-error:0.389400 
[22]	train-error:0.299100	test-error:0.390200 
[23]	train-error:0.296000	test-error:0.391600 
[24]	train-error:0.294900	test-error:0.393600 
[25]	train-error:0.290900	test-error:0.391800 
[26]	train-error:0.290400	test-error:0.392200 
[27]	train-error:0.288500	test-error:0.392000 
[28]	train-error:0.285800	test-error:0.390400 
[29]	train-error:0.281200	test-error:0.389400 
[30]	train-error:0.279800	test-error:0.389800 
[31]	train-error:0.279000	test-error:0.388000 
[32]	train-error:0.278400	test-error:0.389200 
[33]	train-error:0.273300	test-error:0.388600 
[34]	train-error:0.263400	test-error:0.386600 
[35]	train-error:0.261300	test-error:0.387200 
[36]	train-error:0.261000	test-error:0.387200 
[37]	train-error:0.254500	test-error:0.385800 
[38]	train-error:0.253600	test-error:0.386000 
[39]	train-error:0.251500	test-error:0.386400 
[40]	train-error:0.249100	test-error:0.387400 
[41]	train-error:0.243700	test-error:0.388000 
[42]	train-error:0.241400	test-error:0.386600 
[43]	train-error:0.241100	test-error:0.386200 
[44]	train-error:0.235700	test-error:0.390400 
[45]	train-error:0.230800	test-error:0.390400 
[46]	train-error:0.228900	test-error:0.387800 
[47]	train-error:0.229000	test-error:0.388600 
[48]	train-error:0.227600	test-error:0.389000 
[49]	train-error:0.227700	test-error:0.388400 
[50]	train-error:0.227200	test-error:0.388400 
[51]	train-error:0.227200	test-error:0.389600 
[52]	train-error:0.223300	test-error:0.389800 
[53]	train-error:0.218500	test-error:0.395800 
[54]	train-error:0.216900	test-error:0.393400 
[55]	train-error:0.212900	test-error:0.392200 
[56]	train-error:0.213100	test-error:0.392000 
[57]	train-error:0.210900	test-error:0.394000 
[58]	train-error:0.207300	test-error:0.394200 
[59]	train-error:0.203500	test-error:0.395400 
[60]	train-error:0.199800	test-error:0.394800 
[61]	train-error:0.198600	test-error:0.395400 
[62]	train-error:0.197200	test-error:0.394800 
[63]	train-error:0.196600	test-error:0.395800 
[64]	train-error:0.196600	test-error:0.395600 
[65]	train-error:0.195900	test-error:0.395800 
[66]	train-error:0.193600	test-error:0.391600 
[67]	train-error:0.192100	test-error:0.395000 
[68]	train-error:0.188900	test-error:0.393800 
[69]	train-error:0.187500	test-error:0.394800 
[70]	train-error:0.187100	test-error:0.395200 

In [9]:
# Generate predictions on test dataset
preds <- predict(train.gdbt, newdata = dtest)
labels <- test[ , y]

# Compute AUC on the test set
cvAUC::AUC(predictions = preds, labels = labels)


0.639851345970536
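The training log above shows the test error leveling off and then creeping back up after roughly 40 rounds, a sign of overfitting. One way to guard against this in xgboost is early stopping. The sketch below is illustrative only: in practice you would monitor a separate validation set rather than the test set, and the best_iteration field name can vary slightly across xgboost versions.

# Stop adding trees once the held-out AUC has not improved for 5 rounds
set.seed(1)
fit_es <- xgb.train(params = list(objective = "binary:logistic",
                                  eval_metric = "auc",
                                  eta = 0.3,
                                  max_depth = 5,
                                  colsample_bytree = 0.5),
                    data = dtrain,
                    nrounds = 500,
                    watchlist = list(train = dtrain, eval = dtest),
                    early_stopping_rounds = 5)

fit_es$best_iteration   # number of rounds actually selected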

In [10]:
#Advanced functionality of xgboost
#install.packages("Ckmeans.1d.dp")
library(Ckmeans.1d.dp)

# Compute feature importance matrix
names <- dimnames(data.matrix(train[ , -1]))[[2]]
importance_matrix <- xgb.importance(names, model = train.gdbt)

# Plot feature importance
xgb.plot.importance(importance_matrix[1:10,])


h2o

Authors: Arno Candel, Cliff Click, H2O.ai contributors

Backend: Java

See the H2O GBM Tuning Guide by Arno Candel and the H2O GBM Vignette.

Features:

  • Distributed and parallelized computation on either a single node or a multi-node cluster.
  • Automatic early stopping based on convergence of user-specified metrics to a user-specified relative tolerance.
  • Stochastic GBM with column and row sampling (per split and per tree) for better generalization.
  • Support for exponential family distributions (Poisson, Gamma, Tweedie) and additional loss functions such as quantile regression (including Laplace), in addition to binomial (Bernoulli), Gaussian and multinomial distributions.
  • Grid search for hyperparameter optimization and model selection.
  • Data-distributed, which means the entire dataset does not need to fit into memory on a single node, hence scales to any size training set.
  • Uses histogram approximations of continuous variables for speedup.
  • Uses dynamic binning - bin limits are reset at each tree level based on the split bins' min and max values discovered during the last pass.
  • Uses squared error to determine optimal splits.
  • Distributed implementation details outlined in a blog post by Cliff Click.
  • Unlimited factor levels.
  • Multiclass trees (one for each class) built in parallel with each other.
  • Apache 2.0 Licensed.
  • Model export in plain Java code for deployment in production environments.
  • GUI for training & model eval/viz (H2O Flow).

In [11]:
#install.packages("h2o")
library(h2o)
#h2o.shutdown(prompt = FALSE)  #if required
h2o.init(nthreads = -1)  #Start a local H2O cluster using nthreads = num available cores


----------------------------------------------------------------------

Your next step is to start H2O:
    > h2o.init()

For H2O package documentation, ask for help:
    > ??h2o

After starting H2O, you can use the Web UI at http://localhost:54321
For more information visit http://docs.h2o.ai

----------------------------------------------------------------------


Attaching package: ‘h2o’

The following objects are masked from ‘package:data.table’:

    hour, month, week, year

The following objects are masked from ‘package:stats’:

    cor, sd, var

The following objects are masked from ‘package:base’:

    &&, %*%, %in%, ||, apply, as.factor, as.numeric, colnames,
    colnames<-, ifelse, is.character, is.factor, is.numeric, log,
    log10, log1p, log2, round, signif, trunc

 Connection successful!

R is connected to the H2O cluster: 
    H2O cluster uptime:         3 minutes 44 seconds 
    H2O cluster version:        3.10.5.3 
    H2O cluster version age:    1 month and 8 days  
    H2O cluster name:           H2O_started_from_R_robertstevens_ppr077 
    H2O cluster total nodes:    1 
    H2O cluster total memory:   3.27 GB 
    H2O cluster total cores:    8 
    H2O cluster allowed cores:  8 
    H2O cluster healthy:        TRUE 
    H2O Connection ip:          localhost 
    H2O Connection port:        54321 
    H2O Connection proxy:       NA 
    H2O Internal Security:      FALSE 
    R Version:                  R version 3.4.1 (2017-06-30) 


In [13]:
# Load 2-class HIGGS dataset
train <- h2o.importFile("higgs_train_10k.csv")
test <- h2o.importFile("higgs_test_5k.csv")
dim(train)
dim(test)


  |======================================================================| 100%
  |======================================================================| 100%
[1] 10000    29
[1]  5000    29

In [14]:
# Identify the response column
y <- "response"

# Identify the predictor columns
x <- setdiff(names(train), y)

# Convert response to factor
train[ , y] <- as.factor(train[ , y])
test[ , y] <- as.factor(test[ , y])

In [15]:
# Train an H2O GBM model
model <- h2o.gbm(x = x,
                 y = y,
                 training_frame = train,
                 ntrees = 70,
                 learn_rate = 0.3,
                 sample_rate = 1.0,
                 max_depth = 5,
                 col_sample_rate_per_tree = 0.5,
                 seed = 1)


  |======================================================================| 100%

In [17]:
# Get model performance on a test set
perf <- h2o.performance(model, test)
perf


H2OBinomialMetrics: gbm

MSE:  0.1938163
RMSE:  0.4402457
LogLoss:  0.5701889
Mean Per-Class Error:  0.3376409
AUC:  0.7735663
Gini:  0.5471326

Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
          0    1    Error        Rate
0       994 1321 0.570626  =1321/2315
1       281 2404 0.104655   =281/2685
Totals 1275 3725 0.320400  =1602/5000

Maximum Metrics: Maximum metrics at their respective thresholds
                        metric threshold    value idx
1                       max f1  0.306264 0.750078 288
2                       max f2  0.110686 0.861722 364
3                 max f0point5  0.582499 0.733454 170
4                 max accuracy  0.427211 0.704600 237
5                max precision  0.946492 0.966102  14
6                   max recall  0.017942 1.000000 397
7              max specificity  0.995085 0.999568   0
8             max absolute_mcc  0.582499 0.407274 170
9   max min_per_class_accuracy  0.514693 0.702376 198
10 max mean_per_class_accuracy  0.582499 0.702967 170

Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`

In [18]:
# To retrieve individual metrics
h2o.auc(perf)


0.773566288998556

In [19]:
# Print confusion matrix
h2o.confusionMatrix(perf)


          0    1     Error       Rate
0       994 1321 0.5706263 =1321/2315
1       281 2404 0.1046555  =281/2685
Totals 1275 3725 0.3204000 =1602/5000

In [20]:
# Plot scoring history over time
plot(model)



In [21]:
# Retrieve feature importance
vi <- h2o.varimp(model)
vi[1:10, ]


variable relative_importance scaled_importance percentage
x26      541.3356            1.0000000         0.20010571
x28      229.3425            0.4236604         0.08477688
x25      202.5561            0.3741784         0.07487524
x6       158.7745            0.2933014         0.05869130
x23      154.0504            0.2845746         0.05694500
x27      150.6403            0.2782753         0.05568448
x4       137.7830            0.2545241         0.05093174
x10      111.8098            0.2065443         0.04133069
x1       105.7128            0.1952814         0.03907693
x22       98.4624            0.1818879         0.03639681

In [22]:
# Plot feature importance
barplot(vi$scaled_importance,
        names.arg = vi$variable,
        space = 1,
        las = 2,
        main = "Variable Importance: H2O GBM")


Note that all models, data and model metrics can be viewed via the H2O Flow GUI, which should already be running since you started the H2O cluster with h2o.init().


In [77]:
# Early stopping example
# Keep in mind that when you use early stopping, you should pass a validation set
# Since the validation set is used to determine the stopping point, a separate test set should be used for model eval

#fit <- h2o.gbm(x = x,
#               y = y,
#               training_frame = train,
#               model_id = "gbm_fit3",
#               validation_frame = valid,  #only used if stopping_rounds > 0
#               ntrees = 500,
#               score_tree_interval = 5,      #used for early stopping
#               stopping_rounds = 3,          #used for early stopping
#               stopping_metric = "misclassification", #used for early stopping
#               stopping_tolerance = 0.0005,  #used for early stopping
#               seed = 1)

In [23]:
# GBM hyperparameters
gbm_params <- list(learn_rate = seq(0.01, 0.1, 0.01),
                   max_depth = seq(2, 10, 1),
                   sample_rate = seq(0.5, 1.0, 0.1),
                   col_sample_rate = seq(0.1, 1.0, 0.1))
search_criteria <- list(strategy = "RandomDiscrete", 
                         max_models = 20)

# Train and validate a grid of GBMs
gbm_grid <- h2o.grid("gbm", x = x, y = y,
                      grid_id = "gbm_grid",
                      training_frame = train,
                      validation_frame = test,  #test frame will only be used to calculate metrics
                      ntrees = 70,
                      seed = 1,
                      hyper_params = gbm_params,
                      search_criteria = search_criteria)

gbm_gridperf <- h2o.getGrid(grid_id = "gbm_grid", 
                            sort_by = "auc", 
                            decreasing = TRUE)
gbm_gridperf


  |======================================================================| 100%
H2O Grid Details
================

Grid ID: gbm_grid 
Used hyper parameters: 
  -  col_sample_rate 
  -  learn_rate 
  -  max_depth 
  -  sample_rate 
Number of models: 20 
Number of failed models: 0 

Hyper-Parameter Search Summary: ordered by decreasing auc
   col_sample_rate learn_rate max_depth sample_rate         model_ids                auc
1              0.6       0.05         8         0.6 gbm_grid_model_16 0.7853426966066179
2              0.6       0.05         9         0.9 gbm_grid_model_12 0.7838682223857846
3              0.3       0.06         7         0.6 gbm_grid_model_17 0.7833193769079478
4              0.3       0.05         8         0.9 gbm_grid_model_10 0.7825367713599671
5              0.3       0.01         9         0.8  gbm_grid_model_2 0.7805576134914793
6              0.9       0.03         6         0.6 gbm_grid_model_13 0.7802882987238116
7              0.9       0.02         8         0.5  gbm_grid_model_0 0.7801650638898608
8              0.8       0.07        10         0.8  gbm_grid_model_5  0.779965410588382
9              0.3       0.09         8         0.9  gbm_grid_model_8 0.7777372893967366
10             0.3       0.03         6         0.9 gbm_grid_model_15 0.7765919776697193
11             0.5       0.02         6         0.9 gbm_grid_model_14 0.7750927277773085
12             0.1        0.1         9         0.7  gbm_grid_model_3 0.7700243332488709
13             0.1        0.1         7         0.8  gbm_grid_model_9 0.7673691534844811
14             0.5       0.09         2         0.7  gbm_grid_model_7  0.760143103635508
15             0.4       0.08         2         1.0  gbm_grid_model_1 0.7564406691040135
16             0.1       0.01         6         0.6 gbm_grid_model_11 0.7505315105517816
17             0.1        0.1         2         0.5  gbm_grid_model_4 0.7437081458064361
18             0.1       0.01         3         0.5 gbm_grid_model_18 0.7282650996858798
19             0.9       0.02         2         0.7  gbm_grid_model_6 0.7255086453418923
20             0.4       0.01         2         0.7 gbm_grid_model_19 0.7154854382599113

The grid search helped a lot. The first model we trained only had a 0.774 test set AUC, but the top GBM in our grid has a test set AUC of roughly 0.785. More information about grid search is available in the H2O grid search R tutorial.
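To actually use the winning model, you can pull it out of the grid by ID (the model_ids in gbm_gridperf are sorted by decreasing AUC here, so the first entry is the best) and re-check its test set performance:

# Grab the top model from the sorted grid and confirm its test set AUC
best_gbm <- h2o.getModel(gbm_gridperf@model_ids[[1]])
best_perf <- h2o.performance(best_gbm, newdata = test)
h2o.auc(best_perf)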