Grid Search in REP

This notebook demonstrates the tools that the Reproducible experiment platform (REP) package provides for optimizing classification models:

  • grid search for the best classifier hyperparameters

  • different optimization algorithms

  • different scoring models (optimization of an arbitrary figure of merit)


In [1]:
%pylab inline


Populating the interactive namespace from numpy and matplotlib

Loading data

Dataset 'magic' from UCI


In [2]:
!cd toy_datasets; wget -O magic04.data -nc https://archive.ics.uci.edu/ml/machine-learning-databases/magic/magic04.data


File `magic04.data' already there; not retrieving.

In [3]:
import numpy, pandas
from rep.utils import train_test_split
from sklearn.metrics import roc_auc_score

columns = ['fLength', 'fWidth', 'fSize', 'fConc', 'fConc1', 'fAsym', 'fM3Long', 'fM3Trans', 'fAlpha', 'fDist', 'g']
data = pandas.read_csv('toy_datasets/magic04.data', names=columns)
labels = numpy.array(data['g'] == 'g', dtype=int)
data = data.drop('g', axis=1)
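
The label construction above maps the 'g' (gamma) class to 1 and everything else ('h', hadron) to 0. A minimal self-contained illustration of that boolean-to-integer trick, using a synthetic stand-in for the `g` column (the toy values are made up for the sketch):

```python
import numpy

# synthetic stand-in for data['g']: gamma ('g') vs hadron ('h') tags
tags = numpy.array(['g', 'h', 'g', 'g', 'h'])

# boolean comparison -> integer labels, exactly as in the cell above
labels = numpy.array(tags == 'g', dtype=int)
print(labels)  # -> [1 0 1 1 0]
```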



Simple grid search example

In this example we:

  • optimize the parameters of GradientBoostingClassifier
  • maximize RocAuc (= area under the ROC curve)
  • use 4 threads (4 classifiers are trained at a time)
  • use 3-folding to estimate quality
  • use only 30 trees to make the example run fast
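
RandomParameterOptimizer, used below, draws random combinations from the grid instead of enumerating all of them. A rough pure-Python sketch of that idea (not REP's actual implementation; the function name is made up):

```python
import random

def sample_params(grid, n_evaluations, seed=0):
    """Draw n_evaluations random points from a parameter grid."""
    rng = random.Random(seed)
    return [{name: rng.choice(values) for name, values in grid.items()}
            for _ in range(n_evaluations)]

grid = {'learning_rate': [0.2, 0.1, 0.05, 0.02, 0.01],
        'max_depth': [2, 3, 4, 5]}
points = sample_params(grid, n_evaluations=3)
for p in points:
    print(p)
```

Each point is a full parameter dict, so it can be passed straight to an estimator's `set_params`.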

In [ ]:
from sklearn.ensemble import GradientBoostingClassifier

In [ ]:
from rep.report.metrics import RocAuc

In [ ]:
from rep.metaml import GridOptimalSearchCV, FoldingScorer, RandomParameterOptimizer

from rep.estimators import SklearnClassifier, XGBoostClassifier

In [ ]:
# define grid parameters
grid_param = {}
grid_param['learning_rate'] = [0.2, 0.1, 0.05, 0.02, 0.01]
grid_param['max_depth'] = [2, 3, 4, 5]

# use random hyperparameter optimization algorithm 
generator = RandomParameterOptimizer(grid_param)

# define folding scorer
scorer = FoldingScorer(RocAuc(), folds=3, fold_checks=3)
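
FoldingScorer trains on the remaining folds and evaluates the metric on each held-out fold, averaging the result over `fold_checks` folds. A schematic numpy version of the fold bookkeeping (simplified, with a toy per-fold score; not REP's code):

```python
import numpy

def kfold_indices(n_samples, folds, seed=0):
    """Split shuffled sample indices into `folds` disjoint test sets."""
    rng = numpy.random.RandomState(seed)
    order = rng.permutation(n_samples)
    return numpy.array_split(order, folds)

def folding_score(score_one_fold, n_samples, folds=3, fold_checks=3):
    """Average a per-fold score over `fold_checks` of the folds."""
    parts = kfold_indices(n_samples, folds)
    scores = [score_one_fold(test_idx) for test_idx in parts[:fold_checks]]
    return numpy.mean(scores)

# toy "metric": fraction of even indices in the held-out fold
result = folding_score(lambda idx: numpy.mean(idx % 2 == 0), n_samples=100)
print(round(result, 2))
```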

In [ ]:
estimator = SklearnClassifier(GradientBoostingClassifier(n_estimators=30))
grid_finder = GridOptimalSearchCV(estimator, generator, scorer, parallel_profile='threads-4')
grid_finder.fit(data[::2], labels[::2])

Looking at results


In [ ]:
grid_finder.params_generator.print_results()

Optimizing the parameters and threshold

In many applications we need to optimize a binary classification metric (f1, BER, misclassification error). In that case, each time a classifier is trained we also need to find the optimal threshold on the predicted probabilities (the default one is usually a poor choice).

In this example:

  • we are optimizing AMS (a binary metric that was used in the Higgs competition on Kaggle)
  • tuning the parameters of XGBoost
  • using GaussianProcesses to make good guesses about the next points to check
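
To my recollection, the AMS (approximate median significance) from the Higgs challenge is sqrt(2((s+b+b_r) ln(1+s/(b+b_r)) − s)) with a regularization term b_r = 10; treat the exact form as an assumption and check it against `rep.report.metrics.ams`. `OptimalMetric` then scans candidate thresholds and keeps the best value. A small numpy sketch of that scan (function names are mine, not REP's):

```python
import numpy

def ams(s, b, b_reg=10.0):
    """Approximate median significance (form quoted from memory)."""
    return numpy.sqrt(2.0 * ((s + b + b_reg) * numpy.log(1.0 + s / (b + b_reg)) - s))

def best_threshold(proba, labels, expected_s=100.0, expected_b=1000.0):
    """Scan thresholds on predicted probabilities; return (best_ams, threshold)."""
    best = (-numpy.inf, None)
    for cut in numpy.unique(proba):
        passed = proba >= cut
        # efficiencies on signal/background, scaled to expected yields
        s = expected_s * numpy.mean(passed[labels == 1])
        b = expected_b * numpy.mean(passed[labels == 0])
        best = max(best, (ams(s, b), cut))
    return best

proba = numpy.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
labels = numpy.array([0,   0,   1,    1,   1,   0])
score, cut = best_threshold(proba, labels)
print(score, cut)
```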

In [ ]:
from rep.metaml import RegressionParameterOptimizer
from sklearn.gaussian_process import GaussianProcess
from rep.report.metrics import OptimalMetric, ams

In [ ]:
# OptimalMetric is a wrapper which is able to check all possible thresholds
# the expected numbers of signal and background events here are arbitrary
optimal_ams = OptimalMetric(ams, expected_s=100, expected_b=1000)

# define grid parameters
grid_param = {'eta': [0.4, 0.2, 0.1, 0.05, 0.02, 0.01], 
              'max_depth': [1, 2, 3, 4, 5, 6], 
              'subsample': [0.1, 0.2, 0.4, 0.6, 0.8],
              # one more feature: you can pass different sets of features to be compared
              'features': [columns[:2], columns[:3], columns[:4]],
             }

# using GaussianProcesses 
generator = RegressionParameterOptimizer(grid_param, n_evaluations=20, regressor=GaussianProcess(), n_attempts=10)

# define folding scorer
scorer = FoldingScorer(optimal_ams, folds=2, fold_checks=2)

grid_finder = GridOptimalSearchCV(XGBoostClassifier(), generator, scorer, parallel_profile='threads-3')
grid_finder.fit(data, labels)

Looking at results


In [ ]:
grid_finder.generator.print_results()

Let's see dynamics over time


In [ ]:
plot(grid_finder.generator.grid_scores_.values())

Optimizing complex models + using custom scorer

REP supports the sklearn way of combining classifiers and of getting/setting their parameters.

So you can tune complex models using the same approach.

Let's optimize:

  • a BaggingRegressor over TMVA's GBDT regressor, selecting appropriate values for both
  • we will roll a new scorer, which tests everything on a dedicated part of the dataset
  • for simplicity, we use the same data, split once into train and test
  • we optimize MAE (mean absolute error)

In [ ]:
from sklearn.ensemble import BaggingRegressor
from rep.estimators import TMVARegressor

In [ ]:
from rep.utils import train_test_split
# splitting into train and test
train_data, test_data, train_labels, test_labels = train_test_split(data, labels)

In [ ]:
from sklearn.metrics import mean_absolute_error
from sklearn.base import clone

class MyMAEScorer(object):
    def __init__(self, test_data, test_labels):
        self.test_data = test_data
        self.test_labels = test_labels
        
    def __call__(self, base_estimator, params, X, y, sample_weight=None):
        cl = clone(base_estimator)
        cl.set_params(**params)
        cl.fit(X, y)
        # return with a minus sign, because the optimizer maximizes the metric
        return - mean_absolute_error(self.test_labels, cl.predict(self.test_data))
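
A scorer is just a callable with the signature `(base_estimator, params, X, y, sample_weight=None)`; GridOptimalSearchCV calls it for every parameter point. A toy check of that protocol with a hypothetical constant-prediction estimator (the estimator class and numbers below are made up, and `sklearn.base.clone` is skipped for self-containment):

```python
import numpy

class ConstantRegressor(object):
    """Hypothetical stand-in estimator: always predicts a fixed value."""
    def __init__(self, value=0.0):
        self.value = value
    def set_params(self, **params):
        self.value = params.get('value', self.value)
        return self
    def fit(self, X, y):
        return self
    def predict(self, X):
        return numpy.full(len(X), self.value)

def mae_scorer(estimator, params, X, y, test_data, test_labels):
    """Same idea as MyMAEScorer.__call__, without sklearn.clone."""
    est = estimator.set_params(**params)
    est.fit(X, y)
    error = numpy.mean(numpy.abs(test_labels - est.predict(test_data)))
    return -error  # negated: the optimizer maximizes

X = numpy.zeros((4, 2))
y = numpy.array([1.0, 1.0, 1.0, 1.0])
score = mae_scorer(ConstantRegressor(), {'value': 1.0}, X, y, X, y)
print(score)
```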

In [ ]:
import root_numpy

In [ ]:
# define grid parameters
grid_param = {
    # parameters of sklearn classifier
    'n_estimators': [1, 3, 5, 7], 
    'max_samples': [0.2, 0.4, 0.6, 0.8],
    # parameters of base (TMVA)
    'base_estimator__NTrees': [10, 20, 40], 
    'base_estimator__Shrinkage': [0.1, 0.2, 0.4, 0.6, 0.8]
}

# using Gaussian Processes 
generator = RegressionParameterOptimizer(grid_param, n_evaluations=9, regressor=GaussianProcess(), n_attempts=10)

estimator = BaggingRegressor(TMVARegressor(BoostType='Grad', NTrees=10), n_estimators=10)

scorer = MyMAEScorer(test_data, test_labels)

grid_finder = GridOptimalSearchCV(estimator, generator, scorer, parallel_profile='threads-3')
grid_finder.fit(data, labels)
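
The `base_estimator__NTrees` syntax follows sklearn's convention: `set_params` splits the key on the first double underscore and forwards the remainder to the nested estimator. A rough pure-Python illustration of that routing (not sklearn's actual implementation):

```python
def split_nested_params(params):
    """Partition a flat param dict into (own, {child: child_params})."""
    own, nested = {}, {}
    for key, value in params.items():
        if '__' in key:
            child, _, sub = key.partition('__')
            nested.setdefault(child, {})[sub] = value
        else:
            own[key] = value
    return own, nested

own, nested = split_nested_params({
    'n_estimators': 3,
    'base_estimator__NTrees': 20,
    'base_estimator__Shrinkage': 0.1,
})
print(own)     # params applied to the BaggingRegressor itself
print(nested)  # params forwarded to its base estimator
```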

In [ ]:
grid_finder.generator.print_results()

Summary

Grid search in REP extends sklearn's grid search and uses optimization techniques to avoid an exhaustive scan over estimator parameters.

REP has predefined scorers, metric functions, and optimization techniques. Each component is replaceable, and you can optimize complex models and pipelines (Folders/Bagging/Boosting and so on).

How the components fit together

  • ParameterOptimizer is responsible for generating new sets of parameters to be checked

    • RandomParameterOptimizer
    • AnnealingParameterOptimizer
    • SubgridParameterOptimizer
    • RegressionParameterOptimizer (this one can use any regression model, such as GaussianProcess)
  • Scorer is responsible for training and evaluating the metric

    • FoldingScorer (uses metrics with the REP interface), which averages the quality over k folds
  • GridOptimalSearchCV ties everything together and sends tasks to an IPython cluster or to separate threads.