This notebook demonstrates some of the basic functionality currently available in the library


In [29]:
import keras
from importlib import reload
import os
import sys
sys.path.insert(0, "C:/Users/magaxels/AutoML")
import gazer; reload(gazer)
from gazer import GazerMetaLearner

Load some dummy data


In [30]:
from sklearn.datasets import load_digits
X, y = load_digits(return_X_y=True)


from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.25, random_state=0)

from sklearn.preprocessing import (MaxAbsScaler, 
                                   RobustScaler, 
                                   StandardScaler, 
                                   MinMaxScaler)

scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
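
Any of the other imported scalers can be swapped in the same way; a minimal sketch (StandardScaler here is just an illustration replacing the cell above, not what the rest of this notebook uses):

scaler = StandardScaler()            # zero mean, unit variance instead of a [0, 1] range
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)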

In [31]:
X_train.shape, y_train.shape, X_test.shape, y_test.shape


Out[31]:
((1347, 64), (1347,), (450, 64), (450,))

Import selected algorithms using

  • method = 'select'
  • estimators = ['adaboost', 'svm', 'neuralnet', 'logreg']
  • verbose = 1
    • Provides some feedback

In [32]:
learner = GazerMetaLearner(method='select', 
                           estimators=['neuralnet', 'adaboost', 'logreg', 'svm'], 
                           verbose=1)


Available algorithms (use '.clf' attribute for access):
adaboost, logreg, svm, neuralnet
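
As the printed message suggests, individual estimators are accessible through the .clf dictionary, keyed by the names above; a quick sketch:

learner.clf['svm']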

In [33]:
learner.names


Out[33]:
['adaboost', 'logreg', 'svm', 'neuralnet']

Inspect neural network parameters

Note that these have been automatically set for you (~ reasonable defaults)

We shall learn how to change these


In [34]:
learner.clf['neuralnet'].network


Out[34]:
{'activation': ['relu', 'relu', 'relu'],
 'batch_size': 32,
 'batchnorm': [False, False, False],
 'callbacks': [],
 'chkpnt_period': 1,
 'decay_units': False,
 'dropout': [True, True, True],
 'epochs': 50,
 'history': None,
 'lr': 0.001,
 'n_hidden': 2,
 'optimizer': 'adam',
 'p': 0.1,
 'units': [250, 250, 250]}

Perform a parameter update using the learner.update method


In [35]:
learner.update('neuralnet', {'epochs': 100, 'n_hidden': 3, 'input_units': 500})

Note the changes in the dictionary below

  • We set
    • epochs = 100
    • n_hidden = 3
    • input_units = 500

Note that input_units sets the number of neurons in each layer


In [36]:
learner.clf['neuralnet'].network


Out[36]:
{'activation': ['relu', 'relu', 'relu', 'relu'],
 'batch_size': 32,
 'batchnorm': [False, False, False, False],
 'callbacks': [],
 'chkpnt_period': 1,
 'decay_units': False,
 'dropout': [True, True, True, True],
 'epochs': 100,
 'history': None,
 'lr': 0.001,
 'n_hidden': 3,
 'optimizer': 'adam',
 'p': 0.1,
 'units': [500, 500, 500, 500]}
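
The same mechanism should apply to the other keys of the network dictionary. A hedged sketch (we assume, without having verified it against the library, that p, the dropout probability, is a valid update target just like epochs):

learner.update('neuralnet', {'p': 0.2})  # assumption: any network-dict key may be updated this way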

If the user fails to provide valid input, then (provided learner.verbose = 1) we print the signature of the __init__() method. This helps to determine the allowed parameters and their values.


In [37]:
learner.update('logreg', {'bla': 1})


C:/Users/magaxels/AutoML\gazer\core.py:185: UserWarning: Failed to update logreg. Msg: __init__() got an unexpected keyword argument 'bla'
  .format(name, desc))
Variable:           Default value:
<penalty>           l2
<C>                 1.0
<fit_intercept>     True
<random_state>      None
<solver>            liblinear
<max_iter>          100
<warm_start>        False
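
Given the printed signature, a valid update call for logreg might instead look like the following (a sketch; C and max_iter are standard scikit-learn LogisticRegression keyword arguments, which we assume are forwarded to __init__):

learner.update('logreg', {'C': 0.5, 'max_iter': 200})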

Train the learner

We train all initialized algorithms using learner.fit


In [38]:
learner.fit(X_train, y_train)


adaboost: training time = 0.00 (min)
svm: training time = 0.00 (min)
C:/Users/magaxels/AutoML\gazer\classifiers\neural_network.py:288: UserWarning: Keras expects one-hot encoded label data: your data does not seem to fit this requirement.
Will attempt to apply one-hot encoding before sending to `fit` method.
Train on 1212 samples, validate on 135 samples
Epoch 1/100
1212/1212 [==============================] - 1s 630us/step - loss: 1.0176 - acc: 0.7120 - val_loss: 0.3502 - val_acc: 0.8741
Epoch 2/100
1212/1212 [==============================] - 0s 285us/step - loss: 0.2160 - acc: 0.9389 - val_loss: 0.1251 - val_acc: 0.9407
Epoch 3/100
1212/1212 [==============================] - 0s 254us/step - loss: 0.1177 - acc: 0.9629 - val_loss: 0.0712 - val_acc: 0.9704
(epochs 4-45 omitted)
Epoch 46/100
1212/1212 [==============================] - 0s 252us/step - loss: 1.2181e-04 - acc: 1.0000 - val_loss: 0.0128 - val_acc: 0.9926
neuralnet: training time = 0.23 (min)
logreg: training time = 0.00 (min)
Out[38]:
<gazer.core.GazerMetaLearner at 0x24cb44a74a8>

Since verbose = 1 we get a lot of output. If you prefer not to see it, set verbose = 0; that way, gazer stays silent during training.


In [39]:
learner.verbose = 0

In [40]:
learner.fit(X_train, y_train)


Out[40]:
<gazer.core.GazerMetaLearner at 0x24cb44a74a8>

See, Mom: no output!

Evaluate on test data

We can easily evaluate how well the algorithms generalize using learner.evaluate


In [41]:
learner.evaluate(X_test, y_test, metric='accuracy', get_loss=True)


450/450 [==============================] - 0s 63us/step
Out[41]:
{'adaboost': {'loss': 5.4495, 'score': 0.8422},
 'logreg': {'loss': 0.234, 'score': 0.9644},
 'neuralnet': {'loss': 0.1008, 'score': 0.9867},
 'svm': {'loss': 'N/A', 'score': 0.9356}}
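
Since evaluate returns a plain dictionary, downstream processing is straightforward. For example, to pick the best-scoring algorithm (a minimal sketch built only on the dict shown above):

results = learner.evaluate(X_test, y_test, metric='accuracy', get_loss=True)
# Choose the algorithm name with the highest accuracy score
best = max(results, key=lambda name: results[name]['score'])
print(best, results[best]['score'])    # neuralnet 0.9867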


End of demo