Your first neural network

In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.


In [1]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Load and prepare the data

A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!


In [2]:
data_path = 'Bike-Sharing-Dataset/hour.csv'

rides = pd.read_csv(data_path)

In [3]:
rides.head()


Out[3]:
instant dteday season yr mnth hr holiday weekday workingday weathersit temp atemp hum windspeed casual registered cnt
0 1 2011-01-01 1 0 1 0 0 6 0 1 0.24 0.2879 0.81 0.0 3 13 16
1 2 2011-01-01 1 0 1 1 0 6 0 1 0.22 0.2727 0.80 0.0 8 32 40
2 3 2011-01-01 1 0 1 2 0 6 0 1 0.22 0.2727 0.80 0.0 5 27 32
3 4 2011-01-01 1 0 1 3 0 6 0 1 0.24 0.2879 0.75 0.0 3 10 13
4 5 2011-01-01 1 0 1 4 0 6 0 1 0.24 0.2879 0.75 0.0 0 1 1

Checking out the data

This dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, and summed up in the cnt column. You can see the first few rows of the data above.

Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.


In [4]:
rides[:24*10].plot(x='dteday', y='cnt')


Out[4]:
<matplotlib.axes._subplots.AxesSubplot at 0x10ad8f4a8>

Dummy variables

Here we have some categorical variables like season, weather, and month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with pandas thanks to get_dummies().


In [5]:
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 
                  'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()


Out[5]:
yr holiday temp hum windspeed casual registered cnt season_1 season_2 ... hr_21 hr_22 hr_23 weekday_0 weekday_1 weekday_2 weekday_3 weekday_4 weekday_5 weekday_6
0 0 0 0.24 0.81 0.0 3 13 16 1 0 ... 0 0 0 0 0 0 0 0 0 1
1 0 0 0.22 0.80 0.0 8 32 40 1 0 ... 0 0 0 0 0 0 0 0 0 1
2 0 0 0.22 0.80 0.0 5 27 32 1 0 ... 0 0 0 0 0 0 0 0 0 1
3 0 0 0.24 0.75 0.0 3 10 13 1 0 ... 0 0 0 0 0 0 0 0 0 1
4 0 0 0.24 0.75 0.0 0 1 1 1 0 ... 0 0 0 0 0 0 0 0 0 1

5 rows × 59 columns

Scaling target variables

To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.

The scaling factors are saved so we can go backwards when we use the network for predictions.


In [6]:
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
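
Since scaled_features stores the mean and standard deviation for each column, going backwards is just the inverse transformation. As a minimal sketch (using the cnt column; this is the same unscaling the prediction-plotting cell at the end of the notebook performs):

mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean   # undoes (x - mean)/std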

Splitting the data into training, testing, and validation sets

We'll save the data from approximately the last 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.


In [7]:
# Save data for approximately the last 21 days 
test_data = data[-21*24:]

# Now remove the test data from the data set 
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]

We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).


In [8]:
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]

Time to build the network

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node, i.e. the activation function is $f(x)=x$. (A function that takes the input signal and generates an output signal, taking the threshold into account, is called an activation function.) We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update the weights. This is called backpropagation.

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
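
Putting the pieces together, the error terms in the backward pass follow from the chain rule. With a sigmoid hidden layer and the identity output activation (so $f'(x) = 1$), they work out to:

$$\delta_o = (y - \hat{y}) \cdot 1 \qquad \delta_h = (W_{h \to o} \, \delta_o) \cdot h \cdot (1 - h)$$

where $h$ is the hidden layer's sigmoid output (whose derivative is $h(1-h)$). These correspond to the output_error_term and hidden_error_term computed in the train method below.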

Below, you have these tasks:

  1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
  2. Implement the forward pass in the train method.
  3. Implement the backpropagation algorithm in the train method, including calculating the output error.
  4. Implement the forward pass in the run method.

In [9]:
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, 
                                       (self.input_nodes, self.hidden_nodes))

        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, 
                                       (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate
        
        self.activation_function = lambda x : 1 / (1 + np.exp(-x))
                    
    def train(self, features, targets):
        ''' Train the network on a batch of features and targets. 
        
            Arguments
            ---------
            
            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values
        
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            ### Forward pass ###
            # Hidden layer 
            hidden_inputs = np.dot(X, self.weights_input_to_hidden)
            hidden_outputs = self.activation_function(hidden_inputs)

            # Output layer
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
            final_outputs = final_inputs # f(x) = x, the identity activation
            
            ### Backward pass ###

            # Output error
            error = y - final_outputs
            
            # Calculate the hidden layer's contribution to the error
            hidden_error = np.dot(self.weights_hidden_to_output, error)
            
            # Backpropagated error terms 
            output_error_term = error # f'(x) = 1
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)

            # Weight step (input to hidden)
            delta_weights_i_h += hidden_error_term * X[:, None] 
            # Weight step (hidden to output)
            delta_weights_h_o += output_error_term * hidden_outputs[:, None]

        # Update the weights 
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
 
    def run(self, features):
        ''' Run a forward pass through the network with input features 
        
            Arguments
            ---------
            features: 2D array of feature values, one row per record
        '''
        
        #### forward pass ####
        # Hidden layer
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)
        hidden_outputs = self.activation_function(hidden_inputs)

        # Output layer
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
        final_outputs = final_inputs # f(x) = x, the identity activation
        
        return final_outputs

In [10]:
def MSE(y, Y):
    return np.mean((y-Y)**2)
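
This computes the mean squared error we'll use to track training progress:

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} (y_i - Y_i)^2$$

where $y$ holds the network's predictions and $Y$ the actual (scaled) target values.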

Unit tests

Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.


In [11]:
import unittest

inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
                       [0.4, 0.5],
                       [-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
                       [-0.1]])

class TestMethods(unittest.TestCase):
    
    ##########
    # Unit tests for data loading
    ##########
    
    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
        
    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))
    
    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()
        
        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output, 
                                    np.array([[ 0.37275328], 
                                              [-0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014, -0.20185996], 
                                              [0.39775194, 0.50074398], 
                                              [-0.29887597, 0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)


.....
----------------------------------------------------------------------
Ran 5 tests in 0.009s

OK
Out[11]:
<unittest.runner.TextTestResult run=5 errors=0 failures=0>

Training the network

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
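
In code, one SGD pass is just a random batch plus a single call to train. A minimal sketch, using the same pattern as the training loops below (batch size 128, predicting the cnt target):

batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch, 'cnt']
network.train(X, y)

Each pass sees a different random slice of the training data, so the individual weight updates are noisy but cheap.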

Choose the number of iterations

This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.

Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
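
Concretely, in the network above each accumulated weight step $\Delta W$ is scaled by the learning rate $\eta$ and averaged over the $N$ records in the batch:

$$W \leftarrow W + \frac{\eta}{N} \, \Delta W$$

so doubling the learning rate doubles the step size, which is why overly large values make the weights diverge.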

Choose the number of hidden nodes

The more hidden nodes you have, the more accurately the model can fit the training data - up to a point. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough capacity to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.

Hyperparameter Searching

My first submission used 1500 iterations, a learning rate of 2, and 6 hidden nodes - all of which seemed suspicious. But since they were getting results similar to the ~0.70 and ~1.4 training and validation losses other students were reporting in the project channels, and since trying slightly different values (like 10 hidden nodes) didn't change much, I assumed this was sufficient for this small data set.

The review feedback I got indicated that I just hadn't searched a large enough space of possible settings. So I wrote the code below to loop through a wide range of parameter settings and try all the combinations - similar to a grid search, but with specific values chosen to narrow down which ranges were worth exploring further. I also added a naive over-fitting detection - not meant to be bulletproof, just a way to short-circuit combinations that didn't look promising so the search took less time to execute.


In [101]:
def is_overfitting(train_losses, val_losses):
    ''' Does a rough check to see if the losses (sampled at regular intervals) indicate over-fitting is happening.
    This is done by checking whether the average of the last n values is greater than the average of the previous
    n values (so a one-time random blip doesn't trigger it), for both val_losses and the gap between train_losses
    and val_losses.
    
    I don't know if that's a good heuristic for it or not; I'm just trying to avoid training a configuration
    of hyperparameters that appears to be trending poorly and should be abandoned. '''
    
    if len(val_losses) < 6:
        return False
    
    val_losses = np.array(val_losses)
    last_val_losses_avg = val_losses[-3:].mean()
    prev_val_losses_avg = val_losses[-6:-3].mean()
    
    if last_val_losses_avg > prev_val_losses_avg:
        # val losses are increasing
        return True
    
    train_losses = np.array(train_losses)
    last_train_losses_avg = train_losses[-3:].mean()
    prev_train_losses_avg = train_losses[-6:-3].mean()
    
    last_gap_avg = last_val_losses_avg - last_train_losses_avg
    prev_gap_avg = prev_val_losses_avg - prev_train_losses_avg
    
    if last_gap_avg > prev_gap_avg:
        # the gap between val losses and train losses is increasing
        return True
    
    return False
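
As a toy illustration of the heuristic (the loss values here are made up, standing in for losses sampled every 500 iterations):

# Validation loss has started climbing again -> flagged as over-fitting
is_overfitting(train_losses=[0.30, 0.25, 0.20, 0.18, 0.16, 0.15],
               val_losses=[0.50, 0.45, 0.40, 0.42, 0.46, 0.55])   # True

# Fewer than six samples -> not enough evidence yet
is_overfitting([0.9, 0.5], [1.2, 0.8])   # False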

In [102]:
import sys

### Set the hyperparameters here ###
iterations = 10000
learning_rates_list = [2, 1, 0.5, 0.1, 0.05, 0.001]
hidden_nodes_list = [5, 10, 15, 20, 25, 30, 40, 50, 75, 100]
output_nodes = 1

N_i = train_features.shape[1]
    
print("iterations \t learning_rate \t hidden_nodes \t train_loss \t val_loss")
for learning_rate in learning_rates_list:
    for hidden_nodes in hidden_nodes_list:
        network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

        train_losses = []
        val_losses = []
        
        for ii in range(iterations):
            # Go through a random batch of 128 records from the training data set
            batch = np.random.choice(train_features.index, size=128)
            X, y = train_features.loc[batch].values, train_targets.loc[batch, 'cnt']
                             
            network.train(X, y)
            
            # check progress every 500 iterations
            if ii % 500 == 0:
                current_train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
                current_val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
                
                train_losses.append(current_train_loss)
                val_losses.append(current_val_loss)
                
                #print the progress thus far
                output = "{} \t\t {} \t\t {} \t\t {:2.3f} \t\t {:2.3f}"
                print(output.format(ii, learning_rate, hidden_nodes, current_train_loss, current_val_loss))
                    
                # if these settings seem to be overfitting
                if is_overfitting(train_losses, val_losses):
                    print("Over-fitting detected")
                    break # move on to the next hyperparameter settings
                else:
                    continue # keep going on next 500 iterations


iterations 	 learning_rate 	 hidden_nodes 	 train_loss 	 val_loss
0 		 2 		 5 		 2.031 		 1.832
500 		 2 		 5 		 0.193 		 0.332
1000 		 2 		 5 		 0.133 		 0.233
1500 		 2 		 5 		 0.139 		 0.194
2000 		 2 		 5 		 0.256 		 0.238
2500 		 2 		 5 		 0.130 		 0.211
Over-fitting detected
0 		 2 		 10 		 1.958 		 1.692
500 		 2 		 10 		 0.385 		 0.661
1000 		 2 		 10 		 0.185 		 0.398
1500 		 2 		 10 		 0.104 		 0.231
2000 		 2 		 10 		 0.074 		 0.153
2500 		 2 		 10 		 0.066 		 0.159
Over-fitting detected
0 		 2 		 15 		 6.582 		 8.761
500 		 2 		 15 		 0.916 		 1.364
1000 		 2 		 15 		 0.751 		 1.217
1500 		 2 		 15 		 0.736 		 1.186
2000 		 2 		 15 		 0.729 		 1.173
2500 		 2 		 15 		 0.726 		 1.169
3000 		 2 		 15 		 0.719 		 1.166
3500 		 2 		 15 		 0.712 		 1.163
4000 		 2 		 15 		 0.707 		 1.162
Over-fitting detected
0 		 2 		 20 		 21.455 		 18.478
500 		 2 		 20 		 0.940 		 1.361
1000 		 2 		 20 		 0.915 		 1.367
1500 		 2 		 20 		 0.896 		 1.350
2000 		 2 		 20 		 0.883 		 1.337
2500 		 2 		 20 		 0.871 		 1.325
Over-fitting detected
0 		 2 		 25 		 214.272 		 231.312
500 		 2 		 25 		 0.576 		 0.661
1000 		 2 		 25 		 0.486 		 0.539
1500 		 2 		 25 		 0.439 		 0.447
2000 		 2 		 25 		 0.433 		 0.426
2500 		 2 		 25 		 0.421 		 0.410
3000 		 2 		 25 		 0.412 		 0.404
3500 		 2 		 25 		 0.409 		 0.419
4000 		 2 		 25 		 0.407 		 0.408
Over-fitting detected
0 		 2 		 30 		 113.419 		 120.553
/Users/gregashby/anaconda3/envs/dlnd/lib/python3.6/site-packages/ipykernel/__main__.py:16: RuntimeWarning: overflow encountered in exp
/Users/gregashby/anaconda3/envs/dlnd/lib/python3.6/site-packages/ipykernel/__main__.py:51: RuntimeWarning: invalid value encountered in multiply
500 		 2 		 30 		 nan 		 nan
1000 		 2 		 30 		 nan 		 nan
1500 		 2 		 30 		 nan 		 nan
2000 		 2 		 30 		 nan 		 nan
2500 		 2 		 30 		 nan 		 nan
3000 		 2 		 30 		 nan 		 nan
3500 		 2 		 30 		 nan 		 nan
4000 		 2 		 30 		 nan 		 nan
4500 		 2 		 30 		 nan 		 nan
5000 		 2 		 30 		 nan 		 nan
5500 		 2 		 30 		 nan 		 nan
6000 		 2 		 30 		 nan 		 nan
6500 		 2 		 30 		 nan 		 nan
7000 		 2 		 30 		 nan 		 nan
7500 		 2 		 30 		 nan 		 nan
8000 		 2 		 30 		 nan 		 nan
8500 		 2 		 30 		 nan 		 nan
9000 		 2 		 30 		 nan 		 nan
9500 		 2 		 30 		 nan 		 nan
0 		 2 		 40 		 2.446 		 2.071
500 		 2 		 40 		 nan 		 nan
1000 		 2 		 40 		 nan 		 nan
1500 		 2 		 40 		 nan 		 nan
2000 		 2 		 40 		 nan 		 nan
2500 		 2 		 40 		 nan 		 nan
3000 		 2 		 40 		 nan 		 nan
3500 		 2 		 40 		 nan 		 nan
4000 		 2 		 40 		 nan 		 nan
4500 		 2 		 40 		 nan 		 nan
5000 		 2 		 40 		 nan 		 nan
5500 		 2 		 40 		 nan 		 nan
6000 		 2 		 40 		 nan 		 nan
6500 		 2 		 40 		 nan 		 nan
7000 		 2 		 40 		 nan 		 nan
7500 		 2 		 40 		 nan 		 nan
8000 		 2 		 40 		 nan 		 nan
8500 		 2 		 40 		 nan 		 nan
9000 		 2 		 40 		 nan 		 nan
9500 		 2 		 40 		 nan 		 nan
0 		 2 		 50 		 81.412 		 89.288
500 		 2 		 50 		 nan 		 nan
1000 		 2 		 50 		 nan 		 nan
1500 		 2 		 50 		 nan 		 nan
2000 		 2 		 50 		 nan 		 nan
2500 		 2 		 50 		 nan 		 nan
3000 		 2 		 50 		 nan 		 nan
3500 		 2 		 50 		 nan 		 nan
4000 		 2 		 50 		 nan 		 nan
4500 		 2 		 50 		 nan 		 nan
5000 		 2 		 50 		 nan 		 nan
5500 		 2 		 50 		 nan 		 nan
6000 		 2 		 50 		 nan 		 nan
6500 		 2 		 50 		 nan 		 nan
7000 		 2 		 50 		 nan 		 nan
7500 		 2 		 50 		 nan 		 nan
8000 		 2 		 50 		 nan 		 nan
8500 		 2 		 50 		 nan 		 nan
9000 		 2 		 50 		 nan 		 nan
9500 		 2 		 50 		 nan 		 nan
0 		 2 		 75 		 802.630 		 781.736
500 		 2 		 75 		 0.885 		 1.357
1000 		 2 		 75 		 0.754 		 1.195
1500 		 2 		 75 		 0.736 		 1.176
2000 		 2 		 75 		 0.724 		 1.169
2500 		 2 		 75 		 0.718 		 1.166
Over-fitting detected
0 		 2 		 100 		 1185.225 		 1196.766
500 		 2 		 100 		 nan 		 nan
1000 		 2 		 100 		 nan 		 nan
1500 		 2 		 100 		 nan 		 nan
2000 		 2 		 100 		 nan 		 nan
2500 		 2 		 100 		 nan 		 nan
3000 		 2 		 100 		 nan 		 nan
3500 		 2 		 100 		 nan 		 nan
4000 		 2 		 100 		 nan 		 nan
4500 		 2 		 100 		 nan 		 nan
5000 		 2 		 100 		 nan 		 nan
5500 		 2 		 100 		 nan 		 nan
6000 		 2 		 100 		 nan 		 nan
6500 		 2 		 100 		 nan 		 nan
7000 		 2 		 100 		 nan 		 nan
7500 		 2 		 100 		 nan 		 nan
8000 		 2 		 100 		 nan 		 nan
8500 		 2 		 100 		 nan 		 nan
9000 		 2 		 100 		 nan 		 nan
9500 		 2 		 100 		 nan 		 nan
0 		 1 		 5 		 0.866 		 1.317
500 		 1 		 5 		 0.237 		 0.416
1000 		 1 		 5 		 0.138 		 0.246
1500 		 1 		 5 		 0.108 		 0.181
2000 		 1 		 5 		 0.124 		 0.178
2500 		 1 		 5 		 0.084 		 0.181
3000 		 1 		 5 		 0.069 		 0.156
3500 		 1 		 5 		 0.076 		 0.132
Over-fitting detected
0 		 1 		 10 		 1.557 		 1.561
500 		 1 		 10 		 0.220 		 0.377
1000 		 1 		 10 		 0.130 		 0.296
1500 		 1 		 10 		 0.084 		 0.171
2000 		 1 		 10 		 0.073 		 0.159
2500 		 1 		 10 		 0.065 		 0.146
3000 		 1 		 10 		 0.062 		 0.143
3500 		 1 		 10 		 0.072 		 0.142
4000 		 1 		 10 		 0.059 		 0.154
4500 		 1 		 10 		 0.058 		 0.151
Over-fitting detected
0 		 1 		 15 		 0.986 		 1.272
500 		 1 		 15 		 0.230 		 0.395
1000 		 1 		 15 		 0.137 		 0.276
1500 		 1 		 15 		 0.091 		 0.190
2000 		 1 		 15 		 0.072 		 0.149
2500 		 1 		 15 		 0.067 		 0.136
3000 		 1 		 15 		 0.058 		 0.126
3500 		 1 		 15 		 0.057 		 0.127
4000 		 1 		 15 		 0.057 		 0.128
4500 		 1 		 15 		 0.055 		 0.143
Over-fitting detected
0 		 1 		 20 		 0.880 		 1.230
500 		 1 		 20 		 0.627 		 0.749
1000 		 1 		 20 		 0.548 		 0.647
1500 		 1 		 20 		 0.527 		 0.622
2000 		 1 		 20 		 0.249 		 0.440
2500 		 1 		 20 		 0.195 		 0.370
3000 		 1 		 20 		 0.107 		 0.196
Over-fitting detected
0 		 1 		 25 		 9.921 		 8.002
500 		 1 		 25 		 0.650 		 0.785
1000 		 1 		 25 		 0.559 		 0.647
1500 		 1 		 25 		 0.540 		 0.628
2000 		 1 		 25 		 0.531 		 0.626
2500 		 1 		 25 		 0.526 		 0.627
Over-fitting detected
0 		 1 		 30 		 42.514 		 38.636
500 		 1 		 30 		 0.980 		 1.368
1000 		 1 		 30 		 0.979 		 1.368
1500 		 1 		 30 		 0.978 		 1.368
2000 		 1 		 30 		 0.977 		 1.368
2500 		 1 		 30 		 0.977 		 1.368
Over-fitting detected
0 		 1 		 40 		 25.041 		 23.018
500 		 1 		 40 		 0.980 		 1.368
1000 		 1 		 40 		 0.978 		 1.368
1500 		 1 		 40 		 0.978 		 1.368
2000 		 1 		 40 		 0.977 		 1.368
2500 		 1 		 40 		 0.977 		 1.368
Over-fitting detected
0 		 1 		 50 		 15.736 		 13.618
500 		 1 		 50 		 0.992 		 1.368
1000 		 1 		 50 		 0.985 		 1.368
1500 		 1 		 50 		 0.982 		 1.368
2000 		 1 		 50 		 0.981 		 1.368
2500 		 1 		 50 		 0.980 		 1.368
Over-fitting detected
0 		 1 		 75 		 57.885 		 63.996
500 		 1 		 75 		 2.111 		 1.358
1000 		 1 		 75 		 1.317 		 1.357
1500 		 1 		 75 		 1.171 		 1.357
2000 		 1 		 75 		 1.086 		 1.357
2500 		 1 		 75 		 1.076 		 1.356
3000 		 1 		 75 		 0.992 		 1.356
Over-fitting detected
0 		 1 		 100 		 39.439 		 43.954
500 		 1 		 100 		 nan 		 nan
1000 		 1 		 100 		 nan 		 nan
1500 		 1 		 100 		 nan 		 nan
2000 		 1 		 100 		 nan 		 nan
2500 		 1 		 100 		 nan 		 nan
3000 		 1 		 100 		 nan 		 nan
3500 		 1 		 100 		 nan 		 nan
4000 		 1 		 100 		 nan 		 nan
4500 		 1 		 100 		 nan 		 nan
5000 		 1 		 100 		 nan 		 nan
5500 		 1 		 100 		 nan 		 nan
6000 		 1 		 100 		 nan 		 nan
6500 		 1 		 100 		 nan 		 nan
7000 		 1 		 100 		 nan 		 nan
7500 		 1 		 100 		 nan 		 nan
8000 		 1 		 100 		 nan 		 nan
8500 		 1 		 100 		 nan 		 nan
9000 		 1 		 100 		 nan 		 nan
9500 		 1 		 100 		 nan 		 nan
0 		 0.5 		 5 		 0.973 		 1.345
500 		 0.5 		 5 		 0.282 		 0.448
1000 		 0.5 		 5 		 0.232 		 0.397
1500 		 0.5 		 5 		 0.200 		 0.357
2000 		 0.5 		 5 		 0.174 		 0.327
2500 		 0.5 		 5 		 0.123 		 0.229
3000 		 0.5 		 5 		 0.102 		 0.197
3500 		 0.5 		 5 		 0.091 		 0.173
4000 		 0.5 		 5 		 0.084 		 0.166
4500 		 0.5 		 5 		 0.079 		 0.175
5000 		 0.5 		 5 		 0.075 		 0.156
5500 		 0.5 		 5 		 0.071 		 0.155
Over-fitting detected
0 		 0.5 		 10 		 0.975 		 1.286
500 		 0.5 		 10 		 0.295 		 0.478
1000 		 0.5 		 10 		 0.262 		 0.438
1500 		 0.5 		 10 		 0.229 		 0.393
2000 		 0.5 		 10 		 0.186 		 0.353
2500 		 0.5 		 10 		 0.136 		 0.273
3000 		 0.5 		 10 		 0.101 		 0.236
3500 		 0.5 		 10 		 0.084 		 0.204
4000 		 0.5 		 10 		 0.074 		 0.186
4500 		 0.5 		 10 		 0.068 		 0.182
5000 		 0.5 		 10 		 0.064 		 0.170
5500 		 0.5 		 10 		 0.062 		 0.158
6000 		 0.5 		 10 		 0.062 		 0.160
6500 		 0.5 		 10 		 0.059 		 0.158
7000 		 0.5 		 10 		 0.059 		 0.160
7500 		 0.5 		 10 		 0.058 		 0.159
Over-fitting detected
0 		 0.5 		 15 		 0.987 		 1.349
500 		 0.5 		 15 		 0.279 		 0.445
1000 		 0.5 		 15 		 0.265 		 0.432
1500 		 0.5 		 15 		 0.205 		 0.377
2000 		 0.5 		 15 		 0.151 		 0.287
2500 		 0.5 		 15 		 0.110 		 0.215
3000 		 0.5 		 15 		 0.093 		 0.188
3500 		 0.5 		 15 		 0.084 		 0.170
4000 		 0.5 		 15 		 0.155 		 0.215
4500 		 0.5 		 15 		 0.070 		 0.156
5000 		 0.5 		 15 		 0.068 		 0.166
5500 		 0.5 		 15 		 0.073 		 0.188
Over-fitting detected
0 		 0.5 		 20 		 0.943 		 1.303
500 		 0.5 		 20 		 0.276 		 0.453
1000 		 0.5 		 20 		 0.239 		 0.416
1500 		 0.5 		 20 		 0.218 		 0.374
2000 		 0.5 		 20 		 0.159 		 0.297
2500 		 0.5 		 20 		 0.115 		 0.225
3000 		 0.5 		 20 		 0.088 		 0.180
3500 		 0.5 		 20 		 0.076 		 0.153
4000 		 0.5 		 20 		 0.076 		 0.138
4500 		 0.5 		 20 		 0.065 		 0.135
5000 		 0.5 		 20 		 0.060 		 0.127
5500 		 0.5 		 20 		 0.059 		 0.127
6000 		 0.5 		 20 		 0.055 		 0.119
6500 		 0.5 		 20 		 0.054 		 0.120
7000 		 0.5 		 20 		 0.053 		 0.117
7500 		 0.5 		 20 		 0.052 		 0.121
Over-fitting detected
0 		 0.5 		 25 		 2.262 		 3.432
500 		 0.5 		 25 		 0.262 		 0.446
1000 		 0.5 		 25 		 0.233 		 0.407
1500 		 0.5 		 25 		 0.196 		 0.350
2000 		 0.5 		 25 		 0.149 		 0.278
2500 		 0.5 		 25 		 0.106 		 0.206
3000 		 0.5 		 25 		 0.084 		 0.179
3500 		 0.5 		 25 		 0.070 		 0.152
4000 		 0.5 		 25 		 0.064 		 0.143
4500 		 0.5 		 25 		 0.061 		 0.135
5000 		 0.5 		 25 		 0.061 		 0.124
5500 		 0.5 		 25 		 0.058 		 0.136
6000 		 0.5 		 25 		 0.056 		 0.122
6500 		 0.5 		 25 		 0.054 		 0.120
7000 		 0.5 		 25 		 0.055 		 0.127
7500 		 0.5 		 25 		 0.056 		 0.126
Over-fitting detected
0 		 0.5 		 30 		 4.127 		 3.598
500 		 0.5 		 30 		 0.286 		 0.480
1000 		 0.5 		 30 		 0.240 		 0.418
1500 		 0.5 		 30 		 0.219 		 0.389
2000 		 0.5 		 30 		 0.184 		 0.328
2500 		 0.5 		 30 		 0.145 		 0.276
Over-fitting detected
0 		 0.5 		 40 		 17.609 		 15.201
500 		 0.5 		 40 		 0.839 		 1.289
1000 		 0.5 		 40 		 0.303 		 0.570
1500 		 0.5 		 40 		 0.244 		 0.399
2000 		 0.5 		 40 		 0.233 		 0.387
2500 		 0.5 		 40 		 0.230 		 0.374
Over-fitting detected
0 		 0.5 		 50 		 1.499 		 1.504
500 		 0.5 		 50 		 0.740 		 1.228
1000 		 0.5 		 50 		 0.625 		 1.034
1500 		 0.5 		 50 		 0.570 		 0.916
2000 		 0.5 		 50 		 0.530 		 0.779
2500 		 0.5 		 50 		 0.501 		 0.720
3000 		 0.5 		 50 		 0.457 		 0.661
3500 		 0.5 		 50 		 0.351 		 0.591
4000 		 0.5 		 50 		 0.247 		 0.430
4500 		 0.5 		 50 		 0.196 		 0.380
5000 		 0.5 		 50 		 0.166 		 0.364
5500 		 0.5 		 50 		 0.143 		 0.336
6000 		 0.5 		 50 		 0.121 		 0.288
6500 		 0.5 		 50 		 0.098 		 0.236
7000 		 0.5 		 50 		 0.080 		 0.181
7500 		 0.5 		 50 		 0.071 		 0.156
8000 		 0.5 		 50 		 0.066 		 0.146
8500 		 0.5 		 50 		 0.064 		 0.143
9000 		 0.5 		 50 		 0.063 		 0.142
9500 		 0.5 		 50 		 0.062 		 0.152
0 		 0.5 		 75 		 4.766 		 3.947
500 		 0.5 		 75 		 0.990 		 1.368
1000 		 0.5 		 75 		 0.977 		 1.368
1500 		 0.5 		 75 		 0.976 		 1.368
2000 		 0.5 		 75 		 0.975 		 1.368
2500 		 0.5 		 75 		 0.975 		 1.368
Over-fitting detected
0 		 0.5 		 100 		 11.209 		 9.679
500 		 0.5 		 100 		 nan 		 nan
1000 		 0.5 		 100 		 nan 		 nan
1500 		 0.5 		 100 		 nan 		 nan
2000 		 0.5 		 100 		 nan 		 nan
2500 		 0.5 		 100 		 nan 		 nan
3000 		 0.5 		 100 		 nan 		 nan
3500 		 0.5 		 100 		 nan 		 nan
4000 		 0.5 		 100 		 nan 		 nan
4500 		 0.5 		 100 		 nan 		 nan
5000 		 0.5 		 100 		 nan 		 nan
5500 		 0.5 		 100 		 nan 		 nan
6000 		 0.5 		 100 		 nan 		 nan
6500 		 0.5 		 100 		 nan 		 nan
7000 		 0.5 		 100 		 nan 		 nan
7500 		 0.5 		 100 		 nan 		 nan
8000 		 0.5 		 100 		 nan 		 nan
8500 		 0.5 		 100 		 nan 		 nan
9000 		 0.5 		 100 		 nan 		 nan
9500 		 0.5 		 100 		 nan 		 nan
0 		 0.1 		 5 		 1.424 		 2.179
500 		 0.1 		 5 		 0.439 		 0.710
1000 		 0.1 		 5 		 0.307 		 0.491
1500 		 0.1 		 5 		 0.281 		 0.453
2000 		 0.1 		 5 		 0.269 		 0.442
2500 		 0.1 		 5 		 0.262 		 0.441
3000 		 0.1 		 5 		 0.254 		 0.429
3500 		 0.1 		 5 		 0.249 		 0.417
4000 		 0.1 		 5 		 0.238 		 0.401
4500 		 0.1 		 5 		 0.228 		 0.388
5000 		 0.1 		 5 		 0.218 		 0.376
5500 		 0.1 		 5 		 0.209 		 0.361
6000 		 0.1 		 5 		 0.198 		 0.354
6500 		 0.1 		 5 		 0.188 		 0.339
7000 		 0.1 		 5 		 0.178 		 0.330
7500 		 0.1 		 5 		 0.170 		 0.317
8000 		 0.1 		 5 		 0.160 		 0.300
8500 		 0.1 		 5 		 0.153 		 0.287
9000 		 0.1 		 5 		 0.145 		 0.276
9500 		 0.1 		 5 		 0.138 		 0.269
0 		 0.1 		 10 		 1.102 		 1.788
500 		 0.1 		 10 		 0.442 		 0.718
1000 		 0.1 		 10 		 0.308 		 0.493
1500 		 0.1 		 10 		 0.289 		 0.460
2000 		 0.1 		 10 		 0.277 		 0.441
2500 		 0.1 		 10 		 0.266 		 0.431
3000 		 0.1 		 10 		 0.260 		 0.428
3500 		 0.1 		 10 		 0.254 		 0.419
4000 		 0.1 		 10 		 0.249 		 0.417
Over-fitting detected
0 		 0.1 		 15 		 1.047 		 1.566
500 		 0.1 		 15 		 0.428 		 0.699
1000 		 0.1 		 15 		 0.311 		 0.491
1500 		 0.1 		 15 		 0.295 		 0.463
2000 		 0.1 		 15 		 0.285 		 0.450
2500 		 0.1 		 15 		 0.275 		 0.451
3000 		 0.1 		 15 		 0.269 		 0.447
3500 		 0.1 		 15 		 0.264 		 0.445
Over-fitting detected
0 		 0.1 		 20 		 0.998 		 1.351
500 		 0.1 		 20 		 0.387 		 0.636
1000 		 0.1 		 20 		 0.304 		 0.481
1500 		 0.1 		 20 		 0.301 		 0.465
2000 		 0.1 		 20 		 0.290 		 0.460
2500 		 0.1 		 20 		 0.276 		 0.449
3000 		 0.1 		 20 		 0.270 		 0.439
3500 		 0.1 		 20 		 0.264 		 0.440
Over-fitting detected
0 		 0.1 		 25 		 1.048 		 1.316
500 		 0.1 		 25 		 0.441 		 0.722
1000 		 0.1 		 25 		 0.312 		 0.493
1500 		 0.1 		 25 		 0.298 		 0.469
2000 		 0.1 		 25 		 0.290 		 0.453
2500 		 0.1 		 25 		 0.281 		 0.449
3000 		 0.1 		 25 		 0.274 		 0.443
3500 		 0.1 		 25 		 0.265 		 0.433
4000 		 0.1 		 25 		 0.260 		 0.429
Over-fitting detected
0 		 0.1 		 30 		 1.039 		 1.409
500 		 0.1 		 30 		 0.440 		 0.719
1000 		 0.1 		 30 		 0.309 		 0.487
1500 		 0.1 		 30 		 0.294 		 0.455
2000 		 0.1 		 30 		 0.277 		 0.448
2500 		 0.1 		 30 		 0.269 		 0.444
3000 		 0.1 		 30 		 0.265 		 0.440
3500 		 0.1 		 30 		 0.264 		 0.434
Over-fitting detected
0 		 0.1 		 40 		 0.905 		 1.325
500 		 0.1 		 40 		 0.397 		 0.644
1000 		 0.1 		 40 		 0.310 		 0.483
1500 		 0.1 		 40 		 0.300 		 0.465
2000 		 0.1 		 40 		 0.294 		 0.455
2500 		 0.1 		 40 		 0.288 		 0.450
3000 		 0.1 		 40 		 0.281 		 0.443
3500 		 0.1 		 40 		 0.274 		 0.443
4000 		 0.1 		 40 		 0.277 		 0.449
Over-fitting detected
0 		 0.1 		 50 		 1.026 		 1.376
500 		 0.1 		 50 		 0.466 		 0.730
1000 		 0.1 		 50 		 0.313 		 0.498
1500 		 0.1 		 50 		 0.310 		 0.473
2000 		 0.1 		 50 		 0.296 		 0.456
2500 		 0.1 		 50 		 0.290 		 0.462
3000 		 0.1 		 50 		 0.296 		 0.459
3500 		 0.1 		 50 		 0.274 		 0.446
Over-fitting detected
0 		 0.1 		 75 		 1.109 		 1.704
500 		 0.1 		 75 		 0.414 		 0.640
1000 		 0.1 		 75 		 0.309 		 0.479
1500 		 0.1 		 75 		 0.300 		 0.464
2000 		 0.1 		 75 		 0.311 		 0.468
2500 		 0.1 		 75 		 0.292 		 0.456
3000 		 0.1 		 75 		 0.283 		 0.457
3500 		 0.1 		 75 		 0.284 		 0.453
Over-fitting detected
0 		 0.1 		 100 		 3.592 		 4.984
500 		 0.1 		 100 		 0.415 		 0.690
1000 		 0.1 		 100 		 0.309 		 0.494
1500 		 0.1 		 100 		 0.309 		 0.483
2000 		 0.1 		 100 		 0.282 		 0.456
2500 		 0.1 		 100 		 0.272 		 0.445
3000 		 0.1 		 100 		 0.268 		 0.440
3500 		 0.1 		 100 		 0.264 		 0.440
4000 		 0.1 		 100 		 0.262 		 0.436
Over-fitting detected
0 		 0.05 		 5 		 1.071 		 1.336
500 		 0.05 		 5 		 0.581 		 1.032
1000 		 0.05 		 5 		 0.430 		 0.708
1500 		 0.05 		 5 		 0.338 		 0.555
2000 		 0.05 		 5 		 0.306 		 0.485
2500 		 0.05 		 5 		 0.295 		 0.462
3000 		 0.05 		 5 		 0.288 		 0.451
3500 		 0.05 		 5 		 0.282 		 0.446
4000 		 0.05 		 5 		 0.277 		 0.442
4500 		 0.05 		 5 		 0.272 		 0.438
5000 		 0.05 		 5 		 0.269 		 0.436
Over-fitting detected
0 		 0.05 		 10 		 1.214 		 1.347
500 		 0.05 		 10 		 0.587 		 0.951
1000 		 0.05 		 10 		 0.447 		 0.726
1500 		 0.05 		 10 		 0.347 		 0.558
2000 		 0.05 		 10 		 0.310 		 0.485
2500 		 0.05 		 10 		 0.299 		 0.465
3000 		 0.05 		 10 		 0.295 		 0.460
3500 		 0.05 		 10 		 0.287 		 0.452
4000 		 0.05 		 10 		 0.281 		 0.445
4500 		 0.05 		 10 		 0.273 		 0.441
5000 		 0.05 		 10 		 0.269 		 0.437
Over-fitting detected
0 		 0.05 		 15 		 0.999 		 1.365
500 		 0.05 		 15 		 0.565 		 0.939
1000 		 0.05 		 15 		 0.420 		 0.680
1500 		 0.05 		 15 		 0.336 		 0.538
2000 		 0.05 		 15 		 0.311 		 0.486
2500 		 0.05 		 15 		 0.303 		 0.469
3000 		 0.05 		 15 		 0.299 		 0.457
3500 		 0.05 		 15 		 0.297 		 0.456
4000 		 0.05 		 15 		 0.294 		 0.461
4500 		 0.05 		 15 		 0.290 		 0.455
5000 		 0.05 		 15 		 0.286 		 0.450
Over-fitting detected
0 		 0.05 		 20 		 0.905 		 1.386
500 		 0.05 		 20 		 0.569 		 0.992
1000 		 0.05 		 20 		 0.426 		 0.699
1500 		 0.05 		 20 		 0.338 		 0.547
2000 		 0.05 		 20 		 0.309 		 0.490
2500 		 0.05 		 20 		 0.300 		 0.468
3000 		 0.05 		 20 		 0.296 		 0.462
3500 		 0.05 		 20 		 0.291 		 0.456
4000 		 0.05 		 20 		 0.286 		 0.453
4500 		 0.05 		 20 		 0.279 		 0.447
5000 		 0.05 		 20 		 0.273 		 0.443
Over-fitting detected
0 		 0.05 		 25 		 1.171 		 1.380
500 		 0.05 		 25 		 0.627 		 1.117
1000 		 0.05 		 25 		 0.490 		 0.791
1500 		 0.05 		 25 		 0.369 		 0.603
2000 		 0.05 		 25 		 0.316 		 0.503
2500 		 0.05 		 25 		 0.303 		 0.471
3000 		 0.05 		 25 		 0.297 		 0.462
3500 		 0.05 		 25 		 0.294 		 0.458
4000 		 0.05 		 25 		 0.289 		 0.451
4500 		 0.05 		 25 		 0.288 		 0.450
5000 		 0.05 		 25 		 0.279 		 0.442
5500 		 0.05 		 25 		 0.275 		 0.435
6000 		 0.05 		 25 		 0.271 		 0.437
6500 		 0.05 		 25 		 0.268 		 0.434
Over-fitting detected
0 		 0.05 		 30 		 0.964 		 1.335
500 		 0.05 		 30 		 0.586 		 0.975
1000 		 0.05 		 30 		 0.435 		 0.722
1500 		 0.05 		 30 		 0.343 		 0.566
2000 		 0.05 		 30 		 0.313 		 0.495
2500 		 0.05 		 30 		 0.301 		 0.470
3000 		 0.05 		 30 		 0.296 		 0.459
3500 		 0.05 		 30 		 0.291 		 0.457
4000 		 0.05 		 30 		 0.286 		 0.448
4500 		 0.05 		 30 		 0.282 		 0.450
5000 		 0.05 		 30 		 0.278 		 0.446
5500 		 0.05 		 30 		 0.276 		 0.439
Over-fitting detected
0 		 0.05 		 40 		 1.053 		 1.246
500 		 0.05 		 40 		 0.552 		 0.947
1000 		 0.05 		 40 		 0.401 		 0.663
1500 		 0.05 		 40 		 0.329 		 0.529
2000 		 0.05 		 40 		 0.305 		 0.476
2500 		 0.05 		 40 		 0.299 		 0.464
3000 		 0.05 		 40 		 0.294 		 0.458
3500 		 0.05 		 40 		 0.291 		 0.458
4000 		 0.05 		 40 		 0.287 		 0.451
4500 		 0.05 		 40 		 0.281 		 0.447
5000 		 0.05 		 40 		 0.278 		 0.442
Over-fitting detected
0 		 0.05 		 50 		 0.950 		 1.468
500 		 0.05 		 50 		 0.556 		 0.924
1000 		 0.05 		 50 		 0.417 		 0.681
1500 		 0.05 		 50 		 0.336 		 0.541
2000 		 0.05 		 50 		 0.309 		 0.490
2500 		 0.05 		 50 		 0.300 		 0.468
3000 		 0.05 		 50 		 0.298 		 0.465
3500 		 0.05 		 50 		 0.291 		 0.457
4000 		 0.05 		 50 		 0.287 		 0.457
4500 		 0.05 		 50 		 0.282 		 0.452
5000 		 0.05 		 50 		 0.279 		 0.450
Over-fitting detected
0 		 0.05 		 75 		 0.936 		 1.413
500 		 0.05 		 75 		 0.567 		 0.907
1000 		 0.05 		 75 		 0.403 		 0.670
1500 		 0.05 		 75 		 0.328 		 0.530
2000 		 0.05 		 75 		 0.322 		 0.497
2500 		 0.05 		 75 		 0.302 		 0.472
3000 		 0.05 		 75 		 0.299 		 0.463
3500 		 0.05 		 75 		 0.302 		 0.467
4000 		 0.05 		 75 		 0.298 		 0.465
4500 		 0.05 		 75 		 0.292 		 0.456
5000 		 0.05 		 75 		 0.288 		 0.456
5500 		 0.05 		 75 		 0.296 		 0.464
Over-fitting detected
0 		 0.05 		 100 		 0.949 		 1.363
500 		 0.05 		 100 		 0.499 		 0.841
1000 		 0.05 		 100 		 0.388 		 0.652
1500 		 0.05 		 100 		 0.316 		 0.517
2000 		 0.05 		 100 		 0.304 		 0.479
2500 		 0.05 		 100 		 0.320 		 0.487
3000 		 0.05 		 100 		 0.305 		 0.470
3500 		 0.05 		 100 		 0.302 		 0.466
4000 		 0.05 		 100 		 0.304 		 0.466
4500 		 0.05 		 100 		 0.293 		 0.459
5000 		 0.05 		 100 		 0.298 		 0.462
5500 		 0.05 		 100 		 0.289 		 0.451
Over-fitting detected
0 		 0.001 		 5 		 1.042 		 1.407
500 		 0.001 		 5 		 1.004 		 1.401
1000 		 0.001 		 5 		 0.973 		 1.397
1500 		 0.001 		 5 		 0.947 		 1.386
2000 		 0.001 		 5 		 0.924 		 1.375
2500 		 0.001 		 5 		 0.902 		 1.362
Over-fitting detected
0 		 0.001 		 10 		 0.941 		 1.376
500 		 0.001 		 10 		 0.927 		 1.363
1000 		 0.001 		 10 		 0.913 		 1.355
1500 		 0.001 		 10 		 0.900 		 1.349
2000 		 0.001 		 10 		 0.886 		 1.343
2500 		 0.001 		 10 		 0.873 		 1.335
Over-fitting detected
0 		 0.001 		 15 		 1.019 		 1.618
500 		 0.001 		 15 		 0.913 		 1.362
1000 		 0.001 		 15 		 0.888 		 1.326
1500 		 0.001 		 15 		 0.865 		 1.308
2000 		 0.001 		 15 		 0.844 		 1.293
2500 		 0.001 		 15 		 0.824 		 1.286
3000 		 0.001 		 15 		 0.805 		 1.274
Over-fitting detected
0 		 0.001 		 20 		 0.961 		 1.669
500 		 0.001 		 20 		 0.831 		 1.320
1000 		 0.001 		 20 		 0.809 		 1.286
1500 		 0.001 		 20 		 0.788 		 1.269
2000 		 0.001 		 20 		 0.770 		 1.258
2500 		 0.001 		 20 		 0.753 		 1.246
3000 		 0.001 		 20 		 0.738 		 1.232
Over-fitting detected
0 		 0.001 		 25 		 0.940 		 1.510
500 		 0.001 		 25 		 0.888 		 1.348
1000 		 0.001 		 25 		 0.867 		 1.328
1500 		 0.001 		 25 		 0.848 		 1.325
2000 		 0.001 		 25 		 0.831 		 1.325
2500 		 0.001 		 25 		 0.814 		 1.308
3000 		 0.001 		 25 		 0.799 		 1.301
Over-fitting detected
0 		 0.001 		 30 		 1.035 		 1.584
500 		 0.001 		 30 		 0.942 		 1.371
1000 		 0.001 		 30 		 0.908 		 1.346
1500 		 0.001 		 30 		 0.878 		 1.329
2000 		 0.001 		 30 		 0.850 		 1.308
2500 		 0.001 		 30 		 0.825 		 1.287
3000 		 0.001 		 30 		 0.802 		 1.271
Over-fitting detected
0 		 0.001 		 40 		 1.542 		 1.489
500 		 0.001 		 40 		 0.927 		 1.324
1000 		 0.001 		 40 		 0.897 		 1.311
1500 		 0.001 		 40 		 0.870 		 1.296
2000 		 0.001 		 40 		 0.845 		 1.282
2500 		 0.001 		 40 		 0.821 		 1.274
Over-fitting detected
0 		 0.001 		 50 		 1.252 		 1.363
500 		 0.001 		 50 		 0.981 		 1.353
1000 		 0.001 		 50 		 0.953 		 1.343
1500 		 0.001 		 50 		 0.927 		 1.330
2000 		 0.001 		 50 		 0.903 		 1.314
2500 		 0.001 		 50 		 0.880 		 1.304
Over-fitting detected
0 		 0.001 		 75 		 1.337 		 1.397
500 		 0.001 		 75 		 0.931 		 1.339
1000 		 0.001 		 75 		 0.890 		 1.313
1500 		 0.001 		 75 		 0.855 		 1.304
2000 		 0.001 		 75 		 0.825 		 1.282
2500 		 0.001 		 75 		 0.798 		 1.272
Over-fitting detected
0 		 0.001 		 100 		 0.934 		 1.250
500 		 0.001 		 100 		 0.861 		 1.301
1000 		 0.001 		 100 		 0.825 		 1.275
1500 		 0.001 		 100 		 0.794 		 1.254
2000 		 0.001 		 100 		 0.769 		 1.253
2500 		 0.001 		 100 		 0.748 		 1.241
Over-fitting detected

Hyperparameter searching part 2

I skimmed through the above results, looking for anything that got good training and validation losses before seeming to over-fit. Learning rates of 2 or below 0.05, and hidden_nodes of 100, seemed easy to eliminate, and I focused down on the following combinations:

  • learning rate = 1, hidden_nodes 10-15
  • learning rate = 0.5, hidden_nodes 20-50

I then changed the algorithm to try just specific variations of those combinations. (I left iterations at 10000 since I abort when a configuration looks like it may be over-fitting, and running longer gives me a good idea of how many iterations to use with these combinations.)


In [108]:
### Set the hyperparameters here ###
iterations = 10000
learning_rates_list = [1, 1.2, 0.8, 1, 1.2, 0.8, 1, 1.2, 0.8, 1, 1.2, 0.8, 1, 1.2, 0.8, 
                       0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6]
hidden_nodes_list = [8, 8, 8, 10, 10, 10, 13, 13, 13, 15, 15, 15, 17, 17, 17, 
                     18, 18, 18, 20, 20, 20, 25, 25, 25, 30, 30, 30, 40, 40, 40, 50, 50, 50]
output_nodes = 1
if len(learning_rates_list) != len(hidden_nodes_list):
    print("your configuration is wrong, you probably want to abort...")

    
N_i = train_features.shape[1]


print("iterations \t learning_rate \t hidden_nodes \t train_loss \t val_loss")
for learning_rate, hidden_nodes in zip(learning_rates_list, hidden_nodes_list):
    print("starting new network config")
    network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

    train_losses = []
    val_losses = []
        
    for ii in range(iterations):
        # Go through a random batch of 128 records from the training data set
        batch = np.random.choice(train_features.index, size=128)
        X, y = train_features.loc[batch].values, train_targets.loc[batch, 'cnt']
                             
        network.train(X, y)
            
        # check progress every 500 iterations
        if ii % 500 == 0:
            current_train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
            current_val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
                
            train_losses.append(current_train_loss)
            val_losses.append(current_val_loss)
                
            #print the progress thus far
            output = "{} \t\t {} \t\t {} \t\t {:2.3f} \t\t {:2.3f}"
            print(output.format(ii, learning_rate, hidden_nodes, current_train_loss, current_val_loss))
                    
            # if these settings seem to be overfitting
            if is_overfitting(train_losses, val_losses):
                print("Over-fitting detected")
                break # move on to the next hyperparameter settings
            else:
                continue # keep going on next 500 iterations


iterations 	 learning_rate 	 hidden_nodes 	 train_loss 	 val_loss
starting new network config
0 		 1 		 8 		 1.034 		 1.599
500 		 1 		 8 		 0.227 		 0.394
1000 		 1 		 8 		 0.167 		 0.327
1500 		 1 		 8 		 0.113 		 0.219
2000 		 1 		 8 		 0.083 		 0.186
2500 		 1 		 8 		 0.076 		 0.168
3000 		 1 		 8 		 0.068 		 0.156
3500 		 1 		 8 		 0.067 		 0.160
4000 		 1 		 8 		 0.063 		 0.145
4500 		 1 		 8 		 0.058 		 0.145
5000 		 1 		 8 		 0.057 		 0.144
5500 		 1 		 8 		 0.057 		 0.174
Over-fitting detected
starting new network config
0 		 1.2 		 8 		 1.076 		 1.266
500 		 1.2 		 8 		 0.204 		 0.370
1000 		 1.2 		 8 		 0.108 		 0.200
1500 		 1.2 		 8 		 0.075 		 0.151
2000 		 1.2 		 8 		 0.069 		 0.155
2500 		 1.2 		 8 		 0.071 		 0.145
3000 		 1.2 		 8 		 0.063 		 0.151
3500 		 1.2 		 8 		 0.066 		 0.137
4000 		 1.2 		 8 		 0.062 		 0.139
4500 		 1.2 		 8 		 0.062 		 0.183
Over-fitting detected
starting new network config
0 		 0.8 		 8 		 1.026 		 1.266
500 		 0.8 		 8 		 0.257 		 0.419
1000 		 0.8 		 8 		 0.222 		 0.373
1500 		 0.8 		 8 		 0.152 		 0.311
2000 		 0.8 		 8 		 0.094 		 0.197
2500 		 0.8 		 8 		 0.071 		 0.156
3000 		 0.8 		 8 		 0.065 		 0.156
3500 		 0.8 		 8 		 0.061 		 0.154
4000 		 0.8 		 8 		 0.059 		 0.135
4500 		 0.8 		 8 		 0.058 		 0.144
5000 		 0.8 		 8 		 0.059 		 0.145
5500 		 0.8 		 8 		 0.056 		 0.157
Over-fitting detected
starting new network config
0 		 1 		 10 		 0.945 		 1.263
500 		 1 		 10 		 0.232 		 0.398
1000 		 1 		 10 		 0.169 		 0.320
1500 		 1 		 10 		 0.110 		 0.227
2000 		 1 		 10 		 0.105 		 0.232
2500 		 1 		 10 		 0.077 		 0.195
3000 		 1 		 10 		 0.062 		 0.189
3500 		 1 		 10 		 0.060 		 0.168
4000 		 1 		 10 		 0.062 		 0.164
4500 		 1 		 10 		 0.061 		 0.174
5000 		 1 		 10 		 0.058 		 0.179
5500 		 1 		 10 		 0.056 		 0.185
Over-fitting detected
starting new network config
0 		 1.2 		 10 		 1.448 		 1.421
500 		 1.2 		 10 		 0.277 		 0.503
1000 		 1.2 		 10 		 0.151 		 0.281
1500 		 1.2 		 10 		 0.104 		 0.196
2000 		 1.2 		 10 		 0.082 		 0.179
2500 		 1.2 		 10 		 0.083 		 0.166
3000 		 1.2 		 10 		 0.069 		 0.169
3500 		 1.2 		 10 		 0.123 		 0.163
4000 		 1.2 		 10 		 0.068 		 0.180
4500 		 1.2 		 10 		 0.060 		 0.157
5000 		 1.2 		 10 		 0.059 		 0.147
Over-fitting detected
starting new network config
0 		 0.8 		 10 		 0.976 		 1.374
500 		 0.8 		 10 		 0.240 		 0.409
1000 		 0.8 		 10 		 0.169 		 0.330
1500 		 0.8 		 10 		 0.112 		 0.238
2000 		 0.8 		 10 		 0.088 		 0.190
2500 		 0.8 		 10 		 0.078 		 0.159
3000 		 0.8 		 10 		 0.068 		 0.139
3500 		 0.8 		 10 		 0.067 		 0.135
4000 		 0.8 		 10 		 0.061 		 0.142
4500 		 0.8 		 10 		 0.066 		 0.154
5000 		 0.8 		 10 		 0.060 		 0.135
Over-fitting detected
starting new network config
0 		 1 		 13 		 2.839 		 4.069
500 		 1 		 13 		 0.255 		 0.448
1000 		 1 		 13 		 0.189 		 0.366
1500 		 1 		 13 		 0.122 		 0.233
2000 		 1 		 13 		 0.092 		 0.195
2500 		 1 		 13 		 0.079 		 0.191
3000 		 1 		 13 		 0.072 		 0.191
3500 		 1 		 13 		 0.067 		 0.179
4000 		 1 		 13 		 0.086 		 0.201
Over-fitting detected
starting new network config
0 		 1.2 		 13 		 3.404 		 4.807
500 		 1.2 		 13 		 0.240 		 0.406
1000 		 1.2 		 13 		 0.204 		 0.356
1500 		 1.2 		 13 		 0.180 		 0.247
2000 		 1.2 		 13 		 0.128 		 0.232
2500 		 1.2 		 13 		 0.123 		 0.239
3000 		 1.2 		 13 		 0.080 		 0.193
3500 		 1.2 		 13 		 0.071 		 0.174
Over-fitting detected
starting new network config
0 		 0.8 		 13 		 1.234 		 2.059
500 		 0.8 		 13 		 0.260 		 0.429
1000 		 0.8 		 13 		 0.175 		 0.325
1500 		 0.8 		 13 		 0.113 		 0.230
2000 		 0.8 		 13 		 0.084 		 0.162
2500 		 0.8 		 13 		 0.067 		 0.158
3000 		 0.8 		 13 		 0.066 		 0.137
3500 		 0.8 		 13 		 0.061 		 0.155
4000 		 0.8 		 13 		 0.057 		 0.139
4500 		 0.8 		 13 		 0.056 		 0.133
Over-fitting detected
starting new network config
0 		 1 		 15 		 1.098 		 1.700
500 		 1 		 15 		 0.243 		 0.413
1000 		 1 		 15 		 0.196 		 0.370
1500 		 1 		 15 		 0.147 		 0.256
2000 		 1 		 15 		 0.104 		 0.206
2500 		 1 		 15 		 0.083 		 0.169
3000 		 1 		 15 		 0.088 		 0.178
3500 		 1 		 15 		 0.080 		 0.164
4000 		 1 		 15 		 0.070 		 0.179
4500 		 1 		 15 		 0.072 		 0.175
Over-fitting detected
starting new network config
0 		 1.2 		 15 		 4.813 		 4.378
500 		 1.2 		 15 		 0.246 		 0.475
1000 		 1.2 		 15 		 0.225 		 0.375
1500 		 1.2 		 15 		 0.214 		 0.382
2000 		 1.2 		 15 		 0.200 		 0.373
2500 		 1.2 		 15 		 0.129 		 0.285
Over-fitting detected
starting new network config
0 		 0.8 		 15 		 1.245 		 1.979
500 		 0.8 		 15 		 0.236 		 0.397
1000 		 0.8 		 15 		 0.149 		 0.300
1500 		 0.8 		 15 		 0.102 		 0.205
2000 		 0.8 		 15 		 0.081 		 0.173
2500 		 0.8 		 15 		 0.070 		 0.150
3000 		 0.8 		 15 		 0.065 		 0.144
3500 		 0.8 		 15 		 0.063 		 0.141
4000 		 0.8 		 15 		 0.063 		 0.140
4500 		 0.8 		 15 		 0.061 		 0.152
5000 		 0.8 		 15 		 0.058 		 0.133
Over-fitting detected
starting new network config
0 		 1 		 17 		 2.281 		 3.415
500 		 1 		 17 		 0.243 		 0.395
1000 		 1 		 17 		 0.222 		 0.380
1500 		 1 		 17 		 0.148 		 0.278
2000 		 1 		 17 		 0.093 		 0.177
2500 		 1 		 17 		 0.070 		 0.141
3000 		 1 		 17 		 0.062 		 0.125
3500 		 1 		 17 		 0.058 		 0.122
4000 		 1 		 17 		 0.056 		 0.125
4500 		 1 		 17 		 0.055 		 0.131
5000 		 1 		 17 		 0.055 		 0.122
Over-fitting detected
starting new network config
0 		 1.2 		 17 		 1.324 		 2.147
500 		 1.2 		 17 		 0.701 		 1.181
1000 		 1.2 		 17 		 0.533 		 0.740
1500 		 1.2 		 17 		 0.482 		 0.638
2000 		 1.2 		 17 		 0.408 		 0.598
2500 		 1.2 		 17 		 0.359 		 0.551
3000 		 1.2 		 17 		 0.269 		 0.440
3500 		 1.2 		 17 		 0.234 		 0.379
4000 		 1.2 		 17 		 0.216 		 0.361
4500 		 1.2 		 17 		 0.222 		 0.353
5000 		 1.2 		 17 		 0.177 		 0.279
5500 		 1.2 		 17 		 0.135 		 0.212
6000 		 1.2 		 17 		 0.114 		 0.196
6500 		 1.2 		 17 		 0.108 		 0.205
7000 		 1.2 		 17 		 0.102 		 0.175
7500 		 1.2 		 17 		 0.105 		 0.172
8000 		 1.2 		 17 		 0.101 		 0.197
8500 		 1.2 		 17 		 0.113 		 0.187
9000 		 1.2 		 17 		 0.105 		 0.193
Over-fitting detected
starting new network config
0 		 0.8 		 17 		 1.931 		 3.093
500 		 0.8 		 17 		 0.249 		 0.428
1000 		 0.8 		 17 		 0.194 		 0.343
1500 		 0.8 		 17 		 0.140 		 0.286
2000 		 0.8 		 17 		 0.097 		 0.210
2500 		 0.8 		 17 		 0.087 		 0.180
3000 		 0.8 		 17 		 0.080 		 0.159
3500 		 0.8 		 17 		 0.091 		 0.164
4000 		 0.8 		 17 		 0.073 		 0.197
4500 		 0.8 		 17 		 0.080 		 0.163
5000 		 0.8 		 17 		 0.107 		 0.223
Over-fitting detected
starting new network config
0 		 0.4 		 18 		 1.282 		 1.299
500 		 0.4 		 18 		 0.343 		 0.530
1000 		 0.4 		 18 		 0.261 		 0.448
1500 		 0.4 		 18 		 0.266 		 0.438
2000 		 0.4 		 18 		 0.239 		 0.402
2500 		 0.4 		 18 		 0.213 		 0.365
Over-fitting detected
starting new network config
0 		 0.5 		 18 		 0.954 		 1.261
500 		 0.5 		 18 		 0.343 		 0.507
1000 		 0.5 		 18 		 0.246 		 0.417
1500 		 0.5 		 18 		 0.198 		 0.352
2000 		 0.5 		 18 		 0.134 		 0.259
2500 		 0.5 		 18 		 0.118 		 0.211
3000 		 0.5 		 18 		 0.079 		 0.171
3500 		 0.5 		 18 		 0.072 		 0.176
4000 		 0.5 		 18 		 0.064 		 0.157
4500 		 0.5 		 18 		 0.066 		 0.151
5000 		 0.5 		 18 		 0.062 		 0.159
5500 		 0.5 		 18 		 0.059 		 0.162
6000 		 0.5 		 18 		 0.055 		 0.159
Over-fitting detected
starting new network config
0 		 0.6 		 18 		 1.486 		 2.526
500 		 0.6 		 18 		 0.259 		 0.430
1000 		 0.6 		 18 		 0.217 		 0.378
1500 		 0.6 		 18 		 0.151 		 0.302
2000 		 0.6 		 18 		 0.099 		 0.212
2500 		 0.6 		 18 		 0.075 		 0.173
3000 		 0.6 		 18 		 0.073 		 0.154
3500 		 0.6 		 18 		 0.060 		 0.141
4000 		 0.6 		 18 		 0.063 		 0.167
4500 		 0.6 		 18 		 0.055 		 0.138
5000 		 0.6 		 18 		 0.056 		 0.132
Over-fitting detected
starting new network config
0 		 0.4 		 20 		 1.601 		 1.539
500 		 0.4 		 20 		 0.348 		 0.528
1000 		 0.4 		 20 		 0.300 		 0.482
1500 		 0.4 		 20 		 0.241 		 0.412
2000 		 0.4 		 20 		 0.186 		 0.338
2500 		 0.4 		 20 		 0.140 		 0.269
Over-fitting detected
starting new network config
0 		 0.5 		 20 		 1.451 		 2.328
500 		 0.5 		 20 		 0.264 		 0.433
1000 		 0.5 		 20 		 0.240 		 0.406
1500 		 0.5 		 20 		 0.191 		 0.351
2000 		 0.5 		 20 		 0.150 		 0.291
2500 		 0.5 		 20 		 0.095 		 0.210
3000 		 0.5 		 20 		 0.075 		 0.180
3500 		 0.5 		 20 		 0.070 		 0.173
4000 		 0.5 		 20 		 0.061 		 0.139
4500 		 0.5 		 20 		 0.061 		 0.142
5000 		 0.5 		 20 		 0.058 		 0.148
5500 		 0.5 		 20 		 0.054 		 0.135
6000 		 0.5 		 20 		 0.056 		 0.138
6500 		 0.5 		 20 		 0.056 		 0.139
7000 		 0.5 		 20 		 0.053 		 0.150
Over-fitting detected
starting new network config
0 		 0.6 		 20 		 1.395 		 2.137
500 		 0.6 		 20 		 0.271 		 0.424
1000 		 0.6 		 20 		 0.194 		 0.360
1500 		 0.6 		 20 		 0.124 		 0.255
2000 		 0.6 		 20 		 0.088 		 0.190
2500 		 0.6 		 20 		 0.078 		 0.156
3000 		 0.6 		 20 		 0.071 		 0.143
3500 		 0.6 		 20 		 0.060 		 0.140
4000 		 0.6 		 20 		 0.058 		 0.144
4500 		 0.6 		 20 		 0.063 		 0.127
5000 		 0.6 		 20 		 0.056 		 0.138
Over-fitting detected
starting new network config
0 		 0.4 		 25 		 0.945 		 1.532
500 		 0.4 		 25 		 0.273 		 0.456
1000 		 0.4 		 25 		 0.247 		 0.424
1500 		 0.4 		 25 		 0.224 		 0.393
2000 		 0.4 		 25 		 0.173 		 0.319
2500 		 0.4 		 25 		 0.122 		 0.252
3000 		 0.4 		 25 		 0.102 		 0.202
3500 		 0.4 		 25 		 0.077 		 0.172
4000 		 0.4 		 25 		 0.069 		 0.152
4500 		 0.4 		 25 		 0.065 		 0.143
5000 		 0.4 		 25 		 0.062 		 0.148
5500 		 0.4 		 25 		 0.058 		 0.141
6000 		 0.4 		 25 		 0.061 		 0.138
6500 		 0.4 		 25 		 0.057 		 0.140
7000 		 0.4 		 25 		 0.055 		 0.151
Over-fitting detected
starting new network config
0 		 0.5 		 25 		 1.275 		 1.369
500 		 0.5 		 25 		 0.266 		 0.460
1000 		 0.5 		 25 		 0.230 		 0.390
1500 		 0.5 		 25 		 0.194 		 0.342
2000 		 0.5 		 25 		 0.150 		 0.274
2500 		 0.5 		 25 		 0.112 		 0.216
3000 		 0.5 		 25 		 0.087 		 0.186
3500 		 0.5 		 25 		 0.073 		 0.161
4000 		 0.5 		 25 		 0.072 		 0.141
4500 		 0.5 		 25 		 0.061 		 0.150
5000 		 0.5 		 25 		 0.060 		 0.146
5500 		 0.5 		 25 		 0.060 		 0.135
6000 		 0.5 		 25 		 0.060 		 0.148
Over-fitting detected
starting new network config
0 		 0.6 		 25 		 1.981 		 1.795
500 		 0.6 		 25 		 0.261 		 0.443
1000 		 0.6 		 25 		 0.207 		 0.360
1500 		 0.6 		 25 		 0.165 		 0.315
2000 		 0.6 		 25 		 0.117 		 0.238
2500 		 0.6 		 25 		 0.094 		 0.205
Over-fitting detected
starting new network config
0 		 0.4 		 30 		 2.734 		 2.174
500 		 0.4 		 30 		 0.271 		 0.466
1000 		 0.4 		 30 		 0.243 		 0.413
1500 		 0.4 		 30 		 0.207 		 0.367
2000 		 0.4 		 30 		 0.152 		 0.300
2500 		 0.4 		 30 		 0.111 		 0.233
Over-fitting detected
starting new network config
0 		 0.5 		 30 		 2.572 		 2.173
500 		 0.5 		 30 		 0.356 		 0.623
1000 		 0.5 		 30 		 0.226 		 0.393
1500 		 0.5 		 30 		 0.159 		 0.304
2000 		 0.5 		 30 		 0.106 		 0.220
2500 		 0.5 		 30 		 0.082 		 0.170
Over-fitting detected
starting new network config
0 		 0.6 		 30 		 0.920 		 1.385
500 		 0.6 		 30 		 0.618 		 1.016
1000 		 0.6 		 30 		 0.261 		 0.448
1500 		 0.6 		 30 		 0.220 		 0.384
2000 		 0.6 		 30 		 0.197 		 0.366
2500 		 0.6 		 30 		 0.143 		 0.304
3000 		 0.6 		 30 		 0.099 		 0.195
3500 		 0.6 		 30 		 0.081 		 0.176
4000 		 0.6 		 30 		 0.076 		 0.158
4500 		 0.6 		 30 		 0.071 		 0.150
5000 		 0.6 		 30 		 0.066 		 0.148
5500 		 0.6 		 30 		 0.065 		 0.135
6000 		 0.6 		 30 		 0.064 		 0.143
6500 		 0.6 		 30 		 0.068 		 0.142
7000 		 0.6 		 30 		 0.061 		 0.135
7500 		 0.6 		 30 		 0.060 		 0.138
8000 		 0.6 		 30 		 0.059 		 0.140
Over-fitting detected
starting new network config
0 		 0.4 		 40 		 6.095 		 8.245
500 		 0.4 		 40 		 0.324 		 0.619
1000 		 0.4 		 40 		 0.229 		 0.389
1500 		 0.4 		 40 		 0.189 		 0.331
2000 		 0.4 		 40 		 0.153 		 0.286
2500 		 0.4 		 40 		 0.129 		 0.252
3000 		 0.4 		 40 		 0.110 		 0.224
3500 		 0.4 		 40 		 0.095 		 0.196
4000 		 0.4 		 40 		 0.090 		 0.186
4500 		 0.4 		 40 		 0.087 		 0.180
5000 		 0.4 		 40 		 0.082 		 0.172
5500 		 0.4 		 40 		 0.079 		 0.176
6000 		 0.4 		 40 		 0.076 		 0.187
Over-fitting detected
starting new network config
0 		 0.5 		 40 		 2.862 		 4.047
500 		 0.5 		 40 		 0.908 		 1.274
1000 		 0.5 		 40 		 0.288 		 0.486
1500 		 0.5 		 40 		 0.228 		 0.392
2000 		 0.5 		 40 		 0.218 		 0.387
2500 		 0.5 		 40 		 0.204 		 0.377
3000 		 0.5 		 40 		 0.167 		 0.322
3500 		 0.5 		 40 		 0.135 		 0.265
4000 		 0.5 		 40 		 0.108 		 0.226
4500 		 0.5 		 40 		 0.087 		 0.181
5000 		 0.5 		 40 		 0.076 		 0.155
5500 		 0.5 		 40 		 0.072 		 0.145
6000 		 0.5 		 40 		 0.066 		 0.141
6500 		 0.5 		 40 		 0.064 		 0.136
7000 		 0.5 		 40 		 0.063 		 0.133
7500 		 0.5 		 40 		 0.061 		 0.134
8000 		 0.5 		 40 		 0.060 		 0.134
8500 		 0.5 		 40 		 0.063 		 0.128
9000 		 0.5 		 40 		 0.059 		 0.127
9500 		 0.5 		 40 		 0.060 		 0.135
starting new network config
0 		 0.6 		 40 		 2.825 		 3.999
500 		 0.6 		 40 		 1.037 		 1.366
1000 		 0.6 		 40 		 1.010 		 1.366
1500 		 0.6 		 40 		 0.999 		 1.366
2000 		 0.6 		 40 		 0.992 		 1.366
2500 		 0.6 		 40 		 0.987 		 1.366
3000 		 0.6 		 40 		 0.976 		 1.365
Over-fitting detected
starting new network config
0 		 0.4 		 50 		 3.690 		 3.124
500 		 0.4 		 50 		 0.958 		 1.427
1000 		 0.4 		 50 		 0.910 		 1.400
1500 		 0.4 		 50 		 0.894 		 1.389
2000 		 0.4 		 50 		 0.884 		 1.382
2500 		 0.4 		 50 		 0.878 		 1.377
Over-fitting detected
starting new network config
0 		 0.5 		 50 		 15.988 		 19.113
500 		 0.5 		 50 		 0.963 		 1.398
1000 		 0.5 		 50 		 0.919 		 1.384
1500 		 0.5 		 50 		 0.880 		 1.362
2000 		 0.5 		 50 		 0.824 		 1.310
2500 		 0.5 		 50 		 0.780 		 1.262
3000 		 0.5 		 50 		 0.760 		 1.234
Over-fitting detected
starting new network config
0 		 0.6 		 50 		 6.354 		 7.849
500 		 0.6 		 50 		 0.971 		 1.365
1000 		 0.6 		 50 		 0.946 		 1.362
1500 		 0.6 		 50 		 0.913 		 1.351
2000 		 0.6 		 50 		 0.861 		 1.323
2500 		 0.6 		 50 		 0.791 		 1.250
3000 		 0.6 		 50 		 0.724 		 1.168
Over-fitting detected

Selecting Hyperparameters

There's a lot of noise in those results (due to the random nature of the weight initialization and sample selection). But the general indication is that optimal results are training losses around 0.060 and validation losses of ~0.135, and these can generally be achieved somewhere in the ranges of:

  • learning rates between 0.5 and 0.8
  • hidden_nodes between 8 and 40

With the caveats that the higher end of the hidden node range generally needs more iterations and doesn't produce any better outputs (in fact, it seems less likely to produce good results once past 30 nodes). Staying under ~20 hidden nodes allows the network to train well with fewer than ~5000 iterations.

Given that, I opted for settings at the lower end (see below), although trial and error with different parameter values indicates anything in that range will perform reasonably well.


In [119]:
import sys

### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.6
hidden_nodes = 10
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch, 'cnt']
                             
    network.train(X, y)
    
    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()
    
    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)


Progress: 100.0% ... Training loss: 0.066 ... Validation loss: 0.134

In [120]:
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim([0, 2]) # limit y range just so the lower end is clearer


Check out your predictions

Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.


In [121]:
fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index, 'dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)


OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).

Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?

Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter

Your answer below

It models the data fairly well. The place it does poorly is over the holiday season. I haven't looked too closely at the data, but I read in the Slack channel that someone found there's less data covering that time period, so the network hasn't trained as well on the patterns that occur during the holidays, which makes sense. Training with more holiday data should allow this to be corrected.

Also, despite further tuning of the hyperparameters, the tendency to slightly over- and under-shoot at the peak and low times remains. I'm looking forward to learning more advanced techniques in the course to build better deep learning networks!


In [ ]: