Your first neural network

In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.


In [1]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Load and prepare the data

A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!


In [2]:
data_path = 'Bike-Sharing-Dataset/hour.csv'

rides = pd.read_csv(data_path)

In [3]:
rides.head()


Out[3]:
instant dteday season yr mnth hr holiday weekday workingday weathersit temp atemp hum windspeed casual registered cnt
0 1 2011-01-01 1 0 1 0 0 6 0 1 0.24 0.2879 0.81 0.0 3 13 16
1 2 2011-01-01 1 0 1 1 0 6 0 1 0.22 0.2727 0.80 0.0 8 32 40
2 3 2011-01-01 1 0 1 2 0 6 0 1 0.22 0.2727 0.80 0.0 5 27 32
3 4 2011-01-01 1 0 1 3 0 6 0 1 0.24 0.2879 0.75 0.0 3 10 13
4 5 2011-01-01 1 0 1 4 0 6 0 1 0.24 0.2879 0.75 0.0 0 1 1

Checking out the data

This dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.

Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.


In [4]:
rides[:24*10].plot(x='dteday', y='cnt')


Out[4]:
<matplotlib.axes._subplots.AxesSubplot at 0x109695d30>

Dummy variables

Here we have some categorical variables like season, weather, and month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
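
For illustration, here is what get_dummies() produces for a small hypothetical categorical series (not part of the project code):

pd.get_dummies(pd.Series([1, 2, 3, 2]), prefix='season')
#    season_1  season_2  season_3
# 0         1         0         0
# 1         0         1         0
# 2         0         0         1
# 3         0         1         0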


In [5]:
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 
                  'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()


Out[5]:
yr holiday temp hum windspeed casual registered cnt season_1 season_2 ... hr_21 hr_22 hr_23 weekday_0 weekday_1 weekday_2 weekday_3 weekday_4 weekday_5 weekday_6
0 0 0 0.24 0.81 0.0 3 13 16 1 0 ... 0 0 0 0 0 0 0 0 0 1
1 0 0 0.22 0.80 0.0 8 32 40 1 0 ... 0 0 0 0 0 0 0 0 0 1
2 0 0 0.22 0.80 0.0 5 27 32 1 0 ... 0 0 0 0 0 0 0 0 0 1
3 0 0 0.24 0.75 0.0 3 10 13 1 0 ... 0 0 0 0 0 0 0 0 0 1
4 0 0 0.24 0.75 0.0 0 1 1 1 0 ... 0 0 0 0 0 0 0 0 0 1

5 rows × 59 columns

Scaling the quantitative variables

To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.

The scaling factors are saved so we can go backwards when we use the network for predictions.
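
Concretely, each variable $x$ is replaced by $x' = (x - \mu)/\sigma$, where $\mu$ and $\sigma$ are that variable's mean and standard deviation, and a prediction can be converted back with $x = x'\sigma + \mu$.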


In [6]:
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std

Splitting the data into training, testing, and validation sets

We'll save the data for approximately the last 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.


In [7]:
# Save data for approximately the last 21 days 
test_data = data[-21*24:]

# Now remove the test data from the data set 
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]

We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).


In [8]:
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]

Time to build the network

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account a threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
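
In equations, using the notation of the implementation below ($X$ is the batch of inputs, $h$ the hidden-layer outputs, $\hat{y}$ the network output, $\eta$ the learning rate, and $N$ the number of records): the output error term is $\delta_o = y - \hat{y}$ since $f'(x) = 1$, the hidden error term is $\delta_h = (\delta_o W_{ho}^T) \odot h \odot (1 - h)$ using the sigmoid derivative $h(1 - h)$, and the weight updates are $\Delta W_{ih} = \frac{\eta}{N} X^T \delta_h$ and $\Delta W_{ho} = \frac{\eta}{N} h^T \delta_o$.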

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:

  1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. (A minimal sketch follows this list.)
  2. Implement the forward pass in the train method.
  3. Implement the backpropagation algorithm in the train method, including calculating the output error.
  4. Implement the forward pass in the run method.
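
As a reference for task 1, here is a minimal sketch of a sigmoid and its derivative (the derivative appears in the hidden error term below as hidden_outputs * (1 - hidden_outputs)):

import numpy as np

def sigmoid(x):
    # Squashes any real-valued input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid, written in terms of its own output
    s = sigmoid(x)
    return s * (1 - s)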

In [35]:
import numpy as np
from collections import OrderedDict
# Note: on newer numba versions (>= 0.49), jitclass lives in
# numba.experimental; adjust this import accordingly.
from numba import jitclass, jit
from numba import int64, float64

# Type spec prepared for numba's jitclass (not applied below;
# the class remains plain Python)
spec = OrderedDict({
    'input_nodes': int64,
    'hidden_nodes': int64,
    'output_nodes': int64,
    'weights_input_to_hidden': float64[:, :],
    'weights_hidden_to_output': float64[:, :],
    'lr': float64
})

class NeuralNetwork(object):
    
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        
        # Deterministic initialization (every weight starts at 0.1);
        # the usual random initialization is commented out below.
        self.weights_input_to_hidden = np.ones((self.input_nodes, self.hidden_nodes)) / 10
        self.weights_hidden_to_output = np.ones((self.hidden_nodes, self.output_nodes)) / 10

        # Initialize weights
#         self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
#                                                         (self.input_nodes, self.hidden_nodes))
#         self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
#                                                          (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate
                    
    def __repr__(self):
        return '<NeuralNetwork: {:,} -> {:,} -> {:,}; lr: {:}>'.format(
            self.input_nodes, self.hidden_nodes, self.output_nodes, self.lr
        )
    
    def activation_function(self, x):
        return 1 / (1 + np.exp(-x))
        
    def train(self, features, targets):
        ''' Train the network on a batch of features and targets. 
        
            Arguments
            ---------
            
            features: 2D array, each row is one data record,
                      each column is a feature
            targets: 2D array of target values, one row per record
        
        '''
        n_records = features.shape[0]
        
        # E.g. 4 (input) -> 2 (hidden) -> 1 (output)
        
        # (n_records, 4), (n_records, 1)
        X, y = features, targets
        
        ### Forward pass ###
        # (n_records, 1), (n_records, 2)
        final_outputs, hidden_outputs = self._run(X)
        
        ### Backward pass ###
        # (n_records, 1)
        error = y - final_outputs # Output error
        # (n_records, 1)
        output_error_term = error  # because f'(x) = 1

        # Calculate each hidden node's contribution to the error
        # (n_records, 1) @ (1, 2) = (n_records, 2)
        hidden_error = output_error_term @ self.weights_hidden_to_output.T
        # Backpropagated error terms
        # (n_records, 2) * (n_records, 2) = (n_records, 2) 
        hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)

        # Weight step (input to hidden)
        # (4, n_records) @ (n_records, 2) = (4, 2)
        delta_weights_i_h = X.T @ hidden_error_term
        # Weight step (hidden to output)
        # (2, n_records) @ (n_records, 1) = (2, 1)
        delta_weights_h_o = hidden_outputs.T @ output_error_term

        # Update the weights
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
 
    def _run(self, features):
        # Hidden layer
        # (n, 4) @ (4, 2) = (n, 2)
        hidden_inputs = features @ self.weights_input_to_hidden
        hidden_outputs = self.activation_function(hidden_inputs)
        
        # Output layer
        # (n, 2) @ (2, 1) = (n, 1)
        final_inputs = hidden_outputs @ self.weights_hidden_to_output
        # (n, 1)
        final_outputs = final_inputs  # f(x) = x
        
        return final_outputs, hidden_outputs
        
    def run(self, features):
        ''' Run a forward pass through the network with input features 
        
            Arguments
            ---------
            features: 2D array, each row is one data record
        '''
        final_outputs, _ = self._run(features)
        return final_outputs

In [36]:
inputs = np.array([[0.5, -0.2, 0.1, 0.2],
                   [0.5, -0.2, 0.1, 0.2]])
targets = np.array([[0.4], [0.4]])
network = NeuralNetwork(4, 2, 1, 0.5)
network.train(inputs, targets)

In [57]:
inputs = np.array([[1.0, 0.0], [0.0, 1.0]])
targets = np.array([[1.0], [0.0]])
network = NeuralNetwork(2, 1, 1, 0.3)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)


[[ 0.10354426]
 [ 0.09980362]]
[[ 0.17047878]]

In [65]:
inputs = np.array([[1.0, 0.0]])
targets = np.array([[1.0]])
network = NeuralNetwork(2, 1, 1, 0.3)
network.train(inputs, targets)
print(np.round(network.weights_input_to_hidden, 6))
print(np.round(network.weights_hidden_to_output, 6))
print('')
network.train(np.array([[0.0, 1.0]]), np.array([[0.0]]))
print(np.round(network.weights_input_to_hidden, 8))
print(np.round(network.weights_hidden_to_output, 6))


[[ 0.107089]
 [ 0.1     ]]
[[ 0.249226]]

[[ 0.10708853]
 [ 0.09756048]]
[[ 0.228619]]
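
The NeuralNetwork2 class below is a second, per-record implementation used to cross-check the vectorized version above: it loops over individual records and accumulates the weight steps, so the two classes should produce identical weight updates (compare the outputs of the paired test cells).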

In [59]:
class NeuralNetwork2(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Same deterministic initialization as in NeuralNetwork above
        self.weights_input_to_hidden = np.ones((self.input_nodes, self.hidden_nodes)) / 10
        self.weights_hidden_to_output = np.ones((self.hidden_nodes, self.output_nodes)) / 10
        
        # Initialize weights
#         self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, 
#                                        (self.input_nodes, self.hidden_nodes))

#         self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, 
#                                        (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate
        self.activation_function = lambda x : 1/(1 + np.exp(-x))
    
    def train(self, features, targets):
        ''' Train the network on a batch of features and targets. 
        
            Arguments
            ---------
            
            features: 2D array, each row is one data record, each column is a feature
            targets: 2D array of target values, one row per record
        
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            ### Forward pass ###
            hidden_inputs = np.dot(X, self.weights_input_to_hidden)
            hidden_outputs = self.activation_function(hidden_inputs)
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
            # since the last layer just passes on its value, we don't have to apply the sigmoid here.
            final_outputs = final_inputs
            
            ### Backward pass ###
            error = y - final_outputs
            # The derivative of the activation function y=x is 1
            output_error_term = error * 1.0
            hidden_error = np.dot(self.weights_hidden_to_output, error) 
            # Backpropagated error terms
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
            # Weight step (input to hidden)
            delta_weights_i_h += hidden_error_term * X[:,None]
            # Weight step (hidden to output)
            delta_weights_h_o += output_error_term * hidden_outputs[:,None]
        # Weights update
        self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records
        self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records
        
    def run(self, features):
        ''' Run a forward pass through the network with input features 
        
            Arguments
            ---------
            features: 2D array, each row is one data record
        '''
        # Forward pass
        hidden_inputs =  np.dot(features, self.weights_input_to_hidden)
        hidden_outputs = self.activation_function(hidden_inputs)
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
        final_outputs = final_inputs 
        return final_outputs

In [60]:
inputs = np.array([[1.0, 0.0], [0.0, 1.0]])
targets = np.array([[1.0], [0.0]])
network = NeuralNetwork2(2, 1, 1, 0.3)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)


[[ 0.10354426]
 [ 0.09980362]]
[[ 0.17047878]]

In [62]:
inputs = np.array([[1.0, 0.0]])
targets = np.array([[1.0]])
network = NeuralNetwork2(2, 1, 1, 0.3)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)
print('')
network.train(np.array([[0.0, 1.0]]), np.array([[0.0]]))
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)


[[ 0.10708853]
 [ 0.1       ]]
[[ 0.24922566]]

[[ 0.10708853]
 [ 0.09756048]]
[[ 0.22861945]]

In [14]:
def MSE(y, Y):
    return np.mean((y-Y)**2)
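
That is, $\mathrm{MSE}(y, Y) = \frac{1}{N}\sum_{i=1}^{N}(y_i - Y_i)^2$: the mean of the squared differences between predictions and targets.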

Unit tests

Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.


In [15]:
import unittest

inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
                       [0.4, 0.5],
                       [-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
                       [-0.1]])

class TestMethods(unittest.TestCase):
    
    ##########
    # Unit tests for data loading
    ##########
    
    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
        
    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))
    
    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()
        
        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output, 
                                    np.array([[ 0.37275328], 
                                              [-0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014, -0.20185996], 
                                              [0.39775194, 0.50074398], 
                                              [-0.29887597, 0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()
        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
_ = unittest.TextTestRunner().run(suite)


.....
----------------------------------------------------------------------
Ran 5 tests in 0.023s

OK

Training the network

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
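
To make the contrast concrete, here is a sketch using hypothetical names (the real loop is in the training cell below):

# Batch gradient descent: one expensive update per pass over all records
# network.train(all_features, all_targets)

# Stochastic (mini-batch) gradient descent: many cheap updates,
# each computed from a small random sample of the training data
batch = np.random.choice(n_records, size=128)
network.train(all_features[batch], all_targets[batch])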

Choose the number of iterations

This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the training data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.

Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
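
For example, the weight update above divides each accumulated step by n_records, so with batches of 128 records a learning rate of 1.3 corresponds to an effective per-record step of about 1.3/128 ≈ 0.01.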

Choose the number of hidden nodes

The more hidden nodes you have, the more accurately the model can fit the data, but only up to a point. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.


In [16]:
%%timeit -n 1 -r 1
import sys

# declare global variables because %%timeit will
# put the whole cell in a closure
global losses
global network

### Set the hyperparameters here ###
iterations = 4000
learning_rate = 1.3
hidden_nodes = 7
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}

train_features_arr = np.array(train_features)
val_features_arr = np.array(val_features)
train_targets_cnt = np.array(train_targets.cnt, ndmin=2).T
val_targets_cnt = np.array(val_targets.cnt, ndmin=2).T

for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features_arr[batch], train_targets_cnt[batch]
    
    network.train(X, y)
    
    # Printing out the training progress
    train_loss = MSE(network.run(train_features_arr), train_targets_cnt)
    val_loss = MSE(network.run(val_features_arr), val_targets_cnt)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()
    
    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
# give room for timeit result
print('\n')


Progress: 14.9% ... Training loss: 0.198 ... Validation loss: 0.338
KeyboardInterrupt (the cell was interrupted manually during training)

In [300]:
fig, ax = plt.subplots(figsize=(7,4))

ax.plot(losses['train'], label='Training loss')
ax.plot(losses['validation'], label='Validation loss')
ax.legend()
ax.set_xlabel('epoch')
ax.set_ylabel('loss')
_ = plt.ylim([0, 1])


Check out your predictions

Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.


In [306]:
fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(np.array(test_features))*std + mean
ax.plot(predictions[:,0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index, 'dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)


OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).

Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?

Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter

Your answer below

The model predicts hourly patterns pretty well. It captures the difference between day and night. But it fails to predict the Christmas/New Year holiday season, during which the number of rides dropped sharply.

We would need to add more variables to capture such monthly or yearly patterns, or feed a full year of data into the training set.