In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
In [1]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
In [2]:
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
In [3]:
rides.head()
Out[3]:
This dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
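As a quick sanity check (not part of the project template, but the relationship should hold for this dataset), we can confirm that cnt really is the sum of the casual and registered columns:

# Sanity check: cnt should equal casual + registered for every row
assert (rides['casual'] + rides['registered'] == rides['cnt']).all()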
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
In [4]:
rides[:24*10].plot(x='dteday', y='cnt')
Out[4]:
In [5]:
# Convert each categorical field into binary dummy (one-hot) columns
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

# Drop the original categorical columns, plus fields we won't use
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
                  'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Out[5]:
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
In [6]:
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
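Since the scalings are saved, converting a standardized column back to its original units is a one-line inverse transform. A minimal sketch (the prediction plot near the end of the notebook does exactly this for the cnt column):

# Undo the standardization for a column
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean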
In [7]:
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
In [8]:
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
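To make the hint concrete, here is a minimal sketch of the two derivatives the backward pass relies on (the function names sigmoid_prime and identity_prime are illustrative, not part of the project template):

import numpy as np

def sigmoid(x):
    # Hidden-layer activation
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid, expressed in terms of its own output
    s = sigmoid(x)
    return s * (1 - s)

def identity_prime(x):
    # f(x) = x has slope 1 everywhere, so the output error term is just the error
    return np.ones_like(x)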
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
In [9]:
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
                                                        (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                         (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        # Sigmoid activation for the hidden layer
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))

    def train(self, features, targets):
        ''' Train the network on batch of features and targets.

            Arguments
            ---------
            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            ### Forward pass ###
            # Hidden layer
            hidden_inputs = np.dot(X, self.weights_input_to_hidden)
            hidden_outputs = self.activation_function(hidden_inputs)

            # Output layer
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
            final_outputs = final_inputs  # identity activation: f(x) = x

            ### Backward pass ###
            # Output error
            error = y - final_outputs

            # Calculate the hidden layer's contribution to the error
            hidden_error = np.dot(self.weights_hidden_to_output, error)

            # Backpropagated error terms
            output_error_term = error  # f'(x) = 1
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)

            # Weight step (input to hidden)
            delta_weights_i_h += hidden_error_term * X[:, None]
            # Weight step (hidden to output)
            delta_weights_h_o += output_error_term * hidden_outputs[:, None]

        # Update the weights, averaging the accumulated steps over the batch
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records

    def run(self, features):
        ''' Run a forward pass through the network with input features

            Arguments
            ---------
            features: 1D array of feature values
        '''
        #### Forward pass ####
        # Hidden layer
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)
        hidden_outputs = self.activation_function(hidden_inputs)

        # Output layer
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
        final_outputs = final_inputs  # identity activation: f(x) = x

        return final_outputs
In [10]:
def MSE(y, Y):
    return np.mean((y - Y)**2)
In [11]:
import unittest

inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
                       [0.4, 0.5],
                       [-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
                       [-0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[ 0.37275328],
                                              [-0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014, -0.20185996],
                                              [ 0.39775194,  0.50074398],
                                              [-0.29887597,  0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Out[11]:
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
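In this notebook, SGD amounts to sampling a random mini-batch of row indices on every iteration. A minimal sketch of that sampling step, matching the training loops below (it assumes a network object already created as in those cells):

# One SGD iteration: train on a random mini-batch instead of the full training set
batch = np.random.choice(train_features.index, size=128)
X = train_features.loc[batch].values
y = train_targets.loc[batch]['cnt']
network.train(X, y)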
The number of iterations is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the training data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
The learning rate scales the size of the weight updates. If it is too big, the weights tend to explode and the network fails to fit the data. A good starting choice is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps in the weight updates, and the longer it takes the neural network to converge.
The number of hidden nodes controls the model's capacity: up to a point, more hidden nodes make for more accurate predictions. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction the learning can take. The trick is to find the right balance in the number of hidden units you choose.
My first submission used 1500 iterations, a learning rate of 2, and 6 hidden nodes, all of which seemed suspicious. But since they were giving results similar to the ~0.70 and ~1.4 training and validation losses other students were reporting in the project channels, and since trying slightly different values (like 10 hidden nodes) didn't change the results much, I assumed this was sufficient for this small data set.
The review feedback I got indicated that I just hadn't searched a large enough space of possible settings. So I wrote the code below to loop through a wide range of parameter settings and try all the combinations, similar to a grid search but with specific values chosen to narrow down ranges worth exploring further. I also added a naive over-fitting detector; it isn't meant to be bulletproof, just to short-circuit combinations that didn't look promising so the search took less time to execute.
In [101]:
def is_overfitting(train_losses, val_losses):
    ''' Does a rough check to see if the losses (sampled at regular intervals) indicate over-fitting is happening.
        This is done by checking whether the average of the last three values is greater than the average of the
        previous three (averaging so a one-time random blip doesn't trigger it), for both val_losses and the gap
        between train_losses and val_losses.
        I don't know if that's a good heuristic or not; I'm just trying to avoid training for a configuration
        of hyperparameters that appears to be trending poorly and should be abandoned. '''
    if len(val_losses) < 6:
        return False

    val_losses = np.array(val_losses)
    last_val_losses_avg = val_losses[-3:].mean()
    prev_val_losses_avg = val_losses[-6:-3].mean()
    if last_val_losses_avg > prev_val_losses_avg:
        # val losses are increasing
        return True

    train_losses = np.array(train_losses)
    last_train_losses_avg = train_losses[-3:].mean()
    prev_train_losses_avg = train_losses[-6:-3].mean()
    last_gap_avg = last_val_losses_avg - last_train_losses_avg
    prev_gap_avg = prev_val_losses_avg - prev_train_losses_avg
    if last_gap_avg > prev_gap_avg:
        # the gap between val losses and train losses is increasing
        return True

    return False
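A quick sanity check of the heuristic on made-up loss curves (the numbers are purely illustrative, not from an actual run):

# Validation loss rising over the last three checkpoints -> flagged
print(is_overfitting([0.30, 0.28, 0.26, 0.24, 0.22, 0.20],
                     [0.40, 0.40, 0.40, 0.50, 0.50, 0.50]))  # True
# Both losses still falling together, gap steady -> not flagged
print(is_overfitting([0.9, 0.8, 0.7, 0.6, 0.5, 0.4],
                     [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]))        # False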
In [102]:
import sys

### Set the hyperparameters here ###
iterations = 10000
learning_rates_list = [2, 1, 0.5, 0.1, 0.05, 0.001]
hidden_nodes_list = [5, 10, 15, 20, 25, 30, 40, 50, 75, 100]
output_nodes = 1

N_i = train_features.shape[1]
for learning_rate in learning_rates_list:
    for hidden_nodes in hidden_nodes_list:
        print("iterations \t learning_rate \t hidden_nodes \t train_loss \t val_loss")
        network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
        train_losses = []
        val_losses = []
        for ii in range(iterations):
            # Go through a random batch of 128 records from the training data set
            batch = np.random.choice(train_features.index, size=128)
            X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
            network.train(X, y)

            # Check progress every 500 iterations
            if ii % 500 == 0:
                current_train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
                current_val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
                train_losses.append(current_train_loss)
                val_losses.append(current_val_loss)

                # Print the progress thus far
                output = "{} \t\t {} \t\t {} \t\t {:2.3f} \t\t {:2.3f}"
                print(output.format(ii, learning_rate, hidden_nodes, current_train_loss, current_val_loss))

                # If these settings seem to be overfitting, abandon them early
                if is_overfitting(train_losses, val_losses):
                    print("Over-fitting detected")
                    break  # move on to the next hyperparameter settings
I skimmed through the above results, looking for anything that got good training and validation losses before seeming to over-fit. Learning rates of 2 or below 0.05, and 100 hidden nodes, seemed easy to eliminate, so I focused on the remaining combinations: roughly learning rates from 0.4 to 1.2 paired with 8 to 50 hidden nodes, as encoded in the lists below.
I then changed the algorithm to try just those specific combinations. (I left iterations at 10000: since I abort when a run looks like it may be over-fitting, that also gives me a good idea of how many iterations to use with these combinations.)
In [108]:
### Set the hyperparameters here ###
iterations = 10000
learning_rates_list = [1, 1.2, 0.8, 1, 1.2, 0.8, 1, 1.2, 0.8, 1, 1.2, 0.8, 1, 1.2, 0.8,
                       0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6]
hidden_nodes_list = [8, 8, 8, 10, 10, 10, 13, 13, 13, 15, 15, 15, 17, 17, 17,
                     18, 18, 18, 20, 20, 20, 25, 25, 25, 30, 30, 30, 40, 40, 40, 50, 50, 50]
output_nodes = 1

# The two lists are paired element-wise, so they must be the same length
if len(learning_rates_list) != len(hidden_nodes_list):
    print("your configuration is wrong, you probably want to abort...")

N_i = train_features.shape[1]
for learning_rate, hidden_nodes in zip(learning_rates_list, hidden_nodes_list):
    print("iterations \t learning_rate \t hidden_nodes \t train_loss \t val_loss")
    network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
    train_losses = []
    val_losses = []
    for ii in range(iterations):
        # Go through a random batch of 128 records from the training data set
        batch = np.random.choice(train_features.index, size=128)
        X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
        network.train(X, y)

        # Check progress every 500 iterations
        if ii % 500 == 0:
            current_train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
            current_val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
            train_losses.append(current_train_loss)
            val_losses.append(current_val_loss)

            # Print the progress thus far
            output = "{} \t\t {} \t\t {} \t\t {:2.3f} \t\t {:2.3f}"
            print(output.format(ii, learning_rate, hidden_nodes, current_train_loss, current_val_loss))

            # If these settings seem to be overfitting, abandon them early
            if is_overfitting(train_losses, val_losses):
                print("Over-fitting detected")
                break  # move on to the next hyperparameter settings
There's a lot of noise in those results (due to the random nature of the weights and sample selections). But the general indication is that optimal results are training losses around 0.060 and validation losses of ~0.135, and these can generally be achieved across much of the tested range. The caveat is that the higher end of the hidden-node range needs more iterations without producing any better outputs (in fact it seems less likely to produce good results once past 30 nodes). Staying under ~20 hidden nodes allows the network to train well with fewer than ~5000 iterations.
Given that, I opted for settings at the lower end (see below), although trial and error with different parameter values indicates anything in that range will perform reasonably well.
In [119]:
import sys

### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.6
hidden_nodes = 10
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
In [120]:
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim([0, 2]) # limit y range just so the lower end is clearer
In [121]:
fig, ax = plt.subplots(figsize=(8,4))

# Un-standardize the predictions and targets before plotting
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
The model fits the data fairly well. Where it does poorly is over the holiday season. I haven't looked too closely at the data, but I read in the Slack channel that someone else found there's less data covering that time period, so the network hasn't trained as well on the patterns that occur during the holidays, which makes sense. Training with more holiday data should allow this to be corrected.
Also, despite further tuning the hyperparameters, the tendency to slightly over- and undershoot at the peak and low times remains. I'm looking forward to learning more advanced techniques in the course to build better deep learning networks!
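One way to probe the less-holiday-data claim (a quick sketch, not something I ran for the submission) would be to count how many training rows fall in late December; since the test set holds out the last 21 days of 2012, only the 2011 holidays remain for training:

# Rough check: how much of the training data covers late December?
dteday = pd.to_datetime(rides.loc[data.index, 'dteday'])
late_december = ((dteday.dt.month == 12) & (dteday.dt.day >= 22)).sum()
print("training rows in Dec 22-31:", late_december)
print("total training rows:", len(data))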