Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
The network you'll build with Keras is similar to the example that you can find in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here.
Here are the steps you'll take to build the network:
1. Load the training data.
2. Preprocess the data.
3. Build a feedforward neural network to classify traffic signs.
4. Build a convolutional neural network to classify traffic signs.
5. Evaluate the final network on testing data.
Keep an eye on the network’s accuracy over time. Once the accuracy reaches the 98% range, you can be confident that you’ve built and trained an effective model.
In [13]:
import pickle
import numpy as np
from sklearn.model_selection import train_test_split
import math
In [24]:
# TODO: Load the data here.
with open('train.p', 'rb') as f:
    data = pickle.load(f)
Split the training data into a training and validation set.
Measure the validation accuracy of the network after two training epochs.
Hint: Use the train_test_split() method from scikit-learn.
In [25]:
# TODO: Use `train_test_split` here.
X_train, X_val, y_train, y_val = train_test_split(data['features'], data['labels'], random_state=0, test_size=0.33)
In [26]:
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(X_train.shape[0] == y_train.shape[0]), "The number of images is not equal to the number of labels."
assert(X_train.shape[1:] == (32,32,3)), "The dimensions of the images are not 32 x 32 x 3."
assert(X_val.shape[0] == y_val.shape[0]), "The number of images is not equal to the number of labels."
assert(X_val.shape[1:] == (32,32,3)), "The dimensions of the images are not 32 x 32 x 3."
In [27]:
# TODO: Implement data normalization here.
# Scale pixel values from [0, 255] to [-0.5, 0.5] so the inputs are zero-centered.
X_train = X_train.astype('float32')
X_val = X_val.astype('float32')
X_train = X_train / 255 - 0.5
X_val = X_val / 255 - 0.5
In [28]:
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(math.isclose(np.min(X_train), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_train), 0.5, abs_tol=1e-5)), "The range of the training data is: %.1f to %.1f" % (np.min(X_train), np.max(X_train))
assert(math.isclose(np.min(X_val), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_val), 0.5, abs_tol=1e-5)), "The range of the validation data is: %.1f to %.1f" % (np.min(X_val), np.max(X_val))
The code you've written so far is for data processing; it isn't specific to Keras. From here on, you'll write Keras-specific code.
Build a two-layer feedforward neural network, with 128 neurons in the fully-connected hidden layer.
To get started, review the Keras documentation about models and layers.
The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use it as a guide, but keep the differences in mind: each input here is a flattened 32x32x3 image (3,072 features rather than MNIST's 784), and there are 43 classes rather than 10.
In [19]:
# TODO: Build a two-layer feedforward neural network with Keras here.
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(32*32*3,)))
model.add(Dense(43, activation='softmax'))
In [20]:
# STOP: Do not change the tests below. Your implementation should pass these tests.
dense_layers = []
for l in model.layers:
    if type(l) == Dense:
        dense_layers.append(l)
assert(len(dense_layers) == 2), "There should be 2 Dense layers."
d1 = dense_layers[0]
d2 = dense_layers[1]
assert(d1.input_shape == (None, 3072))
assert(d1.output_shape == (None, 128))
assert(d2.input_shape == (None, 128))
assert(d2.output_shape == (None, 43))
last_layer = model.layers[-1]
assert(last_layer.activation.__name__ == 'softmax'), "Last layer should be softmax activation, is {}.".format(last_layer.activation.__name__)
In [21]:
# Debugging
for l in model.layers:
    print(l.name, l.input_shape, l.output_shape, l.activation)
Compile and train the network for 2 epochs. Use the adam optimizer, with categorical_crossentropy loss.
Hint 1: In order to use categorical cross entropy, you will need to one-hot encode the labels. With 43 classes, for example, label 2 becomes a 43-element vector with a 1 at index 2 and 0s everywhere else.
Hint 2: In order to pass the input images to the fully-connected hidden layer, you will need to reshape each image into a flat vector.
Hint 3: Keras's .fit() method returns a History object whose history attribute holds the per-epoch metrics that the tests below read. Save the return value to a variable named history.
In [22]:
# TODO: Compile and train the model here.
from keras.utils import np_utils

# One-hot encode the labels for categorical cross entropy.
Y_train = np_utils.to_categorical(y_train, 43)
Y_val = np_utils.to_categorical(y_val, 43)

# Flatten each 32x32x3 image into a 3072-element vector.
X_train_flat = X_train.reshape(-1, 32*32*3)
X_val_flat = X_val.reshape(-1, 32*32*3)

model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train_flat, Y_train,
                    batch_size=128, nb_epoch=20,
                    verbose=1, validation_data=(X_val_flat, Y_val))
In [23]:
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['acc'][-1] > 0.92), "The training accuracy was: %.3f" % history.history['acc'][-1]
assert(history.history['val_acc'][-1] > 0.9), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]
Validation Accuracy: (fill in here)
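As an optional check, you can plot the full training curves stored in history.history. A minimal sketch, assuming matplotlib is available (note the Keras 1.x metric keys 'acc' and 'val_acc', the same keys the tests above read):

import matplotlib.pyplot as plt

# history.history maps each metric name to a list of per-epoch values.
plt.plot(history.history['acc'], label='training accuracy')
plt.plot(history.history['val_acc'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()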
Build a new network, similar to your existing network. Before the hidden layer, add a 3x3 convolutional layer with 32 filters and valid padding.
Then compile and train the network.
Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.
Hint 2: Now that the first layer of the network is a convolutional layer, you no longer need to reshape the input images before passing them to the network. You might need to reload your training data to recover the original shape, as sketched below.
Hint 3: Add a Flatten() layer between the convolutional layer and the fully-connected hidden layer.
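If your training arrays no longer have the original (32, 32, 3) image shape, here is a minimal reload-and-renormalize sketch that simply mirrors the preprocessing cells above (the cell below assumes this has been done):

# Reload the pickled data, then re-split and re-normalize
# exactly as in the earlier preprocessing cells.
with open('train.p', 'rb') as f:
    data = pickle.load(f)
X_train, X_val, y_train, y_val = train_test_split(
    data['features'], data['labels'], random_state=0, test_size=0.33)
X_train = X_train.astype('float32') / 255 - 0.5
X_val = X_val.astype('float32') / 255 - 0.5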
In [29]:
# TODO: Re-construct the network and add a convolutional layer before the first fully-connected layer.
# NOTE: RELOAD DATA & NORMALIZE BEFORE RUNNING
from keras.layers import Conv2D, Flatten
Y_train = np_utils.to_categorical(y_train, 43)
Y_val = np_utils.to_categorical(y_val, 43)
model = Sequential()
model.add(Conv2D(32, 3, 3, input_shape=(32, 32, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(43, activation='softmax'))
model.summary()
# TODO: Compile and train the model here.
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train, Y_train,
                    batch_size=128, nb_epoch=20,
                    verbose=1, validation_data=(X_val, Y_val))
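For reference: with valid padding, a 3x3 convolution over a 32x32 input yields a 30x30 output (32 - 3 + 1 = 30) with 32 channels, so Flatten() hands 30 * 30 * 32 = 28,800 values to the 128-neuron hidden layer. model.summary() should confirm these shapes.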
In [31]:
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['val_acc'][-1] > 0.93), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]
Validation Accuracy: (fill in here)
Re-construct your network and add a 2x2 pooling layer immediately following your convolutional layer.
Then compile and train the network.
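As a quick sanity check on the shapes: 2x2 max pooling halves each spatial dimension, so the 30x30x32 convolution output shrinks to 15x15x32, and the flattened vector drops from 28,800 to 7,200 values, cutting the parameter count of the first fully-connected layer roughly fourfold.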
In [32]:
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
from keras.layers import Conv2D, Flatten, MaxPooling2D, Activation
Y_train = np_utils.to_categorical(y_train, 43)
Y_val = np_utils.to_categorical(y_val, 43)
model = Sequential()
model.add(Conv2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2,2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(43, activation='softmax'))
model.summary()
# TODO: Compile and train the model here.
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train, Y_train,
                    batch_size=128, nb_epoch=20,
                    verbose=1, validation_data=(X_val, Y_val))
In [33]:
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['val_acc'][-1] > 0.93), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]
Validation Accuracy: (fill in here)
Re-construct your network and add dropout after the pooling layer. Set the dropout rate to 50%.
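Note that Dropout randomly zeroes the given fraction of activations during training only; Keras disables it automatically at evaluation time, so the validation and test passes need no changes.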
In [34]:
# TODO: Re-construct the network and add dropout after the pooling layer.
from keras.layers import Dropout
model = Sequential()
model.add(Conv2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2,2)))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(43, activation='softmax'))
model.summary()
# TODO: Compile and train the model here.
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train, Y_train,
                    batch_size=128, nb_epoch=20,
                    verbose=1, validation_data=(X_val, Y_val))
In [35]:
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['val_acc'][-1] > 0.93), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]
Validation Accuracy: (fill in here)
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, try different padding or batch sizes, add regularization, or train for more epochs.
What is the best validation accuracy you can achieve?
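If you'd like a starting point for experimenting, here is a minimal sketch of one deeper variant. It reuses the layers imported above; the second convolutional block and the specific filter counts and dropout rates are arbitrary, untuned choices, not part of the assignment:

# A sketch of one possible deeper model: two convolution/pooling/dropout
# blocks followed by the same fully-connected head as before.
model = Sequential()
model.add(Conv2D(32, 3, 3, activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(43, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=20,
                    verbose=1, validation_data=(X_val, Y_val))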
In [ ]:
Best Validation Accuracy: (fill in here)
Once you've picked out your best model, it's time to test it.
Load up the test data and use the evaluate() method to see how well it does.
Hint 1: The evaluate() method returns a list of numbers: the loss followed by each metric. Use the model's metrics_names attribute to get the matching labels.
In [36]:
# TODO: Load test data
with open('./test.p', mode='rb') as f:
    test = pickle.load(f)

# TODO: Preprocess data & one-hot encode the labels
X_test = test['features']
y_test = test['labels']
X_test = X_test.astype('float32')
X_test /= 255
X_test -= 0.5
Y_test = np_utils.to_categorical(y_test, 43)

# TODO: Evaluate model on test data
# Print each metric next to its label, per Hint 1.
metrics = model.evaluate(X_test, Y_test)
for name, value in zip(model.metrics_names, metrics):
    print(name, value)
Test Accuracy: (fill in here)