You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook).
TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors, which are n-dimensional arrays analogous to the numpy ndarray.
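As a minimal, self-contained sketch (not part of the assignment code), here is how a tiny graph is built and then evaluated, together with a gradient that TensorFlow derives automatically; the names a, w, and loss are purely illustrative:

import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[3])     # a 1-D Tensor, much like a numpy array of shape (3,)
w = tf.Variable([1.0, 2.0, 3.0])              # a Variable that TensorFlow can differentiate through
loss = tf.reduce_sum(tf.square(a - w))        # a scalar node in the computational graph
grad_w = tf.gradients(loss, w)[0]             # backpropagation is built in

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([loss, grad_w], feed_dict={a: [1.0, 1.0, 1.0]}))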
TensorFlow has many excellent tutorials available, including those from Google themselves.
Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
In [1]:
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
In [2]:
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # Subsample the data
    mask = range(num_training, num_training + num_validation)
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = range(num_training)
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = range(num_test)
    X_test = X_test[mask]
    y_test = y_test[mask]

    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image

    return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
Remember that our image data is initially N x H x W x C, where N is the number of datapoints, H and W are the height and width of each image in pixels, and C is the number of channels (usually 3: R, G, B).
This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.
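For instance, here is a hedged sketch of that flattening step (the names images and images_flat are illustrative, not part of the assignment code):

images = tf.placeholder(tf.float32, [None, 32, 32, 3])   # N x H x W x C, the layout convolutions expect
images_flat = tf.reshape(images, [-1, 32 * 32 * 3])      # N x 3072: one flat vector per example for affine layers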
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up.
In that example, you see a 2D convolution (tf.nn.conv2d), a ReLU activation (tf.nn.relu), and a fully-connected (affine) layer implemented with tf.matmul. You also see the hinge loss function (tf.losses.hinge_loss) and the Adam optimizer (tf.train.AdamOptimizer) being used.
Make sure you understand why the parameters of the fully-connected layer are 5408 and 10.
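As a quick sanity check (illustrative arithmetic only, based on the 7x7, stride-2, VALID-padding convolution used in the example below):

# A 7x7 filter slid with stride 2 and VALID padding over a 32x32 input
# visits floor((32 - 7) / 2) + 1 = 13 positions per spatial dimension,
# and there are 32 filters, so the flattened activation has 13*13*32 entries.
h_out = (32 - 7) // 2 + 1
print(h_out * h_out * 32)   # 5408; the 10 outputs correspond to the 10 CIFAR-10 classes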
In TensorFlow, much like in our previous notebooks, we'll first explicitly set up our input placeholders and variables, and then define our network model.
In [3]:
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X,y):
    # define our weights (e.g. init_two_layer_convnet)
    # setup variables
    Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
    bconv1 = tf.get_variable("bconv1", shape=[32])
    W1 = tf.get_variable("W1", shape=[5408, 10])
    b1 = tf.get_variable("b1", shape=[10])

    # define our graph (e.g. two_layer_convnet)
    a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
    h1 = tf.nn.relu(a1)
    h1_flat = tf.reshape(h1,[-1,5408])
    y_out = tf.matmul(h1_flat,W1) + b1
    return y_out
y_out = simple_model(X,y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
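As a preview (a standalone sketch, not the graph trained in this notebook), here is roughly how the same two-layer convnet could be written with the higher-level tf.layers, tf.losses, and tf.train APIs; the names conv, flat, and logits are illustrative:

conv = tf.layers.conv2d(X, filters=32, kernel_size=7, strides=2, padding='valid',
                        activation=tf.nn.relu)
flat = tf.contrib.layers.flatten(conv)
logits = tf.layers.dense(flat, units=10)
loss = tf.losses.softmax_cross_entropy(onehot_labels=tf.one_hot(y, 10), logits=logits)
train_op = tf.train.RMSPropOptimizer(1e-3).minimize(loss)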
While we have defined a graph of operations above, in order to execute a TensorFlow graph (by feeding it input data and computing the results) we first need to create a tf.Session object. A session encapsulates the control and state of the TensorFlow runtime. For more information, see the TensorFlow Getting Started guide.
Optionally, we can also specify a device context such as /cpu:0 or /gpu:0. For documentation on this behavior, see this TensorFlow guide.
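For example, here is a minimal sketch of this pattern (using the simple_model graph defined above and a small slice of the training data purely for illustration):

with tf.Session() as sess:
    with tf.device("/cpu:0"):   # or "/gpu:0" if a GPU is available
        sess.run(tf.global_variables_initializer())
        # feed a small batch through the graph and compute the mean hinge loss
        batch_loss = sess.run(mean_loss, feed_dict={X: X_train[:64], y: y_train[:64]})
        print(batch_loss)

The run_model function below wraps this same pattern into a full training and evaluation loop.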
You should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below.
In [4]:
def run_model(session, predict, loss_val, Xd, yd,
              epochs=1, batch_size=64, print_every=100,
              training=None, plot_losses=False):
    # have tensorflow compute accuracy
    correct_prediction = tf.equal(tf.argmax(predict,1), y)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # shuffle the indices
    train_indices = np.arange(Xd.shape[0])
    np.random.shuffle(train_indices)

    training_now = training is not None

    # setting up variables we want to compute (and optimizing)
    # if we have a training function, add that to things we compute
    variables = [mean_loss, correct_prediction, accuracy]
    if training_now:
        variables[-1] = training

    # counter
    iter_cnt = 0
    for e in range(epochs):
        # keep track of losses and accuracy
        correct = 0
        losses = []
        # make sure we iterate over the dataset once
        for i in range(int(math.ceil(Xd.shape[0]/batch_size))):
            # generate indices for the batch
            start_idx = (i*batch_size)%Xd.shape[0]
            idx = train_indices[start_idx:start_idx+batch_size]

            # create a feed dictionary for this batch
            feed_dict = {X: Xd[idx,:],
                         y: yd[idx],
                         is_training: training_now}
            # get batch size
            actual_batch_size = yd[idx].shape[0]

            # have tensorflow compute loss and correct predictions
            # and (if given) perform a training step
            loss, corr, _ = session.run(variables, feed_dict=feed_dict)

            # aggregate performance stats
            losses.append(loss*actual_batch_size)
            correct += np.sum(corr)

            # print every now and then
            if training_now and (iter_cnt % print_every) == 0:
                print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
                      .format(iter_cnt, loss, np.sum(corr)/actual_batch_size))
            iter_cnt += 1
        total_correct = correct/Xd.shape[0]
        total_loss = np.sum(losses)/Xd.shape[0]
        print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
              .format(total_loss, total_correct, e+1))
        if plot_losses:
            plt.plot(losses)
            plt.grid(True)
            plt.title('Epoch {} Loss'.format(e+1))
            plt.xlabel('minibatch number')
            plt.ylabel('minibatch loss')
            plt.show()
    return total_loss, total_correct
with tf.Session() as sess:
    with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
        sess.run(tf.global_variables_initializer())
        print('Training')
        run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)
        print('Validation')
        run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture: a 7x7 convolutional layer with 32 filters and stride 1, a ReLU activation, a spatial batch normalization layer (with trainable scale and centering), a 2x2 max pooling layer with stride 2, an affine layer with 1024 output units, a ReLU activation, and a final affine layer from 1024 input units to 10 outputs.
In [8]:
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X, y, is_training):
    Wconv = tf.get_variable('Wconv1', shape=[7, 7, 3, 32])
    bconv = tf.get_variable('bconv1', shape=[32])
    W1 = tf.get_variable('W1', shape=[5408, 1024])
    b1 = tf.get_variable('b1', shape=[1024])
    W2 = tf.get_variable('W2', shape=[1024, 10])
    b2 = tf.get_variable('b2', shape=[10])

    # 7x7 Convolutional Layer with 32 filters and stride of 1
    conv = tf.nn.conv2d(X, Wconv, strides=[1, 1, 1, 1], padding='VALID') + bconv
    # ReLU Activation Layer
    conv_relu = tf.nn.relu(conv)
    # Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
    conv_batch = tf.contrib.layers.batch_norm(
        conv_relu, center=True, trainable=True, scale=True, epsilon=1e-7,
        is_training=is_training)
    # 2x2 Max Pooling layer with a stride of 2
    conv_pool = tf.nn.max_pool(
        conv_batch, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    pool_flat = tf.reshape(conv_pool, [-1, 5408])
    # Affine layer with 1024 output units
    fc_hidden = tf.matmul(pool_flat, W1) + b1
    # ReLU Activation Layer
    fc_hidden_relu = tf.nn.relu(fc_hidden)
    # Affine layer from 1024 input units to 10 outputs
    y_out = tf.matmul(fc_hidden_relu, W2) + b2
    return y_out
y_out = complex_model(X,y,is_training)
To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
In [9]:
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
    with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
        tf.global_variables_initializer().run()

        ans = sess.run(y_out,feed_dict={X:x,is_training:True})
        %timeit sess.run(y_out,feed_dict={X:x,is_training:True})
        print(ans.shape)
        print(np.array_equal(ans.shape, np.array([64, 10])))
You should see the following from the run above:
(64, 10)
True
Now we're going to try running the model on the GPU device. The rest of the code stays unchanged, and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80 ms/batch for the run above, while the Google Cloud GPUs (run below) should be around 2-5 ms/batch.
In [10]:
try:
    with tf.Session() as sess:
        with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
            tf.global_variables_initializer().run()

            ans = sess.run(y_out,feed_dict={X:x,is_training:True})
            %timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
    print("no gpu found, please use Google Cloud if you want GPU acceleration")
    # rebuild the graph
    # trying to start a GPU throws an exception
    # and also trashes the original graph
    tf.reset_default_graph()
    X = tf.placeholder(tf.float32, [None, 32, 32, 3])
    y = tf.placeholder(tf.int64, [None])
    is_training = tf.placeholder(tf.bool)
    y_out = complex_model(X,y,is_training)
You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
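If you are curious where each op actually runs, one option (a purely diagnostic sketch, assuming the x, X, y_out, and is_training defined above) is to enable log_device_placement when creating the session:

config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(y_out, feed_dict={X: x, is_training: False})   # the placement of each op is printed to the console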
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created above).
Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information.
In [11]:
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
mean_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf.one_hot(y, 10), logits=y_out))
optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-3)
In [12]:
# Batch normalization in TensorFlow keeps its running-mean/variance update ops
# in the UPDATE_OPS collection; they are not run automatically, so we make the
# train step depend on them explicitly.
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss)
In [13]:
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
Out[13]:
In [14]:
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
Out[14]:
You may find the higher-level tf.layers API useful here (it is used in the cells below). For each network architecture that you try, you should tune the learning rate and regularization strength, for example with a small search loop like the sketch below.
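Here is a hedged sketch of such a learning-rate search built on run_model above; build_graph is a hypothetical helper (not defined in this notebook) that would recreate the placeholders, model, loss, and train step for a given learning rate:

for lr in [1e-2, 1e-3, 1e-4]:
    tf.reset_default_graph()
    # build_graph is hypothetical: it should return the placeholders and ops run_model needs
    X, y, is_training, y_out, mean_loss, train_step = build_graph(learning_rate=lr)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        run_model(sess, y_out, mean_loss, X_train, y_train, 1, 64, 100, train_step)
        _, val_acc = run_model(sess, y_out, mean_loss, X_val, y_val, 1, 64)
        print('lr = {}, validation accuracy = {:.3f}'.format(lr, val_acc))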
If you are feeling adventurous, there are many other features you can implement to try to improve your performance. You are not required to implement any of these; however, they would be good things to try for extra credit.
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
At the very least, you should be able to train a ConvNet that gets >= 70% accuracy on the validation set. This is just a lower bound; if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network.
Have fun and happy training!
In [5]:
# Feel free to play with this cell
CONVOLUTION_LAYERS = [
    {
        'conv_kernel_size_1': [3, 3],
        'conv_filters_1': 32,
        'conv_kernel_size_2': [3, 3],
        'conv_filters_2': 32,
        'pool_size': [2, 2],
        'pool_strides': [2, 2]
    },
    {
        'conv_kernel_size_1': [3, 3],
        'conv_filters_1': 64,
        'conv_kernel_size_2': [3, 3],
        'conv_filters_2': 64,
        'pool_size': [2, 2],
        'pool_strides': [2, 2]
    }
]
def conv_layer(x, id,
               conv1_kernel_size, conv1_filters,
               conv2_kernel_size, conv2_filters,
               pool_size, pool_strides, is_training=False):
    with tf.variable_scope('conv_' + str(id)):
        conv1 = tf.layers.conv2d(
            inputs=x,
            filters=conv1_filters,
            kernel_size=conv1_kernel_size,
            padding='same',
            activation=tf.nn.relu)
        conv_batch = tf.contrib.layers.batch_norm(
            conv1, center=True, trainable=True, scale=True, epsilon=1e-7,
            is_training=is_training)
        conv2 = tf.layers.conv2d(
            inputs=conv_batch,
            filters=conv2_filters,
            kernel_size=conv2_kernel_size,
            padding='same',
            activation=tf.nn.relu)
        conv2_batch = tf.contrib.layers.batch_norm(
            conv2, center=True, trainable=True, scale=True, epsilon=1e-7,
            is_training=is_training)
        pool = tf.layers.max_pooling2d(
            inputs=conv2_batch, pool_size=pool_size, strides=pool_strides)
    return pool
def my_model(X, y, is_training=False, fc_size=1024, convolution_layers=CONVOLUTION_LAYERS):
    current_layer = X
    for i, layer_desc in enumerate(convolution_layers):
        current_layer = conv_layer(
            current_layer, i, layer_desc['conv_kernel_size_1'], layer_desc['conv_filters_1'],
            layer_desc['conv_kernel_size_2'], layer_desc['conv_filters_2'],
            layer_desc['pool_size'], layer_desc['pool_strides'], is_training=is_training)
    flatten_layer = tf.contrib.layers.flatten(inputs=current_layer)
    with tf.variable_scope('fc_1'):
        fc_layer = tf.layers.dense(inputs=flatten_layer, units=fc_size, activation=tf.nn.relu)
        dropout_layer = tf.layers.dropout(
            inputs=fc_layer, rate=0.5, training=is_training)
    logits = tf.layers.dense(inputs=dropout_layer, units=10)
    return logits
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = my_model(X,y,is_training=is_training)
mean_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf.one_hot(y, 10), logits=y_out))
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss)
In [6]:
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,10,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
Out[6]:
In [7]:
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
Out[7]:
Simple model with the following architecture:
[conv-relu-batchnorm-conv-relu-batchnorm-pool]x2 -> [affine-relu] -> [dropout] -> [affine] -> [softmax]
In [15]:
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
Out[15]: