What's this TensorFlow business?

You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.

For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook)

What is it?

TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropogation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray.

Why?

  • Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
  • We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
  • We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
  • We want you to be exposed to the sort of deep learning code you might run into in academia or industry.

How will I learn TensorFlow?

TensorFlow has many excellent tutorials available, including those from Google themselves.

Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.

Load Datasets


In [1]:
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline

In [2]:
from cs231n.data_utils import load_CIFAR10

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.  
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # Subsample the data
    mask = range(num_training, num_training + num_validation)
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = range(num_training)
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = range(num_test)
    X_test = X_test[mask]
    y_test = y_test[mask]

    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image

    return X_train, y_train, X_val, y_val, X_test, y_test


# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)


('Train data shape: ', (49000, 32, 32, 3))
('Train labels shape: ', (49000,))
('Validation data shape: ', (1000, 32, 32, 3))
('Validation labels shape: ', (1000,))
('Test data shape: ', (10000, 32, 32, 3))
('Test labels shape: ', (10000,))

Example Model

Some useful utilities

. Remember that our image data is initially N x H x W x C, where:

  • N is the number of datapoints
  • H is the height of each image in pixels
  • W is the height of each image in pixels
  • C is the number of channels (usually 3: R, G, B)

This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.

The example model itself

The first step to training your own model is defining its architecture.

Here's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up.

In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Hinge loss function, and the Adam optimizer being used.

Make sure you understand why the parameters of the Linear layer are 5408 and 10.

TensorFlow Details

In TensorFlow, much like in our previous notebooks, we'll first specifically initialize our variables, and then our network model.


In [3]:
# clear old variables
tf.reset_default_graph()

# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)

def simple_model(X,y):
    # define our weights (e.g. init_two_layer_convnet)
    
    # setup variables
    Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
    bconv1 = tf.get_variable("bconv1", shape=[32])
    W1 = tf.get_variable("W1", shape=[5408, 10])
    b1 = tf.get_variable("b1", shape=[10])

    # define our graph (e.g. two_layer_convnet)
    a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
    h1 = tf.nn.relu(a1)
    h1_flat = tf.reshape(h1,[-1,5408])
    y_out = tf.matmul(h1_flat,W1) + b1
    return y_out

y_out = simple_model(X,y)

# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)

# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)

TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).

Training the model on one epoch

While we have defined a graph of operations above, in order to execute TensorFlow Graphs, by feeding them input data and computing the results, we first need to create a tf.Session object. A session encapsulates the control and state of the TensorFlow runtime. For more information, see the TensorFlow Getting started guide.

Optionally we can also specify a device context such as /cpu:0 or /gpu:0. For documentation on this behavior see this TensorFlow guide

You should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below


In [4]:
def run_model(session, predict, loss_val, Xd, yd,
              epochs=1, batch_size=64, print_every=100,
              training=None, plot_losses=False):
    # have tensorflow compute accuracy
    correct_prediction = tf.equal(tf.argmax(predict,1), y)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    # shuffle indicies
    train_indicies = np.arange(Xd.shape[0])
    np.random.shuffle(train_indicies)

    training_now = training is not None
    
    # setting up variables we want to compute (and optimizing)
    # if we have a training function, add that to things we compute
    variables = [mean_loss,correct_prediction,accuracy]
    if training_now:
        variables[-1] = training
    
    # counter 
    iter_cnt = 0
    for e in range(epochs):
        # keep track of losses and accuracy
        correct = 0
        losses = []
        # make sure we iterate over the dataset once
        for i in range(int(math.ceil(Xd.shape[0]/batch_size))):
            # generate indicies for the batch
            start_idx = (i*batch_size)%Xd.shape[0]
            idx = train_indicies[start_idx:start_idx+batch_size]
            
            # create a feed dictionary for this batch
            feed_dict = {X: Xd[idx,:],
                         y: yd[idx],
                         is_training: training_now }
            # get batch size
            actual_batch_size = yd[idx].shape[0]
            
            # have tensorflow compute loss and correct predictions
            # and (if given) perform a training step
            loss, corr, _ = session.run(variables,feed_dict=feed_dict)
            
            # aggregate performance stats
            losses.append(loss*actual_batch_size)
            correct += np.sum(corr)
            
            # print every now and then
            if training_now and (iter_cnt % print_every) == 0:
                print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
                      .format(iter_cnt,loss,1.0*np.sum(corr)/actual_batch_size))
            iter_cnt += 1
        total_correct = 1.0*correct/Xd.shape[0]
        total_loss = np.sum(losses)/Xd.shape[0]
        print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
              .format(total_loss,total_correct,e+1))
        if plot_losses:
            plt.plot(losses)
            plt.grid(True)
            plt.title('Epoch {} Loss'.format(e+1))
            plt.xlabel('minibatch number')
            plt.ylabel('minibatch loss')
            plt.show()
    return total_loss,total_correct

with tf.Session() as sess:
    with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0" 
        sess.run(tf.global_variables_initializer())
        print('Training')
        run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)
        print('Validation')
        run_model(sess,y_out,mean_loss,X_val,y_val,1,64)


Training
Iteration 0: with minibatch training loss = 10.7 and accuracy of 0.11
Iteration 100: with minibatch training loss = 0.864 and accuracy of 0.3
Iteration 200: with minibatch training loss = 0.702 and accuracy of 0.28
Iteration 300: with minibatch training loss = 0.749 and accuracy of 0.33
Iteration 400: with minibatch training loss = 0.533 and accuracy of 0.36
Iteration 500: with minibatch training loss = 0.599 and accuracy of 0.36
Iteration 600: with minibatch training loss = 0.519 and accuracy of 0.42
Iteration 700: with minibatch training loss = 0.458 and accuracy of 0.28
Epoch 1, Overall loss = 0.789 and accuracy of 0.307
Validation
Epoch 1, Overall loss = 0.446 and accuracy of 0.352

Training a specific model

In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.

Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:

  • 7x7 Convolutional Layer with 32 filters and stride of 1
  • ReLU Activation Layer
  • Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
  • 2x2 Max Pooling layer with a stride of 2
  • Affine layer with 1024 output units
  • ReLU Activation Layer
  • Affine layer from 1024 input units to 10 outputs

In [5]:
# clear old variables
tf.reset_default_graph()

# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)

# define model
def complex_model(X,y,is_training):
    Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
    bconv1 = tf.get_variable("bconv1", shape=[32])
    W1 = tf.get_variable("W1", shape=[5408, 1024])
    b1 = tf.get_variable("b1", shape=[1024])
    W2 = tf.get_variable("W2", shape=[1024, 10])
    b2 = tf.get_variable("b2", shape=[10])

    # define our graph (e.g. two_layer_convnet)
    a1 = tf.nn.conv2d(X, Wconv1, strides=[1,1,1,1], padding='VALID') + bconv1
    h1 = tf.nn.relu(a1)
    bn1 = tf.layers.batch_normalization(h1, training=is_training)
    p1 = tf.nn.max_pool(bn1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    p1_flat = tf.reshape(p1,[-1,5408])
    a2 = tf.matmul(p1_flat,W1) + b1
    h2 = tf.nn.relu(a2)
    y_out = tf.matmul(h2, W2) + b2
    
    return y_out

y_out = complex_model(X,y,is_training)

To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):


In [6]:
# Now we're going to feed a random batch into the model 
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
    with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
        tf.global_variables_initializer().run()

        ans = sess.run(y_out,feed_dict={X:x,is_training:True})
        %timeit sess.run(y_out,feed_dict={X:x,is_training:True})
        print(ans.shape)
        print(np.array_equal(ans.shape, np.array([64, 10])))


100 loops, best of 3: 10.6 ms per loop
(64, 10)
True

You should see the following from the run above

(64, 10)

True

GPU!

Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.


In [7]:
try:
    with tf.Session() as sess:
        with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
            tf.global_variables_initializer().run()

            ans = sess.run(y_out,feed_dict={X:x,is_training:True})
            %timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
    print("no gpu found, please use Google Cloud if you want GPU acceleration")    
    # rebuild the graph
    # trying to start a GPU throws an exception 
    # and also trashes the original graph
    tf.reset_default_graph()
    X = tf.placeholder(tf.float32, [None, 32, 32, 3])
    y = tf.placeholder(tf.int64, [None])
    is_training = tf.placeholder(tf.bool)
    y_out = complex_model(X,y,is_training)


100 loops, best of 3: 10.3 ms per loop

You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.

Train the model.

Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created provided above).

Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.

First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information


In [8]:
# Inputs
#     y_out: is what your model computes
#     y: is your TensorFlow variable with label information
# Outputs
#    mean_loss: a TensorFlow variable (scalar) with numerical loss
#    optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
mean_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=y_out))
optimizer = tf.train.RMSPropOptimizer(5e-4)
pass

In [9]:
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss)

Train the model

Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization


In [10]:
sess = tf.Session()

sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)


Training
Iteration 0: with minibatch training loss = 2.82 and accuracy of 0.14
Iteration 100: with minibatch training loss = 2.95 and accuracy of 0.36
Iteration 200: with minibatch training loss = 1.62 and accuracy of 0.5
Iteration 300: with minibatch training loss = 1.6 and accuracy of 0.56
Iteration 400: with minibatch training loss = 1.66 and accuracy of 0.53
Iteration 500: with minibatch training loss = 1.18 and accuracy of 0.59
Iteration 600: with minibatch training loss = 0.982 and accuracy of 0.62
Iteration 700: with minibatch training loss = 1.11 and accuracy of 0.67
Epoch 1, Overall loss = 1.58 and accuracy of 0.478
Out[10]:
(1.5837277603149413, 0.4777142857142857)

Check the accuracy of the model.

Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.


In [11]:
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)


Validation
Epoch 1, Overall loss = 1.15 and accuracy of 0.566
Out[11]:
(1.1458439254760742, 0.566)

Train a great model on CIFAR-10!

Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above.

Things you should try:

  • Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
  • Number of filters: Above we used 32 filters. Do more or fewer do better?
  • Pooling vs Strided Convolution: Do you use max pooling or just stride convolutions?
  • Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
  • Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
    • [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
    • [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
    • [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
  • Use TensorFlow Scope: Use TensorFlow scope and/or tf.layers to make it easier to write deeper networks. See this tutorial for how to use tf.layers.
  • Use Learning Rate Decay: As the notes point out, decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the Tensorflow documentation for learning rate decay.
  • Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture).
  • Regularization: Add l2 weight regularization, or perhaps use Dropout as in the TensorFlow MNIST tutorial

Tips for training

For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:

  • If the parameters are working well, you should see improvement within a few hundred iterations
  • Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
  • Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
  • You should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set.

Going above and beyond

If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.

  • Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
  • Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
  • Model ensembles
  • Data augmentation
  • New Architectures

If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.

What we expect

At the very least, you should be able to train a ConvNet that gets at >= 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.

You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network.

Have fun and happy training!


In [12]:
# Feel free to play with this cell

def my_model(X,y,is_training):
    initializer = tf.contrib.layers.xavier_initializer()
    regularizer = tf.contrib.layers.l2_regularizer(1e-5)
    
    conv1 = tf.layers.conv2d(X, 32, 3, 1, padding='valid', kernel_initializer=initializer, kernel_regularizer=regularizer)
    bn1 = tf.layers.batch_normalization(conv1, training=is_training)
    relu1 = tf.nn.relu(bn1)
    conv2 = tf.layers.conv2d(relu1, 32, 3, 1, padding='valid', kernel_initializer=initializer, kernel_regularizer=regularizer)
    bn2 = tf.layers.batch_normalization(conv2, training=is_training)
    relu2 = tf.nn.relu(bn2)
    pool1 = tf.layers.max_pooling2d(relu2, 2, 2, padding='valid')
    
    conv3 = tf.layers.conv2d(pool1, 32, 3, 1, padding='valid', kernel_initializer=initializer, kernel_regularizer=regularizer)
    bn3 = tf.layers.batch_normalization(conv3, training=is_training)
    relu3 = tf.nn.relu(bn3)
    conv4 = tf.layers.conv2d(relu3, 32, 3, 1, padding='valid', kernel_initializer=initializer, kernel_regularizer=regularizer)
    bn4 = tf.layers.batch_normalization(conv4, training=is_training)
    relu4 = tf.nn.relu(bn4)
    pool2 = tf.layers.max_pooling2d(relu4, 2, 2, padding='valid')
    
    flatten = tf.reshape(pool2, [-1, 5 * 5 * 32])
    fc = tf.layers.dense(flatten, 512, activation=tf.nn.relu, kernel_initializer=initializer, kernel_regularizer=regularizer)
    dropout = tf.layers.dropout(fc)
    y_out = tf.layers.dense(dropout, 10, kernel_initializer=initializer, kernel_regularizer=regularizer)
    
    return y_out

tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)

y_out = my_model(X,y,is_training)
mean_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=y_out))
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(1e-3, global_step, X_train.shape[0] / 64, .95, staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate)

pass

# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss, global_step=global_step)

In [13]:
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()

sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,20,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)


Training
Iteration 0: with minibatch training loss = 2.73 and accuracy of 0.12
Iteration 100: with minibatch training loss = 1.63 and accuracy of 0.39
Iteration 200: with minibatch training loss = 1.24 and accuracy of 0.53
Iteration 300: with minibatch training loss = 1.17 and accuracy of 0.62
Iteration 400: with minibatch training loss = 1.38 and accuracy of 0.48
Iteration 500: with minibatch training loss = 0.994 and accuracy of 0.62
Iteration 600: with minibatch training loss = 1.04 and accuracy of 0.64
Iteration 700: with minibatch training loss = 1.22 and accuracy of 0.53
Epoch 1, Overall loss = 1.31 and accuracy of 0.53
Iteration 800: with minibatch training loss = 1.25 and accuracy of 0.56
Iteration 900: with minibatch training loss = 0.978 and accuracy of 0.61
Iteration 1000: with minibatch training loss = 0.843 and accuracy of 0.67
Iteration 1100: with minibatch training loss = 1.17 and accuracy of 0.62
Iteration 1200: with minibatch training loss = 0.946 and accuracy of 0.66
Iteration 1300: with minibatch training loss = 0.92 and accuracy of 0.66
Iteration 1400: with minibatch training loss = 0.849 and accuracy of 0.73
Iteration 1500: with minibatch training loss = 0.645 and accuracy of 0.8
Epoch 2, Overall loss = 0.909 and accuracy of 0.677
Iteration 1600: with minibatch training loss = 0.797 and accuracy of 0.73
Iteration 1700: with minibatch training loss = 0.62 and accuracy of 0.75
Iteration 1800: with minibatch training loss = 0.558 and accuracy of 0.78
Iteration 1900: with minibatch training loss = 0.665 and accuracy of 0.78
Iteration 2000: with minibatch training loss = 0.684 and accuracy of 0.78
Iteration 2100: with minibatch training loss = 0.893 and accuracy of 0.67
Iteration 2200: with minibatch training loss = 0.691 and accuracy of 0.7
Epoch 3, Overall loss = 0.74 and accuracy of 0.738
Iteration 2300: with minibatch training loss = 0.74 and accuracy of 0.73
Iteration 2400: with minibatch training loss = 0.562 and accuracy of 0.8
Iteration 2500: with minibatch training loss = 0.578 and accuracy of 0.73
Iteration 2600: with minibatch training loss = 0.688 and accuracy of 0.73
Iteration 2700: with minibatch training loss = 0.545 and accuracy of 0.8
Iteration 2800: with minibatch training loss = 0.498 and accuracy of 0.86
Iteration 2900: with minibatch training loss = 0.478 and accuracy of 0.88
Iteration 3000: with minibatch training loss = 0.625 and accuracy of 0.78
Epoch 4, Overall loss = 0.626 and accuracy of 0.78
Iteration 3100: with minibatch training loss = 0.424 and accuracy of 0.83
Iteration 3200: with minibatch training loss = 0.547 and accuracy of 0.81
Iteration 3300: with minibatch training loss = 0.449 and accuracy of 0.88
Iteration 3400: with minibatch training loss = 0.515 and accuracy of 0.77
Iteration 3500: with minibatch training loss = 0.639 and accuracy of 0.8
Iteration 3600: with minibatch training loss = 0.531 and accuracy of 0.81
Iteration 3700: with minibatch training loss = 0.758 and accuracy of 0.77
Iteration 3800: with minibatch training loss = 0.577 and accuracy of 0.75
Epoch 5, Overall loss = 0.535 and accuracy of 0.814
Iteration 3900: with minibatch training loss = 0.761 and accuracy of 0.72
Iteration 4000: with minibatch training loss = 0.471 and accuracy of 0.88
Iteration 4100: with minibatch training loss = 0.416 and accuracy of 0.89
Iteration 4200: with minibatch training loss = 0.73 and accuracy of 0.77
Iteration 4300: with minibatch training loss = 0.372 and accuracy of 0.91
Iteration 4400: with minibatch training loss = 0.673 and accuracy of 0.75
Iteration 4500: with minibatch training loss = 0.417 and accuracy of 0.88
Epoch 6, Overall loss = 0.452 and accuracy of 0.844
Iteration 4600: with minibatch training loss = 0.482 and accuracy of 0.84
Iteration 4700: with minibatch training loss = 0.389 and accuracy of 0.84
Iteration 4800: with minibatch training loss = 0.306 and accuracy of 0.92
Iteration 4900: with minibatch training loss = 0.377 and accuracy of 0.86
Iteration 5000: with minibatch training loss = 0.36 and accuracy of 0.89
Iteration 5100: with minibatch training loss = 0.318 and accuracy of 0.89
Iteration 5200: with minibatch training loss = 0.361 and accuracy of 0.89
Iteration 5300: with minibatch training loss = 0.386 and accuracy of 0.88
Epoch 7, Overall loss = 0.381 and accuracy of 0.871
Iteration 5400: with minibatch training loss = 0.282 and accuracy of 0.88
Iteration 5500: with minibatch training loss = 0.407 and accuracy of 0.86
Iteration 5600: with minibatch training loss = 0.267 and accuracy of 0.95
Iteration 5700: with minibatch training loss = 0.281 and accuracy of 0.89
Iteration 5800: with minibatch training loss = 0.267 and accuracy of 0.91
Iteration 5900: with minibatch training loss = 0.236 and accuracy of 0.94
Iteration 6000: with minibatch training loss = 0.294 and accuracy of 0.91
Iteration 6100: with minibatch training loss = 0.292 and accuracy of 0.88
Epoch 8, Overall loss = 0.319 and accuracy of 0.893
Iteration 6200: with minibatch training loss = 0.259 and accuracy of 0.94
Iteration 6300: with minibatch training loss = 0.467 and accuracy of 0.89
Iteration 6400: with minibatch training loss = 0.235 and accuracy of 0.92
Iteration 6500: with minibatch training loss = 0.311 and accuracy of 0.88
Iteration 6600: with minibatch training loss = 0.293 and accuracy of 0.92
Iteration 6700: with minibatch training loss = 0.142 and accuracy of 0.95
Iteration 6800: with minibatch training loss = 0.275 and accuracy of 0.89
Epoch 9, Overall loss = 0.263 and accuracy of 0.915
Iteration 6900: with minibatch training loss = 0.162 and accuracy of 0.97
Iteration 7000: with minibatch training loss = 0.398 and accuracy of 0.86
Iteration 7100: with minibatch training loss = 0.167 and accuracy of 0.94
Iteration 7200: with minibatch training loss = 0.227 and accuracy of 0.92
Iteration 7300: with minibatch training loss = 0.131 and accuracy of 0.94
Iteration 7400: with minibatch training loss = 0.22 and accuracy of 0.94
Iteration 7500: with minibatch training loss = 0.145 and accuracy of 0.95
Iteration 7600: with minibatch training loss = 0.223 and accuracy of 0.92
Epoch 10, Overall loss = 0.214 and accuracy of 0.932
Iteration 7700: with minibatch training loss = 0.261 and accuracy of 0.92
Iteration 7800: with minibatch training loss = 0.177 and accuracy of 0.92
Iteration 7900: with minibatch training loss = 0.109 and accuracy of 1
Iteration 8000: with minibatch training loss = 0.179 and accuracy of 0.95
Iteration 8100: with minibatch training loss = 0.116 and accuracy of 0.98
Iteration 8200: with minibatch training loss = 0.25 and accuracy of 0.92
Iteration 8300: with minibatch training loss = 0.237 and accuracy of 0.91
Iteration 8400: with minibatch training loss = 0.176 and accuracy of 0.98
Epoch 11, Overall loss = 0.176 and accuracy of 0.944
Iteration 8500: with minibatch training loss = 0.176 and accuracy of 0.94
Iteration 8600: with minibatch training loss = 0.15 and accuracy of 0.94
Iteration 8700: with minibatch training loss = 0.24 and accuracy of 0.92
Iteration 8800: with minibatch training loss = 0.173 and accuracy of 0.97
Iteration 8900: with minibatch training loss = 0.137 and accuracy of 0.97
Iteration 9000: with minibatch training loss = 0.121 and accuracy of 0.98
Iteration 9100: with minibatch training loss = 0.135 and accuracy of 0.98
Epoch 12, Overall loss = 0.146 and accuracy of 0.954
Iteration 9200: with minibatch training loss = 0.209 and accuracy of 0.92
Iteration 9300: with minibatch training loss = 0.101 and accuracy of 0.97
Iteration 9400: with minibatch training loss = 0.101 and accuracy of 0.95
Iteration 9500: with minibatch training loss = 0.167 and accuracy of 0.92
Iteration 9600: with minibatch training loss = 0.0596 and accuracy of 1
Iteration 9700: with minibatch training loss = 0.0955 and accuracy of 0.98
Iteration 9800: with minibatch training loss = 0.0786 and accuracy of 0.98
Iteration 9900: with minibatch training loss = 0.0646 and accuracy of 0.98
Epoch 13, Overall loss = 0.126 and accuracy of 0.96
Iteration 10000: with minibatch training loss = 0.0766 and accuracy of 0.98
Iteration 10100: with minibatch training loss = 0.097 and accuracy of 0.97
Iteration 10200: with minibatch training loss = 0.125 and accuracy of 0.97
Iteration 10300: with minibatch training loss = 0.072 and accuracy of 0.98
Iteration 10400: with minibatch training loss = 0.125 and accuracy of 0.94
Iteration 10500: with minibatch training loss = 0.134 and accuracy of 0.97
Iteration 10600: with minibatch training loss = 0.0656 and accuracy of 0.98
Iteration 10700: with minibatch training loss = 0.135 and accuracy of 0.97
Epoch 14, Overall loss = 0.111 and accuracy of 0.964
Iteration 10800: with minibatch training loss = 0.13 and accuracy of 0.95
Iteration 10900: with minibatch training loss = 0.103 and accuracy of 1
Iteration 11000: with minibatch training loss = 0.0648 and accuracy of 1
Iteration 11100: with minibatch training loss = 0.0813 and accuracy of 0.97
Iteration 11200: with minibatch training loss = 0.0559 and accuracy of 0.98
Iteration 11300: with minibatch training loss = 0.0586 and accuracy of 1
Iteration 11400: with minibatch training loss = 0.114 and accuracy of 0.97
Epoch 15, Overall loss = 0.094 and accuracy of 0.97
Iteration 11500: with minibatch training loss = 0.0802 and accuracy of 0.97
Iteration 11600: with minibatch training loss = 0.0647 and accuracy of 0.97
Iteration 11700: with minibatch training loss = 0.083 and accuracy of 0.97
Iteration 11800: with minibatch training loss = 0.0394 and accuracy of 0.98
Iteration 11900: with minibatch training loss = 0.0799 and accuracy of 0.98
Iteration 12000: with minibatch training loss = 0.0411 and accuracy of 1
Iteration 12100: with minibatch training loss = 0.0964 and accuracy of 0.95
Iteration 12200: with minibatch training loss = 0.0371 and accuracy of 1
Epoch 16, Overall loss = 0.0746 and accuracy of 0.977
Iteration 12300: with minibatch training loss = 0.0645 and accuracy of 0.97
Iteration 12400: with minibatch training loss = 0.0672 and accuracy of 0.98
Iteration 12500: with minibatch training loss = 0.0941 and accuracy of 0.95
Iteration 12600: with minibatch training loss = 0.0321 and accuracy of 0.98
Iteration 12700: with minibatch training loss = 0.021 and accuracy of 1
Iteration 12800: with minibatch training loss = 0.036 and accuracy of 1
Iteration 12900: with minibatch training loss = 0.0368 and accuracy of 0.98
Iteration 13000: with minibatch training loss = 0.0186 and accuracy of 1
Epoch 17, Overall loss = 0.0568 and accuracy of 0.984
Iteration 13100: with minibatch training loss = 0.0456 and accuracy of 0.98
Iteration 13200: with minibatch training loss = 0.0412 and accuracy of 1
Iteration 13300: with minibatch training loss = 0.0395 and accuracy of 0.98
Iteration 13400: with minibatch training loss = 0.0259 and accuracy of 1
Iteration 13500: with minibatch training loss = 0.024 and accuracy of 1
Iteration 13600: with minibatch training loss = 0.0423 and accuracy of 0.97
Iteration 13700: with minibatch training loss = 0.0303 and accuracy of 1
Epoch 18, Overall loss = 0.0426 and accuracy of 0.989
Iteration 13800: with minibatch training loss = 0.0329 and accuracy of 1
Iteration 13900: with minibatch training loss = 0.0395 and accuracy of 0.98
Iteration 14000: with minibatch training loss = 0.0202 and accuracy of 1
Iteration 14100: with minibatch training loss = 0.0277 and accuracy of 1
Iteration 14200: with minibatch training loss = 0.0412 and accuracy of 1
Iteration 14300: with minibatch training loss = 0.013 and accuracy of 1
Iteration 14400: with minibatch training loss = 0.0262 and accuracy of 1
Iteration 14500: with minibatch training loss = 0.0243 and accuracy of 1
Epoch 19, Overall loss = 0.0331 and accuracy of 0.992
Iteration 14600: with minibatch training loss = 0.0641 and accuracy of 0.97
Iteration 14700: with minibatch training loss = 0.0286 and accuracy of 0.98
Iteration 14800: with minibatch training loss = 0.0399 and accuracy of 1
Iteration 14900: with minibatch training loss = 0.00911 and accuracy of 1
Iteration 15000: with minibatch training loss = 0.0201 and accuracy of 1
Iteration 15100: with minibatch training loss = 0.011 and accuracy of 1
Iteration 15200: with minibatch training loss = 0.0476 and accuracy of 0.98
Epoch 20, Overall loss = 0.0261 and accuracy of 0.994
Validation
Epoch 1, Overall loss = 1.24 and accuracy of 0.755
Out[13]:
(1.2356742324829102, 0.755)

In [14]:
# Test your model here, and make sure 
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)


Training
Epoch 1, Overall loss = 0.147 and accuracy of 0.948
Validation
Epoch 1, Overall loss = 1.21 and accuracy of 0.757
Out[14]:
(1.2116215324401856, 0.757)

Describe what you did here

In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network

Vanilla ConvNet architecture of [conv-relu-conv-relu-pool]x2 -> [affine]x2 -> [softmax]. In this architecture, the szie of filters utilizing in convolution is as small as 3, and in module [conv-relu-conv-relu-pool], stacking two 3x3 filters instead of one 5x5 filter with same receptive field tends to lower the quantity of parameters. And regularization strategy for such deep neuron network is L2 regularizer for each trainable weight in filters and dropout after the first fully connected layer. Also applying exponential learning rate decay strategy. Train the CNN for 10 epochs and plot the loss with respected to iteration numbers, you will observe that the training accuracy is continuously rising, so it's reasonable to increase the epoch number up to 20, finally you are promised to see better performance of the model.

Test Set - Do this only once

Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.


In [15]:
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)


Test
Epoch 1, Overall loss = 1.43 and accuracy of 0.743
Out[15]:
(1.426207932472229, 0.7432)

Going further with TensorFlow

The next assignment will make heavy use of TensorFlow. You might also find it useful for your projects.

Extra Credit Description

If you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable.