Deep Learning

Assignment 3

Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.

The goal of this assignment is to explore regularization techniques.


In [1]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function  # must come before any other statement
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle

First reload the data we generated in 1_notmnist.ipynb.


In [2]:
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    train_dataset = save['train_dataset']
    train_labels = save['train_labels']
    valid_dataset = save['valid_dataset']
    valid_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', train_dataset.shape, train_labels.shape)
    print('Validation set', valid_dataset.shape, valid_labels.shape)
    print('Test set', test_dataset.shape, test_labels.shape)


Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)

Reformat into a shape that's more adapted to the models we're going to train:

  • data as a flat matrix,
  • labels as float 1-hot encodings.

In [3]:
image_size = 28
num_labels = 10

def reformat(dataset, labels):
    dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
    # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
    labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)


Training set (200000, 784) (200000, 10)
Validation set (10000, 784) (10000, 10)
Test set (10000, 784) (10000, 10)
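
The label encoding relies on NumPy broadcasting: comparing a row of class ids against a column of labels yields a boolean matrix with exactly one True per row. A quick sanity check of that trick:

In [ ]:
# Demonstrate the broadcasting trick from reformat() on a tiny example.
demo_labels = np.array([1, 0, 3])
print((np.arange(4) == demo_labels[:, None]).astype(np.float32))
# -> [[ 0.  1.  0.  0.]
#     [ 1.  0.  0.  0.]
#     [ 0.  0.  0.  1.]]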

In [4]:
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

Problem 1

Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
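
For reference, tf.nn.l2_loss(t) computes sum(t**2)/2 (half the squared Frobenius norm), so the gradient of beta*l2_loss(weights) is simply beta*weights. A quick check against NumPy:

In [ ]:
# Verify that tf.nn.l2_loss matches 0.5 * sum of squares.
w = np.random.randn(3, 4).astype(np.float32)
with tf.Session() as s:
    print(s.run(tf.nn.l2_loss(tf.constant(w))), 0.5 * np.sum(w ** 2))
# the two printed values should agree (up to float precision)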



In [5]:
# fixed parameters

# Subset of the training data for faster turnaround (note: the SGD loop
# below actually cycles through the full training set via its offset).
train_subset = 50000

# minibatch size for SGD
batch_size = 512

In [6]:
# create computation graph for logistic regression with regularization
logRegGraph = tf.Graph()
with logRegGraph.as_default():

    # input data
    tf_test_dataset = tf.constant(test_dataset)
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_train_labels = tf.placeholder(tf.float32,shape=(batch_size,num_labels))
    tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size,image_size*image_size))
    
    # variables
    biases = tf.Variable(tf.zeros([num_labels]))
    weights = tf.Variable(tf.truncated_normal([image_size*image_size,num_labels]))
    
    # regularization constant
    beta = tf.constant(5e-3)
    
    # training computation.
    logits = tf.matmul(tf_train_dataset,weights)+biases
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits,tf_train_labels)) \
           + beta*tf.nn.l2_loss(weights)
  
    # optimizer
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
    # predictions for the training, validation, and test data
    train_prediction = tf.nn.softmax(logits)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)

In [7]:
num_steps = 2001

with tf.Session(graph=logRegGraph) as session:
  
    tf.initialize_all_variables().run()
    
    for step in range(num_steps):
        # offset to sample data
        offset = (step*batch_size)%(train_labels.shape[0]-batch_size)
    
        # generate a minibatch.
        batch_data = train_dataset[offset:(offset+batch_size),:]
        batch_labels = train_labels[offset:(offset+batch_size),:]
    
        # prepare dictionary with the minibatch.
        feed_dict = {tf_train_dataset:batch_data,tf_train_labels:batch_labels}
    
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
    
        if (step%500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
            print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Minibatch loss at step 0: 34.197643
Minibatch accuracy: 4.9%
Validation accuracy: 7.8%
Test accuracy: 7.0%
Minibatch loss at step 500: 1.649033
Minibatch accuracy: 82.6%
Validation accuracy: 80.1%
Test accuracy: 87.0%
Minibatch loss at step 1000: 0.747080
Minibatch accuracy: 83.6%
Validation accuracy: 82.4%
Test accuracy: 89.1%
Minibatch loss at step 1500: 0.806371
Minibatch accuracy: 81.8%
Validation accuracy: 82.1%
Test accuracy: 89.1%
Minibatch loss at step 2000: 0.702716
Minibatch accuracy: 80.3%
Validation accuracy: 82.1%
Test accuracy: 88.7%
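
To actually tune the regularization strength, a simple validation sweep works; the sketch below assumes a hypothetical helper train_and_eval(beta) that rebuilds the graph above with the given constant, runs the same training loop, and returns the final validation accuracy:

In [ ]:
# Hypothetical sweep over the regularization constant (train_and_eval is
# an assumed wrapper around the graph construction and training loop above).
best_beta, best_acc = None, 0.0
for beta_value in [1e-4, 1e-3, 5e-3, 1e-2, 1e-1]:
    acc = train_and_eval(beta_value)
    print('beta = %g: validation accuracy %.1f%%' % (beta_value, acc))
    if acc > best_acc:
        best_beta, best_acc = beta_value, acc
print('Best beta:', best_beta)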

In [8]:
batch_size = 512
num_hidden = 1024

nnGraph = tf.Graph()
with nnGraph.as_default():

    # input data
    tf_test_dataset = tf.constant(test_dataset)
    tf_valid_dataset = tf.constant(valid_dataset)    
    tf_train_labels = tf.placeholder(tf.float32,shape=(batch_size,num_labels))
    tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size,image_size*image_size))
    
    # variables.
    biases1 = tf.Variable(tf.zeros([num_hidden]))
    biases2 = tf.Variable(tf.zeros([num_labels]))
    weights1 = tf.Variable(tf.truncated_normal([image_size*image_size,num_hidden]))
    weights2 = tf.Variable(tf.truncated_normal([num_hidden,num_labels]))
    
    # regularization constants
    beta1 = tf.constant(5e-3)
    beta2 = tf.constant(5e-3)
    
    # training computation.
    tf_hidden_units = tf.nn.relu(tf.matmul(tf_train_dataset, weights1)+biases1)
    
    logits = tf.matmul(tf_hidden_units, weights2)+biases2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits,tf_train_labels)) \
           + beta1*tf.nn.l2_loss(weights1) + beta2*tf.nn.l2_loss(weights2)

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset,weights1)+biases1),
                                              weights2)+biases2)
    valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset,weights1)+biases1),
                                               weights2)+biases2)

In [9]:
num_steps = 2001

with tf.Session(graph=nnGraph) as session:
  
    tf.initialize_all_variables().run()
    print("Initialized")
  
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step*batch_size)%(train_labels.shape[0]-batch_size)
    
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset+batch_size),:]
        batch_labels = train_labels[offset:(offset+batch_size),:]
    
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset:batch_data,tf_train_labels:batch_labels}
    
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
    
        if (step%500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
            print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 1970.614990
Minibatch accuracy: 11.1%
Validation accuracy: 42.6%
Test accuracy: 46.4%
Minibatch loss at step 500: 126.372452
Minibatch accuracy: 86.3%
Validation accuracy: 84.2%
Test accuracy: 90.8%
Minibatch loss at step 1000: 10.873931
Minibatch accuracy: 86.1%
Validation accuracy: 86.2%
Test accuracy: 92.3%
Minibatch loss at step 1500: 1.545693
Minibatch accuracy: 84.6%
Validation accuracy: 85.6%
Test accuracy: 91.9%
Minibatch loss at step 2000: 0.690539
Minibatch accuracy: 85.2%
Validation accuracy: 85.7%
Test accuracy: 91.6%

Problem 2

Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?



In [ ]:
batch_size = 64
num_hidden = 1024
train_subset = 500

nnGraph = tf.Graph()
with nnGraph.as_default():

    # input data
    tf_test_dataset = tf.constant(test_dataset)
    tf_valid_dataset = tf.constant(valid_dataset)    
    tf_train_labels = tf.placeholder(tf.float32,shape=(batch_size,num_labels))
    tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size,image_size*image_size))
    
    # variables.
    biases1 = tf.Variable(tf.zeros([num_hidden]))
    biases2 = tf.Variable(tf.zeros([num_labels]))
    weights1 = tf.Variable(tf.truncated_normal([image_size*image_size,num_hidden]))
    weights2 = tf.Variable(tf.truncated_normal([num_hidden,num_labels]))
    
    # regularization constants
    beta1 = tf.constant(0.0)
    beta2 = tf.constant(0.0)
    
    # training computation.
    tf_hidden_units = tf.nn.relu(tf.matmul(tf_train_dataset, weights1)+biases1)
    
    logits = tf.matmul(tf_hidden_units, weights2)+biases2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits,tf_train_labels)) \
           + beta1*tf.nn.l2_loss(weights1) + beta2*tf.nn.l2_loss(weights2)

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset,weights1)+biases1),
                                              weights2)+biases2)
    valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset,weights1)+biases1),
                                               weights2)+biases2)

In [ ]:
num_steps = 5001

with tf.Session(graph=nnGraph) as session:
  
    tf.initialize_all_variables().run()
    print("Initialized")
  
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step*batch_size)%(train_subset-batch_size)
    
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset+batch_size),:]
        batch_labels = train_labels[offset:(offset+batch_size),:]
    
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset:batch_data,tf_train_labels:batch_labels}
    
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
    
        if (step%500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
            print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

Problem 3

Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.

What happens to our extreme overfitting case?
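
Note that tf.nn.dropout scales the surviving activations up by 1/keep_prob during training, so no rescaling is needed at evaluation time. The solution below builds separate train and eval expressions; an alternative pattern is a single graph with keep_prob as a placeholder, fed 0.5 for training and 1.0 for evaluation. A minimal sketch (variable names are illustrative):

In [ ]:
# Sketch: dropout controlled at feed time instead of separate forward passes.
sketch_graph = tf.Graph()
with sketch_graph.as_default():
    x = tf.placeholder(tf.float32, shape=(None, image_size*image_size))
    keep_prob_ph = tf.placeholder(tf.float32)  # 0.5 to train, 1.0 to evaluate
    w1 = tf.Variable(tf.truncated_normal([image_size*image_size, num_hidden]))
    b1 = tf.Variable(tf.zeros([num_hidden]))
    w2 = tf.Variable(tf.truncated_normal([num_hidden, num_labels]))
    b2 = tf.Variable(tf.zeros([num_labels]))
    hidden = tf.nn.dropout(tf.nn.relu(tf.matmul(x, w1) + b1), keep_prob_ph)
    logits = tf.matmul(hidden, w2) + b2
    prediction = tf.nn.softmax(logits)
    # train: session.run(..., feed_dict={x: batch, ..., keep_prob_ph: 0.5})
    # eval:  prediction.eval(feed_dict={x: data, keep_prob_ph: 1.0})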



In [ ]:
batch_size = 64
num_hidden = 1024
train_subset = 500

nnGraph = tf.Graph()
with nnGraph.as_default():

    # input data
    tf_test_dataset = tf.constant(test_dataset)
    tf_valid_dataset = tf.constant(valid_dataset)    
    tf_train_labels = tf.placeholder(tf.float32,shape=(batch_size,num_labels))
    tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size,image_size*image_size))
    
    # variables.
    biases1 = tf.Variable(tf.zeros([num_hidden]))
    biases2 = tf.Variable(tf.zeros([num_labels]))
    weights1 = tf.Variable(tf.truncated_normal([image_size*image_size,num_hidden]))
    weights2 = tf.Variable(tf.truncated_normal([num_hidden,num_labels]))
    
    # regularization constants
    beta1 = tf.constant(0.0)
    beta2 = tf.constant(0.0)
    
    # dropout constant
    keep_prob = tf.constant(0.5)
    
    # training computation.
    tf_hidden_units = tf.nn.dropout(tf.nn.relu(tf.matmul(tf_train_dataset, weights1)+biases1),keep_prob)
    
    logits = tf.matmul(tf_hidden_units, weights2)+biases2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits,tf_train_labels)) \
           + beta1*tf.nn.l2_loss(weights1) + beta2*tf.nn.l2_loss(weights2)

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset,weights1)+biases1),
                                              weights2)+biases2)
    valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset,weights1)+biases1),
                                               weights2)+biases2)

In [ ]:
num_steps = 5001

with tf.Session(graph=nnGraph) as session:
  
    tf.initialize_all_variables().run()
    print("Initialized")
  
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step*batch_size)%(train_subset-batch_size)
    
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset+batch_size),:]
        batch_labels = train_labels[offset:(offset+batch_size),:]
    
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset:batch_data,tf_train_labels:batch_labels}
    
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
    
        if (step%500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
            print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

Problem 4

Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.

One avenue you can explore is to add multiple layers.

Another one is to use learning rate decay:

global_step = tf.Variable(0)  # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
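
Filled in with illustrative values, the schedule computes 0.5 * 0.96**(global_step / decay_steps); with staircase=True the exponent uses integer division, so the rate drops in discrete steps. A minimal sketch (decay_steps and decay_rate here are illustrative, and loss is assumed to be defined as in the cells above):

In [ ]:
# Illustrative decay schedule: rate = 0.5 * 0.96**(global_step / 1000).
global_step = tf.Variable(0)  # incremented once per optimizer step
learning_rate = tf.train.exponential_decay(
    0.5,             # initial learning rate
    global_step,
    1000,            # decay_steps (illustrative)
    0.96,            # decay_rate (illustrative)
    staircase=True)  # drop the rate in discrete steps
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)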



In [12]:
batch_size = 512
train_subset = train_labels.shape[0]

num_hidden_1 = 1024
num_hidden_2 = 256
num_hidden_3 = 64

num_steps = 12001

keep_prob = 0.6
reg_constant = 1e-4
update_steps = 100
update_exponent = 0.96
initial_learning_rate = 6e-3
weight_dev = np.sqrt(1.0/train_subset)  # initial weight stddev, scaled by the training-set size

dnnGraph = tf.Graph()
with dnnGraph.as_default():

    # input data
    tf_test_dataset = tf.constant(test_dataset)
    tf_valid_dataset = tf.constant(valid_dataset)    
    tf_train_labels = tf.placeholder(tf.float32,shape=(batch_size,num_labels))
    tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size,image_size*image_size))
    
    # variables.
    biases1 = tf.Variable(tf.zeros([num_hidden_1]))
    biases2 = tf.Variable(tf.zeros([num_hidden_2]))
    biases3 = tf.Variable(tf.zeros([num_hidden_3]))
    biases4 = tf.Variable(tf.zeros([num_labels]))
    
    weights1 = tf.Variable(tf.truncated_normal([image_size*image_size,num_hidden_1],stddev=weight_dev))
    weights2 = tf.Variable(tf.truncated_normal([num_hidden_1,num_hidden_2],stddev=weight_dev))
    weights3 = tf.Variable(tf.truncated_normal([num_hidden_2,num_hidden_3],stddev=weight_dev))
    weights4 = tf.Variable(tf.truncated_normal([num_hidden_3,num_labels],stddev=weight_dev))
    
    # regularization constants
    beta = tf.constant(reg_constant)
    
    # dropout constant
    keep_prob_1 = tf.constant(keep_prob)
    keep_prob_2 = tf.constant(keep_prob)
    keep_prob_3 = tf.constant(keep_prob)
    
    # training computation.
    def forwardPropwithDrop(data):
        tf_hidden_units_1 = tf.nn.dropout(tf.nn.relu(tf.matmul(data,weights1)+biases1),keep_prob_1)
        tf_hidden_units_2 = tf.nn.dropout(tf.nn.relu(tf.matmul(tf_hidden_units_1,weights2)+biases2),keep_prob_2)
        tf_hidden_units_3 = tf.nn.dropout(tf.nn.relu(tf.matmul(tf_hidden_units_2,weights3)+biases3),keep_prob_3)
        logits = tf.matmul(tf_hidden_units_3, weights4)+biases4
        return logits
    
    def forwardPropwithoutDrop(data):
        tf_hidden_units_1 = tf.nn.relu(tf.matmul(data,weights1)+biases1)
        tf_hidden_units_2 = tf.nn.relu(tf.matmul(tf_hidden_units_1,weights2)+biases2)
        tf_hidden_units_3 = tf.nn.relu(tf.matmul(tf_hidden_units_2,weights3)+biases3)
        logits = tf.matmul(tf_hidden_units_3, weights4)+biases4
        return logits
    
    logits = forwardPropwithDrop(tf_train_dataset)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits,tf_train_labels)) \
           + beta*(tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2) + 
                   tf.nn.l2_loss(weights3) + tf.nn.l2_loss(weights4))

    # optimizer
    global_step = tf.Variable(0)
    learning_rate = tf.train.exponential_decay(initial_learning_rate,global_step,update_steps,update_exponent)
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss,global_step=global_step)

    # Predictions for the training, validation, and test data.
    test_prediction = tf.nn.softmax(forwardPropwithoutDrop(tf_test_dataset))
    valid_prediction = tf.nn.softmax(forwardPropwithoutDrop(tf_valid_dataset))
    train_prediction = tf.nn.softmax(forwardPropwithoutDrop(tf_train_dataset))

In [13]:
with tf.Session(graph=dnnGraph) as session:
  
    tf.initialize_all_variables().run()
    print("Initialized")
  
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step*batch_size)%(train_subset-batch_size)
    
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset+batch_size),:]
        batch_labels = train_labels[offset:(offset+batch_size),:]
    
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset:batch_data,tf_train_labels:batch_labels}
    
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
    
        if (step%500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
            print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 2.302795
Minibatch accuracy: 6.1%
Validation accuracy: 10.0%
Test accuracy: 10.0%
Minibatch loss at step 500: 0.670646
Minibatch accuracy: 86.3%
Validation accuracy: 85.0%
Test accuracy: 91.5%
Minibatch loss at step 1000: 0.668569
Minibatch accuracy: 87.9%
Validation accuracy: 86.2%
Test accuracy: 92.3%
Minibatch loss at step 1500: 0.698432
Minibatch accuracy: 84.8%
Validation accuracy: 86.9%
Test accuracy: 93.0%
Minibatch loss at step 2000: 0.622877
Minibatch accuracy: 87.7%
Validation accuracy: 87.5%
Test accuracy: 93.3%
Minibatch loss at step 2500: 0.532955
Minibatch accuracy: 89.3%
Validation accuracy: 88.2%
Test accuracy: 93.8%
Minibatch loss at step 3000: 0.532571
Minibatch accuracy: 88.7%
Validation accuracy: 88.3%
Test accuracy: 94.1%
Minibatch loss at step 3500: 0.445569
Minibatch accuracy: 90.6%
Validation accuracy: 89.1%
Test accuracy: 94.7%
Minibatch loss at step 4000: 0.456826
Minibatch accuracy: 90.8%
Validation accuracy: 89.3%
Test accuracy: 94.8%
Minibatch loss at step 4500: 0.451316
Minibatch accuracy: 89.8%
Validation accuracy: 89.6%
Test accuracy: 95.1%
Minibatch loss at step 5000: 0.519884
Minibatch accuracy: 88.7%
Validation accuracy: 89.7%
Test accuracy: 95.1%
Minibatch loss at step 5500: 0.408073
Minibatch accuracy: 90.8%
Validation accuracy: 89.8%
Test accuracy: 95.3%
Minibatch loss at step 6000: 0.461886
Minibatch accuracy: 90.0%
Validation accuracy: 90.2%
Test accuracy: 95.5%
Minibatch loss at step 6500: 0.477012
Minibatch accuracy: 90.4%
Validation accuracy: 90.1%
Test accuracy: 95.7%
Minibatch loss at step 7000: 0.309482
Minibatch accuracy: 93.8%
Validation accuracy: 90.3%
Test accuracy: 95.8%
Minibatch loss at step 7500: 0.342681
Minibatch accuracy: 93.0%
Validation accuracy: 90.3%
Test accuracy: 95.7%
Minibatch loss at step 8000: 0.395787
Minibatch accuracy: 92.4%
Validation accuracy: 90.5%
Test accuracy: 95.8%
Minibatch loss at step 8500: 0.338320
Minibatch accuracy: 92.6%
Validation accuracy: 90.5%
Test accuracy: 95.8%
Minibatch loss at step 9000: 0.400793
Minibatch accuracy: 91.4%
Validation accuracy: 90.6%
Test accuracy: 96.0%
Minibatch loss at step 9500: 0.341982
Minibatch accuracy: 92.8%
Validation accuracy: 90.5%
Test accuracy: 96.0%
Minibatch loss at step 10000: 0.264198
Minibatch accuracy: 95.1%
Validation accuracy: 90.6%
Test accuracy: 96.1%
Minibatch loss at step 10500: 0.379584
Minibatch accuracy: 91.8%
Validation accuracy: 90.5%
Test accuracy: 96.0%
Minibatch loss at step 11000: 0.350483
Minibatch accuracy: 93.9%
Validation accuracy: 90.7%
Test accuracy: 96.0%
Minibatch loss at step 11500: 0.322964
Minibatch accuracy: 92.8%
Validation accuracy: 90.6%
Test accuracy: 96.1%
Minibatch loss at step 12000: 0.333289
Minibatch accuracy: 93.6%
Validation accuracy: 90.8%
Test accuracy: 96.1%
