Deep Learning

Assignment 3

Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.

The goal of this assignment is to explore regularization techniques.


In [1]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle

First reload the data we generated in 1_notmnist.ipynb.


In [2]:
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)


Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)

Reformat into a shape that's more adapted to the models we're going to train:

  • data as a flat matrix,
  • labels as float 1-hot encodings.
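
The labels trick relies on NumPy broadcasting: comparing a row of class indices against a column of labels yields a boolean matrix with exactly one True per row. A tiny standalone illustration (the values are arbitrary):

import numpy as np

labels = np.array([1, 2])
one_hot = (np.arange(3) == labels[:, None]).astype(np.float32)
# array([[0., 1., 0.],
#        [0., 0., 1.]], dtype=float32)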

In [3]:
image_size = 28
num_labels = 10

def reformat(dataset, labels):
  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
  return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)


Training set (200000, 784) (200000, 10)
Validation set (10000, 784) (10000, 10)
Test set (10000, 784) (10000, 10)

In [35]:
def accuracy(predictions, labels):
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])

Problem 1

Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
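
Note that nn.l2_loss(t) computes sum(t ** 2) / 2, without a square root, which is worth keeping in mind when tuning beta. A minimal sanity check, using an arbitrary tensor:

import numpy as np
import tensorflow as tf

t = np.array([1.0, 2.0, 3.0], dtype=np.float32)
with tf.Session() as s:
  print(s.run(tf.nn.l2_loss(t)))  # 7.0
print(np.sum(t ** 2) / 2.0)       # 7.0 - same value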



Logistic Model


In [6]:
batch_size = 128
beta = 0.005

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases = tf.Variable(tf.zeros([num_labels]))
  
  # Training computation.
  logits = tf.matmul(tf_train_dataset, weights) + biases
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)
    + beta*tf.nn.l2_loss(weights)
  )
  
  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(
    tf.matmul(tf_valid_dataset, weights) + biases)
  test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)

In [7]:
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 32.632149
Minibatch accuracy: 9.4%
Validation accuracy: 14.4%
Minibatch loss at step 500: 1.791600
Minibatch accuracy: 77.3%
Validation accuracy: 79.7%
Minibatch loss at step 1000: 0.927333
Minibatch accuracy: 78.9%
Validation accuracy: 81.8%
Minibatch loss at step 1500: 0.755874
Minibatch accuracy: 83.6%
Validation accuracy: 82.0%
Minibatch loss at step 2000: 0.919513
Minibatch accuracy: 74.2%
Validation accuracy: 81.8%
Minibatch loss at step 2500: 0.689223
Minibatch accuracy: 82.8%
Validation accuracy: 82.2%
Minibatch loss at step 3000: 0.855758
Minibatch accuracy: 79.7%
Validation accuracy: 82.6%
Test accuracy: 88.7%

Test accuracy for different values of beta:

  • beta = 0.01 - Test accuracy: 88.5%
  • beta = 0.5 - Test accuracy: 55.5%
  • beta = 0.005 - Test accuracy: 88.7%
  • beta = 0.001 - Test accuracy: 88.7%
  • beta = 0.0001 - Test accuracy: 86.2%

Hidden Layer Model


In [36]:
batch_size = 128
beta = 0.005

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights_layer_1 = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases_layer_1 = tf.Variable(tf.zeros([num_labels]))
  # Layer 2 weights have an input dimension = output of first layer
  weights_layer_2 = tf.Variable(
    tf.truncated_normal([num_labels, num_labels]))
  biases_layer_2 = tf.Variable(tf.zeros([num_labels]))
  
  # Training computation.
  logits_layer_1 = tf.matmul(tf_train_dataset, weights_layer_1) + biases_layer_1
  relu_output = tf.nn.relu(logits_layer_1)
  logits_layer_2 = tf.matmul(relu_output, weights_layer_2) + biases_layer_2
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_layer_2)
    + beta*tf.nn.l2_loss(weights_layer_1)
    + beta*tf.nn.l2_loss(weights_layer_2)
  )
  
  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits_layer_2)

  logits_l_1_valid = tf.matmul(tf_valid_dataset, weights_layer_1) + biases_layer_1
  relu_valid = tf.nn.relu(logits_l_1_valid)  
  logits_l_2_valid = tf.matmul(relu_valid, weights_layer_2) + biases_layer_2  
  valid_prediction = tf.nn.softmax(logits_l_2_valid)

  logits_l_1_test = tf.matmul(tf_test_dataset, weights_layer_1) + biases_layer_1
  relu_test = tf.nn.relu(logits_l_1_test)  
  logits_l_2_test = tf.matmul(relu_test, weights_layer_2) + biases_layer_2  
  test_prediction = tf.nn.softmax(logits_l_2_test)

In [37]:
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 40.259315
Minibatch accuracy: 16.4%
Validation accuracy: 25.9%
Minibatch loss at step 500: 1.924523
Minibatch accuracy: 77.3%
Validation accuracy: 78.6%
Minibatch loss at step 1000: 0.966337
Minibatch accuracy: 78.9%
Validation accuracy: 81.9%
Minibatch loss at step 1500: 0.662468
Minibatch accuracy: 82.8%
Validation accuracy: 82.0%
Minibatch loss at step 2000: 0.787361
Minibatch accuracy: 78.9%
Validation accuracy: 82.3%
Minibatch loss at step 2500: 0.658163
Minibatch accuracy: 81.2%
Validation accuracy: 82.3%
Minibatch loss at step 3000: 0.867185
Minibatch accuracy: 81.2%
Validation accuracy: 82.8%
Test accuracy: 88.7%

Test accuracy for different values of beta:

  • beta = 0.01 - Test accuracy: 88.7%
  • beta = 0.5 - Test accuracy: 45.9% (and slow)
  • beta = 0.005 - Test accuracy: 89.2%
  • beta = 0.001 - Test accuracy: 89.2%
  • beta = 0.0001 - Test accuracy: 85.7%


Problem 2

Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
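
Restricting to "just a few batches" can be done by slicing the (already shuffled) training arrays; the cell below keeps 640 samples, i.e. 5 batches of 128. A minimal sketch, where n_batches is an illustrative name:

n_batches = 5
reduced_train_dataset = train_dataset[:n_batches * batch_size, :]
reduced_train_labels = train_labels[:n_batches * batch_size, :]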



In [11]:
num_steps = 3001

# Restrict training data
reduced_train_dataset = train_dataset[:640, :]
reduced_train_labels = train_labels[:640, :]

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (reduced_train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = reduced_train_dataset[offset:(offset + batch_size), :]
    batch_labels = reduced_train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 37.813343
Minibatch accuracy: 11.7%
Validation accuracy: 22.2%
Minibatch loss at step 500: 1.518300
Minibatch accuracy: 94.5%
Validation accuracy: 69.3%
Minibatch loss at step 1000: 0.338952
Minibatch accuracy: 99.2%
Validation accuracy: 74.0%
Minibatch loss at step 1500: 0.231990
Minibatch accuracy: 99.2%
Validation accuracy: 74.7%
Minibatch loss at step 2000: 0.216898
Minibatch accuracy: 99.2%
Validation accuracy: 75.3%
Minibatch loss at step 2500: 0.204409
Minibatch accuracy: 100.0%
Validation accuracy: 75.8%
Minibatch loss at step 3000: 0.192178
Minibatch accuracy: 100.0%
Validation accuracy: 76.0%
Test accuracy: 82.9%

Restricting the training set to a small number of samples gives quick convergence and 100% accuracy on the minibatch, but much weaker performance on the validation and test sets than the full-data run achieved.

On 1000 samples:

  • Minibatch loss at step 3000: 0.268746
  • Minibatch accuracy: 100.0%
  • Validation accuracy: 78.4%
  • Test accuracy: 85.1%

On 640 samples:

  • Minibatch loss at step 3000: 0.196770
  • Minibatch accuracy: 100.0%
  • Validation accuracy: 76.6%
  • Test accuracy: 83.6%

Problem 3

Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.

What happens to our extreme overfitting case?
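
One common pattern is to feed the keep probability through a placeholder so the same graph serves both modes. A minimal runnable sketch with illustrative names, not the exact code below:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 4))
keep_prob = tf.placeholder(tf.float32)
hidden = tf.nn.dropout(tf.nn.relu(x), keep_prob)

batch = np.ones((2, 4), dtype=np.float32)
with tf.Session() as s:
  # Training mode: roughly half the units zeroed, survivors scaled by 1/keep_prob.
  print(s.run(hidden, {x: batch, keep_prob: 0.5}))
  # Evaluation mode: keep_prob = 1.0 makes dropout the identity.
  print(s.run(hidden, {x: batch, keep_prob: 1.0}))

The code below feeds keep_probability this way for training, and builds separate dropout-free paths for the validation and test predictions, which achieves the same thing.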



In [12]:
batch_size = 128
beta = 0.005

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights_layer_1 = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases_layer_1 = tf.Variable(tf.zeros([num_labels]))
  # Layer 2 weights have an input dimension = output of first layer
  weights_layer_2 = tf.Variable(
    tf.truncated_normal([num_labels, num_labels]))
  biases_layer_2 = tf.Variable(tf.zeros([num_labels]))
  
  # Training computation.
  logits_layer_1 = tf.matmul(tf_train_dataset, weights_layer_1) + biases_layer_1
  relu_output = tf.nn.relu(logits_layer_1)
  # Introduce dropout - the probability that a unit is kept is fed via a placeholder
  keep_probability = tf.placeholder(tf.float32)
  dropout_output = tf.nn.dropout(relu_output, keep_probability)

  logits_layer_2 = tf.matmul(dropout_output, weights_layer_2) + biases_layer_2
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_layer_2)
    + beta*tf.nn.l2_loss(weights_layer_1)
    + beta*tf.nn.l2_loss(weights_layer_2)
  )
  
  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits_layer_2)

  logits_l_1_valid = tf.matmul(tf_valid_dataset, weights_layer_1) + biases_layer_1
  relu_valid = tf.nn.relu(logits_l_1_valid)  
  logits_l_2_valid = tf.matmul(relu_valid, weights_layer_2) + biases_layer_2  
  valid_prediction = tf.nn.softmax(logits_l_2_valid)

  logits_l_1_test = tf.matmul(tf_test_dataset, weights_layer_1) + biases_layer_1
  relu_test = tf.nn.relu(logits_l_1_test)  
  logits_l_2_test = tf.matmul(relu_test, weights_layer_2) + biases_layer_2  
  test_prediction = tf.nn.softmax(logits_l_2_test)

In [13]:
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_probability: 0.5}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 59.214775
Minibatch accuracy: 12.5%
Validation accuracy: 14.9%
Minibatch loss at step 500: 2.834952
Minibatch accuracy: 39.1%
Validation accuracy: 64.0%
Minibatch loss at step 1000: 1.622742
Minibatch accuracy: 46.1%
Validation accuracy: 75.2%
Minibatch loss at step 1500: 1.387890
Minibatch accuracy: 53.1%
Validation accuracy: 77.3%
Minibatch loss at step 2000: 1.412264
Minibatch accuracy: 56.2%
Validation accuracy: 77.8%
Minibatch loss at step 2500: 1.323213
Minibatch accuracy: 57.0%
Validation accuracy: 79.0%
Minibatch loss at step 3000: 1.544850
Minibatch accuracy: 45.3%
Validation accuracy: 77.6%
Test accuracy: 83.8%

Dropout doesn't improve performance here - test accuracy comes out around 80-84%, versus 88.7% without it. Maybe it's being applied incorrectly, but a likelier culprit is that the hidden layer is only num_labels = 10 units wide, so dropping half of its outputs discards most of the representation.
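
If the narrow layer is the culprit, giving dropout more units to work with might help. A sketch of the variable shapes only, where num_hidden = 1024 is an illustrative, untuned choice:

num_hidden = 1024
weights_layer_1 = tf.Variable(
  tf.truncated_normal([image_size * image_size, num_hidden]))
biases_layer_1 = tf.Variable(tf.zeros([num_hidden]))
weights_layer_2 = tf.Variable(
  tf.truncated_normal([num_hidden, num_labels]))
biases_layer_2 = tf.Variable(tf.zeros([num_labels]))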

Try it on the reduced training set below:


In [14]:
num_steps = 3001

# Restrict training data
reduced_train_dataset = train_dataset[:640, :]
reduced_train_labels = train_labels[:640, :]

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (reduced_train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = reduced_train_dataset[offset:(offset + batch_size), :]
    batch_labels = reduced_train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_probability: 0.5}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 52.094055
Minibatch accuracy: 14.8%
Validation accuracy: 15.7%
Minibatch loss at step 500: 2.330360
Minibatch accuracy: 57.8%
Validation accuracy: 70.8%
Minibatch loss at step 1000: 1.143688
Minibatch accuracy: 67.2%
Validation accuracy: 72.2%
Minibatch loss at step 1500: 1.009757
Minibatch accuracy: 69.5%
Validation accuracy: 71.8%
Minibatch loss at step 2000: 1.038862
Minibatch accuracy: 68.0%
Validation accuracy: 73.6%
Minibatch loss at step 2500: 0.943375
Minibatch accuracy: 71.1%
Validation accuracy: 73.3%
Minibatch loss at step 3000: 1.094494
Minibatch accuracy: 66.4%
Validation accuracy: 72.0%
Test accuracy: 79.1%

Dropout does reduce the overfitting - minibatch accuracy no longer saturates at 100% - but it does not improve validation or test accuracy here.


Problem 4

Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.

One avenue you can explore is to add multiple layers.

Another one is to use learning rate decay:

global_step = tf.Variable(0)  # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
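
For reference, exponential_decay computes learning_rate * decay_rate ** (global_step / decay_steps); with staircase=True the exponent uses integer division, so the rate drops in discrete steps. A filled-in sketch of the optimizer block - decay_steps = 500 and decay_rate = 0.96 are illustrative values, not tuned:

global_step = tf.Variable(0)  # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, 500, 0.96, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)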


Try adding an additional layer:


In [26]:
batch_size = 128
beta = 0.005

graph = tf.Graph()
with graph.as_default():

    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(None, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(None, num_labels))
  
    # Variables.
    weights_layer_1 = tf.Variable(
        tf.truncated_normal([image_size * image_size, num_labels]))
    biases_layer_1 = tf.Variable(tf.zeros([num_labels]))
    # Layer 2 weights have an input dimension = output of first layer
    weights_layer_2 = tf.Variable(
        tf.truncated_normal([num_labels, num_labels]))
    biases_layer_2 = tf.Variable(tf.zeros([num_labels]))
    # Layer 3
    weights_layer_3 = tf.Variable(
        tf.truncated_normal([num_labels, num_labels]))
    biases_layer_3 = tf.Variable(tf.zeros([num_labels]))
  
    # Training computation.
    # Compute layer 1
    logits_layer_1 = tf.matmul(tf_train_dataset, weights_layer_1) + biases_layer_1
    relu_output_1 = tf.nn.relu(logits_layer_1)
    # Introduce dropout - the probability that a unit is kept is fed via a placeholder
    keep_probability = tf.placeholder(tf.float32)
    dropout_output_1 = tf.nn.dropout(relu_output_1, keep_probability)
    # Compute layer 2
    logits_layer_2 = tf.matmul(dropout_output_1, weights_layer_2) + biases_layer_2
    relu_output_2 = tf.nn.relu(logits_layer_2)
    dropout_output_2 = tf.nn.dropout(relu_output_2, keep_probability)
    # Compute layer 3
    logits_layer_3 = tf.matmul(dropout_output_2, weights_layer_3) + biases_layer_3
    
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_layer_3)
        + beta*tf.nn.l2_loss(weights_layer_1)
        + beta*tf.nn.l2_loss(weights_layer_2)
        + beta*tf.nn.l2_loss(weights_layer_3)
    )
  
    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  
    # Predictions for the data.
    train_prediction = tf.nn.softmax(logits_layer_3)

    # Determine accuracy
    correct_prediction = tf.equal(tf.argmax(train_prediction,1), tf.argmax(tf_train_labels,1))
    accuracy_calc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))*100

In [32]:
num_steps = 3001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_probability: 1.0}
    
        # Renamed from `accuracy` to avoid shadowing the accuracy() helper
        # defined earlier.
        _, l, acc = session.run(
          [optimizer, loss, accuracy_calc], feed_dict=feed_dict
        )

        if (step % 500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % acc)
            valid_feed_dict = {tf_train_dataset : valid_dataset, tf_train_labels : valid_labels, keep_probability: 1.0}
            print("Validation accuracy: %.1f%%" % accuracy_calc.eval(feed_dict=valid_feed_dict))

    test_feed_dict = {tf_train_dataset : test_dataset, tf_train_labels : test_labels, keep_probability: 1.0}
    print("Test accuracy: %.1f%%" % accuracy_calc.eval(feed_dict=test_feed_dict))


Initialized
Minibatch loss at step 0: 43.138603
Minibatch accuracy: 9.4%
Validation accuracy: 15.4%
Minibatch loss at step 500: 2.713095
Minibatch accuracy: 40.6%
Validation accuracy: 50.3%
Minibatch loss at step 1000: 1.376885
Minibatch accuracy: 60.9%
Validation accuracy: 74.8%
Minibatch loss at step 1500: 1.298263
Minibatch accuracy: 73.4%
Validation accuracy: 75.8%
Minibatch loss at step 2000: 1.260409
Minibatch accuracy: 71.1%
Validation accuracy: 74.8%
Minibatch loss at step 2500: 1.015574
Minibatch accuracy: 77.3%
Validation accuracy: 75.4%
Minibatch loss at step 3000: 1.433252
Minibatch accuracy: 68.0%
Validation accuracy: 73.7%
Test accuracy: 80.2%

  • 87.2% test accuracy with 3 layers and no dropout.
  • Dies at ~10% accuracy with 0.5 dropout - is it basically destroying all the information?
  • Even with a keep probability of 0.9 it only reaches 25%.

The code may be wrong somewhere, but the narrow layers are a plausible explanation: every hidden layer here is only num_labels = 10 units wide, so dropout removes a large fraction of an already tiny representation at each layer.


Learning rate decay


In [43]:
batch_size = 128
beta = 0.005

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights_layer_1 = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases_layer_1 = tf.Variable(tf.zeros([num_labels]))
  # Layer 2 weights have an input dimension = output of first layer
  weights_layer_2 = tf.Variable(
    tf.truncated_normal([num_labels, num_labels]))
  biases_layer_2 = tf.Variable(tf.zeros([num_labels]))
  
  # Training computation.
  logits_layer_1 = tf.matmul(tf_train_dataset, weights_layer_1) + biases_layer_1
  relu_output = tf.nn.relu(logits_layer_1)
  logits_layer_2 = tf.matmul(relu_output, weights_layer_2) + biases_layer_2
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_layer_2)
    + beta*tf.nn.l2_loss(weights_layer_1)
    + beta*tf.nn.l2_loss(weights_layer_2)
  )

  # Optimizer.
  global_step = tf.Variable(0)  # count the number of steps taken.
  learning_rate = tf.train.exponential_decay(0.5, global_step, 100, 0.96)
  optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)  
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits_layer_2)

  logits_l_1_valid = tf.matmul(tf_valid_dataset, weights_layer_1) + biases_layer_1
  relu_valid = tf.nn.relu(logits_l_1_valid)  
  logits_l_2_valid = tf.matmul(relu_valid, weights_layer_2) + biases_layer_2  
  valid_prediction = tf.nn.softmax(logits_l_2_valid)

  logits_l_1_test = tf.matmul(tf_test_dataset, weights_layer_1) + biases_layer_1
  relu_test = tf.nn.relu(logits_l_1_test)  
  logits_l_2_test = tf.matmul(relu_test, weights_layer_2) + biases_layer_2  
  test_prediction = tf.nn.softmax(logits_l_2_test)

In [44]:
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))


Initialized
Minibatch loss at step 0: 43.019760
Minibatch accuracy: 4.7%
Validation accuracy: 14.4%
Minibatch loss at step 500: 2.011266
Minibatch accuracy: 75.8%
Validation accuracy: 79.5%
Minibatch loss at step 1000: 0.933745
Minibatch accuracy: 75.0%
Validation accuracy: 82.5%
Minibatch loss at step 1500: 0.643496
Minibatch accuracy: 85.9%
Validation accuracy: 82.6%
Minibatch loss at step 2000: 0.855891
Minibatch accuracy: 80.5%
Validation accuracy: 82.4%
Minibatch loss at step 2500: 0.665540
Minibatch accuracy: 79.7%
Validation accuracy: 83.1%
Minibatch loss at step 3000: 0.827709
Minibatch accuracy: 82.8%
Validation accuracy: 83.5%
Test accuracy: 89.3%

This run decays the rate every 100 steps; a variant with decay every 500 steps reached 89.9%.

