Deep Learning

Assignment 3

Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.

The goal of this assignment is to explore regularization techniques.


In [1]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
import time
from six.moves import cPickle as pickle

First reload the data we generated in notmist.ipynb.


In [2]:
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)


Training set (80000, 28, 28) (80000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)

Reformat into a shape that's more adapted to the models we're going to train:

  • data as a flat matrix,
  • labels as float 1-hot encodings.

In [3]:
image_size = 28
num_labels = 10

def reformat(dataset, labels):
  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
  return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)


Training set (80000, 784) (80000, 10)
Validation set (10000, 784) (10000, 10)
Test set (10000, 784) (10000, 10)

In [4]:
def accuracy(predictions, labels):
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])

Problem 1

Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.


L2-Regularization for Logistic Model


In [17]:
batch_size = 128
num_steps = 3001

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases = tf.Variable(tf.zeros([num_labels]))
  
  # Training computation.
  logits = tf.matmul(tf_train_dataset, weights) + biases
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
  rloss = loss + tf.constant(0.001) * tf.nn.l2_loss(weights)
  
  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(rloss)
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(
    tf.matmul(tf_valid_dataset, weights) + biases)
  test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)

with tf.Session(graph=graph) as session:
  tic = time.time()
  try:
    tf.global_variables_initializer().run()
  except AttributeError:
    tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  score = accuracy(test_prediction.eval(), test_labels)
  benchmark = 85.3
  print("Test accuracy: %.1f%%" % score)
  print("StochasticGradientDescent Time: %.3f s" % (time.time() - tic))
  print("L2 Regularization Improvement: %.1f%%" % (score - benchmark))


Initialized
Minibatch loss at step 0: 19.009109
Minibatch accuracy: 9.4%
Validation accuracy: 10.4%
Minibatch loss at step 500: 1.728431
Minibatch accuracy: 72.7%
Validation accuracy: 76.0%
Minibatch loss at step 1000: 0.805928
Minibatch accuracy: 80.5%
Validation accuracy: 78.1%
Minibatch loss at step 1500: 0.808299
Minibatch accuracy: 82.0%
Validation accuracy: 80.1%
Minibatch loss at step 2000: 0.877429
Minibatch accuracy: 75.8%
Validation accuracy: 80.1%
Minibatch loss at step 2500: 0.674414
Minibatch accuracy: 81.2%
Validation accuracy: 81.3%
Minibatch loss at step 3000: 0.710956
Minibatch accuracy: 83.6%
Validation accuracy: 81.5%
Test accuracy: 88.1%
StochasticGradientDescent Time: 3.039 s
L2 Regularization Improvement: 2.8%

L2-Regularization for Neural Network Model


In [24]:
batch_size = 128
nodes = 1024
num_steps = 3001
num_cores = 2

nngraph = tf.Graph()

with nngraph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, nodes]))
  biases = tf.Variable(tf.zeros([nodes]))
  z = tf.matmul(tf_train_dataset, weights) + biases
  
  # Hidden Layer
  u = np.sqrt(6.0) / np.sqrt(nodes + num_labels)
  hidden_weights = tf.Variable(
    tf.random_uniform([nodes, num_labels], minval=-u, maxval=u))
  hidden_bias = tf.Variable(tf.zeros([num_labels]))
  logits = tf.matmul(tf.nn.relu(z), hidden_weights) + hidden_bias
  
  def forward_prop_tensor(dataset):
    return tf.nn.softmax(
      tf.matmul(
        tf.nn.relu(tf.matmul(dataset, weights) + biases), hidden_weights
      ) + hidden_bias
    )
  
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
  rloss = loss + tf.constant(0.001) * (
    tf.nn.l2_loss(weights) + tf.nn.l2_loss(hidden_weights))
  
  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(rloss)
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = forward_prop_tensor(tf_valid_dataset)
  test_prediction = forward_prop_tensor(tf_test_dataset)
  
with tf.Session(graph=nngraph, config=tf.ConfigProto(
    inter_op_parallelism_threads=num_cores, 
    intra_op_parallelism_threads=num_cores)) as session:
  tic = time.time()
  try:
    tf.global_variables_initializer().run()
  except AttributeError:
    tf.initialize_all_variables().run()
  print("One-Hidden-Layer NueralNetworkGraph Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  score = accuracy(test_prediction.eval(), test_labels)
  benchmark = 89.0
  print("Test accuracy: %.1f%%" % score)
  print("StochasticGradientDescent Time: %.3f s" % (time.time() - tic))
  print("L2 Regularization Improvement: %.1f%%" % (score - benchmark))


One-Hidden-Layer NueralNetworkGraph Initialized
Minibatch loss at step 0: 19.467033
Minibatch accuracy: 7.0%
Validation accuracy: 16.0%
Minibatch loss at step 500: 8.796896
Minibatch accuracy: 78.9%
Validation accuracy: 78.1%
Minibatch loss at step 1000: 4.181072
Minibatch accuracy: 82.0%
Validation accuracy: 80.1%
Minibatch loss at step 1500: 0.714755
Minibatch accuracy: 89.1%
Validation accuracy: 82.6%
Minibatch loss at step 2000: 0.750587
Minibatch accuracy: 85.2%
Validation accuracy: 83.9%
Minibatch loss at step 2500: 0.399135
Minibatch accuracy: 90.6%
Validation accuracy: 85.4%
Minibatch loss at step 3000: 0.291372
Minibatch accuracy: 93.0%
Validation accuracy: 86.3%
Test accuracy: 92.5%
StochasticGradientDescent Time: 33.001 s

Problem 2

Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?



In [34]:
num_samples = 100
batch_size = 50
nodes = 1024
num_steps = 5001
num_cores = 2

nngraph = tf.Graph()

with nngraph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, nodes]))
  biases = tf.Variable(tf.zeros([nodes]))
  z = tf.matmul(tf_train_dataset, weights) + biases
  
  # Hidden Layer
  u = np.sqrt(6.0) / np.sqrt(nodes + num_labels)
  hidden_weights = tf.Variable(
    tf.random_uniform([nodes, num_labels], minval=-u, maxval=u))
  hidden_bias = tf.Variable(tf.zeros([num_labels]))
  logits = tf.matmul(tf.nn.relu(z), hidden_weights) + hidden_bias
  
  def forward_prop_tensor(dataset):
    return tf.nn.softmax(
      tf.matmul(
        tf.nn.relu(tf.matmul(dataset, weights) + biases), hidden_weights
      ) + hidden_bias
    )
  
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
  rloss = loss + tf.constant(0.001) * (
    tf.nn.l2_loss(weights) + tf.nn.l2_loss(hidden_weights))
  
  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(rloss)
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = forward_prop_tensor(tf_valid_dataset)
  test_prediction = forward_prop_tensor(tf_test_dataset)
  
with tf.Session(graph=nngraph, config=tf.ConfigProto(
    inter_op_parallelism_threads=num_cores, 
    intra_op_parallelism_threads=num_cores)) as session:
  tic = time.time()
  try:
    tf.global_variables_initializer().run()
  except AttributeError:
    tf.initialize_all_variables().run()
  print("One-Hidden-Layer NueralNetworkGraph Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data[:num_samples], 
                 tf_train_labels : batch_labels[:num_samples]}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  score = accuracy(test_prediction.eval(), test_labels)
  benchmark = 89.0
  print("Test accuracy: %.1f%%" % score)
  print("StochasticGradientDescent Time: %.3f s" % (time.time() - tic))
  print("L2 Regularization Improvement: %.1f%%" % (score - benchmark))


One-Hidden-Layer NueralNetworkGraph Initialized
Minibatch loss at step 0: 19.108963
Minibatch accuracy: 12.0%
Validation accuracy: 32.5%
Minibatch loss at step 500: 8.293187
Minibatch accuracy: 80.0%
Validation accuracy: 77.2%
Minibatch loss at step 1000: 3.182842
Minibatch accuracy: 80.0%
Validation accuracy: 77.7%
Minibatch loss at step 1500: 1.419265
Minibatch accuracy: 72.0%
Validation accuracy: 78.5%
Minibatch loss at step 2000: 0.327838
Minibatch accuracy: 88.0%
Validation accuracy: 83.1%
Minibatch loss at step 2500: 0.409700
Minibatch accuracy: 84.0%
Validation accuracy: 84.2%
Minibatch loss at step 3000: 0.441499
Minibatch accuracy: 86.0%
Validation accuracy: 84.2%
Minibatch loss at step 3500: 0.568649
Minibatch accuracy: 80.0%
Validation accuracy: 85.4%
Minibatch loss at step 4000: 0.267706
Minibatch accuracy: 90.0%
Validation accuracy: 85.3%
Minibatch loss at step 4500: 0.528480
Minibatch accuracy: 80.0%
Validation accuracy: 85.4%
Minibatch loss at step 5000: 0.467398
Minibatch accuracy: 88.0%
Validation accuracy: 85.9%
Test accuracy: 92.5%
StochasticGradientDescent Time: 29.553 s
L2 Regularization Improvement: 3.5%

Problem 3

Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.

What happens to our extreme overfitting case?



In [30]:
batch_size = 128
nodes = 1024
num_steps = 3001
num_cores = 2

nngraph = tf.Graph()

with nngraph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)
  
  # Variables.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, nodes]))
  biases = tf.Variable(tf.zeros([nodes]))
  z = tf.matmul(tf_train_dataset, weights) + biases
  
  # Hidden Layer
  u = np.sqrt(6.0) / np.sqrt(nodes + num_labels)
  hidden_weights = tf.Variable(
    tf.random_uniform([nodes, num_labels], minval=-u, maxval=u))
  hidden_bias = tf.Variable(tf.zeros([num_labels]))
  rate = tf.constant(0.5)
  logits = tf.matmul(tf.nn.dropout(tf.nn.relu(z), rate), hidden_weights) + hidden_bias
  
  def forward_prop_tensor(dataset):
    return tf.nn.softmax(
      tf.matmul(
        tf.nn.relu(tf.matmul(dataset, weights) + biases), hidden_weights
      ) + hidden_bias
    )
  
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
  rloss = loss + tf.constant(0.001) * (
    tf.nn.l2_loss(weights) + tf.nn.l2_loss(hidden_weights))
  
  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(rloss)
  
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = forward_prop_tensor(tf_valid_dataset)
  test_prediction = forward_prop_tensor(tf_test_dataset)
  
with tf.Session(graph=nngraph, config=tf.ConfigProto(
    inter_op_parallelism_threads=num_cores, 
    intra_op_parallelism_threads=num_cores)) as session:
  tic = time.time()
  try:
    tf.global_variables_initializer().run()
  except AttributeError:
    tf.initialize_all_variables().run()
  print("One-Hidden-Layer NueralNetworkGraph Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  score = accuracy(test_prediction.eval(), test_labels)
  benchmark = 89.0
  print("Test accuracy: %.1f%%" % score)
  print("StochasticGradientDescent Time: %.3f s" % (time.time() - tic))
  print("L2 Regularization Improvement: %.1f%%" % (score - benchmark))


One-Hidden-Layer NueralNetworkGraph Initialized
Minibatch loss at step 0: 25.664806
Minibatch accuracy: 7.8%
Validation accuracy: 16.4%
Minibatch loss at step 500: 16.765213
Minibatch accuracy: 71.1%
Validation accuracy: 79.5%
Minibatch loss at step 1000: 9.130754
Minibatch accuracy: 75.8%
Validation accuracy: 80.2%
Minibatch loss at step 1500: 1.311195
Minibatch accuracy: 73.4%
Validation accuracy: 82.2%
Minibatch loss at step 2000: 1.699077
Minibatch accuracy: 78.9%
Validation accuracy: 83.3%
Minibatch loss at step 2500: 0.705580
Minibatch accuracy: 80.5%
Validation accuracy: 84.7%
Minibatch loss at step 3000: 0.762661
Minibatch accuracy: 85.9%
Validation accuracy: 85.3%
Test accuracy: 92.0%
StochasticGradientDescent Time: 37.706 s
L2 Regularization Improvement: 3.0%

Problem 4

Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.

One avenue you can explore is to add multiple layers.

Another one is to use learning rate decay:

global_step = tf.Variable(0)  # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)