Deep Learning with TensorFlow

Credits: Forked from TensorFlow by Google

Setup

Refer to the setup instructions.

Exercise 3

Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.

The goal of this exercise is to explore regularization techniques.


In [7]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import cPickle as pickle
import numpy as np
import tensorflow as tf

First, reload the data we generated in notMNIST.ipynb.


In [8]:
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print 'Training set', train_dataset.shape, train_labels.shape
  print 'Validation set', valid_dataset.shape, valid_labels.shape
  print 'Test set', test_dataset.shape, test_labels.shape


Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (18724, 28, 28) (18724,)

Reformat into a shape that's more adapted to the models we're going to train:

  • data as a flat matrix,
  • labels as float 1-hot encodings.

In [9]:
image_size = 28
num_labels = 10

def reformat(dataset, labels):
  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
  return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape


Training set (200000, 784) (200000, 10)
Validation set (10000, 784) (10000, 10)
Test set (18724, 784) (18724, 10)
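
The one-hot conversion above relies on NumPy broadcasting: comparing np.arange(num_labels) against a column vector of labels yields a boolean matrix with a single True per row. A tiny standalone example (the label values here are made up for illustration):

import numpy as np

# Broadcasting: shape (3,) compared against shape (3, 1) gives shape (3, 3).
labels = np.array([1, 0, 2])
one_hot = (np.arange(3) == labels[:, None]).astype(np.float32)
print one_hot
# [[ 0.  1.  0.]
#  [ 1.  0.  0.]
#  [ 0.  0.  1.]]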

In [10]:
def accuracy(predictions, labels):
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])

Problem 1

Introduce and tune L2 regularization for both the logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using tf.nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
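
For reference, tf.nn.l2_loss(t) returns sum(t ** 2) / 2 as a scalar (note the built-in factor of one half). A minimal standalone check; the multiplier applied to the penalty in the solution below (0.01) is the regularization strength, and is the knob to tune:

import tensorflow as tf

w = tf.constant([3.0, 4.0])
penalty = tf.nn.l2_loss(w)  # (3^2 + 4^2) / 2 = 12.5

with tf.Session() as sess:
    print sess.run(penalty)  # 12.5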




In [30]:
batch_size = 128
image_size = 28
num_labels = 10
graph = tf.Graph()
with graph.as_default():

    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch. The first dimension is None so the
    # same placeholder can also be fed the full validation / test sets later.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(None, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)  # unused below; evaluation reuses the placeholder
    tf_test_dataset = tf.constant(test_dataset)    # unused below; evaluation reuses the placeholder
  
    # Variables.
    l1_size = 1000

    weights_l1 = tf.Variable(
        tf.truncated_normal([image_size * image_size, l1_size],seed=1))
    biases_l1 = tf.Variable(tf.zeros([l1_size]))

    weights_output = tf.Variable(
        tf.truncated_normal([l1_size ,num_labels],seed=1))
    biases_output = tf.Variable(tf.zeros([num_labels]))


  
    # Training computation.
    l1_output = tf.nn.relu(tf.matmul(tf_train_dataset, weights_l1) + biases_l1)
    logits = tf.matmul(l1_output, weights_output) + biases_output

    # L2 penalty on the hidden-layer weights only; 0.01 is the regularization
    # strength. The output-layer weights are left unpenalized here.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)
        ) + 0.01 * tf.nn.l2_loss(weights_l1)

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    train_prediction = tf.nn.softmax(logits)

In [34]:
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print "Initialized"
  for step in xrange(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l = session.run(
      [optimizer, loss], feed_dict=feed_dict)
    
    
    if step % 500 == 0:
      print "Minibatch loss at step", step, ":", l
      print "Validation accuracy:", accuracy(
          session.run(train_prediction, feed_dict={tf_train_dataset: valid_dataset}),
          valid_labels)

  print "Test accuracy:", accuracy(
      session.run(train_prediction, feed_dict={tf_train_dataset: test_dataset}),
      test_labels)


Initialized
Minibatch loss at step 0 : 3372.93
Validation accuracy: 31.05
Minibatch loss at step 500 : 23.044
Validation accuracy: 78.89
Minibatch loss at step 1000 : 0.997539
Validation accuracy: 83.65
Minibatch loss at step 1500 : 0.63444
Validation accuracy: 85.1
Minibatch loss at step 2000 : 0.754798
Validation accuracy: 84.56
Minibatch loss at step 2500 : 0.697436
Validation accuracy: 83.16
Minibatch loss at step 3000 : 0.678637
Validation accuracy: 84.86
Test accuracy: 90.5842768639

Problem 2

Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
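
One way to set this up (a sketch, not part of the original notebook): keep the training loop from Problem 1 but cycle the offset through a handful of fixed batches. num_batches = 5 is an arbitrary choice; anything small shows the effect.

# Drop-in replacement for the offset computation in the training loop above.
# `num_batches` is a hypothetical value; anything small (3-10) works.
num_batches = 5
offset = (step % num_batches) * batch_size
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]

With so little data, you should expect the minibatch loss to fall close to zero while validation accuracy stalls well below the Problem 1 result: the network memorizes the few hundred examples instead of generalizing.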



In [ ]:


Problem 3

Introduce Dropout on the hidden layer of the neural network. Remember: dropout should only be applied during training, not evaluation; otherwise your evaluation results would be stochastic as well. TensorFlow provides tf.nn.dropout() for that, but you have to make sure it's only inserted during training.

What happens to our extreme overfitting case?
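
The most common way to satisfy the "training only" constraint (an illustrative sketch; the solution below takes a different route and drops entries of the weight matrix instead) is to apply tf.nn.dropout to the hidden activations and gate it with a keep_prob placeholder:

# Sketch: dropout on the hidden activations, controlled by a placeholder.
keep_prob = tf.placeholder(tf.float32)  # feed 0.5 for training, 1.0 for evaluation
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights_l1) + biases_l1)
hidden = tf.nn.dropout(hidden, keep_prob)  # also rescales kept units by 1 / keep_prob
logits = tf.matmul(hidden, weights_output) + biases_output

# Training step:   feed_dict={..., keep_prob: 0.5}
# Evaluation step: feed_dict={..., keep_prob: 1.0}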



In [38]:
batch_size = 128
image_size = 28
num_labels = 10
graph = tf.Graph()
with graph.as_default():

    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch. The first dimension is None so the
    # same placeholder can also be fed the full validation / test sets later.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(None, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)  # unused below; evaluation reuses the placeholder
    tf_test_dataset = tf.constant(test_dataset)    # unused below; evaluation reuses the placeholder
  
    # Variables.
    l1_size = 1000

    weights_l1 = tf.Variable(
        tf.truncated_normal([image_size * image_size, l1_size],seed=1))
    biases_l1 = tf.Variable(tf.zeros([l1_size]))

    weights_output = tf.Variable(
        tf.truncated_normal([l1_size ,num_labels],seed=1))
    biases_output = tf.Variable(tf.zeros([num_labels]))


  
    # Training computation.
    # Note: dropout is applied to the weight matrix itself here, which is
    # closer to DropConnect than to the usual dropout on hidden activations.
    drop_weights_l1 = tf.nn.dropout(weights_l1, keep_prob=0.5)
    drop_l1_output = tf.nn.relu(
        tf.matmul(tf_train_dataset, drop_weights_l1) + biases_l1)
    drop_logits = tf.matmul(drop_l1_output, weights_output) + biases_output

    # The L2 penalty is computed on the dropped weights; penalizing weights_l1
    # directly would be the more conventional choice.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(drop_logits, tf_train_labels)
        ) + 0.01 * tf.nn.l2_loss(drop_weights_l1)

    # Prediction graph: no dropout at evaluation time.
    l1_output = tf.nn.relu(tf.matmul(tf_train_dataset, weights_l1) + biases_l1)
    logits = tf.matmul(l1_output, weights_output) + biases_output
    train_prediction = tf.nn.softmax(logits)


  
    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

In [39]:
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print "Initialized"
  for step in xrange(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l = session.run(
      [optimizer, loss], feed_dict=feed_dict)
    
    
    if step % 500 == 0:
      print "Minibatch loss at step", step, ":", l
      print "Validation accuracy:", accuracy(
          session.run(train_prediction, feed_dict={tf_train_dataset: valid_dataset}),
          valid_labels)

  print "Test accuracy:", accuracy(
      session.run(train_prediction, feed_dict={tf_train_dataset: test_dataset}),
      test_labels)


Initialized
Minibatch loss at step 0 : 6602.73
Validation accuracy: 24.45
Minibatch loss at step 500 : 2.46614
Validation accuracy: 77.11
Minibatch loss at step 1000 : 1.36975
Validation accuracy: 77.64
Minibatch loss at step 1500 : 0.887545
Validation accuracy: 82.35
Minibatch loss at step 2000 : 0.934424
Validation accuracy: 81.25
Minibatch loss at step 2500 : 0.947684
Validation accuracy: 82.9
Minibatch loss at step 3000 : 0.826892
Validation accuracy: 82.58
Test accuracy: 88.4746848964

Problem 4

Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.

One avenue you can explore is to add multiple layers.
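
A sketch of that first avenue, extending the graph above with a second hidden layer (the layer sizes and the initializer's stddev are illustrative, not tuned):

h1_size, h2_size = 1024, 256

weights_1 = tf.Variable(
    tf.truncated_normal([image_size * image_size, h1_size], stddev=0.05))
biases_1 = tf.Variable(tf.zeros([h1_size]))
weights_2 = tf.Variable(tf.truncated_normal([h1_size, h2_size], stddev=0.05))
biases_2 = tf.Variable(tf.zeros([h2_size]))
weights_o = tf.Variable(tf.truncated_normal([h2_size, num_labels], stddev=0.05))
biases_o = tf.Variable(tf.zeros([num_labels]))

h1 = tf.nn.relu(tf.matmul(tf_train_dataset, weights_1) + biases_1)
h2 = tf.nn.relu(tf.matmul(h1, weights_2) + biases_2)
logits = tf.matmul(h2, weights_o) + biases_o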

Another one is to use learning rate decay:

global_step = tf.Variable(0)  # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
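
Filled in with concrete (assumed, untuned) parameters, the decay schedule might look like this; decay_steps and decay_rate are the two knobs:

global_step = tf.Variable(0, trainable=False)  # incremented by the optimizer
learning_rate = tf.train.exponential_decay(
    0.5,           # initial learning rate
    global_step,   # current training step
    1000,          # decay_steps: how often to decay (assumed)
    0.65,          # decay_rate: multiplier applied every decay_steps (assumed)
    staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)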



In [ ]: