Deep Learning

Assignment 3

Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.

The goal of this assignment is to explore regularization techniques.


In [1]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle



First reload the data we generated in 1_notmnist.ipynb.


In [2]:
pickle_file = '../notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)


Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)

Reformat into a shape that's more adapted to the models we're going to train:

  • data as a flat matrix,
  • labels as float 1-hot encodings.

In [3]:
image_size = 28
num_labels = 10

def reformat(dataset, labels):
  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
  return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)


Training set (200000, 784) (200000, 10)
Validation set (10000, 784) (10000, 10)
Test set (10000, 784) (10000, 10)

In [4]:
def accuracy(predictions, labels):
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])

Problem 1

Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.



In [3]:
graph = tf.Graph()
with graph.as_default():
...
  # Cross-entropy loss plus an L2 penalty on both weight matrices, scaled by beta.
  loss = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)) + \
      tf.scalar_mul(beta, tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2))
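The cell above shows only the regularized loss term. For reference, a minimal sketch of a full one-hidden-layer graph with L2 regularization might look like the following (variable names follow the earlier notebooks, and batch_size, num_hidden_nodes, and beta are the hyperparameters listed in the summary below; this is an illustration, not the exact cell used for these results):

# Sketch: one-hidden-layer ReLU network with an L2 penalty on both weight matrices.
graph = tf.Graph()
with graph.as_default():
  tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))

  weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
  biases1 = tf.Variable(tf.zeros([num_hidden_nodes]))
  weights2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
  biases2 = tf.Variable(tf.zeros([num_labels]))

  hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
  logits = tf.matmul(hidden, weights2) + biases2

  loss = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)) + \
      beta * (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2))

  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
  train_prediction = tf.nn.softmax(logits)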
Summary

With

batch_size = 128
num_hidden_nodes = 1024
beta = 1e-3
num_steps = 3001

Results

  • Test accuracy: 88.5% with beta=0.000000 (no L2 regularization)
  • Test accuracy: 86.7% with beta=0.000010
  • Test accuracy: 88.8% with beta=0.000100
  • Test accuracy: 92.6% with beta=0.001000
  • Test accuracy: 89.7% with beta=0.010000
  • Test accuracy: 82.2% with beta=0.100000
  • Test accuracy: 10.0% with beta=1.000000

Problem 2

Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?



In [1]:
offset = 0  # fixed, instead of offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
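The only change relative to the normal training loop is the offset line; a sketch of the loop under that change (assuming the usual feed_dict pattern from the earlier notebooks) looks like this:

# Sketch: training loop with the offset pinned to 0, so the model only ever sees
# the first batch_size training examples (extreme overfitting on purpose).
with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  for step in range(num_steps):
    offset = 0  # instead of (step * batch_size) % (train_labels.shape[0] - batch_size)
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
    _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)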

With

batch_size = 128
num_hidden_nodes = 1024
beta = 1e-3
num_steps = 3001

Results

  • Original Test accuracy: 92.6% with beta=0.001000
  • With offset = 0: Test accuracy: 67.5% with beta=0.001000

Problem 3

Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.

What happens to our extreme overfitting case?



In [ ]:
keep_rate = 0.5
  dropout = tf.nn.dropout(activated_hidden_layer, keep_rate)  # dropout is applied after the activation
  logits = tf.matmul(dropout, weights2) + biases2
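Since dropout must be active only during training, one way to enforce that (a sketch; keep_prob here is an illustrative placeholder name, not necessarily how the runs below were wired) is to feed the keep probability as a placeholder and pass 1.0 at evaluation time:

# Sketch: gate dropout through a placeholder so it only fires during training.
keep_prob = tf.placeholder(tf.float32)  # fed with keep_rate for training, 1.0 for eval
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
dropped = tf.nn.dropout(hidden, keep_prob)
logits = tf.matmul(dropped, weights2) + biases2

# Training step:   session.run(optimizer, feed_dict={..., keep_prob: keep_rate})
# Validation/test: session.run(prediction, feed_dict={..., keep_prob: 1.0})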

Vary keep_rate:

  • Test accuracy: 92.7% with beta=0.001000, keep_rate =1.000000
  • Test accuracy: 92.3% with beta=0.001000, keep_rate =0.800000
  • Test accuracy: 91.8% with beta=0.001000, keep_rate =0.600000
  • Test accuracy: 90.7% with beta=0.001000, keep_rate =0.400000
  • Test accuracy: 87.0% with beta=0.001000, keep_rate =0.200000

Vary beta while keeping keep_rate=0.5

  • Test accuracy: 91.7% with beta=0.001000, keep_rate =0.500000
  • Test accuracy: 87.6% with beta=0.000100, keep_rate =0.500000
  • Test accuracy: 89.5% with beta=0.010000, keep_rate =0.500000

Note that keep_rate cannot be set to 0; the valid range is (0, 1].
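Also note that tf.nn.dropout uses inverted dropout: surviving activations are scaled up by 1/keep_prob at training time, so no rescaling is needed at evaluation. A quick check:

# Kept elements are scaled by 1/keep_prob; dropped elements become 0.
import tensorflow as tf

x = tf.ones([1, 8])
dropped = tf.nn.dropout(x, 0.5)
with tf.Session() as sess:
  print(sess.run(dropped))  # roughly half the entries are 0.0, the rest are 2.0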

Worst Case (offset = 0): Significant Improvement

  • Normal: Test accuracy: 91.7% with beta=0.001000, keep_rate =0.500000
  • offset = 0 without dropout: Test accuracy: 67.5% with beta=0.001000 (keep_rate =1)
  • offset = 0 with dropout: Test accuracy: 74.6% with beta=0.001000, keep_rate =0.500000

Problem 4

Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.

One avenue you can explore is to add multiple layers.

Another one is to use learning rate decay:

global_step = tf.Variable(0)  # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
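For reference, tf.train.exponential_decay computes learning_rate * decay_rate ** (global_step / decay_steps), with the exponent floored when staircase=True. A minimal sketch of the complete setup (the decay_steps and decay_rate values here are illustrative):

# Sketch: exponential learning rate decay wired into gradient descent.
global_step = tf.Variable(0, trainable=False)  # incremented once per minimize() call
learning_rate = tf.train.exponential_decay(
    0.5,              # initial learning rate
    global_step,
    decay_steps=100,  # decay every 100 steps
    decay_rate=0.95,
    staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)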


Fixed Learning Rate

batch_size = 128
num_hidden_nodes1 = 1024
num_hidden_nodes2 = 1024
beta = 0.001
num_steps = 3001
keep_rate = 0.5
learning_rate=1e-3
  • Test accuracy: 89.1% with beta=0.001000, keep_rate =0.500000, learning_rate=0.001000
  • Test accuracy: 83.4% with beta=0.001000, keep_rate =0.500000, learning_rate=0.010000
  • learning_rate = 0.1: blows up with NaN
  • learning_rate = 0.5 (all runs in Problems 1-3 used 0.5): blows up with NaN
  • learning_rate = 1e-4: converges very slowly

Learning Rate Decay

  • learning_rate = tf.train.exponential_decay(0.01, global_step, 100, 0.95): Test accuracy: 85.5% with beta=0.001000, keep_rate =0.500000
  • learning_rate = tf.train.exponential_decay(0.005, global_step, 100, 0.95): Test accuracy: 88.9% with beta=0.001000, keep_rate =0.500000
  • learning_rate = tf.train.exponential_decay(0.001, global_step, 100, 0.95): Test accuracy: 89.3% with beta=0.001000, keep_rate =0.500000
  • learning_rate = tf.train.exponential_decay(0.001, global_step, 100, 0.5): Test accuracy: 85.4% with beta=0.001000, keep_rate =0.500000
  • learning_rate = tf.train.exponential_decay(0.01, global_step, 100, 0.5): Test accuracy: 88.0% with beta=0.001000, keep_rate =0.500000
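To see how fast these schedules actually decay (non-staircase form), the effective rate is base * decay_rate ** (step / decay_steps); for example, the third configuration above ends at roughly 2.1e-4:

# Final learning rate of tf.train.exponential_decay(0.001, global_step, 100, 0.95)
# after 3000 steps, computed by hand (non-staircase form).
base, decay_rate, decay_steps, step = 0.001, 0.95, 100, 3000
print(base * decay_rate ** (step / decay_steps))  # ~0.000215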

More Training Steps without Learning Rate Decay

batch_size = 128
num_hidden_nodes1 = 1024
num_hidden_nodes2 = 1024
beta = 0.001
num_steps = 30001
keep_rate = 0.5
learning_rate = 1e-3
  • 30k steps: Test accuracy: 86.8% with beta=0.001000, keep_rate =0.500000

Change Size of Hidden Layers

batch_size = 128
num_hidden_nodes1 = 256
num_hidden_nodes2 = 512
beta = 0.001
num_steps = 3001
keep_rate = 0.5
learning_rate=1e-3
  • Test accuracy: 86.1% with beta=0.001000, keep_rate =0.500000
  • Test accuracy: 85.7% with beta=0.001000, keep_rate =1.000000

A forum user mentioned a 4-hidden-layer solution that reaches 97.3%:

https://discussions.udacity.com/t/assignment-3-3-how-to-implement-dropout/45730/24

I was able to get an accuracy of 97.3% using a 4 hidden layer network 1024x1024x305x75 and 95k steps. The trick was to use good weight initialization (sqrt(2/n)) and lower dropout rate (I used 0.75). The code is here https://discussions.udacity.com/t/assignment-4-problem-2/46525/26?u=endri.deliu. With conv nets you get even higher.

prob3.4_endri.deliu.py runs as follows (after fixing a few Python 3 compatibility problems) and reaches 96.7%:

Initialized
Minibatch loss at step 0 : 2.4214315
Minibatch accuracy: 33.6%
Validation accuracy: 21.9%
Minibatch loss at step 500 : 0.74792475
Minibatch accuracy: 85.2%
Validation accuracy: 85.1%
Minibatch loss at step 1000 : 0.6289795
Minibatch accuracy: 85.9%
Validation accuracy: 86.6%
Minibatch loss at step 1500 : 0.45435938
Minibatch accuracy: 90.6%
Validation accuracy: 87.2%
Minibatch loss at step 2000 : 0.64454144
Minibatch accuracy: 83.6%
Validation accuracy: 87.9%
Minibatch loss at step 2500 : 0.47072983
Minibatch accuracy: 85.2%
Validation accuracy: 88.7%
Minibatch loss at step 3000 : 0.33217508
Minibatch accuracy: 93.8%
Validation accuracy: 88.8%
...
Minibatch loss at step 92500 : 0.14325579
Minibatch accuracy: 98.4%
Validation accuracy: 92.6%
Minibatch loss at step 93000 : 0.07832281
Minibatch accuracy: 98.4%
Validation accuracy: 92.7%
Minibatch loss at step 93500 : 0.056985322
Minibatch accuracy: 99.2%
Validation accuracy: 92.7%
Minibatch loss at step 94000 : 0.097948775
Minibatch accuracy: 99.2%
Validation accuracy: 92.7%
Minibatch loss at step 94500 : 0.08198348
Minibatch accuracy: 97.7%
Validation accuracy: 92.6%
Minibatch loss at step 95000 : 0.10525039
Minibatch accuracy: 98.4%
Validation accuracy: 92.6%
##########################
Test accuracy: 96.7%

Full output is at output_endri.deliu.txt.

Another run with only 3000 steps has a result of 93.8%

Its setup

batch_size = 128
hidden_layer1_size = 1024
hidden_layer2_size = 305
hidden_lastlayer_size = 75

use_multilayers = True

regularization_meta=0.03 #Note that this is not used in the code (commented out)
...
num_steps = 95001

Analysis

  • It is a 4-hidden-layer network (1024x1024x305x75), in spite of only three hidden layer sizes being defined above, because hidden_layer1_size is used twice.
  • Learning rate decay is used: learning_rate = tf.train.exponential_decay(0.3, global_step, 3500, 0.86, staircase=True)
  • He initialization is used: with n = weight_matrix.shape[0] (the fan-in), the weights are drawn with stddev = np.sqrt(2/n); see the sketch after this list.
  • dropout is used
    • keep_prob=75% for training
    • keep_prob=100% for validation and testing
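A minimal sketch of that initialization scheme (the init_weights helper name is illustrative): the standard deviation of each weight matrix is derived from its fan-in, which plays well with ReLU units.

import numpy as np
import tensorflow as tf

def init_weights(shape):
  # He initialization: stddev = sqrt(2 / fan_in), well suited to ReLU activations.
  fan_in = shape[0]
  return tf.Variable(tf.truncated_normal(shape, stddev=np.sqrt(2.0 / fan_in)))

# e.g. the first hidden layer of the networks above:
weights1 = init_weights([28 * 28, 1024])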

My Own 6-Layer Code

prob3.4_6layers.py:

batch_size = 128
num_hidden_nodes1 = 1024
num_hidden_nodes2 = 1024
num_hidden_nodes3 = 305
num_hidden_nodes4 = 75
beta = 0.03
num_steps = 30001
keep_rate = 0.75
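A sketch of how such a stack might be wired up, reusing the init_weights helper sketched above (this is illustrative, not the actual contents of prob3.4_6layers.py; tf_train_dataset and tf_valid_dataset are assumed from the earlier cells):

# Sketch: 4 hidden layers (1024 -> 1024 -> 305 -> 75) with dropout after every ReLU.
layer_sizes = [image_size * image_size,
               num_hidden_nodes1, num_hidden_nodes2,
               num_hidden_nodes3, num_hidden_nodes4,
               num_labels]

weights = [init_weights([n_in, n_out])
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [tf.Variable(tf.zeros([n_out])) for n_out in layer_sizes[1:]]

def model(data, keep_prob):
  # Forward pass: ReLU + dropout on each hidden layer, linear output layer.
  activations = data
  for w, b in zip(weights[:-1], biases[:-1]):
    activations = tf.nn.dropout(tf.nn.relu(tf.matmul(activations, w) + b), keep_prob)
  return tf.matmul(activations, weights[-1]) + biases[-1]

logits = model(tf_train_dataset, keep_rate)  # training: keep_rate = 0.75
valid_logits = model(tf_valid_dataset, 1.0)  # evaluation: dropout disabled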

Results:

Initialized
Minibatch loss at step 0: 58.998505. learning_rate=0.300000
Minibatch accuracy: 11.7%
Minibatch loss at step 500: 1.461278. learning_rate=0.300000
Minibatch accuracy: 78.1%
...
Minibatch loss at step 30000: 1.107867. learning_rate=0.089765
Minibatch accuracy: 85.9%
#############################
Test accuracy: 88.5% with beta=0.030000, keep_rate =0.750000

Remove Regularization: Better Results

Initialized
Minibatch loss at step 0: 2.484716. learning_rate=0.300000
Minibatch accuracy: 12.5%
Minibatch loss at step 500: 0.748225. learning_rate=0.300000
Minibatch accuracy: 77.3%
Minibatch loss at step 1000: 0.730464. learning_rate=0.300000
Minibatch accuracy: 78.1%
Minibatch loss at step 1500: 0.463169. learning_rate=0.300000
Minibatch accuracy: 85.9%
Minibatch loss at step 2000: 0.601513. learning_rate=0.300000
Minibatch accuracy: 79.7%
Minibatch loss at step 2500: 0.561515. learning_rate=0.300000
Minibatch accuracy: 82.0%
Minibatch loss at step 3000: 0.287524. learning_rate=0.300000
Minibatch accuracy: 90.6%
#############################
Test accuracy: 93.8% with beta=0.000000, keep_rate =0.750000

This matches the 3000-step result of Endri.Deliu's code (93.8%). Note that without regularization the initial loss is much smaller, since the beta=0.03 L2 penalty on the randomly initialized weights no longer dominates the loss at step 0.

My Best Result: 96.7%

prob3.4_6layers.py:

batch_size = 128
num_hidden_nodes1 = 1024
num_hidden_nodes2 = 1024
num_hidden_nodes3 = 305
num_hidden_nodes4 = 75
beta = 0
num_steps = 95001
keep_rate = 0.75

Results:

Minibatch loss at step 94000: 0.077660. learning_rate=0.005944
Minibatch accuracy: 97.7%
Minibatch loss at step 94500: 0.097502. learning_rate=0.005112
Minibatch accuracy: 97.7%
Minibatch loss at step 95000: 0.100003. learning_rate=0.005112
Minibatch accuracy: 96.1%
#############################
Test accuracy: 96.7% with beta=0.000000, keep_rate =0.750000
