TensorFlow - Multilayer Perceptron Example

About this notebook

I will build an MLP network with two hidden layers, each using the ReLU activation function. The network is rather large in terms of nodes to give it a chance to learn as much as it can, and at the same time I apply dropout to reduce the chance of overfitting. The result should be roughly 95%+ prediction accuracy, which is decent given how simple the network is.

Requirements

  • Python 3.5
  • TensorFlow 1.1

Import dependencies


In [7]:
import tensorflow as tf

Import data

I use the MNIST database of handwritten digits, available at http://yann.lecun.com/exdb/mnist/.


In [8]:
from tensorflow.examples.tutorials.mnist import input_data
mnist_data = input_data.read_data_sets("MNIST_data/", one_hot=True)


Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
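
As a quick sanity check (an illustrative snippet, not part of the original run), the split sizes can be read straight off the loader; 55,000 training images with a batch size of 256 is what gives the 214 batches per epoch reported in the training log further down.

print(mnist_data.train.num_examples)        # expected: 55000
print(mnist_data.validation.num_examples)   # expected: 5000
print(mnist_data.test.num_examples)         # expected: 10000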

Define parameters

In this section, I:

  • define the relevant hyperparameters (optimizer and model parameters),
  • define the input placeholders, and
  • define the weights and biases.

In [9]:
# Hyper parameters
training_epochs = 100
learning_rate = 0.01
batch_size = 256
print_loss_for_each_epoch = 10  # print training progress every 10 epochs
test_validation_size = 512 # validation images to use during training - solely for printing purposes

# Network parameters
n_input = 784               # MNIST length of 28 by 28 image when stored as a column vector
n_hidden_layer_1 = 1024     # features in the 1st hidden layer
n_hidden_layer_2 = 1024     # features in the 2nd hidden layer
n_classes = 10              # total label classes (0-9 digits)
dropout_keep_rate = 0.75    # keep 75% of the hidden outputs; drop the remaining 25%

# Graph input placeholders
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)

# Define weights and biases
weights = {'hl_1': tf.Variable(tf.truncated_normal([n_input,n_hidden_layer_1])),
           'hl_2': tf.Variable(tf.truncated_normal([n_hidden_layer_1,n_hidden_layer_2])),
           'output': tf.Variable(tf.truncated_normal([n_hidden_layer_2,n_classes]))}

biases = {'hl_1': tf.Variable(0.01 * tf.truncated_normal([n_hidden_layer_1])),
          'hl_2': tf.Variable(0.01 * tf.truncated_normal([n_hidden_layer_2])),
          'output': tf.Variable(0.01 * tf.truncated_normal([n_classes]))}
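
A note on keep_prob: tf.nn.dropout keeps each activation with probability keep_prob and scales the kept values by 1/keep_prob, so with dropout_keep_rate = 0.75 roughly 75% of the hidden outputs are passed on. A minimal, purely illustrative sketch of that behaviour (standalone, not part of the model graph):

demo = tf.ones([1, 8])
demo_dropped = tf.nn.dropout(demo, keep_prob=0.75)
with tf.Session() as sess:
    print(sess.run(demo_dropped))   # on average 6 of the 8 entries are ~1.33 (= 1/0.75), the rest are 0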

Create TF Graph

I build the TensorFlow graph using two hidden layers with dropout to prevent overfitting. I use softmax cross-entropy on the logits as the cost function and an Adam optimizer to minimize the training error. I found that the Adam optimizer converges faster than Stochastic Gradient Descent.


In [10]:
def multilayer_perceptron_network(x, weights, biases):
    # Hidden layer 1 with ReLU
    hidden_layer_1 = tf.add(tf.matmul(x, weights['hl_1']), biases['hl_1'])
    hidden_layer_1 = tf.nn.relu(hidden_layer_1)
    hidden_layer_1 = tf.nn.dropout(hidden_layer_1, keep_prob=keep_prob)

    # Hidden layer 2 with ReLU
    hidden_layer_2 = tf.add(tf.matmul(hidden_layer_1, weights['hl_2']), biases['hl_2'])
    hidden_layer_2 = tf.nn.relu(hidden_layer_2)
    hidden_layer_2 = tf.nn.dropout(hidden_layer_2, keep_prob=keep_prob)

    # Output layer with linear activation
    output_layer = tf.add(tf.matmul(hidden_layer_2, weights['output']), biases['output'])
    return output_layer

# Construct model
logits = multilayer_perceptron_network(x, weights, biases)

# Define cost and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss=cost)

# Define accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
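
For intuition, the cost op above computes, per example, a softmax over the logits followed by the negative log-likelihood of the true class. A small NumPy illustration with made-up values (not part of the graph):

import numpy as np

example_logits = np.array([2.0, 1.0, 0.1])
example_label = np.array([1.0, 0.0, 0.0])   # one-hot: the true class is 0

softmax = np.exp(example_logits) / np.sum(np.exp(example_logits))
cross_entropy = -np.sum(example_label * np.log(softmax))
print(cross_entropy)                        # ~0.417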

Launch TF Graph

Finally, I launch the graph, run the training loop, and print the train, validation, and test accuracy. Not bad for a simple MLP net.


In [11]:
# Init variables
init = tf.global_variables_initializer()

# Run Graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):

        for batch in range(mnist_data.train.num_examples//batch_size):

            # Get x and y values for the given batch
            batch_x, batch_y = mnist_data.train.next_batch(batch_size)

            # Compute graph with respect to 'optimizer' and 'cost'
            _, loss, training_accuracy = sess.run([optimizer, cost, accuracy], feed_dict={x: batch_x,
                                                             y: batch_y,
                                                             keep_prob: dropout_keep_rate})

        # Compute validation accuracy once per epoch - solely for the print below
        validation_accuracy = sess.run(accuracy, feed_dict={x: mnist_data.validation.images[:test_validation_size],
                                                            y: mnist_data.validation.labels[:test_validation_size],
                                                            keep_prob: 1.})

        # Display logs per epoch step
        if epoch % print_loss_for_each_epoch == 0:
            print('Epoch {:>2}, Batches {:>3}, Loss: {:>10.4f}, Train Accuracy: {:.4f}, Val Accuracy: {:.4f}'.format(
                epoch + 1, # epoch starts at 0
                batch + 1, # batch starts at 0
                loss,
                training_accuracy,
                validation_accuracy))

    print('Optimization Finished!')

    # Testing cycle
    test_accuracy = sess.run(accuracy, feed_dict={x: mnist_data.test.images,
                                                  y: mnist_data.test.labels,
                                                  keep_prob: 1.})
    print('Test accuracy: {:3f}'.format(test_accuracy))


Epoch  1, Batches 214, Loss:    86.9222, Train Accuracy: 0.9062, Val Accuracy: 0.9453
Epoch 11, Batches 214, Loss:    10.5737, Train Accuracy: 0.9766, Val Accuracy: 0.9688
Epoch 21, Batches 214, Loss:     1.1919, Train Accuracy: 0.9844, Val Accuracy: 0.9707
Epoch 31, Batches 214, Loss:     1.8853, Train Accuracy: 0.9805, Val Accuracy: 0.9785
Epoch 41, Batches 214, Loss:     0.1632, Train Accuracy: 0.9844, Val Accuracy: 0.9727
Epoch 51, Batches 214, Loss:     0.5461, Train Accuracy: 0.9375, Val Accuracy: 0.9453
Epoch 61, Batches 214, Loss:     0.0982, Train Accuracy: 0.9609, Val Accuracy: 0.9590
Epoch 71, Batches 214, Loss:     0.1424, Train Accuracy: 0.9570, Val Accuracy: 0.9590
Epoch 81, Batches 214, Loss:     0.3279, Train Accuracy: 0.9609, Val Accuracy: 0.9531
Epoch 91, Batches 214, Loss:     0.1936, Train Accuracy: 0.9766, Val Accuracy: 0.9629
Optimization Finished!
Test accuracy: 0.944500