In this notebook, we will build a logistic regression classifier in TensorFlow for MNIST.
Since MNIST has 10 classes, the classifier is defined as $y = \mathrm{softmax}(Wx + b)$, the multi-class generalization of the binary logistic regression model $y = \mathrm{sigmoid}(Wx + b)$.
In [1]:
from __future__ import print_function
import tensorflow as tf
import math
In [2]:
# Get the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
In [3]:
# Step 0: Read the data into the "./data/mnist" folder
mnist = input_data.read_data_sets('./data/mnist', one_hot=True)
In [4]:
# Step 1: Create placeholders to feed your inputs and labels into
# Each MNIST image has 28*28 = 784 pixels, so each image can be represented as a 1x784 tensor (a batch is a [batch_size, 784] tensor).
# There are 10 possible classes for each image, corresponding to digits 0-9.
# Name the input placeholder "mnist_inputs" and the labels placeholder "mnist_labels"
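# One possible solution (a sketch; the placeholder names follow the instructions above):
mnist_inputs = tf.placeholder(tf.float32, shape=[None, 784], name="mnist_inputs")
mnist_labels = tf.placeholder(tf.float32, shape=[None, 10], name="mnist_labels")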
In [5]:
# Step 2: Create Variables for the parameters of the model: the weights and the bias.
# Initialize the bias to a 0 tensor. (hint: tf.zeros)
# Initialize the weights with a random uniform distribution, with a max of 1 and a min of -1. (hint: tf.random_uniform)
# Be sure to think carefully about the shapes of these tensors.
# Optional: Define a global_step variable for use in tensorboard.
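# One possible solution (a sketch): W maps the 784 pixel inputs to 10 class scores,
# and b holds one bias per class.
W = tf.Variable(tf.random_uniform([784, 10], minval=-1, maxval=1), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")
# Optional global_step counter, incremented by the optimizer in Step 6.
global_step = tf.Variable(0, trainable=False, name="global_step")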
In [6]:
# Step 3: Build the model, stringing together your placeholders and variables to create
# two ops: one for the logits (output right before sigmoid), and one for the probability
# distribution generated from the model (output right after sigmoid/softmax operation).
# tf.nn.softmax may come in handy for generating the probabilities.
# Name the logits operation "logits", and the probability operation "predictions".
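# One possible solution (a sketch): logits = Wx + b, predictions = softmax(logits).
logits = tf.add(tf.matmul(mnist_inputs, W), b, name="logits")
predictions = tf.nn.softmax(logits, name="predictions")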
In [7]:
# Step 4: Define your loss function. Use the cross entropy loss, and use TensorFlow's
# built-in "tf.nn.softmax_cross_entropy_with_logits(logits=..., labels=...)" function to get the
# cross entropy of each instance in the batch. Then take the average loss over the batch.
# Name the loss op "loss"
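# One possible solution (a sketch): per-example cross entropy, averaged over the batch.
xentropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=mnist_labels)
loss = tf.reduce_mean(xentropy, name="loss")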
In [8]:
# Step 5: Define a function to get the accuracy of your model on this batch. Name it "accuracy"
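# One possible solution (a sketch): the fraction of examples whose argmax prediction
# matches the argmax of the one-hot label.
correct = tf.equal(tf.argmax(predictions, 1), tf.argmax(mnist_labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")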
In [9]:
# Step 6: Define an optimizer that you want to use, and create the training operation to
# use the optimizer to minimize the loss. Name the training operation "train_op"
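# One possible solution (a sketch) using plain gradient descent; the learning rate of 0.5
# is just an example value, not prescribed by the exercise.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
train_op = optimizer.minimize(loss, global_step=global_step, name="train_op")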
In [10]:
# Define summary ops for TensorBoard (optional). Name the summary op "summary_op".
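# One possible solution (a sketch, assuming the TF >= 1.0 tf.summary API):
tf.summary.scalar("batch_loss", loss)
tf.summary.scalar("batch_accuracy", accuracy)
summary_op = tf.summary.merge_all()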
In [11]:
# Step 7: Create a session for the model to run in, and then set up a training loop
# to optimize the weights on the MNIST data. Optionally, add TensorBoard visualization too.
nb_train_examples = mnist.train.num_examples
batch_size = 128
nb_epochs = 30
batches_per_epoch = int(math.ceil(nb_train_examples / float(batch_size)))  # float division so the last partial batch is counted
log_period = 250
with tf.Session() as sess:
    # Step 7.1 Initialize your Variables
    sess.run(tf.global_variables_initializer())
    # Set up tensorboard writer (optional; the log directory is just an example path)
    summary_writer = tf.summary.FileWriter('./logs/mnist_logreg', sess.graph)
    for epoch in range(nb_epochs):
        epoch_total_loss = 0
        epoch_total_accuracy = 0
        for batch in range(batches_per_epoch):
            loop_global_step = sess.run(global_step) + 1
            batch_inputs, batch_labels = mnist.train.next_batch(batch_size)
            # Step 7.2 Get the batch loss, batch accuracy, and run the training op.
            feed_dict = {mnist_inputs: batch_inputs, mnist_labels: batch_labels}
            batch_loss, batch_acc, _ = sess.run([loss, accuracy, train_op], feed_dict=feed_dict)
            # If the log period is up, write summaries to tensorboard.
            if loop_global_step % log_period == 0:
                summary = sess.run(summary_op, feed_dict=feed_dict)
                summary_writer.add_summary(summary, global_step=loop_global_step)
            epoch_total_loss += batch_loss
            epoch_total_accuracy += batch_acc
        epoch_average_loss = epoch_total_loss / batches_per_epoch
        epoch_average_accuracy = epoch_total_accuracy / batches_per_epoch
        print("Epoch {} done! Average Train Loss: {}, Average Train Accuracy: {}".format(epoch,
                                                                                         epoch_average_loss,
                                                                                         epoch_average_accuracy))
    print("Finished {} epochs".format(nb_epochs))
In [ ]: