Image Classification

In this project, we'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. We'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. We'll build convolutional, max pooling, dropout, and fully connected layers. At the end, we'll get to see our neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [1]:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()


tests.test_folder_path(cifar10_dataset_folder_path)


All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 training batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images for the following classes:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.


In [2]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 3
sample_id = 2
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 3:
Samples: 10000
Label Counts: {0: 994, 1: 1042, 2: 965, 3: 997, 4: 990, 5: 1029, 6: 978, 7: 1015, 8: 961, 9: 1029}
First 20 Labels: [8, 5, 0, 6, 9, 2, 8, 3, 6, 2, 7, 4, 6, 9, 0, 0, 7, 3, 7, 2]

Example of Image 2:
Image - Min Value: 21 Max Value: 255
Image - Shape: (32, 32, 3)
Label - Label Id: 0 Name: airplane
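
The display_stats helper handles the loading above; if you want to peek at a batch file directly, each one is a pickled dictionary. A minimal sketch (the 'data' and 'labels' keys and the row layout follow the CIFAR-10 documentation; the variable names here are just for illustration):

import pickle
import numpy as np

# Load one pickled CIFAR-10 batch file directly (illustrative only)
with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as file:
    batch = pickle.load(file, encoding='latin1')

# 'data' is a (10000, 3072) uint8 array; each row is one 32x32 RGB image stored
# channel-first, so reshape to (3, 32, 32) and transpose to (32, 32, 3) for display
features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
labels = batch['labels']
print(features.shape, len(labels))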

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [3]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    #Implement Function
    result_normalize = x/255
    #print(result_normalize[0])
    return result_normalize

tests.test_normalize(normalize)


Tests Passed
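
Since CIFAR-10 pixel values are 8-bit (0 to 255), dividing by 255 is all that's needed. The more general min-max form, which reduces to the same thing here (a sketch; normalize_minmax is not part of the project):

def normalize_minmax(x, data_min=0, data_max=255):
    # Min-max scaling to [0, 1]; with 8-bit image data this is just x / 255
    return (x - data_min) / (data_max - data_min)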

One-hot encode

Just like the previous code cell, we'll be implementing a function for preprocessing. This time, we'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.


In [4]:
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    #Implement Function
    #print(x)
    result_one_hot_encode = np.eye(10)[x]
    #print(result_one_hot_encode)
    return result_one_hot_encode


tests.test_one_hot_encode(one_hot_encode)


Tests Passed
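
The np.eye lookup above already returns the same vector for a given label on every call. An equivalent approach that keeps an explicit encoder outside the function (a sketch assuming scikit-learn is available; one_hot_encode_lb is just an illustrative name):

from sklearn.preprocessing import LabelBinarizer

# Fit the encoder once, outside the function, so the mapping is fixed across calls
label_binarizer = LabelBinarizer()
label_binarizer.fit(range(10))

def one_hot_encode_lb(x):
    # Returns a (len(x), 10) NumPy array of one-hot rows
    return label_binarizer.transform(x)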

Randomize Data

As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but we don't need to for this dataset.
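
If you did want to re-shuffle at some point, the one thing to get right is permuting features and labels together (a minimal sketch, assuming both are NumPy arrays):

import numpy as np

def shuffle_together(features, labels):
    # Draw a single random permutation and apply it to both arrays
    permutation = np.random.permutation(len(features))
    return features[permutation], labels[permutation]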

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [5]:
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
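
helper.preprocess_and_save_data is provided with the project. Conceptually, it runs the two functions above over every CIFAR-10 batch, holds out 10% of each training batch for validation, and pickles the results (e.g. preprocess_validation.p and preprocess_test.p) so later cells can restart from disk. A rough sketch of that idea, not the actual helper code:

def sketch_preprocess_batch(features, labels, validation_fraction=0.1):
    # Normalize images and one-hot encode labels, then hold out a validation slice
    features = normalize(features)
    labels = one_hot_encode(labels)
    n_valid = int(len(features) * validation_fraction)
    return (features[:-n_valid], labels[:-n_valid]), (features[-n_valid:], labels[-n_valid:])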

Check Point

This is our first checkpoint. If we ever decide to come back to this notebook or have to restart the notebook, we can start from here. The preprocessed data has been saved to disk.


In [6]:
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, we'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: None for shapes in TensorFlow allows for a dynamic size.


In [7]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    #Implement Function
    print(image_shape)
    return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name="x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    #Implement Function
    #print(n_classes)
    return tf.placeholder(tf.float32, [None, n_classes], name="y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # Implement Function
    return tf.placeholder(tf.float32, name="keep_prob")


tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


(32, 32, 3)
Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolutional layers have a lot of success with images. In this code cell, implement the function conv2d_maxpool to apply a convolution followed by max pooling.


In [8]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    #Implement Function
    print(conv_ksize)
    print(x_tensor.shape)
    print(conv_num_outputs)
    weights = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[1], int(x_tensor.shape[3]), conv_num_outputs ),\
                                              mean=0, stddev=0.1))
    bias = tf.Variable(tf.zeros(conv_num_outputs))
    
    print(conv_strides)
    conv_layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding = 'SAME')
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    
    conv_layer = tf.nn.relu(conv_layer)
    
    print(pool_ksize)
    print(pool_strides)
    conv_layer = tf.nn.max_pool(conv_layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1], \
                                strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME' )
    
    return conv_layer 

tests.test_con_pool(conv2d_maxpool)


(2, 2)
(?, 32, 32, 5)
10
(4, 4)
(2, 2)
(2, 2)
Tests Passed
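
A quick sanity check on the printed shapes: with 'SAME' padding, TensorFlow keeps the spatial size at ceil(input_size / stride) for both the convolution and the pooling. A small sketch of that arithmetic, using the unit test's values above:

import math

def same_padding_size(input_size, stride):
    # Output spatial size of a 'SAME'-padded op with the given stride
    return math.ceil(input_size / stride)

after_conv = same_padding_size(32, 4)          # conv stride 4: 32 -> 8
after_pool = same_padding_size(after_conv, 2)  # pool stride 2: 8 -> 4
print(after_conv, after_pool)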

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size). For example, a tensor of shape (Batch Size, 10, 30, 6) flattens to (Batch Size, 1800).


In [9]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    #Implement Function
    print(x_tensor.shape)
    flattened_x_tensor = tf.reshape(x_tensor, [-1, int(x_tensor.shape[1]) * int(x_tensor.shape[2]) * int(x_tensor.shape[3]) ])
    print(flattened_x_tensor.shape)
    return flattened_x_tensor

tests.test_flatten(flatten)


(?, 10, 30, 6)
(?, 1800)
Tests Passed

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).


In [10]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # Implement Function
    print(x_tensor.shape)
    print(num_outputs)
    weights = tf.Variable(tf.truncated_normal( (int(x_tensor.shape[1]), num_outputs), mean=0, stddev=0.1 ) )
    bias = tf.Variable(tf.zeros(num_outputs))
    
    layer = tf.add(tf.matmul(x_tensor, weights), bias)
    print(layer.shape)
    return layer

tests.test_fully_conn(fully_conn)


(?, 128)
40
(?, 40)
Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).

Note: Do not apply an activation, softmax, or cross entropy to this layer; the cost defined later uses tf.nn.softmax_cross_entropy_with_logits, which expects raw logits.


In [33]:
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    #Implement Function
    print(x_tensor.shape)
    print(num_outputs)
    weights = tf.Variable(tf.truncated_normal((int(x_tensor.shape[1]), num_outputs), mean=0, stddev=0.1 ) )
    bias = tf.Variable(tf.zeros(num_outputs))
    
    layer = tf.add(tf.matmul(x_tensor, weights), bias )
    print(layer.shape)

    return layer

tests.test_output(output)


(?, 128)
40
(?, 40)
Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply Fully Connected Layers
  • Apply an Output Layer
  • Apply dropout and return the output

In [35]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv1 = conv2d_maxpool(x, 64, (5,5), (1,1), (3,3), (2,2))
    conv2 = conv2d_maxpool(conv1, 128, (3,3), (1,1), (2,2), (2,2))
    conv3 = conv2d_maxpool(conv2, 256, (2,2), (1,1), (2,2), (2,2))
    
    # Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    fc1 = flatten(conv3)

    # Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    fc1 = fully_conn(fc1, 1024)
    fc1 = fully_conn(fc1, 512)
    fc1 = fully_conn(fc1, 256)
    fc1 = tf.nn.relu(fc1)
    
    # Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    fc1 = tf.nn.dropout(fc1, keep_prob)
    outputvar = output(fc1, 10)
    
    # return output
    return outputvar


##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


(32, 32, 3)
(5, 5)
(?, 32, 32, 3)
64
(1, 1)
(3, 3)
(2, 2)
(3, 3)
(?, 16, 16, 64)
128
(1, 1)
(2, 2)
(2, 2)
(2, 2)
(?, 8, 8, 128)
256
(1, 1)
(2, 2)
(2, 2)
(?, 4, 4, 256)
(?, 4096)
(?, 4096)
1024
(?, 1024)
(?, 1024)
512
(?, 512)
(?, 512)
256
(?, 256)
(?, 256)
10
(?, 10)
(5, 5)
(?, 32, 32, 3)
64
(1, 1)
(3, 3)
(2, 2)
(3, 3)
(?, 16, 16, 64)
128
(1, 1)
(2, 2)
(2, 2)
(2, 2)
(?, 8, 8, 128)
256
(1, 1)
(2, 2)
(2, 2)
(?, 4, 4, 256)
(?, 4096)
(?, 4096)
1024
(?, 1024)
(?, 1024)
512
(?, 512)
(?, 512)
256
(?, 256)
(?, 256)
10
(?, 10)
Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization step. The optimization should use optimizer to optimize in session with a feed_dict containing the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [36]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # Implement Function
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
    
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [37]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # Implement Function
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
    valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
    print('Loss: {:>10.4f} Validation Accuracy: {:.6f} '.format(loss, valid_acc))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for.
  • Set keep_probability to the probability of keeping a node using dropout

In [38]:
# Tune Parameters
epochs = 40
batch_size = 128
keep_probability = 0.5

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [39]:
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss:     2.1310 Validation Accuracy: 0.290000 
Epoch  2, CIFAR-10 Batch 1:  Loss:     1.9401 Validation Accuracy: 0.378400 
Epoch  3, CIFAR-10 Batch 1:  Loss:     1.8085 Validation Accuracy: 0.383800 
Epoch  4, CIFAR-10 Batch 1:  Loss:     1.5841 Validation Accuracy: 0.434600 
Epoch  5, CIFAR-10 Batch 1:  Loss:     1.3746 Validation Accuracy: 0.479200 
Epoch  6, CIFAR-10 Batch 1:  Loss:     1.2545 Validation Accuracy: 0.473200 
Epoch  7, CIFAR-10 Batch 1:  Loss:     0.9879 Validation Accuracy: 0.497800 
Epoch  8, CIFAR-10 Batch 1:  Loss:     1.0820 Validation Accuracy: 0.493400 
Epoch  9, CIFAR-10 Batch 1:  Loss:     0.8002 Validation Accuracy: 0.532400 
Epoch 10, CIFAR-10 Batch 1:  Loss:     0.7418 Validation Accuracy: 0.512600 
Epoch 11, CIFAR-10 Batch 1:  Loss:     0.6078 Validation Accuracy: 0.555600 
Epoch 12, CIFAR-10 Batch 1:  Loss:     0.5621 Validation Accuracy: 0.547400 
Epoch 13, CIFAR-10 Batch 1:  Loss:     0.5123 Validation Accuracy: 0.550200 
Epoch 14, CIFAR-10 Batch 1:  Loss:     0.4002 Validation Accuracy: 0.558400 
Epoch 15, CIFAR-10 Batch 1:  Loss:     0.3990 Validation Accuracy: 0.563800 
Epoch 16, CIFAR-10 Batch 1:  Loss:     0.3461 Validation Accuracy: 0.549400 
Epoch 17, CIFAR-10 Batch 1:  Loss:     0.2878 Validation Accuracy: 0.554400 
Epoch 18, CIFAR-10 Batch 1:  Loss:     0.2374 Validation Accuracy: 0.556600 
Epoch 19, CIFAR-10 Batch 1:  Loss:     0.1745 Validation Accuracy: 0.555000 
Epoch 20, CIFAR-10 Batch 1:  Loss:     0.1064 Validation Accuracy: 0.559200 
Epoch 21, CIFAR-10 Batch 1:  Loss:     0.0858 Validation Accuracy: 0.555000 
Epoch 22, CIFAR-10 Batch 1:  Loss:     0.1074 Validation Accuracy: 0.569000 
Epoch 23, CIFAR-10 Batch 1:  Loss:     0.0878 Validation Accuracy: 0.565800 
Epoch 24, CIFAR-10 Batch 1:  Loss:     0.0382 Validation Accuracy: 0.579400 
Epoch 25, CIFAR-10 Batch 1:  Loss:     0.1175 Validation Accuracy: 0.570600 
Epoch 26, CIFAR-10 Batch 1:  Loss:     0.0324 Validation Accuracy: 0.582200 
Epoch 27, CIFAR-10 Batch 1:  Loss:     0.0410 Validation Accuracy: 0.567600 
Epoch 28, CIFAR-10 Batch 1:  Loss:     0.0484 Validation Accuracy: 0.561400 
Epoch 29, CIFAR-10 Batch 1:  Loss:     0.0160 Validation Accuracy: 0.565200 
Epoch 30, CIFAR-10 Batch 1:  Loss:     0.0301 Validation Accuracy: 0.576000 
Epoch 31, CIFAR-10 Batch 1:  Loss:     0.0122 Validation Accuracy: 0.567800 
Epoch 32, CIFAR-10 Batch 1:  Loss:     0.0109 Validation Accuracy: 0.551800 
Epoch 33, CIFAR-10 Batch 1:  Loss:     0.0146 Validation Accuracy: 0.554600 
Epoch 34, CIFAR-10 Batch 1:  Loss:     0.0127 Validation Accuracy: 0.577800 
Epoch 35, CIFAR-10 Batch 1:  Loss:     0.0076 Validation Accuracy: 0.562000 
Epoch 36, CIFAR-10 Batch 1:  Loss:     0.0322 Validation Accuracy: 0.555000 
Epoch 37, CIFAR-10 Batch 1:  Loss:     0.0110 Validation Accuracy: 0.591800 
Epoch 38, CIFAR-10 Batch 1:  Loss:     0.0019 Validation Accuracy: 0.565400 
Epoch 39, CIFAR-10 Batch 1:  Loss:     0.0079 Validation Accuracy: 0.568200 
Epoch 40, CIFAR-10 Batch 1:  Loss:     0.0099 Validation Accuracy: 0.559600 

Fully Train the Model

Now that you've gotten a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [40]:
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss:     2.1524 Validation Accuracy: 0.213600 
Epoch  1, CIFAR-10 Batch 2:  Loss:     2.0353 Validation Accuracy: 0.297600 
Epoch  1, CIFAR-10 Batch 3:  Loss:     1.7272 Validation Accuracy: 0.357400 
Epoch  1, CIFAR-10 Batch 4:  Loss:     1.6584 Validation Accuracy: 0.367800 
Epoch  1, CIFAR-10 Batch 5:  Loss:     1.6339 Validation Accuracy: 0.413200 
Epoch  2, CIFAR-10 Batch 1:  Loss:     1.5721 Validation Accuracy: 0.466800 
Epoch  2, CIFAR-10 Batch 2:  Loss:     1.4500 Validation Accuracy: 0.456800 
Epoch  2, CIFAR-10 Batch 3:  Loss:     1.2926 Validation Accuracy: 0.477600 
Epoch  2, CIFAR-10 Batch 4:  Loss:     1.4116 Validation Accuracy: 0.488200 
Epoch  2, CIFAR-10 Batch 5:  Loss:     1.3208 Validation Accuracy: 0.507000 
Epoch  3, CIFAR-10 Batch 1:  Loss:     1.1493 Validation Accuracy: 0.525800 
Epoch  3, CIFAR-10 Batch 2:  Loss:     1.0946 Validation Accuracy: 0.535600 
Epoch  3, CIFAR-10 Batch 3:  Loss:     1.0319 Validation Accuracy: 0.536800 
Epoch  3, CIFAR-10 Batch 4:  Loss:     1.1508 Validation Accuracy: 0.574600 
Epoch  3, CIFAR-10 Batch 5:  Loss:     1.0709 Validation Accuracy: 0.568800 
Epoch  4, CIFAR-10 Batch 1:  Loss:     0.9681 Validation Accuracy: 0.593200 
Epoch  4, CIFAR-10 Batch 2:  Loss:     0.8767 Validation Accuracy: 0.588800 
Epoch  4, CIFAR-10 Batch 3:  Loss:     0.8066 Validation Accuracy: 0.605400 
Epoch  4, CIFAR-10 Batch 4:  Loss:     0.8211 Validation Accuracy: 0.581800 
Epoch  4, CIFAR-10 Batch 5:  Loss:     0.8898 Validation Accuracy: 0.601800 
Epoch  5, CIFAR-10 Batch 1:  Loss:     0.8924 Validation Accuracy: 0.605600 
Epoch  5, CIFAR-10 Batch 2:  Loss:     0.8660 Validation Accuracy: 0.606000 
Epoch  5, CIFAR-10 Batch 3:  Loss:     0.7392 Validation Accuracy: 0.604400 
Epoch  5, CIFAR-10 Batch 4:  Loss:     0.7357 Validation Accuracy: 0.621200 
Epoch  5, CIFAR-10 Batch 5:  Loss:     0.6389 Validation Accuracy: 0.624200 
Epoch  6, CIFAR-10 Batch 1:  Loss:     0.7402 Validation Accuracy: 0.647200 
Epoch  6, CIFAR-10 Batch 2:  Loss:     0.6165 Validation Accuracy: 0.639800 
Epoch  6, CIFAR-10 Batch 3:  Loss:     0.4871 Validation Accuracy: 0.633600 
Epoch  6, CIFAR-10 Batch 4:  Loss:     0.7012 Validation Accuracy: 0.631400 
Epoch  6, CIFAR-10 Batch 5:  Loss:     0.5130 Validation Accuracy: 0.642800 
Epoch  7, CIFAR-10 Batch 1:  Loss:     0.5847 Validation Accuracy: 0.646800 
Epoch  7, CIFAR-10 Batch 2:  Loss:     0.5390 Validation Accuracy: 0.653000 
Epoch  7, CIFAR-10 Batch 3:  Loss:     0.3987 Validation Accuracy: 0.658800 
Epoch  7, CIFAR-10 Batch 4:  Loss:     0.5403 Validation Accuracy: 0.657400 
Epoch  7, CIFAR-10 Batch 5:  Loss:     0.4596 Validation Accuracy: 0.661000 
Epoch  8, CIFAR-10 Batch 1:  Loss:     0.5571 Validation Accuracy: 0.666400 
Epoch  8, CIFAR-10 Batch 2:  Loss:     0.4388 Validation Accuracy: 0.667400 
Epoch  8, CIFAR-10 Batch 3:  Loss:     0.3730 Validation Accuracy: 0.641800 
Epoch  8, CIFAR-10 Batch 4:  Loss:     0.4675 Validation Accuracy: 0.677200 
Epoch  8, CIFAR-10 Batch 5:  Loss:     0.3357 Validation Accuracy: 0.655600 
Epoch  9, CIFAR-10 Batch 1:  Loss:     0.5198 Validation Accuracy: 0.666800 
Epoch  9, CIFAR-10 Batch 2:  Loss:     0.4062 Validation Accuracy: 0.677800 
Epoch  9, CIFAR-10 Batch 3:  Loss:     0.2500 Validation Accuracy: 0.659400 
Epoch  9, CIFAR-10 Batch 4:  Loss:     0.3718 Validation Accuracy: 0.691000 
Epoch  9, CIFAR-10 Batch 5:  Loss:     0.2767 Validation Accuracy: 0.662800 
Epoch 10, CIFAR-10 Batch 1:  Loss:     0.3990 Validation Accuracy: 0.686200 
Epoch 10, CIFAR-10 Batch 2:  Loss:     0.3164 Validation Accuracy: 0.685200 
Epoch 10, CIFAR-10 Batch 3:  Loss:     0.1650 Validation Accuracy: 0.675400 
Epoch 10, CIFAR-10 Batch 4:  Loss:     0.3334 Validation Accuracy: 0.675400 
Epoch 10, CIFAR-10 Batch 5:  Loss:     0.2895 Validation Accuracy: 0.674600 
Epoch 11, CIFAR-10 Batch 1:  Loss:     0.3941 Validation Accuracy: 0.694000 
Epoch 11, CIFAR-10 Batch 2:  Loss:     0.2833 Validation Accuracy: 0.682000 
Epoch 11, CIFAR-10 Batch 3:  Loss:     0.2053 Validation Accuracy: 0.686600 
Epoch 11, CIFAR-10 Batch 4:  Loss:     0.2095 Validation Accuracy: 0.696400 
Epoch 11, CIFAR-10 Batch 5:  Loss:     0.1951 Validation Accuracy: 0.675600 
Epoch 12, CIFAR-10 Batch 1:  Loss:     0.3336 Validation Accuracy: 0.682800 
Epoch 12, CIFAR-10 Batch 2:  Loss:     0.2512 Validation Accuracy: 0.682000 
Epoch 12, CIFAR-10 Batch 3:  Loss:     0.1612 Validation Accuracy: 0.675600 
Epoch 12, CIFAR-10 Batch 4:  Loss:     0.2056 Validation Accuracy: 0.701200 
Epoch 12, CIFAR-10 Batch 5:  Loss:     0.1899 Validation Accuracy: 0.677600 
Epoch 13, CIFAR-10 Batch 1:  Loss:     0.2738 Validation Accuracy: 0.687600 
Epoch 13, CIFAR-10 Batch 2:  Loss:     0.2507 Validation Accuracy: 0.692200 
Epoch 13, CIFAR-10 Batch 3:  Loss:     0.1464 Validation Accuracy: 0.698200 
Epoch 13, CIFAR-10 Batch 4:  Loss:     0.2113 Validation Accuracy: 0.684000 
Epoch 13, CIFAR-10 Batch 5:  Loss:     0.1595 Validation Accuracy: 0.669400 
Epoch 14, CIFAR-10 Batch 1:  Loss:     0.2745 Validation Accuracy: 0.686400 
Epoch 14, CIFAR-10 Batch 2:  Loss:     0.2000 Validation Accuracy: 0.693400 
Epoch 14, CIFAR-10 Batch 3:  Loss:     0.1178 Validation Accuracy: 0.684600 
Epoch 14, CIFAR-10 Batch 4:  Loss:     0.2289 Validation Accuracy: 0.671800 
Epoch 14, CIFAR-10 Batch 5:  Loss:     0.1540 Validation Accuracy: 0.673600 
Epoch 15, CIFAR-10 Batch 1:  Loss:     0.2364 Validation Accuracy: 0.681400 
Epoch 15, CIFAR-10 Batch 2:  Loss:     0.2232 Validation Accuracy: 0.698400 
Epoch 15, CIFAR-10 Batch 3:  Loss:     0.0971 Validation Accuracy: 0.688400 
Epoch 15, CIFAR-10 Batch 4:  Loss:     0.1290 Validation Accuracy: 0.682400 
Epoch 15, CIFAR-10 Batch 5:  Loss:     0.2095 Validation Accuracy: 0.676200 
Epoch 16, CIFAR-10 Batch 1:  Loss:     0.2149 Validation Accuracy: 0.703200 
Epoch 16, CIFAR-10 Batch 2:  Loss:     0.1633 Validation Accuracy: 0.689600 
Epoch 16, CIFAR-10 Batch 3:  Loss:     0.0801 Validation Accuracy: 0.661000 
Epoch 16, CIFAR-10 Batch 4:  Loss:     0.1774 Validation Accuracy: 0.681800 
Epoch 16, CIFAR-10 Batch 5:  Loss:     0.1594 Validation Accuracy: 0.695000 
Epoch 17, CIFAR-10 Batch 1:  Loss:     0.2202 Validation Accuracy: 0.697600 
Epoch 17, CIFAR-10 Batch 2:  Loss:     0.1199 Validation Accuracy: 0.676400 
Epoch 17, CIFAR-10 Batch 3:  Loss:     0.0858 Validation Accuracy: 0.692400 
Epoch 17, CIFAR-10 Batch 4:  Loss:     0.1913 Validation Accuracy: 0.677400 
Epoch 17, CIFAR-10 Batch 5:  Loss:     0.0875 Validation Accuracy: 0.682000 
Epoch 18, CIFAR-10 Batch 1:  Loss:     0.1250 Validation Accuracy: 0.696800 
Epoch 18, CIFAR-10 Batch 2:  Loss:     0.0664 Validation Accuracy: 0.709600 
Epoch 18, CIFAR-10 Batch 3:  Loss:     0.1054 Validation Accuracy: 0.685600 
Epoch 18, CIFAR-10 Batch 4:  Loss:     0.2043 Validation Accuracy: 0.691000 
Epoch 18, CIFAR-10 Batch 5:  Loss:     0.1140 Validation Accuracy: 0.695400 
Epoch 19, CIFAR-10 Batch 1:  Loss:     0.1643 Validation Accuracy: 0.705600 
Epoch 19, CIFAR-10 Batch 2:  Loss:     0.0685 Validation Accuracy: 0.687800 
Epoch 19, CIFAR-10 Batch 3:  Loss:     0.0922 Validation Accuracy: 0.683800 
Epoch 19, CIFAR-10 Batch 4:  Loss:     0.0942 Validation Accuracy: 0.692200 
Epoch 19, CIFAR-10 Batch 5:  Loss:     0.0750 Validation Accuracy: 0.712000 
Epoch 20, CIFAR-10 Batch 1:  Loss:     0.1438 Validation Accuracy: 0.694000 
Epoch 20, CIFAR-10 Batch 2:  Loss:     0.0849 Validation Accuracy: 0.702400 
Epoch 20, CIFAR-10 Batch 3:  Loss:     0.0464 Validation Accuracy: 0.684800 
Epoch 20, CIFAR-10 Batch 4:  Loss:     0.1544 Validation Accuracy: 0.705200 
Epoch 20, CIFAR-10 Batch 5:  Loss:     0.0699 Validation Accuracy: 0.713400 
Epoch 21, CIFAR-10 Batch 1:  Loss:     0.1054 Validation Accuracy: 0.705400 
Epoch 21, CIFAR-10 Batch 2:  Loss:     0.0619 Validation Accuracy: 0.713400 
Epoch 21, CIFAR-10 Batch 3:  Loss:     0.0699 Validation Accuracy: 0.697600 
Epoch 21, CIFAR-10 Batch 4:  Loss:     0.1152 Validation Accuracy: 0.691800 
Epoch 21, CIFAR-10 Batch 5:  Loss:     0.0720 Validation Accuracy: 0.702800 
Epoch 22, CIFAR-10 Batch 1:  Loss:     0.0748 Validation Accuracy: 0.694400 
Epoch 22, CIFAR-10 Batch 2:  Loss:     0.0892 Validation Accuracy: 0.715600 
Epoch 22, CIFAR-10 Batch 3:  Loss:     0.0465 Validation Accuracy: 0.695200 
Epoch 22, CIFAR-10 Batch 4:  Loss:     0.0411 Validation Accuracy: 0.692000 
Epoch 22, CIFAR-10 Batch 5:  Loss:     0.0744 Validation Accuracy: 0.684800 
Epoch 23, CIFAR-10 Batch 1:  Loss:     0.0527 Validation Accuracy: 0.698600 
Epoch 23, CIFAR-10 Batch 2:  Loss:     0.1050 Validation Accuracy: 0.679400 
Epoch 23, CIFAR-10 Batch 3:  Loss:     0.0417 Validation Accuracy: 0.690800 
Epoch 23, CIFAR-10 Batch 4:  Loss:     0.0841 Validation Accuracy: 0.680000 
Epoch 23, CIFAR-10 Batch 5:  Loss:     0.1249 Validation Accuracy: 0.697400 
Epoch 24, CIFAR-10 Batch 1:  Loss:     0.1273 Validation Accuracy: 0.696200 
Epoch 24, CIFAR-10 Batch 2:  Loss:     0.0571 Validation Accuracy: 0.685600 
Epoch 24, CIFAR-10 Batch 3:  Loss:     0.0426 Validation Accuracy: 0.689200 
Epoch 24, CIFAR-10 Batch 4:  Loss:     0.0575 Validation Accuracy: 0.689600 
Epoch 24, CIFAR-10 Batch 5:  Loss:     0.0553 Validation Accuracy: 0.704800 
Epoch 25, CIFAR-10 Batch 1:  Loss:     0.0873 Validation Accuracy: 0.691800 
Epoch 25, CIFAR-10 Batch 2:  Loss:     0.0418 Validation Accuracy: 0.704800 
Epoch 25, CIFAR-10 Batch 3:  Loss:     0.0385 Validation Accuracy: 0.687200 
Epoch 25, CIFAR-10 Batch 4:  Loss:     0.0208 Validation Accuracy: 0.693400 
Epoch 25, CIFAR-10 Batch 5:  Loss:     0.0265 Validation Accuracy: 0.698400 
Epoch 26, CIFAR-10 Batch 1:  Loss:     0.0993 Validation Accuracy: 0.700200 
Epoch 26, CIFAR-10 Batch 2:  Loss:     0.0420 Validation Accuracy: 0.684000 
Epoch 26, CIFAR-10 Batch 3:  Loss:     0.0400 Validation Accuracy: 0.703400 
Epoch 26, CIFAR-10 Batch 4:  Loss:     0.0327 Validation Accuracy: 0.695000 
Epoch 26, CIFAR-10 Batch 5:  Loss:     0.0966 Validation Accuracy: 0.678600 
Epoch 27, CIFAR-10 Batch 1:  Loss:     0.0819 Validation Accuracy: 0.691400 
Epoch 27, CIFAR-10 Batch 2:  Loss:     0.1080 Validation Accuracy: 0.698600 
Epoch 27, CIFAR-10 Batch 3:  Loss:     0.0265 Validation Accuracy: 0.704800 
Epoch 27, CIFAR-10 Batch 4:  Loss:     0.0792 Validation Accuracy: 0.673400 
Epoch 27, CIFAR-10 Batch 5:  Loss:     0.0412 Validation Accuracy: 0.709200 
Epoch 28, CIFAR-10 Batch 1:  Loss:     0.1001 Validation Accuracy: 0.693200 
Epoch 28, CIFAR-10 Batch 2:  Loss:     0.0223 Validation Accuracy: 0.690600 
Epoch 28, CIFAR-10 Batch 3:  Loss:     0.0087 Validation Accuracy: 0.684400 
Epoch 28, CIFAR-10 Batch 4:  Loss:     0.0443 Validation Accuracy: 0.694400 
Epoch 28, CIFAR-10 Batch 5:  Loss:     0.0151 Validation Accuracy: 0.712600 
Epoch 29, CIFAR-10 Batch 1:  Loss:     0.0741 Validation Accuracy: 0.710400 
Epoch 29, CIFAR-10 Batch 2:  Loss:     0.0296 Validation Accuracy: 0.700000 
Epoch 29, CIFAR-10 Batch 3:  Loss:     0.0261 Validation Accuracy: 0.699000 
Epoch 29, CIFAR-10 Batch 4:  Loss:     0.0574 Validation Accuracy: 0.689600 
Epoch 29, CIFAR-10 Batch 5:  Loss:     0.1178 Validation Accuracy: 0.687200 
Epoch 30, CIFAR-10 Batch 1:  Loss:     0.0772 Validation Accuracy: 0.701200 
Epoch 30, CIFAR-10 Batch 2:  Loss:     0.0608 Validation Accuracy: 0.695200 
Epoch 30, CIFAR-10 Batch 3:  Loss:     0.0094 Validation Accuracy: 0.686600 
Epoch 30, CIFAR-10 Batch 4:  Loss:     0.0850 Validation Accuracy: 0.688200 
Epoch 30, CIFAR-10 Batch 5:  Loss:     0.0468 Validation Accuracy: 0.690800 
Epoch 31, CIFAR-10 Batch 1:  Loss:     0.1160 Validation Accuracy: 0.682400 
Epoch 31, CIFAR-10 Batch 2:  Loss:     0.0255 Validation Accuracy: 0.692800 
Epoch 31, CIFAR-10 Batch 3:  Loss:     0.0080 Validation Accuracy: 0.701800 
Epoch 31, CIFAR-10 Batch 4:  Loss:     0.0419 Validation Accuracy: 0.712200 
Epoch 31, CIFAR-10 Batch 5:  Loss:     0.0206 Validation Accuracy: 0.692800 
Epoch 32, CIFAR-10 Batch 1:  Loss:     0.1030 Validation Accuracy: 0.693400 
Epoch 32, CIFAR-10 Batch 2:  Loss:     0.0438 Validation Accuracy: 0.708600 
Epoch 32, CIFAR-10 Batch 3:  Loss:     0.0293 Validation Accuracy: 0.697200 
Epoch 32, CIFAR-10 Batch 4:  Loss:     0.0329 Validation Accuracy: 0.708000 
Epoch 32, CIFAR-10 Batch 5:  Loss:     0.0163 Validation Accuracy: 0.708400 
Epoch 33, CIFAR-10 Batch 1:  Loss:     0.0571 Validation Accuracy: 0.696600 
Epoch 33, CIFAR-10 Batch 2:  Loss:     0.0926 Validation Accuracy: 0.694200 
Epoch 33, CIFAR-10 Batch 3:  Loss:     0.0111 Validation Accuracy: 0.677600 
Epoch 33, CIFAR-10 Batch 4:  Loss:     0.0359 Validation Accuracy: 0.705400 
Epoch 33, CIFAR-10 Batch 5:  Loss:     0.0209 Validation Accuracy: 0.702400 
Epoch 34, CIFAR-10 Batch 1:  Loss:     0.0728 Validation Accuracy: 0.683600 
Epoch 34, CIFAR-10 Batch 2:  Loss:     0.0357 Validation Accuracy: 0.710800 
Epoch 34, CIFAR-10 Batch 3:  Loss:     0.0236 Validation Accuracy: 0.707600 
Epoch 34, CIFAR-10 Batch 4:  Loss:     0.0341 Validation Accuracy: 0.683400 
Epoch 34, CIFAR-10 Batch 5:  Loss:     0.0804 Validation Accuracy: 0.699400 
Epoch 35, CIFAR-10 Batch 1:  Loss:     0.0618 Validation Accuracy: 0.705000 
Epoch 35, CIFAR-10 Batch 2:  Loss:     0.0196 Validation Accuracy: 0.706000 
Epoch 35, CIFAR-10 Batch 3:  Loss:     0.0088 Validation Accuracy: 0.706600 
Epoch 35, CIFAR-10 Batch 4:  Loss:     0.0148 Validation Accuracy: 0.699200 
Epoch 35, CIFAR-10 Batch 5:  Loss:     0.0351 Validation Accuracy: 0.704000 
Epoch 36, CIFAR-10 Batch 1:  Loss:     0.0282 Validation Accuracy: 0.709400 
Epoch 36, CIFAR-10 Batch 2:  Loss:     0.0162 Validation Accuracy: 0.697800 
Epoch 36, CIFAR-10 Batch 3:  Loss:     0.0051 Validation Accuracy: 0.708400 
Epoch 36, CIFAR-10 Batch 4:  Loss:     0.0351 Validation Accuracy: 0.707000 
Epoch 36, CIFAR-10 Batch 5:  Loss:     0.0142 Validation Accuracy: 0.709000 
Epoch 37, CIFAR-10 Batch 1:  Loss:     0.0521 Validation Accuracy: 0.704000 
Epoch 37, CIFAR-10 Batch 2:  Loss:     0.0440 Validation Accuracy: 0.710000 
Epoch 37, CIFAR-10 Batch 3:  Loss:     0.0043 Validation Accuracy: 0.711200 
Epoch 37, CIFAR-10 Batch 4:  Loss:     0.0873 Validation Accuracy: 0.718800 
Epoch 37, CIFAR-10 Batch 5:  Loss:     0.0161 Validation Accuracy: 0.704000 
Epoch 38, CIFAR-10 Batch 1:  Loss:     0.0418 Validation Accuracy: 0.709200 
Epoch 38, CIFAR-10 Batch 2:  Loss:     0.0107 Validation Accuracy: 0.696200 
Epoch 38, CIFAR-10 Batch 3:  Loss:     0.0015 Validation Accuracy: 0.698800 
Epoch 38, CIFAR-10 Batch 4:  Loss:     0.0415 Validation Accuracy: 0.705600 
Epoch 38, CIFAR-10 Batch 5:  Loss:     0.0319 Validation Accuracy: 0.698600 
Epoch 39, CIFAR-10 Batch 1:  Loss:     0.1003 Validation Accuracy: 0.700200 
Epoch 39, CIFAR-10 Batch 2:  Loss:     0.0050 Validation Accuracy: 0.699400 
Epoch 39, CIFAR-10 Batch 3:  Loss:     0.0348 Validation Accuracy: 0.702600 
Epoch 39, CIFAR-10 Batch 4:  Loss:     0.0183 Validation Accuracy: 0.699600 
Epoch 39, CIFAR-10 Batch 5:  Loss:     0.0072 Validation Accuracy: 0.709600 
Epoch 40, CIFAR-10 Batch 1:  Loss:     0.0383 Validation Accuracy: 0.713200 
Epoch 40, CIFAR-10 Batch 2:  Loss:     0.0030 Validation Accuracy: 0.711000 
Epoch 40, CIFAR-10 Batch 3:  Loss:     0.0056 Validation Accuracy: 0.696800 
Epoch 40, CIFAR-10 Batch 4:  Loss:     0.0394 Validation Accuracy: 0.701200 
Epoch 40, CIFAR-10 Batch 5:  Loss:     0.0477 Validation Accuracy: 0.711600 

Checkpoint

The model has been saved to disk.

Test Model

Test the model against the test dataset. This will be our final accuracy.


In [41]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.700751582278481