Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll see your neural network's predictions on sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()


tests.test_folder_path(cifar10_dataset_folder_path)


CIFAR-10 Dataset: 171MB [00:31, 5.37MB/s]                              
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, and so on. Each batch contains labels and images for the following classes:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [13]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 0
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 0:
Image - Min Value: 0 Max Value: 255
Image - Shape: (32, 32, 3)
Label - Label Id: 6 Name: frog
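
If you want to poke at a raw batch directly (optional), here is a small sketch that assumes the standard CIFAR-10 python pickle layout: a dict holding a 'data' array of flattened pixels and a 'labels' list.

import pickle
import numpy as np

with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as file:
    batch = pickle.load(file, encoding='latin1')

features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
labels = batch['labels']
print(np.unique(labels))                 # all possible labels: 0-9
print(features.min(), features.max())    # pixel value range: 0-255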

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [14]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # TODO: Implement Function
    # Min-max scaling: maps the smallest pixel value to 0 and the largest to 1
    x_norm = (x - np.min(x)) / (np.max(x) - np.min(x))
    return x_norm


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed
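
As a quick, optional sanity check, you can run normalize on a dummy batch of 8-bit pixel values and confirm the output range and shape match the spec:

sample_images = np.random.randint(0, 256, size=(4, 32, 32, 3))
normalized = normalize(sample_images)
print(normalized.min(), normalized.max(), normalized.shape)   # expect 0.0, 1.0, (4, 32, 32, 3)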

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.
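
One ready-made "wheel" is scikit-learn's LabelBinarizer; a minimal sketch, assuming scikit-learn is available (the cell below uses plain Numpy broadcasting instead, which works just as well):

from sklearn import preprocessing

lb = preprocessing.LabelBinarizer()
lb.fit(range(10))                  # fix the encoding for labels 0-9 once, outside the function
print(lb.transform([6, 9, 1]))     # a (3, 10) array with a single 1 per row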


In [23]:
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    # Compare each label against the values 0-9; broadcasting yields a (len(x), 10) one-hot matrix
    x_reshape = np.asarray(x)
    labels = (np.arange(10) == x_reshape[:, None]).astype(np.float32)
    return labels


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed

Randomize Data

As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
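
If you did want to reshuffle, the key point is to apply the same permutation to features and labels; a minimal sketch, where features and labels are hypothetical Numpy arrays of equal length:

permutation = np.random.permutation(len(features))
features, labels = features[permutation], labels[permutation]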

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [24]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [25]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
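
To make the contrast concrete, here is a minimal sketch of the same 3x3, 32-filter convolution built both ways, assuming x is a float32 image tensor of shape (None, 32, 32, 3):

# Shortcut: TF Layers creates and initializes the weights and bias for you
conv_shortcut = tf.layers.conv2d(x, filters=32, kernel_size=(3, 3), padding='same', activation=tf.nn.relu)

# Lower-level: with tf.nn you manage the variables yourself
weights = tf.Variable(tf.truncated_normal([3, 3, 3, 32], stddev=0.1))
bias = tf.Variable(tf.zeros([32]))
conv_manual = tf.nn.relu(tf.nn.bias_add(
    tf.nn.conv2d(x, weights, strides=[1, 1, 1, 1], padding='SAME'), bias))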

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.


In [28]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    height = image_shape[0]
    width = image_shape[1]
    depth = image_shape[2]
    return tf.placeholder(tf.float32, shape=(None, height, width, depth), name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, shape=(None, n_classes), name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
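
With 'SAME' padding, each spatial output dimension is ceil(input_size / stride). As a quick hand-check (not part of the graded function), here is how a 32x32 input shrinks through three 2x2 max-pool layers with stride 2, the configuration used later in conv_net:

import math

size = 32
for pool_stride in (2, 2, 2):
    size = math.ceil(size / pool_stride)
    print(size)   # 16, then 8, then 4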


In [36]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    # Filter weights: (kernel height, kernel width, input depth, output depth)
    weight = tf.Variable(tf.truncated_normal([conv_ksize[0],
                                              conv_ksize[1],
                                              x_tensor.get_shape().as_list()[-1],
                                              conv_num_outputs],
                                             stddev=0.1))
    bias = tf.Variable(tf.zeros(conv_num_outputs, dtype=tf.float32))
    # Convolution -> add bias -> ReLU -> max pooling, all with 'SAME' padding
    conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1], 1],
                              padding='SAME')
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    conv_layer = tf.nn.relu(conv_layer)
    conv_layer = tf.nn.max_pool(conv_layer,
                                ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                                strides=[1, pool_strides[0], pool_strides[1], 1],
                                padding='SAME')
    return conv_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
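
If you take the shortcut option here, TF Layers (contrib) offers a one-liner; a sketch, assuming x_tensor is the 4-D output of the convolution layers:

flat = tf.contrib.layers.flatten(x_tensor)   # shape becomes (Batch Size, Flattened Image Size)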


In [45]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    # Multiply the non-batch dimensions together and reshape, keeping the batch dimension dynamic
    flattened_size = x_tensor.shape[1] * x_tensor.shape[2] * x_tensor.shape[3]
    return tf.reshape(x_tensor, [-1, flattened_size.value])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
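
With the shortcut option, tf.layers.dense builds the weights, bias, and activation in a single call; a sketch of the equivalent layer:

fc = tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu)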


In [52]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    num_features = x_tensor.shape[1].value
    weights = tf.Variable(tf.truncated_normal([num_features, num_outputs], stddev=0.1))
    biases = tf.Variable(tf.zeros([num_outputs]))
    # Linear transform followed by a ReLU activation
    fc = tf.add(tf.matmul(x_tensor, weights), biases)
    fc = tf.nn.relu(fc)
    return fc


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.
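
If you use the shortcut here as well, the only difference from the fully connected layer is that no activation is applied, since the softmax is handled later by the loss function; a sketch:

logits = tf.layers.dense(x_tensor, num_outputs, activation=None)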


In [53]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    num_features = x_tensor.shape[1].value
    weights = tf.Variable(tf.truncated_normal([num_features, num_outputs], stddev=0.1))
    biases = tf.Variable(tf.zeros([num_outputs]))
    # Linear layer only: no activation, since softmax is applied by the loss function
    output_layer = tf.add(tf.matmul(x_tensor, weights), biases)
    return output_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [56]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    num_conv_layers = 3
    conv_num_outputs = [32, 64, 128]
    conv_ksizes = [[3, 3], [3, 3], [3, 3]]
    conv_strides = [[1, 1], [1, 1], [1, 1]]
    pool_ksizes = [[2, 2], [2, 2], [2, 2]]
    pool_strides = [[2, 2], [2, 2], [2, 2]]
    
    conv_layer = x
    for i in range(num_conv_layers):
        conv_layer = conv2d_maxpool(conv_layer, 
                                   conv_num_outputs[i],
                                   conv_ksizes[i],
                                   conv_strides[i],
                                   pool_ksizes[i],
                                   pool_strides[i])

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flatten_layer = flatten(conv_layer)
    

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    num_fully_conn_layers = 3
    fully_num_outputs = 10
    
    fully_conn_layer = flatten_layer
    for i in range(num_fully_conn_layers):
        fully_conn_layer = fully_conn(fully_conn_layer, fully_num_outputs)
        fully_conn_layer = tf.nn.dropout(fully_conn_layer, keep_prob)
    
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    num_classes = 10
    output_layer = output(fully_conn_layer, num_classes)
    
    # TODO: return output
    return output_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization step. Run optimizer in session with a feed_dict containing the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [57]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [62]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})

    print(' Loss : {} '.format(loss))
    print(' Validation Accuracy : {} '.format(valid_accuracy))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting (a simple early-stopping sketch follows this list)
  • Set batch_size to the largest size that your machine has memory for. Most people use common power-of-two sizes:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout
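
The early-stopping idea behind the epochs guideline can be sketched as follows, where valid_acc_history is a hypothetical list of per-epoch validation accuracies you would record during training and patience is an arbitrary threshold:

patience = 3                      # stop after 3 epochs with no improvement (hypothetical value)
best_acc, stale_epochs = 0.0, 0
for epoch, acc in enumerate(valid_acc_history, start=1):
    if acc > best_acc:
        best_acc, stale_epochs = acc, 0
    else:
        stale_epochs += 1
    if stale_epochs >= patience:
        print('Stop training around epoch {}'.format(epoch))
        break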

In [64]:
# TODO: Tune Parameters
epochs = 30
batch_size = 128
keep_probability = 0.8

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [65]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:   Loss : 2.1613826751708984 
 Validation Accuracy : 0.24959999322891235 
Epoch  2, CIFAR-10 Batch 1:   Loss : 2.115671396255493 
 Validation Accuracy : 0.31380000710487366 
Epoch  3, CIFAR-10 Batch 1:   Loss : 1.9672447443008423 
 Validation Accuracy : 0.36000001430511475 
Epoch  4, CIFAR-10 Batch 1:   Loss : 1.9215888977050781 
 Validation Accuracy : 0.38359999656677246 
Epoch  5, CIFAR-10 Batch 1:   Loss : 1.8318796157836914 
 Validation Accuracy : 0.38679999113082886 
Epoch  6, CIFAR-10 Batch 1:   Loss : 1.7326921224594116 
 Validation Accuracy : 0.4065999984741211 
Epoch  7, CIFAR-10 Batch 1:   Loss : 1.614936113357544 
 Validation Accuracy : 0.4142000079154968 
Epoch  8, CIFAR-10 Batch 1:   Loss : 1.4697009325027466 
 Validation Accuracy : 0.4392000138759613 
Epoch  9, CIFAR-10 Batch 1:   Loss : 1.4021152257919312 
 Validation Accuracy : 0.45100000500679016 
Epoch 10, CIFAR-10 Batch 1:   Loss : 1.2492268085479736 
 Validation Accuracy : 0.4758000075817108 
Epoch 11, CIFAR-10 Batch 1:   Loss : 1.1953659057617188 
 Validation Accuracy : 0.4796000123023987 
Epoch 12, CIFAR-10 Batch 1:   Loss : 1.0308644771575928 
 Validation Accuracy : 0.5135999917984009 
Epoch 13, CIFAR-10 Batch 1:   Loss : 0.9800134897232056 
 Validation Accuracy : 0.5210000276565552 
Epoch 14, CIFAR-10 Batch 1:   Loss : 0.843894362449646 
 Validation Accuracy : 0.5293999910354614 
Epoch 15, CIFAR-10 Batch 1:   Loss : 0.7844985127449036 
 Validation Accuracy : 0.5307999849319458 
Epoch 16, CIFAR-10 Batch 1:   Loss : 0.7415503859519958 
 Validation Accuracy : 0.5288000106811523 
Epoch 17, CIFAR-10 Batch 1:   Loss : 0.7093139886856079 
 Validation Accuracy : 0.5266000032424927 
Epoch 18, CIFAR-10 Batch 1:   Loss : 0.5950130224227905 
 Validation Accuracy : 0.5454000234603882 
Epoch 19, CIFAR-10 Batch 1:   Loss : 0.5758943557739258 
 Validation Accuracy : 0.5418000221252441 
Epoch 20, CIFAR-10 Batch 1:   Loss : 0.5401255488395691 
 Validation Accuracy : 0.5468000173568726 
Epoch 21, CIFAR-10 Batch 1:   Loss : 0.4685498774051666 
 Validation Accuracy : 0.5564000010490417 
Epoch 22, CIFAR-10 Batch 1:   Loss : 0.5044137835502625 
 Validation Accuracy : 0.521399974822998 
Epoch 23, CIFAR-10 Batch 1:   Loss : 0.5090219974517822 
 Validation Accuracy : 0.5347999930381775 
Epoch 24, CIFAR-10 Batch 1:   Loss : 0.4357181489467621 
 Validation Accuracy : 0.5461999773979187 
Epoch 25, CIFAR-10 Batch 1:   Loss : 0.43442878127098083 
 Validation Accuracy : 0.5437999963760376 
Epoch 26, CIFAR-10 Batch 1:   Loss : 0.33013564348220825 
 Validation Accuracy : 0.545199990272522 
Epoch 27, CIFAR-10 Batch 1:   Loss : 0.3461911082267761 
 Validation Accuracy : 0.5389999747276306 
Epoch 28, CIFAR-10 Batch 1:   Loss : 0.316291481256485 
 Validation Accuracy : 0.5450000166893005 
Epoch 29, CIFAR-10 Batch 1:   Loss : 0.30129826068878174 
 Validation Accuracy : 0.5365999937057495 
Epoch 30, CIFAR-10 Batch 1:   Loss : 0.3202803134918213 
 Validation Accuracy : 0.5493999719619751 

Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [66]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:   Loss : 2.1596007347106934 
 Validation Accuracy : 0.21060000360012054 
Epoch  1, CIFAR-10 Batch 2:   Loss : 1.9048904180526733 
 Validation Accuracy : 0.3192000091075897 
Epoch  1, CIFAR-10 Batch 3:   Loss : 1.626186728477478 
 Validation Accuracy : 0.3668000102043152 
Epoch  1, CIFAR-10 Batch 4:   Loss : 1.6408315896987915 
 Validation Accuracy : 0.39719998836517334 
Epoch  1, CIFAR-10 Batch 5:   Loss : 1.543958306312561 
 Validation Accuracy : 0.4090000092983246 
Epoch  2, CIFAR-10 Batch 1:   Loss : 1.772907018661499 
 Validation Accuracy : 0.4246000051498413 
Epoch  2, CIFAR-10 Batch 2:   Loss : 1.6059131622314453 
 Validation Accuracy : 0.4537999927997589 
Epoch  2, CIFAR-10 Batch 3:   Loss : 1.3098396062850952 
 Validation Accuracy : 0.43720000982284546 
Epoch  2, CIFAR-10 Batch 4:   Loss : 1.4385404586791992 
 Validation Accuracy : 0.4747999906539917 
Epoch  2, CIFAR-10 Batch 5:   Loss : 1.37602698802948 
 Validation Accuracy : 0.4796000123023987 
Epoch  3, CIFAR-10 Batch 1:   Loss : 1.558704137802124 
 Validation Accuracy : 0.5037999749183655 
Epoch  3, CIFAR-10 Batch 2:   Loss : 1.3471788167953491 
 Validation Accuracy : 0.5166000127792358 
Epoch  3, CIFAR-10 Batch 3:   Loss : 1.0880448818206787 
 Validation Accuracy : 0.52920001745224 
Epoch  3, CIFAR-10 Batch 4:   Loss : 1.281217098236084 
 Validation Accuracy : 0.5206000208854675 
Epoch  3, CIFAR-10 Batch 5:   Loss : 1.223043441772461 
 Validation Accuracy : 0.527400016784668 
Epoch  4, CIFAR-10 Batch 1:   Loss : 1.3440632820129395 
 Validation Accuracy : 0.5424000024795532 
Epoch  4, CIFAR-10 Batch 2:   Loss : 1.1268705129623413 
 Validation Accuracy : 0.5228000283241272 
Epoch  4, CIFAR-10 Batch 3:   Loss : 0.9965965151786804 
 Validation Accuracy : 0.5523999929428101 
Epoch  4, CIFAR-10 Batch 4:   Loss : 1.2024712562561035 
 Validation Accuracy : 0.5705999732017517 
Epoch  4, CIFAR-10 Batch 5:   Loss : 1.068787693977356 
 Validation Accuracy : 0.5735999941825867 
Epoch  5, CIFAR-10 Batch 1:   Loss : 1.1985620260238647 
 Validation Accuracy : 0.5785999894142151 
Epoch  5, CIFAR-10 Batch 2:   Loss : 1.001720905303955 
 Validation Accuracy : 0.5803999900817871 
Epoch  5, CIFAR-10 Batch 3:   Loss : 0.8677833676338196 
 Validation Accuracy : 0.5848000049591064 
Epoch  5, CIFAR-10 Batch 4:   Loss : 1.1002506017684937 
 Validation Accuracy : 0.5794000029563904 
Epoch  5, CIFAR-10 Batch 5:   Loss : 0.9655154943466187 
 Validation Accuracy : 0.59579998254776 
Epoch  6, CIFAR-10 Batch 1:   Loss : 1.0674978494644165 
 Validation Accuracy : 0.6057999730110168 
Epoch  6, CIFAR-10 Batch 2:   Loss : 0.9239028692245483 
 Validation Accuracy : 0.5914000272750854 
Epoch  6, CIFAR-10 Batch 3:   Loss : 0.8106040954589844 
 Validation Accuracy : 0.600600004196167 
Epoch  6, CIFAR-10 Batch 4:   Loss : 0.9992243051528931 
 Validation Accuracy : 0.6018000245094299 
Epoch  6, CIFAR-10 Batch 5:   Loss : 0.9001911878585815 
 Validation Accuracy : 0.616599977016449 
Epoch  7, CIFAR-10 Batch 1:   Loss : 0.9767659306526184 
 Validation Accuracy : 0.6136000156402588 
Epoch  7, CIFAR-10 Batch 2:   Loss : 0.8413203954696655 
 Validation Accuracy : 0.6164000034332275 
Epoch  7, CIFAR-10 Batch 3:   Loss : 0.7581170797348022 
 Validation Accuracy : 0.6133999824523926 
Epoch  7, CIFAR-10 Batch 4:   Loss : 0.9905490875244141 
 Validation Accuracy : 0.6129999756813049 
Epoch  7, CIFAR-10 Batch 5:   Loss : 0.7973763942718506 
 Validation Accuracy : 0.6340000033378601 
Epoch  8, CIFAR-10 Batch 1:   Loss : 0.8731502294540405 
 Validation Accuracy : 0.6248000264167786 
Epoch  8, CIFAR-10 Batch 2:   Loss : 0.6989936828613281 
 Validation Accuracy : 0.6204000115394592 
Epoch  8, CIFAR-10 Batch 3:   Loss : 0.6695940494537354 
 Validation Accuracy : 0.6330000162124634 
Epoch  8, CIFAR-10 Batch 4:   Loss : 0.8816022872924805 
 Validation Accuracy : 0.6345999836921692 
Epoch  8, CIFAR-10 Batch 5:   Loss : 0.7133124470710754 
 Validation Accuracy : 0.6403999924659729 
Epoch  9, CIFAR-10 Batch 1:   Loss : 0.7459217309951782 
 Validation Accuracy : 0.6376000046730042 
Epoch  9, CIFAR-10 Batch 2:   Loss : 0.6908662915229797 
 Validation Accuracy : 0.6420000195503235 
Epoch  9, CIFAR-10 Batch 3:   Loss : 0.6233009099960327 
 Validation Accuracy : 0.6371999979019165 
Epoch  9, CIFAR-10 Batch 4:   Loss : 0.7461656332015991 
 Validation Accuracy : 0.6398000121116638 
Epoch  9, CIFAR-10 Batch 5:   Loss : 0.6572418212890625 
 Validation Accuracy : 0.649399995803833 
Epoch 10, CIFAR-10 Batch 1:   Loss : 0.6360254287719727 
 Validation Accuracy : 0.6485999822616577 
Epoch 10, CIFAR-10 Batch 2:   Loss : 0.6468040943145752 
 Validation Accuracy : 0.6287999749183655 
Epoch 10, CIFAR-10 Batch 3:   Loss : 0.5931113958358765 
 Validation Accuracy : 0.6412000060081482 
Epoch 10, CIFAR-10 Batch 4:   Loss : 0.6866615414619446 
 Validation Accuracy : 0.6381999850273132 
Epoch 10, CIFAR-10 Batch 5:   Loss : 0.5952857732772827 
 Validation Accuracy : 0.6377999782562256 
Epoch 11, CIFAR-10 Batch 1:   Loss : 0.597509503364563 
 Validation Accuracy : 0.6413999795913696 
Epoch 11, CIFAR-10 Batch 2:   Loss : 0.5764943361282349 
 Validation Accuracy : 0.6370000243186951 
Epoch 11, CIFAR-10 Batch 3:   Loss : 0.5440574884414673 
 Validation Accuracy : 0.6466000080108643 
Epoch 11, CIFAR-10 Batch 4:   Loss : 0.5810431241989136 
 Validation Accuracy : 0.6389999985694885 
Epoch 11, CIFAR-10 Batch 5:   Loss : 0.5188345909118652 
 Validation Accuracy : 0.6601999998092651 
Epoch 12, CIFAR-10 Batch 1:   Loss : 0.541750431060791 
 Validation Accuracy : 0.6570000052452087 
Epoch 12, CIFAR-10 Batch 2:   Loss : 0.5597530603408813 
 Validation Accuracy : 0.6377999782562256 
Epoch 12, CIFAR-10 Batch 3:   Loss : 0.47549358010292053 
 Validation Accuracy : 0.6534000039100647 
Epoch 12, CIFAR-10 Batch 4:   Loss : 0.5287909507751465 
 Validation Accuracy : 0.6565999984741211 
Epoch 12, CIFAR-10 Batch 5:   Loss : 0.4729062616825104 
 Validation Accuracy : 0.6636000275611877 
Epoch 13, CIFAR-10 Batch 1:   Loss : 0.46821698546409607 
 Validation Accuracy : 0.6607999801635742 
Epoch 13, CIFAR-10 Batch 2:   Loss : 0.5010747909545898 
 Validation Accuracy : 0.6492000222206116 
Epoch 13, CIFAR-10 Batch 3:   Loss : 0.4177054762840271 
 Validation Accuracy : 0.6711999773979187 
Epoch 13, CIFAR-10 Batch 4:   Loss : 0.47009068727493286 
 Validation Accuracy : 0.6575999855995178 
Epoch 13, CIFAR-10 Batch 5:   Loss : 0.43309879302978516 
 Validation Accuracy : 0.6687999963760376 
Epoch 14, CIFAR-10 Batch 1:   Loss : 0.4756123423576355 
 Validation Accuracy : 0.6692000031471252 
Epoch 14, CIFAR-10 Batch 2:   Loss : 0.4595036506652832 
 Validation Accuracy : 0.6597999930381775 
Epoch 14, CIFAR-10 Batch 3:   Loss : 0.3930908143520355 
 Validation Accuracy : 0.6620000004768372 
Epoch 14, CIFAR-10 Batch 4:   Loss : 0.4249402582645416 
 Validation Accuracy : 0.6690000295639038 
Epoch 14, CIFAR-10 Batch 5:   Loss : 0.39018362760543823 
 Validation Accuracy : 0.6636000275611877 
Epoch 15, CIFAR-10 Batch 1:   Loss : 0.4413653016090393 
 Validation Accuracy : 0.6705999970436096 
Epoch 15, CIFAR-10 Batch 2:   Loss : 0.4245365262031555 
 Validation Accuracy : 0.6601999998092651 
Epoch 15, CIFAR-10 Batch 3:   Loss : 0.33231520652770996 
 Validation Accuracy : 0.6697999835014343 
Epoch 15, CIFAR-10 Batch 4:   Loss : 0.49282678961753845 
 Validation Accuracy : 0.6585999727249146 
Epoch 15, CIFAR-10 Batch 5:   Loss : 0.34166809916496277 
 Validation Accuracy : 0.671999990940094 
Epoch 16, CIFAR-10 Batch 1:   Loss : 0.35148364305496216 
 Validation Accuracy : 0.66839998960495 
Epoch 16, CIFAR-10 Batch 2:   Loss : 0.39878779649734497 
 Validation Accuracy : 0.6561999917030334 
Epoch 16, CIFAR-10 Batch 3:   Loss : 0.32873886823654175 
 Validation Accuracy : 0.6747999787330627 
Epoch 16, CIFAR-10 Batch 4:   Loss : 0.4497438073158264 
 Validation Accuracy : 0.6394000053405762 
Epoch 16, CIFAR-10 Batch 5:   Loss : 0.38303694128990173 
 Validation Accuracy : 0.6621999740600586 
Epoch 17, CIFAR-10 Batch 1:   Loss : 0.3878578543663025 
 Validation Accuracy : 0.6705999970436096 
Epoch 17, CIFAR-10 Batch 2:   Loss : 0.3970097601413727 
 Validation Accuracy : 0.6561999917030334 
Epoch 17, CIFAR-10 Batch 3:   Loss : 0.35681402683258057 
 Validation Accuracy : 0.6743999719619751 
Epoch 17, CIFAR-10 Batch 4:   Loss : 0.41583195328712463 
 Validation Accuracy : 0.6646000146865845 
Epoch 17, CIFAR-10 Batch 5:   Loss : 0.30611997842788696 
 Validation Accuracy : 0.6539999842643738 
Epoch 18, CIFAR-10 Batch 1:   Loss : 0.3296014070510864 
 Validation Accuracy : 0.6776000261306763 
Epoch 18, CIFAR-10 Batch 2:   Loss : 0.35579735040664673 
 Validation Accuracy : 0.6601999998092651 
Epoch 18, CIFAR-10 Batch 3:   Loss : 0.2997961938381195 
 Validation Accuracy : 0.6621999740600586 
Epoch 18, CIFAR-10 Batch 4:   Loss : 0.3929644823074341 
 Validation Accuracy : 0.6603999733924866 
Epoch 18, CIFAR-10 Batch 5:   Loss : 0.27755206823349 
 Validation Accuracy : 0.6687999963760376 
Epoch 19, CIFAR-10 Batch 1:   Loss : 0.2880447804927826 
 Validation Accuracy : 0.6833999752998352 
Epoch 19, CIFAR-10 Batch 2:   Loss : 0.3443247377872467 
 Validation Accuracy : 0.6687999963760376 
Epoch 19, CIFAR-10 Batch 3:   Loss : 0.2908170521259308 
 Validation Accuracy : 0.6754000186920166 
Epoch 19, CIFAR-10 Batch 4:   Loss : 0.35817426443099976 
 Validation Accuracy : 0.6588000059127808 
Epoch 19, CIFAR-10 Batch 5:   Loss : 0.26855236291885376 
 Validation Accuracy : 0.6664000153541565 
Epoch 20, CIFAR-10 Batch 1:   Loss : 0.3110892176628113 
 Validation Accuracy : 0.6732000112533569 
Epoch 20, CIFAR-10 Batch 2:   Loss : 0.3526187837123871 
 Validation Accuracy : 0.6603999733924866 
Epoch 20, CIFAR-10 Batch 3:   Loss : 0.26393380761146545 
 Validation Accuracy : 0.6633999943733215 
Epoch 20, CIFAR-10 Batch 4:   Loss : 0.30459481477737427 
 Validation Accuracy : 0.6621999740600586 
Epoch 20, CIFAR-10 Batch 5:   Loss : 0.2577568292617798 
 Validation Accuracy : 0.6629999876022339 
Epoch 21, CIFAR-10 Batch 1:   Loss : 0.284647136926651 
 Validation Accuracy : 0.6765999794006348 
Epoch 21, CIFAR-10 Batch 2:   Loss : 0.2998981475830078 
 Validation Accuracy : 0.6651999950408936 
Epoch 21, CIFAR-10 Batch 3:   Loss : 0.2500301003456116 
 Validation Accuracy : 0.6669999957084656 
Epoch 21, CIFAR-10 Batch 4:   Loss : 0.28724855184555054 
 Validation Accuracy : 0.6642000079154968 
Epoch 21, CIFAR-10 Batch 5:   Loss : 0.22482600808143616 
 Validation Accuracy : 0.6773999929428101 
Epoch 22, CIFAR-10 Batch 1:   Loss : 0.2305106371641159 
 Validation Accuracy : 0.6746000051498413 
Epoch 22, CIFAR-10 Batch 2:   Loss : 0.32907456159591675 
 Validation Accuracy : 0.6636000275611877 
Epoch 22, CIFAR-10 Batch 3:   Loss : 0.24582353234291077 
 Validation Accuracy : 0.6678000092506409 
Epoch 22, CIFAR-10 Batch 4:   Loss : 0.25187283754348755 
 Validation Accuracy : 0.6692000031471252 
Epoch 22, CIFAR-10 Batch 5:   Loss : 0.23719747364521027 
 Validation Accuracy : 0.6610000133514404 
Epoch 23, CIFAR-10 Batch 1:   Loss : 0.2559163570404053 
 Validation Accuracy : 0.676800012588501 
Epoch 23, CIFAR-10 Batch 2:   Loss : 0.3019610345363617 
 Validation Accuracy : 0.6496000289916992 
Epoch 23, CIFAR-10 Batch 3:   Loss : 0.23637166619300842 
 Validation Accuracy : 0.6678000092506409 
Epoch 23, CIFAR-10 Batch 4:   Loss : 0.2314295470714569 
 Validation Accuracy : 0.6620000004768372 
Epoch 23, CIFAR-10 Batch 5:   Loss : 0.19850820302963257 
 Validation Accuracy : 0.6601999998092651 
Epoch 24, CIFAR-10 Batch 1:   Loss : 0.26489731669425964 
 Validation Accuracy : 0.6723999977111816 
Epoch 24, CIFAR-10 Batch 2:   Loss : 0.2681286334991455 
 Validation Accuracy : 0.6675999760627747 
Epoch 24, CIFAR-10 Batch 3:   Loss : 0.21341542899608612 
 Validation Accuracy : 0.6736000180244446 
Epoch 24, CIFAR-10 Batch 4:   Loss : 0.22646772861480713 
 Validation Accuracy : 0.6507999897003174 
Epoch 24, CIFAR-10 Batch 5:   Loss : 0.1714164763689041 
 Validation Accuracy : 0.6564000248908997 
Epoch 25, CIFAR-10 Batch 1:   Loss : 0.22570089995861053 
 Validation Accuracy : 0.6710000038146973 
Epoch 25, CIFAR-10 Batch 2:   Loss : 0.2678498327732086 
 Validation Accuracy : 0.6661999821662903 
Epoch 25, CIFAR-10 Batch 3:   Loss : 0.2591668963432312 
 Validation Accuracy : 0.6636000275611877 
Epoch 25, CIFAR-10 Batch 4:   Loss : 0.220794677734375 
 Validation Accuracy : 0.6628000140190125 
Epoch 25, CIFAR-10 Batch 5:   Loss : 0.1690436750650406 
 Validation Accuracy : 0.6669999957084656 
Epoch 26, CIFAR-10 Batch 1:   Loss : 0.18786509335041046 
 Validation Accuracy : 0.6624000072479248 
Epoch 26, CIFAR-10 Batch 2:   Loss : 0.24876649677753448 
 Validation Accuracy : 0.6678000092506409 
Epoch 26, CIFAR-10 Batch 3:   Loss : 0.2338690459728241 
 Validation Accuracy : 0.6620000004768372 
Epoch 26, CIFAR-10 Batch 4:   Loss : 0.2236865758895874 
 Validation Accuracy : 0.6665999889373779 
Epoch 26, CIFAR-10 Batch 5:   Loss : 0.1606130599975586 
 Validation Accuracy : 0.6728000044822693 
Epoch 27, CIFAR-10 Batch 1:   Loss : 0.2276146411895752 
 Validation Accuracy : 0.6665999889373779 
Epoch 27, CIFAR-10 Batch 2:   Loss : 0.24527080357074738 
 Validation Accuracy : 0.6618000268936157 
Epoch 27, CIFAR-10 Batch 3:   Loss : 0.19288447499275208 
 Validation Accuracy : 0.6740000247955322 
Epoch 27, CIFAR-10 Batch 4:   Loss : 0.21032968163490295 
 Validation Accuracy : 0.6606000065803528 
Epoch 27, CIFAR-10 Batch 5:   Loss : 0.16143210232257843 
 Validation Accuracy : 0.6618000268936157 
Epoch 28, CIFAR-10 Batch 1:   Loss : 0.19572892785072327 
 Validation Accuracy : 0.675000011920929 
Epoch 28, CIFAR-10 Batch 2:   Loss : 0.21925139427185059 
 Validation Accuracy : 0.6761999726295471 
Epoch 28, CIFAR-10 Batch 3:   Loss : 0.21953272819519043 
 Validation Accuracy : 0.6693999767303467 
Epoch 28, CIFAR-10 Batch 4:   Loss : 0.2057599574327469 
 Validation Accuracy : 0.6674000024795532 
Epoch 28, CIFAR-10 Batch 5:   Loss : 0.15533462166786194 
 Validation Accuracy : 0.6693999767303467 
Epoch 29, CIFAR-10 Batch 1:   Loss : 0.17842456698417664 
 Validation Accuracy : 0.6679999828338623 
Epoch 29, CIFAR-10 Batch 2:   Loss : 0.23762965202331543 
 Validation Accuracy : 0.6776000261306763 
Epoch 29, CIFAR-10 Batch 3:   Loss : 0.18374721705913544 
 Validation Accuracy : 0.6601999998092651 
Epoch 29, CIFAR-10 Batch 4:   Loss : 0.18850596249103546 
 Validation Accuracy : 0.651199996471405 
Epoch 29, CIFAR-10 Batch 5:   Loss : 0.1813587248325348 
 Validation Accuracy : 0.6642000079154968 
Epoch 30, CIFAR-10 Batch 1:   Loss : 0.19719095528125763 
 Validation Accuracy : 0.6692000031471252 
Epoch 30, CIFAR-10 Batch 2:   Loss : 0.21282950043678284 
 Validation Accuracy : 0.680400013923645 
Epoch 30, CIFAR-10 Batch 3:   Loss : 0.19255897402763367 
 Validation Accuracy : 0.6715999841690063 
Epoch 30, CIFAR-10 Batch 4:   Loss : 0.21183042228221893 
 Validation Accuracy : 0.6665999889373779 
Epoch 30, CIFAR-10 Batch 5:   Loss : 0.15690161287784576 
 Validation Accuracy : 0.6732000112533569 

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [67]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


INFO:tensorflow:Restoring parameters from ./image_classification
Testing Accuracy: 0.6536787974683544

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.