Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for python.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('cifar-10-python.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            'cifar-10-python.tar.gz',
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open('cifar-10-python.tar.gz') as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [ ]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 10
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [ ]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
    return x / 255


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [ ]:
from sklearn.preprocessing import LabelBinarizer

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # The possible values for labels are 0 to 9. 10 in total
    return np.eye(10)[x]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)

Randomize Data

As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allow for a dynamic size.


In [2]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, shape=[None, *image_shape], name = "x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, shape=[None, n_classes], name = "y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [3]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernal size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernal size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # x_tensor.get_shape()
    # >> (?, 32, 32, 5)
    input_depth = x_tensor.shape[-1].value

    weights = tf.Variable(
        tf.truncated_normal(
            shape=[
                conv_ksize[0], # height
                conv_ksize[1], # width
                input_depth, # input_depth
                conv_num_outputs # out_depth
            ], 
            mean=0.0,
            stddev=0.1
        ),
        name='weights'
    )
    bias = tf.Variable(tf.zeros(conv_num_outputs), trainable=True)
    
    # Apply a convolution to x_tensor using weight and conv_strides
    conv_layer = tf.nn.conv2d(
        x_tensor, 
        weights, 
        strides=[1, *conv_strides, 1], 
        padding='SAME'
    )
    
    # Add bias
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    
    # Add a nonlinear activation to the convolution
    conv_layer = tf.nn.relu(conv_layer)
    
    # Apply Max Pooling using pool_ksize and pool_strides
    conv_layer = tf.nn.max_pool(
        conv_layer, 
        ksize=[1, *pool_ksize, 1], 
        strides=[1, *pool_strides, 1], 
        padding='SAME'
    )
    
    return conv_layer 

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [4]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # print(x_tensor.get_shape()[1:4].num_elements())
    return tf.reshape(x_tensor, [-1, x_tensor.get_shape()[1:4].num_elements()])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [5]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    batch_size = x_tensor.shape[1].value
    weights = tf.Variable(
        tf.truncated_normal(
            [batch_size, num_outputs],
            mean = 0,
            stddev=0.1)
    )
    bias = tf.Variable(tf.zeros(num_outputs))

    fully_conn_layer = tf.matmul(x_tensor, weights)
    fully_conn_layer = tf.nn.bias_add(fully_conn_layer, bias)
    fully_conn_layer = tf.nn.relu(fully_conn_layer)

    return fully_conn_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [6]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    batch_size = x_tensor.shape[1].value
    weights = tf.Variable(
        tf.truncated_normal(
            [batch_size, num_outputs],
            mean = 0,
            stddev=0.1)
    )
    bias = tf.Variable(tf.zeros(num_outputs))
    
    output_layer = tf.matmul(x_tensor, weights)
    output_layer = tf.nn.bias_add(output_layer, bias)
    
    return output_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [7]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    conv_num_outputs_layer1 = 32
    conv_num_outputs_layer2 = 64
    conv_num_outputs_layer3 = 128
    fully_conn_num_outputs_layer_1 = 256
    fully_conn_num_outputs_layer_2 = 512
    conv_ksize = (5, 5)
    conv_strides = (1, 1)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    
    common_params = [conv_ksize, conv_strides, pool_ksize, pool_strides]
    
    # Apply 1, 2, or 3 Convolution and Max Pool layers
    conv_layer_1 = conv2d_maxpool(x, conv_num_outputs_layer1, *common_params)
    conv_layer_1 = tf.nn.dropout(conv_layer_1, keep_prob)
    
    conv_layer_2 = conv2d_maxpool(conv_layer_1, conv_num_outputs_layer2, *common_params)
    conv_layer_2 = tf.nn.dropout(conv_layer_2, keep_prob)
    
    conv_layer_3 = conv2d_maxpool(conv_layer_2, conv_num_outputs_layer3, *common_params)
    conv_layer_3 = tf.nn.dropout(conv_layer_3, keep_prob)
    
    # Apply a Flatten Layer
    flatten_layer_1 = flatten(conv_layer_3)

    # Apply 1, 2, or 3 Fully Connected Layers
    fully_conn_layer_1 = fully_conn(flatten_layer_1, fully_conn_num_outputs_layer_1)
    fully_conn_layer_1 = tf.nn.dropout(fully_conn_layer_1, keep_prob)
    fully_conn_layer_2 = fully_conn(flatten_layer_1, fully_conn_num_outputs_layer_2)
    fully_conn_layer_2 = tf.nn.dropout(fully_conn_layer_2, keep_prob)
    
    
    # Apply an Output Layer
    num_outputs = 10 # 10 classes
    output_layer = output(fully_conn_layer_2, num_outputs)
    
    return output_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [8]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [9]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print("Loss: {} Accuracy: {}".format(loss, accuracy))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or start overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [10]:
# TODO: Tune Parameters
epochs = 35
batch_size = 128
keep_probability = 0.75

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [11]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.1932835578918457 Accuracy: 0.22779998183250427
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.011131525039673 Accuracy: 0.31299999356269836
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.957720160484314 Accuracy: 0.3763999938964844
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.8524549007415771 Accuracy: 0.3917999863624573
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.7515805959701538 Accuracy: 0.4285999536514282
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.6449694633483887 Accuracy: 0.4553999900817871
Epoch  7, CIFAR-10 Batch 1:  Loss: 1.6163479089736938 Accuracy: 0.465999960899353
Epoch  8, CIFAR-10 Batch 1:  Loss: 1.5041940212249756 Accuracy: 0.48539993166923523
Epoch  9, CIFAR-10 Batch 1:  Loss: 1.4849427938461304 Accuracy: 0.48159995675086975
Epoch 10, CIFAR-10 Batch 1:  Loss: 1.4601508378982544 Accuracy: 0.4793999195098877
Epoch 11, CIFAR-10 Batch 1:  Loss: 1.334553599357605 Accuracy: 0.5107999444007874
Epoch 12, CIFAR-10 Batch 1:  Loss: 1.3120155334472656 Accuracy: 0.5133999586105347
Epoch 13, CIFAR-10 Batch 1:  Loss: 1.1587424278259277 Accuracy: 0.5285999774932861
Epoch 14, CIFAR-10 Batch 1:  Loss: 1.09547758102417 Accuracy: 0.5415999293327332
Epoch 15, CIFAR-10 Batch 1:  Loss: 1.119186282157898 Accuracy: 0.5281999111175537
Epoch 16, CIFAR-10 Batch 1:  Loss: 1.0606940984725952 Accuracy: 0.5435999035835266
Epoch 17, CIFAR-10 Batch 1:  Loss: 0.9461108446121216 Accuracy: 0.5565999746322632
Epoch 18, CIFAR-10 Batch 1:  Loss: 0.8634511232376099 Accuracy: 0.5571999549865723
Epoch 19, CIFAR-10 Batch 1:  Loss: 0.875931978225708 Accuracy: 0.5637999176979065
Epoch 20, CIFAR-10 Batch 1:  Loss: 0.7731307148933411 Accuracy: 0.5735999941825867
Epoch 21, CIFAR-10 Batch 1:  Loss: 0.7171895503997803 Accuracy: 0.5747999548912048
Epoch 22, CIFAR-10 Batch 1:  Loss: 0.6520981788635254 Accuracy: 0.5945999622344971
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.6261926293373108 Accuracy: 0.5879999399185181
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.5350332260131836 Accuracy: 0.5861998796463013
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.5346778631210327 Accuracy: 0.6061999201774597
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.4086195230484009 Accuracy: 0.5965999364852905
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.4407140612602234 Accuracy: 0.6021998524665833
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.3263455033302307 Accuracy: 0.6123998761177063
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.2961142361164093 Accuracy: 0.6095999479293823
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.2748626470565796 Accuracy: 0.6221998929977417
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.24453707039356232 Accuracy: 0.6201999187469482
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.2586561143398285 Accuracy: 0.6059998869895935
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.2022581845521927 Accuracy: 0.6185999512672424
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.19609764218330383 Accuracy: 0.6221999526023865
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.1641387790441513 Accuracy: 0.6179999113082886

Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [12]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.2478585243225098 Accuracy: 0.22419998049736023
Epoch  1, CIFAR-10 Batch 2:  Loss: 1.8837275505065918 Accuracy: 0.3407999873161316
Epoch  1, CIFAR-10 Batch 3:  Loss: 1.6931178569793701 Accuracy: 0.366599977016449
Epoch  1, CIFAR-10 Batch 4:  Loss: 1.6457328796386719 Accuracy: 0.43439996242523193
Epoch  1, CIFAR-10 Batch 5:  Loss: 1.5777308940887451 Accuracy: 0.45159992575645447
Epoch  2, CIFAR-10 Batch 1:  Loss: 1.6450750827789307 Accuracy: 0.48399996757507324
Epoch  2, CIFAR-10 Batch 2:  Loss: 1.3891228437423706 Accuracy: 0.45259997248649597
Epoch  2, CIFAR-10 Batch 3:  Loss: 1.2397334575653076 Accuracy: 0.49479997158050537
Epoch  2, CIFAR-10 Batch 4:  Loss: 1.4911067485809326 Accuracy: 0.5090000033378601
Epoch  2, CIFAR-10 Batch 5:  Loss: 1.3708136081695557 Accuracy: 0.5109999179840088
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.3920594453811646 Accuracy: 0.5313999652862549
Epoch  3, CIFAR-10 Batch 2:  Loss: 1.1967368125915527 Accuracy: 0.5221999883651733
Epoch  3, CIFAR-10 Batch 3:  Loss: 1.1225855350494385 Accuracy: 0.539199948310852
Epoch  3, CIFAR-10 Batch 4:  Loss: 1.2587395906448364 Accuracy: 0.5643999576568604
Epoch  3, CIFAR-10 Batch 5:  Loss: 1.2642561197280884 Accuracy: 0.5647999048233032
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.1507940292358398 Accuracy: 0.5829999446868896
Epoch  4, CIFAR-10 Batch 2:  Loss: 1.0789107084274292 Accuracy: 0.5377998948097229
Epoch  4, CIFAR-10 Batch 3:  Loss: 0.9628630876541138 Accuracy: 0.5771998763084412
Epoch  4, CIFAR-10 Batch 4:  Loss: 1.0709222555160522 Accuracy: 0.5817999243736267
Epoch  4, CIFAR-10 Batch 5:  Loss: 1.0804729461669922 Accuracy: 0.5877999067306519
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.0380303859710693 Accuracy: 0.6127999424934387
Epoch  5, CIFAR-10 Batch 2:  Loss: 0.9688418507575989 Accuracy: 0.6031998991966248
Epoch  5, CIFAR-10 Batch 3:  Loss: 0.8632034063339233 Accuracy: 0.6093999147415161
Epoch  5, CIFAR-10 Batch 4:  Loss: 0.973716676235199 Accuracy: 0.6333999037742615
Epoch  5, CIFAR-10 Batch 5:  Loss: 0.885626494884491 Accuracy: 0.6373998522758484
Epoch  6, CIFAR-10 Batch 1:  Loss: 0.9440181851387024 Accuracy: 0.6339998841285706
Epoch  6, CIFAR-10 Batch 2:  Loss: 0.9215208888053894 Accuracy: 0.6327998638153076
Epoch  6, CIFAR-10 Batch 3:  Loss: 0.8347591757774353 Accuracy: 0.6157999634742737
Epoch  6, CIFAR-10 Batch 4:  Loss: 0.7886825799942017 Accuracy: 0.6541998386383057
Epoch  6, CIFAR-10 Batch 5:  Loss: 0.844180166721344 Accuracy: 0.6445999145507812
Epoch  7, CIFAR-10 Batch 1:  Loss: 0.7575644254684448 Accuracy: 0.6627998948097229
Epoch  7, CIFAR-10 Batch 2:  Loss: 0.8091502785682678 Accuracy: 0.6409999132156372
Epoch  7, CIFAR-10 Batch 3:  Loss: 0.6821871995925903 Accuracy: 0.6525998711585999
Epoch  7, CIFAR-10 Batch 4:  Loss: 0.7780335545539856 Accuracy: 0.6579998731613159
Epoch  7, CIFAR-10 Batch 5:  Loss: 0.6754856705665588 Accuracy: 0.665199875831604
Epoch  8, CIFAR-10 Batch 1:  Loss: 0.8035699129104614 Accuracy: 0.6637998819351196
Epoch  8, CIFAR-10 Batch 2:  Loss: 0.7641209959983826 Accuracy: 0.6587998867034912
Epoch  8, CIFAR-10 Batch 3:  Loss: 0.5282308459281921 Accuracy: 0.6653998494148254
Epoch  8, CIFAR-10 Batch 4:  Loss: 0.6674501895904541 Accuracy: 0.6865999102592468
Epoch  8, CIFAR-10 Batch 5:  Loss: 0.6420197486877441 Accuracy: 0.6791999340057373
Epoch  9, CIFAR-10 Batch 1:  Loss: 0.6222699880599976 Accuracy: 0.7009998559951782
Epoch  9, CIFAR-10 Batch 2:  Loss: 0.6511884927749634 Accuracy: 0.6747998595237732
Epoch  9, CIFAR-10 Batch 3:  Loss: 0.4982835352420807 Accuracy: 0.6953999400138855
Epoch  9, CIFAR-10 Batch 4:  Loss: 0.6109545230865479 Accuracy: 0.7005998492240906
Epoch  9, CIFAR-10 Batch 5:  Loss: 0.7144730091094971 Accuracy: 0.6653998494148254
Epoch 10, CIFAR-10 Batch 1:  Loss: 0.5636800527572632 Accuracy: 0.697399914264679
Epoch 10, CIFAR-10 Batch 2:  Loss: 0.4850624203681946 Accuracy: 0.6937997937202454
Epoch 10, CIFAR-10 Batch 3:  Loss: 0.38418346643447876 Accuracy: 0.6987997889518738
Epoch 10, CIFAR-10 Batch 4:  Loss: 0.5332787036895752 Accuracy: 0.707399845123291
Epoch 10, CIFAR-10 Batch 5:  Loss: 0.600462019443512 Accuracy: 0.7145998477935791
Epoch 11, CIFAR-10 Batch 1:  Loss: 0.48435676097869873 Accuracy: 0.7241998910903931
Epoch 11, CIFAR-10 Batch 2:  Loss: 0.39974018931388855 Accuracy: 0.7073997855186462
Epoch 11, CIFAR-10 Batch 3:  Loss: 0.3301633596420288 Accuracy: 0.7287998199462891
Epoch 11, CIFAR-10 Batch 4:  Loss: 0.4790204167366028 Accuracy: 0.7131999135017395
Epoch 11, CIFAR-10 Batch 5:  Loss: 0.5108460783958435 Accuracy: 0.7161998152732849
Epoch 12, CIFAR-10 Batch 1:  Loss: 0.4638007581233978 Accuracy: 0.7155998349189758
Epoch 12, CIFAR-10 Batch 2:  Loss: 0.421169638633728 Accuracy: 0.7217998504638672
Epoch 12, CIFAR-10 Batch 3:  Loss: 0.32230204343795776 Accuracy: 0.7313998341560364
Epoch 12, CIFAR-10 Batch 4:  Loss: 0.4071316123008728 Accuracy: 0.726999819278717
Epoch 12, CIFAR-10 Batch 5:  Loss: 0.4177444577217102 Accuracy: 0.7243998646736145
Epoch 13, CIFAR-10 Batch 1:  Loss: 0.38347697257995605 Accuracy: 0.7397997975349426
Epoch 13, CIFAR-10 Batch 2:  Loss: 0.3650006353855133 Accuracy: 0.7129998207092285
Epoch 13, CIFAR-10 Batch 3:  Loss: 0.25890564918518066 Accuracy: 0.7389998435974121
Epoch 13, CIFAR-10 Batch 4:  Loss: 0.39913707971572876 Accuracy: 0.7439998388290405
Epoch 13, CIFAR-10 Batch 5:  Loss: 0.4226728677749634 Accuracy: 0.7411998510360718
Epoch 14, CIFAR-10 Batch 1:  Loss: 0.3483855426311493 Accuracy: 0.7387999296188354
Epoch 14, CIFAR-10 Batch 2:  Loss: 0.2951708436012268 Accuracy: 0.7397997975349426
Epoch 14, CIFAR-10 Batch 3:  Loss: 0.1989605873823166 Accuracy: 0.7523998022079468
Epoch 14, CIFAR-10 Batch 4:  Loss: 0.35702818632125854 Accuracy: 0.7369998097419739
Epoch 14, CIFAR-10 Batch 5:  Loss: 0.3132578730583191 Accuracy: 0.7389998435974121
Epoch 15, CIFAR-10 Batch 1:  Loss: 0.29422247409820557 Accuracy: 0.7419998645782471
Epoch 15, CIFAR-10 Batch 2:  Loss: 0.27029815316200256 Accuracy: 0.7449998259544373
Epoch 15, CIFAR-10 Batch 3:  Loss: 0.19292709231376648 Accuracy: 0.7603998184204102
Epoch 15, CIFAR-10 Batch 4:  Loss: 0.28725340962409973 Accuracy: 0.7583998441696167
Epoch 15, CIFAR-10 Batch 5:  Loss: 0.2921523451805115 Accuracy: 0.7461997866630554
Epoch 16, CIFAR-10 Batch 1:  Loss: 0.23765446245670319 Accuracy: 0.7583998441696167
Epoch 16, CIFAR-10 Batch 2:  Loss: 0.2199307680130005 Accuracy: 0.7453998327255249
Epoch 16, CIFAR-10 Batch 3:  Loss: 0.15818116068840027 Accuracy: 0.7647998332977295
Epoch 16, CIFAR-10 Batch 4:  Loss: 0.280930757522583 Accuracy: 0.7577998638153076
Epoch 16, CIFAR-10 Batch 5:  Loss: 0.2315642237663269 Accuracy: 0.7577998042106628
Epoch 17, CIFAR-10 Batch 1:  Loss: 0.23601774871349335 Accuracy: 0.7631998658180237
Epoch 17, CIFAR-10 Batch 2:  Loss: 0.17594154179096222 Accuracy: 0.7579998970031738
Epoch 17, CIFAR-10 Batch 3:  Loss: 0.16068536043167114 Accuracy: 0.7631998062133789
Epoch 17, CIFAR-10 Batch 4:  Loss: 0.2726075351238251 Accuracy: 0.7471998333930969
Epoch 17, CIFAR-10 Batch 5:  Loss: 0.16563448309898376 Accuracy: 0.7515998482704163
Epoch 18, CIFAR-10 Batch 1:  Loss: 0.19974368810653687 Accuracy: 0.7559998035430908
Epoch 18, CIFAR-10 Batch 2:  Loss: 0.15342597663402557 Accuracy: 0.7623998522758484
Epoch 18, CIFAR-10 Batch 3:  Loss: 0.12634992599487305 Accuracy: 0.7649998664855957
Epoch 18, CIFAR-10 Batch 4:  Loss: 0.16940076649188995 Accuracy: 0.7649998664855957
Epoch 18, CIFAR-10 Batch 5:  Loss: 0.20089244842529297 Accuracy: 0.7655998468399048
Epoch 19, CIFAR-10 Batch 1:  Loss: 0.13523614406585693 Accuracy: 0.7549998760223389
Epoch 19, CIFAR-10 Batch 2:  Loss: 0.15686222910881042 Accuracy: 0.7529997825622559
Epoch 19, CIFAR-10 Batch 3:  Loss: 0.11558765918016434 Accuracy: 0.757999837398529
Epoch 19, CIFAR-10 Batch 4:  Loss: 0.2244413197040558 Accuracy: 0.7627997994422913
Epoch 19, CIFAR-10 Batch 5:  Loss: 0.1549302488565445 Accuracy: 0.7653998732566833
Epoch 20, CIFAR-10 Batch 1:  Loss: 0.10173415392637253 Accuracy: 0.7615997791290283
Epoch 20, CIFAR-10 Batch 2:  Loss: 0.11553886532783508 Accuracy: 0.7665997743606567
Epoch 20, CIFAR-10 Batch 3:  Loss: 0.12921208143234253 Accuracy: 0.7701997756958008
Epoch 20, CIFAR-10 Batch 4:  Loss: 0.15447542071342468 Accuracy: 0.7665998339653015
Epoch 20, CIFAR-10 Batch 5:  Loss: 0.1328590214252472 Accuracy: 0.7707998752593994
Epoch 21, CIFAR-10 Batch 1:  Loss: 0.10821675509214401 Accuracy: 0.7651998400688171
Epoch 21, CIFAR-10 Batch 2:  Loss: 0.09070709347724915 Accuracy: 0.7671998143196106
Epoch 21, CIFAR-10 Batch 3:  Loss: 0.13414500653743744 Accuracy: 0.7503998279571533
Epoch 21, CIFAR-10 Batch 4:  Loss: 0.13382379710674286 Accuracy: 0.7633997797966003
Epoch 21, CIFAR-10 Batch 5:  Loss: 0.0911521166563034 Accuracy: 0.7737997770309448
Epoch 22, CIFAR-10 Batch 1:  Loss: 0.11180008947849274 Accuracy: 0.7699998617172241
Epoch 22, CIFAR-10 Batch 2:  Loss: 0.10556705296039581 Accuracy: 0.7609997987747192
Epoch 22, CIFAR-10 Batch 3:  Loss: 0.08885900676250458 Accuracy: 0.7765998244285583
Epoch 22, CIFAR-10 Batch 4:  Loss: 0.12025374174118042 Accuracy: 0.776999831199646
Epoch 22, CIFAR-10 Batch 5:  Loss: 0.07065024971961975 Accuracy: 0.7683998346328735
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.0840025544166565 Accuracy: 0.7743998765945435
Epoch 23, CIFAR-10 Batch 2:  Loss: 0.07973319292068481 Accuracy: 0.7641997933387756
Epoch 23, CIFAR-10 Batch 3:  Loss: 0.07848314940929413 Accuracy: 0.7725998759269714
Epoch 23, CIFAR-10 Batch 4:  Loss: 0.17174965143203735 Accuracy: 0.75139981508255
Epoch 23, CIFAR-10 Batch 5:  Loss: 0.05778086557984352 Accuracy: 0.7831999063491821
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.0880860835313797 Accuracy: 0.7725998163223267
Epoch 24, CIFAR-10 Batch 2:  Loss: 0.06120329722762108 Accuracy: 0.7691998481750488
Epoch 24, CIFAR-10 Batch 3:  Loss: 0.09793127328157425 Accuracy: 0.773399829864502
Epoch 24, CIFAR-10 Batch 4:  Loss: 0.1076919212937355 Accuracy: 0.7749998569488525
Epoch 24, CIFAR-10 Batch 5:  Loss: 0.06611727923154831 Accuracy: 0.7779998779296875
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.05671758949756622 Accuracy: 0.7819998264312744
Epoch 25, CIFAR-10 Batch 2:  Loss: 0.05209682509303093 Accuracy: 0.7791998386383057
Epoch 25, CIFAR-10 Batch 3:  Loss: 0.06981603056192398 Accuracy: 0.7753998041152954
Epoch 25, CIFAR-10 Batch 4:  Loss: 0.08790335059165955 Accuracy: 0.7765998840332031
Epoch 25, CIFAR-10 Batch 5:  Loss: 0.04157699644565582 Accuracy: 0.7785998582839966
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.04922943934798241 Accuracy: 0.7849998474121094
Epoch 26, CIFAR-10 Batch 2:  Loss: 0.0665821060538292 Accuracy: 0.7679998278617859
Epoch 26, CIFAR-10 Batch 3:  Loss: 0.05838177353143692 Accuracy: 0.7725998163223267
Epoch 26, CIFAR-10 Batch 4:  Loss: 0.06313947588205338 Accuracy: 0.7709999084472656
Epoch 26, CIFAR-10 Batch 5:  Loss: 0.04845092073082924 Accuracy: 0.7835997343063354
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.07928922027349472 Accuracy: 0.7657998204231262
Epoch 27, CIFAR-10 Batch 2:  Loss: 0.07249107956886292 Accuracy: 0.7605998516082764
Epoch 27, CIFAR-10 Batch 3:  Loss: 0.048748262226581573 Accuracy: 0.7781998515129089
Epoch 27, CIFAR-10 Batch 4:  Loss: 0.05908665060997009 Accuracy: 0.7711997628211975
Epoch 27, CIFAR-10 Batch 5:  Loss: 0.039004817605018616 Accuracy: 0.7839998006820679
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.04055047035217285 Accuracy: 0.7797998189926147
Epoch 28, CIFAR-10 Batch 2:  Loss: 0.05120241269469261 Accuracy: 0.7707998752593994
Epoch 28, CIFAR-10 Batch 3:  Loss: 0.060802482068538666 Accuracy: 0.7725998163223267
Epoch 28, CIFAR-10 Batch 4:  Loss: 0.05944879725575447 Accuracy: 0.78059983253479
Epoch 28, CIFAR-10 Batch 5:  Loss: 0.02839808538556099 Accuracy: 0.7885997891426086
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.030118823051452637 Accuracy: 0.7879998683929443
Epoch 29, CIFAR-10 Batch 2:  Loss: 0.03210212290287018 Accuracy: 0.7849997878074646
Epoch 29, CIFAR-10 Batch 3:  Loss: 0.045216865837574005 Accuracy: 0.7693998217582703
Epoch 29, CIFAR-10 Batch 4:  Loss: 0.06881881505250931 Accuracy: 0.7791998386383057
Epoch 29, CIFAR-10 Batch 5:  Loss: 0.025423122569918633 Accuracy: 0.7837998270988464
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.0362873338162899 Accuracy: 0.7827997803688049
Epoch 30, CIFAR-10 Batch 2:  Loss: 0.035676248371601105 Accuracy: 0.7807998657226562
Epoch 30, CIFAR-10 Batch 3:  Loss: 0.052415356040000916 Accuracy: 0.7687998414039612
Epoch 30, CIFAR-10 Batch 4:  Loss: 0.05106329172849655 Accuracy: 0.7867997884750366
Epoch 30, CIFAR-10 Batch 5:  Loss: 0.026328323408961296 Accuracy: 0.7815998196601868
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.033422213047742844 Accuracy: 0.7833998799324036
Epoch 31, CIFAR-10 Batch 2:  Loss: 0.03539246693253517 Accuracy: 0.7741998434066772
Epoch 31, CIFAR-10 Batch 3:  Loss: 0.04775424301624298 Accuracy: 0.7797998189926147
Epoch 31, CIFAR-10 Batch 4:  Loss: 0.041406869888305664 Accuracy: 0.7683998346328735
Epoch 31, CIFAR-10 Batch 5:  Loss: 0.04309137165546417 Accuracy: 0.7801997661590576
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.027346407994627953 Accuracy: 0.7841998338699341
Epoch 32, CIFAR-10 Batch 2:  Loss: 0.0142636988312006 Accuracy: 0.7827998399734497
Epoch 32, CIFAR-10 Batch 3:  Loss: 0.03044021688401699 Accuracy: 0.7703998684883118
Epoch 32, CIFAR-10 Batch 4:  Loss: 0.0318337082862854 Accuracy: 0.7889997959136963
Epoch 32, CIFAR-10 Batch 5:  Loss: 0.020607806742191315 Accuracy: 0.7853997945785522
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.02937992289662361 Accuracy: 0.7833998203277588
Epoch 33, CIFAR-10 Batch 2:  Loss: 0.021134966984391212 Accuracy: 0.7831997871398926
Epoch 33, CIFAR-10 Batch 3:  Loss: 0.019134920090436935 Accuracy: 0.783599853515625
Epoch 33, CIFAR-10 Batch 4:  Loss: 0.041045673191547394 Accuracy: 0.7767997980117798
Epoch 33, CIFAR-10 Batch 5:  Loss: 0.019096646457910538 Accuracy: 0.7761998176574707
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.022560307756066322 Accuracy: 0.7819998264312744
Epoch 34, CIFAR-10 Batch 2:  Loss: 0.026803821325302124 Accuracy: 0.7767997980117798
Epoch 34, CIFAR-10 Batch 3:  Loss: 0.038166798651218414 Accuracy: 0.7761998176574707
Epoch 34, CIFAR-10 Batch 4:  Loss: 0.024650346487760544 Accuracy: 0.7789998054504395
Epoch 34, CIFAR-10 Batch 5:  Loss: 0.02045057713985443 Accuracy: 0.7833998799324036
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.030506327748298645 Accuracy: 0.7827998399734497
Epoch 35, CIFAR-10 Batch 2:  Loss: 0.020501764491200447 Accuracy: 0.7871997952461243
Epoch 35, CIFAR-10 Batch 3:  Loss: 0.019320465624332428 Accuracy: 0.7801998257637024
Epoch 35, CIFAR-10 Batch 4:  Loss: 0.020084433257579803 Accuracy: 0.7863998413085938
Epoch 35, CIFAR-10 Batch 5:  Loss: 0.011964945122599602 Accuracy: 0.7847998738288879

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [13]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.7771954113924051

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.