Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for python.


In [118]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)


All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [51]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 2
sample_id = 18
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 2:
Samples: 10000
Label Counts: {0: 984, 1: 1007, 2: 1010, 3: 995, 4: 1010, 5: 988, 6: 1008, 7: 1026, 8: 987, 9: 985}
First 20 Labels: [1, 6, 6, 8, 8, 3, 4, 6, 0, 6, 0, 3, 6, 6, 5, 4, 8, 3, 2, 6]

Example of Image 18:
Image - Min Value: 25 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 2 Name: bird

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [125]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
    # TODO: Implement Function
#     print(x[0][0][0])
    z = x/255.0
#     print(z[0][0][0])
    return z


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [123]:
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
#     print(x.shape)
#     print(x[:10])
#     print(np.eye(10)[x[:10]])
    total_classes = 10
    return np.eye(total_classes)[x]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed

Randomize Data

As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [127]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [148]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allow for a dynamic size.


In [139]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    shape = (None, image_shape[0], image_shape[1], image_shape[2])
    return tf.placeholder(tf.float32, shape=shape, name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
#     print(n_classes)
    shape = (None, n_classes)
    return tf.placeholder(tf.float32, shape=shape, name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [186]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernal size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernal size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
#     print(x_tensor.shape)
#     print(conv_num_outputs)
#     print(conv_ksize)
#     print(conv_strides)
#     print(pool_ksize)
#     print(pool_strides)
    
    # Weight and bias
    in_dim = x_tensor.get_shape().as_list()[3]
    weight_input = [conv_ksize[0], conv_ksize[1], in_dim, conv_num_outputs]
#     print('Weight',(weight_input))
    weight = tf.Variable(tf.truncated_normal(weight_input, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None))
    bias = tf.Variable(tf.zeros(conv_num_outputs))

    # Apply Convolution
    strides = [1, conv_strides[0], conv_strides[1], 1]
#     print('Stride', strides)
    conv_layer = tf.nn.conv2d(x_tensor, weight, strides=strides, padding='SAME')
    # Add bias
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    # Apply activation function
    conv_layer = tf.nn.relu(conv_layer)
    # Apply Max Pooling
    ksize = [1, pool_ksize[0], pool_ksize[1], 1]
    pool_stride = [1, pool_strides[0], pool_strides[1], 1]
    conv_layer = tf.nn.max_pool(conv_layer, ksize=ksize, strides=pool_stride, padding='SAME')
    print(conv_layer.shape)
    return conv_layer 


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


(?, 4, 4, 10)
Tests Passed

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [150]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
#     print(x_tensor.shape)
    flattened_x_tensor = tf.contrib.layers.flatten(x_tensor)
#     print(flattened_x_tensor.shape)
    return flattened_x_tensor


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [151]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    return tf.contrib.layers.fully_connected(x_tensor, num_outputs)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [179]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    return tf.layers.dense(inputs=x_tensor, units=num_outputs, activation= tf.nn.relu)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [182]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
#     print(x.shape)
    conv_num_outputs = 64
    conv_ksize = (int(x.shape[1].value/2), int(x.shape[2].value/2))
    conv_strides = (4, 4)
    pool_ksize = (2, 2)
    pool_strides =  (2, 2)
    conv = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    print(conv.shape)
    
    conv_num_outputs = 10
    conv_ksize = (2, 2)
    conv_strides = (4, 4)
    pool_ksize = (2, 2)
    pool_strides =  (2, 2)
    conv = conv2d_maxpool(conv, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    print(conv.shape)
    
    
    conv =tf.nn.dropout(conv, keep_prob)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    conv = flatten(conv)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    conv = fully_conn(conv, conv_num_outputs)
    
    conv =tf.nn.dropout(conv, keep_prob)
    
    conv = fully_conn(conv, conv_num_outputs)
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    conv = output(conv, conv_num_outputs)
    print(conv.shape)
    
    # TODO: return output
    return conv


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


(?, 4, 4, 64)
(?, 1, 1, 10)
(?, 10)
(?, 4, 4, 64)
(?, 1, 1, 10)
(?, 10)
Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [180]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [181]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Loss: {:.6f} Accuracy: {:.6f}'.format(loss,valid_acc))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or start overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [183]:
# TODO: Tune Parameters
epochs = 10
batch_size = 64
keep_probability = 0.5

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [184]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.318351 Accuracy: 0.101200
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.302879 Accuracy: 0.100800
Epoch  3, CIFAR-10 Batch 1:  Loss: 2.302832 Accuracy: 0.100400
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-184-0ebd1bbc35ad> in <module>()
     11         batch_i = 1
     12         for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
---> 13             train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
     14         print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
     15         print_stats(sess, batch_features, batch_labels, cost, accuracy)

<ipython-input-180-1a7f7cccc68f> in train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch)
      9     """
     10     # TODO: Implement Function
---> 11     session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
     12 
     13 

C:\Users\Sandhya\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

C:\Users\Sandhya\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    963     if final_fetches or final_targets:
    964       results = self._do_run(handle, final_targets, final_fetches,
--> 965                              feed_dict_string, options, run_metadata)
    966     else:
    967       results = []

C:\Users\Sandhya\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1013     if handle is None:
   1014       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015                            target_list, options, run_metadata)
   1016     else:
   1017       return self._do_call(_prun_fn, self._session, handle, feed_dict,

C:\Users\Sandhya\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1020   def _do_call(self, fn, *args):
   1021     try:
-> 1022       return fn(*args)
   1023     except errors.OpError as e:
   1024       message = compat.as_text(e.message)

C:\Users\Sandhya\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1002         return tf_session.TF_Run(session, options,
   1003                                  feed_dict, fetch_list, target_list,
-> 1004                                  status, run_metadata)
   1005 
   1006     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [145]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.334820 Accuracy: 0.093800
Epoch  1, CIFAR-10 Batch 2:  Loss: 2.294685 Accuracy: 0.104400
Epoch  1, CIFAR-10 Batch 3:  Loss: 2.277361 Accuracy: 0.109000
Epoch  1, CIFAR-10 Batch 4:  Loss: 2.298951 Accuracy: 0.096600
Epoch  1, CIFAR-10 Batch 5:  Loss: 2.298135 Accuracy: 0.095400
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.288788 Accuracy: 0.105800
Epoch  2, CIFAR-10 Batch 2:  Loss: 2.280239 Accuracy: 0.099800
Epoch  2, CIFAR-10 Batch 3:  Loss: 2.328162 Accuracy: 0.108400
Epoch  2, CIFAR-10 Batch 4:  Loss: 2.298334 Accuracy: 0.128000
Epoch  2, CIFAR-10 Batch 5:  Loss: 2.253571 Accuracy: 0.139800
Epoch  3, CIFAR-10 Batch 1:  Loss: 2.194370 Accuracy: 0.158800
Epoch  3, CIFAR-10 Batch 2:  Loss: 2.275581 Accuracy: 0.170400
Epoch  3, CIFAR-10 Batch 3:  Loss: 2.080626 Accuracy: 0.173200
Epoch  3, CIFAR-10 Batch 4:  Loss: 2.220621 Accuracy: 0.175800
Epoch  3, CIFAR-10 Batch 5:  Loss: 2.201429 Accuracy: 0.173800
Epoch  4, CIFAR-10 Batch 1:  Loss: 2.217806 Accuracy: 0.187200
Epoch  4, CIFAR-10 Batch 2:  Loss: 2.135594 Accuracy: 0.177000
Epoch  4, CIFAR-10 Batch 3:  Loss: 1.970236 Accuracy: 0.183400
Epoch  4, CIFAR-10 Batch 4:  Loss: 2.237867 Accuracy: 0.187200
Epoch  4, CIFAR-10 Batch 5:  Loss: 2.162913 Accuracy: 0.186400
Epoch  5, CIFAR-10 Batch 1:  Loss: 2.145199 Accuracy: 0.199200
Epoch  5, CIFAR-10 Batch 2:  Loss: 2.262908 Accuracy: 0.197800
Epoch  5, CIFAR-10 Batch 3:  Loss: 1.978232 Accuracy: 0.209600
Epoch  5, CIFAR-10 Batch 4:  Loss: 2.088605 Accuracy: 0.228200
Epoch  5, CIFAR-10 Batch 5:  Loss: 1.996872 Accuracy: 0.232800
Epoch  6, CIFAR-10 Batch 1:  Loss: 2.112298 Accuracy: 0.244400
Epoch  6, CIFAR-10 Batch 2:  Loss: 2.180421 Accuracy: 0.259400
Epoch  6, CIFAR-10 Batch 3:  Loss: 1.854090 Accuracy: 0.254000
Epoch  6, CIFAR-10 Batch 4:  Loss: 2.079126 Accuracy: 0.263200
Epoch  6, CIFAR-10 Batch 5:  Loss: 2.051485 Accuracy: 0.255000
Epoch  7, CIFAR-10 Batch 1:  Loss: 2.087129 Accuracy: 0.276000
Epoch  7, CIFAR-10 Batch 2:  Loss: 2.016126 Accuracy: 0.267800
Epoch  7, CIFAR-10 Batch 3:  Loss: 1.878643 Accuracy: 0.267800
Epoch  7, CIFAR-10 Batch 4:  Loss: 2.047413 Accuracy: 0.271200
Epoch  7, CIFAR-10 Batch 5:  Loss: 2.037411 Accuracy: 0.279600
Epoch  8, CIFAR-10 Batch 1:  Loss: 2.163920 Accuracy: 0.283000
Epoch  8, CIFAR-10 Batch 2:  Loss: 2.124582 Accuracy: 0.270600
Epoch  8, CIFAR-10 Batch 3:  Loss: 1.907492 Accuracy: 0.284000
Epoch  8, CIFAR-10 Batch 4:  Loss: 1.918663 Accuracy: 0.284200
Epoch  8, CIFAR-10 Batch 5:  Loss: 1.915771 Accuracy: 0.275800
Epoch  9, CIFAR-10 Batch 1:  Loss: 2.018986 Accuracy: 0.288000
Epoch  9, CIFAR-10 Batch 2:  Loss: 2.152456 Accuracy: 0.289200
Epoch  9, CIFAR-10 Batch 3:  Loss: 1.921937 Accuracy: 0.285800
Epoch  9, CIFAR-10 Batch 4:  Loss: 1.840247 Accuracy: 0.285000
Epoch  9, CIFAR-10 Batch 5:  Loss: 2.068932 Accuracy: 0.280600
Epoch 10, CIFAR-10 Batch 1:  Loss: 2.072104 Accuracy: 0.285600
Epoch 10, CIFAR-10 Batch 2:  Loss: 2.009380 Accuracy: 0.288200
Epoch 10, CIFAR-10 Batch 3:  Loss: 1.784390 Accuracy: 0.290200
Epoch 10, CIFAR-10 Batch 4:  Loss: 1.908816 Accuracy: 0.284200
Epoch 10, CIFAR-10 Batch 5:  Loss: 1.922940 Accuracy: 0.285400
Epoch 11, CIFAR-10 Batch 1:  Loss: 2.071515 Accuracy: 0.283200
Epoch 11, CIFAR-10 Batch 2:  Loss: 2.103458 Accuracy: 0.279600
Epoch 11, CIFAR-10 Batch 3:  Loss: 1.804694 Accuracy: 0.284600
Epoch 11, CIFAR-10 Batch 4:  Loss: 1.953719 Accuracy: 0.293800
Epoch 11, CIFAR-10 Batch 5:  Loss: 1.969246 Accuracy: 0.286400
Epoch 12, CIFAR-10 Batch 1:  Loss: 1.912460 Accuracy: 0.297400
Epoch 12, CIFAR-10 Batch 2:  Loss: 2.009654 Accuracy: 0.290600
Epoch 12, CIFAR-10 Batch 3:  Loss: 1.869480 Accuracy: 0.291400
Epoch 12, CIFAR-10 Batch 4:  Loss: 1.895318 Accuracy: 0.289800
Epoch 12, CIFAR-10 Batch 5:  Loss: 2.015131 Accuracy: 0.287400
Epoch 13, CIFAR-10 Batch 1:  Loss: 1.988412 Accuracy: 0.293200
Epoch 13, CIFAR-10 Batch 2:  Loss: 2.152491 Accuracy: 0.287600
Epoch 13, CIFAR-10 Batch 3:  Loss: 1.781412 Accuracy: 0.293200
Epoch 13, CIFAR-10 Batch 4:  Loss: 2.009982 Accuracy: 0.292800
Epoch 13, CIFAR-10 Batch 5:  Loss: 2.133998 Accuracy: 0.292200
Epoch 14, CIFAR-10 Batch 1:  Loss: 2.103574 Accuracy: 0.290400
Epoch 14, CIFAR-10 Batch 2:  Loss: 2.033211 Accuracy: 0.299200
Epoch 14, CIFAR-10 Batch 3:  Loss: 1.930300 Accuracy: 0.294400
Epoch 14, CIFAR-10 Batch 4:  Loss: 1.888699 Accuracy: 0.295400
Epoch 14, CIFAR-10 Batch 5:  Loss: 1.992027 Accuracy: 0.291600
Epoch 15, CIFAR-10 Batch 1:  Loss: 2.198676 Accuracy: 0.296200
Epoch 15, CIFAR-10 Batch 2:  Loss: 2.098066 Accuracy: 0.289400
Epoch 15, CIFAR-10 Batch 3:  Loss: 1.752618 Accuracy: 0.289200
Epoch 15, CIFAR-10 Batch 4:  Loss: 1.813168 Accuracy: 0.292800
Epoch 15, CIFAR-10 Batch 5:  Loss: 1.974456 Accuracy: 0.293000
Epoch 16, CIFAR-10 Batch 1:  Loss: 2.051890 Accuracy: 0.296400
Epoch 16, CIFAR-10 Batch 2:  Loss: 1.966299 Accuracy: 0.291200
Epoch 16, CIFAR-10 Batch 3:  Loss: 1.897265 Accuracy: 0.287400
Epoch 16, CIFAR-10 Batch 4:  Loss: 2.019985 Accuracy: 0.291400
Epoch 16, CIFAR-10 Batch 5:  Loss: 1.947175 Accuracy: 0.283400
Epoch 17, CIFAR-10 Batch 1:  Loss: 1.978316 Accuracy: 0.296200
Epoch 17, CIFAR-10 Batch 2:  Loss: 2.052039 Accuracy: 0.294800
Epoch 17, CIFAR-10 Batch 3:  Loss: 1.773316 Accuracy: 0.287000
Epoch 17, CIFAR-10 Batch 4:  Loss: 1.812853 Accuracy: 0.303800
Epoch 17, CIFAR-10 Batch 5:  Loss: 2.035660 Accuracy: 0.295200
Epoch 18, CIFAR-10 Batch 1:  Loss: 2.019555 Accuracy: 0.291000
Epoch 18, CIFAR-10 Batch 2:  Loss: 2.031825 Accuracy: 0.296600
Epoch 18, CIFAR-10 Batch 3:  Loss: 1.732327 Accuracy: 0.296000
Epoch 18, CIFAR-10 Batch 4:  Loss: 1.839953 Accuracy: 0.292800
Epoch 18, CIFAR-10 Batch 5:  Loss: 1.980870 Accuracy: 0.305800
Epoch 19, CIFAR-10 Batch 1:  Loss: 2.076772 Accuracy: 0.307600
Epoch 19, CIFAR-10 Batch 2:  Loss: 2.047027 Accuracy: 0.299800
Epoch 19, CIFAR-10 Batch 3:  Loss: 1.882261 Accuracy: 0.292800
Epoch 19, CIFAR-10 Batch 4:  Loss: 2.066685 Accuracy: 0.294600
Epoch 19, CIFAR-10 Batch 5:  Loss: 1.951277 Accuracy: 0.293000
Epoch 20, CIFAR-10 Batch 1:  Loss: 2.040469 Accuracy: 0.308600
Epoch 20, CIFAR-10 Batch 2:  Loss: 2.115283 Accuracy: 0.293000
Epoch 20, CIFAR-10 Batch 3:  Loss: 1.923509 Accuracy: 0.292400
Epoch 20, CIFAR-10 Batch 4:  Loss: 1.778151 Accuracy: 0.292800
Epoch 20, CIFAR-10 Batch 5:  Loss: 1.933181 Accuracy: 0.300200
Epoch 21, CIFAR-10 Batch 1:  Loss: 2.111100 Accuracy: 0.297000
Epoch 21, CIFAR-10 Batch 2:  Loss: 2.049659 Accuracy: 0.292200
Epoch 21, CIFAR-10 Batch 3:  Loss: 1.930099 Accuracy: 0.293400
Epoch 21, CIFAR-10 Batch 4:  Loss: 1.960545 Accuracy: 0.306800
Epoch 21, CIFAR-10 Batch 5:  Loss: 1.959052 Accuracy: 0.290800
Epoch 22, CIFAR-10 Batch 1:  Loss: 2.152627 Accuracy: 0.284200
Epoch 22, CIFAR-10 Batch 2:  Loss: 1.925495 Accuracy: 0.301200
Epoch 22, CIFAR-10 Batch 3:  Loss: 1.763899 Accuracy: 0.295200
Epoch 22, CIFAR-10 Batch 4:  Loss: 1.788208 Accuracy: 0.300000
Epoch 22, CIFAR-10 Batch 5:  Loss: 2.016801 Accuracy: 0.298800
Epoch 23, CIFAR-10 Batch 1:  Loss: 2.259086 Accuracy: 0.288600
Epoch 23, CIFAR-10 Batch 2:  Loss: 2.028373 Accuracy: 0.307400
Epoch 23, CIFAR-10 Batch 3:  Loss: 1.783653 Accuracy: 0.296400
Epoch 23, CIFAR-10 Batch 4:  Loss: 1.943308 Accuracy: 0.298200
Epoch 23, CIFAR-10 Batch 5:  Loss: 1.949432 Accuracy: 0.293400
Epoch 24, CIFAR-10 Batch 1:  Loss: 1.992904 Accuracy: 0.297400
Epoch 24, CIFAR-10 Batch 2:  Loss: 2.087559 Accuracy: 0.304800
Epoch 24, CIFAR-10 Batch 3:  Loss: 1.757255 Accuracy: 0.309400
Epoch 24, CIFAR-10 Batch 4:  Loss: 1.820747 Accuracy: 0.302200
Epoch 24, CIFAR-10 Batch 5:  Loss: 1.931440 Accuracy: 0.297200
Epoch 25, CIFAR-10 Batch 1:  Loss: 2.050203 Accuracy: 0.309000
Epoch 25, CIFAR-10 Batch 2:  Loss: 1.882823 Accuracy: 0.298200
Epoch 25, CIFAR-10 Batch 3:  Loss: 1.787932 Accuracy: 0.289800
Epoch 25, CIFAR-10 Batch 4:  Loss: 1.774974 Accuracy: 0.302400
Epoch 25, CIFAR-10 Batch 5:  Loss: 2.090710 Accuracy: 0.292400
Epoch 26, CIFAR-10 Batch 1:  Loss: 1.921942 Accuracy: 0.300800
Epoch 26, CIFAR-10 Batch 2:  Loss: 1.909667 Accuracy: 0.305800
Epoch 26, CIFAR-10 Batch 3:  Loss: 1.767340 Accuracy: 0.304000
Epoch 26, CIFAR-10 Batch 4:  Loss: 1.770647 Accuracy: 0.305600
Epoch 26, CIFAR-10 Batch 5:  Loss: 1.881105 Accuracy: 0.287800
Epoch 27, CIFAR-10 Batch 1:  Loss: 1.980111 Accuracy: 0.311000
Epoch 27, CIFAR-10 Batch 2:  Loss: 2.092400 Accuracy: 0.291000
Epoch 27, CIFAR-10 Batch 3:  Loss: 1.703705 Accuracy: 0.304200
Epoch 27, CIFAR-10 Batch 4:  Loss: 2.034148 Accuracy: 0.302600
Epoch 27, CIFAR-10 Batch 5:  Loss: 1.972026 Accuracy: 0.294200
Epoch 28, CIFAR-10 Batch 1:  Loss: 2.033445 Accuracy: 0.298200
Epoch 28, CIFAR-10 Batch 2:  Loss: 2.128228 Accuracy: 0.303000
Epoch 28, CIFAR-10 Batch 3:  Loss: 1.791546 Accuracy: 0.297400
Epoch 28, CIFAR-10 Batch 4:  Loss: 1.942261 Accuracy: 0.288800
Epoch 28, CIFAR-10 Batch 5:  Loss: 1.864682 Accuracy: 0.302800
Epoch 29, CIFAR-10 Batch 1:  Loss: 2.083704 Accuracy: 0.298800
Epoch 29, CIFAR-10 Batch 2:  Loss: 1.937205 Accuracy: 0.301400
Epoch 29, CIFAR-10 Batch 3:  Loss: 1.722486 Accuracy: 0.299800
Epoch 29, CIFAR-10 Batch 4:  Loss: 1.873919 Accuracy: 0.291400
Epoch 29, CIFAR-10 Batch 5:  Loss: 1.794890 Accuracy: 0.302000
Epoch 30, CIFAR-10 Batch 1:  Loss: 2.263947 Accuracy: 0.300600
Epoch 30, CIFAR-10 Batch 2:  Loss: 1.978557 Accuracy: 0.300200
Epoch 30, CIFAR-10 Batch 3:  Loss: 1.861752 Accuracy: 0.302000
Epoch 30, CIFAR-10 Batch 4:  Loss: 1.820699 Accuracy: 0.303600
Epoch 30, CIFAR-10 Batch 5:  Loss: 1.802327 Accuracy: 0.301000
Epoch 31, CIFAR-10 Batch 1:  Loss: 1.980898 Accuracy: 0.307400
Epoch 31, CIFAR-10 Batch 2:  Loss: 2.093518 Accuracy: 0.288200
Epoch 31, CIFAR-10 Batch 3:  Loss: 1.862314 Accuracy: 0.303200
Epoch 31, CIFAR-10 Batch 4:  Loss: 1.963208 Accuracy: 0.303400
Epoch 31, CIFAR-10 Batch 5:  Loss: 2.001976 Accuracy: 0.300400
Epoch 32, CIFAR-10 Batch 1:  Loss: 1.867155 Accuracy: 0.303000
Epoch 32, CIFAR-10 Batch 2:  Loss: 2.017274 Accuracy: 0.307800
Epoch 32, CIFAR-10 Batch 3:  Loss: 1.695857 Accuracy: 0.300800
Epoch 32, CIFAR-10 Batch 4:  Loss: 1.837102 Accuracy: 0.312600
Epoch 32, CIFAR-10 Batch 5:  Loss: 1.932060 Accuracy: 0.298200
Epoch 33, CIFAR-10 Batch 1:  Loss: 2.000771 Accuracy: 0.294800
Epoch 33, CIFAR-10 Batch 2:  Loss: 2.063889 Accuracy: 0.310000
Epoch 33, CIFAR-10 Batch 3:  Loss: 1.710921 Accuracy: 0.301000
Epoch 33, CIFAR-10 Batch 4:  Loss: 1.899860 Accuracy: 0.307000
Epoch 33, CIFAR-10 Batch 5:  Loss: 1.968671 Accuracy: 0.298000
Epoch 34, CIFAR-10 Batch 1:  Loss: 2.028860 Accuracy: 0.314200
Epoch 34, CIFAR-10 Batch 2:  Loss: 1.823905 Accuracy: 0.296600
Epoch 34, CIFAR-10 Batch 3:  Loss: 1.711150 Accuracy: 0.299000
Epoch 34, CIFAR-10 Batch 4:  Loss: 1.922900 Accuracy: 0.298600
Epoch 34, CIFAR-10 Batch 5:  Loss: 1.944404 Accuracy: 0.300600
Epoch 35, CIFAR-10 Batch 1:  Loss: 2.071232 Accuracy: 0.299000
Epoch 35, CIFAR-10 Batch 2:  Loss: 1.925906 Accuracy: 0.298600
Epoch 35, CIFAR-10 Batch 3:  Loss: 1.839887 Accuracy: 0.290800
Epoch 35, CIFAR-10 Batch 4:  Loss: 1.777430 Accuracy: 0.307400
Epoch 35, CIFAR-10 Batch 5:  Loss: 1.912720 Accuracy: 0.295200
Epoch 36, CIFAR-10 Batch 1:  Loss: 1.952121 Accuracy: 0.305200
Epoch 36, CIFAR-10 Batch 2:  Loss: 2.014870 Accuracy: 0.311000
Epoch 36, CIFAR-10 Batch 3:  Loss: 1.777101 Accuracy: 0.299400
Epoch 36, CIFAR-10 Batch 4:  Loss: 1.714669 Accuracy: 0.298800
Epoch 36, CIFAR-10 Batch 5:  Loss: 1.867689 Accuracy: 0.312400
Epoch 37, CIFAR-10 Batch 1:  Loss: 2.006921 Accuracy: 0.304000
Epoch 37, CIFAR-10 Batch 2:  Loss: 2.157551 Accuracy: 0.298800
Epoch 37, CIFAR-10 Batch 3:  Loss: 1.717024 Accuracy: 0.304400
Epoch 37, CIFAR-10 Batch 4:  Loss: 1.880085 Accuracy: 0.294400
Epoch 37, CIFAR-10 Batch 5:  Loss: 2.022155 Accuracy: 0.297800
Epoch 38, CIFAR-10 Batch 1:  Loss: 1.819268 Accuracy: 0.303600
Epoch 38, CIFAR-10 Batch 2:  Loss: 1.806605 Accuracy: 0.306200
Epoch 38, CIFAR-10 Batch 3:  Loss: 1.841885 Accuracy: 0.293200
Epoch 38, CIFAR-10 Batch 4:  Loss: 1.831050 Accuracy: 0.307400
Epoch 38, CIFAR-10 Batch 5:  Loss: 1.815790 Accuracy: 0.303600
Epoch 39, CIFAR-10 Batch 1:  Loss: 2.050821 Accuracy: 0.309200
Epoch 39, CIFAR-10 Batch 2:  Loss: 1.959214 Accuracy: 0.308600
Epoch 39, CIFAR-10 Batch 3:  Loss: 1.793745 Accuracy: 0.297200
Epoch 39, CIFAR-10 Batch 4:  Loss: 1.764254 Accuracy: 0.312000
Epoch 39, CIFAR-10 Batch 5:  Loss: 1.999772 Accuracy: 0.293000
Epoch 40, CIFAR-10 Batch 1:  Loss: 2.151072 Accuracy: 0.305800
Epoch 40, CIFAR-10 Batch 2:  Loss: 2.068295 Accuracy: 0.300400
Epoch 40, CIFAR-10 Batch 3:  Loss: 1.804058 Accuracy: 0.309400
Epoch 40, CIFAR-10 Batch 4:  Loss: 1.831112 Accuracy: 0.296600
Epoch 40, CIFAR-10 Batch 5:  Loss: 1.841190 Accuracy: 0.310800
Epoch 41, CIFAR-10 Batch 1:  Loss: 1.970776 Accuracy: 0.299400
Epoch 41, CIFAR-10 Batch 2:  Loss: 1.966071 Accuracy: 0.306000
Epoch 41, CIFAR-10 Batch 3:  Loss: 1.945354 Accuracy: 0.298600
Epoch 41, CIFAR-10 Batch 4:  Loss: 1.800219 Accuracy: 0.303800
Epoch 41, CIFAR-10 Batch 5:  Loss: 2.079220 Accuracy: 0.303000
Epoch 42, CIFAR-10 Batch 1:  Loss: 2.231413 Accuracy: 0.300200
Epoch 42, CIFAR-10 Batch 2:  Loss: 1.952358 Accuracy: 0.303000
Epoch 42, CIFAR-10 Batch 3:  Loss: 1.714526 Accuracy: 0.297600
Epoch 42, CIFAR-10 Batch 4:  Loss: 1.868896 Accuracy: 0.315800
Epoch 42, CIFAR-10 Batch 5:  Loss: 1.918749 Accuracy: 0.309800
Epoch 43, CIFAR-10 Batch 1:  Loss: 2.045063 Accuracy: 0.310800
Epoch 43, CIFAR-10 Batch 2:  Loss: 2.085912 Accuracy: 0.309600
Epoch 43, CIFAR-10 Batch 3:  Loss: 1.729711 Accuracy: 0.306800
Epoch 43, CIFAR-10 Batch 4:  Loss: 1.839538 Accuracy: 0.302000
Epoch 43, CIFAR-10 Batch 5:  Loss: 1.939455 Accuracy: 0.308400
Epoch 44, CIFAR-10 Batch 1:  Loss: 2.098184 Accuracy: 0.308800
Epoch 44, CIFAR-10 Batch 2:  Loss: 1.955154 Accuracy: 0.311000
Epoch 44, CIFAR-10 Batch 3:  Loss: 1.704279 Accuracy: 0.296600
Epoch 44, CIFAR-10 Batch 4:  Loss: 1.932569 Accuracy: 0.312000
Epoch 44, CIFAR-10 Batch 5:  Loss: 2.092149 Accuracy: 0.308000
Epoch 45, CIFAR-10 Batch 1:  Loss: 2.024581 Accuracy: 0.313400
Epoch 45, CIFAR-10 Batch 2:  Loss: 1.924010 Accuracy: 0.300600
Epoch 45, CIFAR-10 Batch 3:  Loss: 1.722933 Accuracy: 0.303800
Epoch 45, CIFAR-10 Batch 4:  Loss: 1.882736 Accuracy: 0.311200
Epoch 45, CIFAR-10 Batch 5:  Loss: 1.833730 Accuracy: 0.298000
Epoch 46, CIFAR-10 Batch 1:  Loss: 2.059406 Accuracy: 0.309000
Epoch 46, CIFAR-10 Batch 2:  Loss: 1.910459 Accuracy: 0.310400
Epoch 46, CIFAR-10 Batch 3:  Loss: 1.782523 Accuracy: 0.304600
Epoch 46, CIFAR-10 Batch 4:  Loss: 1.908815 Accuracy: 0.307600
Epoch 46, CIFAR-10 Batch 5:  Loss: 1.902591 Accuracy: 0.295200
Epoch 47, CIFAR-10 Batch 1:  Loss: 2.211854 Accuracy: 0.301000
Epoch 47, CIFAR-10 Batch 2:  Loss: 1.915007 Accuracy: 0.312000
Epoch 47, CIFAR-10 Batch 3:  Loss: 1.781698 Accuracy: 0.314200
Epoch 47, CIFAR-10 Batch 4:  Loss: 1.924418 Accuracy: 0.306200
Epoch 47, CIFAR-10 Batch 5:  Loss: 1.945918 Accuracy: 0.308400
Epoch 48, CIFAR-10 Batch 1:  Loss: 2.158780 Accuracy: 0.313200
Epoch 48, CIFAR-10 Batch 2:  Loss: 1.990711 Accuracy: 0.317200
Epoch 48, CIFAR-10 Batch 3:  Loss: 1.766295 Accuracy: 0.309800
Epoch 48, CIFAR-10 Batch 4:  Loss: 1.788933 Accuracy: 0.307400
Epoch 48, CIFAR-10 Batch 5:  Loss: 1.843167 Accuracy: 0.306400
Epoch 49, CIFAR-10 Batch 1:  Loss: 2.025951 Accuracy: 0.313400
Epoch 49, CIFAR-10 Batch 2:  Loss: 2.059493 Accuracy: 0.318200
Epoch 49, CIFAR-10 Batch 3:  Loss: 1.808537 Accuracy: 0.308400
Epoch 49, CIFAR-10 Batch 4:  Loss: 1.764520 Accuracy: 0.305000
Epoch 49, CIFAR-10 Batch 5:  Loss: 1.955708 Accuracy: 0.311000
Epoch 50, CIFAR-10 Batch 1:  Loss: 1.924864 Accuracy: 0.306400
Epoch 50, CIFAR-10 Batch 2:  Loss: 2.083555 Accuracy: 0.314400
Epoch 50, CIFAR-10 Batch 3:  Loss: 1.695124 Accuracy: 0.309200
Epoch 50, CIFAR-10 Batch 4:  Loss: 1.885969 Accuracy: 0.313600
Epoch 50, CIFAR-10 Batch 5:  Loss: 1.924721 Accuracy: 0.316000

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [185]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.3115047770700637

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.


In [ ]: