Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you've learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        # The with-statement closes the archive automatically
        tar.extractall()


tests.test_folder_path(cifar10_dataset_folder_path)


CIFAR-10 Dataset: 171MB [00:13, 12.7MB/s]                              
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains images and labels, where each label is one of the following classes:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [2]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 5
sample_id = 29
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 5:
Samples: 10000
Label Counts: {0: 1014, 1: 1014, 2: 952, 3: 1016, 4: 997, 5: 1025, 6: 980, 7: 977, 8: 1003, 9: 1022}
First 20 Labels: [1, 8, 5, 1, 5, 7, 4, 3, 8, 2, 7, 2, 0, 1, 5, 9, 6, 2, 0, 8]

Example of Image 29:
Image - Min Value: 2 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 9 Name: truck
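
If you'd rather poke at a raw batch file directly, the short sketch below answers those questions without the helper. It assumes the standard CIFAR-10 python pickle format, where each batch unpickles to a dict with 'data' (a 10000 x 3072 uint8 array) and 'labels' (a list of ints); this is a throwaway check, not part of the project.

import pickle
import numpy as np

# Load one raw batch file directly
with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as f:
    batch = pickle.load(f, encoding='latin1')

data = np.asarray(batch['data'])
labels = np.asarray(batch['labels'])

print('Possible labels:', np.unique(labels))                   # 0 through 9
print('Pixel range: {} to {}'.format(data.min(), data.max()))  # 0 to 255
print('First 10 labels:', labels[:10])                         # in order or random?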

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [3]:
def normalize(x):
    """
    Normalize a list of sample image data to the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # Min-max scaling: map the smallest value in the batch to 0 and the
    # largest to 1 (CIFAR-10 pixels are uint8, so this is roughly x / 255)
    return (x - x.min()) / (x.max() - x.min())


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed
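
A note on the design choice: the min-max scaling above maps each batch's own extremes to 0 and 1, which passes the tests but means the scaling depends on the batch. Since CIFAR-10 pixels are always uint8 values in 0-255, a fixed divisor is a common alternative; a minimal sketch (hypothetical name, not required by the tests):

def normalize_fixed(x):
    """Alternative: scale by the known uint8 range so a given pixel value
    always maps to the same output, regardless of the batch."""
    return np.asarray(x) / 255.0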

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [4]:
from sklearn import preprocessing

# Fit the encoder once, outside the function, so every call uses the
# same label-to-vector mapping
lb = preprocessing.LabelBinarizer()
lb.fit(np.arange(10))

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    return lb.transform(x)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed
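
For reference, the same encoding can be produced with plain numpy by indexing into an identity matrix; a minimal sketch (hypothetical name):

def one_hot_encode_np(x):
    """Row i of the 10x10 identity matrix is the one-hot vector for label i."""
    return np.eye(10)[np.asarray(x)]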

Randomize Data

As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
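
If you did want to reshuffle, a minimal numpy sketch (assuming features and labels are parallel arrays; both names are hypothetical):

# Shuffle features and labels together with one shared permutation
shuffle_idx = np.random.permutation(len(features))
features, labels = features[shuffle_idx], labels[shuffle_idx]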

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstractions of layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.


In [7]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, (None, image_shape[0], image_shape[1], image_shape[2]), name="x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, (None, n_classes), name="y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.
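
Those placeholder names matter because the test cell at the end of the project reloads the saved graph and fetches the tensors by name, roughly like this (the ':0' suffix selects the op's first output tensor):

# After restoring the saved model, tensors are retrieved by their names
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')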

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [8]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    
    # Depth of the incoming tensor (e.g. 3 for the RGB input image)
    input_depth = x_tensor.get_shape().as_list()[3]
    
    weight = tf.Variable(
        tf.truncated_normal(
            [conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs],
            mean=0,
            stddev=0.01
        ),
        name="conv2d_weight"
    )
    bias = tf.Variable(tf.zeros([conv_num_outputs], dtype=tf.float32), name="conv2d_bias")
    
    # Strides in NHWC order: [batch, height, width, channels]
    cstrides = [1, conv_strides[0], conv_strides[1], 1]
    pstrides = [1, pool_strides[0], pool_strides[1], 1]
    
    output = tf.nn.conv2d(
        x_tensor,
        weight,
        strides=cstrides,
        padding="SAME"
    )
    
    output = tf.nn.bias_add(output, bias)
    output = tf.nn.relu(output)
    
    output = tf.nn.max_pool(
        output,
        ksize=[1, pool_ksize[0], pool_ksize[1], 1],
        strides=pstrides,
        padding="SAME"
    )
    
    return output
    


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed
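
A quick sanity check on the SAME-padding arithmetic: with SAME padding the output spatial size is ceil(input / stride), so the stride-1 convolution keeps 32x32 and the 2x2 pool with stride 2 halves it. A throwaway sketch (TF 1.x) to confirm:

# Hypothetical shape check: 32x32x3 in, 3x3 conv stride 1, 2x2 pool stride 2
check_in = tf.placeholder(tf.float32, (None, 32, 32, 3))
check_out = conv2d_maxpool(check_in, 10, (3, 3), (1, 1), (2, 2), (2, 2))
print(check_out.get_shape())  # (?, 16, 16, 10)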

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [9]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    tensor_dims = x_tensor.get_shape().as_list()
    
    return tf.reshape(x_tensor, [-1, tensor_dims[1]*tensor_dims[2]*tensor_dims[3]])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed
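
If you take the shortcut option here instead, the contrib layers package has a one-line equivalent in TF 1.x; a sketch (hypothetical name):

# Shortcut sketch: infers the flattened size from the static shape,
# same result as the manual reshape above
def flatten_shortcut(x_tensor):
    return tf.contrib.layers.flatten(x_tensor)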

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [10]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    tensor_shape = x_tensor.get_shape().as_list()
    
    weights = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0.0, stddev=0.03), name="weight_fc")
    bias = tf.Variable(tf.zeros([num_outputs]), name="weight_bias")
    
    output = tf.add(tf.matmul(x_tensor, weights), bias)
    
    output = tf.nn.relu(output)
    
    return output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed
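
The shortcut option for this layer is a single dense call in TF 1.x; a sketch (hypothetical name). The output layer in the next section would be the same call without an activation:

# Shortcut sketch: a dense layer with ReLU; tf.layers.dense creates the
# weight and bias variables internally
def fully_conn_shortcut(x_tensor, num_outputs):
    return tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu)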

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [11]:
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    tensor_shape = x_tensor.get_shape().as_list()
    
    weights = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0, stddev=0.01), name="output_weight")
    
    bias = tf.Variable(tf.zeros([num_outputs]), name="output_bias")
    
    output = tf.add(tf.matmul(x_tensor, weights), bias)
    
    return output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [12]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    conv_ksize = (3, 3)
    conv_strides = (1, 1)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    
    num_outputs = 10
    
    network = conv2d_maxpool(x, 16, conv_ksize, conv_strides, pool_ksize, pool_strides)
    network = conv2d_maxpool(network, 32, conv_ksize, conv_strides, pool_ksize, pool_strides)
    network = conv2d_maxpool(network, 64, conv_ksize, conv_strides, pool_ksize, pool_strides)
    
    network = flatten(network)
    network = fully_conn(network, 512)
    network = tf.nn.dropout(network, keep_prob=keep_prob)
    network = fully_conn(network, 1024)
    network = output(network, num_outputs)
    
    return network


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!
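
To make the dimensions concrete, here is a rough shape walk-through of conv_net for a 32x32x3 input (each 2x2, stride-2 pool with SAME padding halves the spatial size):

# input:        (None, 32, 32,  3)
# conv/pool 1:  (None, 16, 16, 16)
# conv/pool 2:  (None,  8,  8, 32)
# conv/pool 3:  (None,  4,  4, 64)
# flatten:      (None, 1024)        # 4 * 4 * 64
# fully_conn:   (None, 512) -> dropout -> (None, 1024)
# output:       (None, 10)          # logits, one per class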

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [13]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability
    })


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [14]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(cost, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: 1.0
    })
    
    # Evaluate the 5000 validation samples in chunks of 1000 to limit
    # memory use, then average the chunk accuracies
    chunk_accuracies = []
    
    for i in range(0, 5000, 1000):
        chunk_accuracies.append(session.run(accuracy, feed_dict={
            x: valid_features[i:i + 1000],
            y: valid_labels[i:i + 1000],
            keep_prob: 1.0
        }))
    
    valid_acc = np.mean(chunk_accuracies)
    
    print("Loss: {loss} - Validation Accuracy: {valid_acc}".format(loss=loss, valid_acc=valid_acc))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set it to a common size:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [15]:
# TODO: Tune Parameters
epochs = 50
batch_size = 1024
keep_probability = .5

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [16]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.3023252487182617 - Validation Accuracy: 0.10500000268220902
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.2761712074279785 - Validation Accuracy: 0.11979999989271164
Epoch  3, CIFAR-10 Batch 1:  Loss: 2.1865084171295166 - Validation Accuracy: 0.20139999091625213
Epoch  4, CIFAR-10 Batch 1:  Loss: 2.119492292404175 - Validation Accuracy: 0.22820000648498534
Epoch  5, CIFAR-10 Batch 1:  Loss: 2.106419563293457 - Validation Accuracy: 0.22619999647140504
Epoch  6, CIFAR-10 Batch 1:  Loss: 2.0896782875061035 - Validation Accuracy: 0.21759999990463258
Epoch  7, CIFAR-10 Batch 1:  Loss: 2.0696170330047607 - Validation Accuracy: 0.2534000039100647
Epoch  8, CIFAR-10 Batch 1:  Loss: 2.0224626064300537 - Validation Accuracy: 0.25939999520778656
Epoch  9, CIFAR-10 Batch 1:  Loss: 2.0147383213043213 - Validation Accuracy: 0.2578000009059906
Epoch 10, CIFAR-10 Batch 1:  Loss: 1.9871013164520264 - Validation Accuracy: 0.2745999932289124
Epoch 11, CIFAR-10 Batch 1:  Loss: 1.9443789720535278 - Validation Accuracy: 0.30099999308586123
Epoch 12, CIFAR-10 Batch 1:  Loss: 1.9461321830749512 - Validation Accuracy: 0.29480000138282775
Epoch 13, CIFAR-10 Batch 1:  Loss: 1.891274333000183 - Validation Accuracy: 0.31360000371932983
Epoch 14, CIFAR-10 Batch 1:  Loss: 1.8661649227142334 - Validation Accuracy: 0.3154000103473663
Epoch 15, CIFAR-10 Batch 1:  Loss: 1.8390666246414185 - Validation Accuracy: 0.3251999974250793
Epoch 16, CIFAR-10 Batch 1:  Loss: 1.8032653331756592 - Validation Accuracy: 0.3412000000476837
Epoch 17, CIFAR-10 Batch 1:  Loss: 1.7688990831375122 - Validation Accuracy: 0.3522000014781952
Epoch 18, CIFAR-10 Batch 1:  Loss: 1.7333544492721558 - Validation Accuracy: 0.3648000001907349
Epoch 19, CIFAR-10 Batch 1:  Loss: 1.688138484954834 - Validation Accuracy: 0.3824000060558319
Epoch 20, CIFAR-10 Batch 1:  Loss: 1.649538278579712 - Validation Accuracy: 0.3916000008583069
Epoch 21, CIFAR-10 Batch 1:  Loss: 1.6175535917282104 - Validation Accuracy: 0.39720000624656676
Epoch 22, CIFAR-10 Batch 1:  Loss: 1.5845305919647217 - Validation Accuracy: 0.4117999911308289
Epoch 23, CIFAR-10 Batch 1:  Loss: 1.556902527809143 - Validation Accuracy: 0.4131999909877777
Epoch 24, CIFAR-10 Batch 1:  Loss: 1.5270206928253174 - Validation Accuracy: 0.41760000586509705
Epoch 25, CIFAR-10 Batch 1:  Loss: 1.5095536708831787 - Validation Accuracy: 0.428000009059906
Epoch 26, CIFAR-10 Batch 1:  Loss: 1.4716688394546509 - Validation Accuracy: 0.437200003862381
Epoch 27, CIFAR-10 Batch 1:  Loss: 1.455000877380371 - Validation Accuracy: 0.44100000262260436
Epoch 28, CIFAR-10 Batch 1:  Loss: 1.432867169380188 - Validation Accuracy: 0.44400001168251035
Epoch 29, CIFAR-10 Batch 1:  Loss: 1.4144526720046997 - Validation Accuracy: 0.4525999903678894
Epoch 30, CIFAR-10 Batch 1:  Loss: 1.3914774656295776 - Validation Accuracy: 0.45839999318122865
Epoch 31, CIFAR-10 Batch 1:  Loss: 1.4002678394317627 - Validation Accuracy: 0.4541999876499176
Epoch 32, CIFAR-10 Batch 1:  Loss: 1.3598357439041138 - Validation Accuracy: 0.4586000025272369
Epoch 33, CIFAR-10 Batch 1:  Loss: 1.3423950672149658 - Validation Accuracy: 0.4631999909877777
Epoch 34, CIFAR-10 Batch 1:  Loss: 1.3375284671783447 - Validation Accuracy: 0.46739999055862425
Epoch 35, CIFAR-10 Batch 1:  Loss: 1.3203481435775757 - Validation Accuracy: 0.46599999666213987
Epoch 36, CIFAR-10 Batch 1:  Loss: 1.30838143825531 - Validation Accuracy: 0.4732000112533569
Epoch 37, CIFAR-10 Batch 1:  Loss: 1.290765643119812 - Validation Accuracy: 0.4725999772548676
Epoch 38, CIFAR-10 Batch 1:  Loss: 1.2702209949493408 - Validation Accuracy: 0.4741999864578247
Epoch 39, CIFAR-10 Batch 1:  Loss: 1.254655361175537 - Validation Accuracy: 0.4797999978065491
Epoch 40, CIFAR-10 Batch 1:  Loss: 1.2537062168121338 - Validation Accuracy: 0.48459999561309813
Epoch 41, CIFAR-10 Batch 1:  Loss: 1.2111647129058838 - Validation Accuracy: 0.49279998540878295
Epoch 42, CIFAR-10 Batch 1:  Loss: 1.2009388208389282 - Validation Accuracy: 0.4953999936580658
Epoch 43, CIFAR-10 Batch 1:  Loss: 1.2216259241104126 - Validation Accuracy: 0.4888000011444092
Epoch 44, CIFAR-10 Batch 1:  Loss: 1.180574893951416 - Validation Accuracy: 0.49679996967315676
Epoch 45, CIFAR-10 Batch 1:  Loss: 1.16954505443573 - Validation Accuracy: 0.4963999927043915
Epoch 46, CIFAR-10 Batch 1:  Loss: 1.1481034755706787 - Validation Accuracy: 0.49999997615814207
Epoch 47, CIFAR-10 Batch 1:  Loss: 1.1432173252105713 - Validation Accuracy: 0.5011999905109406
Epoch 48, CIFAR-10 Batch 1:  Loss: 1.133634328842163 - Validation Accuracy: 0.5019999861717224
Epoch 49, CIFAR-10 Batch 1:  Loss: 1.11086106300354 - Validation Accuracy: 0.5015999913215637
Epoch 50, CIFAR-10 Batch 1:  Loss: 1.1302587985992432 - Validation Accuracy: 0.503799992799759

Fully Train the Model

Now that you've gotten a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [17]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.2958123683929443 - Validation Accuracy: 0.10499999970197678
Epoch  1, CIFAR-10 Batch 2:  Loss: 2.211688756942749 - Validation Accuracy: 0.1629999965429306
Epoch  1, CIFAR-10 Batch 3:  Loss: 2.2199418544769287 - Validation Accuracy: 0.1973999947309494
Epoch  1, CIFAR-10 Batch 4:  Loss: 2.141542673110962 - Validation Accuracy: 0.18539999723434447
Epoch  1, CIFAR-10 Batch 5:  Loss: 2.0854597091674805 - Validation Accuracy: 0.2404000073671341
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.1125354766845703 - Validation Accuracy: 0.22919999361038207
Epoch  2, CIFAR-10 Batch 2:  Loss: 2.018734931945801 - Validation Accuracy: 0.25659999549388884
Epoch  2, CIFAR-10 Batch 3:  Loss: 1.9746681451797485 - Validation Accuracy: 0.27440000176429746
Epoch  2, CIFAR-10 Batch 4:  Loss: 1.9400256872177124 - Validation Accuracy: 0.2837999999523163
Epoch  2, CIFAR-10 Batch 5:  Loss: 1.9231189489364624 - Validation Accuracy: 0.2899999976158142
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.9472681283950806 - Validation Accuracy: 0.29119999408721925
Epoch  3, CIFAR-10 Batch 2:  Loss: 1.888248085975647 - Validation Accuracy: 0.3029999971389771
Epoch  3, CIFAR-10 Batch 3:  Loss: 1.861868977546692 - Validation Accuracy: 0.29519999623298643
Epoch  3, CIFAR-10 Batch 4:  Loss: 1.85967218875885 - Validation Accuracy: 0.30279999375343325
Epoch  3, CIFAR-10 Batch 5:  Loss: 1.8365060091018677 - Validation Accuracy: 0.3286000072956085
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.8003530502319336 - Validation Accuracy: 0.33459998965263366
Epoch  4, CIFAR-10 Batch 2:  Loss: 1.7728029489517212 - Validation Accuracy: 0.3449999988079071
Epoch  4, CIFAR-10 Batch 3:  Loss: 1.7025806903839111 - Validation Accuracy: 0.3557999968528748
Epoch  4, CIFAR-10 Batch 4:  Loss: 1.679390549659729 - Validation Accuracy: 0.3752000033855438
Epoch  4, CIFAR-10 Batch 5:  Loss: 1.6898162364959717 - Validation Accuracy: 0.38559999465942385
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.6402544975280762 - Validation Accuracy: 0.39600000977516175
Epoch  5, CIFAR-10 Batch 2:  Loss: 1.6686749458312988 - Validation Accuracy: 0.3879999935626984
Epoch  5, CIFAR-10 Batch 3:  Loss: 1.5360110998153687 - Validation Accuracy: 0.4167999982833862
Epoch  5, CIFAR-10 Batch 4:  Loss: 1.5725293159484863 - Validation Accuracy: 0.4185999929904938
Epoch  5, CIFAR-10 Batch 5:  Loss: 1.5671324729919434 - Validation Accuracy: 0.4327999949455261
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.5159920454025269 - Validation Accuracy: 0.43820000886917115
Epoch  6, CIFAR-10 Batch 2:  Loss: 1.533545732498169 - Validation Accuracy: 0.44339999556541443
Epoch  6, CIFAR-10 Batch 3:  Loss: 1.4516444206237793 - Validation Accuracy: 0.4534000098705292
Epoch  6, CIFAR-10 Batch 4:  Loss: 1.509974718093872 - Validation Accuracy: 0.4490000128746033
Epoch  6, CIFAR-10 Batch 5:  Loss: 1.484488606452942 - Validation Accuracy: 0.4613999843597412
Epoch  7, CIFAR-10 Batch 1:  Loss: 1.4803364276885986 - Validation Accuracy: 0.46140000224113464
Epoch  7, CIFAR-10 Batch 2:  Loss: 1.481225609779358 - Validation Accuracy: 0.45619999766349795
Epoch  7, CIFAR-10 Batch 3:  Loss: 1.3988326787948608 - Validation Accuracy: 0.4656000077724457
Epoch  7, CIFAR-10 Batch 4:  Loss: 1.4447731971740723 - Validation Accuracy: 0.47319998741149905
Epoch  7, CIFAR-10 Batch 5:  Loss: 1.428748607635498 - Validation Accuracy: 0.47399999499320983
Epoch  8, CIFAR-10 Batch 1:  Loss: 1.4171431064605713 - Validation Accuracy: 0.4806000113487244
Epoch  8, CIFAR-10 Batch 2:  Loss: 1.444998860359192 - Validation Accuracy: 0.46659998297691346
Epoch  8, CIFAR-10 Batch 3:  Loss: 1.3595635890960693 - Validation Accuracy: 0.4791999876499176
Epoch  8, CIFAR-10 Batch 4:  Loss: 1.4150464534759521 - Validation Accuracy: 0.4813999950885773
Epoch  8, CIFAR-10 Batch 5:  Loss: 1.388658881187439 - Validation Accuracy: 0.4896000027656555
Epoch  9, CIFAR-10 Batch 1:  Loss: 1.393131136894226 - Validation Accuracy: 0.4880000114440918
Epoch  9, CIFAR-10 Batch 2:  Loss: 1.4183030128479004 - Validation Accuracy: 0.48159998655319214
Epoch  9, CIFAR-10 Batch 3:  Loss: 1.3417136669158936 - Validation Accuracy: 0.4898000001907349
Epoch  9, CIFAR-10 Batch 4:  Loss: 1.3698838949203491 - Validation Accuracy: 0.496999990940094
Epoch  9, CIFAR-10 Batch 5:  Loss: 1.3459516763687134 - Validation Accuracy: 0.4987999975681305
Epoch 10, CIFAR-10 Batch 1:  Loss: 1.349015474319458 - Validation Accuracy: 0.49939998984336853
Epoch 10, CIFAR-10 Batch 2:  Loss: 1.3835818767547607 - Validation Accuracy: 0.4892000138759613
Epoch 10, CIFAR-10 Batch 3:  Loss: 1.3133878707885742 - Validation Accuracy: 0.5039999902248382
Epoch 10, CIFAR-10 Batch 4:  Loss: 1.3440748453140259 - Validation Accuracy: 0.49839999675750735
Epoch 10, CIFAR-10 Batch 5:  Loss: 1.3170334100723267 - Validation Accuracy: 0.5065999925136566
Epoch 11, CIFAR-10 Batch 1:  Loss: 1.337259292602539 - Validation Accuracy: 0.5053999900817872
Epoch 11, CIFAR-10 Batch 2:  Loss: 1.3526456356048584 - Validation Accuracy: 0.5019999861717224
Epoch 11, CIFAR-10 Batch 3:  Loss: 1.2804372310638428 - Validation Accuracy: 0.5053999841213226
Epoch 11, CIFAR-10 Batch 4:  Loss: 1.3086856603622437 - Validation Accuracy: 0.5110000014305115
Epoch 11, CIFAR-10 Batch 5:  Loss: 1.2899279594421387 - Validation Accuracy: 0.5139999747276306
Epoch 12, CIFAR-10 Batch 1:  Loss: 1.301448941230774 - Validation Accuracy: 0.5148000240325927
Epoch 12, CIFAR-10 Batch 2:  Loss: 1.325179100036621 - Validation Accuracy: 0.5083999991416931
Epoch 12, CIFAR-10 Batch 3:  Loss: 1.2537459135055542 - Validation Accuracy: 0.5139999747276306
Epoch 12, CIFAR-10 Batch 4:  Loss: 1.2810451984405518 - Validation Accuracy: 0.5172000050544738
Epoch 12, CIFAR-10 Batch 5:  Loss: 1.2551683187484741 - Validation Accuracy: 0.518399977684021
Epoch 13, CIFAR-10 Batch 1:  Loss: 1.2725615501403809 - Validation Accuracy: 0.5203999996185302
Epoch 13, CIFAR-10 Batch 2:  Loss: 1.3096964359283447 - Validation Accuracy: 0.5192000269889832
Epoch 13, CIFAR-10 Batch 3:  Loss: 1.2326921224594116 - Validation Accuracy: 0.5267999649047852
Epoch 13, CIFAR-10 Batch 4:  Loss: 1.252631664276123 - Validation Accuracy: 0.5212000012397766
Epoch 13, CIFAR-10 Batch 5:  Loss: 1.233960747718811 - Validation Accuracy: 0.5228000164031983
Epoch 14, CIFAR-10 Batch 1:  Loss: 1.2387971878051758 - Validation Accuracy: 0.5287999987602234
Epoch 14, CIFAR-10 Batch 2:  Loss: 1.294676661491394 - Validation Accuracy: 0.5187999963760376
Epoch 14, CIFAR-10 Batch 3:  Loss: 1.211846113204956 - Validation Accuracy: 0.5319999814033508
Epoch 14, CIFAR-10 Batch 4:  Loss: 1.228611946105957 - Validation Accuracy: 0.5254000067710877
Epoch 14, CIFAR-10 Batch 5:  Loss: 1.2142363786697388 - Validation Accuracy: 0.5264000058174133
Epoch 15, CIFAR-10 Batch 1:  Loss: 1.2286944389343262 - Validation Accuracy: 0.5338000059127808
Epoch 15, CIFAR-10 Batch 2:  Loss: 1.2621287107467651 - Validation Accuracy: 0.5286000013351441
Epoch 15, CIFAR-10 Batch 3:  Loss: 1.1827211380004883 - Validation Accuracy: 0.5401999950408936
Epoch 15, CIFAR-10 Batch 4:  Loss: 1.2019433975219727 - Validation Accuracy: 0.5318000078201294
Epoch 15, CIFAR-10 Batch 5:  Loss: 1.1825482845306396 - Validation Accuracy: 0.5349999904632569
Epoch 16, CIFAR-10 Batch 1:  Loss: 1.2078999280929565 - Validation Accuracy: 0.5424000024795532
Epoch 16, CIFAR-10 Batch 2:  Loss: 1.2281323671340942 - Validation Accuracy: 0.5333999991416931
Epoch 16, CIFAR-10 Batch 3:  Loss: 1.1473838090896606 - Validation Accuracy: 0.5434000134468079
Epoch 16, CIFAR-10 Batch 4:  Loss: 1.1749961376190186 - Validation Accuracy: 0.5391999840736389
Epoch 16, CIFAR-10 Batch 5:  Loss: 1.1423965692520142 - Validation Accuracy: 0.5442000150680542
Epoch 17, CIFAR-10 Batch 1:  Loss: 1.1753084659576416 - Validation Accuracy: 0.5484000086784363
Epoch 17, CIFAR-10 Batch 2:  Loss: 1.1805036067962646 - Validation Accuracy: 0.5437999725341797
Epoch 17, CIFAR-10 Batch 3:  Loss: 1.1403844356536865 - Validation Accuracy: 0.5388000011444092
Epoch 17, CIFAR-10 Batch 4:  Loss: 1.1417464017868042 - Validation Accuracy: 0.5486000061035157
Epoch 17, CIFAR-10 Batch 5:  Loss: 1.1230573654174805 - Validation Accuracy: 0.5507999897003174
Epoch 18, CIFAR-10 Batch 1:  Loss: 1.151024341583252 - Validation Accuracy: 0.5558000087738038
Epoch 18, CIFAR-10 Batch 2:  Loss: 1.1568048000335693 - Validation Accuracy: 0.5604000091552734
Epoch 18, CIFAR-10 Batch 3:  Loss: 1.1049168109893799 - Validation Accuracy: 0.549399983882904
Epoch 18, CIFAR-10 Batch 4:  Loss: 1.1111834049224854 - Validation Accuracy: 0.5527999997138977
Epoch 18, CIFAR-10 Batch 5:  Loss: 1.086025595664978 - Validation Accuracy: 0.5568000197410583
Epoch 19, CIFAR-10 Batch 1:  Loss: 1.1167596578598022 - Validation Accuracy: 0.5679999947547912
Epoch 19, CIFAR-10 Batch 2:  Loss: 1.1192626953125 - Validation Accuracy: 0.5685999989509583
Epoch 19, CIFAR-10 Batch 3:  Loss: 1.107083797454834 - Validation Accuracy: 0.550000011920929
Epoch 19, CIFAR-10 Batch 4:  Loss: 1.0935477018356323 - Validation Accuracy: 0.5628000020980835
Epoch 19, CIFAR-10 Batch 5:  Loss: 1.0631823539733887 - Validation Accuracy: 0.5612000107765198
Epoch 20, CIFAR-10 Batch 1:  Loss: 1.0831830501556396 - Validation Accuracy: 0.572000014781952
Epoch 20, CIFAR-10 Batch 2:  Loss: 1.0922143459320068 - Validation Accuracy: 0.5729999899864197
Epoch 20, CIFAR-10 Batch 3:  Loss: 1.0583816766738892 - Validation Accuracy: 0.5623999953269958
Epoch 20, CIFAR-10 Batch 4:  Loss: 1.060630202293396 - Validation Accuracy: 0.5673999667167664
Epoch 20, CIFAR-10 Batch 5:  Loss: 1.026280403137207 - Validation Accuracy: 0.571999979019165
Epoch 21, CIFAR-10 Batch 1:  Loss: 1.0506675243377686 - Validation Accuracy: 0.5770000219345093
Epoch 21, CIFAR-10 Batch 2:  Loss: 1.065114140510559 - Validation Accuracy: 0.572000014781952
Epoch 21, CIFAR-10 Batch 3:  Loss: 1.0035187005996704 - Validation Accuracy: 0.580400013923645
Epoch 21, CIFAR-10 Batch 4:  Loss: 1.0109930038452148 - Validation Accuracy: 0.5825999975204468
Epoch 21, CIFAR-10 Batch 5:  Loss: 1.004485011100769 - Validation Accuracy: 0.5801999688148498
Epoch 22, CIFAR-10 Batch 1:  Loss: 1.0179330110549927 - Validation Accuracy: 0.5831999897956848
Epoch 22, CIFAR-10 Batch 2:  Loss: 1.0319901704788208 - Validation Accuracy: 0.5883999943733216
Epoch 22, CIFAR-10 Batch 3:  Loss: 0.9890032410621643 - Validation Accuracy: 0.5817999958992004
Epoch 22, CIFAR-10 Batch 4:  Loss: 0.9857237339019775 - Validation Accuracy: 0.5863999843597412
Epoch 22, CIFAR-10 Batch 5:  Loss: 0.9744945764541626 - Validation Accuracy: 0.5881999850273132
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.99955815076828 - Validation Accuracy: 0.5953999876976013
Epoch 23, CIFAR-10 Batch 2:  Loss: 1.0147514343261719 - Validation Accuracy: 0.5922000050544739
Epoch 23, CIFAR-10 Batch 3:  Loss: 0.9694851636886597 - Validation Accuracy: 0.5819999933242798
Epoch 23, CIFAR-10 Batch 4:  Loss: 0.9570097923278809 - Validation Accuracy: 0.5972000122070312
Epoch 23, CIFAR-10 Batch 5:  Loss: 0.949580192565918 - Validation Accuracy: 0.601199996471405
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.9642145037651062 - Validation Accuracy: 0.5917999863624572
Epoch 24, CIFAR-10 Batch 2:  Loss: 0.986356258392334 - Validation Accuracy: 0.6061999917030334
Epoch 24, CIFAR-10 Batch 3:  Loss: 0.934219479560852 - Validation Accuracy: 0.5887999892234802
Epoch 24, CIFAR-10 Batch 4:  Loss: 0.9305890202522278 - Validation Accuracy: 0.6004000067710876
Epoch 24, CIFAR-10 Batch 5:  Loss: 0.9257680177688599 - Validation Accuracy: 0.6033999800682068
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.9497304558753967 - Validation Accuracy: 0.5988000154495239
Epoch 25, CIFAR-10 Batch 2:  Loss: 0.9654070138931274 - Validation Accuracy: 0.6055999994277954
Epoch 25, CIFAR-10 Batch 3:  Loss: 0.8988060355186462 - Validation Accuracy: 0.6008000254631043
Epoch 25, CIFAR-10 Batch 4:  Loss: 0.8995476961135864 - Validation Accuracy: 0.6066000103950501
Epoch 25, CIFAR-10 Batch 5:  Loss: 0.8940737843513489 - Validation Accuracy: 0.6031999826431275
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.9107112884521484 - Validation Accuracy: 0.6074000120162963
Epoch 26, CIFAR-10 Batch 2:  Loss: 0.9378106594085693 - Validation Accuracy: 0.6047999739646912
Epoch 26, CIFAR-10 Batch 3:  Loss: 0.8826156258583069 - Validation Accuracy: 0.6032000303268432
Epoch 26, CIFAR-10 Batch 4:  Loss: 0.8902267813682556 - Validation Accuracy: 0.604800009727478
Epoch 26, CIFAR-10 Batch 5:  Loss: 0.8760175108909607 - Validation Accuracy: 0.6167999863624573
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.9026194214820862 - Validation Accuracy: 0.6104000210762024
Epoch 27, CIFAR-10 Batch 2:  Loss: 0.9101120829582214 - Validation Accuracy: 0.6131999969482422
Epoch 27, CIFAR-10 Batch 3:  Loss: 0.8455016613006592 - Validation Accuracy: 0.6139999866485596
Epoch 27, CIFAR-10 Batch 4:  Loss: 0.8484264612197876 - Validation Accuracy: 0.6139999985694885
Epoch 27, CIFAR-10 Batch 5:  Loss: 0.852256178855896 - Validation Accuracy: 0.6175999879837036
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.868722677230835 - Validation Accuracy: 0.6175999879837036
Epoch 28, CIFAR-10 Batch 2:  Loss: 0.8751017451286316 - Validation Accuracy: 0.6215999960899353
Epoch 28, CIFAR-10 Batch 3:  Loss: 0.8417572379112244 - Validation Accuracy: 0.6136000156402588
Epoch 28, CIFAR-10 Batch 4:  Loss: 0.8167181611061096 - Validation Accuracy: 0.6224000096321106
Epoch 28, CIFAR-10 Batch 5:  Loss: 0.8316553235054016 - Validation Accuracy: 0.6204000115394592
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.8442394733428955 - Validation Accuracy: 0.6204000234603881
Epoch 29, CIFAR-10 Batch 2:  Loss: 0.8604745268821716 - Validation Accuracy: 0.6200000166893005
Epoch 29, CIFAR-10 Batch 3:  Loss: 0.7930777668952942 - Validation Accuracy: 0.625599992275238
Epoch 29, CIFAR-10 Batch 4:  Loss: 0.8269283175468445 - Validation Accuracy: 0.6113999843597412
Epoch 29, CIFAR-10 Batch 5:  Loss: 0.7888728380203247 - Validation Accuracy: 0.6292000055313111
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.8458414673805237 - Validation Accuracy: 0.620199978351593
Epoch 30, CIFAR-10 Batch 2:  Loss: 0.853675127029419 - Validation Accuracy: 0.6267999887466431
Epoch 30, CIFAR-10 Batch 3:  Loss: 0.7888988852500916 - Validation Accuracy: 0.623799991607666
Epoch 30, CIFAR-10 Batch 4:  Loss: 0.7828641533851624 - Validation Accuracy: 0.6287999987602234
Epoch 30, CIFAR-10 Batch 5:  Loss: 0.7912818193435669 - Validation Accuracy: 0.6306000113487243
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.8120032548904419 - Validation Accuracy: 0.6221999883651733
Epoch 31, CIFAR-10 Batch 2:  Loss: 0.8110247850418091 - Validation Accuracy: 0.6349999666213989
Epoch 31, CIFAR-10 Batch 3:  Loss: 0.762914776802063 - Validation Accuracy: 0.6270000100135803
Epoch 31, CIFAR-10 Batch 4:  Loss: 0.7804691195487976 - Validation Accuracy: 0.6206000089645386
Epoch 31, CIFAR-10 Batch 5:  Loss: 0.7800992131233215 - Validation Accuracy: 0.6303999662399292
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.7853965759277344 - Validation Accuracy: 0.6352000117301941
Epoch 32, CIFAR-10 Batch 2:  Loss: 0.7946333885192871 - Validation Accuracy: 0.6355999827384948
Epoch 32, CIFAR-10 Batch 3:  Loss: 0.7323595881462097 - Validation Accuracy: 0.6291999816894531
Epoch 32, CIFAR-10 Batch 4:  Loss: 0.7547138333320618 - Validation Accuracy: 0.6277999997138977
Epoch 32, CIFAR-10 Batch 5:  Loss: 0.764913022518158 - Validation Accuracy: 0.6280000329017639
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.7657198905944824 - Validation Accuracy: 0.6373999834060669
Epoch 33, CIFAR-10 Batch 2:  Loss: 0.7814474701881409 - Validation Accuracy: 0.6246000051498413
Epoch 33, CIFAR-10 Batch 3:  Loss: 0.7010604739189148 - Validation Accuracy: 0.6407999992370605
Epoch 33, CIFAR-10 Batch 4:  Loss: 0.7272831797599792 - Validation Accuracy: 0.6251999974250794
Epoch 33, CIFAR-10 Batch 5:  Loss: 0.7402984499931335 - Validation Accuracy: 0.6361999988555909
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.7428336143493652 - Validation Accuracy: 0.6357999920845032
Epoch 34, CIFAR-10 Batch 2:  Loss: 0.7600041627883911 - Validation Accuracy: 0.6361999988555909
Epoch 34, CIFAR-10 Batch 3:  Loss: 0.6970824599266052 - Validation Accuracy: 0.6401999711990356
Epoch 34, CIFAR-10 Batch 4:  Loss: 0.7187796235084534 - Validation Accuracy: 0.6236000061035156
Epoch 34, CIFAR-10 Batch 5:  Loss: 0.7241672277450562 - Validation Accuracy: 0.6420000076293946
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.7229851484298706 - Validation Accuracy: 0.6350000023841857
Epoch 35, CIFAR-10 Batch 2:  Loss: 0.7664637565612793 - Validation Accuracy: 0.6301999926567078
Epoch 35, CIFAR-10 Batch 3:  Loss: 0.6956785917282104 - Validation Accuracy: 0.6389999985694885
Epoch 35, CIFAR-10 Batch 4:  Loss: 0.6827442049980164 - Validation Accuracy: 0.6382000207901001
Epoch 35, CIFAR-10 Batch 5:  Loss: 0.7096099853515625 - Validation Accuracy: 0.6416000008583069
Epoch 36, CIFAR-10 Batch 1:  Loss: 0.6932011246681213 - Validation Accuracy: 0.6438000202178955
Epoch 36, CIFAR-10 Batch 2:  Loss: 0.727161705493927 - Validation Accuracy: 0.6363999724388123
Epoch 36, CIFAR-10 Batch 3:  Loss: 0.6792712211608887 - Validation Accuracy: 0.6321999907493592
Epoch 36, CIFAR-10 Batch 4:  Loss: 0.6552255749702454 - Validation Accuracy: 0.6412000179290771
Epoch 36, CIFAR-10 Batch 5:  Loss: 0.6789178252220154 - Validation Accuracy: 0.6432000041007996
Epoch 37, CIFAR-10 Batch 1:  Loss: 0.6845470070838928 - Validation Accuracy: 0.6458000063896179
Epoch 37, CIFAR-10 Batch 2:  Loss: 0.7147436141967773 - Validation Accuracy: 0.6363999962806701
Epoch 37, CIFAR-10 Batch 3:  Loss: 0.7090072631835938 - Validation Accuracy: 0.6125999808311462
Epoch 37, CIFAR-10 Batch 4:  Loss: 0.6606859564781189 - Validation Accuracy: 0.6379999876022339
Epoch 37, CIFAR-10 Batch 5:  Loss: 0.6899051666259766 - Validation Accuracy: 0.6381999969482421
Epoch 38, CIFAR-10 Batch 1:  Loss: 0.6793190240859985 - Validation Accuracy: 0.6387999892234802
Epoch 38, CIFAR-10 Batch 2:  Loss: 0.6902021169662476 - Validation Accuracy: 0.6447999954223633
Epoch 38, CIFAR-10 Batch 3:  Loss: 0.6613288521766663 - Validation Accuracy: 0.6324000120162964
Epoch 38, CIFAR-10 Batch 4:  Loss: 0.6580072045326233 - Validation Accuracy: 0.6349999904632568
Epoch 38, CIFAR-10 Batch 5:  Loss: 0.6687060594558716 - Validation Accuracy: 0.6363999843597412
Epoch 39, CIFAR-10 Batch 1:  Loss: 0.6430783271789551 - Validation Accuracy: 0.6459999799728393
Epoch 39, CIFAR-10 Batch 2:  Loss: 0.6657017469406128 - Validation Accuracy: 0.6507999897003174
Epoch 39, CIFAR-10 Batch 3:  Loss: 0.6386556029319763 - Validation Accuracy: 0.6347999930381775
Epoch 39, CIFAR-10 Batch 4:  Loss: 0.6239820122718811 - Validation Accuracy: 0.6388000130653382
Epoch 39, CIFAR-10 Batch 5:  Loss: 0.6378450989723206 - Validation Accuracy: 0.6475999832153321
Epoch 40, CIFAR-10 Batch 1:  Loss: 0.6333551406860352 - Validation Accuracy: 0.6442000150680542
Epoch 40, CIFAR-10 Batch 2:  Loss: 0.6269297003746033 - Validation Accuracy: 0.6562000155448914
Epoch 40, CIFAR-10 Batch 3:  Loss: 0.600254237651825 - Validation Accuracy: 0.6465999722480774
Epoch 40, CIFAR-10 Batch 4:  Loss: 0.5847431421279907 - Validation Accuracy: 0.6408000111579895
Epoch 40, CIFAR-10 Batch 5:  Loss: 0.6137900352478027 - Validation Accuracy: 0.6527999877929688
Epoch 41, CIFAR-10 Batch 1:  Loss: 0.6060819625854492 - Validation Accuracy: 0.6495999693870544
Epoch 41, CIFAR-10 Batch 2:  Loss: 0.6163395643234253 - Validation Accuracy: 0.6555999994277955
Epoch 41, CIFAR-10 Batch 3:  Loss: 0.5885167121887207 - Validation Accuracy: 0.6391999959945679
Epoch 41, CIFAR-10 Batch 4:  Loss: 0.5822291374206543 - Validation Accuracy: 0.6429999947547913
Epoch 41, CIFAR-10 Batch 5:  Loss: 0.5982632637023926 - Validation Accuracy: 0.6542000055313111
Epoch 42, CIFAR-10 Batch 1:  Loss: 0.5965672135353088 - Validation Accuracy: 0.6476000070571899
Epoch 42, CIFAR-10 Batch 2:  Loss: 0.6292796730995178 - Validation Accuracy: 0.6564000010490417
Epoch 42, CIFAR-10 Batch 3:  Loss: 0.556544840335846 - Validation Accuracy: 0.6436000227928161
Epoch 42, CIFAR-10 Batch 4:  Loss: 0.5568134784698486 - Validation Accuracy: 0.6465999841690063
Epoch 42, CIFAR-10 Batch 5:  Loss: 0.6112065315246582 - Validation Accuracy: 0.6426000237464905
Epoch 43, CIFAR-10 Batch 1:  Loss: 0.5965697169303894 - Validation Accuracy: 0.6417999863624573
Epoch 43, CIFAR-10 Batch 2:  Loss: 0.5984639525413513 - Validation Accuracy: 0.6537999987602234
Epoch 43, CIFAR-10 Batch 3:  Loss: 0.5413165092468262 - Validation Accuracy: 0.6501999855041504
Epoch 43, CIFAR-10 Batch 4:  Loss: 0.5478262305259705 - Validation Accuracy: 0.6508000135421753
Epoch 43, CIFAR-10 Batch 5:  Loss: 0.5739781260490417 - Validation Accuracy: 0.6516000151634216
Epoch 44, CIFAR-10 Batch 1:  Loss: 0.5768254995346069 - Validation Accuracy: 0.6477999925613404
Epoch 44, CIFAR-10 Batch 2:  Loss: 0.6203601360321045 - Validation Accuracy: 0.6517999768257141
Epoch 44, CIFAR-10 Batch 3:  Loss: 0.5291446447372437 - Validation Accuracy: 0.6507999897003174
Epoch 44, CIFAR-10 Batch 4:  Loss: 0.5398900508880615 - Validation Accuracy: 0.6473999977111816
Epoch 44, CIFAR-10 Batch 5:  Loss: 0.5643680691719055 - Validation Accuracy: 0.6462000012397766
Epoch 45, CIFAR-10 Batch 1:  Loss: 0.5665495991706848 - Validation Accuracy: 0.6510000109672547
Epoch 45, CIFAR-10 Batch 2:  Loss: 0.5997424125671387 - Validation Accuracy: 0.6591999888420105
Epoch 45, CIFAR-10 Batch 3:  Loss: 0.5260621309280396 - Validation Accuracy: 0.645199978351593
Epoch 45, CIFAR-10 Batch 4:  Loss: 0.523960530757904 - Validation Accuracy: 0.6549999833106994
Epoch 45, CIFAR-10 Batch 5:  Loss: 0.5927390456199646 - Validation Accuracy: 0.6387999892234802
Epoch 46, CIFAR-10 Batch 1:  Loss: 0.567211389541626 - Validation Accuracy: 0.6511999845504761
Epoch 46, CIFAR-10 Batch 2:  Loss: 0.5854371190071106 - Validation Accuracy: 0.6652000069618225
Epoch 46, CIFAR-10 Batch 3:  Loss: 0.5283147692680359 - Validation Accuracy: 0.6527999877929688
Epoch 46, CIFAR-10 Batch 4:  Loss: 0.5533863306045532 - Validation Accuracy: 0.6436000227928161
Epoch 46, CIFAR-10 Batch 5:  Loss: 0.576483428478241 - Validation Accuracy: 0.6414000034332276
Epoch 47, CIFAR-10 Batch 1:  Loss: 0.5598692297935486 - Validation Accuracy: 0.6545999884605408
Epoch 47, CIFAR-10 Batch 2:  Loss: 0.5756838917732239 - Validation Accuracy: 0.6557999730110169
Epoch 47, CIFAR-10 Batch 3:  Loss: 0.4915195405483246 - Validation Accuracy: 0.6544000029563903
Epoch 47, CIFAR-10 Batch 4:  Loss: 0.52275151014328 - Validation Accuracy: 0.6501999735832215
Epoch 47, CIFAR-10 Batch 5:  Loss: 0.5642235279083252 - Validation Accuracy: 0.6410000085830688
Epoch 48, CIFAR-10 Batch 1:  Loss: 0.5278795957565308 - Validation Accuracy: 0.6535999655723572
Epoch 48, CIFAR-10 Batch 2:  Loss: 0.570724606513977 - Validation Accuracy: 0.6575999736785889
Epoch 48, CIFAR-10 Batch 3:  Loss: 0.4801484942436218 - Validation Accuracy: 0.6521999955177307
Epoch 48, CIFAR-10 Batch 4:  Loss: 0.5265896916389465 - Validation Accuracy: 0.6430000185966491
Epoch 48, CIFAR-10 Batch 5:  Loss: 0.5361164808273315 - Validation Accuracy: 0.6490000128746033
Epoch 49, CIFAR-10 Batch 1:  Loss: 0.5470927357673645 - Validation Accuracy: 0.647000002861023
Epoch 49, CIFAR-10 Batch 2:  Loss: 0.5558416843414307 - Validation Accuracy: 0.6536000013351441
Epoch 49, CIFAR-10 Batch 3:  Loss: 0.47760719060897827 - Validation Accuracy: 0.6563999772071838
Epoch 49, CIFAR-10 Batch 4:  Loss: 0.5202680826187134 - Validation Accuracy: 0.6496000051498413
Epoch 49, CIFAR-10 Batch 5:  Loss: 0.5330945253372192 - Validation Accuracy: 0.6502000093460083
Epoch 50, CIFAR-10 Batch 1:  Loss: 0.4978303015232086 - Validation Accuracy: 0.65660001039505
Epoch 50, CIFAR-10 Batch 2:  Loss: 0.5400898456573486 - Validation Accuracy: 0.6555999994277955
Epoch 50, CIFAR-10 Batch 3:  Loss: 0.4699399471282959 - Validation Accuracy: 0.6572000026702881
Epoch 50, CIFAR-10 Batch 4:  Loss: 0.490442156791687 - Validation Accuracy: 0.6512000083923339
Epoch 50, CIFAR-10 Batch 5:  Loss: 0.47929590940475464 - Validation Accuracy: 0.6587999939918519

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [18]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.660636556148529

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.