Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll see your neural network's predictions on sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)


All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images, where each label is one of the following:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself: "What are all the possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?" Answers to questions like these will help you preprocess the data and end up with better predictions.


In [5]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 1
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 1:
Image - Min Value: 5 Max Value: 254
Image - Shape: (32, 32, 3)
Label - Label Id: 9 Name: truck

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [6]:
def normalize(x):
    """
    Normalize a list of sample image data to the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: NumPy array of normalized data
    """
    # TODO: Implement Function
    # Reference: Intro to TensorFlow - Min-Max scaling for grayscale image data
    a = 0
    b = 1
    grayscale_min = 0
    grayscale_max = 255
    return a + ((x - grayscale_min) * (b - a)) / (grayscale_max - grayscale_min)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed
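
Since the exploration cell above reported 8-bit pixel values between 0 and 255, the min-max scaling collapses to a plain division by 255. A minimal alternative sketch (the name normalize_simple is just for illustration, assuming x holds raw pixel values in [0, 255]):

import numpy as np

def normalize_simple(x):
    # raw pixel values in [0, 255] scaled to [0, 1]; shape is preserved
    return np.array(x) / 255.0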

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [7]:
from sklearn import preprocessing
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    # Reference: Intro to TensorFlow - One-Hot Encoding
    # Create the encoder
    lb = preprocessing.LabelBinarizer()
    # Here the encoder finds the classes and assigns one-hot vectors 
    lb.fit([0,1,2,3,4,5,6,7,8,9])
    # And finally, transform the labels into one-hot encoded vectors
    return lb.transform(x)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed
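
A NumPy-only sketch of the same mapping (assuming the labels are integers 0-9 as stated above; the name one_hot_encode_np is just for illustration): row i of a 10x10 identity matrix is exactly the one-hot vector for label i.

import numpy as np

def one_hot_encode_np(x):
    # index the identity matrix with the label array; row i is the one-hot vector for class i
    return np.eye(10)[np.array(x)]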

Randomize Data

As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
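
If you did want to reshuffle, a minimal sketch (assuming features and labels are NumPy arrays of equal length; shuffle_together is just an illustrative name) is to apply one random permutation to both arrays so each image stays paired with its label:

import numpy as np

def shuffle_together(features, labels):
    # a single permutation keeps image/label pairs aligned
    idx = np.random.permutation(len(features))
    return features[idx], labels[idx]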

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [8]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [9]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolution and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.


In [10]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, [None, n_classes], name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.
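
These names matter because the saved graph is reloaded by name at the end of the project. As a rough sketch of how that works (assuming the placeholders above live in the current default graph; the ':0' suffix selects the op's first output tensor):

graph = tf.get_default_graph()
x_tensor = graph.get_tensor_by_name('x:0')
keep_prob_tensor = graph.get_tensor_by_name('keep_prob:0')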

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [11]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    print ("ConvMax In", x_tensor.get_shape())
    x_depth = x_tensor.get_shape().as_list()[-1]
    weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_depth, conv_num_outputs], stddev=0.1))
    bias = tf.Variable(tf.random_normal([conv_num_outputs]))
    conv = tf.nn.conv2d(x_tensor, weight, [1, conv_strides[0], conv_strides[1], 1], 'SAME')
    conv = tf.nn.bias_add(conv, bias)
    conv = tf.nn.relu(conv)
    conv = tf.nn.max_pool(conv, [1, pool_ksize[0], pool_ksize[1], 1], [1, pool_strides[0], pool_strides[1], 1], 'SAME')
    print ("ConvMax Out", conv.get_shape())
    return conv 


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


ConvMax In (?, 32, 32, 5)
ConvMax Out (?, 4, 4, 10)
Tests Passed
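
As a sanity check on the shapes printed above and later in the notebook: with 'SAME' padding the spatial output size depends only on the stride, output = ceil(input / stride). A small sketch of that arithmetic, using the conv_net call made later (32x32 input, conv strides of 2, pool strides of 1):

import math

def same_pad_out(size, stride):
    # with 'SAME' padding, output size = ceil(input size / stride)
    return math.ceil(size / stride)

print(same_pad_out(32, 2))  # convolution with stride 2: 32 -> 16
print(same_pad_out(16, 1))  # max pooling with stride 1: 16 -> 16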

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [12]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    x_shape = x_tensor.get_shape().as_list()
    print ("Flatten In", x_shape)
    batch_size = x_shape[0] if x_shape[0] is not None else -1
    flattened_image_size = 1
    for i in range(1, len(x_shape)):
        flattened_image_size *= x_shape[i]
    ret = tf.reshape(x_tensor, (batch_size, flattened_image_size))
    print ("Flatten Out", ret.get_shape())
    return ret

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Flatten In [None, 10, 30, 6]
Flatten Out (?, 1800)
Tests Passed
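
A more compact sketch of the same reshape (TF 1.x as used above; flatten_alt is just an illustrative name, assuming all non-batch dimensions are statically known): pass -1 so TensorFlow infers the batch dimension and let NumPy multiply the remaining dimensions into the flattened size.

import numpy as np
import tensorflow as tf

def flatten_alt(x_tensor):
    # -1 lets TensorFlow infer the (dynamic) batch size
    flat_size = int(np.prod(x_tensor.get_shape().as_list()[1:]))
    return tf.reshape(x_tensor, [-1, flat_size])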

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [13]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    x_shape = x_tensor.get_shape().as_list()
    print ("FullyConn In", x_shape)
    weights = tf.Variable(tf.truncated_normal([x_shape[-1], num_outputs], stddev = 0.1))
    bias    = tf.Variable(tf.random_normal([num_outputs]))
    ret_tf  = tf.add(tf.matmul(x_tensor, weights), bias)
    ret_tf  = tf.nn.relu(ret_tf)
    print ("FullyConn Out", ret_tf.get_shape())
    return ret_tf


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


FullyConn In [None, 128]
FullyConn Out (?, 40)
Tests Passed
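
For comparison, the shortcut option mentioned above boils down to a single call. A sketch using the TF Layers package (TF 1.x API; the function name is just for illustration):

import tensorflow as tf

def fully_conn_layers(x_tensor, num_outputs):
    # tf.layers.dense creates the weights and bias internally and applies ReLU
    return tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu)

The output layer in the next section would be the same call with activation=None, since no activation should be applied there.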

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [14]:
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    x_shape = x_tensor.get_shape().as_list()
    print ("Output In", x_shape)
    weights = tf.Variable(tf.truncated_normal([x_shape[-1], num_outputs], stddev = 0.1))
    bias    = tf.Variable(tf.random_normal([num_outputs]))
    ret_tf  = tf.add(tf.matmul(x_tensor, weights), bias)
    # No activation here: the output layer should return raw logits (see note above)
    print ("Output Out", ret_tf.get_shape())
    return ret_tf


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Output In [None, 128]
Output Out (?, 40)
Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [15]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    convmax = conv2d_maxpool(x, 64, (8,8), (2,2), (4,4), (1,1))

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flat = flatten(convmax)
    
    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    fullyconn = fully_conn(flat, 600)
    drop = tf.nn.dropout(fullyconn, keep_prob)
    fullyconn = fully_conn(drop, 80)
    drop = tf.nn.dropout(fullyconn, keep_prob)
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    return output(drop, 10)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


ConvMax In (?, 32, 32, 3)
ConvMax Out (?, 16, 16, 64)
Flatten In [None, 16, 16, 64]
Flatten Out (?, 16384)
FullyConn In [None, 16384]
FullyConn Out (?, 600)
FullyConn In [None, 600]
FullyConn Out (?, 80)
Output In [None, 80]
Output Out (?, 10)
ConvMax In (?, 32, 32, 3)
ConvMax Out (?, 16, 16, 64)
Flatten In [None, 16, 16, 64]
Flatten Out (?, 16384)
FullyConn In [None, 16384]
FullyConn Out (?, 600)
FullyConn In [None, 600]
FullyConn Out (?, 80)
Output In [None, 80]
Output Out (?, 10)
Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization step. The optimization should use optimizer to optimize in session with a feed_dict containing the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [23]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [24]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Loss = {0} and Validation Accuracy = {1}'.format(loss, valid_accuracy))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set it to a common memory size:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [27]:
# TODO: Tune Parameters
epochs = 45
batch_size = 512
keep_probability = 0.5

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [29]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    print (epochs)
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
45
Epoch  1, CIFAR-10 Batch 1:  Loss = 2.714409112930298 and Validation Accuracy = 0.0997999981045723
Epoch  2, CIFAR-10 Batch 1:  Loss = 2.702918529510498 and Validation Accuracy = 0.0997999981045723
Epoch  3, CIFAR-10 Batch 1:  Loss = 2.68631649017334 and Validation Accuracy = 0.1005999967455864
Epoch  4, CIFAR-10 Batch 1:  Loss = 2.653907537460327 and Validation Accuracy = 0.10899999737739563
Epoch  5, CIFAR-10 Batch 1:  Loss = 2.4161806106567383 and Validation Accuracy = 0.14980000257492065
Epoch  6, CIFAR-10 Batch 1:  Loss = 2.3009116649627686 and Validation Accuracy = 0.15919999778270721
Epoch  7, CIFAR-10 Batch 1:  Loss = 2.2251298427581787 and Validation Accuracy = 0.21979999542236328
Epoch  8, CIFAR-10 Batch 1:  Loss = 2.1258487701416016 and Validation Accuracy = 0.2280000001192093
Epoch  9, CIFAR-10 Batch 1:  Loss = 2.033878803253174 and Validation Accuracy = 0.27379998564720154
Epoch 10, CIFAR-10 Batch 1:  Loss = 2.0220019817352295 and Validation Accuracy = 0.27619999647140503
Epoch 11, CIFAR-10 Batch 1:  Loss = 1.9375813007354736 and Validation Accuracy = 0.31200000643730164
Epoch 12, CIFAR-10 Batch 1:  Loss = 1.9159280061721802 and Validation Accuracy = 0.3147999942302704
Epoch 13, CIFAR-10 Batch 1:  Loss = 1.8547120094299316 and Validation Accuracy = 0.3353999853134155
Epoch 14, CIFAR-10 Batch 1:  Loss = 1.865557312965393 and Validation Accuracy = 0.3474000096321106
Epoch 15, CIFAR-10 Batch 1:  Loss = 1.7672871351242065 and Validation Accuracy = 0.36800000071525574
Epoch 16, CIFAR-10 Batch 1:  Loss = 1.721395492553711 and Validation Accuracy = 0.387800008058548
Epoch 17, CIFAR-10 Batch 1:  Loss = 1.741246223449707 and Validation Accuracy = 0.3831999897956848
Epoch 18, CIFAR-10 Batch 1:  Loss = 1.6401211023330688 and Validation Accuracy = 0.41260001063346863
Epoch 19, CIFAR-10 Batch 1:  Loss = 1.6685912609100342 and Validation Accuracy = 0.4092000126838684
Epoch 20, CIFAR-10 Batch 1:  Loss = 1.5938656330108643 and Validation Accuracy = 0.41280001401901245
Epoch 21, CIFAR-10 Batch 1:  Loss = 1.528032898902893 and Validation Accuracy = 0.43799999356269836
Epoch 22, CIFAR-10 Batch 1:  Loss = 1.5460964441299438 and Validation Accuracy = 0.4334000051021576
Epoch 23, CIFAR-10 Batch 1:  Loss = 1.4909207820892334 and Validation Accuracy = 0.44839999079704285
Epoch 24, CIFAR-10 Batch 1:  Loss = 1.4751423597335815 and Validation Accuracy = 0.4505999982357025
Epoch 25, CIFAR-10 Batch 1:  Loss = 1.4314144849777222 and Validation Accuracy = 0.46639999747276306
Epoch 26, CIFAR-10 Batch 1:  Loss = 1.4116885662078857 and Validation Accuracy = 0.4580000042915344
Epoch 27, CIFAR-10 Batch 1:  Loss = 1.3772817850112915 and Validation Accuracy = 0.4715999960899353
Epoch 28, CIFAR-10 Batch 1:  Loss = 1.325887680053711 and Validation Accuracy = 0.4758000075817108
Epoch 29, CIFAR-10 Batch 1:  Loss = 1.409541368484497 and Validation Accuracy = 0.4620000123977661
Epoch 30, CIFAR-10 Batch 1:  Loss = 1.3093883991241455 and Validation Accuracy = 0.47540000081062317
Epoch 31, CIFAR-10 Batch 1:  Loss = 1.2614330053329468 and Validation Accuracy = 0.4943999946117401
Epoch 32, CIFAR-10 Batch 1:  Loss = 1.250693678855896 and Validation Accuracy = 0.48840001225471497
Epoch 33, CIFAR-10 Batch 1:  Loss = 1.225188970565796 and Validation Accuracy = 0.4819999933242798
Epoch 34, CIFAR-10 Batch 1:  Loss = 1.1764546632766724 and Validation Accuracy = 0.5005999803543091
Epoch 35, CIFAR-10 Batch 1:  Loss = 1.1600896120071411 and Validation Accuracy = 0.5004000067710876
Epoch 36, CIFAR-10 Batch 1:  Loss = 1.144994854927063 and Validation Accuracy = 0.49720001220703125
Epoch 37, CIFAR-10 Batch 1:  Loss = 1.1222455501556396 and Validation Accuracy = 0.5070000290870667
Epoch 38, CIFAR-10 Batch 1:  Loss = 1.1144746541976929 and Validation Accuracy = 0.4986000061035156
Epoch 39, CIFAR-10 Batch 1:  Loss = 1.0441806316375732 and Validation Accuracy = 0.5116000175476074
Epoch 40, CIFAR-10 Batch 1:  Loss = 1.0218617916107178 and Validation Accuracy = 0.525600016117096
Epoch 41, CIFAR-10 Batch 1:  Loss = 0.991119384765625 and Validation Accuracy = 0.5293999910354614
Epoch 42, CIFAR-10 Batch 1:  Loss = 0.981381893157959 and Validation Accuracy = 0.5275999903678894
Epoch 43, CIFAR-10 Batch 1:  Loss = 0.9507173299789429 and Validation Accuracy = 0.5368000268936157
Epoch 44, CIFAR-10 Batch 1:  Loss = 0.9244941473007202 and Validation Accuracy = 0.5315999984741211
Epoch 45, CIFAR-10 Batch 1:  Loss = 0.9064875245094299 and Validation Accuracy = 0.5234000086784363

Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [31]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss = 2.800999879837036 and Validation Accuracy = 0.0997999981045723
Epoch  1, CIFAR-10 Batch 2:  Loss = 2.6728475093841553 and Validation Accuracy = 0.10100000351667404
Epoch  1, CIFAR-10 Batch 3:  Loss = 2.521149158477783 and Validation Accuracy = 0.10899999737739563
Epoch  1, CIFAR-10 Batch 4:  Loss = 2.3303167819976807 and Validation Accuracy = 0.15700000524520874
Epoch  1, CIFAR-10 Batch 5:  Loss = 2.1405866146087646 and Validation Accuracy = 0.20160000026226044
Epoch  2, CIFAR-10 Batch 1:  Loss = 2.129241466522217 and Validation Accuracy = 0.24240000545978546
Epoch  2, CIFAR-10 Batch 2:  Loss = 2.0775723457336426 and Validation Accuracy = 0.243599995970726
Epoch  2, CIFAR-10 Batch 3:  Loss = 2.065932035446167 and Validation Accuracy = 0.2370000034570694
Epoch  2, CIFAR-10 Batch 4:  Loss = 1.969041109085083 and Validation Accuracy = 0.2809999883174896
Epoch  2, CIFAR-10 Batch 5:  Loss = 1.8895936012268066 and Validation Accuracy = 0.29679998755455017
Epoch  3, CIFAR-10 Batch 1:  Loss = 1.9290494918823242 and Validation Accuracy = 0.3269999921321869
Epoch  3, CIFAR-10 Batch 2:  Loss = 1.8910588026046753 and Validation Accuracy = 0.2948000133037567
Epoch  3, CIFAR-10 Batch 3:  Loss = 1.7620445489883423 and Validation Accuracy = 0.34380000829696655
Epoch  3, CIFAR-10 Batch 4:  Loss = 1.7004019021987915 and Validation Accuracy = 0.3386000096797943
Epoch  3, CIFAR-10 Batch 5:  Loss = 1.7007097005844116 and Validation Accuracy = 0.3668000102043152
Epoch  4, CIFAR-10 Batch 1:  Loss = 1.7858355045318604 and Validation Accuracy = 0.39500001072883606
Epoch  4, CIFAR-10 Batch 2:  Loss = 1.7184664011001587 and Validation Accuracy = 0.3790000081062317
Epoch  4, CIFAR-10 Batch 3:  Loss = 1.630719780921936 and Validation Accuracy = 0.38420000672340393
Epoch  4, CIFAR-10 Batch 4:  Loss = 1.5702881813049316 and Validation Accuracy = 0.3880000114440918
Epoch  4, CIFAR-10 Batch 5:  Loss = 1.6163992881774902 and Validation Accuracy = 0.3846000134944916
Epoch  5, CIFAR-10 Batch 1:  Loss = 1.690342664718628 and Validation Accuracy = 0.4498000144958496
Epoch  5, CIFAR-10 Batch 2:  Loss = 1.6356921195983887 and Validation Accuracy = 0.42260000109672546
Epoch  5, CIFAR-10 Batch 3:  Loss = 1.5297634601593018 and Validation Accuracy = 0.40959998965263367
Epoch  5, CIFAR-10 Batch 4:  Loss = 1.468292474746704 and Validation Accuracy = 0.43320000171661377
Epoch  5, CIFAR-10 Batch 5:  Loss = 1.51227605342865 and Validation Accuracy = 0.4440000057220459
Epoch  6, CIFAR-10 Batch 1:  Loss = 1.6285228729248047 and Validation Accuracy = 0.44920000433921814
Epoch  6, CIFAR-10 Batch 2:  Loss = 1.5330384969711304 and Validation Accuracy = 0.45579999685287476
Epoch  6, CIFAR-10 Batch 3:  Loss = 1.4464473724365234 and Validation Accuracy = 0.4480000138282776
Epoch  6, CIFAR-10 Batch 4:  Loss = 1.4057769775390625 and Validation Accuracy = 0.4634000062942505
Epoch  6, CIFAR-10 Batch 5:  Loss = 1.4221816062927246 and Validation Accuracy = 0.46799999475479126
Epoch  7, CIFAR-10 Batch 1:  Loss = 1.5497593879699707 and Validation Accuracy = 0.48080000281333923
Epoch  7, CIFAR-10 Batch 2:  Loss = 1.4612804651260376 and Validation Accuracy = 0.46540001034736633
Epoch  7, CIFAR-10 Batch 3:  Loss = 1.3862581253051758 and Validation Accuracy = 0.4668000042438507
Epoch  7, CIFAR-10 Batch 4:  Loss = 1.3543429374694824 and Validation Accuracy = 0.4717999994754791
Epoch  7, CIFAR-10 Batch 5:  Loss = 1.3978215456008911 and Validation Accuracy = 0.4896000027656555
Epoch  8, CIFAR-10 Batch 1:  Loss = 1.4814918041229248 and Validation Accuracy = 0.4869999885559082
Epoch  8, CIFAR-10 Batch 2:  Loss = 1.4329864978790283 and Validation Accuracy = 0.4837999939918518
Epoch  8, CIFAR-10 Batch 3:  Loss = 1.34114408493042 and Validation Accuracy = 0.49160000681877136
Epoch  8, CIFAR-10 Batch 4:  Loss = 1.3062652349472046 and Validation Accuracy = 0.4878000020980835
Epoch  8, CIFAR-10 Batch 5:  Loss = 1.3777767419815063 and Validation Accuracy = 0.4893999993801117
Epoch  9, CIFAR-10 Batch 1:  Loss = 1.4258549213409424 and Validation Accuracy = 0.4973999857902527
Epoch  9, CIFAR-10 Batch 2:  Loss = 1.4183303117752075 and Validation Accuracy = 0.4708000123500824
Epoch  9, CIFAR-10 Batch 3:  Loss = 1.2862972021102905 and Validation Accuracy = 0.4957999885082245
Epoch  9, CIFAR-10 Batch 4:  Loss = 1.2332817316055298 and Validation Accuracy = 0.5109999775886536
Epoch  9, CIFAR-10 Batch 5:  Loss = 1.2832918167114258 and Validation Accuracy = 0.5138000249862671
Epoch 10, CIFAR-10 Batch 1:  Loss = 1.3762571811676025 and Validation Accuracy = 0.5194000005722046
Epoch 10, CIFAR-10 Batch 2:  Loss = 1.336008906364441 and Validation Accuracy = 0.5044000148773193
Epoch 10, CIFAR-10 Batch 3:  Loss = 1.241860270500183 and Validation Accuracy = 0.4941999912261963
Epoch 10, CIFAR-10 Batch 4:  Loss = 1.2253531217575073 and Validation Accuracy = 0.5228000283241272
Epoch 10, CIFAR-10 Batch 5:  Loss = 1.2489335536956787 and Validation Accuracy = 0.5260000228881836
Epoch 11, CIFAR-10 Batch 1:  Loss = 1.3719428777694702 and Validation Accuracy = 0.5210000276565552
Epoch 11, CIFAR-10 Batch 2:  Loss = 1.3308542966842651 and Validation Accuracy = 0.4957999885082245
Epoch 11, CIFAR-10 Batch 3:  Loss = 1.212643027305603 and Validation Accuracy = 0.4912000000476837
Epoch 11, CIFAR-10 Batch 4:  Loss = 1.169749140739441 and Validation Accuracy = 0.5311999917030334
Epoch 11, CIFAR-10 Batch 5:  Loss = 1.2166111469268799 and Validation Accuracy = 0.5249999761581421
Epoch 12, CIFAR-10 Batch 1:  Loss = 1.3026235103607178 and Validation Accuracy = 0.5314000248908997
Epoch 12, CIFAR-10 Batch 2:  Loss = 1.273229718208313 and Validation Accuracy = 0.5203999876976013
Epoch 12, CIFAR-10 Batch 3:  Loss = 1.1317791938781738 and Validation Accuracy = 0.5370000004768372
Epoch 12, CIFAR-10 Batch 4:  Loss = 1.1299949884414673 and Validation Accuracy = 0.5425999760627747
Epoch 12, CIFAR-10 Batch 5:  Loss = 1.1754919290542603 and Validation Accuracy = 0.5368000268936157
Epoch 13, CIFAR-10 Batch 1:  Loss = 1.258999228477478 and Validation Accuracy = 0.5486000180244446
Epoch 13, CIFAR-10 Batch 2:  Loss = 1.2400182485580444 and Validation Accuracy = 0.5131999850273132
Epoch 13, CIFAR-10 Batch 3:  Loss = 1.0851918458938599 and Validation Accuracy = 0.5194000005722046
Epoch 13, CIFAR-10 Batch 4:  Loss = 1.1336407661437988 and Validation Accuracy = 0.548799991607666
Epoch 13, CIFAR-10 Batch 5:  Loss = 1.1438120603561401 and Validation Accuracy = 0.5486000180244446
Epoch 14, CIFAR-10 Batch 1:  Loss = 1.247780203819275 and Validation Accuracy = 0.5533999800682068
Epoch 14, CIFAR-10 Batch 2:  Loss = 1.1859933137893677 and Validation Accuracy = 0.5368000268936157
Epoch 14, CIFAR-10 Batch 3:  Loss = 1.0821808576583862 and Validation Accuracy = 0.5446000099182129
Epoch 14, CIFAR-10 Batch 4:  Loss = 1.0706864595413208 and Validation Accuracy = 0.5605999827384949
Epoch 14, CIFAR-10 Batch 5:  Loss = 1.1124624013900757 and Validation Accuracy = 0.5529999732971191
Epoch 15, CIFAR-10 Batch 1:  Loss = 1.2018896341323853 and Validation Accuracy = 0.5609999895095825
Epoch 15, CIFAR-10 Batch 2:  Loss = 1.1411759853363037 and Validation Accuracy = 0.5468000173568726
Epoch 15, CIFAR-10 Batch 3:  Loss = 1.0488102436065674 and Validation Accuracy = 0.551800012588501
Epoch 15, CIFAR-10 Batch 4:  Loss = 1.0593830347061157 and Validation Accuracy = 0.5582000017166138
Epoch 15, CIFAR-10 Batch 5:  Loss = 1.0657153129577637 and Validation Accuracy = 0.5559999942779541
Epoch 16, CIFAR-10 Batch 1:  Loss = 1.153987169265747 and Validation Accuracy = 0.5753999948501587
Epoch 16, CIFAR-10 Batch 2:  Loss = 1.089840292930603 and Validation Accuracy = 0.5676000118255615
Epoch 16, CIFAR-10 Batch 3:  Loss = 1.0430351495742798 and Validation Accuracy = 0.5526000261306763
Epoch 16, CIFAR-10 Batch 4:  Loss = 1.0464215278625488 and Validation Accuracy = 0.5655999779701233
Epoch 16, CIFAR-10 Batch 5:  Loss = 1.0401338338851929 and Validation Accuracy = 0.5641999840736389
Epoch 17, CIFAR-10 Batch 1:  Loss = 1.1315155029296875 and Validation Accuracy = 0.5703999996185303
Epoch 17, CIFAR-10 Batch 2:  Loss = 1.0603301525115967 and Validation Accuracy = 0.5717999935150146
Epoch 17, CIFAR-10 Batch 3:  Loss = 0.9731886386871338 and Validation Accuracy = 0.5667999982833862
Epoch 17, CIFAR-10 Batch 4:  Loss = 0.994380533695221 and Validation Accuracy = 0.5809999704360962
Epoch 17, CIFAR-10 Batch 5:  Loss = 0.9990834593772888 and Validation Accuracy = 0.5690000057220459
Epoch 18, CIFAR-10 Batch 1:  Loss = 1.0930507183074951 and Validation Accuracy = 0.5825999975204468
Epoch 18, CIFAR-10 Batch 2:  Loss = 1.0157781839370728 and Validation Accuracy = 0.5789999961853027
Epoch 18, CIFAR-10 Batch 3:  Loss = 0.9621659517288208 and Validation Accuracy = 0.579800009727478
Epoch 18, CIFAR-10 Batch 4:  Loss = 0.9559503793716431 and Validation Accuracy = 0.5914000272750854
Epoch 18, CIFAR-10 Batch 5:  Loss = 0.9867962002754211 and Validation Accuracy = 0.5821999907493591
Epoch 19, CIFAR-10 Batch 1:  Loss = 1.0825504064559937 and Validation Accuracy = 0.5860000252723694
Epoch 19, CIFAR-10 Batch 2:  Loss = 1.0312212705612183 and Validation Accuracy = 0.5735999941825867
Epoch 19, CIFAR-10 Batch 3:  Loss = 0.9636058211326599 and Validation Accuracy = 0.5807999968528748
Epoch 19, CIFAR-10 Batch 4:  Loss = 0.9485710859298706 and Validation Accuracy = 0.5911999940872192
Epoch 19, CIFAR-10 Batch 5:  Loss = 0.9729766845703125 and Validation Accuracy = 0.5838000178337097
Epoch 20, CIFAR-10 Batch 1:  Loss = 1.031743049621582 and Validation Accuracy = 0.6014000177383423
Epoch 20, CIFAR-10 Batch 2:  Loss = 0.9796493053436279 and Validation Accuracy = 0.5766000151634216
Epoch 20, CIFAR-10 Batch 3:  Loss = 0.9001217484474182 and Validation Accuracy = 0.5971999764442444
Epoch 20, CIFAR-10 Batch 4:  Loss = 0.9291017651557922 and Validation Accuracy = 0.605400025844574
Epoch 20, CIFAR-10 Batch 5:  Loss = 0.9413225054740906 and Validation Accuracy = 0.5842000246047974
Epoch 21, CIFAR-10 Batch 1:  Loss = 0.9973140358924866 and Validation Accuracy = 0.6055999994277954
Epoch 21, CIFAR-10 Batch 2:  Loss = 0.9507081508636475 and Validation Accuracy = 0.5968000292778015
Epoch 21, CIFAR-10 Batch 3:  Loss = 0.8907941579818726 and Validation Accuracy = 0.6047999858856201
Epoch 21, CIFAR-10 Batch 4:  Loss = 0.9313663840293884 and Validation Accuracy = 0.6061999797821045
Epoch 21, CIFAR-10 Batch 5:  Loss = 0.9060831665992737 and Validation Accuracy = 0.5979999899864197
Epoch 22, CIFAR-10 Batch 1:  Loss = 0.9798767566680908 and Validation Accuracy = 0.6021999716758728
Epoch 22, CIFAR-10 Batch 2:  Loss = 0.9485613703727722 and Validation Accuracy = 0.5860000252723694
Epoch 22, CIFAR-10 Batch 3:  Loss = 0.8772729635238647 and Validation Accuracy = 0.5971999764442444
Epoch 22, CIFAR-10 Batch 4:  Loss = 0.8939932584762573 and Validation Accuracy = 0.6025999784469604
Epoch 22, CIFAR-10 Batch 5:  Loss = 0.9157071709632874 and Validation Accuracy = 0.5974000096321106
Epoch 23, CIFAR-10 Batch 1:  Loss = 0.9750726222991943 and Validation Accuracy = 0.6055999994277954
Epoch 23, CIFAR-10 Batch 2:  Loss = 0.9364960193634033 and Validation Accuracy = 0.6014000177383423
Epoch 23, CIFAR-10 Batch 3:  Loss = 0.8673727512359619 and Validation Accuracy = 0.5983999967575073
Epoch 23, CIFAR-10 Batch 4:  Loss = 0.8601030111312866 and Validation Accuracy = 0.6169999837875366
Epoch 23, CIFAR-10 Batch 5:  Loss = 0.8983466625213623 and Validation Accuracy = 0.5929999947547913
Epoch 24, CIFAR-10 Batch 1:  Loss = 0.9355779886245728 and Validation Accuracy = 0.6140000224113464
Epoch 24, CIFAR-10 Batch 2:  Loss = 0.8825652599334717 and Validation Accuracy = 0.6014000177383423
Epoch 24, CIFAR-10 Batch 3:  Loss = 0.8517388701438904 and Validation Accuracy = 0.6069999933242798
Epoch 24, CIFAR-10 Batch 4:  Loss = 0.8232347965240479 and Validation Accuracy = 0.6168000102043152
Epoch 24, CIFAR-10 Batch 5:  Loss = 0.8663997054100037 and Validation Accuracy = 0.6029999852180481
Epoch 25, CIFAR-10 Batch 1:  Loss = 0.9282224774360657 and Validation Accuracy = 0.6111999750137329
Epoch 25, CIFAR-10 Batch 2:  Loss = 0.8958938121795654 and Validation Accuracy = 0.6123999953269958
Epoch 25, CIFAR-10 Batch 3:  Loss = 0.7971311807632446 and Validation Accuracy = 0.6144000291824341
Epoch 25, CIFAR-10 Batch 4:  Loss = 0.8236284852027893 and Validation Accuracy = 0.6179999709129333
Epoch 25, CIFAR-10 Batch 5:  Loss = 0.8426036238670349 and Validation Accuracy = 0.6177999973297119
Epoch 26, CIFAR-10 Batch 1:  Loss = 0.8533464670181274 and Validation Accuracy = 0.6258000135421753
Epoch 26, CIFAR-10 Batch 2:  Loss = 0.8618022799491882 and Validation Accuracy = 0.6186000108718872
Epoch 26, CIFAR-10 Batch 3:  Loss = 0.7577717900276184 and Validation Accuracy = 0.6161999702453613
Epoch 26, CIFAR-10 Batch 4:  Loss = 0.788221001625061 and Validation Accuracy = 0.621399998664856
Epoch 26, CIFAR-10 Batch 5:  Loss = 0.8421543836593628 and Validation Accuracy = 0.6087999939918518
Epoch 27, CIFAR-10 Batch 1:  Loss = 0.8730340003967285 and Validation Accuracy = 0.620199978351593
Epoch 27, CIFAR-10 Batch 2:  Loss = 0.8298958539962769 and Validation Accuracy = 0.616599977016449
Epoch 27, CIFAR-10 Batch 3:  Loss = 0.7764936685562134 and Validation Accuracy = 0.6161999702453613
Epoch 27, CIFAR-10 Batch 4:  Loss = 0.7719091773033142 and Validation Accuracy = 0.6190000176429749
Epoch 27, CIFAR-10 Batch 5:  Loss = 0.8042386770248413 and Validation Accuracy = 0.6164000034332275
Epoch 28, CIFAR-10 Batch 1:  Loss = 0.8555810451507568 and Validation Accuracy = 0.6215999722480774
Epoch 28, CIFAR-10 Batch 2:  Loss = 0.8274146318435669 and Validation Accuracy = 0.6137999892234802
Epoch 28, CIFAR-10 Batch 3:  Loss = 0.7615144848823547 and Validation Accuracy = 0.6237999796867371
Epoch 28, CIFAR-10 Batch 4:  Loss = 0.7769222259521484 and Validation Accuracy = 0.6277999877929688
Epoch 28, CIFAR-10 Batch 5:  Loss = 0.7874867916107178 and Validation Accuracy = 0.6177999973297119
Epoch 29, CIFAR-10 Batch 1:  Loss = 0.8607261776924133 and Validation Accuracy = 0.6248000264167786
Epoch 29, CIFAR-10 Batch 2:  Loss = 0.8048633933067322 and Validation Accuracy = 0.6144000291824341
Epoch 29, CIFAR-10 Batch 3:  Loss = 0.7637938261032104 and Validation Accuracy = 0.61080002784729
Epoch 29, CIFAR-10 Batch 4:  Loss = 0.7386444807052612 and Validation Accuracy = 0.6276000142097473
Epoch 29, CIFAR-10 Batch 5:  Loss = 0.776685893535614 and Validation Accuracy = 0.6226000189781189
Epoch 30, CIFAR-10 Batch 1:  Loss = 0.8408816456794739 and Validation Accuracy = 0.6248000264167786
Epoch 30, CIFAR-10 Batch 2:  Loss = 0.7952011227607727 and Validation Accuracy = 0.6227999925613403
Epoch 30, CIFAR-10 Batch 3:  Loss = 0.733124852180481 and Validation Accuracy = 0.6262000203132629
Epoch 30, CIFAR-10 Batch 4:  Loss = 0.732571005821228 and Validation Accuracy = 0.6255999803543091
Epoch 30, CIFAR-10 Batch 5:  Loss = 0.7576985955238342 and Validation Accuracy = 0.623199999332428
Epoch 31, CIFAR-10 Batch 1:  Loss = 0.8155630826950073 and Validation Accuracy = 0.6362000107765198
Epoch 31, CIFAR-10 Batch 2:  Loss = 0.779087245464325 and Validation Accuracy = 0.6269999742507935
Epoch 31, CIFAR-10 Batch 3:  Loss = 0.6990640163421631 and Validation Accuracy = 0.6281999945640564
Epoch 31, CIFAR-10 Batch 4:  Loss = 0.6989326477050781 and Validation Accuracy = 0.635200023651123
Epoch 31, CIFAR-10 Batch 5:  Loss = 0.7406541109085083 and Validation Accuracy = 0.629800021648407
Epoch 32, CIFAR-10 Batch 1:  Loss = 0.7673524618148804 and Validation Accuracy = 0.6291999816894531
Epoch 32, CIFAR-10 Batch 2:  Loss = 0.7542313933372498 and Validation Accuracy = 0.6417999863624573
Epoch 32, CIFAR-10 Batch 3:  Loss = 0.6821193099021912 and Validation Accuracy = 0.6320000290870667
Epoch 32, CIFAR-10 Batch 4:  Loss = 0.7011252641677856 and Validation Accuracy = 0.6294000148773193
Epoch 32, CIFAR-10 Batch 5:  Loss = 0.6893466711044312 and Validation Accuracy = 0.628600001335144
Epoch 33, CIFAR-10 Batch 1:  Loss = 0.7569695711135864 and Validation Accuracy = 0.6367999911308289
Epoch 33, CIFAR-10 Batch 2:  Loss = 0.7428659796714783 and Validation Accuracy = 0.6353999972343445
Epoch 33, CIFAR-10 Batch 3:  Loss = 0.6949491500854492 and Validation Accuracy = 0.621999979019165
Epoch 33, CIFAR-10 Batch 4:  Loss = 0.6816523671150208 and Validation Accuracy = 0.633400022983551
Epoch 33, CIFAR-10 Batch 5:  Loss = 0.7251812219619751 and Validation Accuracy = 0.6349999904632568
Epoch 34, CIFAR-10 Batch 1:  Loss = 0.7294492721557617 and Validation Accuracy = 0.6434000134468079
Epoch 34, CIFAR-10 Batch 2:  Loss = 0.7231690883636475 and Validation Accuracy = 0.6276000142097473
Epoch 34, CIFAR-10 Batch 3:  Loss = 0.6575773358345032 and Validation Accuracy = 0.6320000290870667
Epoch 34, CIFAR-10 Batch 4:  Loss = 0.6646420359611511 and Validation Accuracy = 0.6359999775886536
Epoch 34, CIFAR-10 Batch 5:  Loss = 0.7029480934143066 and Validation Accuracy = 0.6290000081062317
Epoch 35, CIFAR-10 Batch 1:  Loss = 0.7120507955551147 and Validation Accuracy = 0.6381999850273132
Epoch 35, CIFAR-10 Batch 2:  Loss = 0.7029290199279785 and Validation Accuracy = 0.6371999979019165
Epoch 35, CIFAR-10 Batch 3:  Loss = 0.6505929827690125 and Validation Accuracy = 0.6322000026702881
Epoch 35, CIFAR-10 Batch 4:  Loss = 0.6539695262908936 and Validation Accuracy = 0.6377999782562256
Epoch 35, CIFAR-10 Batch 5:  Loss = 0.6593542098999023 and Validation Accuracy = 0.6376000046730042
Epoch 36, CIFAR-10 Batch 1:  Loss = 0.698373556137085 and Validation Accuracy = 0.6363999843597412
Epoch 36, CIFAR-10 Batch 2:  Loss = 0.7014596462249756 and Validation Accuracy = 0.6273999810218811
Epoch 36, CIFAR-10 Batch 3:  Loss = 0.6415884494781494 and Validation Accuracy = 0.640999972820282
Epoch 36, CIFAR-10 Batch 4:  Loss = 0.6853250861167908 and Validation Accuracy = 0.628600001335144
Epoch 36, CIFAR-10 Batch 5:  Loss = 0.6594274640083313 and Validation Accuracy = 0.6413999795913696
Epoch 37, CIFAR-10 Batch 1:  Loss = 0.668901264667511 and Validation Accuracy = 0.6388000249862671
Epoch 37, CIFAR-10 Batch 2:  Loss = 0.6962021589279175 and Validation Accuracy = 0.6407999992370605
Epoch 37, CIFAR-10 Batch 3:  Loss = 0.6353329420089722 and Validation Accuracy = 0.642799973487854
Epoch 37, CIFAR-10 Batch 4:  Loss = 0.6417596936225891 and Validation Accuracy = 0.6330000162124634
Epoch 37, CIFAR-10 Batch 5:  Loss = 0.641300618648529 and Validation Accuracy = 0.6394000053405762
Epoch 38, CIFAR-10 Batch 1:  Loss = 0.6399326324462891 and Validation Accuracy = 0.6388000249862671
Epoch 38, CIFAR-10 Batch 2:  Loss = 0.6574796438217163 and Validation Accuracy = 0.6417999863624573
Epoch 38, CIFAR-10 Batch 3:  Loss = 0.5953131914138794 and Validation Accuracy = 0.6395999789237976
Epoch 38, CIFAR-10 Batch 4:  Loss = 0.6186832785606384 and Validation Accuracy = 0.6424000263214111
Epoch 38, CIFAR-10 Batch 5:  Loss = 0.6102243661880493 and Validation Accuracy = 0.6381999850273132
Epoch 39, CIFAR-10 Batch 1:  Loss = 0.6626309752464294 and Validation Accuracy = 0.6413999795913696
Epoch 39, CIFAR-10 Batch 2:  Loss = 0.6500905752182007 and Validation Accuracy = 0.6438000202178955
Epoch 39, CIFAR-10 Batch 3:  Loss = 0.612647533416748 and Validation Accuracy = 0.6366000175476074
Epoch 39, CIFAR-10 Batch 4:  Loss = 0.6304300427436829 and Validation Accuracy = 0.6308000087738037
Epoch 39, CIFAR-10 Batch 5:  Loss = 0.648777425289154 and Validation Accuracy = 0.6416000127792358
Epoch 40, CIFAR-10 Batch 1:  Loss = 0.6572256088256836 and Validation Accuracy = 0.6407999992370605
Epoch 40, CIFAR-10 Batch 2:  Loss = 0.6459275484085083 and Validation Accuracy = 0.6521999835968018
Epoch 40, CIFAR-10 Batch 3:  Loss = 0.588822066783905 and Validation Accuracy = 0.6430000066757202
Epoch 40, CIFAR-10 Batch 4:  Loss = 0.5807976722717285 and Validation Accuracy = 0.6395999789237976
Epoch 40, CIFAR-10 Batch 5:  Loss = 0.6159249544143677 and Validation Accuracy = 0.646399974822998
Epoch 41, CIFAR-10 Batch 1:  Loss = 0.6484395265579224 and Validation Accuracy = 0.645799994468689
Epoch 41, CIFAR-10 Batch 2:  Loss = 0.6206094026565552 and Validation Accuracy = 0.6498000025749207
Epoch 41, CIFAR-10 Batch 3:  Loss = 0.593422532081604 and Validation Accuracy = 0.6388000249862671
Epoch 41, CIFAR-10 Batch 4:  Loss = 0.5874971747398376 and Validation Accuracy = 0.6417999863624573
Epoch 41, CIFAR-10 Batch 5:  Loss = 0.5936239957809448 and Validation Accuracy = 0.650600016117096
Epoch 42, CIFAR-10 Batch 1:  Loss = 0.6152704358100891 and Validation Accuracy = 0.646399974822998
Epoch 42, CIFAR-10 Batch 2:  Loss = 0.6181066632270813 and Validation Accuracy = 0.6507999897003174
Epoch 42, CIFAR-10 Batch 3:  Loss = 0.5908918380737305 and Validation Accuracy = 0.6403999924659729
Epoch 42, CIFAR-10 Batch 4:  Loss = 0.5552011728286743 and Validation Accuracy = 0.6467999815940857
Epoch 42, CIFAR-10 Batch 5:  Loss = 0.5837681889533997 and Validation Accuracy = 0.6478000283241272
Epoch 43, CIFAR-10 Batch 1:  Loss = 0.6208240985870361 and Validation Accuracy = 0.6452000141143799
Epoch 43, CIFAR-10 Batch 2:  Loss = 0.6361974477767944 and Validation Accuracy = 0.6484000086784363
Epoch 43, CIFAR-10 Batch 3:  Loss = 0.564063549041748 and Validation Accuracy = 0.6439999938011169
Epoch 43, CIFAR-10 Batch 4:  Loss = 0.5458097457885742 and Validation Accuracy = 0.6413999795913696
Epoch 43, CIFAR-10 Batch 5:  Loss = 0.5841844081878662 and Validation Accuracy = 0.6453999876976013
Epoch 44, CIFAR-10 Batch 1:  Loss = 0.6043950319290161 and Validation Accuracy = 0.6525999903678894
Epoch 44, CIFAR-10 Batch 2:  Loss = 0.5964829921722412 and Validation Accuracy = 0.6516000032424927
Epoch 44, CIFAR-10 Batch 3:  Loss = 0.5564060211181641 and Validation Accuracy = 0.6442000269889832
Epoch 44, CIFAR-10 Batch 4:  Loss = 0.5387495756149292 and Validation Accuracy = 0.6406000256538391
Epoch 44, CIFAR-10 Batch 5:  Loss = 0.565437912940979 and Validation Accuracy = 0.6413999795913696
Epoch 45, CIFAR-10 Batch 1:  Loss = 0.5987561941146851 and Validation Accuracy = 0.651199996471405
Epoch 45, CIFAR-10 Batch 2:  Loss = 0.5813735127449036 and Validation Accuracy = 0.6516000032424927
Epoch 45, CIFAR-10 Batch 3:  Loss = 0.5320531725883484 and Validation Accuracy = 0.6492000222206116
Epoch 45, CIFAR-10 Batch 4:  Loss = 0.4991263449192047 and Validation Accuracy = 0.6516000032424927
Epoch 45, CIFAR-10 Batch 5:  Loss = 0.5656506419181824 and Validation Accuracy = 0.6399999856948853

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [32]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


INFO:tensorflow:Restoring parameters from ./image_classification
Testing Accuracy: 0.6419979333877563

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.


In [ ]: