Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for python.


In [4]:
import sys
sys.executable
import tensorflow as tf

In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)


CIFAR-10 Dataset: 171MB [00:28, 6.02MB/s]                              
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [6]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 5:
Image - Min Value: 0 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [7]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
    # TODO: Implement Function
    return (x / np.max(x))


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [8]:
def one_hot_encode(x):

    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    

    out = np.zeros((len(x), 10))

    out[np.arange(len(x)), x] = 1

    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed

Randomize Data

As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [9]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allow for a dynamic size.


In [2]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    
    x = tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1],image_shape[2]), name="x")
   
    return x


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    
    y =tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
    
    return y


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    
    keep =tf.placeholder(tf.float32, shape=None, name="keep_prob")
    return keep


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [3]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernal size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernal size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    
    W_conv = tf.Variable(tf.truncated_normal([conv_ksize[0],conv_ksize[1], int(x_tensor.get_shape()[3]), conv_num_outputs], stddev=0.1))
    b_conv = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.1))
    
    conv = tf.nn.conv2d(x_tensor, W_conv, strides = [1, conv_strides[0], conv_strides[1], 1], padding = 'SAME')
    conv = tf.nn.bias_add(conv, b_conv)
    conv = tf.nn.relu(conv)
    maxpool = tf.nn.max_pool(conv, ksize=[1,pool_ksize[0],pool_ksize[1],1], strides=[1,pool_strides[0],pool_strides[1],1], padding='SAME')
    return maxpool
    
    #Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
    #Apply a convolution to x_tensor using weight and conv_strides.
    #We recommend you use same padding, but you're welcome to use any padding.
    #Add bias
    #Add a nonlinear activation to the convolution.
    #Apply Max Pooling using pool_ksize and pool_strides.
    #We recommend you use same padding, but you're welcome to use any padding.
    


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [4]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function

    return tf.reshape(x_tensor , shape=[-1, int(x_tensor.shape[1])*int(x_tensor.shape[2])*int(x_tensor.shape[3])])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [5]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    

    
    # TODO: Implement Function
    shape = x_tensor.shape
    size = shape[1]
    weights = tf.Variable(tf.truncated_normal([int(size),num_outputs], stddev=0.1))
    bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
    
    
    fc = tf.nn.relu(tf.matmul(x_tensor, weights) + bias)
    return fc


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [6]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    shape = x_tensor.shape
    size = shape[1]
    weights = tf.Variable(tf.truncated_normal([int(size),num_outputs], stddev=0.1))
    
    bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
    
    
    out = tf.add(tf.matmul(x_tensor, weights), bias)
    return out




"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [13]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    

    #x_shape = tf.reshape(x,[-1,32,32,5])
      #conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv = x
    conv = conv2d_maxpool(x, 32, (3, 3), (1, 1), (2, 2), (2, 2))
    conv = conv2d_maxpool(x, 64, (3, 3), (1, 1), (2, 2), (2, 2))
    
    #conv = conv2d_maxpool(conv, 30, (32, 32), (2, 2), (2, 2), (2, 2))
    #conv = conv2d_maxpool(conv, 40, (16, 16), (2, 2), (2, 2), (2, 2))
    
    #conv = conv2d_maxpool(conv, 32, (2, 2), (1, 1), (2, 2), (2, 2))
    #conv = conv2d_maxpool(conv, 64, (2, 2), (1, 1), (2, 2), (2, 2))
    
    #conv = conv2d_maxpool(conv, 32, (4, 4), (1, 1), (2, 2), (2, 2))
    #conv = conv2d_maxpool(conv, 64, (4, 4), (1, 1), (2, 2), (2, 2))
    drop = tf.nn.dropout(conv, keep_prob)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flatten_conv = flatten(drop)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    
    fc = fully_conn(flatten_conv, 40)
    fc = fully_conn(fc, 40)
    #fc = fully_conn(fc, 40)

    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    
    
    # TODO: return output

    return output(fc, 10)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [14]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
    pass


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [15]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    loss = session.run(cost, feed_dict = {x: feature_batch, y: label_batch, keep_prob: 1.0})
    validation_accuracy = session.run(accuracy, feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0})
    #Printing loss and validation_accuracy
    print("Loss: {}".format(loss))
    print("Validation accuracy: {}".format(validation_accuracy))
    #print('Accuracy:', accuracy.eval({x:valid_features, y:valid_labels, keep_prob: 0.5}))
    
    pass

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or start overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [16]:
# TODO: Tune Parameters
epochs = 50
batch_size = 256
keep_probability = 0.7

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [17]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.1616621017456055
Validation accuracy: 0.25380000472068787
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.0477945804595947
Validation accuracy: 0.3325999975204468
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.873827576637268
Validation accuracy: 0.3659999966621399
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.68058180809021
Validation accuracy: 0.40860000252723694
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.5202767848968506
Validation accuracy: 0.4323999881744385
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.3914234638214111
Validation accuracy: 0.44780001044273376
Epoch  7, CIFAR-10 Batch 1:  Loss: 1.2494909763336182
Validation accuracy: 0.4749999940395355
Epoch  8, CIFAR-10 Batch 1:  Loss: 1.1322317123413086
Validation accuracy: 0.4830000102519989
Epoch  9, CIFAR-10 Batch 1:  Loss: 1.0305579900741577
Validation accuracy: 0.49239999055862427
Epoch 10, CIFAR-10 Batch 1:  Loss: 0.9357511401176453
Validation accuracy: 0.4952000081539154
Epoch 11, CIFAR-10 Batch 1:  Loss: 0.8470497131347656
Validation accuracy: 0.5040000081062317
Epoch 12, CIFAR-10 Batch 1:  Loss: 0.7918604016304016
Validation accuracy: 0.5116000175476074
Epoch 13, CIFAR-10 Batch 1:  Loss: 0.7432309985160828
Validation accuracy: 0.5112000107765198
Epoch 14, CIFAR-10 Batch 1:  Loss: 0.7027378678321838
Validation accuracy: 0.5166000127792358
Epoch 15, CIFAR-10 Batch 1:  Loss: 0.66810142993927
Validation accuracy: 0.5199999809265137
Epoch 16, CIFAR-10 Batch 1:  Loss: 0.6012442111968994
Validation accuracy: 0.5260000228881836
Epoch 17, CIFAR-10 Batch 1:  Loss: 0.5696793794631958
Validation accuracy: 0.526199996471405
Epoch 18, CIFAR-10 Batch 1:  Loss: 0.5372768640518188
Validation accuracy: 0.5297999978065491
Epoch 19, CIFAR-10 Batch 1:  Loss: 0.48785200715065
Validation accuracy: 0.5350000262260437
Epoch 20, CIFAR-10 Batch 1:  Loss: 0.4526267945766449
Validation accuracy: 0.5329999923706055
Epoch 21, CIFAR-10 Batch 1:  Loss: 0.420492947101593
Validation accuracy: 0.5397999882698059
Epoch 22, CIFAR-10 Batch 1:  Loss: 0.40936946868896484
Validation accuracy: 0.5335999727249146
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.36433112621307373
Validation accuracy: 0.5450000166893005
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.35090726613998413
Validation accuracy: 0.5370000004768372
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.327732115983963
Validation accuracy: 0.5382000207901001
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.2931740880012512
Validation accuracy: 0.5325999855995178
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.2831704020500183
Validation accuracy: 0.5361999869346619
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.26128721237182617
Validation accuracy: 0.5302000045776367
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.23766227066516876
Validation accuracy: 0.5317999720573425
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.22250354290008545
Validation accuracy: 0.5419999957084656
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.19767065346240997
Validation accuracy: 0.5497999787330627
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.1913776397705078
Validation accuracy: 0.5501999855041504
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.179753839969635
Validation accuracy: 0.5504000186920166
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.1651843935251236
Validation accuracy: 0.5526000261306763
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.16523322463035583
Validation accuracy: 0.557200014591217
Epoch 36, CIFAR-10 Batch 1:  Loss: 0.1635456383228302
Validation accuracy: 0.5447999835014343
Epoch 37, CIFAR-10 Batch 1:  Loss: 0.14397509396076202
Validation accuracy: 0.5491999983787537
Epoch 38, CIFAR-10 Batch 1:  Loss: 0.13858738541603088
Validation accuracy: 0.5544000267982483
Epoch 39, CIFAR-10 Batch 1:  Loss: 0.13389942049980164
Validation accuracy: 0.5464000105857849
Epoch 40, CIFAR-10 Batch 1:  Loss: 0.1254502683877945
Validation accuracy: 0.5576000213623047
Epoch 41, CIFAR-10 Batch 1:  Loss: 0.10722877830266953
Validation accuracy: 0.5604000091552734
Epoch 42, CIFAR-10 Batch 1:  Loss: 0.09772567451000214
Validation accuracy: 0.5504000186920166
Epoch 43, CIFAR-10 Batch 1:  Loss: 0.10310323536396027
Validation accuracy: 0.5414000153541565
Epoch 44, CIFAR-10 Batch 1:  Loss: 0.09334985911846161
Validation accuracy: 0.5386000275611877
Epoch 45, CIFAR-10 Batch 1:  Loss: 0.08183076232671738
Validation accuracy: 0.5368000268936157
Epoch 46, CIFAR-10 Batch 1:  Loss: 0.0759027749300003
Validation accuracy: 0.5472000241279602
Epoch 47, CIFAR-10 Batch 1:  Loss: 0.07017330825328827
Validation accuracy: 0.5411999821662903
Epoch 48, CIFAR-10 Batch 1:  Loss: 0.061772167682647705
Validation accuracy: 0.5432000160217285
Epoch 49, CIFAR-10 Batch 1:  Loss: 0.06724674999713898
Validation accuracy: 0.5410000085830688
Epoch 50, CIFAR-10 Batch 1:  Loss: 0.06453578919172287
Validation accuracy: 0.5526000261306763

Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [18]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.1879000663757324
Validation accuracy: 0.26100000739097595
Epoch  1, CIFAR-10 Batch 2:  Loss: 1.84756600856781
Validation accuracy: 0.33379998803138733
Epoch  1, CIFAR-10 Batch 3:  Loss: 1.509157657623291
Validation accuracy: 0.3953999876976013
Epoch  1, CIFAR-10 Batch 4:  Loss: 1.5585622787475586
Validation accuracy: 0.44440001249313354
Epoch  1, CIFAR-10 Batch 5:  Loss: 1.5624542236328125
Validation accuracy: 0.45680001378059387
Epoch  2, CIFAR-10 Batch 1:  Loss: 1.5806092023849487
Validation accuracy: 0.48100000619888306
Epoch  2, CIFAR-10 Batch 2:  Loss: 1.2745894193649292
Validation accuracy: 0.47760000824928284
Epoch  2, CIFAR-10 Batch 3:  Loss: 1.0941895246505737
Validation accuracy: 0.49559998512268066
Epoch  2, CIFAR-10 Batch 4:  Loss: 1.3300931453704834
Validation accuracy: 0.5188000202178955
Epoch  2, CIFAR-10 Batch 5:  Loss: 1.3390562534332275
Validation accuracy: 0.5198000073432922
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.3328516483306885
Validation accuracy: 0.5407999753952026
Epoch  3, CIFAR-10 Batch 2:  Loss: 1.0417394638061523
Validation accuracy: 0.5192000269889832
Epoch  3, CIFAR-10 Batch 3:  Loss: 0.9303253293037415
Validation accuracy: 0.5353999733924866
Epoch  3, CIFAR-10 Batch 4:  Loss: 1.1575082540512085
Validation accuracy: 0.5464000105857849
Epoch  3, CIFAR-10 Batch 5:  Loss: 1.1877121925354004
Validation accuracy: 0.5479999780654907
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.2328829765319824
Validation accuracy: 0.5508000254631042
Epoch  4, CIFAR-10 Batch 2:  Loss: 0.8870840072631836
Validation accuracy: 0.5569999814033508
Epoch  4, CIFAR-10 Batch 3:  Loss: 0.7932536005973816
Validation accuracy: 0.5605999827384949
Epoch  4, CIFAR-10 Batch 4:  Loss: 1.0060657262802124
Validation accuracy: 0.5680000185966492
Epoch  4, CIFAR-10 Batch 5:  Loss: 1.0295639038085938
Validation accuracy: 0.5694000124931335
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.1291742324829102
Validation accuracy: 0.5709999799728394
Epoch  5, CIFAR-10 Batch 2:  Loss: 0.7764445543289185
Validation accuracy: 0.5809999704360962
Epoch  5, CIFAR-10 Batch 3:  Loss: 0.6941044926643372
Validation accuracy: 0.5726000070571899
Epoch  5, CIFAR-10 Batch 4:  Loss: 0.879808783531189
Validation accuracy: 0.5835999846458435
Epoch  5, CIFAR-10 Batch 5:  Loss: 0.8893446922302246
Validation accuracy: 0.574400007724762
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.03745698928833
Validation accuracy: 0.5874000191688538
Epoch  6, CIFAR-10 Batch 2:  Loss: 0.6788220405578613
Validation accuracy: 0.5968000292778015
Epoch  6, CIFAR-10 Batch 3:  Loss: 0.6103283762931824
Validation accuracy: 0.5857999920845032
Epoch  6, CIFAR-10 Batch 4:  Loss: 0.7743024826049805
Validation accuracy: 0.6032000184059143
Epoch  6, CIFAR-10 Batch 5:  Loss: 0.7775527834892273
Validation accuracy: 0.5907999873161316
Epoch  7, CIFAR-10 Batch 1:  Loss: 0.9142948985099792
Validation accuracy: 0.6057999730110168
Epoch  7, CIFAR-10 Batch 2:  Loss: 0.6003270745277405
Validation accuracy: 0.5953999757766724
Epoch  7, CIFAR-10 Batch 3:  Loss: 0.5315729975700378
Validation accuracy: 0.5925999879837036
Epoch  7, CIFAR-10 Batch 4:  Loss: 0.7019448280334473
Validation accuracy: 0.6126000285148621
Epoch  7, CIFAR-10 Batch 5:  Loss: 0.7104989886283875
Validation accuracy: 0.5934000015258789
Epoch  8, CIFAR-10 Batch 1:  Loss: 0.8536300659179688
Validation accuracy: 0.6137999892234802
Epoch  8, CIFAR-10 Batch 2:  Loss: 0.5159977674484253
Validation accuracy: 0.6161999702453613
Epoch  8, CIFAR-10 Batch 3:  Loss: 0.498481422662735
Validation accuracy: 0.5992000102996826
Epoch  8, CIFAR-10 Batch 4:  Loss: 0.5939110517501831
Validation accuracy: 0.6222000122070312
Epoch  8, CIFAR-10 Batch 5:  Loss: 0.6225907206535339
Validation accuracy: 0.6060000061988831
Epoch  9, CIFAR-10 Batch 1:  Loss: 0.7808440923690796
Validation accuracy: 0.6000000238418579
Epoch  9, CIFAR-10 Batch 2:  Loss: 0.4656960368156433
Validation accuracy: 0.6223999857902527
Epoch  9, CIFAR-10 Batch 3:  Loss: 0.44599494338035583
Validation accuracy: 0.6065999865531921
Epoch  9, CIFAR-10 Batch 4:  Loss: 0.5188361406326294
Validation accuracy: 0.6302000284194946
Epoch  9, CIFAR-10 Batch 5:  Loss: 0.5534471273422241
Validation accuracy: 0.6132000088691711
Epoch 10, CIFAR-10 Batch 1:  Loss: 0.713011622428894
Validation accuracy: 0.607200026512146
Epoch 10, CIFAR-10 Batch 2:  Loss: 0.40886735916137695
Validation accuracy: 0.6259999871253967
Epoch 10, CIFAR-10 Batch 3:  Loss: 0.40817657113075256
Validation accuracy: 0.6118000149726868
Epoch 10, CIFAR-10 Batch 4:  Loss: 0.4610823094844818
Validation accuracy: 0.6341999769210815
Epoch 10, CIFAR-10 Batch 5:  Loss: 0.4898938238620758
Validation accuracy: 0.6287999749183655
Epoch 11, CIFAR-10 Batch 1:  Loss: 0.6134481430053711
Validation accuracy: 0.6236000061035156
Epoch 11, CIFAR-10 Batch 2:  Loss: 0.37383899092674255
Validation accuracy: 0.6320000290870667
Epoch 11, CIFAR-10 Batch 3:  Loss: 0.3730073571205139
Validation accuracy: 0.620199978351593
Epoch 11, CIFAR-10 Batch 4:  Loss: 0.40846434235572815
Validation accuracy: 0.6323999762535095
Epoch 11, CIFAR-10 Batch 5:  Loss: 0.43641820549964905
Validation accuracy: 0.6308000087738037
Epoch 12, CIFAR-10 Batch 1:  Loss: 0.5725558996200562
Validation accuracy: 0.628600001335144
Epoch 12, CIFAR-10 Batch 2:  Loss: 0.33437076210975647
Validation accuracy: 0.6326000094413757
Epoch 12, CIFAR-10 Batch 3:  Loss: 0.33724433183670044
Validation accuracy: 0.6294000148773193
Epoch 12, CIFAR-10 Batch 4:  Loss: 0.36682409048080444
Validation accuracy: 0.6399999856948853
Epoch 12, CIFAR-10 Batch 5:  Loss: 0.39244818687438965
Validation accuracy: 0.6395999789237976
Epoch 13, CIFAR-10 Batch 1:  Loss: 0.5044684410095215
Validation accuracy: 0.6272000074386597
Epoch 13, CIFAR-10 Batch 2:  Loss: 0.32638344168663025
Validation accuracy: 0.6412000060081482
Epoch 13, CIFAR-10 Batch 3:  Loss: 0.3014351427555084
Validation accuracy: 0.6358000040054321
Epoch 13, CIFAR-10 Batch 4:  Loss: 0.3289017677307129
Validation accuracy: 0.6416000127792358
Epoch 13, CIFAR-10 Batch 5:  Loss: 0.3848339021205902
Validation accuracy: 0.6402000188827515
Epoch 14, CIFAR-10 Batch 1:  Loss: 0.47776564955711365
Validation accuracy: 0.6341999769210815
Epoch 14, CIFAR-10 Batch 2:  Loss: 0.2790488004684448
Validation accuracy: 0.6421999931335449
Epoch 14, CIFAR-10 Batch 3:  Loss: 0.28469377756118774
Validation accuracy: 0.635200023651123
Epoch 14, CIFAR-10 Batch 4:  Loss: 0.306048721075058
Validation accuracy: 0.6510000228881836
Epoch 14, CIFAR-10 Batch 5:  Loss: 0.3560745120048523
Validation accuracy: 0.644599974155426
Epoch 15, CIFAR-10 Batch 1:  Loss: 0.4361470639705658
Validation accuracy: 0.6359999775886536
Epoch 15, CIFAR-10 Batch 2:  Loss: 0.26813921332359314
Validation accuracy: 0.63919997215271
Epoch 15, CIFAR-10 Batch 3:  Loss: 0.2781257629394531
Validation accuracy: 0.6489999890327454
Epoch 15, CIFAR-10 Batch 4:  Loss: 0.3106028139591217
Validation accuracy: 0.6438000202178955
Epoch 15, CIFAR-10 Batch 5:  Loss: 0.31653913855552673
Validation accuracy: 0.6531999707221985
Epoch 16, CIFAR-10 Batch 1:  Loss: 0.4281979203224182
Validation accuracy: 0.640999972820282
Epoch 16, CIFAR-10 Batch 2:  Loss: 0.23071971535682678
Validation accuracy: 0.6520000100135803
Epoch 16, CIFAR-10 Batch 3:  Loss: 0.23550447821617126
Validation accuracy: 0.6525999903678894
Epoch 16, CIFAR-10 Batch 4:  Loss: 0.27191874384880066
Validation accuracy: 0.6471999883651733
Epoch 16, CIFAR-10 Batch 5:  Loss: 0.29514649510383606
Validation accuracy: 0.6538000106811523
Epoch 17, CIFAR-10 Batch 1:  Loss: 0.37605854868888855
Validation accuracy: 0.6502000093460083
Epoch 17, CIFAR-10 Batch 2:  Loss: 0.22720952332019806
Validation accuracy: 0.6439999938011169
Epoch 17, CIFAR-10 Batch 3:  Loss: 0.2239396572113037
Validation accuracy: 0.6484000086784363
Epoch 17, CIFAR-10 Batch 4:  Loss: 0.2651316225528717
Validation accuracy: 0.651199996471405
Epoch 17, CIFAR-10 Batch 5:  Loss: 0.27677851915359497
Validation accuracy: 0.650600016117096
Epoch 18, CIFAR-10 Batch 1:  Loss: 0.35432106256484985
Validation accuracy: 0.6395999789237976
Epoch 18, CIFAR-10 Batch 2:  Loss: 0.21183153986930847
Validation accuracy: 0.6514000296592712
Epoch 18, CIFAR-10 Batch 3:  Loss: 0.2023427039384842
Validation accuracy: 0.6456000208854675
Epoch 18, CIFAR-10 Batch 4:  Loss: 0.24893882870674133
Validation accuracy: 0.6517999768257141
Epoch 18, CIFAR-10 Batch 5:  Loss: 0.24284783005714417
Validation accuracy: 0.6575999855995178
Epoch 19, CIFAR-10 Batch 1:  Loss: 0.3224080801010132
Validation accuracy: 0.6452000141143799
Epoch 19, CIFAR-10 Batch 2:  Loss: 0.2027663290500641
Validation accuracy: 0.6485999822616577
Epoch 19, CIFAR-10 Batch 3:  Loss: 0.1808663308620453
Validation accuracy: 0.6514000296592712
Epoch 19, CIFAR-10 Batch 4:  Loss: 0.2203582227230072
Validation accuracy: 0.6561999917030334
Epoch 19, CIFAR-10 Batch 5:  Loss: 0.21153011918067932
Validation accuracy: 0.6583999991416931
Epoch 20, CIFAR-10 Batch 1:  Loss: 0.29193729162216187
Validation accuracy: 0.6499999761581421
Epoch 20, CIFAR-10 Batch 2:  Loss: 0.1787720024585724
Validation accuracy: 0.645799994468689
Epoch 20, CIFAR-10 Batch 3:  Loss: 0.1639171838760376
Validation accuracy: 0.6628000140190125
Epoch 20, CIFAR-10 Batch 4:  Loss: 0.20712339878082275
Validation accuracy: 0.6539999842643738
Epoch 20, CIFAR-10 Batch 5:  Loss: 0.20246592164039612
Validation accuracy: 0.6638000011444092
Epoch 21, CIFAR-10 Batch 1:  Loss: 0.26481813192367554
Validation accuracy: 0.6561999917030334
Epoch 21, CIFAR-10 Batch 2:  Loss: 0.16401176154613495
Validation accuracy: 0.6531999707221985
Epoch 21, CIFAR-10 Batch 3:  Loss: 0.1741698831319809
Validation accuracy: 0.6588000059127808
Epoch 21, CIFAR-10 Batch 4:  Loss: 0.1848624348640442
Validation accuracy: 0.6642000079154968
Epoch 21, CIFAR-10 Batch 5:  Loss: 0.17697153985500336
Validation accuracy: 0.6672000288963318
Epoch 22, CIFAR-10 Batch 1:  Loss: 0.25788432359695435
Validation accuracy: 0.6639999747276306
Epoch 22, CIFAR-10 Batch 2:  Loss: 0.14933471381664276
Validation accuracy: 0.6492000222206116
Epoch 22, CIFAR-10 Batch 3:  Loss: 0.1577504426240921
Validation accuracy: 0.6588000059127808
Epoch 22, CIFAR-10 Batch 4:  Loss: 0.16778521239757538
Validation accuracy: 0.6647999882698059
Epoch 22, CIFAR-10 Batch 5:  Loss: 0.1721128523349762
Validation accuracy: 0.6696000099182129
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.23210029304027557
Validation accuracy: 0.6600000262260437
Epoch 23, CIFAR-10 Batch 2:  Loss: 0.13548551499843597
Validation accuracy: 0.6565999984741211
Epoch 23, CIFAR-10 Batch 3:  Loss: 0.14845526218414307
Validation accuracy: 0.6643999814987183
Epoch 23, CIFAR-10 Batch 4:  Loss: 0.16534075140953064
Validation accuracy: 0.6638000011444092
Epoch 23, CIFAR-10 Batch 5:  Loss: 0.16590245068073273
Validation accuracy: 0.66839998960495
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.23473748564720154
Validation accuracy: 0.6574000120162964
Epoch 24, CIFAR-10 Batch 2:  Loss: 0.1256626546382904
Validation accuracy: 0.6621999740600586
Epoch 24, CIFAR-10 Batch 3:  Loss: 0.1472509801387787
Validation accuracy: 0.6600000262260437
Epoch 24, CIFAR-10 Batch 4:  Loss: 0.1557687520980835
Validation accuracy: 0.6638000011444092
Epoch 24, CIFAR-10 Batch 5:  Loss: 0.14392545819282532
Validation accuracy: 0.6705999970436096
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.20968003571033478
Validation accuracy: 0.6675999760627747
Epoch 25, CIFAR-10 Batch 2:  Loss: 0.11728537082672119
Validation accuracy: 0.6564000248908997
Epoch 25, CIFAR-10 Batch 3:  Loss: 0.13538238406181335
Validation accuracy: 0.6650000214576721
Epoch 25, CIFAR-10 Batch 4:  Loss: 0.13822920620441437
Validation accuracy: 0.6661999821662903
Epoch 25, CIFAR-10 Batch 5:  Loss: 0.1425660103559494
Validation accuracy: 0.6615999937057495
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.20402221381664276
Validation accuracy: 0.6603999733924866
Epoch 26, CIFAR-10 Batch 2:  Loss: 0.10576228052377701
Validation accuracy: 0.6531999707221985
Epoch 26, CIFAR-10 Batch 3:  Loss: 0.11737338453531265
Validation accuracy: 0.6642000079154968
Epoch 26, CIFAR-10 Batch 4:  Loss: 0.13010966777801514
Validation accuracy: 0.6654000282287598
Epoch 26, CIFAR-10 Batch 5:  Loss: 0.1259283572435379
Validation accuracy: 0.6692000031471252
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.17379435896873474
Validation accuracy: 0.671999990940094
Epoch 27, CIFAR-10 Batch 2:  Loss: 0.09784809499979019
Validation accuracy: 0.65420001745224
Epoch 27, CIFAR-10 Batch 3:  Loss: 0.12970753014087677
Validation accuracy: 0.6636000275611877
Epoch 27, CIFAR-10 Batch 4:  Loss: 0.12752683460712433
Validation accuracy: 0.6661999821662903
Epoch 27, CIFAR-10 Batch 5:  Loss: 0.13129863142967224
Validation accuracy: 0.6632000207901001
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.17410361766815186
Validation accuracy: 0.6633999943733215
Epoch 28, CIFAR-10 Batch 2:  Loss: 0.0934884324669838
Validation accuracy: 0.6552000045776367
Epoch 28, CIFAR-10 Batch 3:  Loss: 0.10301116853952408
Validation accuracy: 0.6647999882698059
Epoch 28, CIFAR-10 Batch 4:  Loss: 0.12346378713846207
Validation accuracy: 0.6696000099182129
Epoch 28, CIFAR-10 Batch 5:  Loss: 0.11559240520000458
Validation accuracy: 0.6618000268936157
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.17294643819332123
Validation accuracy: 0.6611999869346619
Epoch 29, CIFAR-10 Batch 2:  Loss: 0.08830534666776657
Validation accuracy: 0.6510000228881836
Epoch 29, CIFAR-10 Batch 3:  Loss: 0.11291768401861191
Validation accuracy: 0.6579999923706055
Epoch 29, CIFAR-10 Batch 4:  Loss: 0.10923008620738983
Validation accuracy: 0.6606000065803528
Epoch 29, CIFAR-10 Batch 5:  Loss: 0.10536563396453857
Validation accuracy: 0.6682000160217285
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.16172286868095398
Validation accuracy: 0.6679999828338623
Epoch 30, CIFAR-10 Batch 2:  Loss: 0.0997132956981659
Validation accuracy: 0.6507999897003174
Epoch 30, CIFAR-10 Batch 3:  Loss: 0.10756117105484009
Validation accuracy: 0.6620000004768372
Epoch 30, CIFAR-10 Batch 4:  Loss: 0.10633806139230728
Validation accuracy: 0.6650000214576721
Epoch 30, CIFAR-10 Batch 5:  Loss: 0.09799693524837494
Validation accuracy: 0.6704000234603882
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.14302793145179749
Validation accuracy: 0.6687999963760376
Epoch 31, CIFAR-10 Batch 2:  Loss: 0.07643883675336838
Validation accuracy: 0.6517999768257141
Epoch 31, CIFAR-10 Batch 3:  Loss: 0.10067610442638397
Validation accuracy: 0.6564000248908997
Epoch 31, CIFAR-10 Batch 4:  Loss: 0.08973179012537003
Validation accuracy: 0.6682000160217285
Epoch 31, CIFAR-10 Batch 5:  Loss: 0.08865301311016083
Validation accuracy: 0.6711999773979187
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.11847744882106781
Validation accuracy: 0.6646000146865845
Epoch 32, CIFAR-10 Batch 2:  Loss: 0.07414306700229645
Validation accuracy: 0.6421999931335449
Epoch 32, CIFAR-10 Batch 3:  Loss: 0.09239856153726578
Validation accuracy: 0.6571999788284302
Epoch 32, CIFAR-10 Batch 4:  Loss: 0.10031072795391083
Validation accuracy: 0.6692000031471252
Epoch 32, CIFAR-10 Batch 5:  Loss: 0.0950130745768547
Validation accuracy: 0.6725999712944031
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.14011457562446594
Validation accuracy: 0.6628000140190125
Epoch 33, CIFAR-10 Batch 2:  Loss: 0.0705496221780777
Validation accuracy: 0.6553999781608582
Epoch 33, CIFAR-10 Batch 3:  Loss: 0.08745381981134415
Validation accuracy: 0.6498000025749207
Epoch 33, CIFAR-10 Batch 4:  Loss: 0.09717895090579987
Validation accuracy: 0.6632000207901001
Epoch 33, CIFAR-10 Batch 5:  Loss: 0.09944625943899155
Validation accuracy: 0.6629999876022339
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.12976409494876862
Validation accuracy: 0.6502000093460083
Epoch 34, CIFAR-10 Batch 2:  Loss: 0.07418929040431976
Validation accuracy: 0.6502000093460083
Epoch 34, CIFAR-10 Batch 3:  Loss: 0.08691626787185669
Validation accuracy: 0.6380000114440918
Epoch 34, CIFAR-10 Batch 4:  Loss: 0.08754147589206696
Validation accuracy: 0.6636000275611877
Epoch 34, CIFAR-10 Batch 5:  Loss: 0.09920147806406021
Validation accuracy: 0.66839998960495
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.16932964324951172
Validation accuracy: 0.6485999822616577
Epoch 35, CIFAR-10 Batch 2:  Loss: 0.07297645509243011
Validation accuracy: 0.6557999849319458
Epoch 35, CIFAR-10 Batch 3:  Loss: 0.07391998916864395
Validation accuracy: 0.6384000182151794
Epoch 35, CIFAR-10 Batch 4:  Loss: 0.09486068785190582
Validation accuracy: 0.6460000276565552
Epoch 35, CIFAR-10 Batch 5:  Loss: 0.08362146466970444
Validation accuracy: 0.6704000234603882
Epoch 36, CIFAR-10 Batch 1:  Loss: 0.12006814777851105
Validation accuracy: 0.6521999835968018
Epoch 36, CIFAR-10 Batch 2:  Loss: 0.05627965182065964
Validation accuracy: 0.6538000106811523
Epoch 36, CIFAR-10 Batch 3:  Loss: 0.07703490555286407
Validation accuracy: 0.642799973487854
Epoch 36, CIFAR-10 Batch 4:  Loss: 0.08090762048959732
Validation accuracy: 0.6534000039100647
Epoch 36, CIFAR-10 Batch 5:  Loss: 0.07678946107625961
Validation accuracy: 0.6687999963760376
Epoch 37, CIFAR-10 Batch 1:  Loss: 0.11156973987817764
Validation accuracy: 0.6601999998092651
Epoch 37, CIFAR-10 Batch 2:  Loss: 0.06289472430944443
Validation accuracy: 0.6528000235557556
Epoch 37, CIFAR-10 Batch 3:  Loss: 0.06493459641933441
Validation accuracy: 0.651199996471405
Epoch 37, CIFAR-10 Batch 4:  Loss: 0.07894262671470642
Validation accuracy: 0.649399995803833
Epoch 37, CIFAR-10 Batch 5:  Loss: 0.07315512001514435
Validation accuracy: 0.6693999767303467
Epoch 38, CIFAR-10 Batch 1:  Loss: 0.10937311500310898
Validation accuracy: 0.652999997138977
Epoch 38, CIFAR-10 Batch 2:  Loss: 0.0678008645772934
Validation accuracy: 0.6546000242233276
Epoch 38, CIFAR-10 Batch 3:  Loss: 0.07191918045282364
Validation accuracy: 0.6385999917984009
Epoch 38, CIFAR-10 Batch 4:  Loss: 0.07943136990070343
Validation accuracy: 0.6549999713897705
Epoch 38, CIFAR-10 Batch 5:  Loss: 0.07329531759023666
Validation accuracy: 0.6679999828338623
Epoch 39, CIFAR-10 Batch 1:  Loss: 0.10961765050888062
Validation accuracy: 0.6449999809265137
Epoch 39, CIFAR-10 Batch 2:  Loss: 0.06192560866475105
Validation accuracy: 0.6480000019073486
Epoch 39, CIFAR-10 Batch 3:  Loss: 0.061862457543611526
Validation accuracy: 0.6499999761581421
Epoch 39, CIFAR-10 Batch 4:  Loss: 0.07334856688976288
Validation accuracy: 0.6549999713897705
Epoch 39, CIFAR-10 Batch 5:  Loss: 0.06538072973489761
Validation accuracy: 0.6669999957084656
Epoch 40, CIFAR-10 Batch 1:  Loss: 0.0993247777223587
Validation accuracy: 0.6520000100135803
Epoch 40, CIFAR-10 Batch 2:  Loss: 0.06480090320110321
Validation accuracy: 0.6394000053405762
Epoch 40, CIFAR-10 Batch 3:  Loss: 0.05832923576235771
Validation accuracy: 0.650600016117096
Epoch 40, CIFAR-10 Batch 4:  Loss: 0.06895242631435394
Validation accuracy: 0.6547999978065491
Epoch 40, CIFAR-10 Batch 5:  Loss: 0.05823894590139389
Validation accuracy: 0.6620000004768372
Epoch 41, CIFAR-10 Batch 1:  Loss: 0.10588420927524567
Validation accuracy: 0.642799973487854
Epoch 41, CIFAR-10 Batch 2:  Loss: 0.06959984451532364
Validation accuracy: 0.6520000100135803
Epoch 41, CIFAR-10 Batch 3:  Loss: 0.05887482315301895
Validation accuracy: 0.6510000228881836
Epoch 41, CIFAR-10 Batch 4:  Loss: 0.06514251232147217
Validation accuracy: 0.6561999917030334
Epoch 41, CIFAR-10 Batch 5:  Loss: 0.05307896062731743
Validation accuracy: 0.659600019454956
Epoch 42, CIFAR-10 Batch 1:  Loss: 0.11059264838695526
Validation accuracy: 0.6395999789237976
Epoch 42, CIFAR-10 Batch 2:  Loss: 0.05083860084414482
Validation accuracy: 0.6510000228881836
Epoch 42, CIFAR-10 Batch 3:  Loss: 0.0732874944806099
Validation accuracy: 0.6503999829292297
Epoch 42, CIFAR-10 Batch 4:  Loss: 0.05788055807352066
Validation accuracy: 0.6561999917030334
Epoch 42, CIFAR-10 Batch 5:  Loss: 0.04799573868513107
Validation accuracy: 0.6660000085830688
Epoch 43, CIFAR-10 Batch 1:  Loss: 0.08468595147132874
Validation accuracy: 0.647599995136261
Epoch 43, CIFAR-10 Batch 2:  Loss: 0.05771099403500557
Validation accuracy: 0.6525999903678894
Epoch 43, CIFAR-10 Batch 3:  Loss: 0.059736043214797974
Validation accuracy: 0.6498000025749207
Epoch 43, CIFAR-10 Batch 4:  Loss: 0.056478776037693024
Validation accuracy: 0.6610000133514404
Epoch 43, CIFAR-10 Batch 5:  Loss: 0.044570885598659515
Validation accuracy: 0.6610000133514404
Epoch 44, CIFAR-10 Batch 1:  Loss: 0.08682052791118622
Validation accuracy: 0.6421999931335449
Epoch 44, CIFAR-10 Batch 2:  Loss: 0.04506867751479149
Validation accuracy: 0.6485999822616577
Epoch 44, CIFAR-10 Batch 3:  Loss: 0.05897900462150574
Validation accuracy: 0.6327999830245972
Epoch 44, CIFAR-10 Batch 4:  Loss: 0.06408270448446274
Validation accuracy: 0.6660000085830688
Epoch 44, CIFAR-10 Batch 5:  Loss: 0.043052367866039276
Validation accuracy: 0.6650000214576721
Epoch 45, CIFAR-10 Batch 1:  Loss: 0.08279154449701309
Validation accuracy: 0.6567999720573425
Epoch 45, CIFAR-10 Batch 2:  Loss: 0.04249558597803116
Validation accuracy: 0.646399974822998
Epoch 45, CIFAR-10 Batch 3:  Loss: 0.04758978635072708
Validation accuracy: 0.640999972820282
Epoch 45, CIFAR-10 Batch 4:  Loss: 0.06551985442638397
Validation accuracy: 0.6516000032424927
Epoch 45, CIFAR-10 Batch 5:  Loss: 0.0442320853471756
Validation accuracy: 0.657800018787384
Epoch 46, CIFAR-10 Batch 1:  Loss: 0.0639641061425209
Validation accuracy: 0.6565999984741211
Epoch 46, CIFAR-10 Batch 2:  Loss: 0.05079669505357742
Validation accuracy: 0.6420000195503235
Epoch 46, CIFAR-10 Batch 3:  Loss: 0.051717378199100494
Validation accuracy: 0.6489999890327454
Epoch 46, CIFAR-10 Batch 4:  Loss: 0.05476025864481926
Validation accuracy: 0.6539999842643738
Epoch 46, CIFAR-10 Batch 5:  Loss: 0.048164937645196915
Validation accuracy: 0.6585999727249146
Epoch 47, CIFAR-10 Batch 1:  Loss: 0.07143913209438324
Validation accuracy: 0.6520000100135803
Epoch 47, CIFAR-10 Batch 2:  Loss: 0.05986957624554634
Validation accuracy: 0.6460000276565552
Epoch 47, CIFAR-10 Batch 3:  Loss: 0.05346575379371643
Validation accuracy: 0.647599995136261
Epoch 47, CIFAR-10 Batch 4:  Loss: 0.04823634773492813
Validation accuracy: 0.6650000214576721
Epoch 47, CIFAR-10 Batch 5:  Loss: 0.04113725572824478
Validation accuracy: 0.6561999917030334
Epoch 48, CIFAR-10 Batch 1:  Loss: 0.08047440648078918
Validation accuracy: 0.6510000228881836
Epoch 48, CIFAR-10 Batch 2:  Loss: 0.02984282374382019
Validation accuracy: 0.6574000120162964
Epoch 48, CIFAR-10 Batch 3:  Loss: 0.06134570762515068
Validation accuracy: 0.6488000154495239
Epoch 48, CIFAR-10 Batch 4:  Loss: 0.04817470535635948
Validation accuracy: 0.6601999998092651
Epoch 48, CIFAR-10 Batch 5:  Loss: 0.03501170128583908
Validation accuracy: 0.6567999720573425
Epoch 49, CIFAR-10 Batch 1:  Loss: 0.07703761011362076
Validation accuracy: 0.649399995803833
Epoch 49, CIFAR-10 Batch 2:  Loss: 0.04502710700035095
Validation accuracy: 0.6492000222206116
Epoch 49, CIFAR-10 Batch 3:  Loss: 0.04030351713299751
Validation accuracy: 0.6484000086784363
Epoch 49, CIFAR-10 Batch 4:  Loss: 0.04682653024792671
Validation accuracy: 0.6489999890327454
Epoch 49, CIFAR-10 Batch 5:  Loss: 0.03455711528658867
Validation accuracy: 0.6588000059127808
Epoch 50, CIFAR-10 Batch 1:  Loss: 0.05784664303064346
Validation accuracy: 0.6611999869346619
Epoch 50, CIFAR-10 Batch 2:  Loss: 0.040785156190395355
Validation accuracy: 0.6514000296592712
Epoch 50, CIFAR-10 Batch 3:  Loss: 0.06261305510997772
Validation accuracy: 0.625
Epoch 50, CIFAR-10 Batch 4:  Loss: 0.047459740191698074
Validation accuracy: 0.6589999794960022
Epoch 50, CIFAR-10 Batch 5:  Loss: 0.03182786703109741
Validation accuracy: 0.6633999943733215

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [19]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


INFO:tensorflow:Restoring parameters from ./image_classification
Testing Accuracy: 0.6607421875

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.