Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for python.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)


All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [2]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 5:
Image - Min Value: 0 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [3]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
    norm = np.array (x / x.max())
    return norm
    #norm=np.linalg.norm(x)
    #if norm==0: 
    #    return x
    #return x/norm


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [4]:
from sklearn import preprocessing

one_hot_classes = None
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    global one_hot_classes
    # TODO: Implement Function

    return preprocessing.label_binarize(x,classes=[0,1,2,3,4,5,6,7,8,9])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed

Randomize Data

As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [7]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [5]:
stddev=0.05
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allow for a dynamic size.


In [6]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    x=tf.placeholder(tf.float32,(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
    return x


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    
    return tf.placeholder(tf.float32, (None, n_classes), name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32,name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [7]:
import math

def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernal size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernal size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    height = math.ceil((float(x_tensor.shape[1].value - conv_ksize[0] + 1))/float((conv_strides[0])))
    width = math.ceil(float((x_tensor.shape[2].value - conv_ksize[1] + 1))/float((conv_strides[1])))
    #height = math.ceil((float(x_tensor.shape[1].value - conv_ksize[0] + 2))/float((conv_strides[0] + 1)))
    #width = math.ceil(float((x_tensor.shape[2].value - conv_ksize[1] + 2))/float((conv_strides[1] + 1)))
    weight = tf.Variable(tf.truncated_normal((height, width, x_tensor.shape[3].value, conv_num_outputs),stddev=stddev))
    bias = tf.Variable(tf.zeros(conv_num_outputs))
    conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1,conv_strides[0],conv_strides[1],1], padding='SAME')
    conv_layer = tf.nn.bias_add(conv_layer,bias)
    conv_layer = tf.nn.relu(conv_layer)
    maxpool_layer = tf.nn.max_pool(conv_layer, ksize=[1,pool_ksize[0],pool_ksize[1],1], strides=[1,pool_strides[0],pool_strides[1],1], padding='SAME')
    return maxpool_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [8]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    flattened = x_tensor.shape[1].value * x_tensor.shape[2].value * x_tensor.shape[3].value
    return tf.reshape(x_tensor, shape=(-1, flattened)) 


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [9]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs],stddev=stddev))
    bias = tf.Variable(tf.zeros([num_outputs], dtype=tf.float32))

    fc1 = tf.add(tf.matmul(x_tensor, weights), bias)
    out = tf.nn.relu(fc1)
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [10]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs],stddev=stddev))
    bias = tf.Variable(tf.zeros([num_outputs], dtype=tf.float32))
    return tf.add(tf.matmul(x_tensor, weights), bias)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [11]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):

    stddev=0.01
    conv_strides = (2,2) # Getting out of mem errors with stride=1
    pool_strides = (2,2)
    pool_ksize = (2,2)

    conv_num_outputs1 = 32
    conv_ksize1 = (2,2)

    conv_num_outputs2 = 128
    conv_ksize2 = (4,4)

    conv_num_outputs3 = 128
    conv_ksize3 = (2,2)

    fully_conn_out1 = 1024
    fully_conn_out2 = 512
    fully_conn_out3 = 128

    num_outputs = 10
    
    x = conv2d_maxpool(x, conv_num_outputs1, conv_ksize1, conv_strides, pool_ksize, pool_strides)
    #x = tf.nn.dropout(x, keep_prob)
    x = conv2d_maxpool(x, conv_num_outputs2, conv_ksize2, conv_strides, pool_ksize, pool_strides)
    x = tf.nn.dropout(x, keep_prob)
    #x = conv2d_maxpool(x, conv_num_outputs3, conv_ksize3, conv_strides, pool_ksize, pool_strides)
    

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    x = flatten(x)
    

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    x = fully_conn(x,fully_conn_out1)
    x = tf.nn.dropout(x, keep_prob)
    x = fully_conn(x,fully_conn_out2)
    #x = tf.nn.dropout(x, keep_prob)
    #x = fully_conn(x,fully_conn_out3)
    
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    x = output(x, num_outputs)
    
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [12]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability})
    pass


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [13]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    
    loss = session.run(cost, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: 1.})
    valid_acc = sess.run(accuracy, feed_dict={
        x: valid_features[:256],
        y: valid_labels[:256],
        keep_prob: 1.})
    train_acc = session.run (accuracy, feed_dict = {
        x: feature_batch, 
        y: label_batch, 
        keep_prob: 1.})
    print('Loss: {:>10.4f} Training: {:.6f} Validation: {:.6f}'.format(
        loss,
        train_acc,
        valid_acc))
    pass

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or start overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [14]:
# TODO: Tune Parameters
epochs = 100
batch_size = 1024
keep_probability = 0.4

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [15]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss:     2.2352 Training: 0.224010 Validation: 0.242188
Epoch  2, CIFAR-10 Batch 1:  Loss:     2.1216 Training: 0.228960 Validation: 0.203125
Epoch  3, CIFAR-10 Batch 1:  Loss:     1.9601 Training: 0.289604 Validation: 0.308594
Epoch  4, CIFAR-10 Batch 1:  Loss:     1.8813 Training: 0.313119 Validation: 0.273438
Epoch  5, CIFAR-10 Batch 1:  Loss:     1.7929 Training: 0.365099 Validation: 0.316406
Epoch  6, CIFAR-10 Batch 1:  Loss:     1.7326 Training: 0.376238 Validation: 0.316406
Epoch  7, CIFAR-10 Batch 1:  Loss:     1.6653 Training: 0.417079 Validation: 0.351562
Epoch  8, CIFAR-10 Batch 1:  Loss:     1.6380 Training: 0.435644 Validation: 0.343750
Epoch  9, CIFAR-10 Batch 1:  Loss:     1.5832 Training: 0.450495 Validation: 0.332031
Epoch 10, CIFAR-10 Batch 1:  Loss:     1.5464 Training: 0.446782 Validation: 0.378906
Epoch 11, CIFAR-10 Batch 1:  Loss:     1.5052 Training: 0.470297 Validation: 0.398438
Epoch 12, CIFAR-10 Batch 1:  Loss:     1.4899 Training: 0.471535 Validation: 0.378906
Epoch 13, CIFAR-10 Batch 1:  Loss:     1.4488 Training: 0.488861 Validation: 0.417969
Epoch 14, CIFAR-10 Batch 1:  Loss:     1.4253 Training: 0.495050 Validation: 0.402344
Epoch 15, CIFAR-10 Batch 1:  Loss:     1.4507 Training: 0.476485 Validation: 0.394531
Epoch 16, CIFAR-10 Batch 1:  Loss:     1.3888 Training: 0.508663 Validation: 0.402344
Epoch 17, CIFAR-10 Batch 1:  Loss:     1.3557 Training: 0.521040 Validation: 0.386719
Epoch 18, CIFAR-10 Batch 1:  Loss:     1.3275 Training: 0.537129 Validation: 0.414062
Epoch 19, CIFAR-10 Batch 1:  Loss:     1.3327 Training: 0.522277 Validation: 0.398438
Epoch 20, CIFAR-10 Batch 1:  Loss:     1.3091 Training: 0.528465 Validation: 0.402344
Epoch 21, CIFAR-10 Batch 1:  Loss:     1.2784 Training: 0.530941 Validation: 0.394531
Epoch 22, CIFAR-10 Batch 1:  Loss:     1.2483 Training: 0.554455 Validation: 0.414062
Epoch 23, CIFAR-10 Batch 1:  Loss:     1.2108 Training: 0.574257 Validation: 0.417969
Epoch 24, CIFAR-10 Batch 1:  Loss:     1.2093 Training: 0.575495 Validation: 0.402344
Epoch 25, CIFAR-10 Batch 1:  Loss:     1.1971 Training: 0.586634 Validation: 0.425781
Epoch 26, CIFAR-10 Batch 1:  Loss:     1.1964 Training: 0.568069 Validation: 0.410156
Epoch 27, CIFAR-10 Batch 1:  Loss:     1.1653 Training: 0.589109 Validation: 0.421875
Epoch 28, CIFAR-10 Batch 1:  Loss:     1.1518 Training: 0.592822 Validation: 0.433594
Epoch 29, CIFAR-10 Batch 1:  Loss:     1.1172 Training: 0.615099 Validation: 0.433594
Epoch 30, CIFAR-10 Batch 1:  Loss:     1.0937 Training: 0.615099 Validation: 0.433594
Epoch 31, CIFAR-10 Batch 1:  Loss:     1.0974 Training: 0.611386 Validation: 0.433594
Epoch 32, CIFAR-10 Batch 1:  Loss:     1.1023 Training: 0.610148 Validation: 0.429688
Epoch 33, CIFAR-10 Batch 1:  Loss:     1.0611 Training: 0.632426 Validation: 0.476562
Epoch 34, CIFAR-10 Batch 1:  Loss:     1.0215 Training: 0.644802 Validation: 0.480469
Epoch 35, CIFAR-10 Batch 1:  Loss:     1.0217 Training: 0.633663 Validation: 0.472656
Epoch 36, CIFAR-10 Batch 1:  Loss:     1.0178 Training: 0.647277 Validation: 0.488281
Epoch 37, CIFAR-10 Batch 1:  Loss:     0.9717 Training: 0.681931 Validation: 0.492188
Epoch 38, CIFAR-10 Batch 1:  Loss:     1.0102 Training: 0.652228 Validation: 0.472656
Epoch 39, CIFAR-10 Batch 1:  Loss:     0.9702 Training: 0.658416 Validation: 0.480469
Epoch 40, CIFAR-10 Batch 1:  Loss:     0.9270 Training: 0.705446 Validation: 0.496094
Epoch 41, CIFAR-10 Batch 1:  Loss:     0.9299 Training: 0.674505 Validation: 0.457031
Epoch 42, CIFAR-10 Batch 1:  Loss:     0.8984 Training: 0.694307 Validation: 0.492188
Epoch 43, CIFAR-10 Batch 1:  Loss:     0.9011 Training: 0.710396 Validation: 0.515625
Epoch 44, CIFAR-10 Batch 1:  Loss:     0.8940 Training: 0.704208 Validation: 0.476562
Epoch 45, CIFAR-10 Batch 1:  Loss:     0.8693 Training: 0.709158 Validation: 0.500000
Epoch 46, CIFAR-10 Batch 1:  Loss:     0.8432 Training: 0.727723 Validation: 0.492188
Epoch 47, CIFAR-10 Batch 1:  Loss:     0.8644 Training: 0.701733 Validation: 0.484375
Epoch 48, CIFAR-10 Batch 1:  Loss:     0.8729 Training: 0.691832 Validation: 0.511719
Epoch 49, CIFAR-10 Batch 1:  Loss:     0.8537 Training: 0.719059 Validation: 0.480469
Epoch 50, CIFAR-10 Batch 1:  Loss:     0.8346 Training: 0.725248 Validation: 0.472656
Epoch 51, CIFAR-10 Batch 1:  Loss:     0.8328 Training: 0.748762 Validation: 0.523438
Epoch 52, CIFAR-10 Batch 1:  Loss:     0.8344 Training: 0.742574 Validation: 0.539062
Epoch 53, CIFAR-10 Batch 1:  Loss:     0.7827 Training: 0.761139 Validation: 0.507812
Epoch 54, CIFAR-10 Batch 1:  Loss:     0.7830 Training: 0.762376 Validation: 0.488281
Epoch 55, CIFAR-10 Batch 1:  Loss:     0.8260 Training: 0.751238 Validation: 0.503906
Epoch 56, CIFAR-10 Batch 1:  Loss:     0.7825 Training: 0.773515 Validation: 0.531250
Epoch 57, CIFAR-10 Batch 1:  Loss:     0.7296 Training: 0.792079 Validation: 0.500000
Epoch 58, CIFAR-10 Batch 1:  Loss:     0.7202 Training: 0.778465 Validation: 0.488281
Epoch 59, CIFAR-10 Batch 1:  Loss:     0.7139 Training: 0.790842 Validation: 0.515625
Epoch 60, CIFAR-10 Batch 1:  Loss:     0.7474 Training: 0.785891 Validation: 0.492188
Epoch 61, CIFAR-10 Batch 1:  Loss:     0.7120 Training: 0.798267 Validation: 0.503906
Epoch 62, CIFAR-10 Batch 1:  Loss:     0.6818 Training: 0.814356 Validation: 0.511719
Epoch 63, CIFAR-10 Batch 1:  Loss:     0.6652 Training: 0.814356 Validation: 0.503906
Epoch 64, CIFAR-10 Batch 1:  Loss:     0.6624 Training: 0.815594 Validation: 0.515625
Epoch 65, CIFAR-10 Batch 1:  Loss:     0.6559 Training: 0.820545 Validation: 0.519531
Epoch 66, CIFAR-10 Batch 1:  Loss:     0.6594 Training: 0.819307 Validation: 0.515625
Epoch 67, CIFAR-10 Batch 1:  Loss:     0.6376 Training: 0.824257 Validation: 0.523438
Epoch 68, CIFAR-10 Batch 1:  Loss:     0.6284 Training: 0.826733 Validation: 0.476562
Epoch 69, CIFAR-10 Batch 1:  Loss:     0.6256 Training: 0.827970 Validation: 0.511719
Epoch 70, CIFAR-10 Batch 1:  Loss:     0.6236 Training: 0.837871 Validation: 0.523438
Epoch 71, CIFAR-10 Batch 1:  Loss:     0.5969 Training: 0.842822 Validation: 0.503906
Epoch 72, CIFAR-10 Batch 1:  Loss:     0.5872 Training: 0.847772 Validation: 0.511719
Epoch 73, CIFAR-10 Batch 1:  Loss:     0.5835 Training: 0.853960 Validation: 0.496094
Epoch 74, CIFAR-10 Batch 1:  Loss:     0.5623 Training: 0.862624 Validation: 0.511719
Epoch 75, CIFAR-10 Batch 1:  Loss:     0.5608 Training: 0.853960 Validation: 0.539062
Epoch 76, CIFAR-10 Batch 1:  Loss:     0.5749 Training: 0.851485 Validation: 0.515625
Epoch 77, CIFAR-10 Batch 1:  Loss:     0.5492 Training: 0.861386 Validation: 0.542969
Epoch 78, CIFAR-10 Batch 1:  Loss:     0.5262 Training: 0.872525 Validation: 0.511719
Epoch 79, CIFAR-10 Batch 1:  Loss:     0.5250 Training: 0.873762 Validation: 0.519531
Epoch 80, CIFAR-10 Batch 1:  Loss:     0.5248 Training: 0.876238 Validation: 0.507812
Epoch 81, CIFAR-10 Batch 1:  Loss:     0.5336 Training: 0.866337 Validation: 0.511719
Epoch 82, CIFAR-10 Batch 1:  Loss:     0.5150 Training: 0.875000 Validation: 0.535156
Epoch 83, CIFAR-10 Batch 1:  Loss:     0.5117 Training: 0.873762 Validation: 0.500000
Epoch 84, CIFAR-10 Batch 1:  Loss:     0.5037 Training: 0.868812 Validation: 0.503906
Epoch 85, CIFAR-10 Batch 1:  Loss:     0.5047 Training: 0.879951 Validation: 0.507812
Epoch 86, CIFAR-10 Batch 1:  Loss:     0.5167 Training: 0.877475 Validation: 0.503906
Epoch 87, CIFAR-10 Batch 1:  Loss:     0.4927 Training: 0.875000 Validation: 0.523438
Epoch 88, CIFAR-10 Batch 1:  Loss:     0.4987 Training: 0.881188 Validation: 0.546875
Epoch 89, CIFAR-10 Batch 1:  Loss:     0.4795 Training: 0.888614 Validation: 0.523438
Epoch 90, CIFAR-10 Batch 1:  Loss:     0.4787 Training: 0.887376 Validation: 0.527344
Epoch 91, CIFAR-10 Batch 1:  Loss:     0.4850 Training: 0.889852 Validation: 0.523438
Epoch 92, CIFAR-10 Batch 1:  Loss:     0.4705 Training: 0.902228 Validation: 0.515625
Epoch 93, CIFAR-10 Batch 1:  Loss:     0.4471 Training: 0.904703 Validation: 0.531250
Epoch 94, CIFAR-10 Batch 1:  Loss:     0.4490 Training: 0.891089 Validation: 0.519531
Epoch 95, CIFAR-10 Batch 1:  Loss:     0.4486 Training: 0.894802 Validation: 0.542969
Epoch 96, CIFAR-10 Batch 1:  Loss:     0.4393 Training: 0.910891 Validation: 0.527344
Epoch 97, CIFAR-10 Batch 1:  Loss:     0.4372 Training: 0.904703 Validation: 0.519531
Epoch 98, CIFAR-10 Batch 1:  Loss:     0.4225 Training: 0.912129 Validation: 0.539062
Epoch 99, CIFAR-10 Batch 1:  Loss:     0.4241 Training: 0.907178 Validation: 0.535156
Epoch 100, CIFAR-10 Batch 1:  Loss:     0.4449 Training: 0.896040 Validation: 0.496094

Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [16]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss:     2.2712 Training: 0.146040 Validation: 0.125000
Epoch  1, CIFAR-10 Batch 2:  Loss:     2.1470 Training: 0.246287 Validation: 0.261719
Epoch  1, CIFAR-10 Batch 3:  Loss:     2.0109 Training: 0.252475 Validation: 0.253906
Epoch  1, CIFAR-10 Batch 4:  Loss:     1.9230 Training: 0.298267 Validation: 0.292969
Epoch  1, CIFAR-10 Batch 5:  Loss:     1.8846 Training: 0.299505 Validation: 0.261719
Epoch  2, CIFAR-10 Batch 1:  Loss:     1.8274 Training: 0.349010 Validation: 0.300781
Epoch  2, CIFAR-10 Batch 2:  Loss:     1.7886 Training: 0.352723 Validation: 0.308594
Epoch  2, CIFAR-10 Batch 3:  Loss:     1.6951 Training: 0.400990 Validation: 0.335938
Epoch  2, CIFAR-10 Batch 4:  Loss:     1.6855 Training: 0.396040 Validation: 0.335938
Epoch  2, CIFAR-10 Batch 5:  Loss:     1.7143 Training: 0.362624 Validation: 0.332031
Epoch  3, CIFAR-10 Batch 1:  Loss:     1.6506 Training: 0.423267 Validation: 0.371094
Epoch  3, CIFAR-10 Batch 2:  Loss:     1.6729 Training: 0.389851 Validation: 0.355469
Epoch  3, CIFAR-10 Batch 3:  Loss:     1.5893 Training: 0.430693 Validation: 0.343750
Epoch  3, CIFAR-10 Batch 4:  Loss:     1.5714 Training: 0.426980 Validation: 0.394531
Epoch  3, CIFAR-10 Batch 5:  Loss:     1.6171 Training: 0.413366 Validation: 0.359375
Epoch  4, CIFAR-10 Batch 1:  Loss:     1.5586 Training: 0.429455 Validation: 0.406250
Epoch  4, CIFAR-10 Batch 2:  Loss:     1.5934 Training: 0.433168 Validation: 0.375000
Epoch  4, CIFAR-10 Batch 3:  Loss:     1.4984 Training: 0.446782 Validation: 0.402344
Epoch  4, CIFAR-10 Batch 4:  Loss:     1.4858 Training: 0.459158 Validation: 0.425781
Epoch  4, CIFAR-10 Batch 5:  Loss:     1.5261 Training: 0.456683 Validation: 0.398438
Epoch  5, CIFAR-10 Batch 1:  Loss:     1.4777 Training: 0.452970 Validation: 0.425781
Epoch  5, CIFAR-10 Batch 2:  Loss:     1.5034 Training: 0.457921 Validation: 0.402344
Epoch  5, CIFAR-10 Batch 3:  Loss:     1.4355 Training: 0.474010 Validation: 0.394531
Epoch  5, CIFAR-10 Batch 4:  Loss:     1.4245 Training: 0.487624 Validation: 0.417969
Epoch  5, CIFAR-10 Batch 5:  Loss:     1.4711 Training: 0.466584 Validation: 0.390625
Epoch  6, CIFAR-10 Batch 1:  Loss:     1.4319 Training: 0.481436 Validation: 0.417969
Epoch  6, CIFAR-10 Batch 2:  Loss:     1.4626 Training: 0.465347 Validation: 0.410156
Epoch  6, CIFAR-10 Batch 3:  Loss:     1.3869 Training: 0.497525 Validation: 0.437500
Epoch  6, CIFAR-10 Batch 4:  Loss:     1.3786 Training: 0.517327 Validation: 0.425781
Epoch  6, CIFAR-10 Batch 5:  Loss:     1.4301 Training: 0.496287 Validation: 0.402344
Epoch  7, CIFAR-10 Batch 1:  Loss:     1.3929 Training: 0.500000 Validation: 0.464844
Epoch  7, CIFAR-10 Batch 2:  Loss:     1.4154 Training: 0.481436 Validation: 0.425781
Epoch  7, CIFAR-10 Batch 3:  Loss:     1.3383 Training: 0.521040 Validation: 0.425781
Epoch  7, CIFAR-10 Batch 4:  Loss:     1.3343 Training: 0.532178 Validation: 0.453125
Epoch  7, CIFAR-10 Batch 5:  Loss:     1.3776 Training: 0.511139 Validation: 0.417969
Epoch  8, CIFAR-10 Batch 1:  Loss:     1.3479 Training: 0.513614 Validation: 0.468750
Epoch  8, CIFAR-10 Batch 2:  Loss:     1.3671 Training: 0.506188 Validation: 0.445312
Epoch  8, CIFAR-10 Batch 3:  Loss:     1.3183 Training: 0.545792 Validation: 0.449219
Epoch  8, CIFAR-10 Batch 4:  Loss:     1.3304 Training: 0.528465 Validation: 0.425781
Epoch  8, CIFAR-10 Batch 5:  Loss:     1.3498 Training: 0.524752 Validation: 0.410156
Epoch  9, CIFAR-10 Batch 1:  Loss:     1.3442 Training: 0.513614 Validation: 0.464844
Epoch  9, CIFAR-10 Batch 2:  Loss:     1.3640 Training: 0.525990 Validation: 0.460938
Epoch  9, CIFAR-10 Batch 3:  Loss:     1.3014 Training: 0.548267 Validation: 0.425781
Epoch  9, CIFAR-10 Batch 4:  Loss:     1.2888 Training: 0.553218 Validation: 0.453125
Epoch  9, CIFAR-10 Batch 5:  Loss:     1.3140 Training: 0.547030 Validation: 0.433594
Epoch 10, CIFAR-10 Batch 1:  Loss:     1.2998 Training: 0.539604 Validation: 0.472656
Epoch 10, CIFAR-10 Batch 2:  Loss:     1.3224 Training: 0.535891 Validation: 0.476562
Epoch 10, CIFAR-10 Batch 3:  Loss:     1.2789 Training: 0.563119 Validation: 0.425781
Epoch 10, CIFAR-10 Batch 4:  Loss:     1.2642 Training: 0.551980 Validation: 0.488281
Epoch 10, CIFAR-10 Batch 5:  Loss:     1.2936 Training: 0.540842 Validation: 0.449219
Epoch 11, CIFAR-10 Batch 1:  Loss:     1.2619 Training: 0.547030 Validation: 0.476562
Epoch 11, CIFAR-10 Batch 2:  Loss:     1.3269 Training: 0.527228 Validation: 0.453125
Epoch 11, CIFAR-10 Batch 3:  Loss:     1.2628 Training: 0.554455 Validation: 0.453125
Epoch 11, CIFAR-10 Batch 4:  Loss:     1.2262 Training: 0.582921 Validation: 0.464844
Epoch 11, CIFAR-10 Batch 5:  Loss:     1.2607 Training: 0.555693 Validation: 0.441406
Epoch 12, CIFAR-10 Batch 1:  Loss:     1.2360 Training: 0.574257 Validation: 0.468750
Epoch 12, CIFAR-10 Batch 2:  Loss:     1.2856 Training: 0.549505 Validation: 0.460938
Epoch 12, CIFAR-10 Batch 3:  Loss:     1.2274 Training: 0.575495 Validation: 0.464844
Epoch 12, CIFAR-10 Batch 4:  Loss:     1.2022 Training: 0.576733 Validation: 0.480469
Epoch 12, CIFAR-10 Batch 5:  Loss:     1.2294 Training: 0.581683 Validation: 0.464844
Epoch 13, CIFAR-10 Batch 1:  Loss:     1.2190 Training: 0.574257 Validation: 0.496094
Epoch 13, CIFAR-10 Batch 2:  Loss:     1.2808 Training: 0.561881 Validation: 0.488281
Epoch 13, CIFAR-10 Batch 3:  Loss:     1.2082 Training: 0.581683 Validation: 0.468750
Epoch 13, CIFAR-10 Batch 4:  Loss:     1.1987 Training: 0.585396 Validation: 0.464844
Epoch 13, CIFAR-10 Batch 5:  Loss:     1.2214 Training: 0.582921 Validation: 0.457031
Epoch 14, CIFAR-10 Batch 1:  Loss:     1.2112 Training: 0.582921 Validation: 0.492188
Epoch 14, CIFAR-10 Batch 2:  Loss:     1.2631 Training: 0.560644 Validation: 0.484375
Epoch 14, CIFAR-10 Batch 3:  Loss:     1.2143 Training: 0.576733 Validation: 0.464844
Epoch 14, CIFAR-10 Batch 4:  Loss:     1.1847 Training: 0.582921 Validation: 0.480469
Epoch 14, CIFAR-10 Batch 5:  Loss:     1.1940 Training: 0.589109 Validation: 0.460938
Epoch 15, CIFAR-10 Batch 1:  Loss:     1.1985 Training: 0.564356 Validation: 0.472656
Epoch 15, CIFAR-10 Batch 2:  Loss:     1.2753 Training: 0.550743 Validation: 0.449219
Epoch 15, CIFAR-10 Batch 3:  Loss:     1.1996 Training: 0.582921 Validation: 0.492188
Epoch 15, CIFAR-10 Batch 4:  Loss:     1.1626 Training: 0.587871 Validation: 0.484375
Epoch 15, CIFAR-10 Batch 5:  Loss:     1.1989 Training: 0.584158 Validation: 0.464844
Epoch 16, CIFAR-10 Batch 1:  Loss:     1.1808 Training: 0.585396 Validation: 0.488281
Epoch 16, CIFAR-10 Batch 2:  Loss:     1.2113 Training: 0.596535 Validation: 0.472656
Epoch 16, CIFAR-10 Batch 3:  Loss:     1.1837 Training: 0.585396 Validation: 0.468750
Epoch 16, CIFAR-10 Batch 4:  Loss:     1.1404 Training: 0.586634 Validation: 0.476562
Epoch 16, CIFAR-10 Batch 5:  Loss:     1.1646 Training: 0.622525 Validation: 0.476562
Epoch 17, CIFAR-10 Batch 1:  Loss:     1.1490 Training: 0.625000 Validation: 0.484375
Epoch 17, CIFAR-10 Batch 2:  Loss:     1.1899 Training: 0.594059 Validation: 0.480469
Epoch 17, CIFAR-10 Batch 3:  Loss:     1.1504 Training: 0.610148 Validation: 0.472656
Epoch 17, CIFAR-10 Batch 4:  Loss:     1.1312 Training: 0.600248 Validation: 0.480469
Epoch 17, CIFAR-10 Batch 5:  Loss:     1.1446 Training: 0.615099 Validation: 0.445312
Epoch 18, CIFAR-10 Batch 1:  Loss:     1.1698 Training: 0.600248 Validation: 0.468750
Epoch 18, CIFAR-10 Batch 2:  Loss:     1.1720 Training: 0.590347 Validation: 0.484375
Epoch 18, CIFAR-10 Batch 3:  Loss:     1.1494 Training: 0.618812 Validation: 0.457031
Epoch 18, CIFAR-10 Batch 4:  Loss:     1.1058 Training: 0.626238 Validation: 0.480469
Epoch 18, CIFAR-10 Batch 5:  Loss:     1.1356 Training: 0.607673 Validation: 0.441406
Epoch 19, CIFAR-10 Batch 1:  Loss:     1.1117 Training: 0.611386 Validation: 0.476562
Epoch 19, CIFAR-10 Batch 2:  Loss:     1.1623 Training: 0.610148 Validation: 0.464844
Epoch 19, CIFAR-10 Batch 3:  Loss:     1.1270 Training: 0.606436 Validation: 0.476562
Epoch 19, CIFAR-10 Batch 4:  Loss:     1.0954 Training: 0.642327 Validation: 0.492188
Epoch 19, CIFAR-10 Batch 5:  Loss:     1.1080 Training: 0.629951 Validation: 0.484375
Epoch 20, CIFAR-10 Batch 1:  Loss:     1.1231 Training: 0.602723 Validation: 0.492188
Epoch 20, CIFAR-10 Batch 2:  Loss:     1.1378 Training: 0.616337 Validation: 0.511719
Epoch 20, CIFAR-10 Batch 3:  Loss:     1.1001 Training: 0.627475 Validation: 0.468750
Epoch 20, CIFAR-10 Batch 4:  Loss:     1.0733 Training: 0.644802 Validation: 0.500000
Epoch 20, CIFAR-10 Batch 5:  Loss:     1.1026 Training: 0.641089 Validation: 0.488281
Epoch 21, CIFAR-10 Batch 1:  Loss:     1.0964 Training: 0.626238 Validation: 0.476562
Epoch 21, CIFAR-10 Batch 2:  Loss:     1.1418 Training: 0.626238 Validation: 0.480469
Epoch 21, CIFAR-10 Batch 3:  Loss:     1.1195 Training: 0.602723 Validation: 0.511719
Epoch 21, CIFAR-10 Batch 4:  Loss:     1.0806 Training: 0.637376 Validation: 0.484375
Epoch 21, CIFAR-10 Batch 5:  Loss:     1.0786 Training: 0.649752 Validation: 0.511719
Epoch 22, CIFAR-10 Batch 1:  Loss:     1.0869 Training: 0.628713 Validation: 0.488281
Epoch 22, CIFAR-10 Batch 2:  Loss:     1.1149 Training: 0.621287 Validation: 0.500000
Epoch 22, CIFAR-10 Batch 3:  Loss:     1.0876 Training: 0.623762 Validation: 0.496094
Epoch 22, CIFAR-10 Batch 4:  Loss:     1.0798 Training: 0.638614 Validation: 0.496094
Epoch 22, CIFAR-10 Batch 5:  Loss:     1.0729 Training: 0.649752 Validation: 0.503906
Epoch 23, CIFAR-10 Batch 1:  Loss:     1.0780 Training: 0.622525 Validation: 0.500000
Epoch 23, CIFAR-10 Batch 2:  Loss:     1.1225 Training: 0.628713 Validation: 0.496094
Epoch 23, CIFAR-10 Batch 3:  Loss:     1.0763 Training: 0.623762 Validation: 0.523438
Epoch 23, CIFAR-10 Batch 4:  Loss:     1.0509 Training: 0.664604 Validation: 0.500000
Epoch 23, CIFAR-10 Batch 5:  Loss:     1.0613 Training: 0.660891 Validation: 0.500000
Epoch 24, CIFAR-10 Batch 1:  Loss:     1.0554 Training: 0.638614 Validation: 0.507812
Epoch 24, CIFAR-10 Batch 2:  Loss:     1.1113 Training: 0.613861 Validation: 0.519531
Epoch 24, CIFAR-10 Batch 3:  Loss:     1.0654 Training: 0.643564 Validation: 0.511719
Epoch 24, CIFAR-10 Batch 4:  Loss:     1.0213 Training: 0.672030 Validation: 0.503906
Epoch 24, CIFAR-10 Batch 5:  Loss:     1.0582 Training: 0.644802 Validation: 0.519531
Epoch 25, CIFAR-10 Batch 1:  Loss:     1.0478 Training: 0.639852 Validation: 0.519531
Epoch 25, CIFAR-10 Batch 2:  Loss:     1.0998 Training: 0.629951 Validation: 0.511719
Epoch 25, CIFAR-10 Batch 3:  Loss:     1.0621 Training: 0.634901 Validation: 0.500000
Epoch 25, CIFAR-10 Batch 4:  Loss:     1.0458 Training: 0.650990 Validation: 0.503906
Epoch 25, CIFAR-10 Batch 5:  Loss:     1.0319 Training: 0.664604 Validation: 0.492188
Epoch 26, CIFAR-10 Batch 1:  Loss:     1.0341 Training: 0.644802 Validation: 0.503906
Epoch 26, CIFAR-10 Batch 2:  Loss:     1.0916 Training: 0.636139 Validation: 0.507812
Epoch 26, CIFAR-10 Batch 3:  Loss:     1.0776 Training: 0.611386 Validation: 0.464844
Epoch 26, CIFAR-10 Batch 4:  Loss:     1.0245 Training: 0.663366 Validation: 0.503906
Epoch 26, CIFAR-10 Batch 5:  Loss:     1.0367 Training: 0.663366 Validation: 0.523438
Epoch 27, CIFAR-10 Batch 1:  Loss:     1.0265 Training: 0.647277 Validation: 0.515625
Epoch 27, CIFAR-10 Batch 2:  Loss:     1.0603 Training: 0.646040 Validation: 0.492188
Epoch 27, CIFAR-10 Batch 3:  Loss:     1.0382 Training: 0.642327 Validation: 0.511719
Epoch 27, CIFAR-10 Batch 4:  Loss:     1.0146 Training: 0.678218 Validation: 0.488281
Epoch 27, CIFAR-10 Batch 5:  Loss:     1.0126 Training: 0.669554 Validation: 0.542969
Epoch 28, CIFAR-10 Batch 1:  Loss:     1.0037 Training: 0.670792 Validation: 0.542969
Epoch 28, CIFAR-10 Batch 2:  Loss:     1.0395 Training: 0.653465 Validation: 0.507812
Epoch 28, CIFAR-10 Batch 3:  Loss:     1.0068 Training: 0.669554 Validation: 0.519531
Epoch 28, CIFAR-10 Batch 4:  Loss:     0.9882 Training: 0.672030 Validation: 0.500000
Epoch 28, CIFAR-10 Batch 5:  Loss:     1.0000 Training: 0.679455 Validation: 0.496094
Epoch 29, CIFAR-10 Batch 1:  Loss:     1.0037 Training: 0.658416 Validation: 0.527344
Epoch 29, CIFAR-10 Batch 2:  Loss:     1.0650 Training: 0.646040 Validation: 0.531250
Epoch 29, CIFAR-10 Batch 3:  Loss:     1.0307 Training: 0.652228 Validation: 0.480469
Epoch 29, CIFAR-10 Batch 4:  Loss:     0.9962 Training: 0.673267 Validation: 0.511719
Epoch 29, CIFAR-10 Batch 5:  Loss:     1.0092 Training: 0.664604 Validation: 0.503906
Epoch 30, CIFAR-10 Batch 1:  Loss:     0.9908 Training: 0.658416 Validation: 0.500000
Epoch 30, CIFAR-10 Batch 2:  Loss:     1.0326 Training: 0.670792 Validation: 0.515625
Epoch 30, CIFAR-10 Batch 3:  Loss:     1.0065 Training: 0.659653 Validation: 0.507812
Epoch 30, CIFAR-10 Batch 4:  Loss:     0.9748 Training: 0.683168 Validation: 0.503906
Epoch 30, CIFAR-10 Batch 5:  Loss:     0.9791 Training: 0.688119 Validation: 0.503906
Epoch 31, CIFAR-10 Batch 1:  Loss:     0.9954 Training: 0.660891 Validation: 0.511719
Epoch 31, CIFAR-10 Batch 2:  Loss:     1.0267 Training: 0.660891 Validation: 0.531250
Epoch 31, CIFAR-10 Batch 3:  Loss:     0.9974 Training: 0.667079 Validation: 0.511719
Epoch 31, CIFAR-10 Batch 4:  Loss:     0.9809 Training: 0.678218 Validation: 0.507812
Epoch 31, CIFAR-10 Batch 5:  Loss:     0.9653 Training: 0.691832 Validation: 0.480469
Epoch 32, CIFAR-10 Batch 1:  Loss:     0.9635 Training: 0.672030 Validation: 0.539062
Epoch 32, CIFAR-10 Batch 2:  Loss:     0.9998 Training: 0.680693 Validation: 0.542969
Epoch 32, CIFAR-10 Batch 3:  Loss:     0.9712 Training: 0.690594 Validation: 0.531250
Epoch 32, CIFAR-10 Batch 4:  Loss:     0.9643 Training: 0.701733 Validation: 0.527344
Epoch 32, CIFAR-10 Batch 5:  Loss:     0.9668 Training: 0.691832 Validation: 0.511719
Epoch 33, CIFAR-10 Batch 1:  Loss:     0.9848 Training: 0.663366 Validation: 0.554688
Epoch 33, CIFAR-10 Batch 2:  Loss:     0.9819 Training: 0.683168 Validation: 0.503906
Epoch 33, CIFAR-10 Batch 3:  Loss:     0.9528 Training: 0.688119 Validation: 0.519531
Epoch 33, CIFAR-10 Batch 4:  Loss:     0.9421 Training: 0.704208 Validation: 0.519531
Epoch 33, CIFAR-10 Batch 5:  Loss:     0.9531 Training: 0.694307 Validation: 0.527344
Epoch 34, CIFAR-10 Batch 1:  Loss:     0.9579 Training: 0.676980 Validation: 0.527344
Epoch 34, CIFAR-10 Batch 2:  Loss:     0.9933 Training: 0.684406 Validation: 0.531250
Epoch 34, CIFAR-10 Batch 3:  Loss:     0.9607 Training: 0.683168 Validation: 0.546875
Epoch 34, CIFAR-10 Batch 4:  Loss:     0.9726 Training: 0.683168 Validation: 0.535156
Epoch 34, CIFAR-10 Batch 5:  Loss:     0.9794 Training: 0.701733 Validation: 0.507812
Epoch 35, CIFAR-10 Batch 1:  Loss:     0.9623 Training: 0.670792 Validation: 0.554688
Epoch 35, CIFAR-10 Batch 2:  Loss:     0.9723 Training: 0.693069 Validation: 0.515625
Epoch 35, CIFAR-10 Batch 3:  Loss:     0.9507 Training: 0.683168 Validation: 0.523438
Epoch 35, CIFAR-10 Batch 4:  Loss:     0.9235 Training: 0.705446 Validation: 0.554688
Epoch 35, CIFAR-10 Batch 5:  Loss:     0.9504 Training: 0.690594 Validation: 0.507812
Epoch 36, CIFAR-10 Batch 1:  Loss:     0.9259 Training: 0.681931 Validation: 0.527344
Epoch 36, CIFAR-10 Batch 2:  Loss:     0.9739 Training: 0.690594 Validation: 0.531250
Epoch 36, CIFAR-10 Batch 3:  Loss:     0.9522 Training: 0.680693 Validation: 0.500000
Epoch 36, CIFAR-10 Batch 4:  Loss:     0.9350 Training: 0.693069 Validation: 0.531250
Epoch 36, CIFAR-10 Batch 5:  Loss:     0.9478 Training: 0.706683 Validation: 0.519531
Epoch 37, CIFAR-10 Batch 1:  Loss:     0.9207 Training: 0.685644 Validation: 0.523438
Epoch 37, CIFAR-10 Batch 2:  Loss:     0.9549 Training: 0.693069 Validation: 0.535156
Epoch 37, CIFAR-10 Batch 3:  Loss:     0.9390 Training: 0.690594 Validation: 0.523438
Epoch 37, CIFAR-10 Batch 4:  Loss:     0.9182 Training: 0.700495 Validation: 0.531250
Epoch 37, CIFAR-10 Batch 5:  Loss:     0.9251 Training: 0.716584 Validation: 0.531250
Epoch 38, CIFAR-10 Batch 1:  Loss:     0.9209 Training: 0.695545 Validation: 0.535156
Epoch 38, CIFAR-10 Batch 2:  Loss:     0.9559 Training: 0.702970 Validation: 0.546875
Epoch 38, CIFAR-10 Batch 3:  Loss:     0.9489 Training: 0.678218 Validation: 0.527344
Epoch 38, CIFAR-10 Batch 4:  Loss:     0.9092 Training: 0.706683 Validation: 0.535156
Epoch 38, CIFAR-10 Batch 5:  Loss:     0.9105 Training: 0.715347 Validation: 0.542969
Epoch 39, CIFAR-10 Batch 1:  Loss:     0.9073 Training: 0.689356 Validation: 0.511719
Epoch 39, CIFAR-10 Batch 2:  Loss:     0.9446 Training: 0.698020 Validation: 0.531250
Epoch 39, CIFAR-10 Batch 3:  Loss:     0.9139 Training: 0.702970 Validation: 0.531250
Epoch 39, CIFAR-10 Batch 4:  Loss:     0.8958 Training: 0.710396 Validation: 0.535156
Epoch 39, CIFAR-10 Batch 5:  Loss:     0.9044 Training: 0.730198 Validation: 0.531250
Epoch 40, CIFAR-10 Batch 1:  Loss:     0.8843 Training: 0.701733 Validation: 0.523438
Epoch 40, CIFAR-10 Batch 2:  Loss:     0.9273 Training: 0.707921 Validation: 0.554688
Epoch 40, CIFAR-10 Batch 3:  Loss:     0.9316 Training: 0.689356 Validation: 0.546875
Epoch 40, CIFAR-10 Batch 4:  Loss:     0.9046 Training: 0.711634 Validation: 0.550781
Epoch 40, CIFAR-10 Batch 5:  Loss:     0.9038 Training: 0.728960 Validation: 0.539062
Epoch 41, CIFAR-10 Batch 1:  Loss:     0.9049 Training: 0.699257 Validation: 0.542969
Epoch 41, CIFAR-10 Batch 2:  Loss:     0.9319 Training: 0.699257 Validation: 0.519531
Epoch 41, CIFAR-10 Batch 3:  Loss:     0.9136 Training: 0.709158 Validation: 0.519531
Epoch 41, CIFAR-10 Batch 4:  Loss:     0.8803 Training: 0.716584 Validation: 0.558594
Epoch 41, CIFAR-10 Batch 5:  Loss:     0.9080 Training: 0.715347 Validation: 0.531250
Epoch 42, CIFAR-10 Batch 1:  Loss:     0.9133 Training: 0.704208 Validation: 0.546875
Epoch 42, CIFAR-10 Batch 2:  Loss:     0.9261 Training: 0.702970 Validation: 0.527344
Epoch 42, CIFAR-10 Batch 3:  Loss:     0.9042 Training: 0.696782 Validation: 0.542969
Epoch 42, CIFAR-10 Batch 4:  Loss:     0.8718 Training: 0.730198 Validation: 0.566406
Epoch 42, CIFAR-10 Batch 5:  Loss:     0.9085 Training: 0.725248 Validation: 0.550781
Epoch 43, CIFAR-10 Batch 1:  Loss:     0.8727 Training: 0.705446 Validation: 0.558594
Epoch 43, CIFAR-10 Batch 2:  Loss:     0.8968 Training: 0.709158 Validation: 0.550781
Epoch 43, CIFAR-10 Batch 3:  Loss:     0.8827 Training: 0.715347 Validation: 0.546875
Epoch 43, CIFAR-10 Batch 4:  Loss:     0.8675 Training: 0.719059 Validation: 0.546875
Epoch 43, CIFAR-10 Batch 5:  Loss:     0.8879 Training: 0.732673 Validation: 0.546875
Epoch 44, CIFAR-10 Batch 1:  Loss:     0.8749 Training: 0.725248 Validation: 0.558594
Epoch 44, CIFAR-10 Batch 2:  Loss:     0.9006 Training: 0.722772 Validation: 0.539062
Epoch 44, CIFAR-10 Batch 3:  Loss:     0.8961 Training: 0.698020 Validation: 0.550781
Epoch 44, CIFAR-10 Batch 4:  Loss:     0.8604 Training: 0.720297 Validation: 0.566406
Epoch 44, CIFAR-10 Batch 5:  Loss:     0.8545 Training: 0.738861 Validation: 0.554688
Epoch 45, CIFAR-10 Batch 1:  Loss:     0.8774 Training: 0.707921 Validation: 0.593750
Epoch 45, CIFAR-10 Batch 2:  Loss:     0.9055 Training: 0.714109 Validation: 0.570312
Epoch 45, CIFAR-10 Batch 3:  Loss:     0.8943 Training: 0.704208 Validation: 0.542969
Epoch 45, CIFAR-10 Batch 4:  Loss:     0.8796 Training: 0.726485 Validation: 0.539062
Epoch 45, CIFAR-10 Batch 5:  Loss:     0.8579 Training: 0.730198 Validation: 0.558594
Epoch 46, CIFAR-10 Batch 1:  Loss:     0.8571 Training: 0.711634 Validation: 0.562500
Epoch 46, CIFAR-10 Batch 2:  Loss:     0.8954 Training: 0.724010 Validation: 0.554688
Epoch 46, CIFAR-10 Batch 3:  Loss:     0.8982 Training: 0.704208 Validation: 0.531250
Epoch 46, CIFAR-10 Batch 4:  Loss:     0.8606 Training: 0.712871 Validation: 0.585938
Epoch 46, CIFAR-10 Batch 5:  Loss:     0.8718 Training: 0.726485 Validation: 0.550781
Epoch 47, CIFAR-10 Batch 1:  Loss:     0.8598 Training: 0.712871 Validation: 0.546875
Epoch 47, CIFAR-10 Batch 2:  Loss:     0.8893 Training: 0.725248 Validation: 0.554688
Epoch 47, CIFAR-10 Batch 3:  Loss:     0.8986 Training: 0.702970 Validation: 0.535156
Epoch 47, CIFAR-10 Batch 4:  Loss:     0.8501 Training: 0.728960 Validation: 0.574219
Epoch 47, CIFAR-10 Batch 5:  Loss:     0.8412 Training: 0.742574 Validation: 0.519531
Epoch 48, CIFAR-10 Batch 1:  Loss:     0.8435 Training: 0.711634 Validation: 0.566406
Epoch 48, CIFAR-10 Batch 2:  Loss:     0.8650 Training: 0.728960 Validation: 0.558594
Epoch 48, CIFAR-10 Batch 3:  Loss:     0.8828 Training: 0.719059 Validation: 0.550781
Epoch 48, CIFAR-10 Batch 4:  Loss:     0.8359 Training: 0.735148 Validation: 0.562500
Epoch 48, CIFAR-10 Batch 5:  Loss:     0.8623 Training: 0.730198 Validation: 0.535156
Epoch 49, CIFAR-10 Batch 1:  Loss:     0.8356 Training: 0.719059 Validation: 0.578125
Epoch 49, CIFAR-10 Batch 2:  Loss:     0.8492 Training: 0.740099 Validation: 0.542969
Epoch 49, CIFAR-10 Batch 3:  Loss:     0.8772 Training: 0.721535 Validation: 0.507812
Epoch 49, CIFAR-10 Batch 4:  Loss:     0.8587 Training: 0.717822 Validation: 0.574219
Epoch 49, CIFAR-10 Batch 5:  Loss:     0.8994 Training: 0.717822 Validation: 0.527344
Epoch 50, CIFAR-10 Batch 1:  Loss:     0.8527 Training: 0.717822 Validation: 0.605469
Epoch 50, CIFAR-10 Batch 2:  Loss:     0.8689 Training: 0.714109 Validation: 0.558594
Epoch 50, CIFAR-10 Batch 3:  Loss:     0.8615 Training: 0.736386 Validation: 0.539062
Epoch 50, CIFAR-10 Batch 4:  Loss:     0.8125 Training: 0.740099 Validation: 0.589844
Epoch 50, CIFAR-10 Batch 5:  Loss:     0.8366 Training: 0.728960 Validation: 0.527344
Epoch 51, CIFAR-10 Batch 1:  Loss:     0.8352 Training: 0.727723 Validation: 0.574219
Epoch 51, CIFAR-10 Batch 2:  Loss:     0.8702 Training: 0.728960 Validation: 0.546875
Epoch 51, CIFAR-10 Batch 3:  Loss:     0.8454 Training: 0.725248 Validation: 0.535156
Epoch 51, CIFAR-10 Batch 4:  Loss:     0.8198 Training: 0.732673 Validation: 0.562500
Epoch 51, CIFAR-10 Batch 5:  Loss:     0.8247 Training: 0.748762 Validation: 0.554688
Epoch 52, CIFAR-10 Batch 1:  Loss:     0.8400 Training: 0.724010 Validation: 0.566406
Epoch 52, CIFAR-10 Batch 2:  Loss:     0.8406 Training: 0.733911 Validation: 0.535156
Epoch 52, CIFAR-10 Batch 3:  Loss:     0.8468 Training: 0.732673 Validation: 0.507812
Epoch 52, CIFAR-10 Batch 4:  Loss:     0.8165 Training: 0.742574 Validation: 0.546875
Epoch 52, CIFAR-10 Batch 5:  Loss:     0.8446 Training: 0.743812 Validation: 0.535156
Epoch 53, CIFAR-10 Batch 1:  Loss:     0.8419 Training: 0.733911 Validation: 0.589844
Epoch 53, CIFAR-10 Batch 2:  Loss:     0.8469 Training: 0.738861 Validation: 0.562500
Epoch 53, CIFAR-10 Batch 3:  Loss:     0.8422 Training: 0.726485 Validation: 0.531250
Epoch 53, CIFAR-10 Batch 4:  Loss:     0.8152 Training: 0.754951 Validation: 0.546875
Epoch 53, CIFAR-10 Batch 5:  Loss:     0.8610 Training: 0.731436 Validation: 0.539062
Epoch 54, CIFAR-10 Batch 1:  Loss:     0.8237 Training: 0.743812 Validation: 0.566406
Epoch 54, CIFAR-10 Batch 2:  Loss:     0.8544 Training: 0.731436 Validation: 0.593750
Epoch 54, CIFAR-10 Batch 3:  Loss:     0.8481 Training: 0.732673 Validation: 0.542969
Epoch 54, CIFAR-10 Batch 4:  Loss:     0.8043 Training: 0.754951 Validation: 0.582031
Epoch 54, CIFAR-10 Batch 5:  Loss:     0.8057 Training: 0.753713 Validation: 0.566406
Epoch 55, CIFAR-10 Batch 1:  Loss:     0.8314 Training: 0.728960 Validation: 0.597656
Epoch 55, CIFAR-10 Batch 2:  Loss:     0.8641 Training: 0.728960 Validation: 0.566406
Epoch 55, CIFAR-10 Batch 3:  Loss:     0.8332 Training: 0.740099 Validation: 0.535156
Epoch 55, CIFAR-10 Batch 4:  Loss:     0.8182 Training: 0.748762 Validation: 0.542969
Epoch 55, CIFAR-10 Batch 5:  Loss:     0.8265 Training: 0.742574 Validation: 0.574219
Epoch 56, CIFAR-10 Batch 1:  Loss:     0.8171 Training: 0.732673 Validation: 0.550781
Epoch 56, CIFAR-10 Batch 2:  Loss:     0.8338 Training: 0.740099 Validation: 0.531250
Epoch 56, CIFAR-10 Batch 3:  Loss:     0.8431 Training: 0.737624 Validation: 0.570312
Epoch 56, CIFAR-10 Batch 4:  Loss:     0.8044 Training: 0.757426 Validation: 0.531250
Epoch 56, CIFAR-10 Batch 5:  Loss:     0.7985 Training: 0.740099 Validation: 0.554688
Epoch 57, CIFAR-10 Batch 1:  Loss:     0.8097 Training: 0.743812 Validation: 0.585938
Epoch 57, CIFAR-10 Batch 2:  Loss:     0.8374 Training: 0.730198 Validation: 0.562500
Epoch 57, CIFAR-10 Batch 3:  Loss:     0.8311 Training: 0.740099 Validation: 0.574219
Epoch 57, CIFAR-10 Batch 4:  Loss:     0.8025 Training: 0.756188 Validation: 0.554688
Epoch 57, CIFAR-10 Batch 5:  Loss:     0.8089 Training: 0.754951 Validation: 0.531250
Epoch 58, CIFAR-10 Batch 1:  Loss:     0.8200 Training: 0.730198 Validation: 0.574219
Epoch 58, CIFAR-10 Batch 2:  Loss:     0.8570 Training: 0.726485 Validation: 0.527344
Epoch 58, CIFAR-10 Batch 3:  Loss:     0.8380 Training: 0.731436 Validation: 0.527344
Epoch 58, CIFAR-10 Batch 4:  Loss:     0.8028 Training: 0.761139 Validation: 0.558594
Epoch 58, CIFAR-10 Batch 5:  Loss:     0.8009 Training: 0.762376 Validation: 0.523438
Epoch 59, CIFAR-10 Batch 1:  Loss:     0.7990 Training: 0.750000 Validation: 0.566406
Epoch 59, CIFAR-10 Batch 2:  Loss:     0.8212 Training: 0.750000 Validation: 0.566406
Epoch 59, CIFAR-10 Batch 3:  Loss:     0.8286 Training: 0.735148 Validation: 0.542969
Epoch 59, CIFAR-10 Batch 4:  Loss:     0.7886 Training: 0.763614 Validation: 0.578125
Epoch 59, CIFAR-10 Batch 5:  Loss:     0.8208 Training: 0.752475 Validation: 0.566406
Epoch 60, CIFAR-10 Batch 1:  Loss:     0.8044 Training: 0.751238 Validation: 0.582031
Epoch 60, CIFAR-10 Batch 2:  Loss:     0.8260 Training: 0.746287 Validation: 0.531250
Epoch 60, CIFAR-10 Batch 3:  Loss:     0.8209 Training: 0.751238 Validation: 0.550781
Epoch 60, CIFAR-10 Batch 4:  Loss:     0.7996 Training: 0.751238 Validation: 0.535156
Epoch 60, CIFAR-10 Batch 5:  Loss:     0.7929 Training: 0.751238 Validation: 0.542969
Epoch 61, CIFAR-10 Batch 1:  Loss:     0.8186 Training: 0.741337 Validation: 0.535156
Epoch 61, CIFAR-10 Batch 2:  Loss:     0.8256 Training: 0.733911 Validation: 0.570312
Epoch 61, CIFAR-10 Batch 3:  Loss:     0.8259 Training: 0.752475 Validation: 0.539062
Epoch 61, CIFAR-10 Batch 4:  Loss:     0.7762 Training: 0.769802 Validation: 0.535156
Epoch 61, CIFAR-10 Batch 5:  Loss:     0.7628 Training: 0.780941 Validation: 0.542969
Epoch 62, CIFAR-10 Batch 1:  Loss:     0.7975 Training: 0.756188 Validation: 0.550781
Epoch 62, CIFAR-10 Batch 2:  Loss:     0.8073 Training: 0.747525 Validation: 0.515625
Epoch 62, CIFAR-10 Batch 3:  Loss:     0.8366 Training: 0.728960 Validation: 0.531250
Epoch 62, CIFAR-10 Batch 4:  Loss:     0.7648 Training: 0.769802 Validation: 0.578125
Epoch 62, CIFAR-10 Batch 5:  Loss:     0.7744 Training: 0.761139 Validation: 0.535156
Epoch 63, CIFAR-10 Batch 1:  Loss:     0.7895 Training: 0.747525 Validation: 0.554688
Epoch 63, CIFAR-10 Batch 2:  Loss:     0.8141 Training: 0.732673 Validation: 0.574219
Epoch 63, CIFAR-10 Batch 3:  Loss:     0.8100 Training: 0.738861 Validation: 0.550781
Epoch 63, CIFAR-10 Batch 4:  Loss:     0.7631 Training: 0.772277 Validation: 0.535156
Epoch 63, CIFAR-10 Batch 5:  Loss:     0.7576 Training: 0.772277 Validation: 0.589844
Epoch 64, CIFAR-10 Batch 1:  Loss:     0.7763 Training: 0.758663 Validation: 0.566406
Epoch 64, CIFAR-10 Batch 2:  Loss:     0.7915 Training: 0.766089 Validation: 0.578125
Epoch 64, CIFAR-10 Batch 3:  Loss:     0.7963 Training: 0.745049 Validation: 0.542969
Epoch 64, CIFAR-10 Batch 4:  Loss:     0.7586 Training: 0.779703 Validation: 0.558594
Epoch 64, CIFAR-10 Batch 5:  Loss:     0.7803 Training: 0.772277 Validation: 0.554688
Epoch 65, CIFAR-10 Batch 1:  Loss:     0.7620 Training: 0.756188 Validation: 0.531250
Epoch 65, CIFAR-10 Batch 2:  Loss:     0.7793 Training: 0.766089 Validation: 0.566406
Epoch 65, CIFAR-10 Batch 3:  Loss:     0.7737 Training: 0.756188 Validation: 0.539062
Epoch 65, CIFAR-10 Batch 4:  Loss:     0.7509 Training: 0.775990 Validation: 0.550781
Epoch 65, CIFAR-10 Batch 5:  Loss:     0.7751 Training: 0.771040 Validation: 0.562500
Epoch 66, CIFAR-10 Batch 1:  Loss:     0.7766 Training: 0.754951 Validation: 0.570312
Epoch 66, CIFAR-10 Batch 2:  Loss:     0.7869 Training: 0.759901 Validation: 0.550781
Epoch 66, CIFAR-10 Batch 3:  Loss:     0.7996 Training: 0.735148 Validation: 0.539062
Epoch 66, CIFAR-10 Batch 4:  Loss:     0.7517 Training: 0.783416 Validation: 0.542969
Epoch 66, CIFAR-10 Batch 5:  Loss:     0.7438 Training: 0.785891 Validation: 0.574219
Epoch 67, CIFAR-10 Batch 1:  Loss:     0.7775 Training: 0.756188 Validation: 0.546875
Epoch 67, CIFAR-10 Batch 2:  Loss:     0.7842 Training: 0.756188 Validation: 0.574219
Epoch 67, CIFAR-10 Batch 3:  Loss:     0.8042 Training: 0.740099 Validation: 0.539062
Epoch 67, CIFAR-10 Batch 4:  Loss:     0.7854 Training: 0.759901 Validation: 0.535156
Epoch 67, CIFAR-10 Batch 5:  Loss:     0.7624 Training: 0.772277 Validation: 0.570312
Epoch 68, CIFAR-10 Batch 1:  Loss:     0.8077 Training: 0.752475 Validation: 0.531250
Epoch 68, CIFAR-10 Batch 2:  Loss:     0.7724 Training: 0.768564 Validation: 0.578125
Epoch 68, CIFAR-10 Batch 3:  Loss:     0.7984 Training: 0.742574 Validation: 0.539062
Epoch 68, CIFAR-10 Batch 4:  Loss:     0.7732 Training: 0.762376 Validation: 0.597656
Epoch 68, CIFAR-10 Batch 5:  Loss:     0.7410 Training: 0.769802 Validation: 0.574219
Epoch 69, CIFAR-10 Batch 1:  Loss:     0.7844 Training: 0.761139 Validation: 0.562500
Epoch 69, CIFAR-10 Batch 2:  Loss:     0.7947 Training: 0.758663 Validation: 0.578125
Epoch 69, CIFAR-10 Batch 3:  Loss:     0.7935 Training: 0.743812 Validation: 0.550781
Epoch 69, CIFAR-10 Batch 4:  Loss:     0.7701 Training: 0.769802 Validation: 0.554688
Epoch 69, CIFAR-10 Batch 5:  Loss:     0.7528 Training: 0.762376 Validation: 0.562500
Epoch 70, CIFAR-10 Batch 1:  Loss:     0.7860 Training: 0.761139 Validation: 0.546875
Epoch 70, CIFAR-10 Batch 2:  Loss:     0.8153 Training: 0.756188 Validation: 0.601562
Epoch 70, CIFAR-10 Batch 3:  Loss:     0.8264 Training: 0.741337 Validation: 0.507812
Epoch 70, CIFAR-10 Batch 4:  Loss:     0.7649 Training: 0.771040 Validation: 0.570312
Epoch 70, CIFAR-10 Batch 5:  Loss:     0.7611 Training: 0.779703 Validation: 0.574219
Epoch 71, CIFAR-10 Batch 1:  Loss:     0.7692 Training: 0.773515 Validation: 0.589844
Epoch 71, CIFAR-10 Batch 2:  Loss:     0.7806 Training: 0.772277 Validation: 0.582031
Epoch 71, CIFAR-10 Batch 3:  Loss:     0.7813 Training: 0.754951 Validation: 0.562500
Epoch 71, CIFAR-10 Batch 4:  Loss:     0.7663 Training: 0.774752 Validation: 0.574219
Epoch 71, CIFAR-10 Batch 5:  Loss:     0.7534 Training: 0.768564 Validation: 0.562500
Epoch 72, CIFAR-10 Batch 1:  Loss:     0.7650 Training: 0.771040 Validation: 0.570312
Epoch 72, CIFAR-10 Batch 2:  Loss:     0.7691 Training: 0.771040 Validation: 0.593750
Epoch 72, CIFAR-10 Batch 3:  Loss:     0.7752 Training: 0.756188 Validation: 0.558594
Epoch 72, CIFAR-10 Batch 4:  Loss:     0.7523 Training: 0.787129 Validation: 0.578125
Epoch 72, CIFAR-10 Batch 5:  Loss:     0.7616 Training: 0.759901 Validation: 0.562500
Epoch 73, CIFAR-10 Batch 1:  Loss:     0.7895 Training: 0.757426 Validation: 0.574219
Epoch 73, CIFAR-10 Batch 2:  Loss:     0.7524 Training: 0.767327 Validation: 0.613281
Epoch 73, CIFAR-10 Batch 3:  Loss:     0.7665 Training: 0.768564 Validation: 0.546875
Epoch 73, CIFAR-10 Batch 4:  Loss:     0.7422 Training: 0.794554 Validation: 0.582031
Epoch 73, CIFAR-10 Batch 5:  Loss:     0.7341 Training: 0.775990 Validation: 0.566406
Epoch 74, CIFAR-10 Batch 1:  Loss:     0.7479 Training: 0.767327 Validation: 0.574219
Epoch 74, CIFAR-10 Batch 2:  Loss:     0.7750 Training: 0.764852 Validation: 0.593750
Epoch 74, CIFAR-10 Batch 3:  Loss:     0.7615 Training: 0.761139 Validation: 0.562500
Epoch 74, CIFAR-10 Batch 4:  Loss:     0.7249 Training: 0.798267 Validation: 0.578125
Epoch 74, CIFAR-10 Batch 5:  Loss:     0.7371 Training: 0.780941 Validation: 0.570312
Epoch 75, CIFAR-10 Batch 1:  Loss:     0.7323 Training: 0.767327 Validation: 0.597656
Epoch 75, CIFAR-10 Batch 2:  Loss:     0.7387 Training: 0.778465 Validation: 0.589844
Epoch 75, CIFAR-10 Batch 3:  Loss:     0.7453 Training: 0.764852 Validation: 0.550781
Epoch 75, CIFAR-10 Batch 4:  Loss:     0.7028 Training: 0.803218 Validation: 0.625000
Epoch 75, CIFAR-10 Batch 5:  Loss:     0.7318 Training: 0.783416 Validation: 0.550781
Epoch 76, CIFAR-10 Batch 1:  Loss:     0.7135 Training: 0.782178 Validation: 0.566406
Epoch 76, CIFAR-10 Batch 2:  Loss:     0.7551 Training: 0.767327 Validation: 0.597656
Epoch 76, CIFAR-10 Batch 3:  Loss:     0.7420 Training: 0.764852 Validation: 0.570312
Epoch 76, CIFAR-10 Batch 4:  Loss:     0.7324 Training: 0.795792 Validation: 0.578125
Epoch 76, CIFAR-10 Batch 5:  Loss:     0.7195 Training: 0.788366 Validation: 0.582031
Epoch 77, CIFAR-10 Batch 1:  Loss:     0.7164 Training: 0.780941 Validation: 0.574219
Epoch 77, CIFAR-10 Batch 2:  Loss:     0.7376 Training: 0.777228 Validation: 0.589844
Epoch 77, CIFAR-10 Batch 3:  Loss:     0.7423 Training: 0.758663 Validation: 0.582031
Epoch 77, CIFAR-10 Batch 4:  Loss:     0.7117 Training: 0.788366 Validation: 0.605469
Epoch 77, CIFAR-10 Batch 5:  Loss:     0.7143 Training: 0.792079 Validation: 0.566406
Epoch 78, CIFAR-10 Batch 1:  Loss:     0.7192 Training: 0.783416 Validation: 0.582031
Epoch 78, CIFAR-10 Batch 2:  Loss:     0.7451 Training: 0.773515 Validation: 0.582031
Epoch 78, CIFAR-10 Batch 3:  Loss:     0.7483 Training: 0.774752 Validation: 0.550781
Epoch 78, CIFAR-10 Batch 4:  Loss:     0.6994 Training: 0.800743 Validation: 0.613281
Epoch 78, CIFAR-10 Batch 5:  Loss:     0.7140 Training: 0.790842 Validation: 0.574219
Epoch 79, CIFAR-10 Batch 1:  Loss:     0.7257 Training: 0.773515 Validation: 0.601562
Epoch 79, CIFAR-10 Batch 2:  Loss:     0.7447 Training: 0.778465 Validation: 0.582031
Epoch 79, CIFAR-10 Batch 3:  Loss:     0.7247 Training: 0.774752 Validation: 0.589844
Epoch 79, CIFAR-10 Batch 4:  Loss:     0.6925 Training: 0.795792 Validation: 0.609375
Epoch 79, CIFAR-10 Batch 5:  Loss:     0.7157 Training: 0.804455 Validation: 0.593750
Epoch 80, CIFAR-10 Batch 1:  Loss:     0.7416 Training: 0.777228 Validation: 0.613281
Epoch 80, CIFAR-10 Batch 2:  Loss:     0.7477 Training: 0.769802 Validation: 0.574219
Epoch 80, CIFAR-10 Batch 3:  Loss:     0.7172 Training: 0.780941 Validation: 0.574219
Epoch 80, CIFAR-10 Batch 4:  Loss:     0.6755 Training: 0.803218 Validation: 0.589844
Epoch 80, CIFAR-10 Batch 5:  Loss:     0.6964 Training: 0.785891 Validation: 0.585938
Epoch 81, CIFAR-10 Batch 1:  Loss:     0.7016 Training: 0.789604 Validation: 0.605469
Epoch 81, CIFAR-10 Batch 2:  Loss:     0.7339 Training: 0.778465 Validation: 0.601562
Epoch 81, CIFAR-10 Batch 3:  Loss:     0.7412 Training: 0.768564 Validation: 0.570312
Epoch 81, CIFAR-10 Batch 4:  Loss:     0.6916 Training: 0.797030 Validation: 0.605469
Epoch 81, CIFAR-10 Batch 5:  Loss:     0.7193 Training: 0.794554 Validation: 0.542969
Epoch 82, CIFAR-10 Batch 1:  Loss:     0.7171 Training: 0.782178 Validation: 0.574219
Epoch 82, CIFAR-10 Batch 2:  Loss:     0.7222 Training: 0.790842 Validation: 0.597656
Epoch 82, CIFAR-10 Batch 3:  Loss:     0.7341 Training: 0.767327 Validation: 0.570312
Epoch 82, CIFAR-10 Batch 4:  Loss:     0.7234 Training: 0.778465 Validation: 0.578125
Epoch 82, CIFAR-10 Batch 5:  Loss:     0.6930 Training: 0.794554 Validation: 0.546875
Epoch 83, CIFAR-10 Batch 1:  Loss:     0.7042 Training: 0.785891 Validation: 0.585938
Epoch 83, CIFAR-10 Batch 2:  Loss:     0.7300 Training: 0.780941 Validation: 0.578125
Epoch 83, CIFAR-10 Batch 3:  Loss:     0.7092 Training: 0.775990 Validation: 0.558594
Epoch 83, CIFAR-10 Batch 4:  Loss:     0.7087 Training: 0.793317 Validation: 0.605469
Epoch 83, CIFAR-10 Batch 5:  Loss:     0.7454 Training: 0.783416 Validation: 0.597656
Epoch 84, CIFAR-10 Batch 1:  Loss:     0.6976 Training: 0.809406 Validation: 0.593750
Epoch 84, CIFAR-10 Batch 2:  Loss:     0.7080 Training: 0.793317 Validation: 0.609375
Epoch 84, CIFAR-10 Batch 3:  Loss:     0.7004 Training: 0.784653 Validation: 0.562500
Epoch 84, CIFAR-10 Batch 4:  Loss:     0.6859 Training: 0.792079 Validation: 0.617188
Epoch 84, CIFAR-10 Batch 5:  Loss:     0.7021 Training: 0.799505 Validation: 0.566406
Epoch 85, CIFAR-10 Batch 1:  Loss:     0.7033 Training: 0.801980 Validation: 0.589844
Epoch 85, CIFAR-10 Batch 2:  Loss:     0.7033 Training: 0.792079 Validation: 0.628906
Epoch 85, CIFAR-10 Batch 3:  Loss:     0.6944 Training: 0.795792 Validation: 0.550781
Epoch 85, CIFAR-10 Batch 4:  Loss:     0.6955 Training: 0.790842 Validation: 0.585938
Epoch 85, CIFAR-10 Batch 5:  Loss:     0.7180 Training: 0.805693 Validation: 0.582031
Epoch 86, CIFAR-10 Batch 1:  Loss:     0.7072 Training: 0.792079 Validation: 0.578125
Epoch 86, CIFAR-10 Batch 2:  Loss:     0.7052 Training: 0.795792 Validation: 0.597656
Epoch 86, CIFAR-10 Batch 3:  Loss:     0.6830 Training: 0.793317 Validation: 0.605469
Epoch 86, CIFAR-10 Batch 4:  Loss:     0.6772 Training: 0.810644 Validation: 0.570312
Epoch 86, CIFAR-10 Batch 5:  Loss:     0.7011 Training: 0.804455 Validation: 0.585938
Epoch 87, CIFAR-10 Batch 1:  Loss:     0.7098 Training: 0.795792 Validation: 0.597656
Epoch 87, CIFAR-10 Batch 2:  Loss:     0.7016 Training: 0.795792 Validation: 0.601562
Epoch 87, CIFAR-10 Batch 3:  Loss:     0.6935 Training: 0.794554 Validation: 0.566406
Epoch 87, CIFAR-10 Batch 4:  Loss:     0.6722 Training: 0.793317 Validation: 0.609375
Epoch 87, CIFAR-10 Batch 5:  Loss:     0.6820 Training: 0.814356 Validation: 0.570312
Epoch 88, CIFAR-10 Batch 1:  Loss:     0.7009 Training: 0.788366 Validation: 0.566406
Epoch 88, CIFAR-10 Batch 2:  Loss:     0.6941 Training: 0.792079 Validation: 0.589844
Epoch 88, CIFAR-10 Batch 3:  Loss:     0.6829 Training: 0.785891 Validation: 0.566406
Epoch 88, CIFAR-10 Batch 4:  Loss:     0.6821 Training: 0.805693 Validation: 0.566406
Epoch 88, CIFAR-10 Batch 5:  Loss:     0.6814 Training: 0.805693 Validation: 0.593750
Epoch 89, CIFAR-10 Batch 1:  Loss:     0.7142 Training: 0.779703 Validation: 0.570312
Epoch 89, CIFAR-10 Batch 2:  Loss:     0.6921 Training: 0.792079 Validation: 0.617188
Epoch 89, CIFAR-10 Batch 3:  Loss:     0.6851 Training: 0.789604 Validation: 0.570312
Epoch 89, CIFAR-10 Batch 4:  Loss:     0.6825 Training: 0.806931 Validation: 0.593750
Epoch 89, CIFAR-10 Batch 5:  Loss:     0.6711 Training: 0.815594 Validation: 0.617188
Epoch 90, CIFAR-10 Batch 1:  Loss:     0.6934 Training: 0.790842 Validation: 0.617188
Epoch 90, CIFAR-10 Batch 2:  Loss:     0.6828 Training: 0.818069 Validation: 0.593750
Epoch 90, CIFAR-10 Batch 3:  Loss:     0.6790 Training: 0.785891 Validation: 0.550781
Epoch 90, CIFAR-10 Batch 4:  Loss:     0.6705 Training: 0.799505 Validation: 0.617188
Epoch 90, CIFAR-10 Batch 5:  Loss:     0.6673 Training: 0.813119 Validation: 0.593750
Epoch 91, CIFAR-10 Batch 1:  Loss:     0.6969 Training: 0.798267 Validation: 0.554688
Epoch 91, CIFAR-10 Batch 2:  Loss:     0.6734 Training: 0.814356 Validation: 0.601562
Epoch 91, CIFAR-10 Batch 3:  Loss:     0.6642 Training: 0.789604 Validation: 0.585938
Epoch 91, CIFAR-10 Batch 4:  Loss:     0.6725 Training: 0.801980 Validation: 0.570312
Epoch 91, CIFAR-10 Batch 5:  Loss:     0.6609 Training: 0.806931 Validation: 0.585938
Epoch 92, CIFAR-10 Batch 1:  Loss:     0.6840 Training: 0.794554 Validation: 0.585938
Epoch 92, CIFAR-10 Batch 2:  Loss:     0.6855 Training: 0.783416 Validation: 0.613281
Epoch 92, CIFAR-10 Batch 3:  Loss:     0.6592 Training: 0.798267 Validation: 0.582031
Epoch 92, CIFAR-10 Batch 4:  Loss:     0.6451 Training: 0.821782 Validation: 0.601562
Epoch 92, CIFAR-10 Batch 5:  Loss:     0.6776 Training: 0.818069 Validation: 0.585938
Epoch 93, CIFAR-10 Batch 1:  Loss:     0.6878 Training: 0.794554 Validation: 0.593750
Epoch 93, CIFAR-10 Batch 2:  Loss:     0.6724 Training: 0.808168 Validation: 0.609375
Epoch 93, CIFAR-10 Batch 3:  Loss:     0.6590 Training: 0.798267 Validation: 0.593750
Epoch 93, CIFAR-10 Batch 4:  Loss:     0.6288 Training: 0.832921 Validation: 0.601562
Epoch 93, CIFAR-10 Batch 5:  Loss:     0.6848 Training: 0.815594 Validation: 0.558594
Epoch 94, CIFAR-10 Batch 1:  Loss:     0.6985 Training: 0.784653 Validation: 0.593750
Epoch 94, CIFAR-10 Batch 2:  Loss:     0.6666 Training: 0.814356 Validation: 0.593750
Epoch 94, CIFAR-10 Batch 3:  Loss:     0.6549 Training: 0.783416 Validation: 0.566406
Epoch 94, CIFAR-10 Batch 4:  Loss:     0.6403 Training: 0.809406 Validation: 0.636719
Epoch 94, CIFAR-10 Batch 5:  Loss:     0.6539 Training: 0.823020 Validation: 0.593750
Epoch 95, CIFAR-10 Batch 1:  Loss:     0.6539 Training: 0.810644 Validation: 0.609375
Epoch 95, CIFAR-10 Batch 2:  Loss:     0.6824 Training: 0.800743 Validation: 0.566406
Epoch 95, CIFAR-10 Batch 3:  Loss:     0.6727 Training: 0.795792 Validation: 0.585938
Epoch 95, CIFAR-10 Batch 4:  Loss:     0.6418 Training: 0.810644 Validation: 0.617188
Epoch 95, CIFAR-10 Batch 5:  Loss:     0.6652 Training: 0.820545 Validation: 0.593750
Epoch 96, CIFAR-10 Batch 1:  Loss:     0.6758 Training: 0.794554 Validation: 0.558594
Epoch 96, CIFAR-10 Batch 2:  Loss:     0.6752 Training: 0.792079 Validation: 0.625000
Epoch 96, CIFAR-10 Batch 3:  Loss:     0.6570 Training: 0.798267 Validation: 0.605469
Epoch 96, CIFAR-10 Batch 4:  Loss:     0.6446 Training: 0.809406 Validation: 0.589844
Epoch 96, CIFAR-10 Batch 5:  Loss:     0.6509 Training: 0.824257 Validation: 0.550781
Epoch 97, CIFAR-10 Batch 1:  Loss:     0.6608 Training: 0.809406 Validation: 0.582031
Epoch 97, CIFAR-10 Batch 2:  Loss:     0.6513 Training: 0.818069 Validation: 0.582031
Epoch 97, CIFAR-10 Batch 3:  Loss:     0.6493 Training: 0.818069 Validation: 0.601562
Epoch 97, CIFAR-10 Batch 4:  Loss:     0.6262 Training: 0.820545 Validation: 0.613281
Epoch 97, CIFAR-10 Batch 5:  Loss:     0.6576 Training: 0.829208 Validation: 0.570312
Epoch 98, CIFAR-10 Batch 1:  Loss:     0.6419 Training: 0.823020 Validation: 0.542969
Epoch 98, CIFAR-10 Batch 2:  Loss:     0.6549 Training: 0.805693 Validation: 0.585938
Epoch 98, CIFAR-10 Batch 3:  Loss:     0.6460 Training: 0.808168 Validation: 0.589844
Epoch 98, CIFAR-10 Batch 4:  Loss:     0.6265 Training: 0.815594 Validation: 0.578125
Epoch 98, CIFAR-10 Batch 5:  Loss:     0.6518 Training: 0.823020 Validation: 0.617188
Epoch 99, CIFAR-10 Batch 1:  Loss:     0.6516 Training: 0.811881 Validation: 0.613281
Epoch 99, CIFAR-10 Batch 2:  Loss:     0.6513 Training: 0.820545 Validation: 0.601562
Epoch 99, CIFAR-10 Batch 3:  Loss:     0.6416 Training: 0.808168 Validation: 0.605469
Epoch 99, CIFAR-10 Batch 4:  Loss:     0.6125 Training: 0.823020 Validation: 0.605469
Epoch 99, CIFAR-10 Batch 5:  Loss:     0.6237 Training: 0.836634 Validation: 0.597656
Epoch 100, CIFAR-10 Batch 1:  Loss:     0.6440 Training: 0.805693 Validation: 0.609375
Epoch 100, CIFAR-10 Batch 2:  Loss:     0.6430 Training: 0.813119 Validation: 0.593750
Epoch 100, CIFAR-10 Batch 3:  Loss:     0.6283 Training: 0.814356 Validation: 0.585938
Epoch 100, CIFAR-10 Batch 4:  Loss:     0.6084 Training: 0.821782 Validation: 0.578125
Epoch 100, CIFAR-10 Batch 5:  Loss:     0.6291 Training: 0.826733 Validation: 0.578125

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [2]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.6253980891719745

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.