Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you've learned to build a network with convolutional, max pooling, dropout, and fully connected layers. At the end, you'll see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('cifar-10-python.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            'cifar-10-python.tar.gz',
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open('cifar-10-python.tar.gz') as tar:
        tar.extractall()


tests.test_folder_path(cifar10_dataset_folder_path)


CIFAR-10 Dataset: 171MB [10:50, 262KB/s]                               
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images from the following classes:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [2]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 5:
Image - Min Value: 0 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile
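
If you want to answer those questions programmatically, here's a minimal sketch, assuming the standard CIFAR-10 python pickle layout (a dict with 'data' and 'labels' keys, where 'data' is a uint8 array of shape (10000, 3072)):

import pickle

# Load one raw batch directly (illustrative only; the helper module wraps this)
with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as f:
    batch = pickle.load(f, encoding='latin1')

print(sorted(set(batch['labels'])))              # all possible labels: 0 through 9
print(batch['data'].min(), batch['data'].max())  # pixel value range: 0 to 255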

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
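
This is ordinary min-max scaling. For 8-bit pixel data the minimum is 0 and the maximum is 255, so the general formula reduces to a single division:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} = \frac{x - 0}{255 - 0} = \frac{x}{255}$$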


In [3]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # Pixel values are 8-bit (0-255), so dividing by 255 maps them into [0, 1]
    return np.array(x) / 255

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [4]:
from sklearn import preprocessing

all_values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# Fit the binarizer once, outside the function, so every call
# reuses the same label-to-vector mapping
lb = preprocessing.LabelBinarizer()
lb.fit(all_values)

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    return lb.transform(x)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed
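
If you'd rather not depend on sklearn, a numpy-only sketch of the same encoding (using a hypothetical one_hot_encode_np, and assuming labels are always integers 0-9) is:

import numpy as np

def one_hot_encode_np(x):
    # Row i of the 10x10 identity matrix is the one-hot vector for label i,
    # so indexing with the label array encodes the whole batch at once
    return np.eye(10)[np.array(x)]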

Randomize Data

As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allow for a dynamic size.
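
For context, these names matter because the checkpoint loader at the end of the project looks tensors up by name. A sketch of what that lookup looks like (loaded_graph is a hypothetical restored graph; the actual loading code is provided later):

# loaded_x = loaded_graph.get_tensor_by_name('x:0')
# loaded_y = loaded_graph.get_tensor_by_name('y:0')
# loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')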


In [7]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, [None, *image_shape], name="x")

def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, [None, n_classes], name="y")

def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolutional layers have had a lot of success with images. For this code cell, implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [8]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # Weights cover (filter height, filter width, input depth, output depth)
    input_depth = x_tensor.get_shape().as_list()[3]
    weight = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs)))
    bias = tf.Variable(tf.zeros(conv_num_outputs))

    # Convolution -> bias -> ReLU -> max pool, all with SAME padding
    x = tf.nn.conv2d(x_tensor, weight, strides=[1, *conv_strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, bias)
    x = tf.nn.relu(x)
    x = tf.nn.max_pool(x, ksize=[1, *pool_ksize, 1], strides=[1, *pool_strides, 1], padding='SAME')
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed
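
As a quick optional sanity check (not part of the project's unit tests): with SAME padding the output spatial size is ceil(input_size / stride), so a 32x32 input with conv stride (1, 1) stays 32x32, and a 2x2 pool with stride (2, 2) halves it to 16x16. The names below are illustrative, not part of the template:

check_x = tf.placeholder(tf.float32, [None, 32, 32, 3])
check_out = conv2d_maxpool(check_x, 10, (3, 3), (1, 1), (2, 2), (2, 2))
print(check_out.get_shape().as_list())  # [None, 16, 16, 10]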

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [9]:
def x_shape(x_tensor):
    # Helper: static shape of a tensor as a plain Python list
    return x_tensor.get_shape().as_list()

def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # Collapse every dimension after the batch dimension into one
    return tf.reshape(x_tensor, [-1, np.prod(x_shape(x_tensor)[1:])])

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed
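
Continuing the illustration above: flattening that (None, 16, 16, 10) tensor collapses 16 * 16 * 10 = 2560 features per sample:

print(flatten(check_out).get_shape().as_list())  # [None, 2560]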

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [10]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # Linear transform followed by a ReLU activation
    x_length = x_shape(x_tensor)[1]
    weights = tf.Variable(tf.truncated_normal([x_length, num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    x = tf.add(tf.matmul(x_tensor, weights), bias)
    x = tf.nn.relu(x)
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this layer; the cost function used below, tf.nn.softmax_cross_entropy_with_logits, applies softmax itself.


In [11]:
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # Same as fully_conn but without the activation: the logits stay linear
    x_length = x_shape(x_tensor)[1]
    weights = tf.Variable(tf.truncated_normal([x_length, num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.add(tf.matmul(x_tensor, weights), bias)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [49]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_ksize = (3, 3)
    conv_strides = (1, 1)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    
    conv_num_outputs_1 = 16
    conv_num_outputs_2 = 32

    x = conv2d_maxpool(x, conv_num_outputs_1, conv_ksize, conv_strides, pool_ksize, pool_strides)
    x = conv2d_maxpool(x, conv_num_outputs_2, conv_ksize, conv_strides, pool_ksize, pool_strides)
    
    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    x = flatten(x)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    fc_num_outputs = 784
    
    x = fully_conn(x, fc_num_outputs)
    x = tf.nn.dropout(x, keep_prob)
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    x = output(x, 10)
    
    # TODO: return output
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!
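
If you're curious about the model's size, an optional snippet (run after the cell above, once the graph is built) counts the trainable parameters; param_count is an illustrative name:

param_count = sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print('Trainable parameters:', param_count)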

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [50]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, 
                feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [51]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    loss = session.run(cost, 
                       feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_acc = session.run(accuracy, 
                       feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people use common sizes:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [52]:
# TODO: Tune Parameters
epochs = 100
batch_size = 256
keep_probability = 0.70

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [53]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss:    10.3923 Validation Accuracy: 0.258800
Epoch  2, CIFAR-10 Batch 1:  Loss:     2.0923 Validation Accuracy: 0.224800
Epoch  3, CIFAR-10 Batch 1:  Loss:     1.9604 Validation Accuracy: 0.212800
Epoch  4, CIFAR-10 Batch 1:  Loss:     1.7845 Validation Accuracy: 0.226600
Epoch  5, CIFAR-10 Batch 1:  Loss:     1.6747 Validation Accuracy: 0.257800
Epoch  6, CIFAR-10 Batch 1:  Loss:     1.6365 Validation Accuracy: 0.280600
Epoch  7, CIFAR-10 Batch 1:  Loss:     1.5563 Validation Accuracy: 0.289200
Epoch  8, CIFAR-10 Batch 1:  Loss:     1.5436 Validation Accuracy: 0.303800
Epoch  9, CIFAR-10 Batch 1:  Loss:     1.4876 Validation Accuracy: 0.315600
Epoch 10, CIFAR-10 Batch 1:  Loss:     1.4145 Validation Accuracy: 0.330800
Epoch 11, CIFAR-10 Batch 1:  Loss:     1.3452 Validation Accuracy: 0.326400
Epoch 12, CIFAR-10 Batch 1:  Loss:     1.2444 Validation Accuracy: 0.343000
Epoch 13, CIFAR-10 Batch 1:  Loss:     1.2534 Validation Accuracy: 0.348400
Epoch 14, CIFAR-10 Batch 1:  Loss:     1.2506 Validation Accuracy: 0.359800
Epoch 15, CIFAR-10 Batch 1:  Loss:     1.2078 Validation Accuracy: 0.364000
Epoch 16, CIFAR-10 Batch 1:  Loss:     1.1544 Validation Accuracy: 0.372200
Epoch 17, CIFAR-10 Batch 1:  Loss:     1.1116 Validation Accuracy: 0.378800
Epoch 18, CIFAR-10 Batch 1:  Loss:     1.1184 Validation Accuracy: 0.390400
Epoch 19, CIFAR-10 Batch 1:  Loss:     1.0636 Validation Accuracy: 0.389400
Epoch 20, CIFAR-10 Batch 1:  Loss:     1.0249 Validation Accuracy: 0.392000
Epoch 21, CIFAR-10 Batch 1:  Loss:     1.0017 Validation Accuracy: 0.406800
Epoch 22, CIFAR-10 Batch 1:  Loss:     0.9522 Validation Accuracy: 0.413600
Epoch 23, CIFAR-10 Batch 1:  Loss:     0.9365 Validation Accuracy: 0.412000
Epoch 24, CIFAR-10 Batch 1:  Loss:     0.8430 Validation Accuracy: 0.423400
Epoch 25, CIFAR-10 Batch 1:  Loss:     0.8651 Validation Accuracy: 0.417200
Epoch 26, CIFAR-10 Batch 1:  Loss:     0.8081 Validation Accuracy: 0.421200
Epoch 27, CIFAR-10 Batch 1:  Loss:     0.7575 Validation Accuracy: 0.433000
Epoch 28, CIFAR-10 Batch 1:  Loss:     0.7476 Validation Accuracy: 0.432200
Epoch 29, CIFAR-10 Batch 1:  Loss:     0.7303 Validation Accuracy: 0.444000
Epoch 30, CIFAR-10 Batch 1:  Loss:     0.6887 Validation Accuracy: 0.442200
Epoch 31, CIFAR-10 Batch 1:  Loss:     0.6500 Validation Accuracy: 0.453000
Epoch 32, CIFAR-10 Batch 1:  Loss:     0.6141 Validation Accuracy: 0.456600
Epoch 33, CIFAR-10 Batch 1:  Loss:     0.5787 Validation Accuracy: 0.461000
Epoch 34, CIFAR-10 Batch 1:  Loss:     0.5550 Validation Accuracy: 0.471800
Epoch 35, CIFAR-10 Batch 1:  Loss:     0.5679 Validation Accuracy: 0.462600
Epoch 36, CIFAR-10 Batch 1:  Loss:     0.5150 Validation Accuracy: 0.466600
Epoch 37, CIFAR-10 Batch 1:  Loss:     0.5058 Validation Accuracy: 0.460800
Epoch 38, CIFAR-10 Batch 1:  Loss:     0.4936 Validation Accuracy: 0.472000
Epoch 39, CIFAR-10 Batch 1:  Loss:     0.4523 Validation Accuracy: 0.471200
Epoch 40, CIFAR-10 Batch 1:  Loss:     0.4560 Validation Accuracy: 0.470000
Epoch 41, CIFAR-10 Batch 1:  Loss:     0.4675 Validation Accuracy: 0.464200
Epoch 42, CIFAR-10 Batch 1:  Loss:     0.4272 Validation Accuracy: 0.462800
Epoch 43, CIFAR-10 Batch 1:  Loss:     0.4112 Validation Accuracy: 0.474600
Epoch 44, CIFAR-10 Batch 1:  Loss:     0.4229 Validation Accuracy: 0.464000
Epoch 45, CIFAR-10 Batch 1:  Loss:     0.4119 Validation Accuracy: 0.471600
Epoch 46, CIFAR-10 Batch 1:  Loss:     0.4200 Validation Accuracy: 0.461400
Epoch 47, CIFAR-10 Batch 1:  Loss:     0.4100 Validation Accuracy: 0.452200
Epoch 48, CIFAR-10 Batch 1:  Loss:     0.3636 Validation Accuracy: 0.461600
Epoch 49, CIFAR-10 Batch 1:  Loss:     0.3507 Validation Accuracy: 0.468600
Epoch 50, CIFAR-10 Batch 1:  Loss:     0.3471 Validation Accuracy: 0.472400
Epoch 51, CIFAR-10 Batch 1:  Loss:     0.3645 Validation Accuracy: 0.464800
Epoch 52, CIFAR-10 Batch 1:  Loss:     0.2897 Validation Accuracy: 0.464200
Epoch 53, CIFAR-10 Batch 1:  Loss:     0.3211 Validation Accuracy: 0.469200
Epoch 54, CIFAR-10 Batch 1:  Loss:     0.2660 Validation Accuracy: 0.477600
Epoch 55, CIFAR-10 Batch 1:  Loss:     0.2722 Validation Accuracy: 0.477200
Epoch 56, CIFAR-10 Batch 1:  Loss:     0.2537 Validation Accuracy: 0.478000
Epoch 57, CIFAR-10 Batch 1:  Loss:     0.2218 Validation Accuracy: 0.478000
Epoch 58, CIFAR-10 Batch 1:  Loss:     0.2179 Validation Accuracy: 0.462600
Epoch 59, CIFAR-10 Batch 1:  Loss:     0.2437 Validation Accuracy: 0.464600
Epoch 60, CIFAR-10 Batch 1:  Loss:     0.2020 Validation Accuracy: 0.477000
Epoch 61, CIFAR-10 Batch 1:  Loss:     0.2133 Validation Accuracy: 0.476200
Epoch 62, CIFAR-10 Batch 1:  Loss:     0.1948 Validation Accuracy: 0.477800
Epoch 63, CIFAR-10 Batch 1:  Loss:     0.1813 Validation Accuracy: 0.479400
Epoch 64, CIFAR-10 Batch 1:  Loss:     0.1826 Validation Accuracy: 0.483000
Epoch 65, CIFAR-10 Batch 1:  Loss:     0.1531 Validation Accuracy: 0.484000
Epoch 66, CIFAR-10 Batch 1:  Loss:     0.1465 Validation Accuracy: 0.483000
Epoch 67, CIFAR-10 Batch 1:  Loss:     0.1306 Validation Accuracy: 0.486400
Epoch 68, CIFAR-10 Batch 1:  Loss:     0.1295 Validation Accuracy: 0.483200
Epoch 69, CIFAR-10 Batch 1:  Loss:     0.1226 Validation Accuracy: 0.488600
Epoch 70, CIFAR-10 Batch 1:  Loss:     0.1277 Validation Accuracy: 0.479800
Epoch 71, CIFAR-10 Batch 1:  Loss:     0.1314 Validation Accuracy: 0.476600
Epoch 72, CIFAR-10 Batch 1:  Loss:     0.1049 Validation Accuracy: 0.488200
Epoch 73, CIFAR-10 Batch 1:  Loss:     0.0954 Validation Accuracy: 0.476600
Epoch 74, CIFAR-10 Batch 1:  Loss:     0.0782 Validation Accuracy: 0.499600
Epoch 75, CIFAR-10 Batch 1:  Loss:     0.0854 Validation Accuracy: 0.486600
Epoch 76, CIFAR-10 Batch 1:  Loss:     0.1016 Validation Accuracy: 0.493400
Epoch 77, CIFAR-10 Batch 1:  Loss:     0.1595 Validation Accuracy: 0.490800
Epoch 78, CIFAR-10 Batch 1:  Loss:     0.1089 Validation Accuracy: 0.492400
Epoch 79, CIFAR-10 Batch 1:  Loss:     0.1145 Validation Accuracy: 0.489400
Epoch 80, CIFAR-10 Batch 1:  Loss:     0.0946 Validation Accuracy: 0.486800
Epoch 81, CIFAR-10 Batch 1:  Loss:     0.1013 Validation Accuracy: 0.495000
Epoch 82, CIFAR-10 Batch 1:  Loss:     0.0813 Validation Accuracy: 0.486200
Epoch 83, CIFAR-10 Batch 1:  Loss:     0.0788 Validation Accuracy: 0.490200
Epoch 84, CIFAR-10 Batch 1:  Loss:     0.0814 Validation Accuracy: 0.494400
Epoch 85, CIFAR-10 Batch 1:  Loss:     0.0773 Validation Accuracy: 0.499200
Epoch 86, CIFAR-10 Batch 1:  Loss:     0.0715 Validation Accuracy: 0.492400
Epoch 87, CIFAR-10 Batch 1:  Loss:     0.0722 Validation Accuracy: 0.492400
Epoch 88, CIFAR-10 Batch 1:  Loss:     0.0674 Validation Accuracy: 0.497200
Epoch 89, CIFAR-10 Batch 1:  Loss:     0.0566 Validation Accuracy: 0.503400
Epoch 90, CIFAR-10 Batch 1:  Loss:     0.0577 Validation Accuracy: 0.493800
Epoch 91, CIFAR-10 Batch 1:  Loss:     0.0527 Validation Accuracy: 0.493000
Epoch 92, CIFAR-10 Batch 1:  Loss:     0.0550 Validation Accuracy: 0.498200
Epoch 93, CIFAR-10 Batch 1:  Loss:     0.0328 Validation Accuracy: 0.501400
Epoch 94, CIFAR-10 Batch 1:  Loss:     0.0389 Validation Accuracy: 0.503000
Epoch 95, CIFAR-10 Batch 1:  Loss:     0.0499 Validation Accuracy: 0.500000
Epoch 96, CIFAR-10 Batch 1:  Loss:     0.0360 Validation Accuracy: 0.499400
Epoch 97, CIFAR-10 Batch 1:  Loss:     0.0267 Validation Accuracy: 0.507600
Epoch 98, CIFAR-10 Batch 1:  Loss:     0.0239 Validation Accuracy: 0.504200
Epoch 99, CIFAR-10 Batch 1:  Loss:     0.0217 Validation Accuracy: 0.517800
Epoch 100, CIFAR-10 Batch 1:  Loss:     0.0133 Validation Accuracy: 0.506400

Fully Train the Model

Now that you've gotten a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [54]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss:    13.0046 Validation Accuracy: 0.261200
Epoch  1, CIFAR-10 Batch 2:  Loss:     3.0388 Validation Accuracy: 0.177400
Epoch  1, CIFAR-10 Batch 3:  Loss:     2.4496 Validation Accuracy: 0.159800
Epoch  1, CIFAR-10 Batch 4:  Loss:     2.2417 Validation Accuracy: 0.167000
Epoch  1, CIFAR-10 Batch 5:  Loss:     2.0738 Validation Accuracy: 0.188000
Epoch  2, CIFAR-10 Batch 1:  Loss:     2.2522 Validation Accuracy: 0.209400
Epoch  2, CIFAR-10 Batch 2:  Loss:     2.2674 Validation Accuracy: 0.237800
Epoch  2, CIFAR-10 Batch 3:  Loss:     1.9075 Validation Accuracy: 0.253800
Epoch  2, CIFAR-10 Batch 4:  Loss:     2.1240 Validation Accuracy: 0.259600
Epoch  2, CIFAR-10 Batch 5:  Loss:     1.9391 Validation Accuracy: 0.267600
Epoch  3, CIFAR-10 Batch 1:  Loss:     2.1153 Validation Accuracy: 0.275400
Epoch  3, CIFAR-10 Batch 2:  Loss:     1.8035 Validation Accuracy: 0.269400
Epoch  3, CIFAR-10 Batch 3:  Loss:     1.8005 Validation Accuracy: 0.280000
Epoch  3, CIFAR-10 Batch 4:  Loss:     1.9023 Validation Accuracy: 0.288800
Epoch  3, CIFAR-10 Batch 5:  Loss:     1.7861 Validation Accuracy: 0.289600
Epoch  4, CIFAR-10 Batch 1:  Loss:     2.0444 Validation Accuracy: 0.309400
Epoch  4, CIFAR-10 Batch 2:  Loss:     1.6870 Validation Accuracy: 0.310600
Epoch  4, CIFAR-10 Batch 3:  Loss:     1.6770 Validation Accuracy: 0.296000
Epoch  4, CIFAR-10 Batch 4:  Loss:     1.8228 Validation Accuracy: 0.319200
Epoch  4, CIFAR-10 Batch 5:  Loss:     1.7234 Validation Accuracy: 0.309600
Epoch  5, CIFAR-10 Batch 1:  Loss:     2.0816 Validation Accuracy: 0.326400
Epoch  5, CIFAR-10 Batch 2:  Loss:     1.6943 Validation Accuracy: 0.321800
Epoch  5, CIFAR-10 Batch 3:  Loss:     1.5998 Validation Accuracy: 0.313200
Epoch  5, CIFAR-10 Batch 4:  Loss:     1.7497 Validation Accuracy: 0.328000
Epoch  5, CIFAR-10 Batch 5:  Loss:     1.6395 Validation Accuracy: 0.327000
Epoch  6, CIFAR-10 Batch 1:  Loss:     1.9327 Validation Accuracy: 0.340400
Epoch  6, CIFAR-10 Batch 2:  Loss:     1.6153 Validation Accuracy: 0.325000
Epoch  6, CIFAR-10 Batch 3:  Loss:     1.5162 Validation Accuracy: 0.325000
Epoch  6, CIFAR-10 Batch 4:  Loss:     1.7083 Validation Accuracy: 0.344000
Epoch  6, CIFAR-10 Batch 5:  Loss:     1.4798 Validation Accuracy: 0.354600
Epoch  7, CIFAR-10 Batch 1:  Loss:     1.8222 Validation Accuracy: 0.342400
Epoch  7, CIFAR-10 Batch 2:  Loss:     1.5720 Validation Accuracy: 0.353600
Epoch  7, CIFAR-10 Batch 3:  Loss:     1.3869 Validation Accuracy: 0.361800
Epoch  7, CIFAR-10 Batch 4:  Loss:     1.6799 Validation Accuracy: 0.361200
Epoch  7, CIFAR-10 Batch 5:  Loss:     1.4132 Validation Accuracy: 0.379000
Epoch  8, CIFAR-10 Batch 1:  Loss:     1.7631 Validation Accuracy: 0.381200
Epoch  8, CIFAR-10 Batch 2:  Loss:     1.4853 Validation Accuracy: 0.379200
Epoch  8, CIFAR-10 Batch 3:  Loss:     1.3852 Validation Accuracy: 0.392600
Epoch  8, CIFAR-10 Batch 4:  Loss:     1.5106 Validation Accuracy: 0.398000
Epoch  8, CIFAR-10 Batch 5:  Loss:     1.4101 Validation Accuracy: 0.416200
Epoch  9, CIFAR-10 Batch 1:  Loss:     1.6668 Validation Accuracy: 0.426400
Epoch  9, CIFAR-10 Batch 2:  Loss:     1.2814 Validation Accuracy: 0.417000
Epoch  9, CIFAR-10 Batch 3:  Loss:     1.2335 Validation Accuracy: 0.427600
Epoch  9, CIFAR-10 Batch 4:  Loss:     1.4014 Validation Accuracy: 0.429600
Epoch  9, CIFAR-10 Batch 5:  Loss:     1.3518 Validation Accuracy: 0.442800
Epoch 10, CIFAR-10 Batch 1:  Loss:     1.5537 Validation Accuracy: 0.442800
Epoch 10, CIFAR-10 Batch 2:  Loss:     1.2570 Validation Accuracy: 0.427800
Epoch 10, CIFAR-10 Batch 3:  Loss:     1.1988 Validation Accuracy: 0.426000
Epoch 10, CIFAR-10 Batch 4:  Loss:     1.3714 Validation Accuracy: 0.451600
Epoch 10, CIFAR-10 Batch 5:  Loss:     1.2535 Validation Accuracy: 0.462600
Epoch 11, CIFAR-10 Batch 1:  Loss:     1.4738 Validation Accuracy: 0.458400
Epoch 11, CIFAR-10 Batch 2:  Loss:     1.1701 Validation Accuracy: 0.431400
Epoch 11, CIFAR-10 Batch 3:  Loss:     1.1312 Validation Accuracy: 0.469600
Epoch 11, CIFAR-10 Batch 4:  Loss:     1.2994 Validation Accuracy: 0.467800
Epoch 11, CIFAR-10 Batch 5:  Loss:     1.1996 Validation Accuracy: 0.459200
Epoch 12, CIFAR-10 Batch 1:  Loss:     1.3693 Validation Accuracy: 0.493800
Epoch 12, CIFAR-10 Batch 2:  Loss:     1.0952 Validation Accuracy: 0.482400
Epoch 12, CIFAR-10 Batch 3:  Loss:     1.0849 Validation Accuracy: 0.477000
Epoch 12, CIFAR-10 Batch 4:  Loss:     1.3094 Validation Accuracy: 0.477800
Epoch 12, CIFAR-10 Batch 5:  Loss:     1.2053 Validation Accuracy: 0.494200
Epoch 13, CIFAR-10 Batch 1:  Loss:     1.3156 Validation Accuracy: 0.493800
Epoch 13, CIFAR-10 Batch 2:  Loss:     1.0322 Validation Accuracy: 0.497400
Epoch 13, CIFAR-10 Batch 3:  Loss:     1.0497 Validation Accuracy: 0.486400
Epoch 13, CIFAR-10 Batch 4:  Loss:     1.1659 Validation Accuracy: 0.491000
Epoch 13, CIFAR-10 Batch 5:  Loss:     1.1411 Validation Accuracy: 0.491400
Epoch 14, CIFAR-10 Batch 1:  Loss:     1.2033 Validation Accuracy: 0.508600
Epoch 14, CIFAR-10 Batch 2:  Loss:     0.9917 Validation Accuracy: 0.508600
Epoch 14, CIFAR-10 Batch 3:  Loss:     1.0194 Validation Accuracy: 0.500200
Epoch 14, CIFAR-10 Batch 4:  Loss:     1.1228 Validation Accuracy: 0.497800
Epoch 14, CIFAR-10 Batch 5:  Loss:     1.1352 Validation Accuracy: 0.507800
Epoch 15, CIFAR-10 Batch 1:  Loss:     1.1379 Validation Accuracy: 0.516400
Epoch 15, CIFAR-10 Batch 2:  Loss:     0.9710 Validation Accuracy: 0.504200
Epoch 15, CIFAR-10 Batch 3:  Loss:     0.9514 Validation Accuracy: 0.505600
Epoch 15, CIFAR-10 Batch 4:  Loss:     1.0860 Validation Accuracy: 0.514200
Epoch 15, CIFAR-10 Batch 5:  Loss:     1.1106 Validation Accuracy: 0.515400
Epoch 16, CIFAR-10 Batch 1:  Loss:     1.0768 Validation Accuracy: 0.517000
Epoch 16, CIFAR-10 Batch 2:  Loss:     0.9030 Validation Accuracy: 0.508800
Epoch 16, CIFAR-10 Batch 3:  Loss:     0.9253 Validation Accuracy: 0.514800
Epoch 16, CIFAR-10 Batch 4:  Loss:     1.0356 Validation Accuracy: 0.517000
Epoch 16, CIFAR-10 Batch 5:  Loss:     1.1150 Validation Accuracy: 0.514200
Epoch 17, CIFAR-10 Batch 1:  Loss:     0.9881 Validation Accuracy: 0.523200
Epoch 17, CIFAR-10 Batch 2:  Loss:     0.9472 Validation Accuracy: 0.499800
Epoch 17, CIFAR-10 Batch 3:  Loss:     0.9039 Validation Accuracy: 0.530800
Epoch 17, CIFAR-10 Batch 4:  Loss:     1.0304 Validation Accuracy: 0.525800
Epoch 17, CIFAR-10 Batch 5:  Loss:     1.0570 Validation Accuracy: 0.526200
Epoch 18, CIFAR-10 Batch 1:  Loss:     0.9418 Validation Accuracy: 0.530400
Epoch 18, CIFAR-10 Batch 2:  Loss:     0.9098 Validation Accuracy: 0.522600
Epoch 18, CIFAR-10 Batch 3:  Loss:     0.8601 Validation Accuracy: 0.524800
Epoch 18, CIFAR-10 Batch 4:  Loss:     1.0301 Validation Accuracy: 0.535600
Epoch 18, CIFAR-10 Batch 5:  Loss:     1.0481 Validation Accuracy: 0.528400
Epoch 19, CIFAR-10 Batch 1:  Loss:     0.9377 Validation Accuracy: 0.544000
Epoch 19, CIFAR-10 Batch 2:  Loss:     0.8564 Validation Accuracy: 0.535400
Epoch 19, CIFAR-10 Batch 3:  Loss:     0.8042 Validation Accuracy: 0.529200
Epoch 19, CIFAR-10 Batch 4:  Loss:     0.9622 Validation Accuracy: 0.541200
Epoch 19, CIFAR-10 Batch 5:  Loss:     0.9983 Validation Accuracy: 0.530600
Epoch 20, CIFAR-10 Batch 1:  Loss:     0.9284 Validation Accuracy: 0.548400
Epoch 20, CIFAR-10 Batch 2:  Loss:     0.8783 Validation Accuracy: 0.543600
Epoch 20, CIFAR-10 Batch 3:  Loss:     0.7380 Validation Accuracy: 0.538800
Epoch 20, CIFAR-10 Batch 4:  Loss:     0.8837 Validation Accuracy: 0.544800
Epoch 20, CIFAR-10 Batch 5:  Loss:     0.9928 Validation Accuracy: 0.534400
Epoch 21, CIFAR-10 Batch 1:  Loss:     0.8992 Validation Accuracy: 0.549200
Epoch 21, CIFAR-10 Batch 2:  Loss:     0.7916 Validation Accuracy: 0.533000
Epoch 21, CIFAR-10 Batch 3:  Loss:     0.6778 Validation Accuracy: 0.549600
Epoch 21, CIFAR-10 Batch 4:  Loss:     0.8580 Validation Accuracy: 0.557600
Epoch 21, CIFAR-10 Batch 5:  Loss:     0.9524 Validation Accuracy: 0.539000
Epoch 22, CIFAR-10 Batch 1:  Loss:     0.8084 Validation Accuracy: 0.545400
Epoch 22, CIFAR-10 Batch 2:  Loss:     0.7635 Validation Accuracy: 0.535600
Epoch 22, CIFAR-10 Batch 3:  Loss:     0.7530 Validation Accuracy: 0.556000
Epoch 22, CIFAR-10 Batch 4:  Loss:     0.8474 Validation Accuracy: 0.557000
Epoch 22, CIFAR-10 Batch 5:  Loss:     0.9073 Validation Accuracy: 0.545600
Epoch 23, CIFAR-10 Batch 1:  Loss:     0.7785 Validation Accuracy: 0.549600
Epoch 23, CIFAR-10 Batch 2:  Loss:     0.7483 Validation Accuracy: 0.540400
Epoch 23, CIFAR-10 Batch 3:  Loss:     0.6956 Validation Accuracy: 0.550800
Epoch 23, CIFAR-10 Batch 4:  Loss:     0.8506 Validation Accuracy: 0.557200
Epoch 23, CIFAR-10 Batch 5:  Loss:     0.8775 Validation Accuracy: 0.550800
Epoch 24, CIFAR-10 Batch 1:  Loss:     0.7982 Validation Accuracy: 0.560600
Epoch 24, CIFAR-10 Batch 2:  Loss:     0.7185 Validation Accuracy: 0.554600
Epoch 24, CIFAR-10 Batch 3:  Loss:     0.6924 Validation Accuracy: 0.557800
Epoch 24, CIFAR-10 Batch 4:  Loss:     0.8493 Validation Accuracy: 0.555200
Epoch 24, CIFAR-10 Batch 5:  Loss:     0.8313 Validation Accuracy: 0.558400
Epoch 25, CIFAR-10 Batch 1:  Loss:     0.7575 Validation Accuracy: 0.556000
Epoch 25, CIFAR-10 Batch 2:  Loss:     0.7064 Validation Accuracy: 0.563800
Epoch 25, CIFAR-10 Batch 3:  Loss:     0.6843 Validation Accuracy: 0.553400
Epoch 25, CIFAR-10 Batch 4:  Loss:     0.8246 Validation Accuracy: 0.549400
Epoch 25, CIFAR-10 Batch 5:  Loss:     0.8594 Validation Accuracy: 0.566800
Epoch 26, CIFAR-10 Batch 1:  Loss:     0.8037 Validation Accuracy: 0.567200
Epoch 26, CIFAR-10 Batch 2:  Loss:     0.7086 Validation Accuracy: 0.567800
Epoch 26, CIFAR-10 Batch 3:  Loss:     0.6812 Validation Accuracy: 0.552600
Epoch 26, CIFAR-10 Batch 4:  Loss:     0.7808 Validation Accuracy: 0.559200
Epoch 26, CIFAR-10 Batch 5:  Loss:     0.7690 Validation Accuracy: 0.557400
Epoch 27, CIFAR-10 Batch 1:  Loss:     0.7926 Validation Accuracy: 0.574600
Epoch 27, CIFAR-10 Batch 2:  Loss:     0.6298 Validation Accuracy: 0.562600
Epoch 27, CIFAR-10 Batch 3:  Loss:     0.6146 Validation Accuracy: 0.563800
Epoch 27, CIFAR-10 Batch 4:  Loss:     0.7521 Validation Accuracy: 0.567800
Epoch 27, CIFAR-10 Batch 5:  Loss:     0.7624 Validation Accuracy: 0.576400
Epoch 28, CIFAR-10 Batch 1:  Loss:     0.7968 Validation Accuracy: 0.578000
Epoch 28, CIFAR-10 Batch 2:  Loss:     0.5947 Validation Accuracy: 0.558600
Epoch 28, CIFAR-10 Batch 3:  Loss:     0.6052 Validation Accuracy: 0.577800
Epoch 28, CIFAR-10 Batch 4:  Loss:     0.7128 Validation Accuracy: 0.567600
Epoch 28, CIFAR-10 Batch 5:  Loss:     0.7268 Validation Accuracy: 0.569600
Epoch 29, CIFAR-10 Batch 1:  Loss:     0.7588 Validation Accuracy: 0.576400
Epoch 29, CIFAR-10 Batch 2:  Loss:     0.5887 Validation Accuracy: 0.552600
Epoch 29, CIFAR-10 Batch 3:  Loss:     0.5492 Validation Accuracy: 0.572400
Epoch 29, CIFAR-10 Batch 4:  Loss:     0.7012 Validation Accuracy: 0.568200
Epoch 29, CIFAR-10 Batch 5:  Loss:     0.7126 Validation Accuracy: 0.572400
Epoch 30, CIFAR-10 Batch 1:  Loss:     0.7540 Validation Accuracy: 0.575000
Epoch 30, CIFAR-10 Batch 2:  Loss:     0.5736 Validation Accuracy: 0.569400
Epoch 30, CIFAR-10 Batch 3:  Loss:     0.4967 Validation Accuracy: 0.569600
Epoch 30, CIFAR-10 Batch 4:  Loss:     0.7409 Validation Accuracy: 0.575200
Epoch 30, CIFAR-10 Batch 5:  Loss:     0.6011 Validation Accuracy: 0.573200
Epoch 31, CIFAR-10 Batch 1:  Loss:     0.7120 Validation Accuracy: 0.584600
Epoch 31, CIFAR-10 Batch 2:  Loss:     0.5661 Validation Accuracy: 0.561400
Epoch 31, CIFAR-10 Batch 3:  Loss:     0.5290 Validation Accuracy: 0.569200
Epoch 31, CIFAR-10 Batch 4:  Loss:     0.6544 Validation Accuracy: 0.584200
Epoch 31, CIFAR-10 Batch 5:  Loss:     0.6233 Validation Accuracy: 0.583000
Epoch 32, CIFAR-10 Batch 1:  Loss:     0.7131 Validation Accuracy: 0.590000
Epoch 32, CIFAR-10 Batch 2:  Loss:     0.5800 Validation Accuracy: 0.570800
Epoch 32, CIFAR-10 Batch 3:  Loss:     0.4525 Validation Accuracy: 0.580400
Epoch 32, CIFAR-10 Batch 4:  Loss:     0.6232 Validation Accuracy: 0.581400
Epoch 32, CIFAR-10 Batch 5:  Loss:     0.5863 Validation Accuracy: 0.590400
Epoch 33, CIFAR-10 Batch 1:  Loss:     0.6709 Validation Accuracy: 0.588400
Epoch 33, CIFAR-10 Batch 2:  Loss:     0.5432 Validation Accuracy: 0.565800
Epoch 33, CIFAR-10 Batch 3:  Loss:     0.4805 Validation Accuracy: 0.582000
Epoch 33, CIFAR-10 Batch 4:  Loss:     0.6285 Validation Accuracy: 0.583000
Epoch 33, CIFAR-10 Batch 5:  Loss:     0.5323 Validation Accuracy: 0.594600
Epoch 34, CIFAR-10 Batch 1:  Loss:     0.6852 Validation Accuracy: 0.592000
Epoch 34, CIFAR-10 Batch 2:  Loss:     0.4979 Validation Accuracy: 0.565400
Epoch 34, CIFAR-10 Batch 3:  Loss:     0.4118 Validation Accuracy: 0.596800
Epoch 34, CIFAR-10 Batch 4:  Loss:     0.5520 Validation Accuracy: 0.589800
Epoch 34, CIFAR-10 Batch 5:  Loss:     0.4830 Validation Accuracy: 0.585400
Epoch 35, CIFAR-10 Batch 1:  Loss:     0.6969 Validation Accuracy: 0.588200
Epoch 35, CIFAR-10 Batch 2:  Loss:     0.4769 Validation Accuracy: 0.575200
Epoch 35, CIFAR-10 Batch 3:  Loss:     0.3596 Validation Accuracy: 0.592400
Epoch 35, CIFAR-10 Batch 4:  Loss:     0.4846 Validation Accuracy: 0.586600
Epoch 35, CIFAR-10 Batch 5:  Loss:     0.4975 Validation Accuracy: 0.601600
Epoch 36, CIFAR-10 Batch 1:  Loss:     0.6749 Validation Accuracy: 0.594600
Epoch 36, CIFAR-10 Batch 2:  Loss:     0.4374 Validation Accuracy: 0.579200
Epoch 36, CIFAR-10 Batch 3:  Loss:     0.3831 Validation Accuracy: 0.594600
Epoch 36, CIFAR-10 Batch 4:  Loss:     0.4623 Validation Accuracy: 0.585400
Epoch 36, CIFAR-10 Batch 5:  Loss:     0.4749 Validation Accuracy: 0.600800
Epoch 37, CIFAR-10 Batch 1:  Loss:     0.6009 Validation Accuracy: 0.599800
Epoch 37, CIFAR-10 Batch 2:  Loss:     0.3954 Validation Accuracy: 0.585200
Epoch 37, CIFAR-10 Batch 3:  Loss:     0.3339 Validation Accuracy: 0.599200
Epoch 37, CIFAR-10 Batch 4:  Loss:     0.4546 Validation Accuracy: 0.591600
Epoch 37, CIFAR-10 Batch 5:  Loss:     0.4365 Validation Accuracy: 0.597800
Epoch 38, CIFAR-10 Batch 1:  Loss:     0.6346 Validation Accuracy: 0.604400
Epoch 38, CIFAR-10 Batch 2:  Loss:     0.3625 Validation Accuracy: 0.590000
Epoch 38, CIFAR-10 Batch 3:  Loss:     0.2820 Validation Accuracy: 0.598800
Epoch 38, CIFAR-10 Batch 4:  Loss:     0.4183 Validation Accuracy: 0.593600
Epoch 38, CIFAR-10 Batch 5:  Loss:     0.4152 Validation Accuracy: 0.597400
Epoch 39, CIFAR-10 Batch 1:  Loss:     0.5887 Validation Accuracy: 0.597400
Epoch 39, CIFAR-10 Batch 2:  Loss:     0.3539 Validation Accuracy: 0.590600
Epoch 39, CIFAR-10 Batch 3:  Loss:     0.3092 Validation Accuracy: 0.604400
Epoch 39, CIFAR-10 Batch 4:  Loss:     0.4144 Validation Accuracy: 0.597600
Epoch 39, CIFAR-10 Batch 5:  Loss:     0.3847 Validation Accuracy: 0.605000
Epoch 40, CIFAR-10 Batch 1:  Loss:     0.6012 Validation Accuracy: 0.605400
Epoch 40, CIFAR-10 Batch 2:  Loss:     0.3527 Validation Accuracy: 0.592000
Epoch 40, CIFAR-10 Batch 3:  Loss:     0.2710 Validation Accuracy: 0.602200
Epoch 40, CIFAR-10 Batch 4:  Loss:     0.3594 Validation Accuracy: 0.591000
Epoch 40, CIFAR-10 Batch 5:  Loss:     0.3807 Validation Accuracy: 0.602600
Epoch 41, CIFAR-10 Batch 1:  Loss:     0.5555 Validation Accuracy: 0.609200
Epoch 41, CIFAR-10 Batch 2:  Loss:     0.3708 Validation Accuracy: 0.599800
Epoch 41, CIFAR-10 Batch 3:  Loss:     0.2966 Validation Accuracy: 0.604800
Epoch 41, CIFAR-10 Batch 4:  Loss:     0.3138 Validation Accuracy: 0.602800
Epoch 41, CIFAR-10 Batch 5:  Loss:     0.3442 Validation Accuracy: 0.612400
Epoch 42, CIFAR-10 Batch 1:  Loss:     0.5202 Validation Accuracy: 0.603000
Epoch 42, CIFAR-10 Batch 2:  Loss:     0.3266 Validation Accuracy: 0.604000
Epoch 42, CIFAR-10 Batch 3:  Loss:     0.2397 Validation Accuracy: 0.599400
Epoch 42, CIFAR-10 Batch 4:  Loss:     0.2929 Validation Accuracy: 0.588400
Epoch 42, CIFAR-10 Batch 5:  Loss:     0.3138 Validation Accuracy: 0.613600
Epoch 43, CIFAR-10 Batch 1:  Loss:     0.5077 Validation Accuracy: 0.600600
Epoch 43, CIFAR-10 Batch 2:  Loss:     0.3037 Validation Accuracy: 0.615000
Epoch 43, CIFAR-10 Batch 3:  Loss:     0.2223 Validation Accuracy: 0.601800
Epoch 43, CIFAR-10 Batch 4:  Loss:     0.2626 Validation Accuracy: 0.598800
Epoch 43, CIFAR-10 Batch 5:  Loss:     0.3438 Validation Accuracy: 0.614800
Epoch 44, CIFAR-10 Batch 1:  Loss:     0.5020 Validation Accuracy: 0.609000
Epoch 44, CIFAR-10 Batch 2:  Loss:     0.3083 Validation Accuracy: 0.605000
Epoch 44, CIFAR-10 Batch 3:  Loss:     0.2079 Validation Accuracy: 0.604800
Epoch 44, CIFAR-10 Batch 4:  Loss:     0.2554 Validation Accuracy: 0.597600
Epoch 44, CIFAR-10 Batch 5:  Loss:     0.3388 Validation Accuracy: 0.612600
Epoch 45, CIFAR-10 Batch 1:  Loss:     0.4835 Validation Accuracy: 0.610600
Epoch 45, CIFAR-10 Batch 2:  Loss:     0.2656 Validation Accuracy: 0.605200
Epoch 45, CIFAR-10 Batch 3:  Loss:     0.1806 Validation Accuracy: 0.613200
Epoch 45, CIFAR-10 Batch 4:  Loss:     0.2303 Validation Accuracy: 0.595600
Epoch 45, CIFAR-10 Batch 5:  Loss:     0.3044 Validation Accuracy: 0.610600
Epoch 46, CIFAR-10 Batch 1:  Loss:     0.4622 Validation Accuracy: 0.614000
Epoch 46, CIFAR-10 Batch 2:  Loss:     0.2808 Validation Accuracy: 0.603400
Epoch 46, CIFAR-10 Batch 3:  Loss:     0.1610 Validation Accuracy: 0.601400
Epoch 46, CIFAR-10 Batch 4:  Loss:     0.1959 Validation Accuracy: 0.598600
Epoch 46, CIFAR-10 Batch 5:  Loss:     0.2744 Validation Accuracy: 0.617600
Epoch 47, CIFAR-10 Batch 1:  Loss:     0.4316 Validation Accuracy: 0.613600
Epoch 47, CIFAR-10 Batch 2:  Loss:     0.2343 Validation Accuracy: 0.609000
Epoch 47, CIFAR-10 Batch 3:  Loss:     0.1796 Validation Accuracy: 0.601800
Epoch 47, CIFAR-10 Batch 4:  Loss:     0.1960 Validation Accuracy: 0.614000
Epoch 47, CIFAR-10 Batch 5:  Loss:     0.2674 Validation Accuracy: 0.609400
Epoch 48, CIFAR-10 Batch 1:  Loss:     0.4303 Validation Accuracy: 0.609400
Epoch 48, CIFAR-10 Batch 2:  Loss:     0.2180 Validation Accuracy: 0.611400
Epoch 48, CIFAR-10 Batch 3:  Loss:     0.1626 Validation Accuracy: 0.612600
Epoch 48, CIFAR-10 Batch 4:  Loss:     0.2008 Validation Accuracy: 0.609400
Epoch 48, CIFAR-10 Batch 5:  Loss:     0.2606 Validation Accuracy: 0.613000
Epoch 49, CIFAR-10 Batch 1:  Loss:     0.3779 Validation Accuracy: 0.608600
Epoch 49, CIFAR-10 Batch 2:  Loss:     0.2324 Validation Accuracy: 0.607200
Epoch 49, CIFAR-10 Batch 3:  Loss:     0.1595 Validation Accuracy: 0.610600
Epoch 49, CIFAR-10 Batch 4:  Loss:     0.2065 Validation Accuracy: 0.616400
Epoch 49, CIFAR-10 Batch 5:  Loss:     0.2368 Validation Accuracy: 0.608600
Epoch 50, CIFAR-10 Batch 1:  Loss:     0.4071 Validation Accuracy: 0.610800
Epoch 50, CIFAR-10 Batch 2:  Loss:     0.2277 Validation Accuracy: 0.610000
Epoch 50, CIFAR-10 Batch 3:  Loss:     0.1433 Validation Accuracy: 0.619000
Epoch 50, CIFAR-10 Batch 4:  Loss:     0.1777 Validation Accuracy: 0.622400
Epoch 50, CIFAR-10 Batch 5:  Loss:     0.2118 Validation Accuracy: 0.612000
Epoch 51, CIFAR-10 Batch 1:  Loss:     0.3721 Validation Accuracy: 0.609000
Epoch 51, CIFAR-10 Batch 2:  Loss:     0.2117 Validation Accuracy: 0.609600
Epoch 51, CIFAR-10 Batch 3:  Loss:     0.1348 Validation Accuracy: 0.613600
Epoch 51, CIFAR-10 Batch 4:  Loss:     0.1738 Validation Accuracy: 0.613600
Epoch 51, CIFAR-10 Batch 5:  Loss:     0.1981 Validation Accuracy: 0.616800
Epoch 52, CIFAR-10 Batch 1:  Loss:     0.3675 Validation Accuracy: 0.614000
Epoch 52, CIFAR-10 Batch 2:  Loss:     0.2084 Validation Accuracy: 0.610600
Epoch 52, CIFAR-10 Batch 3:  Loss:     0.1365 Validation Accuracy: 0.599000
Epoch 52, CIFAR-10 Batch 4:  Loss:     0.1589 Validation Accuracy: 0.609600
Epoch 52, CIFAR-10 Batch 5:  Loss:     0.2183 Validation Accuracy: 0.618000
Epoch 53, CIFAR-10 Batch 1:  Loss:     0.3349 Validation Accuracy: 0.608000
Epoch 53, CIFAR-10 Batch 2:  Loss:     0.1802 Validation Accuracy: 0.604800
Epoch 53, CIFAR-10 Batch 3:  Loss:     0.1351 Validation Accuracy: 0.610800
Epoch 53, CIFAR-10 Batch 4:  Loss:     0.1504 Validation Accuracy: 0.615400
Epoch 53, CIFAR-10 Batch 5:  Loss:     0.1742 Validation Accuracy: 0.618000
Epoch 54, CIFAR-10 Batch 1:  Loss:     0.3260 Validation Accuracy: 0.606600
Epoch 54, CIFAR-10 Batch 2:  Loss:     0.1889 Validation Accuracy: 0.596800
Epoch 54, CIFAR-10 Batch 3:  Loss:     0.1316 Validation Accuracy: 0.606200
Epoch 54, CIFAR-10 Batch 4:  Loss:     0.1287 Validation Accuracy: 0.612600
Epoch 54, CIFAR-10 Batch 5:  Loss:     0.1627 Validation Accuracy: 0.609400
Epoch 55, CIFAR-10 Batch 1:  Loss:     0.2812 Validation Accuracy: 0.603600
Epoch 55, CIFAR-10 Batch 2:  Loss:     0.1798 Validation Accuracy: 0.603200
Epoch 55, CIFAR-10 Batch 3:  Loss:     0.1039 Validation Accuracy: 0.604200
Epoch 55, CIFAR-10 Batch 4:  Loss:     0.1197 Validation Accuracy: 0.612400
Epoch 55, CIFAR-10 Batch 5:  Loss:     0.1562 Validation Accuracy: 0.615200
Epoch 56, CIFAR-10 Batch 1:  Loss:     0.2886 Validation Accuracy: 0.610600
Epoch 56, CIFAR-10 Batch 2:  Loss:     0.1553 Validation Accuracy: 0.607600
Epoch 56, CIFAR-10 Batch 3:  Loss:     0.0928 Validation Accuracy: 0.599000
Epoch 56, CIFAR-10 Batch 4:  Loss:     0.1209 Validation Accuracy: 0.618800
Epoch 56, CIFAR-10 Batch 5:  Loss:     0.1477 Validation Accuracy: 0.612000
Epoch 57, CIFAR-10 Batch 1:  Loss:     0.2919 Validation Accuracy: 0.606200
Epoch 57, CIFAR-10 Batch 2:  Loss:     0.1699 Validation Accuracy: 0.607400
Epoch 57, CIFAR-10 Batch 3:  Loss:     0.0796 Validation Accuracy: 0.610800
Epoch 57, CIFAR-10 Batch 4:  Loss:     0.1023 Validation Accuracy: 0.609200
Epoch 57, CIFAR-10 Batch 5:  Loss:     0.1286 Validation Accuracy: 0.611800
Epoch 58, CIFAR-10 Batch 1:  Loss:     0.2775 Validation Accuracy: 0.615000
Epoch 58, CIFAR-10 Batch 2:  Loss:     0.1466 Validation Accuracy: 0.605000
Epoch 58, CIFAR-10 Batch 3:  Loss:     0.0707 Validation Accuracy: 0.615000
Epoch 58, CIFAR-10 Batch 4:  Loss:     0.1144 Validation Accuracy: 0.615800
Epoch 58, CIFAR-10 Batch 5:  Loss:     0.1404 Validation Accuracy: 0.608800
Epoch 59, CIFAR-10 Batch 1:  Loss:     0.2787 Validation Accuracy: 0.607400
Epoch 59, CIFAR-10 Batch 2:  Loss:     0.1355 Validation Accuracy: 0.600200
Epoch 59, CIFAR-10 Batch 3:  Loss:     0.0683 Validation Accuracy: 0.611600
Epoch 59, CIFAR-10 Batch 4:  Loss:     0.1122 Validation Accuracy: 0.613200
Epoch 59, CIFAR-10 Batch 5:  Loss:     0.1161 Validation Accuracy: 0.609400
Epoch 60, CIFAR-10 Batch 1:  Loss:     0.2571 Validation Accuracy: 0.616200
Epoch 60, CIFAR-10 Batch 2:  Loss:     0.1340 Validation Accuracy: 0.601800
Epoch 60, CIFAR-10 Batch 3:  Loss:     0.0627 Validation Accuracy: 0.613400
Epoch 60, CIFAR-10 Batch 4:  Loss:     0.0862 Validation Accuracy: 0.602000
Epoch 60, CIFAR-10 Batch 5:  Loss:     0.1251 Validation Accuracy: 0.618400
Epoch 61, CIFAR-10 Batch 1:  Loss:     0.2080 Validation Accuracy: 0.615000
Epoch 61, CIFAR-10 Batch 2:  Loss:     0.1223 Validation Accuracy: 0.608600
Epoch 61, CIFAR-10 Batch 3:  Loss:     0.0573 Validation Accuracy: 0.617800
Epoch 61, CIFAR-10 Batch 4:  Loss:     0.0790 Validation Accuracy: 0.611600
Epoch 61, CIFAR-10 Batch 5:  Loss:     0.1031 Validation Accuracy: 0.613400
Epoch 62, CIFAR-10 Batch 1:  Loss:     0.1699 Validation Accuracy: 0.617200
Epoch 62, CIFAR-10 Batch 2:  Loss:     0.1126 Validation Accuracy: 0.607200
Epoch 62, CIFAR-10 Batch 3:  Loss:     0.0605 Validation Accuracy: 0.614400
Epoch 62, CIFAR-10 Batch 4:  Loss:     0.0691 Validation Accuracy: 0.605800
Epoch 62, CIFAR-10 Batch 5:  Loss:     0.1013 Validation Accuracy: 0.616800
Epoch 63, CIFAR-10 Batch 1:  Loss:     0.1588 Validation Accuracy: 0.614200
Epoch 63, CIFAR-10 Batch 2:  Loss:     0.0887 Validation Accuracy: 0.612200
Epoch 63, CIFAR-10 Batch 3:  Loss:     0.0466 Validation Accuracy: 0.621200
Epoch 63, CIFAR-10 Batch 4:  Loss:     0.0808 Validation Accuracy: 0.600200
Epoch 63, CIFAR-10 Batch 5:  Loss:     0.1024 Validation Accuracy: 0.622200
Epoch 64, CIFAR-10 Batch 1:  Loss:     0.1507 Validation Accuracy: 0.615400
Epoch 64, CIFAR-10 Batch 2:  Loss:     0.0849 Validation Accuracy: 0.618000
Epoch 64, CIFAR-10 Batch 3:  Loss:     0.0430 Validation Accuracy: 0.619800
Epoch 64, CIFAR-10 Batch 4:  Loss:     0.0792 Validation Accuracy: 0.607400
Epoch 64, CIFAR-10 Batch 5:  Loss:     0.0868 Validation Accuracy: 0.619200
Epoch 65, CIFAR-10 Batch 1:  Loss:     0.1393 Validation Accuracy: 0.612800
Epoch 65, CIFAR-10 Batch 2:  Loss:     0.0835 Validation Accuracy: 0.610600
Epoch 65, CIFAR-10 Batch 3:  Loss:     0.0497 Validation Accuracy: 0.617600
Epoch 65, CIFAR-10 Batch 4:  Loss:     0.0835 Validation Accuracy: 0.599000
Epoch 65, CIFAR-10 Batch 5:  Loss:     0.0918 Validation Accuracy: 0.622800
Epoch 66, CIFAR-10 Batch 1:  Loss:     0.1205 Validation Accuracy: 0.615000
Epoch 66, CIFAR-10 Batch 2:  Loss:     0.0834 Validation Accuracy: 0.600800
Epoch 66, CIFAR-10 Batch 3:  Loss:     0.0379 Validation Accuracy: 0.609800
Epoch 66, CIFAR-10 Batch 4:  Loss:     0.0805 Validation Accuracy: 0.612400
Epoch 66, CIFAR-10 Batch 5:  Loss:     0.0706 Validation Accuracy: 0.619400
Epoch 67, CIFAR-10 Batch 1:  Loss:     0.0896 Validation Accuracy: 0.611600
Epoch 67, CIFAR-10 Batch 2:  Loss:     0.0722 Validation Accuracy: 0.606000
Epoch 67, CIFAR-10 Batch 3:  Loss:     0.0351 Validation Accuracy: 0.613200
Epoch 67, CIFAR-10 Batch 4:  Loss:     0.0689 Validation Accuracy: 0.609400
Epoch 67, CIFAR-10 Batch 5:  Loss:     0.0735 Validation Accuracy: 0.620000
Epoch 68, CIFAR-10 Batch 1:  Loss:     0.0893 Validation Accuracy: 0.611400
Epoch 68, CIFAR-10 Batch 2:  Loss:     0.0774 Validation Accuracy: 0.609200
Epoch 68, CIFAR-10 Batch 3:  Loss:     0.0299 Validation Accuracy: 0.618200
Epoch 68, CIFAR-10 Batch 4:  Loss:     0.0639 Validation Accuracy: 0.611000
Epoch 68, CIFAR-10 Batch 5:  Loss:     0.0754 Validation Accuracy: 0.618800
Epoch 69, CIFAR-10 Batch 1:  Loss:     0.0964 Validation Accuracy: 0.608000
Epoch 69, CIFAR-10 Batch 2:  Loss:     0.0871 Validation Accuracy: 0.607400
Epoch 69, CIFAR-10 Batch 3:  Loss:     0.0292 Validation Accuracy: 0.611600
Epoch 69, CIFAR-10 Batch 4:  Loss:     0.0631 Validation Accuracy: 0.611800
Epoch 69, CIFAR-10 Batch 5:  Loss:     0.0860 Validation Accuracy: 0.620000
Epoch 70, CIFAR-10 Batch 1:  Loss:     0.0888 Validation Accuracy: 0.617400
Epoch 70, CIFAR-10 Batch 2:  Loss:     0.1090 Validation Accuracy: 0.613200
Epoch 70, CIFAR-10 Batch 3:  Loss:     0.0207 Validation Accuracy: 0.613400
Epoch 70, CIFAR-10 Batch 4:  Loss:     0.0642 Validation Accuracy: 0.610800
Epoch 70, CIFAR-10 Batch 5:  Loss:     0.0692 Validation Accuracy: 0.625400
Epoch 71, CIFAR-10 Batch 1:  Loss:     0.0814 Validation Accuracy: 0.613400
Epoch 71, CIFAR-10 Batch 2:  Loss:     0.0629 Validation Accuracy: 0.610800
Epoch 71, CIFAR-10 Batch 3:  Loss:     0.0291 Validation Accuracy: 0.618800
Epoch 71, CIFAR-10 Batch 4:  Loss:     0.0601 Validation Accuracy: 0.606400
Epoch 71, CIFAR-10 Batch 5:  Loss:     0.0635 Validation Accuracy: 0.622600
Epoch 72, CIFAR-10 Batch 1:  Loss:     0.0620 Validation Accuracy: 0.612200
Epoch 72, CIFAR-10 Batch 2:  Loss:     0.0652 Validation Accuracy: 0.608200
Epoch 72, CIFAR-10 Batch 3:  Loss:     0.0193 Validation Accuracy: 0.610000
Epoch 72, CIFAR-10 Batch 4:  Loss:     0.0534 Validation Accuracy: 0.608000
Epoch 72, CIFAR-10 Batch 5:  Loss:     0.0512 Validation Accuracy: 0.626600
Epoch 73, CIFAR-10 Batch 1:  Loss:     0.0758 Validation Accuracy: 0.613200
Epoch 73, CIFAR-10 Batch 2:  Loss:     0.0534 Validation Accuracy: 0.612400
Epoch 73, CIFAR-10 Batch 3:  Loss:     0.0208 Validation Accuracy: 0.610600
Epoch 73, CIFAR-10 Batch 4:  Loss:     0.0595 Validation Accuracy: 0.611200
Epoch 73, CIFAR-10 Batch 5:  Loss:     0.0515 Validation Accuracy: 0.626400
Epoch 74, CIFAR-10 Batch 1:  Loss:     0.0620 Validation Accuracy: 0.618000
Epoch 74, CIFAR-10 Batch 2:  Loss:     0.0518 Validation Accuracy: 0.619200
Epoch 74, CIFAR-10 Batch 3:  Loss:     0.0128 Validation Accuracy: 0.619600
Epoch 74, CIFAR-10 Batch 4:  Loss:     0.0580 Validation Accuracy: 0.615000
Epoch 74, CIFAR-10 Batch 5:  Loss:     0.0474 Validation Accuracy: 0.626000
Epoch 75, CIFAR-10 Batch 1:  Loss:     0.0811 Validation Accuracy: 0.615000
Epoch 75, CIFAR-10 Batch 2:  Loss:     0.0598 Validation Accuracy: 0.618200
Epoch 75, CIFAR-10 Batch 3:  Loss:     0.0133 Validation Accuracy: 0.613200
Epoch 75, CIFAR-10 Batch 4:  Loss:     0.0536 Validation Accuracy: 0.605600
Epoch 75, CIFAR-10 Batch 5:  Loss:     0.0620 Validation Accuracy: 0.615800
Epoch 76, CIFAR-10 Batch 1:  Loss:     0.0437 Validation Accuracy: 0.610000
Epoch 76, CIFAR-10 Batch 2:  Loss:     0.0628 Validation Accuracy: 0.608400
Epoch 76, CIFAR-10 Batch 3:  Loss:     0.0163 Validation Accuracy: 0.621600
Epoch 76, CIFAR-10 Batch 4:  Loss:     0.0498 Validation Accuracy: 0.605800
Epoch 76, CIFAR-10 Batch 5:  Loss:     0.0454 Validation Accuracy: 0.627600
Epoch 77, CIFAR-10 Batch 1:  Loss:     0.0506 Validation Accuracy: 0.614800
Epoch 77, CIFAR-10 Batch 2:  Loss:     0.0567 Validation Accuracy: 0.610000
Epoch 77, CIFAR-10 Batch 3:  Loss:     0.0154 Validation Accuracy: 0.615800
Epoch 77, CIFAR-10 Batch 4:  Loss:     0.0417 Validation Accuracy: 0.607800
Epoch 77, CIFAR-10 Batch 5:  Loss:     0.0319 Validation Accuracy: 0.625400
Epoch 78, CIFAR-10 Batch 1:  Loss:     0.0489 Validation Accuracy: 0.618600
Epoch 78, CIFAR-10 Batch 2:  Loss:     0.0484 Validation Accuracy: 0.617200
Epoch 78, CIFAR-10 Batch 3:  Loss:     0.0157 Validation Accuracy: 0.619200
Epoch 78, CIFAR-10 Batch 4:  Loss:     0.0328 Validation Accuracy: 0.616600
Epoch 78, CIFAR-10 Batch 5:  Loss:     0.0393 Validation Accuracy: 0.622800
Epoch 79, CIFAR-10 Batch 1:  Loss:     0.0450 Validation Accuracy: 0.613200
Epoch 79, CIFAR-10 Batch 2:  Loss:     0.0436 Validation Accuracy: 0.609800
Epoch 79, CIFAR-10 Batch 3:  Loss:     0.0128 Validation Accuracy: 0.619600
Epoch 79, CIFAR-10 Batch 4:  Loss:     0.0424 Validation Accuracy: 0.609400
Epoch 79, CIFAR-10 Batch 5:  Loss:     0.0310 Validation Accuracy: 0.617600
Epoch 80, CIFAR-10 Batch 1:  Loss:     0.0416 Validation Accuracy: 0.615000
Epoch 80, CIFAR-10 Batch 2:  Loss:     0.0584 Validation Accuracy: 0.609600
Epoch 80, CIFAR-10 Batch 3:  Loss:     0.0077 Validation Accuracy: 0.618800
Epoch 80, CIFAR-10 Batch 4:  Loss:     0.0442 Validation Accuracy: 0.619000
Epoch 80, CIFAR-10 Batch 5:  Loss:     0.0263 Validation Accuracy: 0.618400
Epoch 81, CIFAR-10 Batch 1:  Loss:     0.0716 Validation Accuracy: 0.616000
Epoch 81, CIFAR-10 Batch 2:  Loss:     0.0590 Validation Accuracy: 0.610400
Epoch 81, CIFAR-10 Batch 3:  Loss:     0.0104 Validation Accuracy: 0.618000
Epoch 81, CIFAR-10 Batch 4:  Loss:     0.0440 Validation Accuracy: 0.612600
Epoch 81, CIFAR-10 Batch 5:  Loss:     0.0256 Validation Accuracy: 0.625600
Epoch 82, CIFAR-10 Batch 1:  Loss:     0.0396 Validation Accuracy: 0.621000
Epoch 82, CIFAR-10 Batch 2:  Loss:     0.0549 Validation Accuracy: 0.616000
Epoch 82, CIFAR-10 Batch 3:  Loss:     0.0110 Validation Accuracy: 0.616600
Epoch 82, CIFAR-10 Batch 4:  Loss:     0.0373 Validation Accuracy: 0.615400
Epoch 82, CIFAR-10 Batch 5:  Loss:     0.0272 Validation Accuracy: 0.623200
Epoch 83, CIFAR-10 Batch 1:  Loss:     0.0431 Validation Accuracy: 0.623200
Epoch 83, CIFAR-10 Batch 2:  Loss:     0.0603 Validation Accuracy: 0.609200
Epoch 83, CIFAR-10 Batch 3:  Loss:     0.0159 Validation Accuracy: 0.620000
Epoch 83, CIFAR-10 Batch 4:  Loss:     0.0217 Validation Accuracy: 0.621000
Epoch 83, CIFAR-10 Batch 5:  Loss:     0.0335 Validation Accuracy: 0.623200
Epoch 84, CIFAR-10 Batch 1:  Loss:     0.0448 Validation Accuracy: 0.614800
Epoch 84, CIFAR-10 Batch 2:  Loss:     0.0638 Validation Accuracy: 0.615800
Epoch 84, CIFAR-10 Batch 3:  Loss:     0.0109 Validation Accuracy: 0.627400
Epoch 84, CIFAR-10 Batch 4:  Loss:     0.0231 Validation Accuracy: 0.620600
Epoch 84, CIFAR-10 Batch 5:  Loss:     0.0174 Validation Accuracy: 0.631000
Epoch 85, CIFAR-10 Batch 1:  Loss:     0.0361 Validation Accuracy: 0.618200
Epoch 85, CIFAR-10 Batch 2:  Loss:     0.0589 Validation Accuracy: 0.611600
Epoch 85, CIFAR-10 Batch 3:  Loss:     0.0108 Validation Accuracy: 0.623200
Epoch 85, CIFAR-10 Batch 4:  Loss:     0.0279 Validation Accuracy: 0.612000
Epoch 85, CIFAR-10 Batch 5:  Loss:     0.0210 Validation Accuracy: 0.624200
Epoch 86, CIFAR-10 Batch 1:  Loss:     0.0299 Validation Accuracy: 0.608400
Epoch 86, CIFAR-10 Batch 2:  Loss:     0.0412 Validation Accuracy: 0.617200
Epoch 86, CIFAR-10 Batch 3:  Loss:     0.0112 Validation Accuracy: 0.623600
Epoch 86, CIFAR-10 Batch 4:  Loss:     0.0240 Validation Accuracy: 0.614800
Epoch 86, CIFAR-10 Batch 5:  Loss:     0.0246 Validation Accuracy: 0.622200
Epoch 87, CIFAR-10 Batch 1:  Loss:     0.0258 Validation Accuracy: 0.623400
Epoch 87, CIFAR-10 Batch 2:  Loss:     0.0463 Validation Accuracy: 0.612400
Epoch 87, CIFAR-10 Batch 3:  Loss:     0.0123 Validation Accuracy: 0.624800
Epoch 87, CIFAR-10 Batch 4:  Loss:     0.0338 Validation Accuracy: 0.624600
Epoch 87, CIFAR-10 Batch 5:  Loss:     0.0295 Validation Accuracy: 0.628600
Epoch 88, CIFAR-10 Batch 1:  Loss:     0.0298 Validation Accuracy: 0.618000
Epoch 88, CIFAR-10 Batch 2:  Loss:     0.0542 Validation Accuracy: 0.618000
Epoch 88, CIFAR-10 Batch 3:  Loss:     0.0062 Validation Accuracy: 0.618800
Epoch 88, CIFAR-10 Batch 4:  Loss:     0.0241 Validation Accuracy: 0.619200
Epoch 88, CIFAR-10 Batch 5:  Loss:     0.0273 Validation Accuracy: 0.628800
Epoch 89, CIFAR-10 Batch 1:  Loss:     0.0326 Validation Accuracy: 0.616800
Epoch 89, CIFAR-10 Batch 2:  Loss:     0.0528 Validation Accuracy: 0.617800
Epoch 89, CIFAR-10 Batch 3:  Loss:     0.0032 Validation Accuracy: 0.621200
Epoch 89, CIFAR-10 Batch 4:  Loss:     0.0253 Validation Accuracy: 0.625600
Epoch 89, CIFAR-10 Batch 5:  Loss:     0.0208 Validation Accuracy: 0.623800
Epoch 90, CIFAR-10 Batch 1:  Loss:     0.0239 Validation Accuracy: 0.613600
Epoch 90, CIFAR-10 Batch 2:  Loss:     0.0351 Validation Accuracy: 0.618000
Epoch 90, CIFAR-10 Batch 3:  Loss:     0.0136 Validation Accuracy: 0.618600
Epoch 90, CIFAR-10 Batch 4:  Loss:     0.0358 Validation Accuracy: 0.621600
Epoch 90, CIFAR-10 Batch 5:  Loss:     0.0287 Validation Accuracy: 0.616800
Epoch 91, CIFAR-10 Batch 1:  Loss:     0.0165 Validation Accuracy: 0.619600
Epoch 91, CIFAR-10 Batch 2:  Loss:     0.0702 Validation Accuracy: 0.617000
Epoch 91, CIFAR-10 Batch 3:  Loss:     0.0037 Validation Accuracy: 0.625200
Epoch 91, CIFAR-10 Batch 4:  Loss:     0.0280 Validation Accuracy: 0.627800
Epoch 91, CIFAR-10 Batch 5:  Loss:     0.0280 Validation Accuracy: 0.622200
Epoch 92, CIFAR-10 Batch 1:  Loss:     0.0122 Validation Accuracy: 0.614400
Epoch 92, CIFAR-10 Batch 2:  Loss:     0.0233 Validation Accuracy: 0.625200
Epoch 92, CIFAR-10 Batch 3:  Loss:     0.0037 Validation Accuracy: 0.626600
Epoch 92, CIFAR-10 Batch 4:  Loss:     0.0175 Validation Accuracy: 0.622800
Epoch 92, CIFAR-10 Batch 5:  Loss:     0.0304 Validation Accuracy: 0.625800
Epoch 93, CIFAR-10 Batch 1:  Loss:     0.0122 Validation Accuracy: 0.620800
Epoch 93, CIFAR-10 Batch 2:  Loss:     0.0208 Validation Accuracy: 0.614400
Epoch 93, CIFAR-10 Batch 3:  Loss:     0.0045 Validation Accuracy: 0.619800
Epoch 93, CIFAR-10 Batch 4:  Loss:     0.0199 Validation Accuracy: 0.622400
Epoch 93, CIFAR-10 Batch 5:  Loss:     0.0208 Validation Accuracy: 0.619400
Epoch 94, CIFAR-10 Batch 1:  Loss:     0.0123 Validation Accuracy: 0.622000
Epoch 94, CIFAR-10 Batch 2:  Loss:     0.0263 Validation Accuracy: 0.624000
Epoch 94, CIFAR-10 Batch 3:  Loss:     0.0146 Validation Accuracy: 0.618600
Epoch 94, CIFAR-10 Batch 4:  Loss:     0.0270 Validation Accuracy: 0.620800
Epoch 94, CIFAR-10 Batch 5:  Loss:     0.0187 Validation Accuracy: 0.617400
Epoch 95, CIFAR-10 Batch 1:  Loss:     0.0108 Validation Accuracy: 0.617600
Epoch 95, CIFAR-10 Batch 2:  Loss:     0.0144 Validation Accuracy: 0.622800
Epoch 95, CIFAR-10 Batch 3:  Loss:     0.0031 Validation Accuracy: 0.619400
Epoch 95, CIFAR-10 Batch 4:  Loss:     0.0182 Validation Accuracy: 0.622000
Epoch 95, CIFAR-10 Batch 5:  Loss:     0.0176 Validation Accuracy: 0.627400
Epoch 96, CIFAR-10 Batch 1:  Loss:     0.0119 Validation Accuracy: 0.624600
Epoch 96, CIFAR-10 Batch 2:  Loss:     0.0146 Validation Accuracy: 0.619600
Epoch 96, CIFAR-10 Batch 3:  Loss:     0.0050 Validation Accuracy: 0.621200
Epoch 96, CIFAR-10 Batch 4:  Loss:     0.0197 Validation Accuracy: 0.612400
Epoch 96, CIFAR-10 Batch 5:  Loss:     0.0132 Validation Accuracy: 0.623000
Epoch 97, CIFAR-10 Batch 1:  Loss:     0.0097 Validation Accuracy: 0.617400
Epoch 97, CIFAR-10 Batch 2:  Loss:     0.0220 Validation Accuracy: 0.616200
Epoch 97, CIFAR-10 Batch 3:  Loss:     0.0022 Validation Accuracy: 0.618400
Epoch 97, CIFAR-10 Batch 4:  Loss:     0.0151 Validation Accuracy: 0.613400
Epoch 97, CIFAR-10 Batch 5:  Loss:     0.0119 Validation Accuracy: 0.626800
Epoch 98, CIFAR-10 Batch 1:  Loss:     0.0144 Validation Accuracy: 0.620000
Epoch 98, CIFAR-10 Batch 2:  Loss:     0.0180 Validation Accuracy: 0.613400
Epoch 98, CIFAR-10 Batch 3:  Loss:     0.0023 Validation Accuracy: 0.623800
Epoch 98, CIFAR-10 Batch 4:  Loss:     0.0578 Validation Accuracy: 0.612000
Epoch 98, CIFAR-10 Batch 5:  Loss:     0.0108 Validation Accuracy: 0.622000
Epoch 99, CIFAR-10 Batch 1:  Loss:     0.0099 Validation Accuracy: 0.624600
Epoch 99, CIFAR-10 Batch 2:  Loss:     0.0138 Validation Accuracy: 0.617800
Epoch 99, CIFAR-10 Batch 3:  Loss:     0.0025 Validation Accuracy: 0.620400
Epoch 99, CIFAR-10 Batch 4:  Loss:     0.0276 Validation Accuracy: 0.616400
Epoch 99, CIFAR-10 Batch 5:  Loss:     0.0083 Validation Accuracy: 0.630400
Epoch 100, CIFAR-10 Batch 1:  Loss:     0.0095 Validation Accuracy: 0.621800
Epoch 100, CIFAR-10 Batch 2:  Loss:     0.0298 Validation Accuracy: 0.616400
Epoch 100, CIFAR-10 Batch 3:  Loss:     0.0032 Validation Accuracy: 0.625800
Epoch 100, CIFAR-10 Batch 4:  Loss:     0.0266 Validation Accuracy: 0.618600
Epoch 100, CIFAR-10 Batch 5:  Loss:     0.0102 Validation Accuracy: 0.622200

Checkpoint

The model has been saved to disk.
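
For reference, a checkpoint like this is written with tf.train.Saver in TensorFlow 1.x. A minimal sketch, assuming sess is the training session and the save path matches the one the test cell below restores from:

import tensorflow as tf

# Sketch only: saver.save writes ./image_classification.meta, .index, and
# .data-* files, which the test cell below restores with import_meta_graph.
saver = tf.train.Saver()
save_path = saver.save(sess, './image_classification')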

Test Model

Test your model against the test dataset. This will be your final accuracy. Your accuracy should be greater than 50%. If it isn't, keep tweaking the model architecture and parameters.


In [55]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    # Note: despite its name, 'preprocess_training.p' holds the preprocessed test set
    test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Compute accuracy in batches to avoid running out of memory
        test_batch_acc_total = 0
        test_batch_count = 0

        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.631640625
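
The helper.batch_features_labels generator used above isn't shown in this notebook. A minimal sketch of what it presumably does (an assumption — the actual helper.py may differ) is to yield aligned slices of the features and labels:

def batch_features_labels(features, labels, batch_size):
    """
    Split features and labels into batches of at most batch_size items
    : features: Numpy array of image data
    : labels: One-hot encoded Numpy array of labels
    : return: Generator yielding (features, labels) batch pairs
    """
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]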

Why 50-70% Accuracy?

You might be wondering why you can't get a higher accuracy. First things first: 50% isn't bad for a simple CNN, since pure guessing across the ten classes would only get you 10%. However, you might notice that people report scores well above 70%. That's because we haven't taught you all there is to know about neural networks yet; a few more techniques remain. One of them, data augmentation, is sketched below.
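
Data augmentation enlarges the training set with label-preserving transformations of the images, such as horizontal flips. A minimal numpy sketch (illustrative only; augment_batch is a hypothetical helper, not part of this project's code):

import numpy as np

def augment_batch(features, labels):
    """
    Double a batch by appending horizontally flipped copies of each image
    : features: Numpy array of image data with shape (n, 32, 32, 3)
    : labels: One-hot encoded Numpy array with shape (n, 10)
    : return: Augmented (features, labels) pair, each twice as long
    """
    flipped = features[:, :, ::-1, :]  # reverse the width axis of every image
    return np.concatenate([features, flipped]), np.concatenate([labels, labels])

Training on augmented batches typically reduces the kind of overfitting visible in the log above, where training loss falls toward zero while validation accuracy plateaus around 62%.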

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.