Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you've learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()


tests.test_folder_path(cifar10_dataset_folder_path)


CIFAR-10 Dataset: 171MB [01:27, 1.96MB/s]                              
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images; each label is one of the following:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
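
If you want to poke at the raw files directly, each data_batch_N is just a pickled dictionary. A minimal sketch of loading one yourself (assuming the standard CIFAR-10 python layout: a (10000, 3072) uint8 'data' array and a 'labels' list):

import pickle

with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as file:
    # encoding='latin1' because the batches were pickled with Python 2
    batch = pickle.load(file, encoding='latin1')

# 'data' holds flat uint8 rows; reshape into (10000, 32, 32, 3) channel-last images
features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
labels = batch['labels']
print(features.shape, min(labels), max(labels))  # (10000, 32, 32, 3) 0 9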


In [6]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 3
sample_id = 1016
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 3:
Samples: 10000
Label Counts: {0: 994, 1: 1042, 2: 965, 3: 997, 4: 990, 5: 1029, 6: 978, 7: 1015, 8: 961, 9: 1029}
First 20 Labels: [8, 5, 0, 6, 9, 2, 8, 3, 6, 2, 7, 4, 6, 9, 0, 0, 7, 3, 7, 2]

Example of Image 1016:
Image - Min Value: 0 Max Value: 241
Image - Shape: (32, 32, 3)
Label - Label Id: 4 Name: deer

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [7]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # CIFAR-10 pixels are 8-bit, so dividing by 255 maps them onto [0, 1].
    # Dividing by np.max(x) would also pass the test, but the scale would
    # then depend on the contents of each batch.
    return x / 255.0


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed
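
A quick sanity check on hypothetical pixel values: since the pixels are 8-bit, dividing by 255 maps the full 0-255 range onto [0, 1]:

sample = np.array([[0, 63.75, 127.5, 255]])
print(normalize(sample))  # [[ 0.    0.25  0.5   1.  ]]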

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [9]:
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # Indexing an identity matrix by the labels picks out one-hot rows,
    # so every label maps to the same encoding on every call
    return np.eye(10)[x]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed
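
For example, labels 0, 3, and 9 each map to a row of a 10x10 identity matrix:

print(one_hot_encode([0, 3, 9]))
# [[ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
#  [ 0.  0.  0.  1.  0.  0.  0.  0.  0.  0.]
#  [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]]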

Randomize Data

As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
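
If you did want to reshuffle, a minimal sketch that keeps features and labels aligned (assuming both are Numpy arrays of the same length):

shuffle_idx = np.random.permutation(len(features))
features, labels = features[shuffle_idx], labels[shuffle_idx]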

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [10]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
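
For reference, a simplified sketch of what helper.preprocess_and_save_data does per batch (assuming it normalizes the features, one-hot encodes the labels, and holds out the last 10% of each batch for validation):

validation_count = int(len(features) * 0.1)
train_features = normalize(features[:-validation_count])
train_labels = one_hot_encode(labels[:-validation_count])
valid_features = normalize(features[-validation_count:])
valid_labels = one_hot_encode(labels[-validation_count:])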

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [3]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.


In [4]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.
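
Because the batch dimension is None, one placeholder accepts any batch size at run time. A throwaway demo (not part of the project code):

import numpy as np

demo_x = neural_net_image_input((32, 32, 3))
print(demo_x.get_shape())  # (?, 32, 32, 3) -- the batch dimension stays dynamic
demo_out = tf.identity(demo_x)

with tf.Session() as sess:
    for batch in (np.zeros((4, 32, 32, 3)), np.zeros((7, 32, 32, 3))):
        # the same placeholder accepts both batch sizes
        print(sess.run(demo_out, {demo_x: batch}).shape)  # (4, ...) then (7, ...)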

Convolution and Max Pooling Layer

Convolutional layers have had a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
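
A note on shapes: with SAME padding, the spatial output size depends only on the stride, out = ceil(in / stride). A quick check of that arithmetic for this project's 32x32 images:

import math

def same_padding_output(size, stride):
    # SAME padding: output = ceil(input / stride), independent of kernel size
    return math.ceil(size / stride)

print(same_padding_output(32, 2))  # 16 -- after a stride-2 convolution
print(same_padding_output(16, 2))  # 8  -- after a stride-2 max pool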


In [5]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """

    # Filter weights: [filter_height, filter_width, input_depth, output_depth]
    weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1],
                                               int(x_tensor.get_shape()[3]),
                                               conv_num_outputs], stddev=0.1))
    bias = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.1))

    # TensorFlow expects strides and ksize as [batch, height, width, depth]
    conv_strides = [1, conv_strides[0], conv_strides[1], 1]
    pool_ksize = [1, pool_ksize[0], pool_ksize[1], 1]
    pool_strides = [1, pool_strides[0], pool_strides[1], 1]
    
    conv_layer = tf.nn.conv2d(x_tensor, weights, strides=conv_strides, padding='SAME')
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    conv_layer = tf.nn.relu(conv_layer)
    conv_layer = tf.nn.max_pool(
        conv_layer,
        ksize=pool_ksize,
        strides=pool_strides,
        padding='SAME')
    
    return conv_layer




"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed
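
A quick way to confirm the shapes on a throwaway placeholder, using the same parameters as the first layer of conv_net below:

check_x = tf.placeholder(tf.float32, (None, 32, 32, 3))
check_out = conv2d_maxpool(check_x, 32, (4, 4), (2, 2), (4, 4), (2, 2))
print(check_out.get_shape())  # (?, 8, 8, 32) -- stride-2 conv, then stride-2 pool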

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [6]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # Multiply the image dimensions together to get the flattened size
    flat_size = int(x_tensor.shape[1]) * int(x_tensor.shape[2]) * int(x_tensor.shape[3])
    return tf.reshape(x_tensor, shape=[-1, flat_size])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed
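
For example, the (?, 8, 8, 32) tensor from the shape check above flattens to 8 * 8 * 32 = 2048 values per image:

print(flatten(tf.placeholder(tf.float32, (None, 8, 8, 32))).get_shape())  # (?, 2048)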

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [7]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weights = tf.Variable(tf.truncated_normal([int(x_tensor.shape[1]), num_outputs], stddev=0.1))
    bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
    
    return tf.nn.relu(tf.matmul(x_tensor, weights) + bias)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this layer; the loss below uses tf.nn.softmax_cross_entropy_with_logits, which expects raw logits and applies softmax itself.


In [8]:
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weights = tf.Variable(tf.truncated_normal([int(x_tensor.shape[1]), num_outputs], stddev=0.1))
    bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
    
    return tf.add(tf.matmul(x_tensor, weights), bias)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [31]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different numbers of outputs, kernel sizes and strides
    # Function Definition from Above:
    #conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    layer = conv2d_maxpool(x, 32, (4, 4), (2, 2), (4, 4), (2, 2))
    #layer = tf.nn.dropout(layer, keep_prob)
    layer = conv2d_maxpool(layer, 64, (3, 3), (2, 2), (3, 3), (2, 2))
    layer = tf.nn.dropout(layer, keep_prob)
    #layer = conv2d_maxpool(layer, 128, (4, 4), (2, 2), (2, 2), (2, 2))
    #layer = tf.nn.dropout(layer, keep_prob)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    layer = flatten(layer)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different numbers of outputs
    # Function Definition from Above:
    layer = fully_conn(layer, 40)
    layer = fully_conn(layer, 40)
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    return output(layer, 10)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization step. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [16]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [17]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # A keep probability of 1.0 disables dropout while measuring
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})

    print("Loss: {}".format(loss))
    print("Validation Accuracy: {}".format(valid_accuracy))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting (one way to spot this is sketched after this list)
  • Set batch_size to the highest number that your machine has memory for. Most people set it to a common memory size:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout
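
To pick epochs, you can watch the validation accuracy: once it stops improving while the training loss keeps falling, the network is overfitting. A minimal sketch of that bookkeeping (a hypothetical helper, not part of the project code):

best_accuracy = 0.0
epochs_without_improvement = 0

def is_overfitting(validation_accuracy, patience=10):
    """Return True once validation accuracy hasn't improved for `patience` epochs."""
    global best_accuracy, epochs_without_improvement
    if validation_accuracy > best_accuracy:
        best_accuracy = validation_accuracy
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
    return epochs_without_improvement >= patience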

In [36]:
# TODO: Tune Parameters
epochs = 100
batch_size = 256
keep_probability = 0.8

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [37]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.1746606826782227
Validation Accuracy: 0.18400000035762787
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.0287394523620605
Validation Accuracy: 0.2662000060081482
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.9457350969314575
Validation Accuracy: 0.29840001463890076
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.8301498889923096
Validation Accuracy: 0.3463999927043915
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.760606288909912
Validation Accuracy: 0.3734000027179718
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.6915462017059326
Validation Accuracy: 0.38419997692108154
Epoch  7, CIFAR-10 Batch 1:  Loss: 1.6186200380325317
Validation Accuracy: 0.39819997549057007
Epoch  8, CIFAR-10 Batch 1:  Loss: 1.5882208347320557
Validation Accuracy: 0.41159993410110474
Epoch  9, CIFAR-10 Batch 1:  Loss: 1.5262558460235596
Validation Accuracy: 0.41899996995925903
Epoch 10, CIFAR-10 Batch 1:  Loss: 1.4671363830566406
Validation Accuracy: 0.4283999800682068
Epoch 11, CIFAR-10 Batch 1:  Loss: 1.429485559463501
Validation Accuracy: 0.44439995288848877
Epoch 12, CIFAR-10 Batch 1:  Loss: 1.393850564956665
Validation Accuracy: 0.444599986076355
Epoch 13, CIFAR-10 Batch 1:  Loss: 1.3592623472213745
Validation Accuracy: 0.454399973154068
Epoch 14, CIFAR-10 Batch 1:  Loss: 1.3215725421905518
Validation Accuracy: 0.4605999290943146
Epoch 15, CIFAR-10 Batch 1:  Loss: 1.2827091217041016
Validation Accuracy: 0.4633999466896057
Epoch 16, CIFAR-10 Batch 1:  Loss: 1.2331130504608154
Validation Accuracy: 0.4773999750614166
Epoch 17, CIFAR-10 Batch 1:  Loss: 1.1900405883789062
Validation Accuracy: 0.4809999465942383
Epoch 18, CIFAR-10 Batch 1:  Loss: 1.1592490673065186
Validation Accuracy: 0.4793999493122101
Epoch 19, CIFAR-10 Batch 1:  Loss: 1.1357645988464355
Validation Accuracy: 0.4941999316215515
Epoch 20, CIFAR-10 Batch 1:  Loss: 1.1122007369995117
Validation Accuracy: 0.4923999607563019
Epoch 21, CIFAR-10 Batch 1:  Loss: 1.0796908140182495
Validation Accuracy: 0.49939998984336853
Epoch 22, CIFAR-10 Batch 1:  Loss: 1.0454235076904297
Validation Accuracy: 0.5063999891281128
Epoch 23, CIFAR-10 Batch 1:  Loss: 1.0266528129577637
Validation Accuracy: 0.5091999769210815
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.971301794052124
Validation Accuracy: 0.514799952507019
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.940711498260498
Validation Accuracy: 0.514799952507019
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.9319143891334534
Validation Accuracy: 0.5127999782562256
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.8847864866256714
Validation Accuracy: 0.5201998949050903
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.8622934818267822
Validation Accuracy: 0.5237999558448792
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.8206983804702759
Validation Accuracy: 0.5283999443054199
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.8045783042907715
Validation Accuracy: 0.5311999320983887
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.7602722644805908
Validation Accuracy: 0.535599946975708
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.7494924664497375
Validation Accuracy: 0.5423999428749084
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.7416500449180603
Validation Accuracy: 0.5375999808311462
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.7240997552871704
Validation Accuracy: 0.5371999740600586
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.6896461248397827
Validation Accuracy: 0.5437999367713928
Epoch 36, CIFAR-10 Batch 1:  Loss: 0.6698452234268188
Validation Accuracy: 0.5405999422073364
Epoch 37, CIFAR-10 Batch 1:  Loss: 0.6589939594268799
Validation Accuracy: 0.5449999570846558
Epoch 38, CIFAR-10 Batch 1:  Loss: 0.6363341808319092
Validation Accuracy: 0.5459998846054077
Epoch 39, CIFAR-10 Batch 1:  Loss: 0.626771092414856
Validation Accuracy: 0.5441999435424805
Epoch 40, CIFAR-10 Batch 1:  Loss: 0.604494571685791
Validation Accuracy: 0.544999897480011
Epoch 41, CIFAR-10 Batch 1:  Loss: 0.5974850654602051
Validation Accuracy: 0.5501999855041504
Epoch 42, CIFAR-10 Batch 1:  Loss: 0.5834242105484009
Validation Accuracy: 0.5535999536514282
Epoch 43, CIFAR-10 Batch 1:  Loss: 0.5942342281341553
Validation Accuracy: 0.5543999671936035
Epoch 44, CIFAR-10 Batch 1:  Loss: 0.5883170366287231
Validation Accuracy: 0.556999921798706
Epoch 45, CIFAR-10 Batch 1:  Loss: 0.5361185073852539
Validation Accuracy: 0.5587999224662781
Epoch 46, CIFAR-10 Batch 1:  Loss: 0.5044657588005066
Validation Accuracy: 0.5667999386787415
Epoch 47, CIFAR-10 Batch 1:  Loss: 0.5097840428352356
Validation Accuracy: 0.5681999325752258
Epoch 48, CIFAR-10 Batch 1:  Loss: 0.48674461245536804
Validation Accuracy: 0.5641999244689941
Epoch 49, CIFAR-10 Batch 1:  Loss: 0.4710724651813507
Validation Accuracy: 0.558199942111969
Epoch 50, CIFAR-10 Batch 1:  Loss: 0.4694976210594177
Validation Accuracy: 0.5731999278068542
Epoch 51, CIFAR-10 Batch 1:  Loss: 0.4481922686100006
Validation Accuracy: 0.5699999332427979
Epoch 52, CIFAR-10 Batch 1:  Loss: 0.4443246126174927
Validation Accuracy: 0.5727999210357666
Epoch 53, CIFAR-10 Batch 1:  Loss: 0.4463832378387451
Validation Accuracy: 0.5635998845100403
Epoch 54, CIFAR-10 Batch 1:  Loss: 0.44175657629966736
Validation Accuracy: 0.5759999752044678
Epoch 55, CIFAR-10 Batch 1:  Loss: 0.41942736506462097
Validation Accuracy: 0.5777999758720398
Epoch 56, CIFAR-10 Batch 1:  Loss: 0.4077526926994324
Validation Accuracy: 0.5757999420166016
Epoch 57, CIFAR-10 Batch 1:  Loss: 0.39177778363227844
Validation Accuracy: 0.5637999773025513
Epoch 58, CIFAR-10 Batch 1:  Loss: 0.38824179768562317
Validation Accuracy: 0.5763999223709106
Epoch 59, CIFAR-10 Batch 1:  Loss: 0.3773651123046875
Validation Accuracy: 0.5799999237060547
Epoch 60, CIFAR-10 Batch 1:  Loss: 0.3739699125289917
Validation Accuracy: 0.5701999068260193
Epoch 61, CIFAR-10 Batch 1:  Loss: 0.3562885522842407
Validation Accuracy: 0.5849999785423279
Epoch 62, CIFAR-10 Batch 1:  Loss: 0.3653099834918976
Validation Accuracy: 0.5753999352455139
Epoch 63, CIFAR-10 Batch 1:  Loss: 0.3213104009628296
Validation Accuracy: 0.5767999291419983
Epoch 64, CIFAR-10 Batch 1:  Loss: 0.3085307776927948
Validation Accuracy: 0.5735999345779419
Epoch 65, CIFAR-10 Batch 1:  Loss: 0.3124387264251709
Validation Accuracy: 0.5619999170303345
Epoch 66, CIFAR-10 Batch 1:  Loss: 0.3073600232601166
Validation Accuracy: 0.5763999223709106
Epoch 67, CIFAR-10 Batch 1:  Loss: 0.30320054292678833
Validation Accuracy: 0.5817998647689819
Epoch 68, CIFAR-10 Batch 1:  Loss: 0.309445858001709
Validation Accuracy: 0.5717998743057251
Epoch 69, CIFAR-10 Batch 1:  Loss: 0.30452796816825867
Validation Accuracy: 0.5683999061584473
Epoch 70, CIFAR-10 Batch 1:  Loss: 0.29527944326400757
Validation Accuracy: 0.5785999298095703
Epoch 71, CIFAR-10 Batch 1:  Loss: 0.28981733322143555
Validation Accuracy: 0.5739999413490295
Epoch 72, CIFAR-10 Batch 1:  Loss: 0.29755088686943054
Validation Accuracy: 0.5761998891830444
Epoch 73, CIFAR-10 Batch 1:  Loss: 0.30996572971343994
Validation Accuracy: 0.5701998472213745
Epoch 74, CIFAR-10 Batch 1:  Loss: 0.30247440934181213
Validation Accuracy: 0.5505999326705933
Epoch 75, CIFAR-10 Batch 1:  Loss: 0.2648955285549164
Validation Accuracy: 0.5755998492240906
Epoch 76, CIFAR-10 Batch 1:  Loss: 0.2419528365135193
Validation Accuracy: 0.5773999691009521
Epoch 77, CIFAR-10 Batch 1:  Loss: 0.23336437344551086
Validation Accuracy: 0.5841999053955078
Epoch 78, CIFAR-10 Batch 1:  Loss: 0.23413966596126556
Validation Accuracy: 0.5797998905181885
Epoch 79, CIFAR-10 Batch 1:  Loss: 0.22126254439353943
Validation Accuracy: 0.5809999108314514
Epoch 80, CIFAR-10 Batch 1:  Loss: 0.2127663791179657
Validation Accuracy: 0.5765999555587769
Epoch 81, CIFAR-10 Batch 1:  Loss: 0.22599779069423676
Validation Accuracy: 0.5705999135971069
Epoch 82, CIFAR-10 Batch 1:  Loss: 0.22242745757102966
Validation Accuracy: 0.5627999901771545
Epoch 83, CIFAR-10 Batch 1:  Loss: 0.2043347954750061
Validation Accuracy: 0.586199939250946
Epoch 84, CIFAR-10 Batch 1:  Loss: 0.19840021431446075
Validation Accuracy: 0.5815998911857605
Epoch 85, CIFAR-10 Batch 1:  Loss: 0.18979302048683167
Validation Accuracy: 0.5801998972892761
Epoch 86, CIFAR-10 Batch 1:  Loss: 0.19136761128902435
Validation Accuracy: 0.5835999250411987
Epoch 87, CIFAR-10 Batch 1:  Loss: 0.1836610734462738
Validation Accuracy: 0.5719999074935913
Epoch 88, CIFAR-10 Batch 1:  Loss: 0.2077423632144928
Validation Accuracy: 0.5625998973846436
Epoch 89, CIFAR-10 Batch 1:  Loss: 0.18396669626235962
Validation Accuracy: 0.5679999589920044
Epoch 90, CIFAR-10 Batch 1:  Loss: 0.15346796810626984
Validation Accuracy: 0.5811998844146729
Epoch 91, CIFAR-10 Batch 1:  Loss: 0.15729911625385284
Validation Accuracy: 0.5747998952865601
Epoch 92, CIFAR-10 Batch 1:  Loss: 0.1647510826587677
Validation Accuracy: 0.5649999380111694
Epoch 93, CIFAR-10 Batch 1:  Loss: 0.19637228548526764
Validation Accuracy: 0.5691999197006226
Epoch 94, CIFAR-10 Batch 1:  Loss: 0.16514122486114502
Validation Accuracy: 0.5759999752044678
Epoch 95, CIFAR-10 Batch 1:  Loss: 0.16009865701198578
Validation Accuracy: 0.5773999094963074
Epoch 96, CIFAR-10 Batch 1:  Loss: 0.15932121872901917
Validation Accuracy: 0.5701999664306641
Epoch 97, CIFAR-10 Batch 1:  Loss: 0.1455449014902115
Validation Accuracy: 0.5703999400138855
Epoch 98, CIFAR-10 Batch 1:  Loss: 0.13883209228515625
Validation Accuracy: 0.5705999135971069
Epoch 99, CIFAR-10 Batch 1:  Loss: 0.13548415899276733
Validation Accuracy: 0.5699999332427979
Epoch 100, CIFAR-10 Batch 1:  Loss: 0.15523676574230194
Validation Accuracy: 0.5727999210357666

Fully Train the Model

Now that you've gotten a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [38]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.1977498531341553
Validation Accuracy: 0.20059999823570251
Epoch  1, CIFAR-10 Batch 2:  Loss: 1.988148808479309
Validation Accuracy: 0.2833999693393707
Epoch  1, CIFAR-10 Batch 3:  Loss: 1.6414859294891357
Validation Accuracy: 0.3081999719142914
Epoch  1, CIFAR-10 Batch 4:  Loss: 1.7286639213562012
Validation Accuracy: 0.34880000352859497
Epoch  1, CIFAR-10 Batch 5:  Loss: 1.688643217086792
Validation Accuracy: 0.36459997296333313
Epoch  2, CIFAR-10 Batch 1:  Loss: 1.7484030723571777
Validation Accuracy: 0.3937999904155731
Epoch  2, CIFAR-10 Batch 2:  Loss: 1.7505152225494385
Validation Accuracy: 0.399399995803833
Epoch  2, CIFAR-10 Batch 3:  Loss: 1.3875747919082642
Validation Accuracy: 0.39879995584487915
Epoch  2, CIFAR-10 Batch 4:  Loss: 1.6338930130004883
Validation Accuracy: 0.42879998683929443
Epoch  2, CIFAR-10 Batch 5:  Loss: 1.4947240352630615
Validation Accuracy: 0.437999963760376
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.5879207849502563
Validation Accuracy: 0.44620001316070557
Epoch  3, CIFAR-10 Batch 2:  Loss: 1.611999750137329
Validation Accuracy: 0.45579993724823
Epoch  3, CIFAR-10 Batch 3:  Loss: 1.2323493957519531
Validation Accuracy: 0.44919997453689575
Epoch  3, CIFAR-10 Batch 4:  Loss: 1.5603946447372437
Validation Accuracy: 0.47200000286102295
Epoch  3, CIFAR-10 Batch 5:  Loss: 1.4046629667282104
Validation Accuracy: 0.46699994802474976
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.4267324209213257
Validation Accuracy: 0.4829999506473541
Epoch  4, CIFAR-10 Batch 2:  Loss: 1.4663374423980713
Validation Accuracy: 0.49199995398521423
Epoch  4, CIFAR-10 Batch 3:  Loss: 1.132730484008789
Validation Accuracy: 0.4949999451637268
Epoch  4, CIFAR-10 Batch 4:  Loss: 1.4437487125396729
Validation Accuracy: 0.4997999668121338
Epoch  4, CIFAR-10 Batch 5:  Loss: 1.2886548042297363
Validation Accuracy: 0.4949999451637268
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.2837653160095215
Validation Accuracy: 0.5127999782562256
Epoch  5, CIFAR-10 Batch 2:  Loss: 1.3707767724990845
Validation Accuracy: 0.5059999823570251
Epoch  5, CIFAR-10 Batch 3:  Loss: 1.0916495323181152
Validation Accuracy: 0.5121999979019165
Epoch  5, CIFAR-10 Batch 4:  Loss: 1.3771164417266846
Validation Accuracy: 0.5295999646186829
Epoch  5, CIFAR-10 Batch 5:  Loss: 1.2021005153656006
Validation Accuracy: 0.5179999470710754
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.1907583475112915
Validation Accuracy: 0.5267999172210693
Epoch  6, CIFAR-10 Batch 2:  Loss: 1.3104201555252075
Validation Accuracy: 0.5252000093460083
Epoch  6, CIFAR-10 Batch 3:  Loss: 1.0367233753204346
Validation Accuracy: 0.5375999808311462
Epoch  6, CIFAR-10 Batch 4:  Loss: 1.2396347522735596
Validation Accuracy: 0.5433999300003052
Epoch  6, CIFAR-10 Batch 5:  Loss: 1.1205297708511353
Validation Accuracy: 0.5335999131202698
Epoch  7, CIFAR-10 Batch 1:  Loss: 1.1361535787582397
Validation Accuracy: 0.5363999605178833
Epoch  7, CIFAR-10 Batch 2:  Loss: 1.2286032438278198
Validation Accuracy: 0.5410000085830688
Epoch  7, CIFAR-10 Batch 3:  Loss: 0.980373740196228
Validation Accuracy: 0.5471998453140259
Epoch  7, CIFAR-10 Batch 4:  Loss: 1.1710336208343506
Validation Accuracy: 0.5625998973846436
Epoch  7, CIFAR-10 Batch 5:  Loss: 1.059490442276001
Validation Accuracy: 0.5569999814033508
Epoch  8, CIFAR-10 Batch 1:  Loss: 1.0728704929351807
Validation Accuracy: 0.5625998973846436
Epoch  8, CIFAR-10 Batch 2:  Loss: 1.148860216140747
Validation Accuracy: 0.5517998933792114
Epoch  8, CIFAR-10 Batch 3:  Loss: 0.9174534678459167
Validation Accuracy: 0.5633999109268188
Epoch  8, CIFAR-10 Batch 4:  Loss: 1.1140468120574951
Validation Accuracy: 0.573199987411499
Epoch  8, CIFAR-10 Batch 5:  Loss: 1.031442403793335
Validation Accuracy: 0.5679999589920044
Epoch  9, CIFAR-10 Batch 1:  Loss: 1.0553324222564697
Validation Accuracy: 0.5703999400138855
Epoch  9, CIFAR-10 Batch 2:  Loss: 1.1303911209106445
Validation Accuracy: 0.5661999583244324
Epoch  9, CIFAR-10 Batch 3:  Loss: 0.8652509450912476
Validation Accuracy: 0.5719999074935913
Epoch  9, CIFAR-10 Batch 4:  Loss: 1.046394944190979
Validation Accuracy: 0.5777999758720398
Epoch  9, CIFAR-10 Batch 5:  Loss: 0.9565870761871338
Validation Accuracy: 0.5851999521255493
Epoch 10, CIFAR-10 Batch 1:  Loss: 1.001705527305603
Validation Accuracy: 0.5735999345779419
Epoch 10, CIFAR-10 Batch 2:  Loss: 1.081337571144104
Validation Accuracy: 0.574199914932251
Epoch 10, CIFAR-10 Batch 3:  Loss: 0.802502453327179
Validation Accuracy: 0.5787999629974365
Epoch 10, CIFAR-10 Batch 4:  Loss: 1.01389479637146
Validation Accuracy: 0.590999960899353
Epoch 10, CIFAR-10 Batch 5:  Loss: 0.8609421253204346
Validation Accuracy: 0.5951998829841614
Epoch 11, CIFAR-10 Batch 1:  Loss: 0.9621260166168213
Validation Accuracy: 0.5883998870849609
Epoch 11, CIFAR-10 Batch 2:  Loss: 1.0428920984268188
Validation Accuracy: 0.5887999534606934
Epoch 11, CIFAR-10 Batch 3:  Loss: 0.7476211190223694
Validation Accuracy: 0.5831999182701111
Epoch 11, CIFAR-10 Batch 4:  Loss: 0.9659223556518555
Validation Accuracy: 0.5995998978614807
Epoch 11, CIFAR-10 Batch 5:  Loss: 0.8117021322250366
Validation Accuracy: 0.5893999338150024
Epoch 12, CIFAR-10 Batch 1:  Loss: 0.9700316786766052
Validation Accuracy: 0.5921999216079712
Epoch 12, CIFAR-10 Batch 2:  Loss: 1.0059361457824707
Validation Accuracy: 0.5955999493598938
Epoch 12, CIFAR-10 Batch 3:  Loss: 0.7056434750556946
Validation Accuracy: 0.5903998613357544
Epoch 12, CIFAR-10 Batch 4:  Loss: 0.9203470349311829
Validation Accuracy: 0.6017999649047852
Epoch 12, CIFAR-10 Batch 5:  Loss: 0.7557036280632019
Validation Accuracy: 0.6007999777793884
Epoch 13, CIFAR-10 Batch 1:  Loss: 0.9592165946960449
Validation Accuracy: 0.5991998910903931
Epoch 13, CIFAR-10 Batch 2:  Loss: 0.9515608549118042
Validation Accuracy: 0.6037998795509338
Epoch 13, CIFAR-10 Batch 3:  Loss: 0.6732942461967468
Validation Accuracy: 0.5929999351501465
Epoch 13, CIFAR-10 Batch 4:  Loss: 0.8724186420440674
Validation Accuracy: 0.6089999079704285
Epoch 13, CIFAR-10 Batch 5:  Loss: 0.7073040008544922
Validation Accuracy: 0.6115999221801758
Epoch 14, CIFAR-10 Batch 1:  Loss: 0.907864511013031
Validation Accuracy: 0.6123999357223511
Epoch 14, CIFAR-10 Batch 2:  Loss: 0.8996661901473999
Validation Accuracy: 0.6159999370574951
Epoch 14, CIFAR-10 Batch 3:  Loss: 0.6387280225753784
Validation Accuracy: 0.5951998829841614
Epoch 14, CIFAR-10 Batch 4:  Loss: 0.8259354829788208
Validation Accuracy: 0.6143999099731445
Epoch 14, CIFAR-10 Batch 5:  Loss: 0.6516329050064087
Validation Accuracy: 0.613399863243103
Epoch 15, CIFAR-10 Batch 1:  Loss: 0.8909046649932861
Validation Accuracy: 0.6139999032020569
Epoch 15, CIFAR-10 Batch 2:  Loss: 0.8882606029510498
Validation Accuracy: 0.6129999160766602
Epoch 15, CIFAR-10 Batch 3:  Loss: 0.6103866100311279
Validation Accuracy: 0.6001999378204346
Epoch 15, CIFAR-10 Batch 4:  Loss: 0.7851787805557251
Validation Accuracy: 0.6185998916625977
Epoch 15, CIFAR-10 Batch 5:  Loss: 0.6350667476654053
Validation Accuracy: 0.6129999160766602
Epoch 16, CIFAR-10 Batch 1:  Loss: 0.8888770937919617
Validation Accuracy: 0.6191999316215515
Epoch 16, CIFAR-10 Batch 2:  Loss: 0.8311955332756042
Validation Accuracy: 0.6213999390602112
Epoch 16, CIFAR-10 Batch 3:  Loss: 0.554959774017334
Validation Accuracy: 0.6107998490333557
Epoch 16, CIFAR-10 Batch 4:  Loss: 0.7638996839523315
Validation Accuracy: 0.6183999180793762
Epoch 16, CIFAR-10 Batch 5:  Loss: 0.6164000034332275
Validation Accuracy: 0.6213998794555664
Epoch 17, CIFAR-10 Batch 1:  Loss: 0.8789526224136353
Validation Accuracy: 0.6041998863220215
Epoch 17, CIFAR-10 Batch 2:  Loss: 0.787085235118866
Validation Accuracy: 0.6219998598098755
Epoch 17, CIFAR-10 Batch 3:  Loss: 0.54900062084198
Validation Accuracy: 0.6149998903274536
Epoch 17, CIFAR-10 Batch 4:  Loss: 0.7117475271224976
Validation Accuracy: 0.6271998882293701
Epoch 17, CIFAR-10 Batch 5:  Loss: 0.6058753728866577
Validation Accuracy: 0.6233998537063599
Epoch 18, CIFAR-10 Batch 1:  Loss: 0.823367714881897
Validation Accuracy: 0.622999906539917
Epoch 18, CIFAR-10 Batch 2:  Loss: 0.766086995601654
Validation Accuracy: 0.6347998976707458
Epoch 18, CIFAR-10 Batch 3:  Loss: 0.5125723481178284
Validation Accuracy: 0.6187998652458191
Epoch 18, CIFAR-10 Batch 4:  Loss: 0.7046117782592773
Validation Accuracy: 0.6239999532699585
Epoch 18, CIFAR-10 Batch 5:  Loss: 0.5821499824523926
Validation Accuracy: 0.6261999011039734
Epoch 19, CIFAR-10 Batch 1:  Loss: 0.8069278597831726
Validation Accuracy: 0.6255999207496643
Epoch 19, CIFAR-10 Batch 2:  Loss: 0.7208167314529419
Validation Accuracy: 0.627799928188324
Epoch 19, CIFAR-10 Batch 3:  Loss: 0.5067182183265686
Validation Accuracy: 0.6149998903274536
Epoch 19, CIFAR-10 Batch 4:  Loss: 0.6833785772323608
Validation Accuracy: 0.6361998915672302
Epoch 19, CIFAR-10 Batch 5:  Loss: 0.5660731792449951
Validation Accuracy: 0.6317999362945557
Epoch 20, CIFAR-10 Batch 1:  Loss: 0.8028745651245117
Validation Accuracy: 0.6333998441696167
Epoch 20, CIFAR-10 Batch 2:  Loss: 0.6952913999557495
Validation Accuracy: 0.6287999749183655
Epoch 20, CIFAR-10 Batch 3:  Loss: 0.48867490887641907
Validation Accuracy: 0.6281999349594116
Epoch 20, CIFAR-10 Batch 4:  Loss: 0.6547895669937134
Validation Accuracy: 0.627599835395813
Epoch 20, CIFAR-10 Batch 5:  Loss: 0.5440071821212769
Validation Accuracy: 0.6307998895645142
Epoch 21, CIFAR-10 Batch 1:  Loss: 0.7800212502479553
Validation Accuracy: 0.6297999024391174
Epoch 21, CIFAR-10 Batch 2:  Loss: 0.6863542795181274
Validation Accuracy: 0.6333999037742615
Epoch 21, CIFAR-10 Batch 3:  Loss: 0.4561624526977539
Validation Accuracy: 0.6241999268531799
Epoch 21, CIFAR-10 Batch 4:  Loss: 0.6119809746742249
Validation Accuracy: 0.6373999118804932
Epoch 21, CIFAR-10 Batch 5:  Loss: 0.5441064834594727
Validation Accuracy: 0.6341999173164368
Epoch 22, CIFAR-10 Batch 1:  Loss: 0.7433232069015503
Validation Accuracy: 0.640799880027771
Epoch 22, CIFAR-10 Batch 2:  Loss: 0.6705741286277771
Validation Accuracy: 0.6349999308586121
Epoch 22, CIFAR-10 Batch 3:  Loss: 0.4015306234359741
Validation Accuracy: 0.6369999051094055
Epoch 22, CIFAR-10 Batch 4:  Loss: 0.6076388359069824
Validation Accuracy: 0.640799880027771
Epoch 22, CIFAR-10 Batch 5:  Loss: 0.5024596452713013
Validation Accuracy: 0.645599901676178
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.7181150913238525
Validation Accuracy: 0.6419999003410339
Epoch 23, CIFAR-10 Batch 2:  Loss: 0.6371383666992188
Validation Accuracy: 0.6381998658180237
Epoch 23, CIFAR-10 Batch 3:  Loss: 0.4023593068122864
Validation Accuracy: 0.631399929523468
Epoch 23, CIFAR-10 Batch 4:  Loss: 0.6169359683990479
Validation Accuracy: 0.6321998834609985
Epoch 23, CIFAR-10 Batch 5:  Loss: 0.5024625658988953
Validation Accuracy: 0.6435998678207397
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.70493084192276
Validation Accuracy: 0.6393998861312866
Epoch 24, CIFAR-10 Batch 2:  Loss: 0.621566116809845
Validation Accuracy: 0.6377999186515808
Epoch 24, CIFAR-10 Batch 3:  Loss: 0.4277169108390808
Validation Accuracy: 0.6439999341964722
Epoch 24, CIFAR-10 Batch 4:  Loss: 0.5740879774093628
Validation Accuracy: 0.640999972820282
Epoch 24, CIFAR-10 Batch 5:  Loss: 0.4723868668079376
Validation Accuracy: 0.6444000005722046
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.7005448937416077
Validation Accuracy: 0.6351999044418335
Epoch 25, CIFAR-10 Batch 2:  Loss: 0.5925984978675842
Validation Accuracy: 0.6395999193191528
Epoch 25, CIFAR-10 Batch 3:  Loss: 0.4026998281478882
Validation Accuracy: 0.6429998874664307
Epoch 25, CIFAR-10 Batch 4:  Loss: 0.5574508905410767
Validation Accuracy: 0.648999810218811
Epoch 25, CIFAR-10 Batch 5:  Loss: 0.43602555990219116
Validation Accuracy: 0.6451998949050903
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.6487689018249512
Validation Accuracy: 0.6345998644828796
Epoch 26, CIFAR-10 Batch 2:  Loss: 0.573045015335083
Validation Accuracy: 0.6369998455047607
Epoch 26, CIFAR-10 Batch 3:  Loss: 0.38534870743751526
Validation Accuracy: 0.642599880695343
Epoch 26, CIFAR-10 Batch 4:  Loss: 0.5258550047874451
Validation Accuracy: 0.6463999152183533
Epoch 26, CIFAR-10 Batch 5:  Loss: 0.42165523767471313
Validation Accuracy: 0.64739990234375
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.6617226004600525
Validation Accuracy: 0.6299999356269836
Epoch 27, CIFAR-10 Batch 2:  Loss: 0.5610063076019287
Validation Accuracy: 0.6465998888015747
Epoch 27, CIFAR-10 Batch 3:  Loss: 0.3872659206390381
Validation Accuracy: 0.6363998651504517
Epoch 27, CIFAR-10 Batch 4:  Loss: 0.5089085102081299
Validation Accuracy: 0.6479998826980591
Epoch 27, CIFAR-10 Batch 5:  Loss: 0.426917165517807
Validation Accuracy: 0.6481998562812805
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.6445666551589966
Validation Accuracy: 0.627799928188324
Epoch 28, CIFAR-10 Batch 2:  Loss: 0.5478131175041199
Validation Accuracy: 0.6429998874664307
Epoch 28, CIFAR-10 Batch 3:  Loss: 0.3620033860206604
Validation Accuracy: 0.6415998935699463
Epoch 28, CIFAR-10 Batch 4:  Loss: 0.48613542318344116
Validation Accuracy: 0.6429999470710754
Epoch 28, CIFAR-10 Batch 5:  Loss: 0.4119555652141571
Validation Accuracy: 0.6519998908042908
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.6333691477775574
Validation Accuracy: 0.6339999437332153
Epoch 29, CIFAR-10 Batch 2:  Loss: 0.5522593855857849
Validation Accuracy: 0.6425999402999878
Epoch 29, CIFAR-10 Batch 3:  Loss: 0.3478941321372986
Validation Accuracy: 0.6375998258590698
Epoch 29, CIFAR-10 Batch 4:  Loss: 0.49010148644447327
Validation Accuracy: 0.6447998881340027
Epoch 29, CIFAR-10 Batch 5:  Loss: 0.45046722888946533
Validation Accuracy: 0.6481999158859253
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.6273802518844604
Validation Accuracy: 0.6393999457359314
Epoch 30, CIFAR-10 Batch 2:  Loss: 0.5654564499855042
Validation Accuracy: 0.6475998163223267
Epoch 30, CIFAR-10 Batch 3:  Loss: 0.328299880027771
Validation Accuracy: 0.6345999240875244
Epoch 30, CIFAR-10 Batch 4:  Loss: 0.46261247992515564
Validation Accuracy: 0.6511999368667603
Epoch 30, CIFAR-10 Batch 5:  Loss: 0.41128766536712646
Validation Accuracy: 0.6535999178886414
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.644677460193634
Validation Accuracy: 0.6235998868942261
Epoch 31, CIFAR-10 Batch 2:  Loss: 0.4801250696182251
Validation Accuracy: 0.6431998610496521
Epoch 31, CIFAR-10 Batch 3:  Loss: 0.32946228981018066
Validation Accuracy: 0.6465998888015747
Epoch 31, CIFAR-10 Batch 4:  Loss: 0.46382737159729004
Validation Accuracy: 0.6463999152183533
Epoch 31, CIFAR-10 Batch 5:  Loss: 0.39182159304618835
Validation Accuracy: 0.6537998914718628
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.588614284992218
Validation Accuracy: 0.6431998610496521
Epoch 32, CIFAR-10 Batch 2:  Loss: 0.5043936967849731
Validation Accuracy: 0.6475999355316162
Epoch 32, CIFAR-10 Batch 3:  Loss: 0.30502215027809143
Validation Accuracy: 0.6459999084472656
Epoch 32, CIFAR-10 Batch 4:  Loss: 0.4441353678703308
Validation Accuracy: 0.6575998663902283
Epoch 32, CIFAR-10 Batch 5:  Loss: 0.3586779832839966
Validation Accuracy: 0.653799831867218
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.5669272541999817
Validation Accuracy: 0.6271999478340149
Epoch 33, CIFAR-10 Batch 2:  Loss: 0.4525797963142395
Validation Accuracy: 0.6493998765945435
Epoch 33, CIFAR-10 Batch 3:  Loss: 0.3151791989803314
Validation Accuracy: 0.6495999097824097
Epoch 33, CIFAR-10 Batch 4:  Loss: 0.45039233565330505
Validation Accuracy: 0.6573998928070068
Epoch 33, CIFAR-10 Batch 5:  Loss: 0.37638717889785767
Validation Accuracy: 0.6523998975753784
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.6012490391731262
Validation Accuracy: 0.6333998441696167
Epoch 34, CIFAR-10 Batch 2:  Loss: 0.4306783974170685
Validation Accuracy: 0.6445999145507812
Epoch 34, CIFAR-10 Batch 3:  Loss: 0.2825191915035248
Validation Accuracy: 0.6471998691558838
Epoch 34, CIFAR-10 Batch 4:  Loss: 0.4442032277584076
Validation Accuracy: 0.658799946308136
Epoch 34, CIFAR-10 Batch 5:  Loss: 0.35091233253479004
Validation Accuracy: 0.6523998379707336
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.5919427871704102
Validation Accuracy: 0.6351999044418335
Epoch 35, CIFAR-10 Batch 2:  Loss: 0.4563086926937103
Validation Accuracy: 0.6469998955726624
Epoch 35, CIFAR-10 Batch 3:  Loss: 0.2717805504798889
Validation Accuracy: 0.6465998888015747
Epoch 35, CIFAR-10 Batch 4:  Loss: 0.41754812002182007
Validation Accuracy: 0.654999852180481
Epoch 35, CIFAR-10 Batch 5:  Loss: 0.3495824933052063
Validation Accuracy: 0.6513999104499817
Epoch 36, CIFAR-10 Batch 1:  Loss: 0.5482213497161865
Validation Accuracy: 0.6425998210906982
Epoch 36, CIFAR-10 Batch 2:  Loss: 0.427451491355896
Validation Accuracy: 0.6437999606132507
Epoch 36, CIFAR-10 Batch 3:  Loss: 0.2652438282966614
Validation Accuracy: 0.6513998508453369
Epoch 36, CIFAR-10 Batch 4:  Loss: 0.4359656572341919
Validation Accuracy: 0.656799852848053
Epoch 36, CIFAR-10 Batch 5:  Loss: 0.3495517075061798
Validation Accuracy: 0.6551999449729919
Epoch 37, CIFAR-10 Batch 1:  Loss: 0.5460016131401062
Validation Accuracy: 0.6433998346328735
Epoch 37, CIFAR-10 Batch 2:  Loss: 0.4297662675380707
Validation Accuracy: 0.6453999280929565
Epoch 37, CIFAR-10 Batch 3:  Loss: 0.28954851627349854
Validation Accuracy: 0.6533999443054199
Epoch 37, CIFAR-10 Batch 4:  Loss: 0.404166579246521
Validation Accuracy: 0.6569998264312744
Epoch 37, CIFAR-10 Batch 5:  Loss: 0.3481847047805786
Validation Accuracy: 0.6531999111175537
Epoch 38, CIFAR-10 Batch 1:  Loss: 0.5439050793647766
Validation Accuracy: 0.6301999092102051
Epoch 38, CIFAR-10 Batch 2:  Loss: 0.40804582834243774
Validation Accuracy: 0.6369999051094055
Epoch 38, CIFAR-10 Batch 3:  Loss: 0.269390732049942
Validation Accuracy: 0.6477999091148376
Epoch 38, CIFAR-10 Batch 4:  Loss: 0.37944334745407104
Validation Accuracy: 0.6599999666213989
Epoch 38, CIFAR-10 Batch 5:  Loss: 0.3406701385974884
Validation Accuracy: 0.6511998176574707
Epoch 39, CIFAR-10 Batch 1:  Loss: 0.5181202292442322
Validation Accuracy: 0.6453998684883118
Epoch 39, CIFAR-10 Batch 2:  Loss: 0.39736053347587585
Validation Accuracy: 0.6403998732566833
Epoch 39, CIFAR-10 Batch 3:  Loss: 0.25187385082244873
Validation Accuracy: 0.6553998589515686
Epoch 39, CIFAR-10 Batch 4:  Loss: 0.4015665352344513
Validation Accuracy: 0.6539998650550842
Epoch 39, CIFAR-10 Batch 5:  Loss: 0.33895474672317505
Validation Accuracy: 0.6495999097824097
Epoch 40, CIFAR-10 Batch 1:  Loss: 0.49749863147735596
Validation Accuracy: 0.6503998637199402
Epoch 40, CIFAR-10 Batch 2:  Loss: 0.3643900156021118
Validation Accuracy: 0.6477999091148376
Epoch 40, CIFAR-10 Batch 3:  Loss: 0.24874678254127502
Validation Accuracy: 0.6581999063491821
Epoch 40, CIFAR-10 Batch 4:  Loss: 0.3867996633052826
Validation Accuracy: 0.6581998467445374
Epoch 40, CIFAR-10 Batch 5:  Loss: 0.31977537274360657
Validation Accuracy: 0.6609998941421509
Epoch 41, CIFAR-10 Batch 1:  Loss: 0.5021096467971802
Validation Accuracy: 0.6429998874664307
Epoch 41, CIFAR-10 Batch 2:  Loss: 0.37291523814201355
Validation Accuracy: 0.6533999443054199
Epoch 41, CIFAR-10 Batch 3:  Loss: 0.23333808779716492
Validation Accuracy: 0.6579998731613159
Epoch 41, CIFAR-10 Batch 4:  Loss: 0.39001187682151794
Validation Accuracy: 0.657599925994873
Epoch 41, CIFAR-10 Batch 5:  Loss: 0.3040219247341156
Validation Accuracy: 0.6553999185562134
Epoch 42, CIFAR-10 Batch 1:  Loss: 0.4874580502510071
Validation Accuracy: 0.6481999158859253
Epoch 42, CIFAR-10 Batch 2:  Loss: 0.37568768858909607
Validation Accuracy: 0.6519998908042908
Epoch 42, CIFAR-10 Batch 3:  Loss: 0.23452383279800415
Validation Accuracy: 0.6415998935699463
Epoch 42, CIFAR-10 Batch 4:  Loss: 0.38821613788604736
Validation Accuracy: 0.6577998995780945
Epoch 42, CIFAR-10 Batch 5:  Loss: 0.32819944620132446
Validation Accuracy: 0.6499998569488525
Epoch 43, CIFAR-10 Batch 1:  Loss: 0.5197715163230896
Validation Accuracy: 0.6391998529434204
Epoch 43, CIFAR-10 Batch 2:  Loss: 0.3618895411491394
Validation Accuracy: 0.6543999314308167
Epoch 43, CIFAR-10 Batch 3:  Loss: 0.22709479928016663
Validation Accuracy: 0.6595999002456665
Epoch 43, CIFAR-10 Batch 4:  Loss: 0.3705103099346161
Validation Accuracy: 0.6663998961448669
Epoch 43, CIFAR-10 Batch 5:  Loss: 0.2979734539985657
Validation Accuracy: 0.6535999178886414
Epoch 44, CIFAR-10 Batch 1:  Loss: 0.49991461634635925
Validation Accuracy: 0.6531999111175537
Epoch 44, CIFAR-10 Batch 2:  Loss: 0.33553946018218994
Validation Accuracy: 0.6573998928070068
Epoch 44, CIFAR-10 Batch 3:  Loss: 0.21202747523784637
Validation Accuracy: 0.6653998494148254
Epoch 44, CIFAR-10 Batch 4:  Loss: 0.3715304732322693
Validation Accuracy: 0.6615999341011047
Epoch 44, CIFAR-10 Batch 5:  Loss: 0.2981499433517456
Validation Accuracy: 0.6583998799324036
Epoch 45, CIFAR-10 Batch 1:  Loss: 0.4830361008644104
Validation Accuracy: 0.642599880695343
Epoch 45, CIFAR-10 Batch 2:  Loss: 0.3148803412914276
Validation Accuracy: 0.6543998718261719
Epoch 45, CIFAR-10 Batch 3:  Loss: 0.22578972578048706
Validation Accuracy: 0.6563999056816101
Epoch 45, CIFAR-10 Batch 4:  Loss: 0.40322011709213257
Validation Accuracy: 0.6515998840332031
Epoch 45, CIFAR-10 Batch 5:  Loss: 0.31900808215141296
Validation Accuracy: 0.6527999043464661
Epoch 46, CIFAR-10 Batch 1:  Loss: 0.4502914547920227
Validation Accuracy: 0.6539998650550842
Epoch 46, CIFAR-10 Batch 2:  Loss: 0.32677319645881653
Validation Accuracy: 0.6571998000144958
Epoch 46, CIFAR-10 Batch 3:  Loss: 0.22679394483566284
Validation Accuracy: 0.6575998663902283
Epoch 46, CIFAR-10 Batch 4:  Loss: 0.36127907037734985
Validation Accuracy: 0.6679998636245728
Epoch 46, CIFAR-10 Batch 5:  Loss: 0.30114516615867615
Validation Accuracy: 0.6617998480796814
Epoch 47, CIFAR-10 Batch 1:  Loss: 0.4360753297805786
Validation Accuracy: 0.6541999578475952
Epoch 47, CIFAR-10 Batch 2:  Loss: 0.3511624336242676
Validation Accuracy: 0.6607998609542847
Epoch 47, CIFAR-10 Batch 3:  Loss: 0.20950940251350403
Validation Accuracy: 0.6691999435424805
Epoch 47, CIFAR-10 Batch 4:  Loss: 0.35590827465057373
Validation Accuracy: 0.6635998487472534
Epoch 47, CIFAR-10 Batch 5:  Loss: 0.27611738443374634
Validation Accuracy: 0.6583998799324036
Epoch 48, CIFAR-10 Batch 1:  Loss: 0.45979201793670654
Validation Accuracy: 0.6523998975753784
Epoch 48, CIFAR-10 Batch 2:  Loss: 0.3038058876991272
Validation Accuracy: 0.6607999205589294
Epoch 48, CIFAR-10 Batch 3:  Loss: 0.21346788108348846
Validation Accuracy: 0.6635999083518982
Epoch 48, CIFAR-10 Batch 4:  Loss: 0.35509008169174194
Validation Accuracy: 0.6645998954772949
Epoch 48, CIFAR-10 Batch 5:  Loss: 0.2953076958656311
Validation Accuracy: 0.6559998989105225
Epoch 49, CIFAR-10 Batch 1:  Loss: 0.41801658272743225
Validation Accuracy: 0.6601998805999756
Epoch 49, CIFAR-10 Batch 2:  Loss: 0.3135547637939453
Validation Accuracy: 0.6591999530792236
Epoch 49, CIFAR-10 Batch 3:  Loss: 0.21626152098178864
Validation Accuracy: 0.6643999218940735
Epoch 49, CIFAR-10 Batch 4:  Loss: 0.3135557770729065
Validation Accuracy: 0.668799877166748
Epoch 49, CIFAR-10 Batch 5:  Loss: 0.28541791439056396
Validation Accuracy: 0.6635998487472534
Epoch 50, CIFAR-10 Batch 1:  Loss: 0.404996782541275
Validation Accuracy: 0.6667999029159546
Epoch 50, CIFAR-10 Batch 2:  Loss: 0.3203143775463104
Validation Accuracy: 0.6587998867034912
Epoch 50, CIFAR-10 Batch 3:  Loss: 0.21111318469047546
Validation Accuracy: 0.6625999212265015
Epoch 50, CIFAR-10 Batch 4:  Loss: 0.34025391936302185
Validation Accuracy: 0.6629998683929443
Epoch 50, CIFAR-10 Batch 5:  Loss: 0.2742578983306885
Validation Accuracy: 0.665199875831604
Epoch 51, CIFAR-10 Batch 1:  Loss: 0.41268450021743774
Validation Accuracy: 0.6677998900413513
Epoch 51, CIFAR-10 Batch 2:  Loss: 0.3148064613342285
Validation Accuracy: 0.6601998805999756
Epoch 51, CIFAR-10 Batch 3:  Loss: 0.19486704468727112
Validation Accuracy: 0.6661999225616455
Epoch 51, CIFAR-10 Batch 4:  Loss: 0.30908215045928955
Validation Accuracy: 0.6697999238967896
Epoch 51, CIFAR-10 Batch 5:  Loss: 0.2831408679485321
Validation Accuracy: 0.6601998805999756
Epoch 52, CIFAR-10 Batch 1:  Loss: 0.40715086460113525
Validation Accuracy: 0.6633998155593872
Epoch 52, CIFAR-10 Batch 2:  Loss: 0.2955167889595032
Validation Accuracy: 0.6621999144554138
Epoch 52, CIFAR-10 Batch 3:  Loss: 0.19782938063144684
Validation Accuracy: 0.6727998852729797
Epoch 52, CIFAR-10 Batch 4:  Loss: 0.3015593886375427
Validation Accuracy: 0.6689999103546143
Epoch 52, CIFAR-10 Batch 5:  Loss: 0.26455408334732056
Validation Accuracy: 0.6619998812675476
Epoch 53, CIFAR-10 Batch 1:  Loss: 0.4075208902359009
Validation Accuracy: 0.6679998636245728
Epoch 53, CIFAR-10 Batch 2:  Loss: 0.2854780852794647
Validation Accuracy: 0.6675999164581299
Epoch 53, CIFAR-10 Batch 3:  Loss: 0.208605095744133
Validation Accuracy: 0.6683999300003052
Epoch 53, CIFAR-10 Batch 4:  Loss: 0.3071604371070862
Validation Accuracy: 0.6661999225616455
Epoch 53, CIFAR-10 Batch 5:  Loss: 0.26867592334747314
Validation Accuracy: 0.6579999327659607
Epoch 54, CIFAR-10 Batch 1:  Loss: 0.38148701190948486
Validation Accuracy: 0.6547998785972595
Epoch 54, CIFAR-10 Batch 2:  Loss: 0.26000887155532837
Validation Accuracy: 0.6649999022483826
Epoch 54, CIFAR-10 Batch 3:  Loss: 0.20421721041202545
Validation Accuracy: 0.6709999442100525
Epoch 54, CIFAR-10 Batch 4:  Loss: 0.30626803636550903
Validation Accuracy: 0.6675999164581299
Epoch 54, CIFAR-10 Batch 5:  Loss: 0.2606838345527649
Validation Accuracy: 0.6553999185562134
Epoch 55, CIFAR-10 Batch 1:  Loss: 0.3851734697818756
Validation Accuracy: 0.6607998609542847
Epoch 55, CIFAR-10 Batch 2:  Loss: 0.2913861870765686
Validation Accuracy: 0.66159987449646
Epoch 55, CIFAR-10 Batch 3:  Loss: 0.20283013582229614
Validation Accuracy: 0.6659998893737793
Epoch 55, CIFAR-10 Batch 4:  Loss: 0.28033125400543213
Validation Accuracy: 0.6693999171257019
Epoch 55, CIFAR-10 Batch 5:  Loss: 0.2645723819732666
Validation Accuracy: 0.6489998698234558
Epoch 56, CIFAR-10 Batch 1:  Loss: 0.37492018938064575
Validation Accuracy: 0.6617998480796814
Epoch 56, CIFAR-10 Batch 2:  Loss: 0.2587917447090149
Validation Accuracy: 0.6703998446464539
Epoch 56, CIFAR-10 Batch 3:  Loss: 0.20381230115890503
Validation Accuracy: 0.6697999238967896
Epoch 56, CIFAR-10 Batch 4:  Loss: 0.27326658368110657
Validation Accuracy: 0.6761998534202576
Epoch 56, CIFAR-10 Batch 5:  Loss: 0.24407152831554413
Validation Accuracy: 0.6661998629570007
Epoch 57, CIFAR-10 Batch 1:  Loss: 0.3789929747581482
Validation Accuracy: 0.6649999022483826
Epoch 57, CIFAR-10 Batch 2:  Loss: 0.25149789452552795
Validation Accuracy: 0.6725998520851135
Epoch 57, CIFAR-10 Batch 3:  Loss: 0.1887582689523697
Validation Accuracy: 0.6725999116897583
Epoch 57, CIFAR-10 Batch 4:  Loss: 0.2656835913658142
Validation Accuracy: 0.6721998453140259
Epoch 57, CIFAR-10 Batch 5:  Loss: 0.23364564776420593
Validation Accuracy: 0.6625998616218567
Epoch 58, CIFAR-10 Batch 1:  Loss: 0.37848836183547974
Validation Accuracy: 0.6725999116897583
Epoch 58, CIFAR-10 Batch 2:  Loss: 0.2566867470741272
Validation Accuracy: 0.6635998487472534
Epoch 58, CIFAR-10 Batch 3:  Loss: 0.18623892962932587
Validation Accuracy: 0.6685998439788818
Epoch 58, CIFAR-10 Batch 4:  Loss: 0.2835032343864441
Validation Accuracy: 0.668799877166748
Epoch 58, CIFAR-10 Batch 5:  Loss: 0.22174669802188873
Validation Accuracy: 0.6653998494148254
Epoch 59, CIFAR-10 Batch 1:  Loss: 0.3839302659034729
Validation Accuracy: 0.6741998791694641
Epoch 59, CIFAR-10 Batch 2:  Loss: 0.25174593925476074
Validation Accuracy: 0.6631999015808105
Epoch 59, CIFAR-10 Batch 3:  Loss: 0.1853119134902954
Validation Accuracy: 0.6747998595237732
Epoch 59, CIFAR-10 Batch 4:  Loss: 0.2610483765602112
Validation Accuracy: 0.6799998879432678
Epoch 59, CIFAR-10 Batch 5:  Loss: 0.21884450316429138
Validation Accuracy: 0.6505998969078064
Epoch 60, CIFAR-10 Batch 1:  Loss: 0.35686933994293213
Validation Accuracy: 0.6739999055862427
Epoch 60, CIFAR-10 Batch 2:  Loss: 0.24940693378448486
Validation Accuracy: 0.6667999029159546
Epoch 60, CIFAR-10 Batch 3:  Loss: 0.1733761429786682
Validation Accuracy: 0.6725999116897583
Epoch 60, CIFAR-10 Batch 4:  Loss: 0.2735709249973297
Validation Accuracy: 0.6727998852729797
Epoch 60, CIFAR-10 Batch 5:  Loss: 0.21531061828136444
Validation Accuracy: 0.6811997890472412
Epoch 61, CIFAR-10 Batch 1:  Loss: 0.33016520738601685
Validation Accuracy: 0.6621999144554138
Epoch 61, CIFAR-10 Batch 2:  Loss: 0.24481189250946045
Validation Accuracy: 0.6721998453140259
Epoch 61, CIFAR-10 Batch 3:  Loss: 0.19064143300056458
Validation Accuracy: 0.6761998534202576
Epoch 61, CIFAR-10 Batch 4:  Loss: 0.29592180252075195
Validation Accuracy: 0.6751998662948608
Epoch 61, CIFAR-10 Batch 5:  Loss: 0.2379412055015564
Validation Accuracy: 0.6557998657226562
Epoch 62, CIFAR-10 Batch 1:  Loss: 0.3261808454990387
Validation Accuracy: 0.6667999029159546
Epoch 62, CIFAR-10 Batch 2:  Loss: 0.23822683095932007
Validation Accuracy: 0.6691998839378357
Epoch 62, CIFAR-10 Batch 3:  Loss: 0.1809990257024765
Validation Accuracy: 0.6729998588562012
Epoch 62, CIFAR-10 Batch 4:  Loss: 0.22976568341255188
Validation Accuracy: 0.6757999062538147
Epoch 62, CIFAR-10 Batch 5:  Loss: 0.21085895597934723
Validation Accuracy: 0.6601998805999756
Epoch 63, CIFAR-10 Batch 1:  Loss: 0.34260109066963196
Validation Accuracy: 0.6697998642921448
Epoch 63, CIFAR-10 Batch 2:  Loss: 0.24308007955551147
Validation Accuracy: 0.6761999130249023
Epoch 63, CIFAR-10 Batch 3:  Loss: 0.16163599491119385
Validation Accuracy: 0.6775998473167419
Epoch 63, CIFAR-10 Batch 4:  Loss: 0.24550211429595947
Validation Accuracy: 0.6769998669624329
Epoch 63, CIFAR-10 Batch 5:  Loss: 0.23147347569465637
Validation Accuracy: 0.6531999111175537
Epoch 64, CIFAR-10 Batch 1:  Loss: 0.3323710262775421
Validation Accuracy: 0.6659998893737793
Epoch 64, CIFAR-10 Batch 2:  Loss: 0.24048739671707153
Validation Accuracy: 0.6647999286651611
Epoch 64, CIFAR-10 Batch 3:  Loss: 0.1644536256790161
Validation Accuracy: 0.6755999326705933
Epoch 64, CIFAR-10 Batch 4:  Loss: 0.23613819479942322
Validation Accuracy: 0.6773999333381653
Epoch 64, CIFAR-10 Batch 5:  Loss: 0.23462675511837006
Validation Accuracy: 0.6647998690605164
Epoch 65, CIFAR-10 Batch 1:  Loss: 0.3080148696899414
Validation Accuracy: 0.6679998636245728
Epoch 65, CIFAR-10 Batch 2:  Loss: 0.2150614708662033
Validation Accuracy: 0.6657998561859131
Epoch 65, CIFAR-10 Batch 3:  Loss: 0.17388251423835754
Validation Accuracy: 0.6807999014854431
Epoch 65, CIFAR-10 Batch 4:  Loss: 0.27977919578552246
Validation Accuracy: 0.6737998723983765
Epoch 65, CIFAR-10 Batch 5:  Loss: 0.22085757553577423
Validation Accuracy: 0.6669999361038208
Epoch 66, CIFAR-10 Batch 1:  Loss: 0.2987882196903229
Validation Accuracy: 0.6659998893737793
Epoch 66, CIFAR-10 Batch 2:  Loss: 0.2284328043460846
Validation Accuracy: 0.6733998656272888
Epoch 66, CIFAR-10 Batch 3:  Loss: 0.14568637311458588
Validation Accuracy: 0.6767998933792114
Epoch 66, CIFAR-10 Batch 4:  Loss: 0.24028794467449188
Validation Accuracy: 0.6753999590873718
Epoch 66, CIFAR-10 Batch 5:  Loss: 0.20090705156326294
Validation Accuracy: 0.6671999096870422
Epoch 67, CIFAR-10 Batch 1:  Loss: 0.31957435607910156
Validation Accuracy: 0.6771999001502991
Epoch 67, CIFAR-10 Batch 2:  Loss: 0.23325882852077484
Validation Accuracy: 0.6761997938156128
Epoch 67, CIFAR-10 Batch 3:  Loss: 0.16996799409389496
Validation Accuracy: 0.6773998737335205
Epoch 67, CIFAR-10 Batch 4:  Loss: 0.229888454079628
Validation Accuracy: 0.6717998385429382
Epoch 67, CIFAR-10 Batch 5:  Loss: 0.18584579229354858
Validation Accuracy: 0.6641998887062073
Epoch 68, CIFAR-10 Batch 1:  Loss: 0.3120509088039398
Validation Accuracy: 0.6735998392105103
Epoch 68, CIFAR-10 Batch 2:  Loss: 0.24065367877483368
Validation Accuracy: 0.6751998662948608
Epoch 68, CIFAR-10 Batch 3:  Loss: 0.15568341314792633
Validation Accuracy: 0.6707999110221863
Epoch 68, CIFAR-10 Batch 4:  Loss: 0.26769059896469116
Validation Accuracy: 0.6717998385429382
Epoch 68, CIFAR-10 Batch 5:  Loss: 0.22120794653892517
Validation Accuracy: 0.6551998853683472
Epoch 69, CIFAR-10 Batch 1:  Loss: 0.3027576804161072
Validation Accuracy: 0.6635999083518982
Epoch 69, CIFAR-10 Batch 2:  Loss: 0.22596099972724915
Validation Accuracy: 0.6663999557495117
Epoch 69, CIFAR-10 Batch 3:  Loss: 0.16494974493980408
Validation Accuracy: 0.6765998601913452
Epoch 69, CIFAR-10 Batch 4:  Loss: 0.23053967952728271
Validation Accuracy: 0.6773998737335205
Epoch 69, CIFAR-10 Batch 5:  Loss: 0.1981363594532013
Validation Accuracy: 0.6713998317718506
Epoch 70, CIFAR-10 Batch 1:  Loss: 0.28627318143844604
Validation Accuracy: 0.6715999245643616
Epoch 70, CIFAR-10 Batch 2:  Loss: 0.22339582443237305
Validation Accuracy: 0.668799877166748
Epoch 70, CIFAR-10 Batch 3:  Loss: 0.14589321613311768
Validation Accuracy: 0.6801998615264893
Epoch 70, CIFAR-10 Batch 4:  Loss: 0.22388076782226562
Validation Accuracy: 0.6749999523162842
Epoch 70, CIFAR-10 Batch 5:  Loss: 0.20397309958934784
Validation Accuracy: 0.6511999368667603
Epoch 71, CIFAR-10 Batch 1:  Loss: 0.3025345206260681
Validation Accuracy: 0.6679998636245728
Epoch 71, CIFAR-10 Batch 2:  Loss: 0.21026787161827087
Validation Accuracy: 0.6663999557495117
Epoch 71, CIFAR-10 Batch 3:  Loss: 0.15017588436603546
Validation Accuracy: 0.6771998405456543
Epoch 71, CIFAR-10 Batch 4:  Loss: 0.20205625891685486
Validation Accuracy: 0.6835998892784119
Epoch 71, CIFAR-10 Batch 5:  Loss: 0.1758928745985031
Validation Accuracy: 0.6689999103546143
Epoch 72, CIFAR-10 Batch 1:  Loss: 0.284553587436676
Validation Accuracy: 0.674799919128418
Epoch 72, CIFAR-10 Batch 2:  Loss: 0.19996102154254913
Validation Accuracy: 0.6671999096870422
Epoch 72, CIFAR-10 Batch 3:  Loss: 0.14800158143043518
Validation Accuracy: 0.6715999841690063
Epoch 72, CIFAR-10 Batch 4:  Loss: 0.21123360097408295
Validation Accuracy: 0.6719998717308044
Epoch 72, CIFAR-10 Batch 5:  Loss: 0.19139054417610168
Validation Accuracy: 0.6705998778343201
Epoch 73, CIFAR-10 Batch 1:  Loss: 0.28435641527175903
Validation Accuracy: 0.6627998352050781
Epoch 73, CIFAR-10 Batch 2:  Loss: 0.221767857670784
Validation Accuracy: 0.6571998596191406
Epoch 73, CIFAR-10 Batch 3:  Loss: 0.13869063556194305
Validation Accuracy: 0.6761999130249023
Epoch 73, CIFAR-10 Batch 4:  Loss: 0.2218361645936966
Validation Accuracy: 0.6789998412132263
Epoch 73, CIFAR-10 Batch 5:  Loss: 0.20326675474643707
Validation Accuracy: 0.6625999212265015
Epoch 74, CIFAR-10 Batch 1:  Loss: 0.32103562355041504
Validation Accuracy: 0.6603997945785522
Epoch 74, CIFAR-10 Batch 2:  Loss: 0.2218325138092041
Validation Accuracy: 0.669999897480011
Epoch 74, CIFAR-10 Batch 3:  Loss: 0.13809895515441895
Validation Accuracy: 0.6791998744010925
Epoch 74, CIFAR-10 Batch 4:  Loss: 0.21742142736911774
Validation Accuracy: 0.6779999136924744
Epoch 74, CIFAR-10 Batch 5:  Loss: 0.1937088668346405
Validation Accuracy: 0.666999876499176
Epoch 75, CIFAR-10 Batch 1:  Loss: 0.3016940653324127
Validation Accuracy: 0.6709998846054077
Epoch 75, CIFAR-10 Batch 2:  Loss: 0.2074679136276245
Validation Accuracy: 0.6667999625205994
Epoch 75, CIFAR-10 Batch 3:  Loss: 0.1330987811088562
Validation Accuracy: 0.6703999042510986
Epoch 75, CIFAR-10 Batch 4:  Loss: 0.21405759453773499
Validation Accuracy: 0.6709998846054077
Epoch 75, CIFAR-10 Batch 5:  Loss: 0.19126063585281372
Validation Accuracy: 0.6693998575210571
Epoch 76, CIFAR-10 Batch 1:  Loss: 0.28848838806152344
Validation Accuracy: 0.6719998121261597
Epoch 76, CIFAR-10 Batch 2:  Loss: 0.22632180154323578
Validation Accuracy: 0.6603999137878418
Epoch 76, CIFAR-10 Batch 3:  Loss: 0.14441931247711182
Validation Accuracy: 0.6737998723983765
Epoch 76, CIFAR-10 Batch 4:  Loss: 0.22183352708816528
Validation Accuracy: 0.6721998453140259
Epoch 76, CIFAR-10 Batch 5:  Loss: 0.2045401632785797
Validation Accuracy: 0.6691998243331909
Epoch 77, CIFAR-10 Batch 1:  Loss: 0.2679436504840851
Validation Accuracy: 0.6695998311042786
Epoch 77, CIFAR-10 Batch 2:  Loss: 0.18259365856647491
Validation Accuracy: 0.6679998636245728
Epoch 77, CIFAR-10 Batch 3:  Loss: 0.1314261555671692
Validation Accuracy: 0.6763998866081238
Epoch 77, CIFAR-10 Batch 4:  Loss: 0.240557000041008
Validation Accuracy: 0.6745998859405518
Epoch 77, CIFAR-10 Batch 5:  Loss: 0.1709538847208023
Validation Accuracy: 0.671799898147583
Epoch 78, CIFAR-10 Batch 1:  Loss: 0.2760853171348572
Validation Accuracy: 0.6691998839378357
Epoch 78, CIFAR-10 Batch 2:  Loss: 0.25482094287872314
Validation Accuracy: 0.6649999022483826
Epoch 78, CIFAR-10 Batch 3:  Loss: 0.13727450370788574
Validation Accuracy: 0.6743998527526855
Epoch 78, CIFAR-10 Batch 4:  Loss: 0.24047884345054626
Validation Accuracy: 0.6785998940467834
Epoch 78, CIFAR-10 Batch 5:  Loss: 0.17267368733882904
Validation Accuracy: 0.6703999042510986
Epoch 79, CIFAR-10 Batch 1:  Loss: 0.2639995515346527
Validation Accuracy: 0.6651999354362488
Epoch 79, CIFAR-10 Batch 2:  Loss: 0.22027228772640228
Validation Accuracy: 0.6673998832702637
Epoch 79, CIFAR-10 Batch 3:  Loss: 0.1387423574924469
Validation Accuracy: 0.6821998357772827
Epoch 79, CIFAR-10 Batch 4:  Loss: 0.20747984945774078
Validation Accuracy: 0.678399920463562
Epoch 79, CIFAR-10 Batch 5:  Loss: 0.16920414566993713
Validation Accuracy: 0.6717998385429382
Epoch 80, CIFAR-10 Batch 1:  Loss: 0.26470592617988586
Validation Accuracy: 0.6705998778343201
Epoch 80, CIFAR-10 Batch 2:  Loss: 0.22701528668403625
Validation Accuracy: 0.6641998887062073
Epoch 80, CIFAR-10 Batch 3:  Loss: 0.14302952587604523
Validation Accuracy: 0.6817998886108398
Epoch 80, CIFAR-10 Batch 4:  Loss: 0.2327347695827484
Validation Accuracy: 0.6789998412132263
Epoch 80, CIFAR-10 Batch 5:  Loss: 0.1653161495923996
Validation Accuracy: 0.6775999069213867
Epoch 81, CIFAR-10 Batch 1:  Loss: 0.2613367736339569
Validation Accuracy: 0.6697999238967896
Epoch 81, CIFAR-10 Batch 2:  Loss: 0.23083637654781342
Validation Accuracy: 0.674599826335907
Epoch 81, CIFAR-10 Batch 3:  Loss: 0.14487935602664948
Validation Accuracy: 0.6761999130249023
Epoch 81, CIFAR-10 Batch 4:  Loss: 0.22825342416763306
Validation Accuracy: 0.6797998547554016
Epoch 81, CIFAR-10 Batch 5:  Loss: 0.17824842035770416
Validation Accuracy: 0.6709998846054077
Epoch 82, CIFAR-10 Batch 1:  Loss: 0.25048553943634033
Validation Accuracy: 0.6739999055862427
Epoch 82, CIFAR-10 Batch 2:  Loss: 0.25016656517982483
Validation Accuracy: 0.6715998649597168
Epoch 82, CIFAR-10 Batch 3:  Loss: 0.1294407844543457
Validation Accuracy: 0.6759999394416809
Epoch 82, CIFAR-10 Batch 4:  Loss: 0.21693947911262512
Validation Accuracy: 0.6771999001502991
Epoch 82, CIFAR-10 Batch 5:  Loss: 0.19002552330493927
Validation Accuracy: 0.6743999123573303
Epoch 83, CIFAR-10 Batch 1:  Loss: 0.25255438685417175
Validation Accuracy: 0.668799877166748
Epoch 83, CIFAR-10 Batch 2:  Loss: 0.25611957907676697
Validation Accuracy: 0.6623998880386353
Epoch 83, CIFAR-10 Batch 3:  Loss: 0.13255758583545685
Validation Accuracy: 0.6723998188972473
Epoch 83, CIFAR-10 Batch 4:  Loss: 0.22326655685901642
Validation Accuracy: 0.6749998927116394
Epoch 83, CIFAR-10 Batch 5:  Loss: 0.1937580704689026
Validation Accuracy: 0.6675999164581299
Epoch 84, CIFAR-10 Batch 1:  Loss: 0.25574201345443726
Validation Accuracy: 0.6669999361038208
Epoch 84, CIFAR-10 Batch 2:  Loss: 0.23119671642780304
Validation Accuracy: 0.671799898147583
Epoch 84, CIFAR-10 Batch 3:  Loss: 0.14914080500602722
Validation Accuracy: 0.6797998547554016
Epoch 84, CIFAR-10 Batch 4:  Loss: 0.21834909915924072
Validation Accuracy: 0.6751998662948608
Epoch 84, CIFAR-10 Batch 5:  Loss: 0.1855824589729309
Validation Accuracy: 0.6705999374389648
Epoch 85, CIFAR-10 Batch 1:  Loss: 0.2495529055595398
Validation Accuracy: 0.6705998778343201
Epoch 85, CIFAR-10 Batch 2:  Loss: 0.22854547202587128
Validation Accuracy: 0.665199875831604
Epoch 85, CIFAR-10 Batch 3:  Loss: 0.12460757791996002
Validation Accuracy: 0.6741998791694641
Epoch 85, CIFAR-10 Batch 4:  Loss: 0.20964163541793823
Validation Accuracy: 0.6779999136924744
Epoch 85, CIFAR-10 Batch 5:  Loss: 0.17292341589927673
Validation Accuracy: 0.6707998514175415
Epoch 86, CIFAR-10 Batch 1:  Loss: 0.25346821546554565
Validation Accuracy: 0.6735998392105103
Epoch 86, CIFAR-10 Batch 2:  Loss: 0.21385802328586578
Validation Accuracy: 0.6709998846054077
Epoch 86, CIFAR-10 Batch 3:  Loss: 0.1314273327589035
Validation Accuracy: 0.6797997951507568
Epoch 86, CIFAR-10 Batch 4:  Loss: 0.18834124505519867
Validation Accuracy: 0.6821998953819275
Epoch 86, CIFAR-10 Batch 5:  Loss: 0.17082226276397705
Validation Accuracy: 0.6687999367713928
Epoch 87, CIFAR-10 Batch 1:  Loss: 0.2428545206785202
Validation Accuracy: 0.6797997951507568
Epoch 87, CIFAR-10 Batch 2:  Loss: 0.23697468638420105
Validation Accuracy: 0.6779999136924744
Epoch 87, CIFAR-10 Batch 3:  Loss: 0.12194352596998215
Validation Accuracy: 0.6745998859405518
Epoch 87, CIFAR-10 Batch 4:  Loss: 0.1766243577003479
Validation Accuracy: 0.6783998012542725
Epoch 87, CIFAR-10 Batch 5:  Loss: 0.16270311176776886
Validation Accuracy: 0.6739999055862427
Epoch 88, CIFAR-10 Batch 1:  Loss: 0.2433518022298813
Validation Accuracy: 0.6767998933792114
Epoch 88, CIFAR-10 Batch 2:  Loss: 0.23571299016475677
Validation Accuracy: 0.6643998622894287
Epoch 88, CIFAR-10 Batch 3:  Loss: 0.12210287153720856
Validation Accuracy: 0.6767998337745667
Epoch 88, CIFAR-10 Batch 4:  Loss: 0.20213589072227478
Validation Accuracy: 0.6765998601913452
Epoch 88, CIFAR-10 Batch 5:  Loss: 0.159438818693161
Validation Accuracy: 0.6705998182296753
Epoch 89, CIFAR-10 Batch 1:  Loss: 0.2338835597038269
Validation Accuracy: 0.6719998717308044
Epoch 89, CIFAR-10 Batch 2:  Loss: 0.20881518721580505
Validation Accuracy: 0.6697999238967896
Epoch 89, CIFAR-10 Batch 3:  Loss: 0.1287546157836914
Validation Accuracy: 0.673399806022644
Epoch 89, CIFAR-10 Batch 4:  Loss: 0.18869724869728088
Validation Accuracy: 0.6787998676300049
Epoch 89, CIFAR-10 Batch 5:  Loss: 0.16560310125350952
Validation Accuracy: 0.6691999435424805
Epoch 90, CIFAR-10 Batch 1:  Loss: 0.24074576795101166
Validation Accuracy: 0.6673998832702637
Epoch 90, CIFAR-10 Batch 2:  Loss: 0.20704540610313416
Validation Accuracy: 0.6727999448776245
Epoch 90, CIFAR-10 Batch 3:  Loss: 0.12490062415599823
Validation Accuracy: 0.6845998167991638
Epoch 90, CIFAR-10 Batch 4:  Loss: 0.18117626011371613
Validation Accuracy: 0.672999918460846
Epoch 90, CIFAR-10 Batch 5:  Loss: 0.1505563110113144
Validation Accuracy: 0.6781998872756958
Epoch 91, CIFAR-10 Batch 1:  Loss: 0.25553444027900696
Validation Accuracy: 0.6691998839378357
Epoch 91, CIFAR-10 Batch 2:  Loss: 0.19858744740486145
Validation Accuracy: 0.6673998832702637
Epoch 91, CIFAR-10 Batch 3:  Loss: 0.1127004474401474
Validation Accuracy: 0.6749999523162842
Epoch 91, CIFAR-10 Batch 4:  Loss: 0.17541463673114777
Validation Accuracy: 0.6849998235702515
Epoch 91, CIFAR-10 Batch 5:  Loss: 0.15558406710624695
Validation Accuracy: 0.6723998785018921
Epoch 92, CIFAR-10 Batch 1:  Loss: 0.22733935713768005
Validation Accuracy: 0.6749998927116394
Epoch 92, CIFAR-10 Batch 2:  Loss: 0.21483199298381805
Validation Accuracy: 0.6597998738288879
Epoch 92, CIFAR-10 Batch 3:  Loss: 0.12437789887189865
Validation Accuracy: 0.6729997992515564
Epoch 92, CIFAR-10 Batch 4:  Loss: 0.20251315832138062
Validation Accuracy: 0.6751998662948608
Epoch 92, CIFAR-10 Batch 5:  Loss: 0.1676541417837143
Validation Accuracy: 0.6717998385429382
Epoch 93, CIFAR-10 Batch 1:  Loss: 0.21977156400680542
Validation Accuracy: 0.6727998852729797
Epoch 93, CIFAR-10 Batch 2:  Loss: 0.20385566353797913
Validation Accuracy: 0.6665998697280884
Epoch 93, CIFAR-10 Batch 3:  Loss: 0.13768760859966278
Validation Accuracy: 0.6741998791694641
Epoch 93, CIFAR-10 Batch 4:  Loss: 0.17293351888656616
Validation Accuracy: 0.6821998357772827
Epoch 93, CIFAR-10 Batch 5:  Loss: 0.15944495797157288
Validation Accuracy: 0.671799898147583
Epoch 94, CIFAR-10 Batch 1:  Loss: 0.23111872375011444
Validation Accuracy: 0.6745998859405518
Epoch 94, CIFAR-10 Batch 2:  Loss: 0.19350118935108185
Validation Accuracy: 0.6697998642921448
Epoch 94, CIFAR-10 Batch 3:  Loss: 0.13295923173427582
Validation Accuracy: 0.6669999361038208
Epoch 94, CIFAR-10 Batch 4:  Loss: 0.1845422238111496
Validation Accuracy: 0.678399920463562
Epoch 94, CIFAR-10 Batch 5:  Loss: 0.14043214917182922
Validation Accuracy: 0.6739998459815979
Epoch 95, CIFAR-10 Batch 1:  Loss: 0.19993340969085693
Validation Accuracy: 0.6749998927116394
Epoch 95, CIFAR-10 Batch 2:  Loss: 0.18956080079078674
Validation Accuracy: 0.675399899482727
Epoch 95, CIFAR-10 Batch 3:  Loss: 0.12863197922706604
Validation Accuracy: 0.6795998811721802
Epoch 95, CIFAR-10 Batch 4:  Loss: 0.17314055562019348
Validation Accuracy: 0.6763999462127686
Epoch 95, CIFAR-10 Batch 5:  Loss: 0.1426345556974411
Validation Accuracy: 0.6773998737335205
Epoch 96, CIFAR-10 Batch 1:  Loss: 0.21913793683052063
Validation Accuracy: 0.6709998846054077
Epoch 96, CIFAR-10 Batch 2:  Loss: 0.19099614024162292
Validation Accuracy: 0.6757998466491699
Epoch 96, CIFAR-10 Batch 3:  Loss: 0.12060028314590454
Validation Accuracy: 0.6767998933792114
Epoch 96, CIFAR-10 Batch 4:  Loss: 0.17891231179237366
Validation Accuracy: 0.6817998886108398
Epoch 96, CIFAR-10 Batch 5:  Loss: 0.14112752676010132
Validation Accuracy: 0.6697998642921448
Epoch 97, CIFAR-10 Batch 1:  Loss: 0.21975237131118774
Validation Accuracy: 0.668799877166748
Epoch 97, CIFAR-10 Batch 2:  Loss: 0.18662655353546143
Validation Accuracy: 0.6671998500823975
Epoch 97, CIFAR-10 Batch 3:  Loss: 0.1169939860701561
Validation Accuracy: 0.6739999651908875
Epoch 97, CIFAR-10 Batch 4:  Loss: 0.15079736709594727
Validation Accuracy: 0.6883999109268188
Epoch 97, CIFAR-10 Batch 5:  Loss: 0.1583212912082672
Validation Accuracy: 0.6767998933792114
Epoch 98, CIFAR-10 Batch 1:  Loss: 0.21347761154174805
Validation Accuracy: 0.6739999055862427
Epoch 98, CIFAR-10 Batch 2:  Loss: 0.2073219120502472
Validation Accuracy: 0.6663998365402222
Epoch 98, CIFAR-10 Batch 3:  Loss: 0.12760764360427856
Validation Accuracy: 0.6737998723983765
Epoch 98, CIFAR-10 Batch 4:  Loss: 0.17566341161727905
Validation Accuracy: 0.6795998811721802
Epoch 98, CIFAR-10 Batch 5:  Loss: 0.14664757251739502
Validation Accuracy: 0.6745998859405518
Epoch 99, CIFAR-10 Batch 1:  Loss: 0.2003284990787506
Validation Accuracy: 0.6657999157905579
Epoch 99, CIFAR-10 Batch 2:  Loss: 0.18858759105205536
Validation Accuracy: 0.674799919128418
Epoch 99, CIFAR-10 Batch 3:  Loss: 0.13152146339416504
Validation Accuracy: 0.6755998730659485
Epoch 99, CIFAR-10 Batch 4:  Loss: 0.1574142575263977
Validation Accuracy: 0.6801998615264893
Epoch 99, CIFAR-10 Batch 5:  Loss: 0.15182048082351685
Validation Accuracy: 0.6683998107910156
Epoch 100, CIFAR-10 Batch 1:  Loss: 0.22650930285453796
Validation Accuracy: 0.668799877166748
Epoch 100, CIFAR-10 Batch 2:  Loss: 0.19564953446388245
Validation Accuracy: 0.6701999306678772
Epoch 100, CIFAR-10 Batch 3:  Loss: 0.12198585271835327
Validation Accuracy: 0.6747998595237732
Epoch 100, CIFAR-10 Batch 4:  Loss: 0.14250633120536804
Validation Accuracy: 0.6819998025894165
Epoch 100, CIFAR-10 Batch 5:  Loss: 0.1378372311592102
Validation Accuracy: 0.6749998331069946

Checkpoint

The model has been saved to disk.
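
For reference, the checkpoint above is written at the end of the training cell with TensorFlow's Saver. A minimal sketch, assuming the open training session sess from that cell and the same save_model_path used in the test cell below:

import tensorflow as tf

# Run inside the training session, after the final epoch (sketch):
save_model_path = './image_classification'
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)  # writes ./image_classification.meta, .index, .data-*

The saved .meta file holds the graph definition, which is why the test cell below can rebuild the network with tf.train.import_meta_graph and then restore the weights by name.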

Test Model

Test your model against the test dataset. This is your final accuracy, and it should be greater than 50%. If it isn't, keep tweaking the model architecture and parameters.


In [39]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.668359375
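
The test cell above leans on helper.batch_features_labels to walk the test set in fixed-size chunks, so the whole set never has to be fed to the session at once. helper.py ships with the project; a plausible minimal implementation, shown purely for reference (the actual helper may differ):

def batch_features_labels(features, labels, batch_size):
    """Yield (features, labels) slices of at most batch_size samples."""
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]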

Why 50-80% Accuracy?

You might be wondering why you can't get a higher accuracy. First things first: 50% isn't bad for a simple CNN, since pure guessing across the ten classes would get you only 10% accuracy. However, you might notice people getting scores well above 80%. That's because we haven't taught you everything there is to know about neural networks yet. We still need to cover a few more techniques; one of them is sketched below.
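
One such technique is data augmentation, which is not part of this project but illustrates the kind of change that helps push accuracy higher. A minimal sketch, assuming batches shaped (N, 32, 32, 3) as produced by the preprocessing above (augment_batch is a hypothetical helper, not part of helper.py):

import numpy as np

def augment_batch(features, labels, flip_prob=0.5):
    # Randomly mirror a subset of images left-to-right (axis 2 is width).
    # CIFAR-10 classes are left-right symmetric, so the labels are unchanged.
    flip = np.random.rand(len(features)) < flip_prob
    augmented = features.copy()
    augmented[flip] = augmented[flip][:, :, ::-1, :]
    return augmented, labels

Calling this on each training batch effectively enlarges the training set for free; other techniques in the same spirit include batch normalization and deeper architectures.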

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
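
If the notebook menu isn't available, the same HTML export can be produced from the command line with nbconvert (assuming Jupyter is installed):

jupyter nbconvert --to html dlnd_image_classification.ipynb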