Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you've learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)


All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of five batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images for the following classes:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [2]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 5:
Image - Min Value: 0 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [3]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # Scale pixel values from [0, 255] down to [0, 1]
    return np.array(x) / 255.0


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed
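
As a quick sanity check (an illustrative snippet, not part of the project code), normalizing a small array of pixel values should map 0 to 0.0 and 255 to 1.0:

sample_pixels = np.array([[[0, 128, 255]]], dtype=np.uint8)
print(normalize(sample_pixels))  # roughly [[[0.    0.502  1.   ]]]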

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value on every call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [4]:
from sklearn import preprocessing

# Fit the encoder once, outside the function, so every call uses the same label-to-vector mapping
encoder = preprocessing.LabelBinarizer()
encoder.fit(list(range(10)))

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    return encoder.transform(x)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed
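
As a quick sanity check (an illustrative snippet, not part of the project code), each label id should map to a length-10 vector with a single 1 in the position of that label:

print(one_hot_encode([0, 3, 9]))
# [[1 0 0 0 0 0 0 0 0 0]
#  [0 0 0 1 0 0 0 0 0 0]
#  [0 0 0 0 0 0 0 0 0 1]]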

Randomize Data

As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
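
If you ever do want to reshuffle a batch yourself, a minimal sketch (assuming features and labels are NumPy arrays of the same length) could look like this:

def shuffle_in_unison(features, labels):
    # Apply the same random permutation to features and labels so pairs stay aligned
    permutation = np.random.permutation(len(features))
    return features[permutation], labels[permutation]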

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
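
For reference, the idea behind the 10% validation split is roughly the following. This is only a simplified sketch that assumes the tail of each batch is held out; it is not necessarily what helper.preprocess_and_save_data does internally:

def split_off_validation(features, labels, validation_fraction=0.1):
    # Hold out the last 10% of a batch for validation (hypothetical illustration)
    n_valid = int(len(features) * validation_fraction)
    train = (normalize(features[:-n_valid]), one_hot_encode(labels[:-n_valid]))
    valid = (normalize(features[-n_valid:]), one_hot_encode(labels[-n_valid:]))
    return train, valid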

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.


In [7]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, shape=[None, *image_shape], name = 'x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, shape=[None, n_classes], name = 'y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [8]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # Cast to int: tf.truncated_normal expects plain integers in the weight shape
    input_depth = int(x_tensor.get_shape()[3])

    # Kernel and stride tuples are (height, width)
    conv_ksize_height, conv_ksize_width = conv_ksize
    conv_stride_height, conv_stride_width = conv_strides
    pool_ksize_height, pool_ksize_width = pool_ksize
    pool_stride_height, pool_stride_width = pool_strides

    # Weight shape: [filter_height, filter_width, in_channels, out_channels]
    weights_shape = [conv_ksize_height, conv_ksize_width, input_depth, conv_num_outputs]
    weights = tf.Variable(tf.truncated_normal(weights_shape, mean=0.0, stddev=0.05, dtype=tf.float32))
    biases = tf.Variable(tf.zeros(conv_num_outputs))

    # Convolution, bias, then a ReLU non-linearity
    layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_stride_height, conv_stride_width, 1], padding='SAME')
    layer = tf.nn.bias_add(layer, biases)
    layer = tf.nn.relu(layer)

    # Max pooling
    pool_shape = [1, pool_ksize_height, pool_ksize_width, 1]
    pool_strides = [1, pool_stride_height, pool_stride_width, 1]
    layer = tf.nn.max_pool(layer, pool_shape, pool_strides, padding='SAME')

    return layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed
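
As a quick shape check (an illustrative snippet; the placeholder here is arbitrary), SAME padding with stride 1 should preserve the spatial dimensions and only change the depth:

tf.reset_default_graph()
sample_input = tf.placeholder(tf.float32, [None, 32, 32, 3])
sample_output = conv2d_maxpool(sample_input, 18, (4, 4), (1, 1), (4, 4), (1, 1))
print(sample_output.get_shape().as_list())  # [None, 32, 32, 18]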

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [9]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # get_shape().as_list() returns plain Python values, e.g. [None, 10, 30, 6]
    shape = x_tensor.get_shape().as_list()
    batch_sz = shape[0] or -1  # use -1 so a dynamic (None) batch dimension is preserved
    height, width, depth = shape[1], shape[2], shape[3]
    return tf.reshape(x_tensor, [batch_sz, height * width * depth])
    # Shortcut alternative: return tf.contrib.layers.flatten(x_tensor)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed
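
A similar shape check for flatten (again only an illustrative snippet) should collapse everything except the batch dimension:

tf.reset_default_graph()
sample_4d = tf.placeholder(tf.float32, [None, 10, 30, 6])
print(flatten(sample_4d).get_shape().as_list())  # [None, 1800]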

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [10]:
def connected(x_tensor, num_outputs):
    '''Support function for the fully_conn and output functions below
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs
    - does not apply any activation
    '''
    num_inputs = x_tensor.get_shape().as_list()[1]
    # A small stddev keeps the initial weights near zero; tf.contrib.layers.fully_connected
    # defaults to a Xavier (Glorot) initializer, which serves a similar purpose
    weights = tf.Variable(tf.truncated_normal((num_inputs, num_outputs), mean=0.0, stddev=0.05))
    bias = tf.Variable(tf.zeros(num_outputs))
    connected_layer = tf.add(tf.matmul(x_tensor, weights), bias)
    return connected_layer

In [11]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    fully_conn_layer = connected(x_tensor, num_outputs)
    fully_conn_layer = tf.nn.relu(fully_conn_layer)
    return fully_conn_layer
#     return tf.nn.relu(tf.contrib.layers.fully_connected(x_tensor, num_outputs))


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [12]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    return connected(x_tensor, num_outputs)
#     return tf.contrib.layers.fully_connected(x_tensor, num_outputs)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [13]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_num_outputs = 18
    conv_ksize = (4, 4)
    conv_strides = (1, 1)
    pool_ksize = (4, 4)
    pool_strides = (1, 1)
    num_outputs = 10  # one output per CIFAR-10 class
    # Possible improvement: experiment with more conv/maxpool layers and different
    # conv_num_outputs, kernel sizes, and strides
    
    network = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    network = tf.nn.dropout(network, keep_prob)
    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    network = flatten(network)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    network = fully_conn(network, 384)
    network = tf.nn.dropout(network, keep_prob)
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    output_network = output(network, num_outputs)
    
    # TODO: return output
    return output_network


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization step. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [14]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
#     Nothing to return; this function only runs the optimization step

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [15]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # Use a keep probability of 1.0 so dropout is disabled while measuring loss and accuracy
    acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    print('Acc: {} Loss: {}'.format(acc, loss))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set it to a common memory-friendly size:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [16]:
# TODO: Tune Parameters
epochs = 75
batch_size = 512
keep_probability = 0.3

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [17]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Acc: 0.2070000171661377 Loss: 2.2606866359710693
Epoch  2, CIFAR-10 Batch 1:  Acc: 0.24139997363090515 Loss: 2.1199159622192383
Epoch  3, CIFAR-10 Batch 1:  Acc: 0.3369999825954437 Loss: 1.976081132888794
Epoch  4, CIFAR-10 Batch 1:  Acc: 0.378199964761734 Loss: 1.832689881324768
Epoch  5, CIFAR-10 Batch 1:  Acc: 0.4275999665260315 Loss: 1.679229497909546
Epoch  6, CIFAR-10 Batch 1:  Acc: 0.4562000036239624 Loss: 1.6051061153411865
Epoch  7, CIFAR-10 Batch 1:  Acc: 0.47099992632865906 Loss: 1.5333713293075562
Epoch  8, CIFAR-10 Batch 1:  Acc: 0.49379992485046387 Loss: 1.4611042737960815
Epoch  9, CIFAR-10 Batch 1:  Acc: 0.5051999688148499 Loss: 1.4166672229766846
Epoch 10, CIFAR-10 Batch 1:  Acc: 0.5061999559402466 Loss: 1.368802547454834
Epoch 11, CIFAR-10 Batch 1:  Acc: 0.514799952507019 Loss: 1.320952296257019
Epoch 12, CIFAR-10 Batch 1:  Acc: 0.5229998826980591 Loss: 1.27889084815979
Epoch 13, CIFAR-10 Batch 1:  Acc: 0.5263999104499817 Loss: 1.2394481897354126
Epoch 14, CIFAR-10 Batch 1:  Acc: 0.525399923324585 Loss: 1.196798324584961
Epoch 15, CIFAR-10 Batch 1:  Acc: 0.5333999395370483 Loss: 1.1623128652572632
Epoch 16, CIFAR-10 Batch 1:  Acc: 0.5433999300003052 Loss: 1.1183048486709595
Epoch 17, CIFAR-10 Batch 1:  Acc: 0.5461999773979187 Loss: 1.083551049232483
Epoch 18, CIFAR-10 Batch 1:  Acc: 0.5535999536514282 Loss: 1.0472323894500732
Epoch 19, CIFAR-10 Batch 1:  Acc: 0.554599940776825 Loss: 1.034951090812683
Epoch 20, CIFAR-10 Batch 1:  Acc: 0.5619999766349792 Loss: 0.9933929443359375
Epoch 21, CIFAR-10 Batch 1:  Acc: 0.5637999176979065 Loss: 0.9780582785606384
Epoch 22, CIFAR-10 Batch 1:  Acc: 0.5673999786376953 Loss: 0.91942298412323
Epoch 23, CIFAR-10 Batch 1:  Acc: 0.5663999319076538 Loss: 0.9002440571784973
Epoch 24, CIFAR-10 Batch 1:  Acc: 0.5681999325752258 Loss: 0.8920129537582397
Epoch 25, CIFAR-10 Batch 1:  Acc: 0.5677999258041382 Loss: 0.8581892848014832
Epoch 26, CIFAR-10 Batch 1:  Acc: 0.5693999528884888 Loss: 0.8330419659614563
Epoch 27, CIFAR-10 Batch 1:  Acc: 0.574199914932251 Loss: 0.8194858431816101
Epoch 28, CIFAR-10 Batch 1:  Acc: 0.5769999027252197 Loss: 0.799200713634491
Epoch 29, CIFAR-10 Batch 1:  Acc: 0.5739998817443848 Loss: 0.7771028876304626
Epoch 30, CIFAR-10 Batch 1:  Acc: 0.5813999176025391 Loss: 0.7738284468650818
Epoch 31, CIFAR-10 Batch 1:  Acc: 0.5841999053955078 Loss: 0.7235175371170044
Epoch 32, CIFAR-10 Batch 1:  Acc: 0.5827999711036682 Loss: 0.718305766582489
Epoch 33, CIFAR-10 Batch 1:  Acc: 0.5913999080657959 Loss: 0.6837224364280701
Epoch 34, CIFAR-10 Batch 1:  Acc: 0.5917999148368835 Loss: 0.6706568002700806
Epoch 35, CIFAR-10 Batch 1:  Acc: 0.578999936580658 Loss: 0.701298713684082
Epoch 36, CIFAR-10 Batch 1:  Acc: 0.5951999425888062 Loss: 0.6310335993766785
Epoch 37, CIFAR-10 Batch 1:  Acc: 0.5889999270439148 Loss: 0.6287098526954651
Epoch 38, CIFAR-10 Batch 1:  Acc: 0.5977998971939087 Loss: 0.6033364534378052
Epoch 39, CIFAR-10 Batch 1:  Acc: 0.5961998701095581 Loss: 0.6043642163276672
Epoch 40, CIFAR-10 Batch 1:  Acc: 0.5949999690055847 Loss: 0.5931424498558044
Epoch 41, CIFAR-10 Batch 1:  Acc: 0.5969998836517334 Loss: 0.5841193795204163
Epoch 42, CIFAR-10 Batch 1:  Acc: 0.600399911403656 Loss: 0.5548740029335022
Epoch 43, CIFAR-10 Batch 1:  Acc: 0.6045998930931091 Loss: 0.5190531015396118
Epoch 44, CIFAR-10 Batch 1:  Acc: 0.6019998788833618 Loss: 0.534369945526123
Epoch 45, CIFAR-10 Batch 1:  Acc: 0.601599931716919 Loss: 0.5098670125007629
Epoch 46, CIFAR-10 Batch 1:  Acc: 0.6019999384880066 Loss: 0.5128952860832214
Epoch 47, CIFAR-10 Batch 1:  Acc: 0.6055999398231506 Loss: 0.48687830567359924
Epoch 48, CIFAR-10 Batch 1:  Acc: 0.6019999384880066 Loss: 0.4740486443042755
Epoch 49, CIFAR-10 Batch 1:  Acc: 0.6095998883247375 Loss: 0.4642544686794281
Epoch 50, CIFAR-10 Batch 1:  Acc: 0.6101998686790466 Loss: 0.4426875114440918
Epoch 51, CIFAR-10 Batch 1:  Acc: 0.6011999249458313 Loss: 0.45465409755706787
Epoch 52, CIFAR-10 Batch 1:  Acc: 0.6155998706817627 Loss: 0.4253270626068115
Epoch 53, CIFAR-10 Batch 1:  Acc: 0.6149998903274536 Loss: 0.40973514318466187
Epoch 54, CIFAR-10 Batch 1:  Acc: 0.614599883556366 Loss: 0.40877988934516907
Epoch 55, CIFAR-10 Batch 1:  Acc: 0.6145999431610107 Loss: 0.40588289499282837
Epoch 56, CIFAR-10 Batch 1:  Acc: 0.610599935054779 Loss: 0.3924029469490051
Epoch 57, CIFAR-10 Batch 1:  Acc: 0.6069998741149902 Loss: 0.3949475884437561
Epoch 58, CIFAR-10 Batch 1:  Acc: 0.6089999079704285 Loss: 0.3694089949131012
Epoch 59, CIFAR-10 Batch 1:  Acc: 0.6205999255180359 Loss: 0.3537289500236511
Epoch 60, CIFAR-10 Batch 1:  Acc: 0.6063998937606812 Loss: 0.3654311001300812
Epoch 61, CIFAR-10 Batch 1:  Acc: 0.6173999309539795 Loss: 0.3289925754070282
Epoch 62, CIFAR-10 Batch 1:  Acc: 0.6061999201774597 Loss: 0.3414945602416992
Epoch 63, CIFAR-10 Batch 1:  Acc: 0.6205998659133911 Loss: 0.3291623592376709
Epoch 64, CIFAR-10 Batch 1:  Acc: 0.6063998937606812 Loss: 0.34183940291404724
Epoch 65, CIFAR-10 Batch 1:  Acc: 0.6147999167442322 Loss: 0.306556761264801
Epoch 66, CIFAR-10 Batch 1:  Acc: 0.6141999363899231 Loss: 0.3003336489200592
Epoch 67, CIFAR-10 Batch 1:  Acc: 0.6139998435974121 Loss: 0.30850690603256226
Epoch 68, CIFAR-10 Batch 1:  Acc: 0.6153998970985413 Loss: 0.3111613392829895
Epoch 69, CIFAR-10 Batch 1:  Acc: 0.6123999357223511 Loss: 0.2917592227458954
Epoch 70, CIFAR-10 Batch 1:  Acc: 0.6175999045372009 Loss: 0.2961059510707855
Epoch 71, CIFAR-10 Batch 1:  Acc: 0.6041998863220215 Loss: 0.2835111618041992
Epoch 72, CIFAR-10 Batch 1:  Acc: 0.6101999282836914 Loss: 0.27770352363586426
Epoch 73, CIFAR-10 Batch 1:  Acc: 0.6131999492645264 Loss: 0.26806098222732544
Epoch 74, CIFAR-10 Batch 1:  Acc: 0.6123999357223511 Loss: 0.2530602514743805
Epoch 75, CIFAR-10 Batch 1:  Acc: 0.6151999235153198 Loss: 0.2624821662902832

Fully Train the Model

Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [18]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Acc: 0.141199991106987 Loss: 2.292141914367676
Epoch  1, CIFAR-10 Batch 2:  Acc: 0.2619999647140503 Loss: 2.220061779022217
Epoch  1, CIFAR-10 Batch 3:  Acc: 0.35359999537467957 Loss: 1.8594279289245605
Epoch  1, CIFAR-10 Batch 4:  Acc: 0.40799999237060547 Loss: 1.6762678623199463
Epoch  1, CIFAR-10 Batch 5:  Acc: 0.4285999834537506 Loss: 1.645232081413269
Epoch  2, CIFAR-10 Batch 1:  Acc: 0.4501999616622925 Loss: 1.6584151983261108
Epoch  2, CIFAR-10 Batch 2:  Acc: 0.4761999547481537 Loss: 1.4891963005065918
Epoch  2, CIFAR-10 Batch 3:  Acc: 0.48259997367858887 Loss: 1.3551785945892334
Epoch  2, CIFAR-10 Batch 4:  Acc: 0.4957999587059021 Loss: 1.3686282634735107
Epoch  2, CIFAR-10 Batch 5:  Acc: 0.5027999877929688 Loss: 1.3785433769226074
Epoch  3, CIFAR-10 Batch 1:  Acc: 0.5179999470710754 Loss: 1.4455411434173584
Epoch  3, CIFAR-10 Batch 2:  Acc: 0.5281999111175537 Loss: 1.314115047454834
Epoch  3, CIFAR-10 Batch 3:  Acc: 0.5217999219894409 Loss: 1.2136805057525635
Epoch  3, CIFAR-10 Batch 4:  Acc: 0.538599967956543 Loss: 1.2235019207000732
Epoch  3, CIFAR-10 Batch 5:  Acc: 0.540399968624115 Loss: 1.2697416543960571
Epoch  4, CIFAR-10 Batch 1:  Acc: 0.5397999286651611 Loss: 1.322235107421875
Epoch  4, CIFAR-10 Batch 2:  Acc: 0.5523999333381653 Loss: 1.233292818069458
Epoch  4, CIFAR-10 Batch 3:  Acc: 0.5403999090194702 Loss: 1.1848864555358887
Epoch  4, CIFAR-10 Batch 4:  Acc: 0.5641999244689941 Loss: 1.1589192152023315
Epoch  4, CIFAR-10 Batch 5:  Acc: 0.5669999122619629 Loss: 1.1920369863510132
Epoch  5, CIFAR-10 Batch 1:  Acc: 0.5645999312400818 Loss: 1.2458763122558594
Epoch  5, CIFAR-10 Batch 2:  Acc: 0.5739999413490295 Loss: 1.1478676795959473
Epoch  5, CIFAR-10 Batch 3:  Acc: 0.5699999332427979 Loss: 1.0948446989059448
Epoch  5, CIFAR-10 Batch 4:  Acc: 0.5837998390197754 Loss: 1.0703402757644653
Epoch  5, CIFAR-10 Batch 5:  Acc: 0.5789998769760132 Loss: 1.1449276208877563
Epoch  6, CIFAR-10 Batch 1:  Acc: 0.5813999176025391 Loss: 1.1949539184570312
Epoch  6, CIFAR-10 Batch 2:  Acc: 0.5743999481201172 Loss: 1.1382073163986206
Epoch  6, CIFAR-10 Batch 3:  Acc: 0.587399959564209 Loss: 1.041297197341919
Epoch  6, CIFAR-10 Batch 4:  Acc: 0.5925999283790588 Loss: 1.0312598943710327
Epoch  6, CIFAR-10 Batch 5:  Acc: 0.5947998762130737 Loss: 1.094668984413147
Epoch  7, CIFAR-10 Batch 1:  Acc: 0.595599889755249 Loss: 1.1366487741470337
Epoch  7, CIFAR-10 Batch 2:  Acc: 0.5863999128341675 Loss: 1.093252182006836
Epoch  7, CIFAR-10 Batch 3:  Acc: 0.5933999419212341 Loss: 0.9995617866516113
Epoch  7, CIFAR-10 Batch 4:  Acc: 0.6061999201774597 Loss: 0.9800075888633728
Epoch  7, CIFAR-10 Batch 5:  Acc: 0.5983999371528625 Loss: 1.0575982332229614
Epoch  8, CIFAR-10 Batch 1:  Acc: 0.6059999465942383 Loss: 1.0891050100326538
Epoch  8, CIFAR-10 Batch 2:  Acc: 0.6013998985290527 Loss: 1.055504560470581
Epoch  8, CIFAR-10 Batch 3:  Acc: 0.6091998815536499 Loss: 0.9518459439277649
Epoch  8, CIFAR-10 Batch 4:  Acc: 0.6163999438285828 Loss: 0.961550772190094
Epoch  8, CIFAR-10 Batch 5:  Acc: 0.6075999140739441 Loss: 1.0070090293884277
Epoch  9, CIFAR-10 Batch 1:  Acc: 0.6117998957633972 Loss: 1.0606740713119507
Epoch  9, CIFAR-10 Batch 2:  Acc: 0.6031998991966248 Loss: 1.0330379009246826
Epoch  9, CIFAR-10 Batch 3:  Acc: 0.619399905204773 Loss: 0.9006326198577881
Epoch  9, CIFAR-10 Batch 4:  Acc: 0.6219999194145203 Loss: 0.9324452877044678
Epoch  9, CIFAR-10 Batch 5:  Acc: 0.6155998706817627 Loss: 0.9624276757240295
Epoch 10, CIFAR-10 Batch 1:  Acc: 0.6237998604774475 Loss: 1.0272901058197021
Epoch 10, CIFAR-10 Batch 2:  Acc: 0.6145999431610107 Loss: 1.0132626295089722
Epoch 10, CIFAR-10 Batch 3:  Acc: 0.6221998929977417 Loss: 0.8836092352867126
Epoch 10, CIFAR-10 Batch 4:  Acc: 0.6261999011039734 Loss: 0.9241086840629578
Epoch 10, CIFAR-10 Batch 5:  Acc: 0.6269999146461487 Loss: 0.9597398638725281
Epoch 11, CIFAR-10 Batch 1:  Acc: 0.6289998888969421 Loss: 1.0001157522201538
Epoch 11, CIFAR-10 Batch 2:  Acc: 0.6251999139785767 Loss: 0.9589055180549622
Epoch 11, CIFAR-10 Batch 3:  Acc: 0.6235998868942261 Loss: 0.8613937497138977
Epoch 11, CIFAR-10 Batch 4:  Acc: 0.6291999220848083 Loss: 0.8887715339660645
Epoch 11, CIFAR-10 Batch 5:  Acc: 0.6239999532699585 Loss: 0.9232635498046875
Epoch 12, CIFAR-10 Batch 1:  Acc: 0.6277998685836792 Loss: 0.9982369542121887
Epoch 12, CIFAR-10 Batch 2:  Acc: 0.6237998604774475 Loss: 0.9433609247207642
Epoch 12, CIFAR-10 Batch 3:  Acc: 0.6307999491691589 Loss: 0.8411661982536316
Epoch 12, CIFAR-10 Batch 4:  Acc: 0.6389998197555542 Loss: 0.8624340295791626
Epoch 12, CIFAR-10 Batch 5:  Acc: 0.6343998908996582 Loss: 0.8932749032974243
Epoch 13, CIFAR-10 Batch 1:  Acc: 0.6317999362945557 Loss: 0.9624526500701904
Epoch 13, CIFAR-10 Batch 2:  Acc: 0.6375998854637146 Loss: 0.8805538415908813
Epoch 13, CIFAR-10 Batch 3:  Acc: 0.6387999057769775 Loss: 0.7895940542221069
Epoch 13, CIFAR-10 Batch 4:  Acc: 0.6451998353004456 Loss: 0.8443084359169006
Epoch 13, CIFAR-10 Batch 5:  Acc: 0.6351998448371887 Loss: 0.854547381401062
Epoch 14, CIFAR-10 Batch 1:  Acc: 0.6427998542785645 Loss: 0.9411112070083618
Epoch 14, CIFAR-10 Batch 2:  Acc: 0.6361998915672302 Loss: 0.8920583724975586
Epoch 14, CIFAR-10 Batch 3:  Acc: 0.6423999071121216 Loss: 0.761746346950531
Epoch 14, CIFAR-10 Batch 4:  Acc: 0.6451999545097351 Loss: 0.824722170829773
Epoch 14, CIFAR-10 Batch 5:  Acc: 0.6537998914718628 Loss: 0.8383753895759583
Epoch 15, CIFAR-10 Batch 1:  Acc: 0.6425999402999878 Loss: 0.9044048190116882
Epoch 15, CIFAR-10 Batch 2:  Acc: 0.6475999355316162 Loss: 0.8591500520706177
Epoch 15, CIFAR-10 Batch 3:  Acc: 0.6507999300956726 Loss: 0.7499629259109497
Epoch 15, CIFAR-10 Batch 4:  Acc: 0.6541998386383057 Loss: 0.8009517788887024
Epoch 15, CIFAR-10 Batch 5:  Acc: 0.6461999416351318 Loss: 0.795843243598938
Epoch 16, CIFAR-10 Batch 1:  Acc: 0.6453999280929565 Loss: 0.8844353556632996
Epoch 16, CIFAR-10 Batch 2:  Acc: 0.6505998969078064 Loss: 0.8565227389335632
Epoch 16, CIFAR-10 Batch 3:  Acc: 0.6531999111175537 Loss: 0.718188464641571
Epoch 16, CIFAR-10 Batch 4:  Acc: 0.6519998908042908 Loss: 0.7932668924331665
Epoch 16, CIFAR-10 Batch 5:  Acc: 0.6527999043464661 Loss: 0.773852527141571
Epoch 17, CIFAR-10 Batch 1:  Acc: 0.6545999050140381 Loss: 0.8612384796142578
Epoch 17, CIFAR-10 Batch 2:  Acc: 0.6505998969078064 Loss: 0.8206895589828491
Epoch 17, CIFAR-10 Batch 3:  Acc: 0.6533998847007751 Loss: 0.729544460773468
Epoch 17, CIFAR-10 Batch 4:  Acc: 0.6627998352050781 Loss: 0.763338565826416
Epoch 17, CIFAR-10 Batch 5:  Acc: 0.6527998447418213 Loss: 0.7605482339859009
Epoch 18, CIFAR-10 Batch 1:  Acc: 0.6557998657226562 Loss: 0.8506314158439636
Epoch 18, CIFAR-10 Batch 2:  Acc: 0.6477999091148376 Loss: 0.8410263061523438
Epoch 18, CIFAR-10 Batch 3:  Acc: 0.6591998338699341 Loss: 0.7084282636642456
Epoch 18, CIFAR-10 Batch 4:  Acc: 0.6661999225616455 Loss: 0.7409738302230835
Epoch 18, CIFAR-10 Batch 5:  Acc: 0.6569998264312744 Loss: 0.7416462302207947
Epoch 19, CIFAR-10 Batch 1:  Acc: 0.6479998826980591 Loss: 0.8551541566848755
Epoch 19, CIFAR-10 Batch 2:  Acc: 0.6439998745918274 Loss: 0.8456168174743652
Epoch 19, CIFAR-10 Batch 3:  Acc: 0.6601998805999756 Loss: 0.6902596354484558
Epoch 19, CIFAR-10 Batch 4:  Acc: 0.660599946975708 Loss: 0.7313786745071411
Epoch 19, CIFAR-10 Batch 5:  Acc: 0.6511998772621155 Loss: 0.731621503829956
Epoch 20, CIFAR-10 Batch 1:  Acc: 0.6649998426437378 Loss: 0.8155912160873413
Epoch 20, CIFAR-10 Batch 2:  Acc: 0.6587998867034912 Loss: 0.7689743638038635
Epoch 20, CIFAR-10 Batch 3:  Acc: 0.6591998934745789 Loss: 0.6834587454795837
Epoch 20, CIFAR-10 Batch 4:  Acc: 0.6673998832702637 Loss: 0.7147316932678223
Epoch 20, CIFAR-10 Batch 5:  Acc: 0.6627998948097229 Loss: 0.6986665725708008
Epoch 21, CIFAR-10 Batch 1:  Acc: 0.6719998717308044 Loss: 0.7714529037475586
Epoch 21, CIFAR-10 Batch 2:  Acc: 0.6647999286651611 Loss: 0.7523974180221558
Epoch 21, CIFAR-10 Batch 3:  Acc: 0.6581998467445374 Loss: 0.6642685532569885
Epoch 21, CIFAR-10 Batch 4:  Acc: 0.6647999286651611 Loss: 0.6893409490585327
Epoch 21, CIFAR-10 Batch 5:  Acc: 0.664199948310852 Loss: 0.6933926939964294
Epoch 22, CIFAR-10 Batch 1:  Acc: 0.6695998311042786 Loss: 0.770095705986023
Epoch 22, CIFAR-10 Batch 2:  Acc: 0.6619998216629028 Loss: 0.749186635017395
Epoch 22, CIFAR-10 Batch 3:  Acc: 0.6711999177932739 Loss: 0.6373014450073242
Epoch 22, CIFAR-10 Batch 4:  Acc: 0.6699998378753662 Loss: 0.6856833696365356
Epoch 22, CIFAR-10 Batch 5:  Acc: 0.669999897480011 Loss: 0.6736327409744263
Epoch 23, CIFAR-10 Batch 1:  Acc: 0.6693998575210571 Loss: 0.7779026031494141
Epoch 23, CIFAR-10 Batch 2:  Acc: 0.6665998101234436 Loss: 0.735245406627655
Epoch 23, CIFAR-10 Batch 3:  Acc: 0.6741998195648193 Loss: 0.6493409276008606
Epoch 23, CIFAR-10 Batch 4:  Acc: 0.666999876499176 Loss: 0.6726826429367065
Epoch 23, CIFAR-10 Batch 5:  Acc: 0.6753998398780823 Loss: 0.6581411361694336
Epoch 24, CIFAR-10 Batch 1:  Acc: 0.6719998717308044 Loss: 0.7571991086006165
Epoch 24, CIFAR-10 Batch 2:  Acc: 0.6707998514175415 Loss: 0.7596547603607178
Epoch 24, CIFAR-10 Batch 3:  Acc: 0.6711997985839844 Loss: 0.6426133513450623
Epoch 24, CIFAR-10 Batch 4:  Acc: 0.6757998466491699 Loss: 0.6489472389221191
Epoch 24, CIFAR-10 Batch 5:  Acc: 0.6723999381065369 Loss: 0.6522965431213379
Epoch 25, CIFAR-10 Batch 1:  Acc: 0.6795998215675354 Loss: 0.7336345911026001
Epoch 25, CIFAR-10 Batch 2:  Acc: 0.6643998026847839 Loss: 0.730442225933075
Epoch 25, CIFAR-10 Batch 3:  Acc: 0.665199875831604 Loss: 0.618083119392395
Epoch 25, CIFAR-10 Batch 4:  Acc: 0.6807998418807983 Loss: 0.6324822902679443
Epoch 25, CIFAR-10 Batch 5:  Acc: 0.6713998913764954 Loss: 0.6428791880607605
Epoch 26, CIFAR-10 Batch 1:  Acc: 0.6785998940467834 Loss: 0.7120857238769531
Epoch 26, CIFAR-10 Batch 2:  Acc: 0.6759998798370361 Loss: 0.7181347012519836
Epoch 26, CIFAR-10 Batch 3:  Acc: 0.6763998866081238 Loss: 0.6032458543777466
Epoch 26, CIFAR-10 Batch 4:  Acc: 0.682999849319458 Loss: 0.6294094324111938
Epoch 26, CIFAR-10 Batch 5:  Acc: 0.6737998723983765 Loss: 0.6172604560852051
Epoch 27, CIFAR-10 Batch 1:  Acc: 0.6831998825073242 Loss: 0.698796272277832
Epoch 27, CIFAR-10 Batch 2:  Acc: 0.678399920463562 Loss: 0.6897468566894531
Epoch 27, CIFAR-10 Batch 3:  Acc: 0.6793999075889587 Loss: 0.5949028730392456
Epoch 27, CIFAR-10 Batch 4:  Acc: 0.6799998879432678 Loss: 0.6110883355140686
Epoch 27, CIFAR-10 Batch 5:  Acc: 0.6739998459815979 Loss: 0.6040221452713013
Epoch 28, CIFAR-10 Batch 1:  Acc: 0.6835999488830566 Loss: 0.6793056130409241
Epoch 28, CIFAR-10 Batch 2:  Acc: 0.6855999231338501 Loss: 0.6897904872894287
Epoch 28, CIFAR-10 Batch 3:  Acc: 0.6707998514175415 Loss: 0.6075827479362488
Epoch 28, CIFAR-10 Batch 4:  Acc: 0.6847999095916748 Loss: 0.6052381992340088
Epoch 28, CIFAR-10 Batch 5:  Acc: 0.6835998296737671 Loss: 0.5983839631080627
Epoch 29, CIFAR-10 Batch 1:  Acc: 0.6855998635292053 Loss: 0.6754835247993469
Epoch 29, CIFAR-10 Batch 2:  Acc: 0.6761999130249023 Loss: 0.6815438866615295
Epoch 29, CIFAR-10 Batch 3:  Acc: 0.6829999089241028 Loss: 0.5586497783660889
Epoch 29, CIFAR-10 Batch 4:  Acc: 0.686599850654602 Loss: 0.5979545712471008
Epoch 29, CIFAR-10 Batch 5:  Acc: 0.6811999082565308 Loss: 0.592379629611969
Epoch 30, CIFAR-10 Batch 1:  Acc: 0.681199848651886 Loss: 0.6767008304595947
Epoch 30, CIFAR-10 Batch 2:  Acc: 0.6845998167991638 Loss: 0.6665360927581787
Epoch 30, CIFAR-10 Batch 3:  Acc: 0.6765998005867004 Loss: 0.5790765285491943
Epoch 30, CIFAR-10 Batch 4:  Acc: 0.6885998249053955 Loss: 0.5882636904716492
Epoch 30, CIFAR-10 Batch 5:  Acc: 0.6885998845100403 Loss: 0.5807139873504639
Epoch 31, CIFAR-10 Batch 1:  Acc: 0.6893998384475708 Loss: 0.6617971062660217
Epoch 31, CIFAR-10 Batch 2:  Acc: 0.6877998113632202 Loss: 0.6575818061828613
Epoch 31, CIFAR-10 Batch 3:  Acc: 0.6833998560905457 Loss: 0.5519174337387085
Epoch 31, CIFAR-10 Batch 4:  Acc: 0.6959999203681946 Loss: 0.5650498867034912
Epoch 31, CIFAR-10 Batch 5:  Acc: 0.6817998290061951 Loss: 0.5600888729095459
Epoch 32, CIFAR-10 Batch 1:  Acc: 0.6949998736381531 Loss: 0.6326887607574463
Epoch 32, CIFAR-10 Batch 2:  Acc: 0.6843999028205872 Loss: 0.6633485555648804
Epoch 32, CIFAR-10 Batch 3:  Acc: 0.6873998641967773 Loss: 0.5404685139656067
Epoch 32, CIFAR-10 Batch 4:  Acc: 0.684599757194519 Loss: 0.5559779405593872
Epoch 32, CIFAR-10 Batch 5:  Acc: 0.6887997984886169 Loss: 0.535934329032898
Epoch 33, CIFAR-10 Batch 1:  Acc: 0.686599850654602 Loss: 0.6478426456451416
Epoch 33, CIFAR-10 Batch 2:  Acc: 0.6843998432159424 Loss: 0.6403388977050781
Epoch 33, CIFAR-10 Batch 3:  Acc: 0.6839998960494995 Loss: 0.5309780836105347
Epoch 33, CIFAR-10 Batch 4:  Acc: 0.6967998147010803 Loss: 0.5470262765884399
Epoch 33, CIFAR-10 Batch 5:  Acc: 0.6881998777389526 Loss: 0.5375345349311829
Epoch 34, CIFAR-10 Batch 1:  Acc: 0.6939998865127563 Loss: 0.6160491108894348
Epoch 34, CIFAR-10 Batch 2:  Acc: 0.6919998526573181 Loss: 0.6216643452644348
Epoch 34, CIFAR-10 Batch 3:  Acc: 0.6899998784065247 Loss: 0.5272124409675598
Epoch 34, CIFAR-10 Batch 4:  Acc: 0.6947998404502869 Loss: 0.5370578765869141
Epoch 34, CIFAR-10 Batch 5:  Acc: 0.6873998641967773 Loss: 0.532461404800415
Epoch 35, CIFAR-10 Batch 1:  Acc: 0.7015998959541321 Loss: 0.5989810824394226
Epoch 35, CIFAR-10 Batch 2:  Acc: 0.6943998336791992 Loss: 0.6155146360397339
Epoch 35, CIFAR-10 Batch 3:  Acc: 0.6893998980522156 Loss: 0.5229493975639343
Epoch 35, CIFAR-10 Batch 4:  Acc: 0.6983999013900757 Loss: 0.5370885133743286
Epoch 35, CIFAR-10 Batch 5:  Acc: 0.6869999170303345 Loss: 0.5335934162139893
Epoch 36, CIFAR-10 Batch 1:  Acc: 0.6899999380111694 Loss: 0.6243200302124023
Epoch 36, CIFAR-10 Batch 2:  Acc: 0.6923998594284058 Loss: 0.6132400631904602
Epoch 36, CIFAR-10 Batch 3:  Acc: 0.6957998871803284 Loss: 0.5023660063743591
Epoch 36, CIFAR-10 Batch 4:  Acc: 0.6957998871803284 Loss: 0.5289989709854126
Epoch 36, CIFAR-10 Batch 5:  Acc: 0.6883998513221741 Loss: 0.5316897034645081
Epoch 37, CIFAR-10 Batch 1:  Acc: 0.6969998478889465 Loss: 0.5886597633361816
Epoch 37, CIFAR-10 Batch 2:  Acc: 0.6967998743057251 Loss: 0.6076028943061829
Epoch 37, CIFAR-10 Batch 3:  Acc: 0.6959998607635498 Loss: 0.51261305809021
Epoch 37, CIFAR-10 Batch 4:  Acc: 0.6969998478889465 Loss: 0.5227651596069336
Epoch 37, CIFAR-10 Batch 5:  Acc: 0.689599871635437 Loss: 0.5106371641159058
Epoch 38, CIFAR-10 Batch 1:  Acc: 0.6963998079299927 Loss: 0.6042265892028809
Epoch 38, CIFAR-10 Batch 2:  Acc: 0.6953998804092407 Loss: 0.5958796143531799
Epoch 38, CIFAR-10 Batch 3:  Acc: 0.6891998648643494 Loss: 0.517090916633606
Epoch 38, CIFAR-10 Batch 4:  Acc: 0.7011998891830444 Loss: 0.5101186037063599
Epoch 38, CIFAR-10 Batch 5:  Acc: 0.6937998533248901 Loss: 0.5078257918357849
Epoch 39, CIFAR-10 Batch 1:  Acc: 0.6953998804092407 Loss: 0.5877919793128967
Epoch 39, CIFAR-10 Batch 2:  Acc: 0.6967998743057251 Loss: 0.5883544683456421
Epoch 39, CIFAR-10 Batch 3:  Acc: 0.6871998310089111 Loss: 0.5002595782279968
Epoch 39, CIFAR-10 Batch 4:  Acc: 0.701999843120575 Loss: 0.513970136642456
Epoch 39, CIFAR-10 Batch 5:  Acc: 0.6957999467849731 Loss: 0.5020233392715454
Epoch 40, CIFAR-10 Batch 1:  Acc: 0.7049998641014099 Loss: 0.5652244687080383
Epoch 40, CIFAR-10 Batch 2:  Acc: 0.6975998878479004 Loss: 0.5651987195014954
Epoch 40, CIFAR-10 Batch 3:  Acc: 0.6927998661994934 Loss: 0.4978020191192627
Epoch 40, CIFAR-10 Batch 4:  Acc: 0.7035998702049255 Loss: 0.5029235482215881
Epoch 40, CIFAR-10 Batch 5:  Acc: 0.7009998559951782 Loss: 0.47442626953125
Epoch 41, CIFAR-10 Batch 1:  Acc: 0.6961998343467712 Loss: 0.5607460141181946
Epoch 41, CIFAR-10 Batch 2:  Acc: 0.6861999034881592 Loss: 0.5646108984947205
Epoch 41, CIFAR-10 Batch 3:  Acc: 0.6837998628616333 Loss: 0.4924292266368866
Epoch 41, CIFAR-10 Batch 4:  Acc: 0.7005998492240906 Loss: 0.5019193887710571
Epoch 41, CIFAR-10 Batch 5:  Acc: 0.6901999115943909 Loss: 0.48550230264663696
Epoch 42, CIFAR-10 Batch 1:  Acc: 0.6979998350143433 Loss: 0.5656691193580627
Epoch 42, CIFAR-10 Batch 2:  Acc: 0.6819998621940613 Loss: 0.5644468069076538
Epoch 42, CIFAR-10 Batch 3:  Acc: 0.6851998567581177 Loss: 0.4982820153236389
Epoch 42, CIFAR-10 Batch 4:  Acc: 0.6999998092651367 Loss: 0.5256360769271851
Epoch 42, CIFAR-10 Batch 5:  Acc: 0.6999998092651367 Loss: 0.48925426602363586
Epoch 43, CIFAR-10 Batch 1:  Acc: 0.699199914932251 Loss: 0.5546533465385437
Epoch 43, CIFAR-10 Batch 2:  Acc: 0.6919998526573181 Loss: 0.5485380291938782
Epoch 43, CIFAR-10 Batch 3:  Acc: 0.6947998404502869 Loss: 0.4585837125778198
Epoch 43, CIFAR-10 Batch 4:  Acc: 0.6975998878479004 Loss: 0.4868130087852478
Epoch 43, CIFAR-10 Batch 5:  Acc: 0.6999998092651367 Loss: 0.4657435417175293
Epoch 44, CIFAR-10 Batch 1:  Acc: 0.7023999094963074 Loss: 0.5361549854278564
Epoch 44, CIFAR-10 Batch 2:  Acc: 0.7097998857498169 Loss: 0.5308551788330078
Epoch 44, CIFAR-10 Batch 3:  Acc: 0.6805999279022217 Loss: 0.4699957072734833
Epoch 44, CIFAR-10 Batch 4:  Acc: 0.7029998302459717 Loss: 0.4938697814941406
Epoch 44, CIFAR-10 Batch 5:  Acc: 0.7029998898506165 Loss: 0.46430346369743347
Epoch 45, CIFAR-10 Batch 1:  Acc: 0.7021999359130859 Loss: 0.5280871391296387
Epoch 45, CIFAR-10 Batch 2:  Acc: 0.6957998275756836 Loss: 0.5362437963485718
Epoch 45, CIFAR-10 Batch 3:  Acc: 0.6887998580932617 Loss: 0.4781653881072998
Epoch 45, CIFAR-10 Batch 4:  Acc: 0.712199866771698 Loss: 0.4722849428653717
Epoch 45, CIFAR-10 Batch 5:  Acc: 0.6917998790740967 Loss: 0.4420734941959381
Epoch 46, CIFAR-10 Batch 1:  Acc: 0.7001998424530029 Loss: 0.5297003984451294
Epoch 46, CIFAR-10 Batch 2:  Acc: 0.7043999433517456 Loss: 0.5067304372787476
Epoch 46, CIFAR-10 Batch 3:  Acc: 0.6965998411178589 Loss: 0.44946572184562683
Epoch 46, CIFAR-10 Batch 4:  Acc: 0.7065998315811157 Loss: 0.4702773988246918
Epoch 46, CIFAR-10 Batch 5:  Acc: 0.7015998959541321 Loss: 0.46753549575805664
Epoch 47, CIFAR-10 Batch 1:  Acc: 0.7087998390197754 Loss: 0.509699285030365
Epoch 47, CIFAR-10 Batch 2:  Acc: 0.704399824142456 Loss: 0.49784016609191895
Epoch 47, CIFAR-10 Batch 3:  Acc: 0.6959998607635498 Loss: 0.44523558020591736
Epoch 47, CIFAR-10 Batch 4:  Acc: 0.7043998837471008 Loss: 0.4564356803894043
Epoch 47, CIFAR-10 Batch 5:  Acc: 0.6973998546600342 Loss: 0.4407595098018646
Epoch 48, CIFAR-10 Batch 1:  Acc: 0.7069999575614929 Loss: 0.5060122609138489
Epoch 48, CIFAR-10 Batch 2:  Acc: 0.705599844455719 Loss: 0.4859088063240051
Epoch 48, CIFAR-10 Batch 3:  Acc: 0.6893998384475708 Loss: 0.43442803621292114
Epoch 48, CIFAR-10 Batch 4:  Acc: 0.7033998966217041 Loss: 0.44312942028045654
Epoch 48, CIFAR-10 Batch 5:  Acc: 0.703799843788147 Loss: 0.43368199467658997
Epoch 49, CIFAR-10 Batch 1:  Acc: 0.7093998789787292 Loss: 0.4895519018173218
Epoch 49, CIFAR-10 Batch 2:  Acc: 0.705599844455719 Loss: 0.47151216864585876
Epoch 49, CIFAR-10 Batch 3:  Acc: 0.6911998391151428 Loss: 0.43499359488487244
Epoch 49, CIFAR-10 Batch 4:  Acc: 0.705599844455719 Loss: 0.4299301505088806
Epoch 49, CIFAR-10 Batch 5:  Acc: 0.6997998952865601 Loss: 0.4276633858680725
Epoch 50, CIFAR-10 Batch 1:  Acc: 0.7057998776435852 Loss: 0.4963178038597107
Epoch 50, CIFAR-10 Batch 2:  Acc: 0.7059999108314514 Loss: 0.47659409046173096
Epoch 50, CIFAR-10 Batch 3:  Acc: 0.6985998153686523 Loss: 0.4071740210056305
Epoch 50, CIFAR-10 Batch 4:  Acc: 0.7055997848510742 Loss: 0.4184207320213318
Epoch 50, CIFAR-10 Batch 5:  Acc: 0.7041998505592346 Loss: 0.39471936225891113
Epoch 51, CIFAR-10 Batch 1:  Acc: 0.7097998857498169 Loss: 0.4885687530040741
Epoch 51, CIFAR-10 Batch 2:  Acc: 0.709199845790863 Loss: 0.4538951814174652
Epoch 51, CIFAR-10 Batch 3:  Acc: 0.697399914264679 Loss: 0.4197547733783722
Epoch 51, CIFAR-10 Batch 4:  Acc: 0.7069998383522034 Loss: 0.43717965483665466
Epoch 51, CIFAR-10 Batch 5:  Acc: 0.7051998376846313 Loss: 0.418944776058197
Epoch 52, CIFAR-10 Batch 1:  Acc: 0.7069997787475586 Loss: 0.47101283073425293
Epoch 52, CIFAR-10 Batch 2:  Acc: 0.7079998254776001 Loss: 0.46886253356933594
Epoch 52, CIFAR-10 Batch 3:  Acc: 0.692599892616272 Loss: 0.43039220571517944
Epoch 52, CIFAR-10 Batch 4:  Acc: 0.7123997807502747 Loss: 0.4154602289199829
Epoch 52, CIFAR-10 Batch 5:  Acc: 0.7061998248100281 Loss: 0.40035268664360046
Epoch 53, CIFAR-10 Batch 1:  Acc: 0.6979998350143433 Loss: 0.48613253235816956
Epoch 53, CIFAR-10 Batch 2:  Acc: 0.7059998512268066 Loss: 0.44557812809944153
Epoch 53, CIFAR-10 Batch 3:  Acc: 0.6963999271392822 Loss: 0.40674635767936707
Epoch 53, CIFAR-10 Batch 4:  Acc: 0.7081997990608215 Loss: 0.4044840335845947
Epoch 53, CIFAR-10 Batch 5:  Acc: 0.7071998715400696 Loss: 0.39406973123550415
Epoch 54, CIFAR-10 Batch 1:  Acc: 0.7049999237060547 Loss: 0.4795662462711334
Epoch 54, CIFAR-10 Batch 2:  Acc: 0.6963999271392822 Loss: 0.473911851644516
Epoch 54, CIFAR-10 Batch 3:  Acc: 0.6935999393463135 Loss: 0.4169144034385681
Epoch 54, CIFAR-10 Batch 4:  Acc: 0.7117998600006104 Loss: 0.40054789185523987
Epoch 54, CIFAR-10 Batch 5:  Acc: 0.7079998850822449 Loss: 0.39753231406211853
Epoch 55, CIFAR-10 Batch 1:  Acc: 0.7047998905181885 Loss: 0.46048200130462646
Epoch 55, CIFAR-10 Batch 2:  Acc: 0.7099998593330383 Loss: 0.439397931098938
Epoch 55, CIFAR-10 Batch 3:  Acc: 0.7095998525619507 Loss: 0.39026594161987305
Epoch 55, CIFAR-10 Batch 4:  Acc: 0.6995998620986938 Loss: 0.4126507043838501
Epoch 55, CIFAR-10 Batch 5:  Acc: 0.7087998986244202 Loss: 0.37659919261932373
Epoch 56, CIFAR-10 Batch 1:  Acc: 0.7029998898506165 Loss: 0.43292665481567383
Epoch 56, CIFAR-10 Batch 2:  Acc: 0.7127997875213623 Loss: 0.4255537986755371
Epoch 56, CIFAR-10 Batch 3:  Acc: 0.6941999197006226 Loss: 0.39432013034820557
Epoch 56, CIFAR-10 Batch 4:  Acc: 0.7123998999595642 Loss: 0.40565380454063416
Epoch 56, CIFAR-10 Batch 5:  Acc: 0.7037999033927917 Loss: 0.41064268350601196
Epoch 57, CIFAR-10 Batch 1:  Acc: 0.7047998905181885 Loss: 0.4574573040008545
Epoch 57, CIFAR-10 Batch 2:  Acc: 0.712199866771698 Loss: 0.44580236077308655
Epoch 57, CIFAR-10 Batch 3:  Acc: 0.7019999027252197 Loss: 0.3679203987121582
Epoch 57, CIFAR-10 Batch 4:  Acc: 0.710399866104126 Loss: 0.39970287680625916
Epoch 57, CIFAR-10 Batch 5:  Acc: 0.703799843788147 Loss: 0.3749535381793976
Epoch 58, CIFAR-10 Batch 1:  Acc: 0.7129998207092285 Loss: 0.4431809186935425
Epoch 58, CIFAR-10 Batch 2:  Acc: 0.7071998119354248 Loss: 0.41719168424606323
Epoch 58, CIFAR-10 Batch 3:  Acc: 0.6869999170303345 Loss: 0.38525480031967163
Epoch 58, CIFAR-10 Batch 4:  Acc: 0.6979998350143433 Loss: 0.40247049927711487
Epoch 58, CIFAR-10 Batch 5:  Acc: 0.7063997983932495 Loss: 0.3781849443912506
Epoch 59, CIFAR-10 Batch 1:  Acc: 0.7083998322486877 Loss: 0.4330177307128906
Epoch 59, CIFAR-10 Batch 2:  Acc: 0.7057998776435852 Loss: 0.4211168885231018
Epoch 59, CIFAR-10 Batch 3:  Acc: 0.7073999047279358 Loss: 0.37461912631988525
Epoch 59, CIFAR-10 Batch 4:  Acc: 0.7087998390197754 Loss: 0.37888965010643005
Epoch 59, CIFAR-10 Batch 5:  Acc: 0.7129998803138733 Loss: 0.3620792031288147
Epoch 60, CIFAR-10 Batch 1:  Acc: 0.7043998837471008 Loss: 0.4474979341030121
Epoch 60, CIFAR-10 Batch 2:  Acc: 0.6997998356819153 Loss: 0.421286404132843
Epoch 60, CIFAR-10 Batch 3:  Acc: 0.7035998702049255 Loss: 0.3763902187347412
Epoch 60, CIFAR-10 Batch 4:  Acc: 0.7095999121665955 Loss: 0.3928171992301941
Epoch 60, CIFAR-10 Batch 5:  Acc: 0.7147998213768005 Loss: 0.3586356043815613
Epoch 61, CIFAR-10 Batch 1:  Acc: 0.7179998755455017 Loss: 0.4150947630405426
Epoch 61, CIFAR-10 Batch 2:  Acc: 0.7051998972892761 Loss: 0.40325823426246643
Epoch 61, CIFAR-10 Batch 3:  Acc: 0.6889999508857727 Loss: 0.3625335693359375
Epoch 61, CIFAR-10 Batch 4:  Acc: 0.7099998593330383 Loss: 0.3673391044139862
Epoch 61, CIFAR-10 Batch 5:  Acc: 0.7095997929573059 Loss: 0.35773009061813354
Epoch 62, CIFAR-10 Batch 1:  Acc: 0.7109998464584351 Loss: 0.41284751892089844
Epoch 62, CIFAR-10 Batch 2:  Acc: 0.7107998728752136 Loss: 0.40330612659454346
Epoch 62, CIFAR-10 Batch 3:  Acc: 0.6945998668670654 Loss: 0.3836485743522644
Epoch 62, CIFAR-10 Batch 4:  Acc: 0.6903998851776123 Loss: 0.41302451491355896
Epoch 62, CIFAR-10 Batch 5:  Acc: 0.7061998844146729 Loss: 0.3504703938961029
Epoch 63, CIFAR-10 Batch 1:  Acc: 0.7115998268127441 Loss: 0.4166758358478546
Epoch 63, CIFAR-10 Batch 2:  Acc: 0.7089998126029968 Loss: 0.3947864770889282
Epoch 63, CIFAR-10 Batch 3:  Acc: 0.7089999318122864 Loss: 0.35638803243637085
Epoch 63, CIFAR-10 Batch 4:  Acc: 0.7025998830795288 Loss: 0.3715118169784546
Epoch 63, CIFAR-10 Batch 5:  Acc: 0.7003998756408691 Loss: 0.3458467125892639
Epoch 64, CIFAR-10 Batch 1:  Acc: 0.7037997841835022 Loss: 0.4150097966194153
Epoch 64, CIFAR-10 Batch 2:  Acc: 0.7125998735427856 Loss: 0.38930413126945496
Epoch 64, CIFAR-10 Batch 3:  Acc: 0.7081998586654663 Loss: 0.35106992721557617
Epoch 64, CIFAR-10 Batch 4:  Acc: 0.7143998146057129 Loss: 0.3583250641822815
Epoch 64, CIFAR-10 Batch 5:  Acc: 0.7085998058319092 Loss: 0.3411683440208435
Epoch 65, CIFAR-10 Batch 1:  Acc: 0.7123998403549194 Loss: 0.3947315812110901
Epoch 65, CIFAR-10 Batch 2:  Acc: 0.7123998999595642 Loss: 0.3941989541053772
Epoch 65, CIFAR-10 Batch 3:  Acc: 0.7091999053955078 Loss: 0.35381776094436646
Epoch 65, CIFAR-10 Batch 4:  Acc: 0.7107998728752136 Loss: 0.36315861344337463
Epoch 65, CIFAR-10 Batch 5:  Acc: 0.7087998390197754 Loss: 0.3306802213191986
Epoch 66, CIFAR-10 Batch 1:  Acc: 0.710399866104126 Loss: 0.4031592309474945
Epoch 66, CIFAR-10 Batch 2:  Acc: 0.7149999141693115 Loss: 0.3830147087574005
Epoch 66, CIFAR-10 Batch 3:  Acc: 0.7055999040603638 Loss: 0.35454079508781433
Epoch 66, CIFAR-10 Batch 4:  Acc: 0.7069998383522034 Loss: 0.3575868308544159
Epoch 66, CIFAR-10 Batch 5:  Acc: 0.7093998193740845 Loss: 0.3338938057422638
Epoch 67, CIFAR-10 Batch 1:  Acc: 0.7105998992919922 Loss: 0.38904982805252075
Epoch 67, CIFAR-10 Batch 2:  Acc: 0.7055999040603638 Loss: 0.37896960973739624
Epoch 67, CIFAR-10 Batch 3:  Acc: 0.7089998722076416 Loss: 0.35174205899238586
Epoch 67, CIFAR-10 Batch 4:  Acc: 0.7087998390197754 Loss: 0.3529598116874695
Epoch 67, CIFAR-10 Batch 5:  Acc: 0.7047998309135437 Loss: 0.32722726464271545
Epoch 68, CIFAR-10 Batch 1:  Acc: 0.7151998281478882 Loss: 0.3783468008041382
Epoch 68, CIFAR-10 Batch 2:  Acc: 0.709199845790863 Loss: 0.3700273633003235
Epoch 68, CIFAR-10 Batch 3:  Acc: 0.7059999108314514 Loss: 0.33363077044487
Epoch 68, CIFAR-10 Batch 4:  Acc: 0.7041999101638794 Loss: 0.35306984186172485
Epoch 68, CIFAR-10 Batch 5:  Acc: 0.7135998606681824 Loss: 0.3186187744140625
Epoch 69, CIFAR-10 Batch 1:  Acc: 0.7153998613357544 Loss: 0.37178266048431396
Epoch 69, CIFAR-10 Batch 2:  Acc: 0.7149998545646667 Loss: 0.368953675031662
Epoch 69, CIFAR-10 Batch 3:  Acc: 0.7033998966217041 Loss: 0.3759543299674988
Epoch 69, CIFAR-10 Batch 4:  Acc: 0.7169998288154602 Loss: 0.34605008363723755
Epoch 69, CIFAR-10 Batch 5:  Acc: 0.7101998329162598 Loss: 0.34041234850883484
Epoch 70, CIFAR-10 Batch 1:  Acc: 0.7143998146057129 Loss: 0.40283891558647156
Epoch 70, CIFAR-10 Batch 2:  Acc: 0.7103998064994812 Loss: 0.36234718561172485
Epoch 70, CIFAR-10 Batch 3:  Acc: 0.7083998918533325 Loss: 0.3374750316143036
Epoch 70, CIFAR-10 Batch 4:  Acc: 0.7009997963905334 Loss: 0.34857603907585144
Epoch 70, CIFAR-10 Batch 5:  Acc: 0.7127999067306519 Loss: 0.3271303176879883
Epoch 71, CIFAR-10 Batch 1:  Acc: 0.7105998992919922 Loss: 0.3762771189212799
Epoch 71, CIFAR-10 Batch 2:  Acc: 0.715199887752533 Loss: 0.3575713634490967
Epoch 71, CIFAR-10 Batch 3:  Acc: 0.7067998647689819 Loss: 0.34344202280044556
Epoch 71, CIFAR-10 Batch 4:  Acc: 0.7135998010635376 Loss: 0.3257976174354553
Epoch 71, CIFAR-10 Batch 5:  Acc: 0.708599865436554 Loss: 0.32457929849624634
Epoch 72, CIFAR-10 Batch 1:  Acc: 0.7117998600006104 Loss: 0.3621031939983368
Epoch 72, CIFAR-10 Batch 2:  Acc: 0.7193998098373413 Loss: 0.35306036472320557
Epoch 72, CIFAR-10 Batch 3:  Acc: 0.7047997713088989 Loss: 0.3360639214515686
Epoch 72, CIFAR-10 Batch 4:  Acc: 0.7065998315811157 Loss: 0.34754297137260437
Epoch 72, CIFAR-10 Batch 5:  Acc: 0.7071998119354248 Loss: 0.328592449426651
Epoch 73, CIFAR-10 Batch 1:  Acc: 0.7131998538970947 Loss: 0.350019633769989
Epoch 73, CIFAR-10 Batch 2:  Acc: 0.7111998796463013 Loss: 0.3487444221973419
Epoch 73, CIFAR-10 Batch 3:  Acc: 0.7059998512268066 Loss: 0.33505910634994507
Epoch 73, CIFAR-10 Batch 4:  Acc: 0.7077998518943787 Loss: 0.33029109239578247
Epoch 73, CIFAR-10 Batch 5:  Acc: 0.707399845123291 Loss: 0.31690481305122375
Epoch 74, CIFAR-10 Batch 1:  Acc: 0.709199845790863 Loss: 0.35659241676330566
Epoch 74, CIFAR-10 Batch 2:  Acc: 0.7163998484611511 Loss: 0.35584399104118347
Epoch 74, CIFAR-10 Batch 3:  Acc: 0.7057998776435852 Loss: 0.32217901945114136
Epoch 74, CIFAR-10 Batch 4:  Acc: 0.7149998545646667 Loss: 0.3192402422428131
Epoch 74, CIFAR-10 Batch 5:  Acc: 0.7113998532295227 Loss: 0.2914791703224182
Epoch 75, CIFAR-10 Batch 1:  Acc: 0.7155998349189758 Loss: 0.37381890416145325
Epoch 75, CIFAR-10 Batch 2:  Acc: 0.7047998905181885 Loss: 0.333723247051239
Epoch 75, CIFAR-10 Batch 3:  Acc: 0.7069998979568481 Loss: 0.29982325434684753
Epoch 75, CIFAR-10 Batch 4:  Acc: 0.7127998471260071 Loss: 0.3242560029029846
Epoch 75, CIFAR-10 Batch 5:  Acc: 0.7071998119354248 Loss: 0.2993510067462921

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [19]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.7018841952085495

Request to Reviewer:

Ideas on better networks, better development practices, and visualizations relevant to this assignment would be really helpful.

Note to the Reviewer: The hyperparameters and architecture design are inspired by this link, which seems to be the work of a fellow Udacian. I would have felt bad borrowing ideas from there without crediting it.

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.