Transfer Learning

Most of the time you won't want to train a whole convolutional network yourself. Training a modern ConvNet on a huge dataset like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network, either as a fixed feature extractor or as an initial network to fine-tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor.

VGGNet is great because it's simple and performs well, coming in second in the 2014 ImageNet competition. The idea here is that we keep all the convolutional layers but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images, then easily train a simple classifier on top of it. Specifically, we'll take the output of the first fully connected layer, 4096 units passed through ReLU activations. We can use those values as a code for each image, then build a classifier on top of those codes.
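To make the idea concrete before we build it in TensorFlow, here's a minimal sketch with stand-in data: the 4096-dimensional codes act as ordinary feature vectors, so any simple classifier (scikit-learn's LogisticRegression, say) can be trained on top of them. The random arrays here are just placeholders for real codes and labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for real ConvNet codes: 100 images' worth of 4096-d features
fake_codes = np.random.randn(100, 4096)
fake_labels = np.random.choice(['roses', 'tulips'], size=100)

# Only this small classifier is trained; the convolutional layers stay fixed
clf = LogisticRegression().fit(fake_codes, fake_labels)
print(clf.predict(fake_codes[:5]))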

You can read more about transfer learning in the CS231n course notes.

Pretrained VGGNet

We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository into the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash:

git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg

This is a really nice implementation of VGGNet and quite easy to work with. The network has already been trained, and the parameter file is available for download. Once you've cloned the repo into the folder containing this notebook, download the parameters using the next cell.


In [1]:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm

vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
    raise Exception("VGG directory doesn't exist!")

class DLProgress(tqdm):
    """ Progress bar for urlretrieve, updated via its reporthook. """
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(vgg_dir + "vgg16.npy"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
        urlretrieve(
            'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
            vgg_dir + 'vgg16.npy',
            pbar.hook)
else:
    print("Parameter file already exists!")


Parameter file already exists!

Flower power

Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow Inception tutorial.


In [2]:
import tarfile

dataset_folder_path = 'flower_photos'

class DLProgress(tqdm):
    """ Progress bar for urlretrieve, updated via its reporthook. """
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('flower_photos.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
        urlretrieve(
            'http://download.tensorflow.org/example_images/flower_photos.tgz',
            'flower_photos.tar.gz',
            pbar.hook)

if not isdir(dataset_folder_path):
    with tarfile.open('flower_photos.tar.gz') as tar:
        tar.extractall()

ConvNet Codes

Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.

Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):

self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')

self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')

self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')

self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')

self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')

self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)

So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use

with tf.Session() as sess:
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        vgg.build(input_)

This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,

feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)

In [3]:
import os

import numpy as np
import tensorflow as tf

from tensorflow_vgg import vgg16
from tensorflow_vgg import utils

In [4]:
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]

Below I'm running images through the VGG network in batches.

Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).


In [5]:
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 32
labels = []
batch = []

codes = None

with tf.Session() as sess:
    
    # TODO: Build the vgg network here
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3], name='input')
    with tf.name_scope('vgg'):
        vgg.build(input_)

    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)
            
            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                
                # Image batch to pass to VGG network
                images = np.concatenate(batch)
                
                # TODO: Get the values from the relu6 layer of the VGG network
                feed_dict = {input_: images}
                codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
                
                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))
                
                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))


/home/carnd/deep_learning_foundation/transfer-learning/tensorflow_vgg/vgg16.npy
npy file loaded
build model started
build model finished: 0s
Starting dandelion images
32 images processed
64 images processed
96 images processed
128 images processed
160 images processed
192 images processed
224 images processed
256 images processed
288 images processed
320 images processed
352 images processed
384 images processed
416 images processed
448 images processed
480 images processed
512 images processed
544 images processed
576 images processed
608 images processed
640 images processed
672 images processed
704 images processed
736 images processed
768 images processed
800 images processed
832 images processed
864 images processed
896 images processed
898 images processed
Starting roses images
32 images processed
64 images processed
96 images processed
128 images processed
160 images processed
192 images processed
224 images processed
256 images processed
288 images processed
320 images processed
352 images processed
384 images processed
416 images processed
448 images processed
480 images processed
512 images processed
544 images processed
576 images processed
608 images processed
640 images processed
641 images processed
Starting daisy images
32 images processed
64 images processed
96 images processed
128 images processed
160 images processed
192 images processed
224 images processed
256 images processed
288 images processed
320 images processed
352 images processed
384 images processed
416 images processed
448 images processed
480 images processed
512 images processed
544 images processed
576 images processed
608 images processed
633 images processed
Starting tulips images
32 images processed
64 images processed
96 images processed
128 images processed
160 images processed
192 images processed
224 images processed
256 images processed
288 images processed
320 images processed
352 images processed
384 images processed
416 images processed
448 images processed
480 images processed
512 images processed
544 images processed
576 images processed
608 images processed
640 images processed
672 images processed
704 images processed
736 images processed
768 images processed
799 images processed
Starting sunflowers images
32 images processed
64 images processed
96 images processed
128 images processed
160 images processed
192 images processed
224 images processed
256 images processed
288 images processed
320 images processed
352 images processed
384 images processed
416 images processed
448 images processed
480 images processed
512 images processed
544 images processed
576 images processed
608 images processed
640 images processed
672 images processed
699 images processed

In [6]:
# write codes to file (raw float32 bytes)
with open('codes', 'wb') as f:
    codes.tofile(f)
    
# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)
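One caveat with this approach: ndarray.tofile writes raw bytes with no shape or dtype information, which is why the loading cell below has to pass dtype=np.float32 and reshape manually. If you'd rather keep that metadata, np.save is an alternative sketch:

# np.save stores shape and dtype in the .npy header, so no manual
# reshape is needed when loading (alternative to tofile/fromfile)
np.save('codes.npy', codes)
codes_loaded = np.load('codes.npy')
assert codes_loaded.shape == codes.shape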

Building the Classifier

Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.


In [7]:
# read codes and labels from file
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f:
    codes = np.fromfile(f, dtype=np.float32)
    codes = codes.reshape((len(labels), -1))
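A quick sanity check that the round trip worked: there should be one 4096-dimensional code per label.

# Expect shapes like (3670, 4096) and (3670,)
print(codes.shape, labels.shape)
assert codes.shape[0] == labels.shape[0]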

Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!

Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.


In [8]:
from sklearn import preprocessing
pre = preprocessing.LabelBinarizer()
pre.fit(labels)
labels_vecs = pre.transform(labels)
labels_vecs[:10]


Out[8]:
array([[0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0]])
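The columns are ordered according to pre.classes_ (alphabetical, since LabelBinarizer sorts the unique labels), so you can check which class each column stands for, and map one-hot rows back to names with inverse_transform:

print(pre.classes_)                            # column order of the one-hot vectors
print(pre.inverse_transform(labels_vecs[:3]))  # back to string labels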

Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with test sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.

You can create the splitter like so:

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)

Then split the data with

splitter = ss.split(x, y)

ss.split returns a generator of indices. You can use those indices to index into the arrays and get the split sets. Since it's a generator, you either need to iterate over it or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.

Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.


In [9]:
from sklearn.model_selection import StratifiedShuffleSplit
splits = StratifiedShuffleSplit(n_splits=1, test_size=0.2)

train_idx, val_idx = next(splits.split(codes, labels))
middle = len(val_idx) // 2

val_idx, test_idx = val_idx[:middle], val_idx[middle:]

train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y =  codes[test_idx], labels_vecs[test_idx]
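Note that simply cutting val_idx in half doesn't guarantee the validation and test halves are each stratified on their own (though with shuffled indices they'll usually be close). If you want both halves stratified exactly, a second split over the held-out indices is a reasonable sketch:

# Alternative sketch: stratify the validation/test halves as well
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, holdout_idx = next(ss.split(codes, labels))

ss_half = StratifiedShuffleSplit(n_splits=1, test_size=0.5)
val_sub, test_sub = next(ss_half.split(codes[holdout_idx], labels[holdout_idx]))
val_idx, test_idx = holdout_idx[val_sub], holdout_idx[test_sub]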

In [10]:
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)


Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)

If you did it right, you should see these sizes for the three sets:

Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)

Classifier layers

Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.

Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use cross entropy to calculate the cost.


In [11]:
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])

# Hidden layer with 512 units (ReLU is the default activation)
full = tf.contrib.layers.fully_connected(inputs_, 512)

# Output layer logits, one unit per class
logits = tf.contrib.layers.fully_connected(full, labels_vecs.shape[1], activation_fn=None)
# Cross entropy loss
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))

optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
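Looking ahead at the training output, the training loss falls toward zero while validation accuracy plateaus around 92%, which suggests some overfitting. One optional hedge is dropout on the hidden layer; a minimal sketch (you'd swap logits_drop in for logits above, feed keep_prob_ at about 0.5 during training and 1.0 when evaluating):

# Optional sketch: dropout on the hidden layer to combat overfitting
keep_prob_ = tf.placeholder(tf.float32)
full_drop = tf.nn.dropout(full, keep_prob=keep_prob_)
logits_drop = tf.contrib.layers.fully_connected(full_drop, labels_vecs.shape[1],
                                                activation_fn=None)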

Batches!

Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.


In [12]:
def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x)//n_batches
    
    for ii in range(0, n_batches*batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches-1)*batch_size:
            X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] 
        # On the last batch, grab the rest of the data
        else:
            X, Y = x[ii:], y[ii:]
        # I love generators
        yield X, Y
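For example, on the 2936 training codes with the default n_batches=10, batch_size works out to 293, so the first nine batches hold 293 examples each and the last holds the remaining 299:

# Quick check of the batch shapes get_batches yields on the training set
for X, Y in get_batches(train_x, train_y):
    print(X.shape, Y.shape)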

Training

Here, we'll train the network.

Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches, like for x, y in get_batches(train_x, train_y). Or write your own!


In [13]:
epochs = 50
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(epochs):
        for batch_x, batch_y in get_batches(train_x, train_y):
            _, loss = sess.run([optimizer, cost], feed_dict={inputs_: batch_x, labels_: batch_y})
            iteration += 1
            print('Epoch: {} Iteration: {} Training loss: {} '.format(epoch, iteration, loss))

            if iteration % 10 == 0:
                valid_acc = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})
                print('Epoch: {} Iteration: {} Validation Acc : {} '.format(epoch, iteration, valid_acc))
                
    saver.save(sess, "checkpoints/flowers.ckpt")


Epoch: 0 Iteration: 1 Training loss: 7.112163066864014 
Epoch: 0 Iteration: 2 Training loss: 21.162372589111328 
Epoch: 0 Iteration: 3 Training loss: 13.35370922088623 
Epoch: 0 Iteration: 4 Training loss: 9.959602355957031 
Epoch: 0 Iteration: 5 Training loss: 3.78314471244812 
Epoch: 0 Iteration: 6 Training loss: 3.2155325412750244 
Epoch: 0 Iteration: 7 Training loss: 2.67684268951416 
Epoch: 0 Iteration: 8 Training loss: 3.8360166549682617 
Epoch: 0 Iteration: 9 Training loss: 4.266743183135986 
Epoch: 0 Iteration: 10 Training loss: 3.31071138381958 
Epoch: 0 Iteration: 10 Validation Acc : 0.7465939521789551 
Epoch: 1 Iteration: 11 Training loss: 1.639525294303894 
Epoch: 1 Iteration: 12 Training loss: 1.1752194166183472 
Epoch: 1 Iteration: 13 Training loss: 1.4715919494628906 
Epoch: 1 Iteration: 14 Training loss: 1.1131974458694458 
Epoch: 1 Iteration: 15 Training loss: 0.7685015797615051 
Epoch: 1 Iteration: 16 Training loss: 0.8395979404449463 
Epoch: 1 Iteration: 17 Training loss: 0.9420003890991211 
Epoch: 1 Iteration: 18 Training loss: 0.8115130662918091 
Epoch: 1 Iteration: 19 Training loss: 1.2851022481918335 
Epoch: 1 Iteration: 20 Training loss: 0.8963378667831421 
Epoch: 1 Iteration: 20 Validation Acc : 0.8119890093803406 
Epoch: 2 Iteration: 21 Training loss: 0.8754130005836487 
Epoch: 2 Iteration: 22 Training loss: 0.5891628861427307 
Epoch: 2 Iteration: 23 Training loss: 0.47748643159866333 
Epoch: 2 Iteration: 24 Training loss: 0.3637942671775818 
Epoch: 2 Iteration: 25 Training loss: 0.3570675253868103 
Epoch: 2 Iteration: 26 Training loss: 0.46865224838256836 
Epoch: 2 Iteration: 27 Training loss: 0.34801602363586426 
Epoch: 2 Iteration: 28 Training loss: 0.2711789011955261 
Epoch: 2 Iteration: 29 Training loss: 0.3933025896549225 
Epoch: 2 Iteration: 30 Training loss: 0.4238584637641907 
Epoch: 2 Iteration: 30 Validation Acc : 0.8719345331192017 
Epoch: 3 Iteration: 31 Training loss: 0.35784676671028137 
Epoch: 3 Iteration: 32 Training loss: 0.24800926446914673 
Epoch: 3 Iteration: 33 Training loss: 0.3530862331390381 
Epoch: 3 Iteration: 34 Training loss: 0.30278727412223816 
Epoch: 3 Iteration: 35 Training loss: 0.29412010312080383 
Epoch: 3 Iteration: 36 Training loss: 0.2832767963409424 
Epoch: 3 Iteration: 37 Training loss: 0.21090316772460938 
Epoch: 3 Iteration: 38 Training loss: 0.28507012128829956 
Epoch: 3 Iteration: 39 Training loss: 0.2657119631767273 
Epoch: 3 Iteration: 40 Training loss: 0.23741121590137482 
Epoch: 3 Iteration: 40 Validation Acc : 0.8991825580596924 
Epoch: 4 Iteration: 41 Training loss: 0.1504206359386444 
Epoch: 4 Iteration: 42 Training loss: 0.10514160245656967 
Epoch: 4 Iteration: 43 Training loss: 0.14421702921390533 
Epoch: 4 Iteration: 44 Training loss: 0.15761220455169678 
Epoch: 4 Iteration: 45 Training loss: 0.1892394721508026 
Epoch: 4 Iteration: 46 Training loss: 0.17364716529846191 
Epoch: 4 Iteration: 47 Training loss: 0.12400928139686584 
Epoch: 4 Iteration: 48 Training loss: 0.1421915590763092 
Epoch: 4 Iteration: 49 Training loss: 0.15228694677352905 
Epoch: 4 Iteration: 50 Training loss: 0.13932263851165771 
Epoch: 4 Iteration: 50 Validation Acc : 0.888283371925354 
Epoch: 5 Iteration: 51 Training loss: 0.11880595982074738 
Epoch: 5 Iteration: 52 Training loss: 0.07720328122377396 
Epoch: 5 Iteration: 53 Training loss: 0.07403110712766647 
Epoch: 5 Iteration: 54 Training loss: 0.09087150543928146 
Epoch: 5 Iteration: 55 Training loss: 0.09769857674837112 
Epoch: 5 Iteration: 56 Training loss: 0.09527048468589783 
Epoch: 5 Iteration: 57 Training loss: 0.067816361784935 
Epoch: 5 Iteration: 58 Training loss: 0.09157539159059525 
Epoch: 5 Iteration: 59 Training loss: 0.09680068492889404 
Epoch: 5 Iteration: 60 Training loss: 0.09129512310028076 
Epoch: 5 Iteration: 60 Validation Acc : 0.9073569178581238 
Epoch: 6 Iteration: 61 Training loss: 0.08104866743087769 
Epoch: 6 Iteration: 62 Training loss: 0.049854960292577744 
Epoch: 6 Iteration: 63 Training loss: 0.055568426847457886 
Epoch: 6 Iteration: 64 Training loss: 0.06930754333734512 
Epoch: 6 Iteration: 65 Training loss: 0.0795779749751091 
Epoch: 6 Iteration: 66 Training loss: 0.0661478340625763 
Epoch: 6 Iteration: 67 Training loss: 0.05323062837123871 
Epoch: 6 Iteration: 68 Training loss: 0.06907987594604492 
Epoch: 6 Iteration: 69 Training loss: 0.07431986182928085 
Epoch: 6 Iteration: 70 Training loss: 0.0636824518442154 
Epoch: 6 Iteration: 70 Validation Acc : 0.9046321511268616 
Epoch: 7 Iteration: 71 Training loss: 0.05344851315021515 
Epoch: 7 Iteration: 72 Training loss: 0.03338610380887985 
Epoch: 7 Iteration: 73 Training loss: 0.033233605325222015 
Epoch: 7 Iteration: 74 Training loss: 0.04649962857365608 
Epoch: 7 Iteration: 75 Training loss: 0.05485773831605911 
Epoch: 7 Iteration: 76 Training loss: 0.04968499764800072 
Epoch: 7 Iteration: 77 Training loss: 0.03542507439851761 
Epoch: 7 Iteration: 78 Training loss: 0.05057203397154808 
Epoch: 7 Iteration: 79 Training loss: 0.0583563931286335 
Epoch: 7 Iteration: 80 Training loss: 0.04845704138278961 
Epoch: 7 Iteration: 80 Validation Acc : 0.910081684589386 
Epoch: 8 Iteration: 81 Training loss: 0.03907744213938713 
Epoch: 8 Iteration: 82 Training loss: 0.024700481444597244 
Epoch: 8 Iteration: 83 Training loss: 0.025427911430597305 
Epoch: 8 Iteration: 84 Training loss: 0.03178495541214943 
Epoch: 8 Iteration: 85 Training loss: 0.03926254063844681 
Epoch: 8 Iteration: 86 Training loss: 0.03874930366873741 
Epoch: 8 Iteration: 87 Training loss: 0.027966570109128952 
Epoch: 8 Iteration: 88 Training loss: 0.039869263768196106 
Epoch: 8 Iteration: 89 Training loss: 0.045544084161520004 
Epoch: 8 Iteration: 90 Training loss: 0.039184339344501495 
Epoch: 8 Iteration: 90 Validation Acc : 0.9155313372612 
Epoch: 9 Iteration: 91 Training loss: 0.02669292502105236 
Epoch: 9 Iteration: 92 Training loss: 0.02073250338435173 
Epoch: 9 Iteration: 93 Training loss: 0.019840558990836143 
Epoch: 9 Iteration: 94 Training loss: 0.023427467793226242 
Epoch: 9 Iteration: 95 Training loss: 0.031654324382543564 
Epoch: 9 Iteration: 96 Training loss: 0.030516093596816063 
Epoch: 9 Iteration: 97 Training loss: 0.01983167603611946 
Epoch: 9 Iteration: 98 Training loss: 0.03052198514342308 
Epoch: 9 Iteration: 99 Training loss: 0.03460464999079704 
Epoch: 9 Iteration: 100 Training loss: 0.031667131930589676 
Epoch: 9 Iteration: 100 Validation Acc : 0.9182561039924622 
Epoch: 10 Iteration: 101 Training loss: 0.021275360137224197 
Epoch: 10 Iteration: 102 Training loss: 0.016642916947603226 
Epoch: 10 Iteration: 103 Training loss: 0.01613801158964634 
Epoch: 10 Iteration: 104 Training loss: 0.01858741044998169 
Epoch: 10 Iteration: 105 Training loss: 0.02341875620186329 
Epoch: 10 Iteration: 106 Training loss: 0.025117289274930954 
Epoch: 10 Iteration: 107 Training loss: 0.015542278066277504 
Epoch: 10 Iteration: 108 Training loss: 0.025840943679213524 
Epoch: 10 Iteration: 109 Training loss: 0.026282483711838722 
Epoch: 10 Iteration: 110 Training loss: 0.02603856660425663 
Epoch: 10 Iteration: 110 Validation Acc : 0.9182561039924622 
Epoch: 11 Iteration: 111 Training loss: 0.01744692027568817 
Epoch: 11 Iteration: 112 Training loss: 0.01374979130923748 
Epoch: 11 Iteration: 113 Training loss: 0.0135941281914711 
Epoch: 11 Iteration: 114 Training loss: 0.015005636028945446 
Epoch: 11 Iteration: 115 Training loss: 0.01995515637099743 
Epoch: 11 Iteration: 116 Training loss: 0.020944274961948395 
Epoch: 11 Iteration: 117 Training loss: 0.012241054326295853 
Epoch: 11 Iteration: 118 Training loss: 0.021164080128073692 
Epoch: 11 Iteration: 119 Training loss: 0.01993333175778389 
Epoch: 11 Iteration: 120 Training loss: 0.021289009600877762 
Epoch: 11 Iteration: 120 Validation Acc : 0.9182561039924622 
Epoch: 12 Iteration: 121 Training loss: 0.014837225899100304 
Epoch: 12 Iteration: 122 Training loss: 0.011474468745291233 
Epoch: 12 Iteration: 123 Training loss: 0.011480775661766529 
Epoch: 12 Iteration: 124 Training loss: 0.012523048557341099 
Epoch: 12 Iteration: 125 Training loss: 0.016677824780344963 
Epoch: 12 Iteration: 126 Training loss: 0.017803702503442764 
Epoch: 12 Iteration: 127 Training loss: 0.010074255056679249 
Epoch: 12 Iteration: 128 Training loss: 0.01858345977962017 
Epoch: 12 Iteration: 129 Training loss: 0.015370141714811325 
Epoch: 12 Iteration: 130 Training loss: 0.017808780074119568 
Epoch: 12 Iteration: 130 Validation Acc : 0.9182561039924622 
Epoch: 13 Iteration: 131 Training loss: 0.012506174854934216 
Epoch: 13 Iteration: 132 Training loss: 0.009777706116437912 
Epoch: 13 Iteration: 133 Training loss: 0.009817147627472878 
Epoch: 13 Iteration: 134 Training loss: 0.010705175809562206 
Epoch: 13 Iteration: 135 Training loss: 0.014856522902846336 
Epoch: 13 Iteration: 136 Training loss: 0.015153376385569572 
Epoch: 13 Iteration: 137 Training loss: 0.008500155992805958 
Epoch: 13 Iteration: 138 Training loss: 0.01599750854074955 
Epoch: 13 Iteration: 139 Training loss: 0.012495361268520355 
Epoch: 13 Iteration: 140 Training loss: 0.015108121559023857 
Epoch: 13 Iteration: 140 Validation Acc : 0.9209809303283691 
Epoch: 14 Iteration: 141 Training loss: 0.010904554277658463 
Epoch: 14 Iteration: 142 Training loss: 0.008397095836699009 
Epoch: 14 Iteration: 143 Training loss: 0.008541001938283443 
Epoch: 14 Iteration: 144 Training loss: 0.00915269274264574 
Epoch: 14 Iteration: 145 Training loss: 0.012949984520673752 
Epoch: 14 Iteration: 146 Training loss: 0.012904618866741657 
Epoch: 14 Iteration: 147 Training loss: 0.007212349679321051 
Epoch: 14 Iteration: 148 Training loss: 0.014626330696046352 
Epoch: 14 Iteration: 149 Training loss: 0.01053975522518158 
Epoch: 14 Iteration: 150 Training loss: 0.012729592621326447 
Epoch: 14 Iteration: 150 Validation Acc : 0.9209809303283691 
Epoch: 15 Iteration: 151 Training loss: 0.009661504067480564 
Epoch: 15 Iteration: 152 Training loss: 0.007200445979833603 
Epoch: 15 Iteration: 153 Training loss: 0.007438895758241415 
Epoch: 15 Iteration: 154 Training loss: 0.007859582081437111 
Epoch: 15 Iteration: 155 Training loss: 0.011764733120799065 
Epoch: 15 Iteration: 156 Training loss: 0.011112126521766186 
Epoch: 15 Iteration: 157 Training loss: 0.006220828276127577 
Epoch: 15 Iteration: 158 Training loss: 0.01294205617159605 
Epoch: 15 Iteration: 159 Training loss: 0.009104087017476559 
Epoch: 15 Iteration: 160 Training loss: 0.011135026812553406 
Epoch: 15 Iteration: 160 Validation Acc : 0.9209809303283691 
Epoch: 16 Iteration: 161 Training loss: 0.008408885449171066 
Epoch: 16 Iteration: 162 Training loss: 0.006409581284970045 
Epoch: 16 Iteration: 163 Training loss: 0.006682361476123333 
Epoch: 16 Iteration: 164 Training loss: 0.006870666053146124 
Epoch: 16 Iteration: 165 Training loss: 0.010866707190871239 
Epoch: 16 Iteration: 166 Training loss: 0.009645596146583557 
Epoch: 16 Iteration: 167 Training loss: 0.005486321169883013 
Epoch: 16 Iteration: 168 Training loss: 0.01175061333924532 
Epoch: 16 Iteration: 169 Training loss: 0.007754070218652487 
Epoch: 16 Iteration: 170 Training loss: 0.009554300457239151 
Epoch: 16 Iteration: 170 Validation Acc : 0.9209809303283691 
Epoch: 17 Iteration: 171 Training loss: 0.007528713904321194 
Epoch: 17 Iteration: 172 Training loss: 0.005585978273302317 
Epoch: 17 Iteration: 173 Training loss: 0.005875604227185249 
Epoch: 17 Iteration: 174 Training loss: 0.006079031154513359 
Epoch: 17 Iteration: 175 Training loss: 0.00990616250783205 
Epoch: 17 Iteration: 176 Training loss: 0.008440593257546425 
Epoch: 17 Iteration: 177 Training loss: 0.004784026183187962 
Epoch: 17 Iteration: 178 Training loss: 0.01102660596370697 
Epoch: 17 Iteration: 179 Training loss: 0.006838412955403328 
Epoch: 17 Iteration: 180 Training loss: 0.008373649790883064 
Epoch: 17 Iteration: 180 Validation Acc : 0.9237057566642761 
Epoch: 18 Iteration: 181 Training loss: 0.0067645227536559105 
Epoch: 18 Iteration: 182 Training loss: 0.005036484450101852 
Epoch: 18 Iteration: 183 Training loss: 0.005270333960652351 
Epoch: 18 Iteration: 184 Training loss: 0.005490986630320549 
Epoch: 18 Iteration: 185 Training loss: 0.009306548163294792 
Epoch: 18 Iteration: 186 Training loss: 0.007415710017085075 
Epoch: 18 Iteration: 187 Training loss: 0.0043534282594919205 
Epoch: 18 Iteration: 188 Training loss: 0.01009234320372343 
Epoch: 18 Iteration: 189 Training loss: 0.0060622673481702805 
Epoch: 18 Iteration: 190 Training loss: 0.007435765117406845 
Epoch: 18 Iteration: 190 Validation Acc : 0.9237056970596313 
Epoch: 19 Iteration: 191 Training loss: 0.006119183264672756 
Epoch: 19 Iteration: 192 Training loss: 0.004532421473413706 
Epoch: 19 Iteration: 193 Training loss: 0.004724116064608097 
Epoch: 19 Iteration: 194 Training loss: 0.004960930440574884 
Epoch: 19 Iteration: 195 Training loss: 0.008708097040653229 
Epoch: 19 Iteration: 196 Training loss: 0.006605216301977634 
Epoch: 19 Iteration: 197 Training loss: 0.0037813917733728886 
Epoch: 19 Iteration: 198 Training loss: 0.009467914700508118 
Epoch: 19 Iteration: 199 Training loss: 0.00538559490814805 
Epoch: 19 Iteration: 200 Training loss: 0.006694372743368149 
Epoch: 19 Iteration: 200 Validation Acc : 0.9209809303283691 
Epoch: 20 Iteration: 201 Training loss: 0.005656828638166189 
Epoch: 20 Iteration: 202 Training loss: 0.004051991738379002 
Epoch: 20 Iteration: 203 Training loss: 0.004394339397549629 
Epoch: 20 Iteration: 204 Training loss: 0.0045124078169465065 
Epoch: 20 Iteration: 205 Training loss: 0.011552749201655388 
Epoch: 20 Iteration: 206 Training loss: 0.0059251971542835236 
Epoch: 20 Iteration: 207 Training loss: 0.0034145356621593237 
Epoch: 20 Iteration: 208 Training loss: 0.008959551341831684 
Epoch: 20 Iteration: 209 Training loss: 0.004890224896371365 
Epoch: 20 Iteration: 210 Training loss: 0.0059631578624248505 
Epoch: 20 Iteration: 210 Validation Acc : 0.9209808707237244 
Epoch: 21 Iteration: 211 Training loss: 0.00512709841132164 
Epoch: 21 Iteration: 212 Training loss: 0.003700917586684227 
Epoch: 21 Iteration: 213 Training loss: 0.0038654706440865993 
Epoch: 21 Iteration: 214 Training loss: 0.0041173119097948074 
Epoch: 21 Iteration: 215 Training loss: 0.007200257387012243 
Epoch: 21 Iteration: 216 Training loss: 0.00533096631988883 
Epoch: 21 Iteration: 217 Training loss: 0.003126415889710188 
Epoch: 21 Iteration: 218 Training loss: 0.010448852553963661 
Epoch: 21 Iteration: 219 Training loss: 0.004414948634803295 
Epoch: 21 Iteration: 220 Training loss: 0.005451773758977652 
Epoch: 21 Iteration: 220 Validation Acc : 0.9182561039924622 
Epoch: 22 Iteration: 221 Training loss: 0.004749051295220852 
Epoch: 22 Iteration: 222 Training loss: 0.0033088866621255875 
Epoch: 22 Iteration: 223 Training loss: 0.0036248895339667797 
Epoch: 22 Iteration: 224 Training loss: 0.003756449092179537 
Epoch: 22 Iteration: 225 Training loss: 0.00977832730859518 
Epoch: 22 Iteration: 226 Training loss: 0.004868947900831699 
Epoch: 22 Iteration: 227 Training loss: 0.0028333370573818684 
Epoch: 22 Iteration: 228 Training loss: 0.008219851180911064 
Epoch: 22 Iteration: 229 Training loss: 0.0039802114479243755 
Epoch: 22 Iteration: 230 Training loss: 0.004943479783833027 
Epoch: 22 Iteration: 230 Validation Acc : 0.9209808707237244 
Epoch: 23 Iteration: 231 Training loss: 0.004310018382966518 
Epoch: 23 Iteration: 232 Training loss: 0.003068631747737527 
Epoch: 23 Iteration: 233 Training loss: 0.0032659326680004597 
Epoch: 23 Iteration: 234 Training loss: 0.0034518027678132057 
Epoch: 23 Iteration: 235 Training loss: 0.00741919968277216 
Epoch: 23 Iteration: 236 Training loss: 0.004441240336745977 
Epoch: 23 Iteration: 237 Training loss: 0.0026021269150078297 
Epoch: 23 Iteration: 238 Training loss: 0.009263649582862854 
Epoch: 23 Iteration: 239 Training loss: 0.003684181487187743 
Epoch: 23 Iteration: 240 Training loss: 0.004532490856945515 
Epoch: 23 Iteration: 240 Validation Acc : 0.9182561039924622 
Epoch: 24 Iteration: 241 Training loss: 0.004028473049402237 
Epoch: 24 Iteration: 242 Training loss: 0.0027923129964619875 
Epoch: 24 Iteration: 243 Training loss: 0.0030117786955088377 
Epoch: 24 Iteration: 244 Training loss: 0.0032009293790906668 
Epoch: 24 Iteration: 245 Training loss: 0.008411485701799393 
Epoch: 24 Iteration: 246 Training loss: 0.004083261825144291 
Epoch: 24 Iteration: 247 Training loss: 0.0024125026538968086 
Epoch: 24 Iteration: 248 Training loss: 0.008039208129048347 
Epoch: 24 Iteration: 249 Training loss: 0.003341512754559517 
Epoch: 24 Iteration: 250 Training loss: 0.0041993469931185246 
Epoch: 24 Iteration: 250 Validation Acc : 0.9182561039924622 
Epoch: 25 Iteration: 251 Training loss: 0.0037238840013742447 
Epoch: 25 Iteration: 252 Training loss: 0.0025827717036008835 
Epoch: 25 Iteration: 253 Training loss: 0.0028126691468060017 
Epoch: 25 Iteration: 254 Training loss: 0.002973804483190179 
Epoch: 25 Iteration: 255 Training loss: 0.007499778177589178 
Epoch: 25 Iteration: 256 Training loss: 0.0037820811849087477 
Epoch: 25 Iteration: 257 Training loss: 0.0022391066886484623 
Epoch: 25 Iteration: 258 Training loss: 0.008329386822879314 
Epoch: 25 Iteration: 259 Training loss: 0.003100443398579955 
Epoch: 25 Iteration: 260 Training loss: 0.0038308482617139816 
Epoch: 25 Iteration: 260 Validation Acc : 0.9182561039924622 
Epoch: 26 Iteration: 261 Training loss: 0.0035103438422083855 
Epoch: 26 Iteration: 262 Training loss: 0.002375229261815548 
Epoch: 26 Iteration: 263 Training loss: 0.0025967173278331757 
Epoch: 26 Iteration: 264 Training loss: 0.002769678831100464 
Epoch: 26 Iteration: 265 Training loss: 0.007670050486922264 
Epoch: 26 Iteration: 266 Training loss: 0.003505704691633582 
Epoch: 26 Iteration: 267 Training loss: 0.002068366389721632 
Epoch: 26 Iteration: 268 Training loss: 0.00786683987826109 
Epoch: 26 Iteration: 269 Training loss: 0.0028393324464559555 
Epoch: 26 Iteration: 270 Training loss: 0.003587661311030388 
Epoch: 26 Iteration: 270 Validation Acc : 0.9182561039924622 
Epoch: 27 Iteration: 271 Training loss: 0.0032358833122998476 
Epoch: 27 Iteration: 272 Training loss: 0.002208299934864044 
Epoch: 27 Iteration: 273 Training loss: 0.002403486054390669 
Epoch: 27 Iteration: 274 Training loss: 0.0025873270351439714 
Epoch: 27 Iteration: 275 Training loss: 0.007285645231604576 
Epoch: 27 Iteration: 276 Training loss: 0.0032366756349802017 
Epoch: 27 Iteration: 277 Training loss: 0.001926683122292161 
Epoch: 27 Iteration: 278 Training loss: 0.00782078132033348 
Epoch: 27 Iteration: 279 Training loss: 0.0026375791057944298 
Epoch: 27 Iteration: 280 Training loss: 0.003311580279842019 
Epoch: 27 Iteration: 280 Validation Acc : 0.9182561039924622 
Epoch: 28 Iteration: 281 Training loss: 0.0030275981407612562 
Epoch: 28 Iteration: 282 Training loss: 0.0020588282495737076 
Epoch: 28 Iteration: 283 Training loss: 0.002227107062935829 
Epoch: 28 Iteration: 284 Training loss: 0.002432685811072588 
Epoch: 28 Iteration: 285 Training loss: 0.007337748538702726 
Epoch: 28 Iteration: 286 Training loss: 0.0030098888091742992 
Epoch: 28 Iteration: 287 Training loss: 0.0017807185649871826 
Epoch: 28 Iteration: 288 Training loss: 0.007531753741204739 
Epoch: 28 Iteration: 289 Training loss: 0.0024368136655539274 
Epoch: 28 Iteration: 290 Training loss: 0.0030996843706816435 
Epoch: 28 Iteration: 290 Validation Acc : 0.9155312776565552 
Epoch: 29 Iteration: 291 Training loss: 0.002827732590958476 
Epoch: 29 Iteration: 292 Training loss: 0.0019054554868489504 
Epoch: 29 Iteration: 293 Training loss: 0.00207553431391716 
Epoch: 29 Iteration: 294 Training loss: 0.0022717027459293604 
Epoch: 29 Iteration: 295 Training loss: 0.007028082385659218 
Epoch: 29 Iteration: 296 Training loss: 0.0027916899416595697 
Epoch: 29 Iteration: 297 Training loss: 0.0016619055531919003 
Epoch: 29 Iteration: 298 Training loss: 0.007486375980079174 
Epoch: 29 Iteration: 299 Training loss: 0.002288162475451827 
Epoch: 29 Iteration: 300 Training loss: 0.0028884103521704674 
Epoch: 29 Iteration: 300 Validation Acc : 0.9155312776565552 
Epoch: 30 Iteration: 301 Training loss: 0.002647215034812689 
Epoch: 30 Iteration: 302 Training loss: 0.0017773498548194766 
Epoch: 30 Iteration: 303 Training loss: 0.0019373929826542735 
Epoch: 30 Iteration: 304 Training loss: 0.002136124996468425 
Epoch: 30 Iteration: 305 Training loss: 0.007104083430022001 
Epoch: 30 Iteration: 306 Training loss: 0.002612742828205228 
Epoch: 30 Iteration: 307 Training loss: 0.001562255434691906 
Epoch: 30 Iteration: 308 Training loss: 0.007209340576082468 
Epoch: 30 Iteration: 309 Training loss: 0.002119460143148899 
Epoch: 30 Iteration: 310 Training loss: 0.0027045749593526125 
Epoch: 30 Iteration: 310 Validation Acc : 0.9155312776565552 
Epoch: 31 Iteration: 311 Training loss: 0.0024818112142384052 
Epoch: 31 Iteration: 312 Training loss: 0.0016603815602138638 
Epoch: 31 Iteration: 313 Training loss: 0.0018171777483075857 
Epoch: 31 Iteration: 314 Training loss: 0.002008438343182206 
Epoch: 31 Iteration: 315 Training loss: 0.006846240255981684 
Epoch: 31 Iteration: 316 Training loss: 0.0024368171580135822 
Epoch: 31 Iteration: 317 Training loss: 0.0014564909506589174 
Epoch: 31 Iteration: 318 Training loss: 0.007222137413918972 
Epoch: 31 Iteration: 319 Training loss: 0.00198850454762578 
Epoch: 31 Iteration: 320 Training loss: 0.002534918487071991 
Epoch: 31 Iteration: 320 Validation Acc : 0.9155312776565552 
Epoch: 32 Iteration: 321 Training loss: 0.0023355879820883274 
Epoch: 32 Iteration: 322 Training loss: 0.0015588233945891261 
Epoch: 32 Iteration: 323 Training loss: 0.0017199813155457377 
Epoch: 32 Iteration: 324 Training loss: 0.0018939857836812735 
Epoch: 32 Iteration: 325 Training loss: 0.0068643647246062756 
Epoch: 32 Iteration: 326 Training loss: 0.002298475941643119 
Epoch: 32 Iteration: 327 Training loss: 0.001376560889184475 
Epoch: 32 Iteration: 328 Training loss: 0.007036794442683458 
Epoch: 32 Iteration: 329 Training loss: 0.0018372879130765796 
Epoch: 32 Iteration: 330 Training loss: 0.002377368975430727 
Epoch: 32 Iteration: 330 Validation Acc : 0.9182561039924622 
Epoch: 33 Iteration: 331 Training loss: 0.0021942423190921545 
Epoch: 33 Iteration: 332 Training loss: 0.0014802452642470598 
Epoch: 33 Iteration: 333 Training loss: 0.0016185572603717446 
Epoch: 33 Iteration: 334 Training loss: 0.0017890853341668844 
Epoch: 33 Iteration: 335 Training loss: 0.006835445296019316 
Epoch: 33 Iteration: 336 Training loss: 0.0021506468765437603 
Epoch: 33 Iteration: 337 Training loss: 0.0012923850445076823 
Epoch: 33 Iteration: 338 Training loss: 0.0069603510200977325 
Epoch: 33 Iteration: 339 Training loss: 0.0017135380767285824 
Epoch: 33 Iteration: 340 Training loss: 0.002241274109110236 
Epoch: 33 Iteration: 340 Validation Acc : 0.9155312776565552 
Epoch: 34 Iteration: 341 Training loss: 0.0020430271979421377 
Epoch: 34 Iteration: 342 Training loss: 0.0013969833962619305 
Epoch: 34 Iteration: 343 Training loss: 0.0015204419614747167 
Epoch: 34 Iteration: 344 Training loss: 0.001672284910455346 
Epoch: 34 Iteration: 345 Training loss: 0.006656171288341284 
Epoch: 34 Iteration: 346 Training loss: 0.0020156577229499817 
Epoch: 34 Iteration: 347 Training loss: 0.0012115477584302425 
Epoch: 34 Iteration: 348 Training loss: 0.006949689704924822 
Epoch: 34 Iteration: 349 Training loss: 0.0016238276148214936 
Epoch: 34 Iteration: 350 Training loss: 0.0020870051812380552 
Epoch: 34 Iteration: 350 Validation Acc : 0.9182561039924622 
Epoch: 35 Iteration: 351 Training loss: 0.001895005232654512 
Epoch: 35 Iteration: 352 Training loss: 0.0013176309876143932 
Epoch: 35 Iteration: 353 Training loss: 0.0014419881626963615 
Epoch: 35 Iteration: 354 Training loss: 0.0015852443175390363 
Epoch: 35 Iteration: 355 Training loss: 0.006705255247652531 
Epoch: 35 Iteration: 356 Training loss: 0.0018945668125525117 
Epoch: 35 Iteration: 357 Training loss: 0.0011434457264840603 
Epoch: 35 Iteration: 358 Training loss: 0.006801100447773933 
Epoch: 35 Iteration: 359 Training loss: 0.001516863121651113 
Epoch: 35 Iteration: 360 Training loss: 0.0019587958231568336 
Epoch: 35 Iteration: 360 Validation Acc : 0.9155312776565552 
Epoch: 36 Iteration: 361 Training loss: 0.0017861778615042567 
Epoch: 36 Iteration: 362 Training loss: 0.0012527095386758447 
Epoch: 36 Iteration: 363 Training loss: 0.001359054702334106 
Epoch: 36 Iteration: 364 Training loss: 0.0015212814323604107 
Epoch: 36 Iteration: 365 Training loss: 0.006588096264749765 
Epoch: 36 Iteration: 366 Training loss: 0.0017731962725520134 
Epoch: 36 Iteration: 367 Training loss: 0.001076943357475102 
Epoch: 36 Iteration: 368 Training loss: 0.006832135375589132 
Epoch: 36 Iteration: 369 Training loss: 0.0014527811435982585 
Epoch: 36 Iteration: 370 Training loss: 0.001876284833997488 
Epoch: 36 Iteration: 370 Validation Acc : 0.9155312776565552 
Epoch: 37 Iteration: 371 Training loss: 0.0017447543796151876 
Epoch: 37 Iteration: 372 Training loss: 0.0011902551632374525 
Epoch: 37 Iteration: 373 Training loss: 0.0012981268810108304 
Epoch: 37 Iteration: 374 Training loss: 0.0013942124787718058 
Epoch: 37 Iteration: 375 Training loss: 0.006604343187063932 
Epoch: 37 Iteration: 376 Training loss: 0.0016773452516645193 
Epoch: 37 Iteration: 377 Training loss: 0.001017313334159553 
Epoch: 37 Iteration: 378 Training loss: 0.006720247678458691 
Epoch: 37 Iteration: 379 Training loss: 0.001364421215839684 
Epoch: 37 Iteration: 380 Training loss: 0.001749285962432623 
Epoch: 37 Iteration: 380 Validation Acc : 0.9155312776565552 
Epoch: 38 Iteration: 381 Training loss: 0.0016465544467791915 
Epoch: 38 Iteration: 382 Training loss: 0.001106512499973178 
Epoch: 38 Iteration: 383 Training loss: 0.0012269408907741308 
Epoch: 38 Iteration: 384 Training loss: 0.0012950017116963863 
Epoch: 38 Iteration: 385 Training loss: 0.0065186345018446445 
Epoch: 38 Iteration: 386 Training loss: 0.0015717125497758389 
Epoch: 38 Iteration: 387 Training loss: 0.0009537894511595368 
Epoch: 38 Iteration: 388 Training loss: 0.006743108853697777 
Epoch: 38 Iteration: 389 Training loss: 0.0012949744705110788 
Epoch: 38 Iteration: 390 Training loss: 0.0016580403316766024 
Epoch: 38 Iteration: 390 Validation Acc : 0.912806510925293 
Epoch: 39 Iteration: 391 Training loss: 0.0015577297890558839 
Epoch: 39 Iteration: 392 Training loss: 0.0010471526766195893 
Epoch: 39 Iteration: 393 Training loss: 0.0011641879100352526 
Epoch: 39 Iteration: 394 Training loss: 0.0012353061465546489 
Epoch: 39 Iteration: 395 Training loss: 0.006522973068058491 
Epoch: 39 Iteration: 396 Training loss: 0.0014907901640981436 
Epoch: 39 Iteration: 397 Training loss: 0.0009095823625102639 
Epoch: 39 Iteration: 398 Training loss: 0.006659322418272495 
Epoch: 39 Iteration: 399 Training loss: 0.0012196255847811699 
Epoch: 39 Iteration: 400 Training loss: 0.001560149248689413 
Epoch: 39 Iteration: 400 Validation Acc : 0.912806510925293 
Epoch: 40 Iteration: 401 Training loss: 0.0014848036225885153 
Epoch: 40 Iteration: 402 Training loss: 0.0009916642447933555 
Epoch: 40 Iteration: 403 Training loss: 0.0010919900378212333 
Epoch: 40 Iteration: 404 Training loss: 0.0011675163405016065 
Epoch: 40 Iteration: 405 Training loss: 0.006457787938416004 
Epoch: 40 Iteration: 406 Training loss: 0.0014232038520276546 
Epoch: 40 Iteration: 407 Training loss: 0.0008598227286711335 
Epoch: 40 Iteration: 408 Training loss: 0.00667093601077795 
Epoch: 40 Iteration: 409 Training loss: 0.0011678856099024415 
Epoch: 40 Iteration: 410 Training loss: 0.0014760687481611967 
Epoch: 40 Iteration: 410 Validation Acc : 0.912806510925293 
Epoch: 41 Iteration: 411 Training loss: 0.0014066901057958603 
Epoch: 41 Iteration: 412 Training loss: 0.0009468008065596223 
Epoch: 41 Iteration: 413 Training loss: 0.0010380453895777464 
Epoch: 41 Iteration: 414 Training loss: 0.001113480655476451 
Epoch: 41 Iteration: 415 Training loss: 0.006517396308481693 
Epoch: 41 Iteration: 416 Training loss: 0.0013424588833004236 
Epoch: 41 Iteration: 417 Training loss: 0.0008142878650687635 
Epoch: 41 Iteration: 418 Training loss: 0.006558889988809824 
Epoch: 41 Iteration: 419 Training loss: 0.001104080118238926 
Epoch: 41 Iteration: 420 Training loss: 0.0014034988125786185 
Epoch: 41 Iteration: 420 Validation Acc : 0.912806510925293 
Epoch: 42 Iteration: 421 Training loss: 0.0013242093846201897 
Epoch: 42 Iteration: 422 Training loss: 0.0009110455284826458 
Epoch: 42 Iteration: 423 Training loss: 0.0009680542279966176 
Epoch: 42 Iteration: 424 Training loss: 0.0010718974517658353 
Epoch: 42 Iteration: 425 Training loss: 0.006373717915266752 
Epoch: 42 Iteration: 426 Training loss: 0.0012772631598636508 
Epoch: 42 Iteration: 427 Training loss: 0.0007696409593336284 
Epoch: 42 Iteration: 428 Training loss: 0.0066603971645236015 
Epoch: 42 Iteration: 429 Training loss: 0.0010597482323646545 
Epoch: 42 Iteration: 430 Training loss: 0.0013333174865692854 
Epoch: 42 Iteration: 430 Validation Acc : 0.9182561039924622 
Epoch: 43 Iteration: 431 Training loss: 0.0012372748460620642 
Epoch: 43 Iteration: 432 Training loss: 0.0008591676596552134 
Epoch: 43 Iteration: 433 Training loss: 0.0009249410941265523 
Epoch: 43 Iteration: 434 Training loss: 0.0010050199925899506 
Epoch: 43 Iteration: 435 Training loss: 0.006450872868299484 
Epoch: 43 Iteration: 436 Training loss: 0.001238390919752419 
Epoch: 43 Iteration: 437 Training loss: 0.0007465921808034182 
Epoch: 43 Iteration: 438 Training loss: 0.006539958529174328 
Epoch: 43 Iteration: 439 Training loss: 0.0010058193001896143 
Epoch: 43 Iteration: 440 Training loss: 0.001266833976842463 
Epoch: 43 Iteration: 440 Validation Acc : 0.9155312776565552 
Epoch: 44 Iteration: 441 Training loss: 0.001165714580565691 
Epoch: 44 Iteration: 442 Training loss: 0.0008137272670865059 
Epoch: 44 Iteration: 443 Training loss: 0.0008816755726002157 
Epoch: 44 Iteration: 444 Training loss: 0.000961583573371172 
Epoch: 44 Iteration: 445 Training loss: 0.006394144147634506 
Epoch: 44 Iteration: 446 Training loss: 0.001151935663074255 
Epoch: 44 Iteration: 447 Training loss: 0.0006980651523917913 
Epoch: 44 Iteration: 448 Training loss: 0.006555357947945595 
Epoch: 44 Iteration: 449 Training loss: 0.0009802018757909536 
Epoch: 44 Iteration: 450 Training loss: 0.001192904426716268 
Epoch: 44 Iteration: 450 Validation Acc : 0.912806510925293 
Epoch: 45 Iteration: 451 Training loss: 0.0011228345101699233 
Epoch: 45 Iteration: 452 Training loss: 0.0007781394524499774 
Epoch: 45 Iteration: 453 Training loss: 0.0008604219183325768 
Epoch: 45 Iteration: 454 Training loss: 0.0009233683813363314 
Epoch: 45 Iteration: 455 Training loss: 0.009843658655881882 
Epoch: 45 Iteration: 456 Training loss: 0.0011015220079571009 
Epoch: 45 Iteration: 457 Training loss: 0.0006711173336952925 
Epoch: 45 Iteration: 458 Training loss: 0.00614022184163332 
Epoch: 45 Iteration: 459 Training loss: 0.0009271765011362731 
Epoch: 45 Iteration: 460 Training loss: 0.001126301591284573 
Epoch: 45 Iteration: 460 Validation Acc : 0.912806510925293 
Epoch: 46 Iteration: 461 Training loss: 0.0010667764581739902 
Epoch: 46 Iteration: 462 Training loss: 0.0007480287458747625 
Epoch: 46 Iteration: 463 Training loss: 0.000813773600384593 
Epoch: 46 Iteration: 464 Training loss: 0.0008860202506184578 
Epoch: 46 Iteration: 465 Training loss: 0.005546879954636097 
Epoch: 46 Iteration: 466 Training loss: 0.0010529323481023312 
Epoch: 46 Iteration: 467 Training loss: 0.0006495543057098985 
Epoch: 46 Iteration: 468 Training loss: 0.00861878041177988 
Epoch: 46 Iteration: 469 Training loss: 0.0008892008336260915 
Epoch: 46 Iteration: 470 Training loss: 0.0010640033287927508 
Epoch: 46 Iteration: 470 Validation Acc : 0.9155312776565552 
Epoch: 47 Iteration: 471 Training loss: 0.0010194028727710247 
Epoch: 47 Iteration: 472 Training loss: 0.0006960342871025205 
Epoch: 47 Iteration: 473 Training loss: 0.0007792428368702531 
Epoch: 47 Iteration: 474 Training loss: 0.0008302563219331205 
Epoch: 47 Iteration: 475 Training loss: 0.008507071994245052 
Epoch: 47 Iteration: 476 Training loss: 0.0010008043609559536 
Epoch: 47 Iteration: 477 Training loss: 0.0006104362546466291 
Epoch: 47 Iteration: 478 Training loss: 0.006410068832337856 
Epoch: 47 Iteration: 479 Training loss: 0.0008394430624321103 
Epoch: 47 Iteration: 480 Training loss: 0.0010255005909129977 
Epoch: 47 Iteration: 480 Validation Acc : 0.9155312776565552 
Epoch: 48 Iteration: 481 Training loss: 0.0009536945726722479 
Epoch: 48 Iteration: 482 Training loss: 0.0006664016982540488 
Epoch: 48 Iteration: 483 Training loss: 0.000733580905944109 
Epoch: 48 Iteration: 484 Training loss: 0.000798105145804584 
Epoch: 48 Iteration: 485 Training loss: 0.006459811702370644 
Epoch: 48 Iteration: 486 Training loss: 0.0009594120783731341 
Epoch: 48 Iteration: 487 Training loss: 0.0005866530118510127 
Epoch: 48 Iteration: 488 Training loss: 0.008068476803600788 
Epoch: 48 Iteration: 489 Training loss: 0.0008117483812384307 
Epoch: 48 Iteration: 490 Training loss: 0.0009581422200426459 
Epoch: 48 Iteration: 490 Validation Acc : 0.9155312776565552 
Epoch: 49 Iteration: 491 Training loss: 0.000908504705876112 
Epoch: 49 Iteration: 492 Training loss: 0.0006350197363644838 
Epoch: 49 Iteration: 493 Training loss: 0.0006916798884049058 
Epoch: 49 Iteration: 494 Training loss: 0.0007367482176050544 
Epoch: 49 Iteration: 495 Training loss: 0.007499780971556902 
Epoch: 49 Iteration: 496 Training loss: 0.0009238996426574886 
Epoch: 49 Iteration: 497 Training loss: 0.0005637882859446108 
Epoch: 49 Iteration: 498 Training loss: 0.007057153154164553 
Epoch: 49 Iteration: 499 Training loss: 0.0007760635926388204 
Epoch: 49 Iteration: 500 Training loss: 0.0009171898709610105 
Epoch: 49 Iteration: 500 Validation Acc : 0.9155312776565552 

Testing

Below you see the test accuracy. You can also see the network's predictions for individual images.


In [14]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    
    feed = {inputs_: test_x,
            labels_: test_y}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))


Test accuracy: 0.8992

In [15]:
%matplotlib inline

import matplotlib.pyplot as plt
from scipy.ndimage import imread  # removed in newer SciPy; matplotlib.pyplot.imread also works

Below, feel free to choose images and see how the trained classifier predicts the flowers in them.


In [16]:
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)


Out[16]:
<matplotlib.image.AxesImage at 0x7f882ed2b278>

In [17]:
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists.  Will not create again.')
else:
    #create vgg
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16.Vgg16()
        vgg.build(input_)


"vgg" object already exists.  Will not create again.

In [18]:
with tf.Session() as sess:
    img = utils.load_image(test_img_path)
    img = img.reshape((1, 224, 224, 3))

    feed_dict = {input_: img}
    code = sess.run(vgg.relu6, feed_dict=feed_dict)
        
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    
    feed = {inputs_: code}
    prediction = sess.run(predicted, feed_dict=feed).squeeze()

In [19]:
plt.imshow(test_img)


Out[19]:
<matplotlib.image.AxesImage at 0x7f882ec5ea58>

In [20]:
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), pre.classes_)
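To report just the most likely class instead of the full distribution, take the argmax of the prediction vector and look it up in pre.classes_:

# Most likely class for the test image
print(pre.classes_[np.argmax(prediction)])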