Transfer Learning

Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets trained on huge datasets like ImageNet take weeks to train on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor or as an initial network to fine-tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.

VGGNet is great because it's simple and has great performance, coming in second in the 2014 ImageNet competition. The idea here is that we keep all the convolutional layers but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images and easily train a simple classifier on top of it. Specifically, we'll take the output of the first fully connected layer, with 4096 units, after thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.

You can read more about transfer learning from the CS231n course notes.

Pretrained VGGNet

We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.

git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg

This is a really nice implementation of VGGNet and quite easy to work with. The network has already been trained and the parameters are available from this link. Once you've cloned the repo into the folder containing this notebook, download the parameter file using the next cell.


In [1]:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm

vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
    raise Exception("VGG directory doesn't exist!")

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(vgg_dir + "vgg16.npy"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
        urlretrieve(
            'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
            vgg_dir + 'vgg16.npy',
            pbar.hook)
else:
    print("Parameter file already exists!")


VGG16 Parameters: 553MB [01:38, 5.60MB/s]                               

Flower power

Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.


In [2]:
import tarfile

dataset_folder_path = 'flower_photos'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('flower_photos.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
        urlretrieve(
            'http://download.tensorflow.org/example_images/flower_photos.tgz',
            'flower_photos.tar.gz',
            pbar.hook)

if not isdir(dataset_folder_path):
    with tarfile.open('flower_photos.tar.gz') as tar:
        tar.extractall()
        tar.close()


Flowers Dataset: 229MB [00:04, 53.3MB/s]                              

ConvNet Codes

Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.

Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):

self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')

self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')

self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')

self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')

self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')

self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)

So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use

with tf.Session() as sess:
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        vgg.build(input_)

This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,

feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)

In [4]:
import os

import numpy as np
import tensorflow as tf

from tensorflow_vgg import vgg16
from tensorflow_vgg import utils

In [5]:
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]

Below I'm running images through the VGG network in batches.

Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).


In [6]:
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []

codes = None

with tf.Session() as sess:
    
    # TODO: Build the vgg network here
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        vgg.build(input_)

    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)
            
            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                
                # Image batch to pass to VGG network
                images = np.concatenate(batch)
                
                # TODO: Get the values from the relu6 layer of the VGG network
                feed_dict = {input_: images}
                codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
                
                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))
                
                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))


/home/carnd/deep-learning/transfer-learning/tensorflow_vgg/vgg16.npy
npy file loaded
build model started
build model finished: 0s
Starting dandelion images
10 images processed
...
898 images processed
Starting roses images
10 images processed
...
641 images processed
Starting daisy images
10 images processed
...
633 images processed
Starting tulips images
10 images processed
...
799 images processed
Starting sunflowers images
10 images processed
...
699 images processed

In [8]:
# write codes to file (tofile needs a binary-mode file)
with open('codes', 'wb') as f:
    codes.tofile(f)
    
# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)

Building the Classifier

Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.


In [9]:
# read codes and labels from file
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f:
    codes = np.fromfile(f, dtype=np.float32)
    codes = codes.reshape((len(labels), -1))
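Note that `tofile` writes raw bytes with no header, so the dtype and shape have to be supplied again when reading, which is why the `reshape` above is needed. Here's a minimal round-trip sketch (using a temporary file rather than the notebook's `codes` file):

```python
import os
import tempfile

import numpy as np

arr = np.arange(12, dtype=np.float32).reshape(3, 4)

path = os.path.join(tempfile.mkdtemp(), 'codes_demo')
with open(path, 'wb') as f:
    arr.tofile(f)           # raw float32 bytes, no shape/dtype metadata

with open(path, 'rb') as f:
    # dtype and shape must be re-specified on the way back in
    loaded = np.fromfile(f, dtype=np.float32).reshape(3, -1)

print(np.array_equal(arr, loaded))  # True
```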

Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!

Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.


In [10]:
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()

In [13]:
labels_vecs = lb.fit_transform(labels)  # one-hot encoded labels array
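As a quick sanity check, here's `LabelBinarizer` on made-up labels (a standalone sketch, not the flower data) — it maps each string label to a one-hot row, with the columns ordered by the sorted class names:

```python
from sklearn.preprocessing import LabelBinarizer

# Toy labels standing in for the flower classes
toy_labels = ['roses', 'daisy', 'roses', 'tulips']

lb = LabelBinarizer()
one_hot = lb.fit_transform(toy_labels)

# Classes are sorted alphabetically: ['daisy', 'roses', 'tulips'],
# so 'roses' becomes [0, 1, 0]
print(lb.classes_)
print(one_hot)
```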

Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with test sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.

You can create the splitter like so:

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)

Then split the data with

splitter = ss.split(x, y)

ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
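To see the mechanics on something small, here's a self-contained sketch on toy data (two balanced classes, `random_state` fixed only so the example is reproducible):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy data: 10 samples, two classes, 50/50
x = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
# ss.split returns a generator; next() pulls out the index arrays
train_idx, test_idx = next(ss.split(x, y))

# 80/20 split that preserves the class ratio in both halves
print(len(train_idx), len(test_idx))       # 8 2
print(sorted(y[test_idx].tolist()))        # [0, 1] -- one sample per class
```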

Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.


In [14]:
from sklearn.model_selection import StratifiedShuffleSplit

In [15]:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
splitter = ss.split(codes, labels)

In [17]:
train_idx, val_idx = next(splitter)

In [21]:
# Use half of the held-out indices for validation, half for test
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]

In [22]:
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y =  codes[test_idx], labels_vecs[test_idx]

In [23]:
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)


Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)

If you did it right, you should see these sizes for the training sets:

Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)

Classifier layers

Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.

Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use cross entropy to calculate the cost.


In [26]:
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])

# TODO: Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_, 256,
                                       weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                                       biases_initializer=tf.zeros_initializer())

logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None,
                                           weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                                           biases_initializer=tf.zeros_initializer())
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)

optimizer = tf.train.AdamOptimizer().minimize(cost)

# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
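The accuracy op above just compares the argmax of the predictions with the argmax of the one-hot labels. The same computation in plain NumPy, on fake predictions (a toy sketch, not part of the notebook's graph):

```python
import numpy as np

# Fake softmax outputs for 4 samples over 3 classes
predicted = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.3, 0.3, 0.4],
                      [0.6, 0.2, 0.2]])
# One-hot ground-truth labels
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0],
                   [1, 0, 0]])

# axis=1: take the max along each row, i.e. per sample
correct = np.argmax(predicted, axis=1) == np.argmax(labels, axis=1)
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 0.75 -- sample 3 is predicted as class 2, not class 0
```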

In [27]:
# print(predicted)  # when computing argmax we need to specify dimension 1, i.e. take the max along each row


Tensor("Softmax_1:0", shape=(?, 5), dtype=float32)

In [28]:
# print(labels_,logits)


Tensor("Placeholder_6:0", shape=(?, 5), dtype=int64) Tensor("fully_connected_5/BiasAdd:0", shape=(?, 5), dtype=float32)

Batches!

Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.


In [29]:
def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x)//n_batches
    
    for ii in range(0, n_batches*batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches-1)*batch_size:
            X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] 
        # On the last batch, grab the rest of the data
        else:
            X, Y = x[ii:], y[ii:]
        # I love generators
        yield X, Y
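A quick check of the batching logic on toy arrays (the function below is a copy of the generator from the cell above, repeated here only so the sketch is self-contained): with 23 samples and 5 batches, the batch size is 23 // 5 = 4, and the last batch absorbs the leftover samples.

```python
import numpy as np

def get_batches(x, y, n_batches=10):
    """Copy of the generator defined in the cell above."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            # Last batch: grab everything that's left
            X, Y = x[ii:], y[ii:]
        yield X, Y

x = np.arange(23)
y = np.arange(23) * 10

sizes = [len(bx) for bx, by in get_batches(x, y, n_batches=5)]
print(sizes)  # [4, 4, 4, 4, 7] -- all 23 samples are included
```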

Training

Here, we'll train the network.

Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!


In [31]:
epochs = 100
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
    
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for x, y in get_batches(train_x, train_y):
            feed = {inputs_: x,
                    labels_: y}
            loss, _ = sess.run([cost, optimizer], feed_dict=feed)
            print("Epoch: {}/{}".format(e+1, epochs),
                  "Iteration: {}".format(iteration),
                  "Training loss: {:.5f}".format(loss))
            iteration += 1
            
            if iteration % 5 == 0:
                feed = {inputs_: val_x,
                        labels_: val_y}
                val_acc = sess.run(accuracy, feed_dict=feed)
                print("Epoch: {}/{}".format(e+1, epochs),
                      "Iteration: {}".format(iteration),
                      "Validation Acc: {:.4f}".format(val_acc))
    saver.save(sess, "checkpoints/flowers.ckpt")


Epoch: 1/100 Iteration: 0 Training loss: 23.83269
Epoch: 1/100 Iteration: 1 Training loss: 13.62282
Epoch: 1/100 Iteration: 2 Training loss: 10.96373
Epoch: 1/100 Iteration: 3 Training loss: 6.62250
Epoch: 1/100 Iteration: 4 Training loss: 5.26765
Epoch: 0/100 Iteration: 5 Validation Acc: 0.7084
...
Epoch: 17/100 Iteration: 175 Validation Acc: 0.8747
Epoch: 18/100 Iteration: 175 Training loss: 0.00009
Epoch: 18/100 Iteration: 176 Training loss: 0.00005
Epoch: 18/100 Iteration: 177 Training loss: 0.00009
Epoch: 18/100 Iteration: 178 Training loss: 0.00019
Epoch: 18/100 Iteration: 179 Training loss: 0.00011
Epoch: 17/100 Iteration: 180 Validation Acc: 0.8747
Epoch: 19/100 Iteration: 180 Training loss: 0.00009
Epoch: 19/100 Iteration: 181 Training loss: 0.00008
Epoch: 19/100 Iteration: 182 Training loss: 0.00008
Epoch: 19/100 Iteration: 183 Training loss: 0.00012
Epoch: 19/100 Iteration: 184 Training loss: 0.00005
Epoch: 18/100 Iteration: 185 Validation Acc: 0.8747
Epoch: 19/100 Iteration: 185 Training loss: 0.00008
Epoch: 19/100 Iteration: 186 Training loss: 0.00005
Epoch: 19/100 Iteration: 187 Training loss: 0.00009
Epoch: 19/100 Iteration: 188 Training loss: 0.00018
Epoch: 19/100 Iteration: 189 Training loss: 0.00010
Epoch: 18/100 Iteration: 190 Validation Acc: 0.8747
Epoch: 20/100 Iteration: 190 Training loss: 0.00008
Epoch: 20/100 Iteration: 191 Training loss: 0.00007
Epoch: 20/100 Iteration: 192 Training loss: 0.00007
Epoch: 20/100 Iteration: 193 Training loss: 0.00012
Epoch: 20/100 Iteration: 194 Training loss: 0.00005
Epoch: 19/100 Iteration: 195 Validation Acc: 0.8747
Epoch: 20/100 Iteration: 195 Training loss: 0.00008
Epoch: 20/100 Iteration: 196 Training loss: 0.00005
Epoch: 20/100 Iteration: 197 Training loss: 0.00009
Epoch: 20/100 Iteration: 198 Training loss: 0.00018
Epoch: 20/100 Iteration: 199 Training loss: 0.00010
Epoch: 19/100 Iteration: 200 Validation Acc: 0.8747
Epoch: 21/100 Iteration: 200 Training loss: 0.00008
Epoch: 21/100 Iteration: 201 Training loss: 0.00007
Epoch: 21/100 Iteration: 202 Training loss: 0.00007
Epoch: 21/100 Iteration: 203 Training loss: 0.00011
Epoch: 21/100 Iteration: 204 Training loss: 0.00004
Epoch: 20/100 Iteration: 205 Validation Acc: 0.8747
Epoch: 21/100 Iteration: 205 Training loss: 0.00008
Epoch: 21/100 Iteration: 206 Training loss: 0.00005
Epoch: 21/100 Iteration: 207 Training loss: 0.00008
Epoch: 21/100 Iteration: 208 Training loss: 0.00017
Epoch: 21/100 Iteration: 209 Training loss: 0.00009
Epoch: 20/100 Iteration: 210 Validation Acc: 0.8747
Epoch: 22/100 Iteration: 210 Training loss: 0.00007
Epoch: 22/100 Iteration: 211 Training loss: 0.00007
Epoch: 22/100 Iteration: 212 Training loss: 0.00007
Epoch: 22/100 Iteration: 213 Training loss: 0.00011
Epoch: 22/100 Iteration: 214 Training loss: 0.00004
Epoch: 21/100 Iteration: 215 Validation Acc: 0.8747
Epoch: 22/100 Iteration: 215 Training loss: 0.00007
Epoch: 22/100 Iteration: 216 Training loss: 0.00005
Epoch: 22/100 Iteration: 217 Training loss: 0.00008
Epoch: 22/100 Iteration: 218 Training loss: 0.00016
Epoch: 22/100 Iteration: 219 Training loss: 0.00009
Epoch: 21/100 Iteration: 220 Validation Acc: 0.8719
Epoch: 23/100 Iteration: 220 Training loss: 0.00007
Epoch: 23/100 Iteration: 221 Training loss: 0.00007
Epoch: 23/100 Iteration: 222 Training loss: 0.00006
Epoch: 23/100 Iteration: 223 Training loss: 0.00011
Epoch: 23/100 Iteration: 224 Training loss: 0.00004
Epoch: 22/100 Iteration: 225 Validation Acc: 0.8719
Epoch: 23/100 Iteration: 225 Training loss: 0.00007
Epoch: 23/100 Iteration: 226 Training loss: 0.00005
Epoch: 23/100 Iteration: 227 Training loss: 0.00008
Epoch: 23/100 Iteration: 228 Training loss: 0.00016
Epoch: 23/100 Iteration: 229 Training loss: 0.00009
Epoch: 22/100 Iteration: 230 Validation Acc: 0.8719
Epoch: 24/100 Iteration: 230 Training loss: 0.00007
Epoch: 24/100 Iteration: 231 Training loss: 0.00006
Epoch: 24/100 Iteration: 232 Training loss: 0.00006
Epoch: 24/100 Iteration: 233 Training loss: 0.00010
Epoch: 24/100 Iteration: 234 Training loss: 0.00004
Epoch: 23/100 Iteration: 235 Validation Acc: 0.8719
Epoch: 24/100 Iteration: 235 Training loss: 0.00007
Epoch: 24/100 Iteration: 236 Training loss: 0.00005
Epoch: 24/100 Iteration: 237 Training loss: 0.00008
Epoch: 24/100 Iteration: 238 Training loss: 0.00015
Epoch: 24/100 Iteration: 239 Training loss: 0.00009
Epoch: 23/100 Iteration: 240 Validation Acc: 0.8719
Epoch: 25/100 Iteration: 240 Training loss: 0.00006
Epoch: 25/100 Iteration: 241 Training loss: 0.00006
Epoch: 25/100 Iteration: 242 Training loss: 0.00006
Epoch: 25/100 Iteration: 243 Training loss: 0.00010
Epoch: 25/100 Iteration: 244 Training loss: 0.00004
Epoch: 24/100 Iteration: 245 Validation Acc: 0.8719
Epoch: 25/100 Iteration: 245 Training loss: 0.00007
Epoch: 25/100 Iteration: 246 Training loss: 0.00005
Epoch: 25/100 Iteration: 247 Training loss: 0.00007
Epoch: 25/100 Iteration: 248 Training loss: 0.00015
Epoch: 25/100 Iteration: 249 Training loss: 0.00008
Epoch: 24/100 Iteration: 250 Validation Acc: 0.8719
Epoch: 26/100 Iteration: 250 Training loss: 0.00006
Epoch: 26/100 Iteration: 251 Training loss: 0.00006
Epoch: 26/100 Iteration: 252 Training loss: 0.00006
Epoch: 26/100 Iteration: 253 Training loss: 0.00010
Epoch: 26/100 Iteration: 254 Training loss: 0.00004
Epoch: 25/100 Iteration: 255 Validation Acc: 0.8719
Epoch: 26/100 Iteration: 255 Training loss: 0.00006
Epoch: 26/100 Iteration: 256 Training loss: 0.00005
Epoch: 26/100 Iteration: 257 Training loss: 0.00007
Epoch: 26/100 Iteration: 258 Training loss: 0.00014
Epoch: 26/100 Iteration: 259 Training loss: 0.00008
Epoch: 25/100 Iteration: 260 Validation Acc: 0.8719
Epoch: 27/100 Iteration: 260 Training loss: 0.00006
Epoch: 27/100 Iteration: 261 Training loss: 0.00006
Epoch: 27/100 Iteration: 262 Training loss: 0.00005
Epoch: 27/100 Iteration: 263 Training loss: 0.00009
Epoch: 27/100 Iteration: 264 Training loss: 0.00004
Epoch: 26/100 Iteration: 265 Validation Acc: 0.8719
Epoch: 27/100 Iteration: 265 Training loss: 0.00006
Epoch: 27/100 Iteration: 266 Training loss: 0.00004
Epoch: 27/100 Iteration: 267 Training loss: 0.00007
Epoch: 27/100 Iteration: 268 Training loss: 0.00014
Epoch: 27/100 Iteration: 269 Training loss: 0.00008
Epoch: 26/100 Iteration: 270 Validation Acc: 0.8719
Epoch: 28/100 Iteration: 270 Training loss: 0.00006
Epoch: 28/100 Iteration: 271 Training loss: 0.00006
Epoch: 28/100 Iteration: 272 Training loss: 0.00005
Epoch: 28/100 Iteration: 273 Training loss: 0.00009
Epoch: 28/100 Iteration: 274 Training loss: 0.00003
Epoch: 27/100 Iteration: 275 Validation Acc: 0.8719
Epoch: 28/100 Iteration: 275 Training loss: 0.00006
Epoch: 28/100 Iteration: 276 Training loss: 0.00004
Epoch: 28/100 Iteration: 277 Training loss: 0.00007
Epoch: 28/100 Iteration: 278 Training loss: 0.00013
Epoch: 28/100 Iteration: 279 Training loss: 0.00008
Epoch: 27/100 Iteration: 280 Validation Acc: 0.8719
Epoch: 29/100 Iteration: 280 Training loss: 0.00006
Epoch: 29/100 Iteration: 281 Training loss: 0.00005
Epoch: 29/100 Iteration: 282 Training loss: 0.00005
Epoch: 29/100 Iteration: 283 Training loss: 0.00009
Epoch: 29/100 Iteration: 284 Training loss: 0.00003
Epoch: 28/100 Iteration: 285 Validation Acc: 0.8719
Epoch: 29/100 Iteration: 285 Training loss: 0.00006
Epoch: 29/100 Iteration: 286 Training loss: 0.00004
Epoch: 29/100 Iteration: 287 Training loss: 0.00006
Epoch: 29/100 Iteration: 288 Training loss: 0.00013
Epoch: 29/100 Iteration: 289 Training loss: 0.00008
Epoch: 28/100 Iteration: 290 Validation Acc: 0.8719
Epoch: 30/100 Iteration: 290 Training loss: 0.00005
Epoch: 30/100 Iteration: 291 Training loss: 0.00005
Epoch: 30/100 Iteration: 292 Training loss: 0.00005
Epoch: 30/100 Iteration: 293 Training loss: 0.00008
Epoch: 30/100 Iteration: 294 Training loss: 0.00003
Epoch: 29/100 Iteration: 295 Validation Acc: 0.8719
Epoch: 30/100 Iteration: 295 Training loss: 0.00006
Epoch: 30/100 Iteration: 296 Training loss: 0.00004
Epoch: 30/100 Iteration: 297 Training loss: 0.00006
Epoch: 30/100 Iteration: 298 Training loss: 0.00012
Epoch: 30/100 Iteration: 299 Training loss: 0.00007
Epoch: 29/100 Iteration: 300 Validation Acc: 0.8719
Epoch: 31/100 Iteration: 300 Training loss: 0.00005
Epoch: 31/100 Iteration: 301 Training loss: 0.00005
Epoch: 31/100 Iteration: 302 Training loss: 0.00005
Epoch: 31/100 Iteration: 303 Training loss: 0.00008
Epoch: 31/100 Iteration: 304 Training loss: 0.00003
Epoch: 30/100 Iteration: 305 Validation Acc: 0.8719
Epoch: 31/100 Iteration: 305 Training loss: 0.00006
Epoch: 31/100 Iteration: 306 Training loss: 0.00004
Epoch: 31/100 Iteration: 307 Training loss: 0.00006
Epoch: 31/100 Iteration: 308 Training loss: 0.00012
Epoch: 31/100 Iteration: 309 Training loss: 0.00007
Epoch: 30/100 Iteration: 310 Validation Acc: 0.8719
Epoch: 32/100 Iteration: 310 Training loss: 0.00005
Epoch: 32/100 Iteration: 311 Training loss: 0.00005
Epoch: 32/100 Iteration: 312 Training loss: 0.00005
Epoch: 32/100 Iteration: 313 Training loss: 0.00008
Epoch: 32/100 Iteration: 314 Training loss: 0.00003
Epoch: 31/100 Iteration: 315 Validation Acc: 0.8719
Epoch: 32/100 Iteration: 315 Training loss: 0.00005
Epoch: 32/100 Iteration: 316 Training loss: 0.00004
Epoch: 32/100 Iteration: 317 Training loss: 0.00006
Epoch: 32/100 Iteration: 318 Training loss: 0.00012
Epoch: 32/100 Iteration: 319 Training loss: 0.00007
Epoch: 31/100 Iteration: 320 Validation Acc: 0.8719
Epoch: 33/100 Iteration: 320 Training loss: 0.00005
Epoch: 33/100 Iteration: 321 Training loss: 0.00005
Epoch: 33/100 Iteration: 322 Training loss: 0.00005
Epoch: 33/100 Iteration: 323 Training loss: 0.00008
Epoch: 33/100 Iteration: 324 Training loss: 0.00003
Epoch: 32/100 Iteration: 325 Validation Acc: 0.8719
Epoch: 33/100 Iteration: 325 Training loss: 0.00005
Epoch: 33/100 Iteration: 326 Training loss: 0.00004
Epoch: 33/100 Iteration: 327 Training loss: 0.00006
Epoch: 33/100 Iteration: 328 Training loss: 0.00011
Epoch: 33/100 Iteration: 329 Training loss: 0.00007
Epoch: 32/100 Iteration: 330 Validation Acc: 0.8719
Epoch: 34/100 Iteration: 330 Training loss: 0.00005
Epoch: 34/100 Iteration: 331 Training loss: 0.00005
Epoch: 34/100 Iteration: 332 Training loss: 0.00004
Epoch: 34/100 Iteration: 333 Training loss: 0.00008
Epoch: 34/100 Iteration: 334 Training loss: 0.00003
Epoch: 33/100 Iteration: 335 Validation Acc: 0.8719
Epoch: 34/100 Iteration: 335 Training loss: 0.00005
Epoch: 34/100 Iteration: 336 Training loss: 0.00004
Epoch: 34/100 Iteration: 337 Training loss: 0.00006
Epoch: 34/100 Iteration: 338 Training loss: 0.00011
Epoch: 34/100 Iteration: 339 Training loss: 0.00007
Epoch: 33/100 Iteration: 340 Validation Acc: 0.8719
Epoch: 35/100 Iteration: 340 Training loss: 0.00005
Epoch: 35/100 Iteration: 341 Training loss: 0.00005
Epoch: 35/100 Iteration: 342 Training loss: 0.00004
Epoch: 35/100 Iteration: 343 Training loss: 0.00007
Epoch: 35/100 Iteration: 344 Training loss: 0.00003
Epoch: 34/100 Iteration: 345 Validation Acc: 0.8719
Epoch: 35/100 Iteration: 345 Training loss: 0.00005
Epoch: 35/100 Iteration: 346 Training loss: 0.00004
Epoch: 35/100 Iteration: 347 Training loss: 0.00006
Epoch: 35/100 Iteration: 348 Training loss: 0.00011
Epoch: 35/100 Iteration: 349 Training loss: 0.00007
Epoch: 34/100 Iteration: 350 Validation Acc: 0.8719
Epoch: 36/100 Iteration: 350 Training loss: 0.00005
Epoch: 36/100 Iteration: 351 Training loss: 0.00005
Epoch: 36/100 Iteration: 352 Training loss: 0.00004
Epoch: 36/100 Iteration: 353 Training loss: 0.00007
Epoch: 36/100 Iteration: 354 Training loss: 0.00003
Epoch: 35/100 Iteration: 355 Validation Acc: 0.8719
Epoch: 36/100 Iteration: 355 Training loss: 0.00005
Epoch: 36/100 Iteration: 356 Training loss: 0.00004
Epoch: 36/100 Iteration: 357 Training loss: 0.00005
Epoch: 36/100 Iteration: 358 Training loss: 0.00010
Epoch: 36/100 Iteration: 359 Training loss: 0.00006
Epoch: 35/100 Iteration: 360 Validation Acc: 0.8719
Epoch: 37/100 Iteration: 360 Training loss: 0.00004
Epoch: 37/100 Iteration: 361 Training loss: 0.00004
Epoch: 37/100 Iteration: 362 Training loss: 0.00004
Epoch: 37/100 Iteration: 363 Training loss: 0.00007
Epoch: 37/100 Iteration: 364 Training loss: 0.00003
Epoch: 36/100 Iteration: 365 Validation Acc: 0.8719
Epoch: 37/100 Iteration: 365 Training loss: 0.00005
Epoch: 37/100 Iteration: 366 Training loss: 0.00004
Epoch: 37/100 Iteration: 367 Training loss: 0.00005
Epoch: 37/100 Iteration: 368 Training loss: 0.00010
Epoch: 37/100 Iteration: 369 Training loss: 0.00006
Epoch: 36/100 Iteration: 370 Validation Acc: 0.8719
Epoch: 38/100 Iteration: 370 Training loss: 0.00004
Epoch: 38/100 Iteration: 371 Training loss: 0.00004
Epoch: 38/100 Iteration: 372 Training loss: 0.00004
Epoch: 38/100 Iteration: 373 Training loss: 0.00007
Epoch: 38/100 Iteration: 374 Training loss: 0.00003
Epoch: 37/100 Iteration: 375 Validation Acc: 0.8719
Epoch: 38/100 Iteration: 375 Training loss: 0.00005
Epoch: 38/100 Iteration: 376 Training loss: 0.00004
Epoch: 38/100 Iteration: 377 Training loss: 0.00005
Epoch: 38/100 Iteration: 378 Training loss: 0.00010
Epoch: 38/100 Iteration: 379 Training loss: 0.00006
Epoch: 37/100 Iteration: 380 Validation Acc: 0.8719
Epoch: 39/100 Iteration: 380 Training loss: 0.00004
Epoch: 39/100 Iteration: 381 Training loss: 0.00004
Epoch: 39/100 Iteration: 382 Training loss: 0.00004
Epoch: 39/100 Iteration: 383 Training loss: 0.00007
Epoch: 39/100 Iteration: 384 Training loss: 0.00003
Epoch: 38/100 Iteration: 385 Validation Acc: 0.8719
Epoch: 39/100 Iteration: 385 Training loss: 0.00005
Epoch: 39/100 Iteration: 386 Training loss: 0.00004
Epoch: 39/100 Iteration: 387 Training loss: 0.00005
Epoch: 39/100 Iteration: 388 Training loss: 0.00010
Epoch: 39/100 Iteration: 389 Training loss: 0.00006
Epoch: 38/100 Iteration: 390 Validation Acc: 0.8719
Epoch: 40/100 Iteration: 390 Training loss: 0.00004
Epoch: 40/100 Iteration: 391 Training loss: 0.00004
Epoch: 40/100 Iteration: 392 Training loss: 0.00004
Epoch: 40/100 Iteration: 393 Training loss: 0.00006
Epoch: 40/100 Iteration: 394 Training loss: 0.00002
Epoch: 39/100 Iteration: 395 Validation Acc: 0.8719
Epoch: 40/100 Iteration: 395 Training loss: 0.00005
Epoch: 40/100 Iteration: 396 Training loss: 0.00004
Epoch: 40/100 Iteration: 397 Training loss: 0.00005
Epoch: 40/100 Iteration: 398 Training loss: 0.00009
Epoch: 40/100 Iteration: 399 Training loss: 0.00006
Epoch: 39/100 Iteration: 400 Validation Acc: 0.8719
Epoch: 41/100 Iteration: 400 Training loss: 0.00004
Epoch: 41/100 Iteration: 401 Training loss: 0.00004
Epoch: 41/100 Iteration: 402 Training loss: 0.00004
Epoch: 41/100 Iteration: 403 Training loss: 0.00006
Epoch: 41/100 Iteration: 404 Training loss: 0.00002
Epoch: 40/100 Iteration: 405 Validation Acc: 0.8719
Epoch: 41/100 Iteration: 405 Training loss: 0.00004
Epoch: 41/100 Iteration: 406 Training loss: 0.00004
Epoch: 41/100 Iteration: 407 Training loss: 0.00005
Epoch: 41/100 Iteration: 408 Training loss: 0.00009
Epoch: 41/100 Iteration: 409 Training loss: 0.00006
Epoch: 40/100 Iteration: 410 Validation Acc: 0.8719
Epoch: 42/100 Iteration: 410 Training loss: 0.00004
Epoch: 42/100 Iteration: 411 Training loss: 0.00004
Epoch: 42/100 Iteration: 412 Training loss: 0.00004
Epoch: 42/100 Iteration: 413 Training loss: 0.00006
Epoch: 42/100 Iteration: 414 Training loss: 0.00002
Epoch: 41/100 Iteration: 415 Validation Acc: 0.8719
Epoch: 42/100 Iteration: 415 Training loss: 0.00004
Epoch: 42/100 Iteration: 416 Training loss: 0.00004
Epoch: 42/100 Iteration: 417 Training loss: 0.00005
Epoch: 42/100 Iteration: 418 Training loss: 0.00009
Epoch: 42/100 Iteration: 419 Training loss: 0.00006
Epoch: 41/100 Iteration: 420 Validation Acc: 0.8719
Epoch: 43/100 Iteration: 420 Training loss: 0.00004
Epoch: 43/100 Iteration: 421 Training loss: 0.00004
Epoch: 43/100 Iteration: 422 Training loss: 0.00003
Epoch: 43/100 Iteration: 423 Training loss: 0.00006
Epoch: 43/100 Iteration: 424 Training loss: 0.00002
Epoch: 42/100 Iteration: 425 Validation Acc: 0.8719
Epoch: 43/100 Iteration: 425 Training loss: 0.00004
Epoch: 43/100 Iteration: 426 Training loss: 0.00004
Epoch: 43/100 Iteration: 427 Training loss: 0.00005
Epoch: 43/100 Iteration: 428 Training loss: 0.00009
Epoch: 43/100 Iteration: 429 Training loss: 0.00006
Epoch: 42/100 Iteration: 430 Validation Acc: 0.8719
Epoch: 44/100 Iteration: 430 Training loss: 0.00004
Epoch: 44/100 Iteration: 431 Training loss: 0.00004
Epoch: 44/100 Iteration: 432 Training loss: 0.00003
Epoch: 44/100 Iteration: 433 Training loss: 0.00006
Epoch: 44/100 Iteration: 434 Training loss: 0.00002
Epoch: 43/100 Iteration: 435 Validation Acc: 0.8719
Epoch: 44/100 Iteration: 435 Training loss: 0.00004
Epoch: 44/100 Iteration: 436 Training loss: 0.00003
Epoch: 44/100 Iteration: 437 Training loss: 0.00005
Epoch: 44/100 Iteration: 438 Training loss: 0.00008
Epoch: 44/100 Iteration: 439 Training loss: 0.00006
Epoch: 43/100 Iteration: 440 Validation Acc: 0.8719
Epoch: 45/100 Iteration: 440 Training loss: 0.00004
Epoch: 45/100 Iteration: 441 Training loss: 0.00004
Epoch: 45/100 Iteration: 442 Training loss: 0.00003
Epoch: 45/100 Iteration: 443 Training loss: 0.00006
Epoch: 45/100 Iteration: 444 Training loss: 0.00002
Epoch: 44/100 Iteration: 445 Validation Acc: 0.8719
Epoch: 45/100 Iteration: 445 Training loss: 0.00004
Epoch: 45/100 Iteration: 446 Training loss: 0.00003
Epoch: 45/100 Iteration: 447 Training loss: 0.00005
Epoch: 45/100 Iteration: 448 Training loss: 0.00008
Epoch: 45/100 Iteration: 449 Training loss: 0.00005
Epoch: 44/100 Iteration: 450 Validation Acc: 0.8719
Epoch: 46/100 Iteration: 450 Training loss: 0.00004
Epoch: 46/100 Iteration: 451 Training loss: 0.00004
Epoch: 46/100 Iteration: 452 Training loss: 0.00003
Epoch: 46/100 Iteration: 453 Training loss: 0.00006
Epoch: 46/100 Iteration: 454 Training loss: 0.00002
Epoch: 45/100 Iteration: 455 Validation Acc: 0.8719
Epoch: 46/100 Iteration: 455 Training loss: 0.00004
Epoch: 46/100 Iteration: 456 Training loss: 0.00003
Epoch: 46/100 Iteration: 457 Training loss: 0.00004
Epoch: 46/100 Iteration: 458 Training loss: 0.00008
Epoch: 46/100 Iteration: 459 Training loss: 0.00005
Epoch: 45/100 Iteration: 460 Validation Acc: 0.8719
Epoch: 47/100 Iteration: 460 Training loss: 0.00004
Epoch: 47/100 Iteration: 461 Training loss: 0.00004
Epoch: 47/100 Iteration: 462 Training loss: 0.00003
Epoch: 47/100 Iteration: 463 Training loss: 0.00006
Epoch: 47/100 Iteration: 464 Training loss: 0.00002
Epoch: 46/100 Iteration: 465 Validation Acc: 0.8719
Epoch: 47/100 Iteration: 465 Training loss: 0.00004
Epoch: 47/100 Iteration: 466 Training loss: 0.00003
Epoch: 47/100 Iteration: 467 Training loss: 0.00004
Epoch: 47/100 Iteration: 468 Training loss: 0.00008
Epoch: 47/100 Iteration: 469 Training loss: 0.00005
Epoch: 46/100 Iteration: 470 Validation Acc: 0.8719
Epoch: 48/100 Iteration: 470 Training loss: 0.00004
Epoch: 48/100 Iteration: 471 Training loss: 0.00004
Epoch: 48/100 Iteration: 472 Training loss: 0.00003
Epoch: 48/100 Iteration: 473 Training loss: 0.00005
Epoch: 48/100 Iteration: 474 Training loss: 0.00002
Epoch: 47/100 Iteration: 475 Validation Acc: 0.8719
Epoch: 48/100 Iteration: 475 Training loss: 0.00004
Epoch: 48/100 Iteration: 476 Training loss: 0.00003
Epoch: 48/100 Iteration: 477 Training loss: 0.00004
Epoch: 48/100 Iteration: 478 Training loss: 0.00008
Epoch: 48/100 Iteration: 479 Training loss: 0.00005
Epoch: 47/100 Iteration: 480 Validation Acc: 0.8719
Epoch: 49/100 Iteration: 480 Training loss: 0.00003
Epoch: 49/100 Iteration: 481 Training loss: 0.00004
Epoch: 49/100 Iteration: 482 Training loss: 0.00003
Epoch: 49/100 Iteration: 483 Training loss: 0.00005
Epoch: 49/100 Iteration: 484 Training loss: 0.00002
Epoch: 48/100 Iteration: 485 Validation Acc: 0.8719
Epoch: 49/100 Iteration: 485 Training loss: 0.00004
Epoch: 49/100 Iteration: 486 Training loss: 0.00003
Epoch: 49/100 Iteration: 487 Training loss: 0.00004
Epoch: 49/100 Iteration: 488 Training loss: 0.00008
Epoch: 49/100 Iteration: 489 Training loss: 0.00005
Epoch: 48/100 Iteration: 490 Validation Acc: 0.8719
Epoch: 50/100 Iteration: 490 Training loss: 0.00003
Epoch: 50/100 Iteration: 491 Training loss: 0.00003
Epoch: 50/100 Iteration: 492 Training loss: 0.00003
Epoch: 50/100 Iteration: 493 Training loss: 0.00005
Epoch: 50/100 Iteration: 494 Training loss: 0.00002
Epoch: 49/100 Iteration: 495 Validation Acc: 0.8719
Epoch: 50/100 Iteration: 495 Training loss: 0.00004
Epoch: 50/100 Iteration: 496 Training loss: 0.00003
Epoch: 50/100 Iteration: 497 Training loss: 0.00004
Epoch: 50/100 Iteration: 498 Training loss: 0.00007
Epoch: 50/100 Iteration: 499 Training loss: 0.00005
Epoch: 49/100 Iteration: 500 Validation Acc: 0.8719
Epoch: 51/100 Iteration: 500 Training loss: 0.00003
Epoch: 51/100 Iteration: 501 Training loss: 0.00003
Epoch: 51/100 Iteration: 502 Training loss: 0.00003
Epoch: 51/100 Iteration: 503 Training loss: 0.00005
Epoch: 51/100 Iteration: 504 Training loss: 0.00002
Epoch: 50/100 Iteration: 505 Validation Acc: 0.8719
Epoch: 51/100 Iteration: 505 Training loss: 0.00004
Epoch: 51/100 Iteration: 506 Training loss: 0.00003
Epoch: 51/100 Iteration: 507 Training loss: 0.00004
Epoch: 51/100 Iteration: 508 Training loss: 0.00007
Epoch: 51/100 Iteration: 509 Training loss: 0.00005
Epoch: 50/100 Iteration: 510 Validation Acc: 0.8719
Epoch: 52/100 Iteration: 510 Training loss: 0.00003
Epoch: 52/100 Iteration: 511 Training loss: 0.00003
Epoch: 52/100 Iteration: 512 Training loss: 0.00003
Epoch: 52/100 Iteration: 513 Training loss: 0.00005
Epoch: 52/100 Iteration: 514 Training loss: 0.00002
Epoch: 51/100 Iteration: 515 Validation Acc: 0.8719
Epoch: 52/100 Iteration: 515 Training loss: 0.00004
Epoch: 52/100 Iteration: 516 Training loss: 0.00003
Epoch: 52/100 Iteration: 517 Training loss: 0.00004
Epoch: 52/100 Iteration: 518 Training loss: 0.00007
Epoch: 52/100 Iteration: 519 Training loss: 0.00005
Epoch: 51/100 Iteration: 520 Validation Acc: 0.8719
Epoch: 53/100 Iteration: 520 Training loss: 0.00003
Epoch: 53/100 Iteration: 521 Training loss: 0.00003
Epoch: 53/100 Iteration: 522 Training loss: 0.00003
Epoch: 53/100 Iteration: 523 Training loss: 0.00005
Epoch: 53/100 Iteration: 524 Training loss: 0.00002
Epoch: 52/100 Iteration: 525 Validation Acc: 0.8719
Epoch: 53/100 Iteration: 525 Training loss: 0.00004
Epoch: 53/100 Iteration: 526 Training loss: 0.00003
Epoch: 53/100 Iteration: 527 Training loss: 0.00004
Epoch: 53/100 Iteration: 528 Training loss: 0.00007
Epoch: 53/100 Iteration: 529 Training loss: 0.00005
Epoch: 52/100 Iteration: 530 Validation Acc: 0.8719
Epoch: 54/100 Iteration: 530 Training loss: 0.00003
Epoch: 54/100 Iteration: 531 Training loss: 0.00003
Epoch: 54/100 Iteration: 532 Training loss: 0.00003
Epoch: 54/100 Iteration: 533 Training loss: 0.00005
Epoch: 54/100 Iteration: 534 Training loss: 0.00002
Epoch: 53/100 Iteration: 535 Validation Acc: 0.8719
Epoch: 54/100 Iteration: 535 Training loss: 0.00003
Epoch: 54/100 Iteration: 536 Training loss: 0.00003
Epoch: 54/100 Iteration: 537 Training loss: 0.00004
Epoch: 54/100 Iteration: 538 Training loss: 0.00007
Epoch: 54/100 Iteration: 539 Training loss: 0.00005
Epoch: 53/100 Iteration: 540 Validation Acc: 0.8719
Epoch: 55/100 Iteration: 540 Training loss: 0.00003
Epoch: 55/100 Iteration: 541 Training loss: 0.00003
Epoch: 55/100 Iteration: 542 Training loss: 0.00003
Epoch: 55/100 Iteration: 543 Training loss: 0.00005
Epoch: 55/100 Iteration: 544 Training loss: 0.00002
Epoch: 54/100 Iteration: 545 Validation Acc: 0.8719
Epoch: 55/100 Iteration: 545 Training loss: 0.00003
Epoch: 55/100 Iteration: 546 Training loss: 0.00003
Epoch: 55/100 Iteration: 547 Training loss: 0.00004
Epoch: 55/100 Iteration: 548 Training loss: 0.00007
Epoch: 55/100 Iteration: 549 Training loss: 0.00005
Epoch: 54/100 Iteration: 550 Validation Acc: 0.8719
Epoch: 56/100 Iteration: 550 Training loss: 0.00003
Epoch: 56/100 Iteration: 551 Training loss: 0.00003
Epoch: 56/100 Iteration: 552 Training loss: 0.00003
Epoch: 56/100 Iteration: 553 Training loss: 0.00005
Epoch: 56/100 Iteration: 554 Training loss: 0.00002
Epoch: 55/100 Iteration: 555 Validation Acc: 0.8719
Epoch: 56/100 Iteration: 555 Training loss: 0.00003
Epoch: 56/100 Iteration: 556 Training loss: 0.00003
Epoch: 56/100 Iteration: 557 Training loss: 0.00004
Epoch: 56/100 Iteration: 558 Training loss: 0.00007
Epoch: 56/100 Iteration: 559 Training loss: 0.00005
Epoch: 55/100 Iteration: 560 Validation Acc: 0.8719
Epoch: 57/100 Iteration: 560 Training loss: 0.00003
Epoch: 57/100 Iteration: 561 Training loss: 0.00003
Epoch: 57/100 Iteration: 562 Training loss: 0.00003
Epoch: 57/100 Iteration: 563 Training loss: 0.00005
Epoch: 57/100 Iteration: 564 Training loss: 0.00002
Epoch: 56/100 Iteration: 565 Validation Acc: 0.8719
Epoch: 57/100 Iteration: 565 Training loss: 0.00003
Epoch: 57/100 Iteration: 566 Training loss: 0.00003
Epoch: 57/100 Iteration: 567 Training loss: 0.00004
Epoch: 57/100 Iteration: 568 Training loss: 0.00006
Epoch: 57/100 Iteration: 569 Training loss: 0.00004
Epoch: 56/100 Iteration: 570 Validation Acc: 0.8719
Epoch: 58/100 Iteration: 570 Training loss: 0.00003
Epoch: 58/100 Iteration: 571 Training loss: 0.00003
Epoch: 58/100 Iteration: 572 Training loss: 0.00003
Epoch: 58/100 Iteration: 573 Training loss: 0.00004
Epoch: 58/100 Iteration: 574 Training loss: 0.00002
Epoch: 57/100 Iteration: 575 Validation Acc: 0.8719
Epoch: 58/100 Iteration: 575 Training loss: 0.00003
Epoch: 58/100 Iteration: 576 Training loss: 0.00003
Epoch: 58/100 Iteration: 577 Training loss: 0.00004
Epoch: 58/100 Iteration: 578 Training loss: 0.00006
Epoch: 58/100 Iteration: 579 Training loss: 0.00004
Epoch: 57/100 Iteration: 580 Validation Acc: 0.8719
Epoch: 59/100 Iteration: 580 Training loss: 0.00003
Epoch: 59/100 Iteration: 581 Training loss: 0.00003
Epoch: 59/100 Iteration: 582 Training loss: 0.00003
Epoch: 59/100 Iteration: 583 Training loss: 0.00004
Epoch: 59/100 Iteration: 584 Training loss: 0.00002
Epoch: 58/100 Iteration: 585 Validation Acc: 0.8719
Epoch: 59/100 Iteration: 585 Training loss: 0.00003
Epoch: 59/100 Iteration: 586 Training loss: 0.00003
Epoch: 59/100 Iteration: 587 Training loss: 0.00004
Epoch: 59/100 Iteration: 588 Training loss: 0.00006
Epoch: 59/100 Iteration: 589 Training loss: 0.00004
Epoch: 58/100 Iteration: 590 Validation Acc: 0.8719
Epoch: 60/100 Iteration: 590 Training loss: 0.00003
Epoch: 60/100 Iteration: 591 Training loss: 0.00003
Epoch: 60/100 Iteration: 592 Training loss: 0.00003
Epoch: 60/100 Iteration: 593 Training loss: 0.00004
Epoch: 60/100 Iteration: 594 Training loss: 0.00002
Epoch: 59/100 Iteration: 595 Validation Acc: 0.8719
Epoch: 60/100 Iteration: 595 Training loss: 0.00003
Epoch: 60/100 Iteration: 596 Training loss: 0.00003
Epoch: 60/100 Iteration: 597 Training loss: 0.00004
Epoch: 60/100 Iteration: 598 Training loss: 0.00006
Epoch: 60/100 Iteration: 599 Training loss: 0.00004
Epoch: 59/100 Iteration: 600 Validation Acc: 0.8719
Epoch: 61/100 Iteration: 600 Training loss: 0.00003
Epoch: 61/100 Iteration: 601 Training loss: 0.00003
Epoch: 61/100 Iteration: 602 Training loss: 0.00002
Epoch: 61/100 Iteration: 603 Training loss: 0.00004
Epoch: 61/100 Iteration: 604 Training loss: 0.00002
Epoch: 60/100 Iteration: 605 Validation Acc: 0.8719
Epoch: 61/100 Iteration: 605 Training loss: 0.00003
Epoch: 61/100 Iteration: 606 Training loss: 0.00003
Epoch: 61/100 Iteration: 607 Training loss: 0.00003
Epoch: 61/100 Iteration: 608 Training loss: 0.00006
Epoch: 61/100 Iteration: 609 Training loss: 0.00004
Epoch: 60/100 Iteration: 610 Validation Acc: 0.8719
Epoch: 62/100 Iteration: 610 Training loss: 0.00003
Epoch: 62/100 Iteration: 611 Training loss: 0.00003
Epoch: 62/100 Iteration: 612 Training loss: 0.00002
Epoch: 62/100 Iteration: 613 Training loss: 0.00004
Epoch: 62/100 Iteration: 614 Training loss: 0.00002
Epoch: 61/100 Iteration: 615 Validation Acc: 0.8719
Epoch: 62/100 Iteration: 615 Training loss: 0.00003
Epoch: 62/100 Iteration: 616 Training loss: 0.00003
Epoch: 62/100 Iteration: 617 Training loss: 0.00003
Epoch: 62/100 Iteration: 618 Training loss: 0.00006
Epoch: 62/100 Iteration: 619 Training loss: 0.00004
Epoch: 61/100 Iteration: 620 Validation Acc: 0.8719
Epoch: 63/100 Iteration: 620 Training loss: 0.00003
Epoch: 63/100 Iteration: 621 Training loss: 0.00003
Epoch: 63/100 Iteration: 622 Training loss: 0.00002
Epoch: 63/100 Iteration: 623 Training loss: 0.00004
Epoch: 63/100 Iteration: 624 Training loss: 0.00002
Epoch: 62/100 Iteration: 625 Validation Acc: 0.8719
Epoch: 63/100 Iteration: 625 Training loss: 0.00003
Epoch: 63/100 Iteration: 626 Training loss: 0.00003
Epoch: 63/100 Iteration: 627 Training loss: 0.00003
Epoch: 63/100 Iteration: 628 Training loss: 0.00006
Epoch: 63/100 Iteration: 629 Training loss: 0.00004
Epoch: 62/100 Iteration: 630 Validation Acc: 0.8719
...
Epoch: 100/100 Iteration: 995 Training loss: 0.00002
Epoch: 100/100 Iteration: 996 Training loss: 0.00002
Epoch: 100/100 Iteration: 997 Training loss: 0.00002
Epoch: 100/100 Iteration: 998 Training loss: 0.00003
Epoch: 100/100 Iteration: 999 Training loss: 0.00003
Epoch: 99/100 Iteration: 1000 Validation Acc: 0.8774
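In the log above, the training loss settles near zero while validation accuracy stops improving around 0.8774, so the later epochs add little. A minimal, framework-agnostic sketch of patience-based early stopping (the `patience` threshold and accuracy values are illustrative, taken from the log's tail):

```python
def early_stop_index(val_accs, patience=5):
    """Return the index of the evaluation at which training would stop:
    stop once `patience` consecutive evaluations fail to improve on the
    best validation accuracy seen so far."""
    best, waited = float('-inf'), 0
    for i, acc in enumerate(val_accs):
        if acc > best:
            best, waited = acc, 0
        else:
            waited += 1
            if waited >= patience:
                return i
    return len(val_accs) - 1

# Validation accuracies like the log's tail: improvement stalls at 0.8774.
accs = [0.8719, 0.8719, 0.8747, 0.8747, 0.8774,
        0.8774, 0.8774, 0.8774, 0.8774, 0.8774]
print(early_stop_index(accs))  # stops 5 evaluations after the last improvement -> 9
```

In the training loop, you would pair this with saving a checkpoint only when validation accuracy improves, so the restored model is the best one rather than the last one.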

Testing

Below you can check the classifier's accuracy on the test set, and then inspect its predictions for individual images.


In [32]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    
    feed = {inputs_: test_x,
            labels_: test_y}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))


Test accuracy: 0.8665

In [33]:
%matplotlib inline

import matplotlib.pyplot as plt
# Note: scipy.ndimage.imread was deprecated and removed in SciPy 1.2;
# on newer SciPy versions use imageio.imread instead.
from scipy.ndimage import imread

Below, feel free to choose your own images and see how the trained classifier labels the flowers in them.


In [34]:
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)


Out[34]:
<matplotlib.image.AxesImage at 0x7fdcca3e9208>

In [35]:
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists.  Will not create again.')
else:
    # Build the VGG16 graph (vgg16 and utils come from the tensorflow_vgg repo)
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16.Vgg16()
        vgg.build(input_)


"vgg" object already exists.  Will not create again.

In [36]:
with tf.Session() as sess:
    img = utils.load_image(test_img_path)
    img = img.reshape((1, 224, 224, 3))

    feed_dict = {input_: img}
    code = sess.run(vgg.relu6, feed_dict=feed_dict)
        
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    
    feed = {inputs_: code}
    prediction = sess.run(predicted, feed_dict=feed).squeeze()

In [37]:
plt.imshow(test_img)


Out[37]:
<matplotlib.image.AxesImage at 0x7fdcca2ba0f0>

In [38]:
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)



In [39]:
print(prediction)


[  3.40686249e-30   1.13232111e-33   9.99999762e-01   4.93547819e-32
   1.81003401e-07]
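The raw output above is a probability vector ordered by `lb.classes_`. A minimal sketch of turning it into a label, assuming the five flower classes come out in LabelBinarizer's alphabetically sorted order (an assumption; confirm against `lb.classes_` in your session):

```python
import numpy as np

# Assumed class order: sklearn's LabelBinarizer sorts labels alphabetically.
classes = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']

# The probability vector printed above.
prediction = np.array([3.40686249e-30, 1.13232111e-33, 9.99999762e-01,
                       4.93547819e-32, 1.81003401e-07])

# argmax picks the most probable class; index 2 maps to 'roses',
# matching the rose photo fed in above.
print(classes[int(np.argmax(prediction))])
```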

In [ ]: