Transfer Learning

Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.

VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.

You can read more about transfer learning from the CS231n course notes.

Pretrained VGGNet

We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in 'tensorflow_vgg' directory, sdo you don't have to clone it.

This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.


In [1]:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm

vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
    raise Exception("VGG directory doesn't exist!")

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(vgg_dir + "vgg16.npy"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
        urlretrieve(
            'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
            vgg_dir + 'vgg16.npy',
            pbar.hook)
else:
    print("Parameter file already exists!")


Parameter file already exists!

Flower power

Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.


In [2]:
import tarfile

dataset_folder_path = 'flower_photos'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('flower_photos.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
        urlretrieve(
            'http://download.tensorflow.org/example_images/flower_photos.tgz',
            'flower_photos.tar.gz',
            pbar.hook)

if not isdir(dataset_folder_path):
    with tarfile.open('flower_photos.tar.gz') as tar:
        tar.extractall()
        tar.close()

ConvNet Codes

Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.

Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):

self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')

self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')

self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')

self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')

self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')

self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)

So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use

with tf.Session() as sess:
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        vgg.build(input_)

This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,

feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)

In [3]:
import os

import numpy as np
import tensorflow as tf

from tensorflow_vgg import vgg16
from tensorflow_vgg import utils

In [4]:
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]

Below I'm running images through the VGG network in batches.

Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).


In [14]:
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []

codes = None

with tf.Session() as sess:
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        vgg.build(input_)

    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)
            
            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                images = np.concatenate(batch)

                feed_dict = {input_: images}
                codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
                
                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))
                
                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))


/home/moox/projects/ai/deep_learning/transfer-learning/tensorflow_vgg/vgg16.npy
npy file loaded
build model started
build model finished: 0s
Starting sunflowers images
10 images processed
20 images processed
30 images processed
40 images processed
50 images processed
60 images processed
70 images processed
80 images processed
90 images processed
100 images processed
110 images processed
120 images processed
130 images processed
140 images processed
150 images processed
160 images processed
170 images processed
180 images processed
190 images processed
200 images processed
210 images processed
220 images processed
230 images processed
240 images processed
250 images processed
260 images processed
270 images processed
280 images processed
290 images processed
300 images processed
310 images processed
320 images processed
330 images processed
340 images processed
350 images processed
360 images processed
370 images processed
380 images processed
390 images processed
400 images processed
410 images processed
420 images processed
430 images processed
440 images processed
450 images processed
460 images processed
470 images processed
480 images processed
490 images processed
500 images processed
510 images processed
520 images processed
530 images processed
540 images processed
550 images processed
560 images processed
570 images processed
580 images processed
590 images processed
600 images processed
610 images processed
620 images processed
630 images processed
640 images processed
650 images processed
660 images processed
670 images processed
680 images processed
690 images processed
699 images processed
Starting daisy images
10 images processed
20 images processed
30 images processed
40 images processed
50 images processed
60 images processed
70 images processed
80 images processed
90 images processed
100 images processed
110 images processed
120 images processed
130 images processed
140 images processed
150 images processed
160 images processed
170 images processed
180 images processed
190 images processed
200 images processed
210 images processed
220 images processed
230 images processed
240 images processed
250 images processed
260 images processed
270 images processed
280 images processed
290 images processed
300 images processed
310 images processed
320 images processed
330 images processed
340 images processed
350 images processed
360 images processed
370 images processed
380 images processed
390 images processed
400 images processed
410 images processed
420 images processed
430 images processed
440 images processed
450 images processed
460 images processed
470 images processed
480 images processed
490 images processed
500 images processed
510 images processed
520 images processed
530 images processed
540 images processed
550 images processed
560 images processed
570 images processed
580 images processed
590 images processed
600 images processed
610 images processed
620 images processed
630 images processed
633 images processed
Starting roses images
10 images processed
20 images processed
30 images processed
40 images processed
50 images processed
60 images processed
70 images processed
80 images processed
90 images processed
100 images processed
110 images processed
120 images processed
130 images processed
140 images processed
150 images processed
160 images processed
170 images processed
180 images processed
190 images processed
200 images processed
210 images processed
220 images processed
230 images processed
240 images processed
250 images processed
260 images processed
270 images processed
280 images processed
290 images processed
300 images processed
310 images processed
320 images processed
330 images processed
340 images processed
350 images processed
360 images processed
370 images processed
380 images processed
390 images processed
400 images processed
410 images processed
420 images processed
430 images processed
440 images processed
450 images processed
460 images processed
470 images processed
480 images processed
490 images processed
500 images processed
510 images processed
520 images processed
530 images processed
540 images processed
550 images processed
560 images processed
570 images processed
580 images processed
590 images processed
600 images processed
610 images processed
620 images processed
630 images processed
640 images processed
641 images processed
Starting dandelion images
10 images processed
20 images processed
30 images processed
40 images processed
50 images processed
60 images processed
70 images processed
80 images processed
90 images processed
100 images processed
110 images processed
120 images processed
130 images processed
140 images processed
150 images processed
160 images processed
170 images processed
180 images processed
190 images processed
200 images processed
210 images processed
220 images processed
230 images processed
240 images processed
250 images processed
260 images processed
270 images processed
280 images processed
290 images processed
300 images processed
310 images processed
320 images processed
330 images processed
340 images processed
350 images processed
360 images processed
370 images processed
380 images processed
390 images processed
400 images processed
410 images processed
420 images processed
430 images processed
440 images processed
450 images processed
460 images processed
470 images processed
480 images processed
490 images processed
500 images processed
510 images processed
520 images processed
530 images processed
540 images processed
550 images processed
560 images processed
570 images processed
580 images processed
590 images processed
600 images processed
610 images processed
620 images processed
630 images processed
640 images processed
650 images processed
660 images processed
670 images processed
680 images processed
690 images processed
700 images processed
710 images processed
720 images processed
730 images processed
740 images processed
750 images processed
760 images processed
770 images processed
780 images processed
790 images processed
800 images processed
810 images processed
820 images processed
830 images processed
840 images processed
850 images processed
860 images processed
870 images processed
880 images processed
890 images processed
898 images processed
Starting tulips images
10 images processed
20 images processed
30 images processed
40 images processed
50 images processed
60 images processed
70 images processed
80 images processed
90 images processed
100 images processed
110 images processed
120 images processed
130 images processed
140 images processed
150 images processed
160 images processed
170 images processed
180 images processed
190 images processed
200 images processed
210 images processed
220 images processed
230 images processed
240 images processed
250 images processed
260 images processed
270 images processed
280 images processed
290 images processed
300 images processed
310 images processed
320 images processed
330 images processed
340 images processed
350 images processed
360 images processed
370 images processed
380 images processed
390 images processed
400 images processed
410 images processed
420 images processed
430 images processed
440 images processed
450 images processed
460 images processed
470 images processed
480 images processed
490 images processed
500 images processed
510 images processed
520 images processed
530 images processed
540 images processed
550 images processed
560 images processed
570 images processed
580 images processed
590 images processed
600 images processed
610 images processed
620 images processed
630 images processed
640 images processed
650 images processed
660 images processed
670 images processed
680 images processed
690 images processed
700 images processed
710 images processed
720 images processed
730 images processed
740 images processed
750 images processed
760 images processed
770 images processed
780 images processed
790 images processed
799 images processed

In [6]:
# write codes to file
with open('codes', 'w') as f:
    codes.tofile(f)
    
# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)

Building the Classifier

Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.


In [7]:
# read codes and labels from file
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
    codes = np.fromfile(f, dtype=np.float32)
    codes = codes.reshape((len(labels), -1))

Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!

Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.


In [8]:
from sklearn import preprocessing
labels_vecs = preprocessing.LabelBinarizer().fit(labels).transform(labels)

Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.

You can create the splitter like so:

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)

Then split the data with

splitter = ss.split(x, y)

ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.

Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.


In [9]:
from sklearn import model_selection
ss = model_selection.StratifiedShuffleSplit(n_splits=1, test_size=0.2)

splitter = ss.split(codes, labels)
split_i = next(splitter)

train_x, train_y = codes[split_i[0]], labels_vecs[split_i[0]]
val_x, val_y = codes[split_i[1][1::2]], labels_vecs[split_i[1][1::2]]
test_x, test_y = codes[split_i[1][::2]], labels_vecs[split_i[1][::2]]

In [10]:
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)


Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)

If you did it right, you should see these sizes for the training sets:

Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)

Classifier layers

Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.

Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.


In [11]:
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])

# TODO: Classifier layers and operations
output_size = labels_vecs.shape[1]
fc1 = tf.contrib.layers.fully_connected(inputs_, 1024)

logits = tf.contrib.layers.fully_connected(fc1, output_size)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)
cost = tf.reduce_mean(cross_entropy)

optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)

# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Batches!

Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.


In [12]:
def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x)//n_batches
    
    for ii in range(0, n_batches*batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches-1)*batch_size:
            X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] 
        # On the last batch, grab the rest of the data
        else:
            X, Y = x[ii:], y[ii:]
        # I love generators
        yield X, Y

Training

Here, we'll train the network.

Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!


In [18]:
epochs = 20
num_batches = 64

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        batch_i = 0
        for x, y in get_batches(train_x, train_y, num_batches):
            batch_i += 1
            feed = {inputs_: x, labels_: y}
            loss, _ = sess.run([cost, optimizer], feed_dict=feed)
            print("Epoch: {}/{}:, Batch: {}, Training loss: {:>2}".format(epoch + 1, epochs, batch_i, loss))
            
            if batch_i % 5 == 0:
                feed = {inputs_: val_x,
                        labels_: val_y}
                val_acc = sess.run(accuracy, feed_dict=feed)
                print("Epoch: {}/{}".format(epoch, epochs),
                      "Iteration: {}".format(batch_i),
                      "Validation Acc: {:.4f}".format(val_acc))

    saver.save(sess, "checkpoints/flowers.ckpt")


Epoch: 1/20:, Batch: [], Training loss: 5.10406494140625
Epoch: 1/20:, Batch: [], Training loss: 9.990558624267578
Epoch: 1/20:, Batch: [], Training loss: 8.525075912475586
Epoch: 1/20:, Batch: [], Training loss: 2.1617112159729004
Epoch: 1/20:, Batch: [], Training loss: 1.5193305015563965
Epoch: 0/20 Iteration: 5 Validation Acc: 0.1907
Epoch: 1/20:, Batch: [], Training loss: 1.538386583328247
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.5737117528915405
Epoch: 1/20:, Batch: [], Training loss: 1.5736743211746216
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 0/20 Iteration: 10 Validation Acc: 0.1744
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 0/20 Iteration: 15 Validation Acc: 0.1744
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.5821866989135742
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 0/20 Iteration: 20 Validation Acc: 0.1771
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.527172565460205
Epoch: 0/20 Iteration: 25 Validation Acc: 0.1853
Epoch: 1/20:, Batch: [], Training loss: 1.6065673828125
Epoch: 1/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 1/20:, Batch: [], Training loss: 1.4665677547454834
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 0/20 Iteration: 30 Validation Acc: 0.2343
Epoch: 1/20:, Batch: [], Training loss: 1.5379078388214111
Epoch: 1/20:, Batch: [], Training loss: 1.5021567344665527
Epoch: 1/20:, Batch: [], Training loss: 1.4474302530288696
Epoch: 1/20:, Batch: [], Training loss: 1.6106185913085938
Epoch: 1/20:, Batch: [], Training loss: 1.8003687858581543
Epoch: 0/20 Iteration: 35 Validation Acc: 0.2589
Epoch: 1/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 1/20:, Batch: [], Training loss: 1.399619698524475
Epoch: 1/20:, Batch: [], Training loss: 2.258979320526123
Epoch: 0/20 Iteration: 40 Validation Acc: 0.2371
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.5025651454925537
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.460182785987854
Epoch: 1/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 0/20 Iteration: 45 Validation Acc: 0.1962
Epoch: 1/20:, Batch: [], Training loss: 1.5589439868927002
Epoch: 1/20:, Batch: [], Training loss: 1.5129443407058716
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.509397268295288
Epoch: 1/20:, Batch: [], Training loss: 1.5736737251281738
Epoch: 0/20 Iteration: 50 Validation Acc: 0.2044
Epoch: 1/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 1/20:, Batch: [], Training loss: 1.5055423974990845
Epoch: 1/20:, Batch: [], Training loss: 1.737260103225708
Epoch: 1/20:, Batch: [], Training loss: 1.502156376838684
Epoch: 1/20:, Batch: [], Training loss: 1.4667894840240479
Epoch: 0/20 Iteration: 55 Validation Acc: 0.2071
Epoch: 1/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 0/20 Iteration: 60 Validation Acc: 0.1962
Epoch: 1/20:, Batch: [], Training loss: 1.4312233924865723
Epoch: 1/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20:, Batch: [], Training loss: 1.5422471761703491
Epoch: 1/20:, Batch: [], Training loss: 1.5775678157806396
Epoch: 2/20:, Batch: [], Training loss: 1.5024240016937256
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.528983235359192
Epoch: 2/20:, Batch: [], Training loss: 1.573672890663147
Epoch: 2/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 1/20 Iteration: 5 Validation Acc: 0.2044
Epoch: 2/20:, Batch: [], Training loss: 1.5397605895996094
Epoch: 2/20:, Batch: [], Training loss: 1.53791081905365
Epoch: 2/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 2/20:, Batch: [], Training loss: 1.5379077196121216
Epoch: 2/20:, Batch: [], Training loss: 1.5737050771713257
Epoch: 1/20 Iteration: 10 Validation Acc: 0.2153
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.5593044757843018
Epoch: 2/20:, Batch: [], Training loss: 1.4664134979248047
Epoch: 2/20:, Batch: [], Training loss: 1.569226861000061
Epoch: 2/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20 Iteration: 15 Validation Acc: 0.2616
Epoch: 2/20:, Batch: [], Training loss: 1.5033950805664062
Epoch: 2/20:, Batch: [], Training loss: 1.4732561111450195
Epoch: 2/20:, Batch: [], Training loss: 2.175987720489502
Epoch: 2/20:, Batch: [], Training loss: 1.502694845199585
Epoch: 2/20:, Batch: [], Training loss: 1.4669617414474487
Epoch: 1/20 Iteration: 20 Validation Acc: 0.3215
Epoch: 2/20:, Batch: [], Training loss: 2.04530668258667
Epoch: 2/20:, Batch: [], Training loss: 3.317385673522949
Epoch: 2/20:, Batch: [], Training loss: 1.50214421749115
Epoch: 2/20:, Batch: [], Training loss: 3.6512932777404785
Epoch: 2/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 1/20 Iteration: 25 Validation Acc: 0.2098
Epoch: 2/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 2/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 2/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.5795395374298096
Epoch: 1/20 Iteration: 30 Validation Acc: 0.1798
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 2/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20 Iteration: 35 Validation Acc: 0.1689
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.5380314588546753
Epoch: 2/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 1/20 Iteration: 40 Validation Acc: 0.1635
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.5772584676742554
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20 Iteration: 45 Validation Acc: 0.1608
Epoch: 2/20:, Batch: [], Training loss: 1.5795989036560059
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20 Iteration: 50 Validation Acc: 0.1635
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20 Iteration: 55 Validation Acc: 0.1662
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 1/20 Iteration: 60 Validation Acc: 0.1662
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20:, Batch: [], Training loss: 1.5737851858139038
Epoch: 2/20:, Batch: [], Training loss: 1.6094379425048828
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20 Iteration: 5 Validation Acc: 0.1689
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.5751389265060425
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20 Iteration: 10 Validation Acc: 0.1689
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20 Iteration: 15 Validation Acc: 0.1689
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20 Iteration: 20 Validation Acc: 0.1717
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 2/20 Iteration: 25 Validation Acc: 0.1717
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.5894430875778198
Epoch: 3/20:, Batch: [], Training loss: 1.5379087924957275
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 2/20 Iteration: 30 Validation Acc: 0.1880
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 2/20 Iteration: 35 Validation Acc: 0.2016
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 3/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 2/20 Iteration: 40 Validation Acc: 0.2044
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.5381016731262207
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 2/20 Iteration: 45 Validation Acc: 0.2125
Epoch: 3/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 3/20:, Batch: [], Training loss: 1.4306124448776245
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 3/20:, Batch: [], Training loss: 1.4309970140457153
Epoch: 3/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 2/20 Iteration: 50 Validation Acc: 0.2180
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 3/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 3/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 3/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 3/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 2/20 Iteration: 55 Validation Acc: 0.2180
Epoch: 3/20:, Batch: [], Training loss: 1.5736806392669678
Epoch: 3/20:, Batch: [], Training loss: 1.5379618406295776
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 3/20:, Batch: [], Training loss: 1.5021778345108032
Epoch: 3/20:, Batch: [], Training loss: 1.5021425485610962
Epoch: 2/20 Iteration: 60 Validation Acc: 0.2234
Epoch: 3/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 3/20:, Batch: [], Training loss: 1.6921175718307495
Epoch: 3/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 3/20:, Batch: [], Training loss: 1.525528907775879
Epoch: 4/20:, Batch: [], Training loss: 1.4663773775100708
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20:, Batch: [], Training loss: 1.502259612083435
Epoch: 3/20 Iteration: 5 Validation Acc: 0.1962
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20:, Batch: [], Training loss: 1.5379345417022705
Epoch: 4/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.573672890663147
Epoch: 3/20 Iteration: 10 Validation Acc: 0.1935
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20:, Batch: [], Training loss: 1.60091233253479
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20 Iteration: 15 Validation Acc: 0.1962
Epoch: 4/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20:, Batch: [], Training loss: 1.5316898822784424
Epoch: 4/20:, Batch: [], Training loss: 1.5816242694854736
Epoch: 4/20:, Batch: [], Training loss: 1.5379079580307007
Epoch: 4/20:, Batch: [], Training loss: 1.5021487474441528
Epoch: 3/20 Iteration: 20 Validation Acc: 0.2398
Epoch: 4/20:, Batch: [], Training loss: 1.475698709487915
Epoch: 4/20:, Batch: [], Training loss: 1.568429708480835
Epoch: 4/20:, Batch: [], Training loss: 1.5716711282730103
Epoch: 4/20:, Batch: [], Training loss: 2.8632445335388184
Epoch: 4/20:, Batch: [], Training loss: 2.312582015991211
Epoch: 3/20 Iteration: 25 Validation Acc: 0.2779
Epoch: 4/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 4/20:, Batch: [], Training loss: 1.361034870147705
Epoch: 4/20:, Batch: [], Training loss: 1.3808352947235107
Epoch: 4/20:, Batch: [], Training loss: 1.5021775960922241
Epoch: 4/20:, Batch: [], Training loss: 1.5379307270050049
Epoch: 3/20 Iteration: 30 Validation Acc: 0.2561
Epoch: 4/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 4/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 4/20:, Batch: [], Training loss: 1.4400185346603394
Epoch: 4/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 3/20 Iteration: 35 Validation Acc: 0.2616
Epoch: 4/20:, Batch: [], Training loss: 1.8124250173568726
Epoch: 4/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 4/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 4/20:, Batch: [], Training loss: 1.5045546293258667
Epoch: 3/20 Iteration: 40 Validation Acc: 0.2507
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20:, Batch: [], Training loss: 1.3238916397094727
Epoch: 4/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 3/20 Iteration: 45 Validation Acc: 0.2371
Epoch: 4/20:, Batch: [], Training loss: 2.179244041442871
Epoch: 4/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20 Iteration: 50 Validation Acc: 0.2098
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.5021427869796753
Epoch: 4/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 3/20 Iteration: 55 Validation Acc: 0.1962
Epoch: 4/20:, Batch: [], Training loss: 1.573672890663147
Epoch: 4/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 3/20 Iteration: 60 Validation Acc: 0.1962
Epoch: 4/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 4/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 4/20:, Batch: [], Training loss: 1.5796931982040405
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 4/20 Iteration: 5 Validation Acc: 0.1907
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5380332469940186
Epoch: 4/20 Iteration: 10 Validation Acc: 0.1907
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20:, Batch: [], Training loss: 1.5467002391815186
Epoch: 5/20:, Batch: [], Training loss: 1.5379090309143066
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.573962688446045
Epoch: 4/20 Iteration: 15 Validation Acc: 0.1962
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.5379478931427002
Epoch: 5/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 4/20 Iteration: 20 Validation Acc: 0.1962
Epoch: 5/20:, Batch: [], Training loss: 1.5023975372314453
Epoch: 5/20:, Batch: [], Training loss: 1.5453147888183594
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 4/20 Iteration: 25 Validation Acc: 0.2044
Epoch: 5/20:, Batch: [], Training loss: 1.4891996383666992
Epoch: 5/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 5/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 4/20 Iteration: 30 Validation Acc: 0.2343
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 5/20:, Batch: [], Training loss: 1.4325891733169556
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 4/20 Iteration: 35 Validation Acc: 0.2480
Epoch: 5/20:, Batch: [], Training loss: 1.4702579975128174
Epoch: 5/20:, Batch: [], Training loss: 1.573672890663147
Epoch: 5/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 5/20:, Batch: [], Training loss: 1.3233157396316528
Epoch: 5/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 4/20 Iteration: 40 Validation Acc: 0.2643
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 2.5305817127227783
Epoch: 5/20:, Batch: [], Training loss: 1.817579984664917
Epoch: 5/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 4/20 Iteration: 45 Validation Acc: 0.2452
Epoch: 5/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 5/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20 Iteration: 50 Validation Acc: 0.2125
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 4/20 Iteration: 55 Validation Acc: 0.1935
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20:, Batch: [], Training loss: 1.5846655368804932
Epoch: 5/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 4/20 Iteration: 60 Validation Acc: 0.1935
Epoch: 5/20:, Batch: [], Training loss: 1.430612325668335
Epoch: 5/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 5/20:, Batch: [], Training loss: 1.5691815614700317
Epoch: 6/20:, Batch: [], Training loss: 1.5023682117462158
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 5/20 Iteration: 5 Validation Acc: 0.1935
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20 Iteration: 10 Validation Acc: 0.1962
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.542697548866272
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20 Iteration: 15 Validation Acc: 0.2071
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 5/20 Iteration: 20 Validation Acc: 0.2153
Epoch: 6/20:, Batch: [], Training loss: 1.538364052772522
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20:, Batch: [], Training loss: 1.4048326015472412
Epoch: 5/20 Iteration: 25 Validation Acc: 0.2153
Epoch: 6/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 6/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 6/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 6/20:, Batch: [], Training loss: 1.5021491050720215
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 5/20 Iteration: 30 Validation Acc: 0.2316
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 6/20:, Batch: [], Training loss: 1.4666591882705688
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 5/20 Iteration: 35 Validation Acc: 0.2507
Epoch: 6/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20:, Batch: [], Training loss: 1.4669920206069946
Epoch: 6/20:, Batch: [], Training loss: 1.3235838413238525
Epoch: 6/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 5/20 Iteration: 40 Validation Acc: 0.2534
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.573707103729248
Epoch: 6/20:, Batch: [], Training loss: 1.3233157396316528
Epoch: 6/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 5/20 Iteration: 45 Validation Acc: 0.2561
Epoch: 6/20:, Batch: [], Training loss: 1.6784929037094116
Epoch: 6/20:, Batch: [], Training loss: 1.6139603853225708
Epoch: 6/20:, Batch: [], Training loss: 1.5055270195007324
Epoch: 6/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20 Iteration: 50 Validation Acc: 0.2071
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 5/20 Iteration: 55 Validation Acc: 0.1935
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 5/20 Iteration: 60 Validation Acc: 0.1907
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 6/20:, Batch: [], Training loss: 1.6094379425048828
Epoch: 7/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20 Iteration: 5 Validation Acc: 0.1826
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20 Iteration: 10 Validation Acc: 0.1826
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 6/20 Iteration: 15 Validation Acc: 0.1826
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20:, Batch: [], Training loss: 1.5737141370773315
Epoch: 7/20:, Batch: [], Training loss: 1.609325885772705
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20 Iteration: 20 Validation Acc: 0.1907
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 6/20 Iteration: 25 Validation Acc: 0.1962
Epoch: 7/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20:, Batch: [], Training loss: 1.5022759437561035
Epoch: 7/20:, Batch: [], Training loss: 1.4313266277313232
Epoch: 7/20:, Batch: [], Training loss: 1.5736771821975708
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20 Iteration: 30 Validation Acc: 0.2044
Epoch: 7/20:, Batch: [], Training loss: 1.5379536151885986
Epoch: 7/20:, Batch: [], Training loss: 1.5117878913879395
Epoch: 7/20:, Batch: [], Training loss: 1.5428038835525513
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 6/20 Iteration: 35 Validation Acc: 0.2207
Epoch: 7/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20:, Batch: [], Training loss: 1.359081506729126
Epoch: 7/20:, Batch: [], Training loss: 1.5032989978790283
Epoch: 6/20 Iteration: 40 Validation Acc: 0.2343
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5076524019241333
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 7/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 6/20 Iteration: 45 Validation Acc: 0.2561
Epoch: 7/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 7/20:, Batch: [], Training loss: 1.4374406337738037
Epoch: 7/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 7/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 6/20 Iteration: 50 Validation Acc: 0.2616
Epoch: 7/20:, Batch: [], Training loss: 2.8453664779663086
Epoch: 7/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 7/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20:, Batch: [], Training loss: 1.4695287942886353
Epoch: 6/20 Iteration: 55 Validation Acc: 0.2262
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.539334774017334
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 7/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 6/20 Iteration: 60 Validation Acc: 0.2180
Epoch: 7/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 7/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 7/20:, Batch: [], Training loss: 1.5456976890563965
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 8/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20 Iteration: 5 Validation Acc: 0.2125
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.4663875102996826
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 7/20 Iteration: 10 Validation Acc: 0.2098
Epoch: 8/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 8/20:, Batch: [], Training loss: 1.5696804523468018
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20 Iteration: 15 Validation Acc: 0.2180
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20 Iteration: 20 Validation Acc: 0.2289
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 7/20 Iteration: 25 Validation Acc: 0.2343
Epoch: 8/20:, Batch: [], Training loss: 1.4663794040679932
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 8/20:, Batch: [], Training loss: 1.361670970916748
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 7/20 Iteration: 30 Validation Acc: 0.2398
Epoch: 8/20:, Batch: [], Training loss: 1.5021559000015259
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 7/20 Iteration: 35 Validation Acc: 0.2425
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.573691725730896
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.572543740272522
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20 Iteration: 40 Validation Acc: 0.2371
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.5736730098724365
Epoch: 8/20:, Batch: [], Training loss: 1.430898666381836
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 7/20 Iteration: 45 Validation Acc: 0.2207
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 8/20:, Batch: [], Training loss: 1.4663770198822021
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 8/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 7/20 Iteration: 50 Validation Acc: 0.2044
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 8/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 7/20 Iteration: 55 Validation Acc: 0.1989
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.4668231010437012
Epoch: 8/20:, Batch: [], Training loss: 1.5383795499801636
Epoch: 7/20 Iteration: 60 Validation Acc: 0.1989
Epoch: 8/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 8/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20:, Batch: [], Training loss: 1.5616334676742554
Epoch: 9/20:, Batch: [], Training loss: 1.502143383026123
Epoch: 9/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 9/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 9/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20 Iteration: 5 Validation Acc: 0.1989
Epoch: 9/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 9/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 8/20 Iteration: 10 Validation Acc: 0.1989
Epoch: 9/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.4770032167434692
Epoch: 9/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 9/20:, Batch: [], Training loss: 1.5783748626708984
Epoch: 8/20 Iteration: 15 Validation Acc: 0.1989
Epoch: 9/20:, Batch: [], Training loss: 1.5379078388214111
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20 Iteration: 20 Validation Acc: 0.2207
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 9/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 8/20 Iteration: 25 Validation Acc: 0.2262
Epoch: 9/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 9/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 9/20:, Batch: [], Training loss: 1.3591418266296387
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 8/20 Iteration: 30 Validation Acc: 0.2316
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 9/20:, Batch: [], Training loss: 1.4664802551269531
Epoch: 8/20 Iteration: 35 Validation Acc: 0.2316
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 9/20:, Batch: [], Training loss: 1.5047768354415894
Epoch: 8/20 Iteration: 40 Validation Acc: 0.2316
Epoch: 9/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 9/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 8/20 Iteration: 45 Validation Acc: 0.2371
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 9/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 9/20:, Batch: [], Training loss: 1.4306118488311768
Epoch: 9/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 8/20 Iteration: 50 Validation Acc: 0.2398
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.417551040649414
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 8/20 Iteration: 55 Validation Acc: 0.2561
Epoch: 9/20:, Batch: [], Training loss: 1.5379177331924438
Epoch: 9/20:, Batch: [], Training loss: 1.508853554725647
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 9/20:, Batch: [], Training loss: 1.3356696367263794
Epoch: 8/20 Iteration: 60 Validation Acc: 0.2507
Epoch: 9/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 9/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 9/20:, Batch: [], Training loss: 1.450087547302246
Epoch: 10/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 10/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 10/20:, Batch: [], Training loss: 1.467220664024353
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20 Iteration: 5 Validation Acc: 0.2507
Epoch: 10/20:, Batch: [], Training loss: 1.4697184562683105
Epoch: 10/20:, Batch: [], Training loss: 1.4306137561798096
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20 Iteration: 10 Validation Acc: 0.2589
Epoch: 10/20:, Batch: [], Training loss: 1.5399028062820435
Epoch: 10/20:, Batch: [], Training loss: 2.3510212898254395
Epoch: 10/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 9/20 Iteration: 15 Validation Acc: 0.2371
Epoch: 10/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 10/20:, Batch: [], Training loss: 1.5021700859069824
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20 Iteration: 20 Validation Acc: 0.2262
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 9/20 Iteration: 25 Validation Acc: 0.2234
Epoch: 10/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 10/20:, Batch: [], Training loss: 1.5021482706069946
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 9/20 Iteration: 30 Validation Acc: 0.2180
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.5093803405761719
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.5295722484588623
Epoch: 9/20 Iteration: 35 Validation Acc: 0.2262
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.2944000959396362
Epoch: 10/20:, Batch: [], Training loss: 1.4790009260177612
Epoch: 9/20 Iteration: 40 Validation Acc: 0.2589
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.3948488235473633
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20 Iteration: 45 Validation Acc: 0.2643
Epoch: 10/20:, Batch: [], Training loss: 2.0451817512512207
Epoch: 10/20:, Batch: [], Training loss: 1.4306117296218872
Epoch: 10/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 10/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 10/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 9/20 Iteration: 50 Validation Acc: 0.2180
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.3598415851593018
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20:, Batch: [], Training loss: 1.5021989345550537
Epoch: 10/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 9/20 Iteration: 55 Validation Acc: 0.2044
Epoch: 10/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 10/20:, Batch: [], Training loss: 1.4663790464401245
Epoch: 10/20:, Batch: [], Training loss: 1.5379464626312256
Epoch: 9/20 Iteration: 60 Validation Acc: 0.1989
Epoch: 10/20:, Batch: [], Training loss: 1.467992901802063
Epoch: 10/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20:, Batch: [], Training loss: 1.5935028791427612
Epoch: 11/20:, Batch: [], Training loss: 1.5021435022354126
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20 Iteration: 5 Validation Acc: 0.1962
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 10/20 Iteration: 10 Validation Acc: 0.1880
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 10/20 Iteration: 15 Validation Acc: 0.1880
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 11/20:, Batch: [], Training loss: 1.5427042245864868
Epoch: 11/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20 Iteration: 20 Validation Acc: 0.1935
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.537939429283142
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 11/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 10/20 Iteration: 25 Validation Acc: 0.1989
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.4753464460372925
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.588994026184082
Epoch: 10/20 Iteration: 30 Validation Acc: 0.2071
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.46637761592865
Epoch: 10/20 Iteration: 35 Validation Acc: 0.2153
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20:, Batch: [], Training loss: 1.3237547874450684
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 10/20 Iteration: 40 Validation Acc: 0.2234
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 10/20 Iteration: 45 Validation Acc: 0.2234
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20:, Batch: [], Training loss: 1.4306122064590454
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 11/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 10/20 Iteration: 50 Validation Acc: 0.2316
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 11/20:, Batch: [], Training loss: 1.6333266496658325
Epoch: 11/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 10/20 Iteration: 55 Validation Acc: 0.2234
Epoch: 11/20:, Batch: [], Training loss: 1.5545237064361572
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 11/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 10/20 Iteration: 60 Validation Acc: 0.2125
Epoch: 11/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 11/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20:, Batch: [], Training loss: 1.561632752418518
Epoch: 12/20:, Batch: [], Training loss: 1.4770433902740479
Epoch: 12/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 12/20:, Batch: [], Training loss: 1.5381088256835938
Epoch: 12/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 12/20:, Batch: [], Training loss: 1.5035390853881836
Epoch: 11/20 Iteration: 5 Validation Acc: 0.2180
Epoch: 12/20:, Batch: [], Training loss: 1.5379118919372559
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20 Iteration: 10 Validation Acc: 0.2234
Epoch: 12/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 11/20 Iteration: 15 Validation Acc: 0.2262
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.502142310142517
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20 Iteration: 20 Validation Acc: 0.2262
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 12/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 11/20 Iteration: 25 Validation Acc: 0.2289
Epoch: 12/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 11/20 Iteration: 30 Validation Acc: 0.2289
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.442802906036377
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 12/20:, Batch: [], Training loss: 1.4768730401992798
Epoch: 11/20 Iteration: 35 Validation Acc: 0.2316
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 12/20:, Batch: [], Training loss: 1.4666352272033691
Epoch: 12/20:, Batch: [], Training loss: 1.3590811491012573
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20 Iteration: 40 Validation Acc: 0.2289
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.5739319324493408
Epoch: 12/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 11/20 Iteration: 45 Validation Acc: 0.2262
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.430634617805481
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 12/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 11/20 Iteration: 50 Validation Acc: 0.2262
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 11/20 Iteration: 55 Validation Acc: 0.2262
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 12/20:, Batch: [], Training loss: 1.4667738676071167
Epoch: 12/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 12/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 12/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 11/20 Iteration: 60 Validation Acc: 0.2262
Epoch: 12/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 12/20:, Batch: [], Training loss: 1.537912368774414
Epoch: 12/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20:, Batch: [], Training loss: 1.4565913677215576
Epoch: 13/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 13/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.573673129081726
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20 Iteration: 5 Validation Acc: 0.2316
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.503978967666626
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.7430566549301147
Epoch: 13/20:, Batch: [], Training loss: 1.5393260717391968
Epoch: 12/20 Iteration: 10 Validation Acc: 0.2262
Epoch: 13/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.537908911705017
Epoch: 12/20 Iteration: 15 Validation Acc: 0.2234
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.5034562349319458
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20 Iteration: 20 Validation Acc: 0.2180
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.5379098653793335
Epoch: 13/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 13/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 12/20 Iteration: 25 Validation Acc: 0.2153
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.4663771390914917
Epoch: 13/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 13/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 13/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 12/20 Iteration: 30 Validation Acc: 0.2153
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.4329888820648193
Epoch: 13/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 13/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 12/20 Iteration: 35 Validation Acc: 0.2153
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.3759009838104248
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 12/20 Iteration: 40 Validation Acc: 0.2234
Epoch: 13/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 13/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 12/20 Iteration: 45 Validation Acc: 0.2262
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.430615782737732
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 13/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 12/20 Iteration: 50 Validation Acc: 0.2343
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.3233157396316528
Epoch: 13/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 13/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 12/20 Iteration: 55 Validation Acc: 0.2452
Epoch: 13/20:, Batch: [], Training loss: 1.5104954242706299
Epoch: 13/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 13/20:, Batch: [], Training loss: 1.5022454261779785
Epoch: 13/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 13/20:, Batch: [], Training loss: 1.3233157396316528
Epoch: 12/20 Iteration: 60 Validation Acc: 0.2561
Epoch: 13/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 13/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 13/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 13/20:, Batch: [], Training loss: 1.4046504497528076
Epoch: 14/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 14/20:, Batch: [], Training loss: 1.4685404300689697
Epoch: 14/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 14/20:, Batch: [], Training loss: 1.5379084348678589
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20 Iteration: 5 Validation Acc: 0.2752
Epoch: 14/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 14/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 14/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 14/20:, Batch: [], Training loss: 2.026120185852051
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20 Iteration: 10 Validation Acc: 0.2561
Epoch: 14/20:, Batch: [], Training loss: 1.5381040573120117
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 13/20 Iteration: 15 Validation Acc: 0.2234
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20 Iteration: 20 Validation Acc: 0.2071
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 14/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 13/20 Iteration: 25 Validation Acc: 0.1989
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 14/20:, Batch: [], Training loss: 1.5736761093139648
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 13/20 Iteration: 30 Validation Acc: 0.1962
Epoch: 14/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20 Iteration: 35 Validation Acc: 0.1935
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 14/20:, Batch: [], Training loss: 1.5379669666290283
Epoch: 13/20 Iteration: 40 Validation Acc: 0.1935
Epoch: 14/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 13/20 Iteration: 45 Validation Acc: 0.1935
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.4687752723693848
Epoch: 14/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 13/20 Iteration: 50 Validation Acc: 0.1935
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5379279851913452
Epoch: 14/20:, Batch: [], Training loss: 1.5783754587173462
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 13/20 Iteration: 55 Validation Acc: 0.1962
Epoch: 14/20:, Batch: [], Training loss: 1.5462692975997925
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 14/20:, Batch: [], Training loss: 1.573757529258728
Epoch: 13/20 Iteration: 60 Validation Acc: 0.2071
Epoch: 14/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 14/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20:, Batch: [], Training loss: 1.5456976890563965
Epoch: 15/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 15/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 15/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 14/20 Iteration: 5 Validation Acc: 0.2153
Epoch: 15/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20 Iteration: 10 Validation Acc: 0.2180
Epoch: 15/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20 Iteration: 15 Validation Acc: 0.2289
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 14/20 Iteration: 20 Validation Acc: 0.2289
Epoch: 15/20:, Batch: [], Training loss: 1.4663770198822021
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 15/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 14/20 Iteration: 25 Validation Acc: 0.2289
Epoch: 15/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 15/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 15/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 14/20 Iteration: 30 Validation Acc: 0.2316
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 15/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 14/20 Iteration: 35 Validation Acc: 0.2316
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 14/20 Iteration: 40 Validation Acc: 0.2316
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.6030734777450562
Epoch: 15/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 14/20 Iteration: 45 Validation Acc: 0.2398
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 15/20:, Batch: [], Training loss: 1.5736937522888184
Epoch: 14/20 Iteration: 50 Validation Acc: 0.2452
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 15/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 14/20 Iteration: 55 Validation Acc: 0.2480
Epoch: 15/20:, Batch: [], Training loss: 1.5038208961486816
Epoch: 15/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 15/20:, Batch: [], Training loss: 1.4663779735565186
Epoch: 15/20:, Batch: [], Training loss: 1.5713332891464233
Epoch: 15/20:, Batch: [], Training loss: 1.3233157396316528
Epoch: 14/20 Iteration: 60 Validation Acc: 0.2480
Epoch: 15/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 15/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 15/20:, Batch: [], Training loss: 1.4341607093811035
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20 Iteration: 5 Validation Acc: 0.2398
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20 Iteration: 10 Validation Acc: 0.2398
Epoch: 16/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 15/20 Iteration: 15 Validation Acc: 0.2343
Epoch: 16/20:, Batch: [], Training loss: 1.4678981304168701
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 15/20 Iteration: 20 Validation Acc: 0.2343
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 16/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 15/20 Iteration: 25 Validation Acc: 0.2343
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.3694229125976562
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 15/20 Iteration: 30 Validation Acc: 0.2398
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 16/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20 Iteration: 35 Validation Acc: 0.2398
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.3233157396316528
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20 Iteration: 40 Validation Acc: 0.2452
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20 Iteration: 45 Validation Acc: 0.2452
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 16/20:, Batch: [], Training loss: 1.325872778892517
Epoch: 16/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 15/20 Iteration: 50 Validation Acc: 0.2480
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 16/20:, Batch: [], Training loss: 1.4306806325912476
Epoch: 16/20:, Batch: [], Training loss: 1.46637761592865
Epoch: 16/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 15/20 Iteration: 55 Validation Acc: 0.2480
Epoch: 16/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 16/20:, Batch: [], Training loss: 1.466378092765808
Epoch: 16/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 16/20:, Batch: [], Training loss: 1.3233157396316528
Epoch: 15/20 Iteration: 60 Validation Acc: 0.2507
Epoch: 16/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 16/20:, Batch: [], Training loss: 1.4306185245513916
Epoch: 16/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20:, Batch: [], Training loss: 1.4042608737945557
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20 Iteration: 5 Validation Acc: 0.2589
Epoch: 17/20:, Batch: [], Training loss: 1.3948463201522827
Epoch: 17/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.4543148279190063
Epoch: 17/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20 Iteration: 10 Validation Acc: 0.2861
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 17/20:, Batch: [], Training loss: 2.222748279571533
Epoch: 16/20 Iteration: 15 Validation Acc: 0.2970
Epoch: 17/20:, Batch: [], Training loss: 1.943213701248169
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 16/20 Iteration: 20 Validation Acc: 0.2480
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 2.074795722961426
Epoch: 17/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 16/20 Iteration: 25 Validation Acc: 0.2153
Epoch: 17/20:, Batch: [], Training loss: 1.4664585590362549
Epoch: 17/20:, Batch: [], Training loss: 1.5036247968673706
Epoch: 17/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 16/20 Iteration: 30 Validation Acc: 0.1935
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.537907600402832
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20 Iteration: 35 Validation Acc: 0.1826
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.4663794040679932
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 16/20 Iteration: 40 Validation Acc: 0.1771
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 16/20 Iteration: 45 Validation Acc: 0.1771
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.5050270557403564
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 16/20 Iteration: 50 Validation Acc: 0.1771
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 16/20 Iteration: 55 Validation Acc: 0.1771
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 16/20 Iteration: 60 Validation Acc: 0.1771
Epoch: 17/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20:, Batch: [], Training loss: 1.6094379425048828
Epoch: 18/20:, Batch: [], Training loss: 1.5023632049560547
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20 Iteration: 5 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20 Iteration: 10 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20 Iteration: 15 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5379419326782227
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20 Iteration: 20 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 17/20 Iteration: 25 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5379085540771484
Epoch: 18/20:, Batch: [], Training loss: 1.466459035873413
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20 Iteration: 30 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20 Iteration: 35 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 18/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 17/20 Iteration: 40 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20 Iteration: 45 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 18/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20 Iteration: 50 Validation Acc: 0.1771
Epoch: 18/20:, Batch: [], Training loss: 1.5682874917984009
Epoch: 18/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 17/20 Iteration: 55 Validation Acc: 0.1798
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 18/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 17/20 Iteration: 60 Validation Acc: 0.1907
Epoch: 18/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20:, Batch: [], Training loss: 1.5935027599334717
Epoch: 19/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20 Iteration: 5 Validation Acc: 0.1935
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20 Iteration: 10 Validation Acc: 0.1962
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5021424293518066
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20 Iteration: 15 Validation Acc: 0.1962
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5021506547927856
Epoch: 18/20 Iteration: 20 Validation Acc: 0.1962
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5379351377487183
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 18/20 Iteration: 25 Validation Acc: 0.1989
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 18/20 Iteration: 30 Validation Acc: 0.1989
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 18/20 Iteration: 35 Validation Acc: 0.1989
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 18/20 Iteration: 40 Validation Acc: 0.1989
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5391576290130615
Epoch: 18/20 Iteration: 45 Validation Acc: 0.1989
Epoch: 19/20:, Batch: [], Training loss: 1.5379109382629395
Epoch: 19/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 19/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 18/20 Iteration: 50 Validation Acc: 0.1989
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 18/20 Iteration: 55 Validation Acc: 0.1989
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.482674241065979
Epoch: 19/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 18/20 Iteration: 60 Validation Acc: 0.2044
Epoch: 19/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 19/20:, Batch: [], Training loss: 1.553537130355835
Epoch: 19/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20:, Batch: [], Training loss: 1.545697808265686
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 20/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20 Iteration: 5 Validation Acc: 0.2262
Epoch: 20/20:, Batch: [], Training loss: 1.5094730854034424
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20 Iteration: 10 Validation Acc: 0.2371
Epoch: 20/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 20/20:, Batch: [], Training loss: 1.502605676651001
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20 Iteration: 15 Validation Acc: 0.2425
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 19/20 Iteration: 20 Validation Acc: 0.2507
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5737148523330688
Epoch: 20/20:, Batch: [], Training loss: 1.3590810298919678
Epoch: 19/20 Iteration: 25 Validation Acc: 0.2534
Epoch: 20/20:, Batch: [], Training loss: 2.1690855026245117
Epoch: 20/20:, Batch: [], Training loss: 1.4306122064590454
Epoch: 20/20:, Batch: [], Training loss: 2.782285213470459
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 19/20 Iteration: 30 Validation Acc: 0.2343
Epoch: 20/20:, Batch: [], Training loss: 1.5021429061889648
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 19/20 Iteration: 35 Validation Acc: 0.2153
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.4306116104125977
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20 Iteration: 40 Validation Acc: 0.2071
Epoch: 20/20:, Batch: [], Training loss: 1.573672890663147
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 20/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 20/20:, Batch: [], Training loss: 1.5021421909332275
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 19/20 Iteration: 45 Validation Acc: 0.2016
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 20/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20 Iteration: 50 Validation Acc: 0.2016
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 20/20:, Batch: [], Training loss: 1.4663792848587036
Epoch: 19/20 Iteration: 55 Validation Acc: 0.1962
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5738474130630493
Epoch: 20/20:, Batch: [], Training loss: 1.5021471977233887
Epoch: 20/20:, Batch: [], Training loss: 1.6094380617141724
Epoch: 19/20 Iteration: 60 Validation Acc: 0.1935
Epoch: 20/20:, Batch: [], Training loss: 1.4663769006729126
Epoch: 20/20:, Batch: [], Training loss: 1.5379074811935425
Epoch: 20/20:, Batch: [], Training loss: 1.5736727714538574
Epoch: 20/20:, Batch: [], Training loss: 1.6094379425048828

Testing

Below you see the test accuracy. You can also see the predictions returned for images.


In [19]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    
    feed = {inputs_: test_x,
            labels_: test_y}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))


INFO:tensorflow:Restoring parameters from checkpoints/flowers.ckpt
Test accuracy: 0.2098

In [20]:
%matplotlib inline

import matplotlib.pyplot as plt
from scipy.ndimage import imread

Below, feel free to choose images and see how the trained classifier predicts the flowers in them.


In [21]:
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)


Out[21]:
<matplotlib.image.AxesImage at 0x7eff43939828>

In [22]:
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists.  Will not create again.')
else:
    #create vgg
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16.Vgg16()
        vgg.build(input_)


"vgg" object already exists.  Will not create again.

In [24]:
with tf.Session() as sess:
    img = utils.load_image(test_img_path)
    img = img.reshape((1, 224, 224, 3))

    feed_dict = {input_: img}
    code = sess.run(vgg.relu6, feed_dict=feed_dict)
        
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    
    feed = {inputs_: code}
    prediction = sess.run(predicted, feed_dict=feed).squeeze()


INFO:tensorflow:Restoring parameters from checkpoints/flowers.ckpt

In [25]:
plt.imshow(test_img)


Out[25]:
<matplotlib.image.AxesImage at 0x7eff25076b70>

In [26]:
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)


---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-26-96435c952224> in <module>()
      1 plt.barh(np.arange(5), prediction)
----> 2 _ = plt.yticks(np.arange(5), lb.classes_)

NameError: name 'lb' is not defined

In [ ]: