Transfer Learning

Most of the time you won't want to train a whole convolutional network yourself. Training a modern ConvNet on a huge dataset like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor or as an initial network to fine-tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.

VGGNet is great because it's simple and performs well, coming in second in the 2014 ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images and then easily train a simple classifier on top of that. What we'll do is take the output of the first fully connected layer, which has 4096 units, after its ReLU activation. We can use those values as a code for each image, then build a classifier on top of those codes.

You can read more about transfer learning from the CS231n course notes.

Pretrained VGGNet

We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.

git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg

This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available for download. Once you've cloned the repo into the folder containing this notebook, download the parameter file using the next cell.


In [1]:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm

vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
    raise Exception("VGG directory doesn't exist!")

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(vgg_dir + "vgg16.npy"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
        urlretrieve(
            'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
            vgg_dir + 'vgg16.npy',
            pbar.hook)
else:
    print("Parameter file already exists!")


Parameter file already exists!

Flower power

Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow Inception tutorial.


In [2]:
import tarfile

dataset_folder_path = 'flower_photos'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('flower_photos.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
        urlretrieve(
            'http://download.tensorflow.org/example_images/flower_photos.tgz',
            'flower_photos.tar.gz',
            pbar.hook)

if not isdir(dataset_folder_path):
    with tarfile.open('flower_photos.tar.gz') as tar:
        tar.extractall()

ConvNet Codes

Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.

Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):

self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')

self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')

self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')

self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')

self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')

self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)

So what we want are the values of the first fully connected layer, after they've been passed through the ReLU (self.relu6).


In [3]:
import os

import numpy as np
import tensorflow as tf

from tensorflow_vgg import vgg16
from tensorflow_vgg import utils

In [4]:
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]

Below I'm running images through the VGG network in batches.


In [5]:
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 100
codes_list = []
labels = []
batch = []

codes = None
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
    vgg.build(input_)
        

with tf.Session() as sess:
    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)
            
            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                
                # Image batch to pass to VGG network
                images = np.concatenate(batch)
                
                feed_dict = {input_: images}
                codes_batch =  sess.run(vgg.relu6, feed_dict=feed_dict)
                
                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))
                
                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))


/home/adrsta/Github/deep-learning/08_transfer-learning/tensorflow_vgg/vgg16.npy
npy file loaded
build model started
build model finished: 0s
Starting sunflowers images
100 images processed
200 images processed
300 images processed
400 images processed
500 images processed
600 images processed
699 images processed
Starting dandelion images
100 images processed
200 images processed
300 images processed
400 images processed
500 images processed
600 images processed
700 images processed
800 images processed
898 images processed
Starting tulips images
100 images processed
200 images processed
300 images processed
400 images processed
500 images processed
600 images processed
700 images processed
799 images processed
Starting roses images
100 images processed
200 images processed
300 images processed
400 images processed
500 images processed
600 images processed
641 images processed
Starting daisy images
100 images processed
200 images processed
300 images processed
400 images processed
500 images processed
600 images processed
633 images processed

In [6]:
# write codes to file
with open('codes', 'wb') as f:
    codes.tofile(f)
    
# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)
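
If you'd rather not deal with raw binary files and delimiters, NumPy's own save/load functions are an alternative (just a sketch; the rest of this notebook reads the 'codes' and 'labels' files written above, so this is purely optional). np.save keeps the array's shape and dtype, so no reshape is needed when loading:

# Optional alternative: save the codes and labels as .npy files
np.save('codes.npy', codes)
np.save('labels.npy', np.array(labels))

# Later, to read them back:
codes = np.load('codes.npy')
labels = np.load('labels.npy')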

Building the Classifier

Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.


In [7]:
# read codes and labels from file
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f:
    codes = np.fromfile(f, dtype=np.float32)
    codes = codes.reshape((len(labels), -1))

Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!

From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.


In [8]:
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
labels_vecs


Out[8]:
array([[0, 0, 0, 1, 0],
       [0, 0, 0, 1, 0],
       [0, 0, 0, 1, 0],
       ..., 
       [1, 0, 0, 0, 0],
       [1, 0, 0, 0, 0],
       [1, 0, 0, 0, 0]])

Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.

You can create the splitter like so:

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)

Then split the data with

splitter = ss.split(x, y)

ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices.
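
For example, here's a minimal sketch of the next() approach, using the codes and labels_vecs arrays from above (the cell below accomplishes the same thing by iterating over the generator):

from sklearn.model_selection import StratifiedShuffleSplit

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
# With n_splits=1 the generator yields a single (train_indices, test_indices) pair
train_idx, rest_idx = next(ss.split(codes, labels_vecs))
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
rest_x, rest_y = codes[rest_idx], labels_vecs[rest_idx]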


In [9]:
from sklearn.model_selection import StratifiedShuffleSplit

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
for train_index, test_index in sss.split(codes, labels_vecs):
    train_x, rest_x = codes[train_index], codes[test_index]
    train_y, rest_y = labels_vecs[train_index], labels_vecs[test_index]

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
for train_index, test_index in sss.split(rest_x, rest_y):
    val_x, test_x = rest_x[train_index], rest_x[test_index]
    val_y, test_y = rest_y[train_index], rest_y[test_index]

In [10]:
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)


Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)

Classifier layers

Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.

Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use cross entropy to calculate the cost.


In [11]:
from tensorflow import layers

In [12]:
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])

fc = tf.layers.dense(inputs_, 2000, activation=tf.nn.relu)
logits = tf.layers.dense(fc, labels_vecs.shape[1], activation=None)

cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=labels_,
                                                             logits=logits))

optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)

# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
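
The accuracy op above is what the validation and test codes get fed into. As a sketch (assuming an active session; the training cell below only prints the training loss), checking validation accuracy looks like this:

# Evaluate the accuracy op on the validation set (requires an open tf.Session as sess)
feed = {inputs_: val_x, labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Validation accuracy: {:.4f}".format(val_acc))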

Batches!

Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.


In [13]:
def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x)//n_batches
    
    for ii in range(0, n_batches*batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches-1)*batch_size:
            X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] 
        # On the last batch, grab the rest of the data
        else:
            X, Y = x[ii:], y[ii:]
        # I love generators
        yield X, Y

Training

Here, we'll train the network.


In [14]:
epochs = 10
batches = 100
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        b = 0
        for x, y in get_batches(train_x, train_y, batches):
            feed = {inputs_: x,
                    labels_: y}
            batch_cost, _ = sess.run([cost, optimizer], feed_dict=feed)
            print("Epoch: {}/{} ".format(e+1, epochs),
                  "Batch: {}/{} ".format(b+1, batches),
                  "Training loss: {:.4f}".format(batch_cost))
            b += 1      
    saver.save(sess, "checkpoints/flowers.ckpt")


Epoch: 1/10  Batch: 1/100  Training loss: 258.3085
Epoch: 1/10  Batch: 2/100  Training loss: 213.2260
Epoch: 1/10  Batch: 3/100  Training loss: 120.0142
Epoch: 1/10  Batch: 4/100  Training loss: 99.7718
Epoch: 1/10  Batch: 5/100  Training loss: 54.0191
Epoch: 1/10  Batch: 6/100  Training loss: 49.9562
Epoch: 1/10  Batch: 7/100  Training loss: 45.4586
Epoch: 1/10  Batch: 8/100  Training loss: 50.2755
Epoch: 1/10  Batch: 9/100  Training loss: 16.9481
Epoch: 1/10  Batch: 10/100  Training loss: 34.4742
Epoch: 1/10  Batch: 11/100  Training loss: 46.4253
Epoch: 1/10  Batch: 12/100  Training loss: 83.4526
Epoch: 1/10  Batch: 13/100  Training loss: 20.3686
Epoch: 1/10  Batch: 14/100  Training loss: 19.7525
Epoch: 1/10  Batch: 15/100  Training loss: 22.1988
Epoch: 1/10  Batch: 16/100  Training loss: 58.0417
Epoch: 1/10  Batch: 17/100  Training loss: 60.0108
Epoch: 1/10  Batch: 18/100  Training loss: 37.6135
Epoch: 1/10  Batch: 19/100  Training loss: 32.6551
Epoch: 1/10  Batch: 20/100  Training loss: 29.1316
Epoch: 1/10  Batch: 21/100  Training loss: 16.1710
Epoch: 1/10  Batch: 22/100  Training loss: 28.5059
Epoch: 1/10  Batch: 23/100  Training loss: 27.2328
Epoch: 1/10  Batch: 24/100  Training loss: 39.9243
Epoch: 1/10  Batch: 25/100  Training loss: 26.5905
Epoch: 1/10  Batch: 26/100  Training loss: 52.3984
Epoch: 1/10  Batch: 27/100  Training loss: 41.4468
Epoch: 1/10  Batch: 28/100  Training loss: 45.8202
Epoch: 1/10  Batch: 29/100  Training loss: 12.7220
Epoch: 1/10  Batch: 30/100  Training loss: 32.2211
Epoch: 1/10  Batch: 31/100  Training loss: 42.3756
Epoch: 1/10  Batch: 32/100  Training loss: 19.2546
Epoch: 1/10  Batch: 33/100  Training loss: 58.5932
Epoch: 1/10  Batch: 34/100  Training loss: 44.3519
Epoch: 1/10  Batch: 35/100  Training loss: 27.3145
Epoch: 1/10  Batch: 36/100  Training loss: 19.7184
Epoch: 1/10  Batch: 37/100  Training loss: 17.4269
Epoch: 1/10  Batch: 38/100  Training loss: 9.0744
Epoch: 1/10  Batch: 39/100  Training loss: 12.1141
Epoch: 1/10  Batch: 40/100  Training loss: 31.9705
Epoch: 1/10  Batch: 41/100  Training loss: 19.8388
Epoch: 1/10  Batch: 42/100  Training loss: 31.8801
Epoch: 1/10  Batch: 43/100  Training loss: 30.2859
Epoch: 1/10  Batch: 44/100  Training loss: 24.4009
Epoch: 1/10  Batch: 45/100  Training loss: 36.3559
Epoch: 1/10  Batch: 46/100  Training loss: 18.2452
Epoch: 1/10  Batch: 47/100  Training loss: 2.8941
Epoch: 1/10  Batch: 48/100  Training loss: 8.4642
Epoch: 1/10  Batch: 49/100  Training loss: 9.7432
Epoch: 1/10  Batch: 50/100  Training loss: 11.1386
Epoch: 1/10  Batch: 51/100  Training loss: 5.9851
Epoch: 1/10  Batch: 52/100  Training loss: 13.5811
Epoch: 1/10  Batch: 53/100  Training loss: 26.0611
Epoch: 1/10  Batch: 54/100  Training loss: 14.8790
Epoch: 1/10  Batch: 55/100  Training loss: 1.7423
Epoch: 1/10  Batch: 56/100  Training loss: 22.6984
Epoch: 1/10  Batch: 57/100  Training loss: 4.8239
Epoch: 1/10  Batch: 58/100  Training loss: 25.0413
Epoch: 1/10  Batch: 59/100  Training loss: 2.0545
Epoch: 1/10  Batch: 60/100  Training loss: 7.9274
Epoch: 1/10  Batch: 61/100  Training loss: 13.4382
Epoch: 1/10  Batch: 62/100  Training loss: 18.0277
Epoch: 1/10  Batch: 63/100  Training loss: 44.4189
Epoch: 1/10  Batch: 64/100  Training loss: 6.1725
Epoch: 1/10  Batch: 65/100  Training loss: 29.5078
Epoch: 1/10  Batch: 66/100  Training loss: 37.3291
Epoch: 1/10  Batch: 67/100  Training loss: 20.7156
Epoch: 1/10  Batch: 68/100  Training loss: 14.3072
Epoch: 1/10  Batch: 69/100  Training loss: 4.5579
Epoch: 1/10  Batch: 70/100  Training loss: 12.8262
Epoch: 1/10  Batch: 71/100  Training loss: 14.2806
Epoch: 1/10  Batch: 72/100  Training loss: 19.3508
Epoch: 1/10  Batch: 73/100  Training loss: 46.7643
Epoch: 1/10  Batch: 74/100  Training loss: 26.9331
Epoch: 1/10  Batch: 75/100  Training loss: 12.5584
Epoch: 1/10  Batch: 76/100  Training loss: 20.8457
Epoch: 1/10  Batch: 77/100  Training loss: 11.1591
Epoch: 1/10  Batch: 78/100  Training loss: 9.0657
Epoch: 1/10  Batch: 79/100  Training loss: 17.2252
Epoch: 1/10  Batch: 80/100  Training loss: 2.9942
Epoch: 1/10  Batch: 81/100  Training loss: 42.0547
Epoch: 1/10  Batch: 82/100  Training loss: 7.5245
Epoch: 1/10  Batch: 83/100  Training loss: 14.5007
Epoch: 1/10  Batch: 84/100  Training loss: 22.4118
Epoch: 1/10  Batch: 85/100  Training loss: 41.8103
Epoch: 1/10  Batch: 86/100  Training loss: 11.6181
Epoch: 1/10  Batch: 87/100  Training loss: 33.2647
Epoch: 1/10  Batch: 88/100  Training loss: 37.7598
Epoch: 1/10  Batch: 89/100  Training loss: 11.2899
Epoch: 1/10  Batch: 90/100  Training loss: 24.7512
Epoch: 1/10  Batch: 91/100  Training loss: 15.9439
Epoch: 1/10  Batch: 92/100  Training loss: 50.8114
Epoch: 1/10  Batch: 93/100  Training loss: 16.1097
Epoch: 1/10  Batch: 94/100  Training loss: 17.9776
Epoch: 1/10  Batch: 95/100  Training loss: 5.3464
Epoch: 1/10  Batch: 96/100  Training loss: 2.0788
Epoch: 1/10  Batch: 97/100  Training loss: 15.4374
Epoch: 1/10  Batch: 98/100  Training loss: 37.5716
Epoch: 1/10  Batch: 99/100  Training loss: 4.0275
Epoch: 1/10  Batch: 100/100  Training loss: 2.4693
Epoch: 2/10  Batch: 1/100  Training loss: 2.6287
Epoch: 2/10  Batch: 2/100  Training loss: 0.0449
Epoch: 2/10  Batch: 3/100  Training loss: 0.0939
Epoch: 2/10  Batch: 4/100  Training loss: 0.4309
Epoch: 2/10  Batch: 5/100  Training loss: 2.9701
Epoch: 2/10  Batch: 6/100  Training loss: 0.5460
Epoch: 2/10  Batch: 7/100  Training loss: 25.7500
Epoch: 2/10  Batch: 8/100  Training loss: 17.0611
Epoch: 2/10  Batch: 9/100  Training loss: 1.8399
Epoch: 2/10  Batch: 10/100  Training loss: 4.1276
Epoch: 2/10  Batch: 11/100  Training loss: 23.9920
Epoch: 2/10  Batch: 12/100  Training loss: 12.6495
Epoch: 2/10  Batch: 13/100  Training loss: 8.4492
Epoch: 2/10  Batch: 14/100  Training loss: 8.7929
Epoch: 2/10  Batch: 15/100  Training loss: 0.7010
Epoch: 2/10  Batch: 16/100  Training loss: 0.0239
Epoch: 2/10  Batch: 17/100  Training loss: 21.2451
Epoch: 2/10  Batch: 18/100  Training loss: 4.5728
Epoch: 2/10  Batch: 19/100  Training loss: 11.6790
Epoch: 2/10  Batch: 20/100  Training loss: 0.3175
Epoch: 2/10  Batch: 21/100  Training loss: 5.4469
Epoch: 2/10  Batch: 22/100  Training loss: 0.2363
Epoch: 2/10  Batch: 23/100  Training loss: 0.4080
Epoch: 2/10  Batch: 24/100  Training loss: 0.5347
Epoch: 2/10  Batch: 25/100  Training loss: 9.4738
Epoch: 2/10  Batch: 26/100  Training loss: 7.1701
Epoch: 2/10  Batch: 27/100  Training loss: 48.4243
Epoch: 2/10  Batch: 28/100  Training loss: 17.4398
Epoch: 2/10  Batch: 29/100  Training loss: 0.2202
Epoch: 2/10  Batch: 30/100  Training loss: 3.3811
Epoch: 2/10  Batch: 31/100  Training loss: 4.5550
Epoch: 2/10  Batch: 32/100  Training loss: 0.9982
Epoch: 2/10  Batch: 33/100  Training loss: 4.2258
Epoch: 2/10  Batch: 34/100  Training loss: 3.8031
Epoch: 2/10  Batch: 35/100  Training loss: 4.5515
Epoch: 2/10  Batch: 36/100  Training loss: 4.2836
Epoch: 2/10  Batch: 37/100  Training loss: 13.7063
Epoch: 2/10  Batch: 38/100  Training loss: 1.9306
Epoch: 2/10  Batch: 39/100  Training loss: 1.0541
Epoch: 2/10  Batch: 40/100  Training loss: 0.1213
Epoch: 2/10  Batch: 41/100  Training loss: 8.0309
Epoch: 2/10  Batch: 42/100  Training loss: 6.0740
Epoch: 2/10  Batch: 43/100  Training loss: 0.0890
Epoch: 2/10  Batch: 44/100  Training loss: 1.0535
Epoch: 2/10  Batch: 45/100  Training loss: 1.7996
Epoch: 2/10  Batch: 46/100  Training loss: 0.5715
Epoch: 2/10  Batch: 47/100  Training loss: 0.1611
Epoch: 2/10  Batch: 48/100  Training loss: 2.9009
Epoch: 2/10  Batch: 49/100  Training loss: 5.6919
Epoch: 2/10  Batch: 50/100  Training loss: 1.9070
Epoch: 2/10  Batch: 51/100  Training loss: 0.9191
Epoch: 2/10  Batch: 52/100  Training loss: 0.6189
Epoch: 2/10  Batch: 53/100  Training loss: 5.7202
Epoch: 2/10  Batch: 54/100  Training loss: 0.8646
Epoch: 2/10  Batch: 55/100  Training loss: 2.7405
Epoch: 2/10  Batch: 56/100  Training loss: 4.2968
Epoch: 2/10  Batch: 57/100  Training loss: 0.0458
Epoch: 2/10  Batch: 58/100  Training loss: 8.2842
Epoch: 2/10  Batch: 59/100  Training loss: 0.0043
Epoch: 2/10  Batch: 60/100  Training loss: 0.0215
Epoch: 2/10  Batch: 61/100  Training loss: 0.1800
Epoch: 2/10  Batch: 62/100  Training loss: 0.6102
Epoch: 2/10  Batch: 63/100  Training loss: 5.5751
Epoch: 2/10  Batch: 64/100  Training loss: 2.0708
Epoch: 2/10  Batch: 65/100  Training loss: 4.8326
Epoch: 2/10  Batch: 66/100  Training loss: 11.2886
Epoch: 2/10  Batch: 67/100  Training loss: 2.8267
Epoch: 2/10  Batch: 68/100  Training loss: 0.0339
Epoch: 2/10  Batch: 69/100  Training loss: 3.3029
Epoch: 2/10  Batch: 70/100  Training loss: 0.2334
Epoch: 2/10  Batch: 71/100  Training loss: 1.2783
Epoch: 2/10  Batch: 72/100  Training loss: 2.7551
Epoch: 2/10  Batch: 73/100  Training loss: 7.5290
Epoch: 2/10  Batch: 74/100  Training loss: 0.2669
Epoch: 2/10  Batch: 75/100  Training loss: 0.5711
Epoch: 2/10  Batch: 76/100  Training loss: 0.1044
Epoch: 2/10  Batch: 77/100  Training loss: 2.3338
Epoch: 2/10  Batch: 78/100  Training loss: 1.0832
Epoch: 2/10  Batch: 79/100  Training loss: 9.6930
Epoch: 2/10  Batch: 80/100  Training loss: 0.1183
Epoch: 2/10  Batch: 81/100  Training loss: 0.1576
Epoch: 2/10  Batch: 82/100  Training loss: 1.1011
Epoch: 2/10  Batch: 83/100  Training loss: 4.8100
Epoch: 2/10  Batch: 84/100  Training loss: 6.6799
Epoch: 2/10  Batch: 85/100  Training loss: 11.7426
Epoch: 2/10  Batch: 86/100  Training loss: 1.9252
Epoch: 2/10  Batch: 87/100  Training loss: 1.5134
Epoch: 2/10  Batch: 88/100  Training loss: 4.6714
Epoch: 2/10  Batch: 89/100  Training loss: 0.1841
Epoch: 2/10  Batch: 90/100  Training loss: 1.1810
Epoch: 2/10  Batch: 91/100  Training loss: 1.7845
Epoch: 2/10  Batch: 92/100  Training loss: 9.1741
Epoch: 2/10  Batch: 93/100  Training loss: 0.3492
Epoch: 2/10  Batch: 94/100  Training loss: 0.5568
Epoch: 2/10  Batch: 95/100  Training loss: 0.0853
Epoch: 2/10  Batch: 96/100  Training loss: 4.2267
Epoch: 2/10  Batch: 97/100  Training loss: 1.4521
Epoch: 2/10  Batch: 98/100  Training loss: 0.3179
Epoch: 2/10  Batch: 99/100  Training loss: 0.0425
Epoch: 2/10  Batch: 100/100  Training loss: 2.7628
Epoch: 3/10  Batch: 1/100  Training loss: 0.0177
Epoch: 3/10  Batch: 2/100  Training loss: 0.1735
Epoch: 3/10  Batch: 3/100  Training loss: 0.0655
Epoch: 3/10  Batch: 4/100  Training loss: 3.2876
Epoch: 3/10  Batch: 5/100  Training loss: 1.1870
Epoch: 3/10  Batch: 6/100  Training loss: 1.2752
Epoch: 3/10  Batch: 7/100  Training loss: 0.1427
Epoch: 3/10  Batch: 8/100  Training loss: 3.9666
Epoch: 3/10  Batch: 9/100  Training loss: 0.4113
Epoch: 3/10  Batch: 10/100  Training loss: 0.1738
Epoch: 3/10  Batch: 11/100  Training loss: 0.5143
Epoch: 3/10  Batch: 12/100  Training loss: 0.0687
Epoch: 3/10  Batch: 13/100  Training loss: 2.0308
Epoch: 3/10  Batch: 14/100  Training loss: 3.1288
Epoch: 3/10  Batch: 15/100  Training loss: 0.4315
Epoch: 3/10  Batch: 16/100  Training loss: 1.3767
Epoch: 3/10  Batch: 17/100  Training loss: 3.2803
Epoch: 3/10  Batch: 18/100  Training loss: 0.0193
Epoch: 3/10  Batch: 19/100  Training loss: 0.1545
Epoch: 3/10  Batch: 20/100  Training loss: 0.2449
Epoch: 3/10  Batch: 21/100  Training loss: 3.1252
Epoch: 3/10  Batch: 22/100  Training loss: 7.7259
Epoch: 3/10  Batch: 23/100  Training loss: 0.0424
Epoch: 3/10  Batch: 24/100  Training loss: 0.2353
Epoch: 3/10  Batch: 25/100  Training loss: 0.4653
Epoch: 3/10  Batch: 26/100  Training loss: 0.0468
Epoch: 3/10  Batch: 27/100  Training loss: 0.6002
Epoch: 3/10  Batch: 28/100  Training loss: 0.0769
Epoch: 3/10  Batch: 29/100  Training loss: 1.8482
Epoch: 3/10  Batch: 30/100  Training loss: 1.2020
Epoch: 3/10  Batch: 31/100  Training loss: 1.6219
Epoch: 3/10  Batch: 32/100  Training loss: 0.7747
Epoch: 3/10  Batch: 33/100  Training loss: 7.1686
Epoch: 3/10  Batch: 34/100  Training loss: 6.9106
Epoch: 3/10  Batch: 35/100  Training loss: 1.8264
Epoch: 3/10  Batch: 36/100  Training loss: 0.0009
Epoch: 3/10  Batch: 37/100  Training loss: 0.0675
Epoch: 3/10  Batch: 38/100  Training loss: 2.5518
Epoch: 3/10  Batch: 39/100  Training loss: 0.0819
Epoch: 3/10  Batch: 40/100  Training loss: 0.0467
Epoch: 3/10  Batch: 41/100  Training loss: 2.1262
Epoch: 3/10  Batch: 42/100  Training loss: 0.0070
Epoch: 3/10  Batch: 43/100  Training loss: 3.0589
Epoch: 3/10  Batch: 44/100  Training loss: 0.4335
Epoch: 3/10  Batch: 45/100  Training loss: 0.0858
Epoch: 3/10  Batch: 46/100  Training loss: 3.3819
Epoch: 3/10  Batch: 47/100  Training loss: 0.6050
Epoch: 3/10  Batch: 48/100  Training loss: 0.0354
Epoch: 3/10  Batch: 49/100  Training loss: 0.0410
Epoch: 3/10  Batch: 50/100  Training loss: 0.0104
Epoch: 3/10  Batch: 51/100  Training loss: 0.2430
Epoch: 3/10  Batch: 52/100  Training loss: 0.7661
Epoch: 3/10  Batch: 53/100  Training loss: 0.0251
Epoch: 3/10  Batch: 54/100  Training loss: 0.0552
Epoch: 3/10  Batch: 55/100  Training loss: 1.4787
Epoch: 3/10  Batch: 56/100  Training loss: 0.8869
Epoch: 3/10  Batch: 57/100  Training loss: 0.0276
Epoch: 3/10  Batch: 58/100  Training loss: 0.0624
Epoch: 3/10  Batch: 59/100  Training loss: 0.1466
Epoch: 3/10  Batch: 60/100  Training loss: 1.1695
Epoch: 3/10  Batch: 61/100  Training loss: 2.0117
Epoch: 3/10  Batch: 62/100  Training loss: 5.3353
Epoch: 3/10  Batch: 63/100  Training loss: 0.4895
Epoch: 3/10  Batch: 64/100  Training loss: 0.0417
Epoch: 3/10  Batch: 65/100  Training loss: 0.0192
Epoch: 3/10  Batch: 66/100  Training loss: 0.0173
Epoch: 3/10  Batch: 67/100  Training loss: 0.0558
Epoch: 3/10  Batch: 68/100  Training loss: 0.0089
Epoch: 3/10  Batch: 69/100  Training loss: 0.3756
Epoch: 3/10  Batch: 70/100  Training loss: 0.0094
Epoch: 3/10  Batch: 71/100  Training loss: 0.0072
Epoch: 3/10  Batch: 72/100  Training loss: 0.3710
Epoch: 3/10  Batch: 73/100  Training loss: 1.2323
Epoch: 3/10  Batch: 74/100  Training loss: 4.5014
Epoch: 3/10  Batch: 75/100  Training loss: 3.0780
Epoch: 3/10  Batch: 76/100  Training loss: 0.1915
Epoch: 3/10  Batch: 77/100  Training loss: 1.9933
Epoch: 3/10  Batch: 78/100  Training loss: 5.4234
Epoch: 3/10  Batch: 79/100  Training loss: 0.1519
Epoch: 3/10  Batch: 80/100  Training loss: 0.0104
Epoch: 3/10  Batch: 81/100  Training loss: 6.3563
Epoch: 3/10  Batch: 82/100  Training loss: 12.2359
Epoch: 3/10  Batch: 83/100  Training loss: 0.0745
Epoch: 3/10  Batch: 84/100  Training loss: 0.0686
Epoch: 3/10  Batch: 85/100  Training loss: 0.3485
Epoch: 3/10  Batch: 86/100  Training loss: 0.8173
Epoch: 3/10  Batch: 87/100  Training loss: 0.6032
Epoch: 3/10  Batch: 88/100  Training loss: 0.0686
Epoch: 3/10  Batch: 89/100  Training loss: 0.4291
Epoch: 3/10  Batch: 90/100  Training loss: 0.0851
Epoch: 3/10  Batch: 91/100  Training loss: 0.1975
Epoch: 3/10  Batch: 92/100  Training loss: 0.7702
Epoch: 3/10  Batch: 93/100  Training loss: 1.3713
Epoch: 3/10  Batch: 94/100  Training loss: 0.2582
Epoch: 3/10  Batch: 95/100  Training loss: 0.4625
Epoch: 3/10  Batch: 96/100  Training loss: 0.7494
Epoch: 3/10  Batch: 97/100  Training loss: 0.2149
Epoch: 3/10  Batch: 98/100  Training loss: 1.0199
Epoch: 3/10  Batch: 99/100  Training loss: 0.0223
Epoch: 3/10  Batch: 100/100  Training loss: 1.5217
Epoch: 4/10  Batch: 1/100  Training loss: 0.0204
Epoch: 4/10  Batch: 2/100  Training loss: 1.7709
Epoch: 4/10  Batch: 3/100  Training loss: 0.0090
Epoch: 4/10  Batch: 4/100  Training loss: 0.0534
Epoch: 4/10  Batch: 5/100  Training loss: 3.4736
Epoch: 4/10  Batch: 6/100  Training loss: 0.1220
Epoch: 4/10  Batch: 7/100  Training loss: 0.2483
Epoch: 4/10  Batch: 8/100  Training loss: 0.0547
Epoch: 4/10  Batch: 9/100  Training loss: 0.2162
Epoch: 4/10  Batch: 10/100  Training loss: 0.0758
Epoch: 4/10  Batch: 11/100  Training loss: 0.3564
Epoch: 4/10  Batch: 12/100  Training loss: 0.0180
Epoch: 4/10  Batch: 13/100  Training loss: 0.0418
Epoch: 4/10  Batch: 14/100  Training loss: 2.8178
Epoch: 4/10  Batch: 15/100  Training loss: 0.0041
Epoch: 4/10  Batch: 16/100  Training loss: 0.0063
Epoch: 4/10  Batch: 17/100  Training loss: 1.0235
Epoch: 4/10  Batch: 18/100  Training loss: 1.5762
Epoch: 4/10  Batch: 19/100  Training loss: 10.3869
Epoch: 4/10  Batch: 20/100  Training loss: 4.7046
Epoch: 4/10  Batch: 21/100  Training loss: 0.1171
Epoch: 4/10  Batch: 22/100  Training loss: 0.0239
Epoch: 4/10  Batch: 23/100  Training loss: 0.5178
Epoch: 4/10  Batch: 24/100  Training loss: 0.3198
Epoch: 4/10  Batch: 25/100  Training loss: 3.4675
Epoch: 4/10  Batch: 26/100  Training loss: 0.8289
Epoch: 4/10  Batch: 27/100  Training loss: 20.8007
Epoch: 4/10  Batch: 28/100  Training loss: 5.7320
Epoch: 4/10  Batch: 29/100  Training loss: 6.7559
Epoch: 4/10  Batch: 30/100  Training loss: 2.9373
Epoch: 4/10  Batch: 31/100  Training loss: 0.2303
Epoch: 4/10  Batch: 32/100  Training loss: 0.0107
Epoch: 4/10  Batch: 33/100  Training loss: 5.7044
Epoch: 4/10  Batch: 34/100  Training loss: 0.1575
Epoch: 4/10  Batch: 35/100  Training loss: 4.1276
Epoch: 4/10  Batch: 36/100  Training loss: 2.9159
Epoch: 4/10  Batch: 37/100  Training loss: 32.5283
Epoch: 4/10  Batch: 38/100  Training loss: 0.7662
Epoch: 4/10  Batch: 39/100  Training loss: 0.5645
Epoch: 4/10  Batch: 40/100  Training loss: 0.0484
Epoch: 4/10  Batch: 41/100  Training loss: 0.0463
Epoch: 4/10  Batch: 42/100  Training loss: 1.1458
Epoch: 4/10  Batch: 43/100  Training loss: 1.7961
Epoch: 4/10  Batch: 44/100  Training loss: 11.3673
Epoch: 4/10  Batch: 45/100  Training loss: 0.3836
Epoch: 4/10  Batch: 46/100  Training loss: 1.5093
Epoch: 4/10  Batch: 47/100  Training loss: 0.0406
Epoch: 4/10  Batch: 48/100  Training loss: 0.0120
Epoch: 4/10  Batch: 49/100  Training loss: 0.0178
Epoch: 4/10  Batch: 50/100  Training loss: 2.3678
Epoch: 4/10  Batch: 51/100  Training loss: 0.3631
Epoch: 4/10  Batch: 52/100  Training loss: 0.0372
Epoch: 4/10  Batch: 53/100  Training loss: 0.0211
Epoch: 4/10  Batch: 54/100  Training loss: 0.0804
Epoch: 4/10  Batch: 55/100  Training loss: 0.0523
Epoch: 4/10  Batch: 56/100  Training loss: 2.1632
Epoch: 4/10  Batch: 57/100  Training loss: 0.8585
Epoch: 4/10  Batch: 58/100  Training loss: 0.0612
Epoch: 4/10  Batch: 59/100  Training loss: 0.0096
Epoch: 4/10  Batch: 60/100  Training loss: 0.3201
Epoch: 4/10  Batch: 61/100  Training loss: 6.2046
Epoch: 4/10  Batch: 62/100  Training loss: 0.1266
Epoch: 4/10  Batch: 63/100  Training loss: 0.0043
Epoch: 4/10  Batch: 64/100  Training loss: 0.0707
Epoch: 4/10  Batch: 65/100  Training loss: 0.5804
Epoch: 4/10  Batch: 66/100  Training loss: 0.3826
Epoch: 4/10  Batch: 67/100  Training loss: 0.1091
Epoch: 4/10  Batch: 68/100  Training loss: 0.0562
Epoch: 4/10  Batch: 69/100  Training loss: 0.0465
Epoch: 4/10  Batch: 70/100  Training loss: 0.1150
Epoch: 4/10  Batch: 71/100  Training loss: 0.1152
Epoch: 4/10  Batch: 72/100  Training loss: 0.2592
Epoch: 4/10  Batch: 73/100  Training loss: 0.0809
Epoch: 4/10  Batch: 74/100  Training loss: 4.5065
Epoch: 4/10  Batch: 75/100  Training loss: 0.0233
Epoch: 4/10  Batch: 76/100  Training loss: 0.0481
Epoch: 4/10  Batch: 77/100  Training loss: 0.0630
Epoch: 4/10  Batch: 78/100  Training loss: 3.3704
Epoch: 4/10  Batch: 79/100  Training loss: 0.2346
Epoch: 4/10  Batch: 80/100  Training loss: 0.1206
Epoch: 4/10  Batch: 81/100  Training loss: 0.9956
Epoch: 4/10  Batch: 82/100  Training loss: 0.5550
Epoch: 4/10  Batch: 83/100  Training loss: 0.0050
Epoch: 4/10  Batch: 84/100  Training loss: 3.0001
Epoch: 4/10  Batch: 85/100  Training loss: 0.5279
Epoch: 4/10  Batch: 86/100  Training loss: 0.0817
Epoch: 4/10  Batch: 87/100  Training loss: 0.9740
Epoch: 4/10  Batch: 88/100  Training loss: 0.1179
Epoch: 4/10  Batch: 89/100  Training loss: 0.1243
Epoch: 4/10  Batch: 90/100  Training loss: 0.0879
Epoch: 4/10  Batch: 91/100  Training loss: 0.7324
Epoch: 4/10  Batch: 92/100  Training loss: 0.0661
Epoch: 4/10  Batch: 93/100  Training loss: 0.0118
Epoch: 4/10  Batch: 94/100  Training loss: 3.4578
Epoch: 4/10  Batch: 95/100  Training loss: 0.4506
Epoch: 4/10  Batch: 96/100  Training loss: 0.0685
Epoch: 4/10  Batch: 97/100  Training loss: 3.0849
Epoch: 4/10  Batch: 98/100  Training loss: 0.0216
Epoch: 4/10  Batch: 99/100  Training loss: 0.0256
Epoch: 4/10  Batch: 100/100  Training loss: 0.4361
Epoch: 5/10  Batch: 1/100  Training loss: 0.0017
Epoch: 5/10  Batch: 2/100  Training loss: 0.0083
Epoch: 5/10  Batch: 3/100  Training loss: 0.0766
Epoch: 5/10  Batch: 4/100  Training loss: 0.1280
Epoch: 5/10  Batch: 5/100  Training loss: 0.0014
Epoch: 5/10  Batch: 6/100  Training loss: 0.0458
Epoch: 5/10  Batch: 7/100  Training loss: 0.2024
Epoch: 5/10  Batch: 8/100  Training loss: 0.0693
Epoch: 5/10  Batch: 9/100  Training loss: 0.0206
Epoch: 5/10  Batch: 10/100  Training loss: 0.0134
Epoch: 5/10  Batch: 11/100  Training loss: 0.1772
Epoch: 5/10  Batch: 12/100  Training loss: 1.9735
Epoch: 5/10  Batch: 13/100  Training loss: 0.0186
Epoch: 5/10  Batch: 14/100  Training loss: 0.0027
Epoch: 5/10  Batch: 15/100  Training loss: 0.0108
Epoch: 5/10  Batch: 16/100  Training loss: 0.0008
Epoch: 5/10  Batch: 17/100  Training loss: 1.0116
Epoch: 5/10  Batch: 18/100  Training loss: 0.7292
Epoch: 5/10  Batch: 19/100  Training loss: 0.2628
Epoch: 5/10  Batch: 20/100  Training loss: 0.0062
Epoch: 5/10  Batch: 21/100  Training loss: 0.0111
Epoch: 5/10  Batch: 22/100  Training loss: 0.0112
Epoch: 5/10  Batch: 23/100  Training loss: 0.0023
Epoch: 5/10  Batch: 24/100  Training loss: 0.0883
Epoch: 5/10  Batch: 25/100  Training loss: 0.0013
Epoch: 5/10  Batch: 26/100  Training loss: 0.0281
Epoch: 5/10  Batch: 27/100  Training loss: 0.0021
Epoch: 5/10  Batch: 28/100  Training loss: 0.0065
Epoch: 5/10  Batch: 29/100  Training loss: 0.0174
Epoch: 5/10  Batch: 30/100  Training loss: 0.0081
Epoch: 5/10  Batch: 31/100  Training loss: 0.4352
Epoch: 5/10  Batch: 32/100  Training loss: 5.8039
Epoch: 5/10  Batch: 33/100  Training loss: 0.0198
Epoch: 5/10  Batch: 34/100  Training loss: 0.0106
Epoch: 5/10  Batch: 35/100  Training loss: 0.0873
Epoch: 5/10  Batch: 36/100  Training loss: 0.0004
Epoch: 5/10  Batch: 37/100  Training loss: 0.0022
Epoch: 5/10  Batch: 38/100  Training loss: 0.0007
Epoch: 5/10  Batch: 39/100  Training loss: 0.0029
Epoch: 5/10  Batch: 40/100  Training loss: 0.1854
Epoch: 5/10  Batch: 41/100  Training loss: 0.1106
Epoch: 5/10  Batch: 42/100  Training loss: 0.0120
Epoch: 5/10  Batch: 43/100  Training loss: 0.0082
Epoch: 5/10  Batch: 44/100  Training loss: 0.0439
Epoch: 5/10  Batch: 45/100  Training loss: 0.0169
Epoch: 5/10  Batch: 46/100  Training loss: 5.8922
Epoch: 5/10  Batch: 47/100  Training loss: 0.2461
Epoch: 5/10  Batch: 48/100  Training loss: 0.0115
Epoch: 5/10  Batch: 49/100  Training loss: 0.1286
Epoch: 5/10  Batch: 50/100  Training loss: 0.0021
Epoch: 5/10  Batch: 51/100  Training loss: 0.0080
Epoch: 5/10  Batch: 52/100  Training loss: 0.0139
Epoch: 5/10  Batch: 53/100  Training loss: 0.0462
Epoch: 5/10  Batch: 54/100  Training loss: 0.0270
Epoch: 5/10  Batch: 55/100  Training loss: 0.0233
Epoch: 5/10  Batch: 56/100  Training loss: 0.0201
Epoch: 5/10  Batch: 57/100  Training loss: 0.0032
Epoch: 5/10  Batch: 58/100  Training loss: 0.0058
Epoch: 5/10  Batch: 59/100  Training loss: 0.0001
Epoch: 5/10  Batch: 60/100  Training loss: 0.5414
Epoch: 5/10  Batch: 61/100  Training loss: 0.0089
Epoch: 5/10  Batch: 62/100  Training loss: 0.0114
Epoch: 5/10  Batch: 63/100  Training loss: 0.0239
Epoch: 5/10  Batch: 64/100  Training loss: 0.0482
Epoch: 5/10  Batch: 65/100  Training loss: 0.0264
Epoch: 5/10  Batch: 66/100  Training loss: 0.0207
Epoch: 5/10  Batch: 67/100  Training loss: 0.0572
Epoch: 5/10  Batch: 68/100  Training loss: 0.6001
Epoch: 5/10  Batch: 69/100  Training loss: 0.9443
Epoch: 5/10  Batch: 70/100  Training loss: 0.0040
Epoch: 5/10  Batch: 71/100  Training loss: 0.0100
Epoch: 5/10  Batch: 72/100  Training loss: 0.1478
Epoch: 5/10  Batch: 73/100  Training loss: 0.0022
Epoch: 5/10  Batch: 74/100  Training loss: 0.0182
Epoch: 5/10  Batch: 75/100  Training loss: 0.0111
Epoch: 5/10  Batch: 76/100  Training loss: 0.0157
Epoch: 5/10  Batch: 77/100  Training loss: 0.0118
Epoch: 5/10  Batch: 78/100  Training loss: 0.0120
Epoch: 5/10  Batch: 79/100  Training loss: 0.1036
Epoch: 5/10  Batch: 80/100  Training loss: 0.0137
Epoch: 5/10  Batch: 81/100  Training loss: 0.0060
Epoch: 5/10  Batch: 82/100  Training loss: 0.0114
Epoch: 5/10  Batch: 83/100  Training loss: 0.0044
Epoch: 5/10  Batch: 84/100  Training loss: 0.1233
Epoch: 5/10  Batch: 85/100  Training loss: 0.0041
Epoch: 5/10  Batch: 86/100  Training loss: 0.0073
Epoch: 5/10  Batch: 87/100  Training loss: 0.0274
Epoch: 5/10  Batch: 88/100  Training loss: 0.1522
Epoch: 5/10  Batch: 89/100  Training loss: 0.8418
Epoch: 5/10  Batch: 90/100  Training loss: 0.1134
Epoch: 5/10  Batch: 91/100  Training loss: 0.2847
Epoch: 5/10  Batch: 92/100  Training loss: 0.0262
Epoch: 5/10  Batch: 93/100  Training loss: 0.0036
Epoch: 5/10  Batch: 94/100  Training loss: 0.0792
Epoch: 5/10  Batch: 95/100  Training loss: 0.0059
Epoch: 5/10  Batch: 96/100  Training loss: 0.0086
Epoch: 5/10  Batch: 97/100  Training loss: 0.0098
Epoch: 5/10  Batch: 98/100  Training loss: 0.0076
Epoch: 5/10  Batch: 99/100  Training loss: 0.0168
Epoch: 5/10  Batch: 100/100  Training loss: 0.0116
Epoch: 6/10  Batch: 1/100  Training loss: 0.0389
Epoch: 6/10  Batch: 2/100  Training loss: 0.0018
Epoch: 6/10  Batch: 3/100  Training loss: 0.0094
Epoch: 6/10  Batch: 4/100  Training loss: 0.3121
Epoch: 6/10  Batch: 5/100  Training loss: 0.0042
Epoch: 6/10  Batch: 6/100  Training loss: 0.0217
Epoch: 6/10  Batch: 7/100  Training loss: 0.0403
Epoch: 6/10  Batch: 8/100  Training loss: 0.1473
Epoch: 6/10  Batch: 9/100  Training loss: 0.4468
Epoch: 6/10  Batch: 10/100  Training loss: 0.0052
Epoch: 6/10  Batch: 11/100  Training loss: 0.0131
Epoch: 6/10  Batch: 12/100  Training loss: 0.1219
Epoch: 6/10  Batch: 13/100  Training loss: 0.0861
Epoch: 6/10  Batch: 14/100  Training loss: 0.0037
Epoch: 6/10  Batch: 15/100  Training loss: 0.0369
Epoch: 6/10  Batch: 16/100  Training loss: 0.0048
Epoch: 6/10  Batch: 17/100  Training loss: 0.0478
Epoch: 6/10  Batch: 18/100  Training loss: 0.0135
Epoch: 6/10  Batch: 19/100  Training loss: 0.1604
Epoch: 6/10  Batch: 20/100  Training loss: 0.0049
Epoch: 6/10  Batch: 21/100  Training loss: 0.1429
Epoch: 6/10  Batch: 22/100  Training loss: 0.0092
Epoch: 6/10  Batch: 23/100  Training loss: 0.0033
Epoch: 6/10  Batch: 24/100  Training loss: 0.0085
Epoch: 6/10  Batch: 25/100  Training loss: 0.0016
Epoch: 6/10  Batch: 26/100  Training loss: 0.0084
Epoch: 6/10  Batch: 27/100  Training loss: 0.0018
Epoch: 6/10  Batch: 28/100  Training loss: 0.0078
Epoch: 6/10  Batch: 29/100  Training loss: 0.0100
Epoch: 6/10  Batch: 30/100  Training loss: 0.0041
Epoch: 6/10  Batch: 31/100  Training loss: 0.0652
Epoch: 6/10  Batch: 32/100  Training loss: 0.0269
Epoch: 6/10  Batch: 33/100  Training loss: 0.0183
Epoch: 6/10  Batch: 34/100  Training loss: 0.0139
Epoch: 6/10  Batch: 35/100  Training loss: 0.0528
Epoch: 6/10  Batch: 36/100  Training loss: 0.0008
Epoch: 6/10  Batch: 37/100  Training loss: 0.0036
Epoch: 6/10  Batch: 38/100  Training loss: 0.0009
Epoch: 6/10  Batch: 39/100  Training loss: 0.0050
Epoch: 6/10  Batch: 40/100  Training loss: 0.0284
Epoch: 6/10  Batch: 41/100  Training loss: 0.0114
Epoch: 6/10  Batch: 42/100  Training loss: 0.0067
Epoch: 6/10  Batch: 43/100  Training loss: 0.0082
Epoch: 6/10  Batch: 44/100  Training loss: 0.0141
Epoch: 6/10  Batch: 45/100  Training loss: 0.0568
Epoch: 6/10  Batch: 46/100  Training loss: 0.0014
Epoch: 6/10  Batch: 47/100  Training loss: 0.0061
Epoch: 6/10  Batch: 48/100  Training loss: 0.0111
Epoch: 6/10  Batch: 49/100  Training loss: 0.0139
Epoch: 6/10  Batch: 50/100  Training loss: 0.0065
Epoch: 6/10  Batch: 51/100  Training loss: 0.0008
Epoch: 6/10  Batch: 52/100  Training loss: 0.0212
Epoch: 6/10  Batch: 53/100  Training loss: 0.0250
Epoch: 6/10  Batch: 54/100  Training loss: 0.0853
Epoch: 6/10  Batch: 55/100  Training loss: 0.0211
Epoch: 6/10  Batch: 56/100  Training loss: 0.0321
Epoch: 6/10  Batch: 57/100  Training loss: 0.0063
Epoch: 6/10  Batch: 58/100  Training loss: 0.0188
Epoch: 6/10  Batch: 59/100  Training loss: 0.0001
Epoch: 6/10  Batch: 60/100  Training loss: 0.0027
Epoch: 6/10  Batch: 61/100  Training loss: 0.0041
Epoch: 6/10  Batch: 62/100  Training loss: 0.0371
Epoch: 6/10  Batch: 63/100  Training loss: 0.0070
Epoch: 6/10  Batch: 64/100  Training loss: 0.0114
Epoch: 6/10  Batch: 65/100  Training loss: 0.0369
Epoch: 6/10  Batch: 66/100  Training loss: 0.0741
Epoch: 6/10  Batch: 67/100  Training loss: 0.0101
Epoch: 6/10  Batch: 68/100  Training loss: 0.0064
Epoch: 6/10  Batch: 69/100  Training loss: 0.0053
Epoch: 6/10  Batch: 70/100  Training loss: 0.0041
Epoch: 6/10  Batch: 71/100  Training loss: 0.0055
Epoch: 6/10  Batch: 72/100  Training loss: 0.0089
Epoch: 6/10  Batch: 73/100  Training loss: 0.0089
Epoch: 6/10  Batch: 74/100  Training loss: 0.0083
Epoch: 6/10  Batch: 75/100  Training loss: 0.0053
Epoch: 6/10  Batch: 76/100  Training loss: 0.0091
Epoch: 6/10  Batch: 77/100  Training loss: 0.0091
Epoch: 6/10  Batch: 78/100  Training loss: 0.0098
Epoch: 6/10  Batch: 79/100  Training loss: 0.0150
Epoch: 6/10  Batch: 80/100  Training loss: 0.0023
Epoch: 6/10  Batch: 81/100  Training loss: 0.0028
Epoch: 6/10  Batch: 82/100  Training loss: 0.0130
Epoch: 6/10  Batch: 83/100  Training loss: 0.0017
Epoch: 6/10  Batch: 84/100  Training loss: 0.1079
Epoch: 6/10  Batch: 85/100  Training loss: 0.0070
Epoch: 6/10  Batch: 86/100  Training loss: 0.0128
Epoch: 6/10  Batch: 87/100  Training loss: 0.0197
Epoch: 6/10  Batch: 88/100  Training loss: 0.0209
Epoch: 6/10  Batch: 89/100  Training loss: 0.0038
Epoch: 6/10  Batch: 90/100  Training loss: 0.1253
Epoch: 6/10  Batch: 91/100  Training loss: 0.0048
Epoch: 6/10  Batch: 92/100  Training loss: 0.0619
Epoch: 6/10  Batch: 93/100  Training loss: 0.0110
Epoch: 6/10  Batch: 94/100  Training loss: 0.2386
Epoch: 6/10  Batch: 95/100  Training loss: 0.0089
Epoch: 6/10  Batch: 96/100  Training loss: 0.0026
Epoch: 6/10  Batch: 97/100  Training loss: 0.0034
Epoch: 6/10  Batch: 98/100  Training loss: 0.0140
Epoch: 6/10  Batch: 99/100  Training loss: 0.0052
Epoch: 6/10  Batch: 100/100  Training loss: 0.0092
Epoch: 7/10  Batch: 1/100  Training loss: 0.0016
Epoch: 7/10  Batch: 2/100  Training loss: 0.0010
Epoch: 7/10  Batch: 3/100  Training loss: 0.0280
Epoch: 7/10  Batch: 4/100  Training loss: 0.0033
Epoch: 7/10  Batch: 5/100  Training loss: 0.0007
Epoch: 7/10  Batch: 6/100  Training loss: 0.0065
Epoch: 7/10  Batch: 7/100  Training loss: 0.0100
Epoch: 7/10  Batch: 8/100  Training loss: 0.0111
Epoch: 7/10  Batch: 9/100  Training loss: 0.0060
Epoch: 7/10  Batch: 10/100  Training loss: 0.0026
Epoch: 7/10  Batch: 11/100  Training loss: 0.0265
Epoch: 7/10  Batch: 12/100  Training loss: 0.0111
Epoch: 7/10  Batch: 13/100  Training loss: 0.0070
Epoch: 7/10  Batch: 14/100  Training loss: 0.0030
Epoch: 7/10  Batch: 15/100  Training loss: 0.0435
Epoch: 7/10  Batch: 16/100  Training loss: 0.0022
Epoch: 7/10  Batch: 17/100  Training loss: 0.1466
Epoch: 7/10  Batch: 18/100  Training loss: 0.0061
Epoch: 7/10  Batch: 19/100  Training loss: 0.0230
Epoch: 7/10  Batch: 20/100  Training loss: 0.0013
Epoch: 7/10  Batch: 21/100  Training loss: 0.0114
Epoch: 7/10  Batch: 22/100  Training loss: 0.0055
Epoch: 7/10  Batch: 23/100  Training loss: 0.0052
Epoch: 7/10  Batch: 24/100  Training loss: 0.0073
Epoch: 7/10  Batch: 25/100  Training loss: 0.0010
Epoch: 7/10  Batch: 26/100  Training loss: 0.0157
Epoch: 7/10  Batch: 27/100  Training loss: 0.0008
Epoch: 7/10  Batch: 28/100  Training loss: 0.0025
Epoch: 7/10  Batch: 29/100  Training loss: 0.0063
Epoch: 7/10  Batch: 30/100  Training loss: 0.0031
Epoch: 7/10  Batch: 31/100  Training loss: 0.0261
Epoch: 7/10  Batch: 32/100  Training loss: 0.1148
Epoch: 7/10  Batch: 33/100  Training loss: 0.0091
Epoch: 7/10  Batch: 34/100  Training loss: 0.0123
Epoch: 7/10  Batch: 35/100  Training loss: 0.0400
Epoch: 7/10  Batch: 36/100  Training loss: 0.0002
Epoch: 7/10  Batch: 37/100  Training loss: 0.0055
Epoch: 7/10  Batch: 38/100  Training loss: 0.0004
Epoch: 7/10  Batch: 39/100  Training loss: 0.0046
Epoch: 7/10  Batch: 40/100  Training loss: 0.0447
Epoch: 7/10  Batch: 41/100  Training loss: 0.0101
Epoch: 7/10  Batch: 42/100  Training loss: 0.0050
Epoch: 7/10  Batch: 43/100  Training loss: 0.0042
Epoch: 7/10  Batch: 44/100  Training loss: 0.0170
Epoch: 7/10  Batch: 45/100  Training loss: 0.0077
Epoch: 7/10  Batch: 46/100  Training loss: 0.0006
Epoch: 7/10  Batch: 47/100  Training loss: 0.0026
Epoch: 7/10  Batch: 48/100  Training loss: 0.0060
Epoch: 7/10  Batch: 49/100  Training loss: 0.0130
Epoch: 7/10  Batch: 50/100  Training loss: 0.0044
Epoch: 7/10  Batch: 51/100  Training loss: 0.0003
Epoch: 7/10  Batch: 52/100  Training loss: 0.0113
Epoch: 7/10  Batch: 53/100  Training loss: 0.0039
Epoch: 7/10  Batch: 54/100  Training loss: 0.0049
Epoch: 7/10  Batch: 55/100  Training loss: 0.0097
Epoch: 7/10  Batch: 56/100  Training loss: 0.0106
Epoch: 7/10  Batch: 57/100  Training loss: 0.0028
Epoch: 7/10  Batch: 58/100  Training loss: 0.0052
Epoch: 7/10  Batch: 59/100  Training loss: 0.0001
Epoch: 7/10  Batch: 60/100  Training loss: 0.0033
Epoch: 7/10  Batch: 61/100  Training loss: 0.0050
Epoch: 7/10  Batch: 62/100  Training loss: 0.0097
Epoch: 7/10  Batch: 63/100  Training loss: 0.0085
Epoch: 7/10  Batch: 64/100  Training loss: 0.0152
Epoch: 7/10  Batch: 65/100  Training loss: 0.0075
Epoch: 7/10  Batch: 66/100  Training loss: 0.0198
Epoch: 7/10  Batch: 67/100  Training loss: 0.0116
Epoch: 7/10  Batch: 68/100  Training loss: 0.0082
Epoch: 7/10  Batch: 69/100  Training loss: 0.0055
Epoch: 7/10  Batch: 70/100  Training loss: 0.0032
Epoch: 7/10  Batch: 71/100  Training loss: 0.0057
Epoch: 7/10  Batch: 72/100  Training loss: 0.0094
Epoch: 7/10  Batch: 73/100  Training loss: 0.0051
Epoch: 7/10  Batch: 74/100  Training loss: 0.0088
Epoch: 7/10  Batch: 75/100  Training loss: 0.0058
Epoch: 7/10  Batch: 76/100  Training loss: 0.0091
Epoch: 7/10  Batch: 77/100  Training loss: 0.0077
Epoch: 7/10  Batch: 78/100  Training loss: 0.0074
Epoch: 7/10  Batch: 79/100  Training loss: 0.0251
Epoch: 7/10  Batch: 80/100  Training loss: 0.0019
Epoch: 7/10  Batch: 81/100  Training loss: 0.0024
Epoch: 7/10  Batch: 82/100  Training loss: 0.0058
Epoch: 7/10  Batch: 83/100  Training loss: 0.0019
Epoch: 7/10  Batch: 84/100  Training loss: 0.0244
Epoch: 7/10  Batch: 85/100  Training loss: 0.0064
Epoch: 7/10  Batch: 86/100  Training loss: 0.0082
Epoch: 7/10  Batch: 87/100  Training loss: 0.0080
Epoch: 7/10  Batch: 88/100  Training loss: 0.0150
Epoch: 7/10  Batch: 89/100  Training loss: 0.0037
Epoch: 7/10  Batch: 90/100  Training loss: 0.0162
Epoch: 7/10  Batch: 91/100  Training loss: 0.0046
Epoch: 7/10  Batch: 92/100  Training loss: 0.0313
Epoch: 7/10  Batch: 93/100  Training loss: 0.0093
Epoch: 7/10  Batch: 94/100  Training loss: 0.0379
Epoch: 7/10  Batch: 95/100  Training loss: 0.0096
Epoch: 7/10  Batch: 96/100  Training loss: 0.0032
Epoch: 7/10  Batch: 97/100  Training loss: 0.0030
Epoch: 7/10  Batch: 98/100  Training loss: 0.0151
Epoch: 7/10  Batch: 99/100  Training loss: 0.0061
Epoch: 7/10  Batch: 100/100  Training loss: 0.0051
Epoch: 8/10  Batch: 1/100  Training loss: 0.0028
Epoch: 8/10  Batch: 2/100  Training loss: 0.0008
Epoch: 8/10  Batch: 3/100  Training loss: 0.0172
Epoch: 8/10  Batch: 4/100  Training loss: 0.0039
Epoch: 8/10  Batch: 5/100  Training loss: 0.0008
Epoch: 8/10  Batch: 6/100  Training loss: 0.0050
Epoch: 8/10  Batch: 7/100  Training loss: 0.0054
Epoch: 8/10  Batch: 8/100  Training loss: 0.0036
Epoch: 8/10  Batch: 9/100  Training loss: 0.0018
Epoch: 8/10  Batch: 10/100  Training loss: 0.0026
Epoch: 8/10  Batch: 11/100  Training loss: 0.0166
Epoch: 8/10  Batch: 12/100  Training loss: 0.0044
Epoch: 8/10  Batch: 13/100  Training loss: 0.0060
Epoch: 8/10  Batch: 14/100  Training loss: 0.0013
Epoch: 8/10  Batch: 15/100  Training loss: 0.0048
Epoch: 8/10  Batch: 16/100  Training loss: 0.0013
Epoch: 8/10  Batch: 17/100  Training loss: 0.0112
Epoch: 8/10  Batch: 18/100  Training loss: 0.0052
Epoch: 8/10  Batch: 19/100  Training loss: 0.0154
Epoch: 8/10  Batch: 20/100  Training loss: 0.0067
Epoch: 8/10  Batch: 21/100  Training loss: 0.0079
Epoch: 8/10  Batch: 22/100  Training loss: 0.0050
Epoch: 8/10  Batch: 23/100  Training loss: 0.0028
Epoch: 8/10  Batch: 24/100  Training loss: 0.0067
Epoch: 8/10  Batch: 25/100  Training loss: 0.0010
Epoch: 8/10  Batch: 26/100  Training loss: 0.0081
Epoch: 8/10  Batch: 27/100  Training loss: 0.0011
Epoch: 8/10  Batch: 28/100  Training loss: 0.0025
Epoch: 8/10  Batch: 29/100  Training loss: 0.0053
Epoch: 8/10  Batch: 30/100  Training loss: 0.0028
Epoch: 8/10  Batch: 31/100  Training loss: 0.0130
Epoch: 8/10  Batch: 32/100  Training loss: 0.0241
Epoch: 8/10  Batch: 33/100  Training loss: 0.0071
Epoch: 8/10  Batch: 34/100  Training loss: 0.0087
Epoch: 8/10  Batch: 35/100  Training loss: 0.0148
Epoch: 8/10  Batch: 36/100  Training loss: 0.0001
Epoch: 8/10  Batch: 37/100  Training loss: 0.0022
Epoch: 8/10  Batch: 38/100  Training loss: 0.0004
Epoch: 8/10  Batch: 39/100  Training loss: 0.0034
Epoch: 8/10  Batch: 40/100  Training loss: 0.0108
Epoch: 8/10  Batch: 41/100  Training loss: 0.0091
Epoch: 8/10  Batch: 42/100  Training loss: 0.0040
Epoch: 8/10  Batch: 43/100  Training loss: 0.0027
Epoch: 8/10  Batch: 44/100  Training loss: 0.0124
Epoch: 8/10  Batch: 45/100  Training loss: 0.0059
Epoch: 8/10  Batch: 46/100  Training loss: 0.0006
Epoch: 8/10  Batch: 47/100  Training loss: 0.0022
Epoch: 8/10  Batch: 48/100  Training loss: 0.0050
Epoch: 8/10  Batch: 49/100  Training loss: 0.0125
Epoch: 8/10  Batch: 50/100  Training loss: 0.0025
Epoch: 8/10  Batch: 51/100  Training loss: 0.0003
Epoch: 8/10  Batch: 52/100  Training loss: 0.0048
Epoch: 8/10  Batch: 53/100  Training loss: 0.0033
Epoch: 8/10  Batch: 54/100  Training loss: 0.0047
Epoch: 8/10  Batch: 55/100  Training loss: 0.0069
Epoch: 8/10  Batch: 56/100  Training loss: 0.0050
Epoch: 8/10  Batch: 57/100  Training loss: 0.0023
Epoch: 8/10  Batch: 58/100  Training loss: 0.0041
Epoch: 8/10  Batch: 59/100  Training loss: 0.0001
Epoch: 8/10  Batch: 60/100  Training loss: 0.0024
Epoch: 8/10  Batch: 61/100  Training loss: 0.0065
Epoch: 8/10  Batch: 62/100  Training loss: 0.0075
Epoch: 8/10  Batch: 63/100  Training loss: 0.0038
Epoch: 8/10  Batch: 64/100  Training loss: 0.0129
Epoch: 8/10  Batch: 65/100  Training loss: 0.0062
Epoch: 8/10  Batch: 66/100  Training loss: 0.0150
Epoch: 8/10  Batch: 67/100  Training loss: 0.0093
Epoch: 8/10  Batch: 68/100  Training loss: 0.0077
Epoch: 8/10  Batch: 69/100  Training loss: 0.0046
Epoch: 8/10  Batch: 70/100  Training loss: 0.0027
Epoch: 8/10  Batch: 71/100  Training loss: 0.0042
Epoch: 8/10  Batch: 72/100  Training loss: 0.0111
Epoch: 8/10  Batch: 73/100  Training loss: 0.0024
Epoch: 8/10  Batch: 74/100  Training loss: 0.0066
Epoch: 8/10  Batch: 75/100  Training loss: 0.0060
Epoch: 8/10  Batch: 76/100  Training loss: 0.0083
Epoch: 8/10  Batch: 77/100  Training loss: 0.0068
Epoch: 8/10  Batch: 78/100  Training loss: 0.0062
Epoch: 8/10  Batch: 79/100  Training loss: 0.0184
Epoch: 8/10  Batch: 80/100  Training loss: 0.0019
Epoch: 8/10  Batch: 81/100  Training loss: 0.0024
Epoch: 8/10  Batch: 82/100  Training loss: 0.0040
Epoch: 8/10  Batch: 83/100  Training loss: 0.0019
Epoch: 8/10  Batch: 84/100  Training loss: 0.0120
Epoch: 8/10  Batch: 85/100  Training loss: 0.0043
Epoch: 8/10  Batch: 86/100  Training loss: 0.0051
Epoch: 8/10  Batch: 87/100  Training loss: 0.0044
Epoch: 8/10  Batch: 88/100  Training loss: 0.0098
Epoch: 8/10  Batch: 89/100  Training loss: 0.0037
Epoch: 8/10  Batch: 90/100  Training loss: 0.0108
Epoch: 8/10  Batch: 91/100  Training loss: 0.0042
Epoch: 8/10  Batch: 92/100  Training loss: 0.0175
Epoch: 8/10  Batch: 93/100  Training loss: 0.0050
Epoch: 8/10  Batch: 94/100  Training loss: 0.0201
Epoch: 8/10  Batch: 95/100  Training loss: 0.0058
Epoch: 8/10  Batch: 96/100  Training loss: 0.0027
Epoch: 8/10  Batch: 97/100  Training loss: 0.0033
Epoch: 8/10  Batch: 98/100  Training loss: 0.0102
Epoch: 8/10  Batch: 99/100  Training loss: 0.0047
Epoch: 8/10  Batch: 100/100  Training loss: 0.0044
Epoch: 9/10  Batch: 1/100  Training loss: 0.0025
Epoch: 9/10  Batch: 2/100  Training loss: 0.0008
Epoch: 9/10  Batch: 3/100  Training loss: 0.0095
Epoch: 9/10  Batch: 4/100  Training loss: 0.0035
Epoch: 9/10  Batch: 5/100  Training loss: 0.0008
Epoch: 9/10  Batch: 6/100  Training loss: 0.0043
Epoch: 9/10  Batch: 7/100  Training loss: 0.0047
Epoch: 9/10  Batch: 8/100  Training loss: 0.0034
Epoch: 9/10  Batch: 9/100  Training loss: 0.0016
Epoch: 9/10  Batch: 10/100  Training loss: 0.0026
Epoch: 9/10  Batch: 11/100  Training loss: 0.0137
Epoch: 9/10  Batch: 12/100  Training loss: 0.0040
Epoch: 9/10  Batch: 13/100  Training loss: 0.0051
Epoch: 9/10  Batch: 14/100  Training loss: 0.0012
Epoch: 9/10  Batch: 15/100  Training loss: 0.0044
Epoch: 9/10  Batch: 16/100  Training loss: 0.0012
Epoch: 9/10  Batch: 17/100  Training loss: 0.0104
Epoch: 9/10  Batch: 18/100  Training loss: 0.0049
Epoch: 9/10  Batch: 19/100  Training loss: 0.0135
Epoch: 9/10  Batch: 20/100  Training loss: 0.0059
Epoch: 9/10  Batch: 21/100  Training loss: 0.0070
Epoch: 9/10  Batch: 22/100  Training loss: 0.0046
Epoch: 9/10  Batch: 23/100  Training loss: 0.0025
Epoch: 9/10  Batch: 24/100  Training loss: 0.0063
Epoch: 9/10  Batch: 25/100  Training loss: 0.0011
Epoch: 9/10  Batch: 26/100  Training loss: 0.0066
Epoch: 9/10  Batch: 27/100  Training loss: 0.0012
Epoch: 9/10  Batch: 28/100  Training loss: 0.0024
Epoch: 9/10  Batch: 29/100  Training loss: 0.0047
Epoch: 9/10  Batch: 30/100  Training loss: 0.0028
Epoch: 9/10  Batch: 31/100  Training loss: 0.0099
Epoch: 9/10  Batch: 32/100  Training loss: 0.0185
Epoch: 9/10  Batch: 33/100  Training loss: 0.0066
Epoch: 9/10  Batch: 34/100  Training loss: 0.0072
Epoch: 9/10  Batch: 35/100  Training loss: 0.0122
Epoch: 9/10  Batch: 36/100  Training loss: 0.0001
Epoch: 9/10  Batch: 37/100  Training loss: 0.0018
Epoch: 9/10  Batch: 38/100  Training loss: 0.0004
Epoch: 9/10  Batch: 39/100  Training loss: 0.0028
Epoch: 9/10  Batch: 40/100  Training loss: 0.0074
Epoch: 9/10  Batch: 41/100  Training loss: 0.0068
Epoch: 9/10  Batch: 42/100  Training loss: 0.0034
Epoch: 9/10  Batch: 43/100  Training loss: 0.0024
Epoch: 9/10  Batch: 44/100  Training loss: 0.0097
Epoch: 9/10  Batch: 45/100  Training loss: 0.0052
Epoch: 9/10  Batch: 46/100  Training loss: 0.0006
Epoch: 9/10  Batch: 47/100  Training loss: 0.0020
Epoch: 9/10  Batch: 48/100  Training loss: 0.0044
Epoch: 9/10  Batch: 49/100  Training loss: 0.0102
Epoch: 9/10  Batch: 50/100  Training loss: 0.0021
Epoch: 9/10  Batch: 51/100  Training loss: 0.0003
Epoch: 9/10  Batch: 52/100  Training loss: 0.0041
Epoch: 9/10  Batch: 53/100  Training loss: 0.0030
Epoch: 9/10  Batch: 54/100  Training loss: 0.0039
Epoch: 9/10  Batch: 55/100  Training loss: 0.0065
Epoch: 9/10  Batch: 56/100  Training loss: 0.0044
Epoch: 9/10  Batch: 57/100  Training loss: 0.0021
Epoch: 9/10  Batch: 58/100  Training loss: 0.0035
Epoch: 9/10  Batch: 59/100  Training loss: 0.0001
Epoch: 9/10  Batch: 60/100  Training loss: 0.0022
Epoch: 9/10  Batch: 61/100  Training loss: 0.0058
Epoch: 9/10  Batch: 62/100  Training loss: 0.0067
Epoch: 9/10  Batch: 63/100  Training loss: 0.0032
Epoch: 9/10  Batch: 64/100  Training loss: 0.0114
Epoch: 9/10  Batch: 65/100  Training loss: 0.0051
Epoch: 9/10  Batch: 66/100  Training loss: 0.0131
Epoch: 9/10  Batch: 67/100  Training loss: 0.0079
Epoch: 9/10  Batch: 68/100  Training loss: 0.0061
Epoch: 9/10  Batch: 69/100  Training loss: 0.0041
Epoch: 9/10  Batch: 70/100  Training loss: 0.0025
Epoch: 9/10  Batch: 71/100  Training loss: 0.0037
Epoch: 9/10  Batch: 72/100  Training loss: 0.0097
Epoch: 9/10  Batch: 73/100  Training loss: 0.0023
Epoch: 9/10  Batch: 74/100  Training loss: 0.0059
Epoch: 9/10  Batch: 75/100  Training loss: 0.0053
Epoch: 9/10  Batch: 76/100  Training loss: 0.0073
Epoch: 9/10  Batch: 77/100  Training loss: 0.0060
Epoch: 9/10  Batch: 78/100  Training loss: 0.0058
Epoch: 9/10  Batch: 79/100  Training loss: 0.0142
Epoch: 9/10  Batch: 80/100  Training loss: 0.0018
Epoch: 9/10  Batch: 81/100  Training loss: 0.0022
Epoch: 9/10  Batch: 82/100  Training loss: 0.0037
Epoch: 9/10  Batch: 83/100  Training loss: 0.0018
Epoch: 9/10  Batch: 84/100  Training loss: 0.0106
Epoch: 9/10  Batch: 85/100  Training loss: 0.0039
Epoch: 9/10  Batch: 86/100  Training loss: 0.0046
Epoch: 9/10  Batch: 87/100  Training loss: 0.0040
Epoch: 9/10  Batch: 88/100  Training loss: 0.0089
Epoch: 9/10  Batch: 89/100  Training loss: 0.0034
Epoch: 9/10  Batch: 90/100  Training loss: 0.0099
Epoch: 9/10  Batch: 91/100  Training loss: 0.0038
Epoch: 9/10  Batch: 92/100  Training loss: 0.0150
Epoch: 9/10  Batch: 93/100  Training loss: 0.0041
Epoch: 9/10  Batch: 94/100  Training loss: 0.0168
Epoch: 9/10  Batch: 95/100  Training loss: 0.0048
Epoch: 9/10  Batch: 96/100  Training loss: 0.0025
Epoch: 9/10  Batch: 97/100  Training loss: 0.0031
Epoch: 9/10  Batch: 98/100  Training loss: 0.0081
Epoch: 9/10  Batch: 99/100  Training loss: 0.0042
Epoch: 9/10  Batch: 100/100  Training loss: 0.0038
Epoch: 10/10  Batch: 1/100  Training loss: 0.0025
Epoch: 10/10  Batch: 2/100  Training loss: 0.0008
Epoch: 10/10  Batch: 3/100  Training loss: 0.0077
Epoch: 10/10  Batch: 4/100  Training loss: 0.0033
Epoch: 10/10  Batch: 5/100  Training loss: 0.0007
Epoch: 10/10  Batch: 6/100  Training loss: 0.0038
Epoch: 10/10  Batch: 7/100  Training loss: 0.0041
Epoch: 10/10  Batch: 8/100  Training loss: 0.0031
Epoch: 10/10  Batch: 9/100  Training loss: 0.0015
Epoch: 10/10  Batch: 10/100  Training loss: 0.0025
Epoch: 10/10  Batch: 11/100  Training loss: 0.0116
Epoch: 10/10  Batch: 12/100  Training loss: 0.0036
Epoch: 10/10  Batch: 13/100  Training loss: 0.0045
Epoch: 10/10  Batch: 14/100  Training loss: 0.0012
Epoch: 10/10  Batch: 15/100  Training loss: 0.0041
Epoch: 10/10  Batch: 16/100  Training loss: 0.0011
Epoch: 10/10  Batch: 17/100  Training loss: 0.0097
Epoch: 10/10  Batch: 18/100  Training loss: 0.0047
Epoch: 10/10  Batch: 19/100  Training loss: 0.0120
Epoch: 10/10  Batch: 20/100  Training loss: 0.0049
Epoch: 10/10  Batch: 21/100  Training loss: 0.0063
Epoch: 10/10  Batch: 22/100  Training loss: 0.0042
Epoch: 10/10  Batch: 23/100  Training loss: 0.0022
Epoch: 10/10  Batch: 24/100  Training loss: 0.0059
Epoch: 10/10  Batch: 25/100  Training loss: 0.0011
Epoch: 10/10  Batch: 26/100  Training loss: 0.0057
Epoch: 10/10  Batch: 27/100  Training loss: 0.0012
Epoch: 10/10  Batch: 28/100  Training loss: 0.0023
Epoch: 10/10  Batch: 29/100  Training loss: 0.0041
Epoch: 10/10  Batch: 30/100  Training loss: 0.0028
Epoch: 10/10  Batch: 31/100  Training loss: 0.0081
Epoch: 10/10  Batch: 32/100  Training loss: 0.0159
Epoch: 10/10  Batch: 33/100  Training loss: 0.0062
Epoch: 10/10  Batch: 34/100  Training loss: 0.0062
Epoch: 10/10  Batch: 35/100  Training loss: 0.0107
Epoch: 10/10  Batch: 36/100  Training loss: 0.0001
Epoch: 10/10  Batch: 37/100  Training loss: 0.0016
Epoch: 10/10  Batch: 38/100  Training loss: 0.0004
Epoch: 10/10  Batch: 39/100  Training loss: 0.0025
Epoch: 10/10  Batch: 40/100  Training loss: 0.0059
Epoch: 10/10  Batch: 41/100  Training loss: 0.0058
Epoch: 10/10  Batch: 42/100  Training loss: 0.0029
Epoch: 10/10  Batch: 43/100  Training loss: 0.0022
Epoch: 10/10  Batch: 44/100  Training loss: 0.0081
Epoch: 10/10  Batch: 45/100  Training loss: 0.0046
Epoch: 10/10  Batch: 46/100  Training loss: 0.0005
Epoch: 10/10  Batch: 47/100  Training loss: 0.0019
Epoch: 10/10  Batch: 48/100  Training loss: 0.0040
Epoch: 10/10  Batch: 49/100  Training loss: 0.0087
Epoch: 10/10  Batch: 50/100  Training loss: 0.0019
Epoch: 10/10  Batch: 51/100  Training loss: 0.0003
Epoch: 10/10  Batch: 52/100  Training loss: 0.0036
Epoch: 10/10  Batch: 53/100  Training loss: 0.0027
Epoch: 10/10  Batch: 54/100  Training loss: 0.0034
Epoch: 10/10  Batch: 55/100  Training loss: 0.0062
Epoch: 10/10  Batch: 56/100  Training loss: 0.0039
Epoch: 10/10  Batch: 57/100  Training loss: 0.0019
Epoch: 10/10  Batch: 58/100  Training loss: 0.0032
Epoch: 10/10  Batch: 59/100  Training loss: 0.0001
Epoch: 10/10  Batch: 60/100  Training loss: 0.0020
Epoch: 10/10  Batch: 61/100  Training loss: 0.0053
Epoch: 10/10  Batch: 62/100  Training loss: 0.0060
Epoch: 10/10  Batch: 63/100  Training loss: 0.0028
Epoch: 10/10  Batch: 64/100  Training loss: 0.0101
Epoch: 10/10  Batch: 65/100  Training loss: 0.0044
Epoch: 10/10  Batch: 66/100  Training loss: 0.0116
Epoch: 10/10  Batch: 67/100  Training loss: 0.0069
Epoch: 10/10  Batch: 68/100  Training loss: 0.0052
Epoch: 10/10  Batch: 69/100  Training loss: 0.0037
Epoch: 10/10  Batch: 70/100  Training loss: 0.0023
Epoch: 10/10  Batch: 71/100  Training loss: 0.0034
Epoch: 10/10  Batch: 72/100  Training loss: 0.0087
Epoch: 10/10  Batch: 73/100  Training loss: 0.0022
Epoch: 10/10  Batch: 74/100  Training loss: 0.0053
Epoch: 10/10  Batch: 75/100  Training loss: 0.0049
Epoch: 10/10  Batch: 76/100  Training loss: 0.0066
Epoch: 10/10  Batch: 77/100  Training loss: 0.0054
Epoch: 10/10  Batch: 78/100  Training loss: 0.0054
Epoch: 10/10  Batch: 79/100  Training loss: 0.0120
Epoch: 10/10  Batch: 80/100  Training loss: 0.0016
Epoch: 10/10  Batch: 81/100  Training loss: 0.0020
Epoch: 10/10  Batch: 82/100  Training loss: 0.0033
Epoch: 10/10  Batch: 83/100  Training loss: 0.0017
Epoch: 10/10  Batch: 84/100  Training loss: 0.0094
Epoch: 10/10  Batch: 85/100  Training loss: 0.0035
Epoch: 10/10  Batch: 86/100  Training loss: 0.0041
Epoch: 10/10  Batch: 87/100  Training loss: 0.0036
Epoch: 10/10  Batch: 88/100  Training loss: 0.0081
Epoch: 10/10  Batch: 89/100  Training loss: 0.0031
Epoch: 10/10  Batch: 90/100  Training loss: 0.0090
Epoch: 10/10  Batch: 91/100  Training loss: 0.0035
Epoch: 10/10  Batch: 92/100  Training loss: 0.0130
Epoch: 10/10  Batch: 93/100  Training loss: 0.0035
Epoch: 10/10  Batch: 94/100  Training loss: 0.0142
Epoch: 10/10  Batch: 95/100  Training loss: 0.0041
Epoch: 10/10  Batch: 96/100  Training loss: 0.0024
Epoch: 10/10  Batch: 97/100  Training loss: 0.0030
Epoch: 10/10  Batch: 98/100  Training loss: 0.0067
Epoch: 10/10  Batch: 99/100  Training loss: 0.0038
Epoch: 10/10  Batch: 100/100  Training loss: 0.0034

Testing

Below you can see the test accuracy on the held-out set, along with the predictions the classifier returns for individual images.


In [15]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    
    feed = {inputs_: test_x,
            labels_: test_y}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))


INFO:tensorflow:Restoring parameters from checkpoints/flowers.ckpt
Test accuracy: 0.9455
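
For a finer-grained look than the single accuracy number, the sketch below (my addition, not part of the original notebook) breaks accuracy down per class. It assumes the test_x, test_y, inputs_, predicted, saver, and lb objects defined in the earlier cells are still in scope, and that predicted holds the classifier's softmax output.

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    # Softmax probabilities for every test example
    probs = sess.run(predicted, feed_dict={inputs_: test_x})

# Compare the most likely class against the one-hot test labels, per class
pred_idx = np.argmax(probs, axis=1)
true_idx = np.argmax(test_y, axis=1)
for i, name in enumerate(lb.classes_):
    mask = true_idx == i
    print("{:>12s}: {:.4f}".format(name, (pred_idx[mask] == i).mean()))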

In [16]:
%matplotlib inline

import matplotlib.pyplot as plt
from scipy.ndimage import imread
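
Note that scipy.ndimage.imread has been removed from recent SciPy releases. If the import above fails, matplotlib's reader works as a drop-in replacement for the JPEGs in this dataset (my suggestion, not part of the original notebook; it relies on Pillow being installed):

# Fallback in case scipy.ndimage.imread is unavailable in your SciPy version
from matplotlib.pyplot import imread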

Below, feel free to choose images and see how the trained classifier predicts the flowers in them.


In [17]:
test_img_path = 'flower_photos/daisy/144603918_b9de002f60_m.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)


Out[17]:
<matplotlib.image.AxesImage at 0x7f10bc66dc88>

In [18]:
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists.  Will not create again.')
else:
    # Build the VGG16 graph with a placeholder for 224x224 RGB images
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16.Vgg16()
        vgg.build(input_)


"vgg" object already exists.  Will not create again.

In [19]:
batch = []
with tf.Session() as sess:
    # Run the image through VGG16 and grab its relu6 code
    img = utils.load_image(test_img_path)
    batch.append(img.reshape((1, 224, 224, 3)))
    images = np.concatenate(batch)

    feed_dict = {input_: images}
    code = sess.run(vgg.relu6, feed_dict=feed_dict)

# Restore the trained classifier and get its prediction for the code
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    feed = {inputs_: code}
    prediction = sess.run(predicted, feed_dict=feed).squeeze()


INFO:tensorflow:Restoring parameters from checkpoints/flowers.ckpt

In [20]:
plt.imshow(test_img)


Out[20]:
<matplotlib.image.AxesImage at 0x7f10bc6db358>

In [21]:
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
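
To read the chart off as a single label, a quick follow-up (my addition) is to take the argmax of the probability vector; this assumes the prediction array and lb from the cells above:

# Most likely class and its probability
top = np.argmax(prediction)
print("Predicted: {} ({:.2%})".format(lb.classes_[top], prediction[top]))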


Find photos that were mistakenly classified


In [25]:
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]

# One session for the VGG feature extractor, a second for the trained classifier
with tf.Session() as sess:
    saver = tf.train.Saver()
    with tf.Session() as sess2:
        saver.restore(sess2, tf.train.latest_checkpoint('checkpoints'))

        for each in classes:
            print("Starting {} images".format(each))
            class_path = data_dir + each
            files = os.listdir(class_path)

            for file in files:
                batch = []
                labels = []

                # Get the VGG relu6 code for this image
                img = utils.load_image(os.path.join(class_path, file))
                batch.append(img.reshape((1, 224, 224, 3)))
                labels.append(lb.transform([each])[0])
                images = np.concatenate(batch)

                feed_dict = {input_: images}
                code = sess.run(vgg.relu6, feed_dict=feed_dict)

                # Classify the code and print the path if the prediction is wrong
                feed = {inputs_: code, labels_: labels}
                correct, prediction = sess2.run([correct_pred, predicted], feed_dict=feed)

                if not correct[0]:
                    # Uncomment to also display the image and its predicted probabilities
                    #test_img = imread(os.path.join(class_path, file))
                    #plt.imshow(test_img)
                    #plt.barh(np.arange(5), prediction)
                    #_ = plt.yticks(np.arange(5), lb.classes_)
                    print(os.path.join(class_path, file))


INFO:tensorflow:Restoring parameters from checkpoints/flowers.ckpt
Starting sunflowers images
flower_photos/sunflowers/16832961488_5f7e70eb5e_n.jpg
flower_photos/sunflowers/8543642705_b841b0e5f6.jpg
flower_photos/sunflowers/8202034834_ee0ee91e04_n.jpg
flower_photos/sunflowers/2883115621_4837267ea1_m.jpg
flower_photos/sunflowers/7176729016_d73ff2211e.jpg
flower_photos/sunflowers/22478719251_276cb094f9_n.jpg
flower_photos/sunflowers/15207507116_8b7f894508_m.jpg
flower_photos/sunflowers/4872284527_ff52128b97.jpg
flower_photos/sunflowers/9460336948_6ae968be93.jpg
flower_photos/sunflowers/9231555352_d2dd8f8e68_m.jpg
flower_photos/sunflowers/265422922_bbbde781d2_m.jpg
flower_photos/sunflowers/2729206569_9dd2b5a3ed.jpg
flower_photos/sunflowers/3734999477_7f454081aa_n.jpg
flower_photos/sunflowers/10386522775_4f8c616999_m.jpg
flower_photos/sunflowers/19710925313_31682fa22b_m.jpg
flower_photos/sunflowers/8928614683_6c168edcfc.jpg
flower_photos/sunflowers/6239758929_50e5e5a476_m.jpg
flower_photos/sunflowers/1267876087_a1b3c63dc9.jpg
flower_photos/sunflowers/8041242566_752def876e_n.jpg
flower_photos/sunflowers/14678298676_6db8831ee6_m.jpg
flower_photos/sunflowers/7530313068_ddd2dc1f44_m.jpg
flower_photos/sunflowers/9599534035_ae4df582b6.jpg
flower_photos/sunflowers/5492906452_80943bfd04.jpg
flower_photos/sunflowers/5027895361_ace3b731e5_n.jpg
flower_photos/sunflowers/18237215308_a158d49f28_n.jpg
flower_photos/sunflowers/147804446_ef9244c8ce_m.jpg
flower_photos/sunflowers/20410533613_56da1cce7c.jpg
flower_photos/sunflowers/7270375648_79f0caef42_n.jpg
flower_photos/sunflowers/6140661443_bb48344226.jpg
flower_photos/sunflowers/8014735546_3db46bb1fe_n.jpg
flower_photos/sunflowers/6482016425_d8fab362f6.jpg
flower_photos/sunflowers/2807106374_f422b5f00c.jpg
Starting dandelion images
flower_photos/dandelion/146242691_44d9c9d6ce_n.jpg
flower_photos/dandelion/15268682367_5a4512b29f_m.jpg
flower_photos/dandelion/9818247_e2eac18894.jpg
flower_photos/dandelion/8701999625_8d83138124.jpg
flower_photos/dandelion/4657801292_73bef15031.jpg
flower_photos/dandelion/16949657389_ac0ee80fd1_m.jpg
flower_photos/dandelion/8744249948_36cb1969f8_m.jpg
flower_photos/dandelion/19551343814_48f764535f_m.jpg
flower_photos/dandelion/19440910519_cb1162470e.jpg
flower_photos/dandelion/17077940105_d2cd7b9ec4_n.jpg
flower_photos/dandelion/5109496141_8dcf673d43_n.jpg
flower_photos/dandelion/14439618952_470224b89b_n.jpg
flower_photos/dandelion/8181477_8cb77d2e0f_n.jpg
flower_photos/dandelion/3483575184_cb8d16a083_n.jpg
flower_photos/dandelion/2620243133_e801981efe_n.jpg
flower_photos/dandelion/140951103_69847c0b7c.jpg
flower_photos/dandelion/7222962522_36952a67b6_n.jpg
flower_photos/dandelion/2387025546_6aecb1b984_n.jpg
Starting tulips images
flower_photos/tulips/5674704952_9bd225ed9e_n.jpg
flower_photos/tulips/3238068295_b2a7b17f48_n.jpg
flower_photos/tulips/495094547_fd2d999c44.jpg
flower_photos/tulips/10094731133_94a942463c.jpg
flower_photos/tulips/6903831250_a2757fff82_m.jpg
flower_photos/tulips/5524946579_307dc74476.jpg
flower_photos/tulips/7094415739_6b29e5215c_m.jpg
flower_photos/tulips/7145978709_2d1596f462.jpg
flower_photos/tulips/15275190769_0ed7bbf490.jpg
flower_photos/tulips/3446285408_4be9c0fded_m.jpg
flower_photos/tulips/3637371174_a8dfcc1b35.jpg
flower_photos/tulips/7003964080_4566470798_n.jpg
flower_photos/tulips/9947385346_3a8cacea02_n.jpg
flower_photos/tulips/251811158_75fa3034ff.jpg
flower_photos/tulips/7166550328_de0d73cfa9.jpg
flower_photos/tulips/4561670472_0451888e32_n.jpg
flower_photos/tulips/15082212714_ff87e8fcb1_m.jpg
flower_photos/tulips/4604272150_0c92385530_n.jpg
flower_photos/tulips/4550091966_7f3e0f8802_n.jpg
flower_photos/tulips/7166618384_850905fc63_n.jpg
flower_photos/tulips/14116780333_7836f4448c.jpg
flower_photos/tulips/7266196114_c2a736a15a_m.jpg
flower_photos/tulips/6267021825_a8316e0dcc_m.jpg
flower_photos/tulips/16680998737_6f6225fe36.jpg
flower_photos/tulips/16702188449_3dacce90b2_m.jpg
flower_photos/tulips/15275504998_ca9eb82998.jpg
flower_photos/tulips/5687705933_55a8c2dbac.jpg
flower_photos/tulips/142235914_5419ff8a4a.jpg
flower_photos/tulips/8454707381_453b4862eb_m.jpg
flower_photos/tulips/164578909_51f245d3fa_n.jpg
flower_photos/tulips/10791227_7168491604.jpg
flower_photos/tulips/130684927_a05164ba13_m.jpg
flower_photos/tulips/4508346090_a27b988f79_n.jpg
flower_photos/tulips/8762189906_8223cef62f.jpg
flower_photos/tulips/15452909878_0c4941f729_m.jpg
flower_photos/tulips/112334842_3ecf7585dd.jpg
flower_photos/tulips/7082476907_99beef0dde.jpg
flower_photos/tulips/16751015081_af2ef77c9a_n.jpg
flower_photos/tulips/113902743_8f537f769b_n.jpg
flower_photos/tulips/15147473067_7c5498eb0e_m.jpg
flower_photos/tulips/6936380780_19c26c918a.jpg
flower_photos/tulips/5674707464_dc18de05b1.jpg
flower_photos/tulips/16930121391_a4092ecf00_n.jpg
flower_photos/tulips/15647243236_2778501cf5_n.jpg
flower_photos/tulips/14067778605_0285b7cc3a.jpg
Starting roses images
flower_photos/roses/534228982_4afbcece9b_m.jpg
flower_photos/roses/3145692843_d46ba4703c.jpg
flower_photos/roses/4503599544_3822e7d1be.jpg
flower_photos/roses/6732261031_861a1026fa_n.jpg
flower_photos/roses/873660804_37f5c6a46e_n.jpg
flower_photos/roses/14687731322_5613f76353.jpg
flower_photos/roses/9337528427_3d09b7012b.jpg
flower_photos/roses/15859434664_67bf3ef29f.jpg
flower_photos/roses/5863698305_04a4277401_n.jpg
flower_photos/roses/14267691818_301aceda07.jpg
flower_photos/roses/13231224664_4af5293a37.jpg
flower_photos/roses/7147367479_f7a6ef0798.jpg
flower_photos/roses/6069602140_866eecf7c2_m.jpg
flower_photos/roses/9216324117_5fa1e2bc25_n.jpg
flower_photos/roses/21413573151_e681c6a97a.jpg
flower_photos/roses/8775267816_726ddc6d92_n.jpg
flower_photos/roses/1645761726_2b1be95472.jpg
flower_photos/roses/6241886381_cc722785af.jpg
flower_photos/roses/8035908422_87220425d2_n.jpg
flower_photos/roses/4413509121_a62879598a.jpg
flower_photos/roses/5212877807_a3ddf06a7c_n.jpg
flower_photos/roses/9167147034_0a66ee3616_n.jpg
flower_photos/roses/15697872479_ed48e9dd73_n.jpg
flower_photos/roses/15922772266_1167a06620.jpg
flower_photos/roses/8949720453_66e8304c30.jpg
flower_photos/roses/8667746487_781af9e615_n.jpg
flower_photos/roses/7345657862_689366e79a.jpg
flower_photos/roses/5223191368_01aedb6547_n.jpg
flower_photos/roses/11102341464_508d558dfc_n.jpg
flower_photos/roses/4998708839_c53ee536a8_n.jpg
flower_photos/roses/15255964454_0a64eb67fa.jpg
flower_photos/roses/16051111039_0f0626a241_n.jpg
flower_photos/roses/7551637034_55ae047756_n.jpg
flower_photos/roses/9633056561_6f1b7e8faf_m.jpg
flower_photos/roses/6108118824_5b0231a56d.jpg
flower_photos/roses/488849503_63a290a8c2_m.jpg
flower_photos/roses/9164924345_6b63637acf.jpg
flower_photos/roses/159079265_d77a9ac920_n.jpg
flower_photos/roses/9320934277_4fb95aef5d_n.jpg
flower_photos/roses/2973256732_1926295f35.jpg
Starting daisy images
flower_photos/daisy/515112668_a49c69455a.jpg
flower_photos/daisy/519880292_7a3a6c6b69.jpg
flower_photos/daisy/10391248763_1d16681106_n.jpg
flower_photos/daisy/3445110406_0c1616d2e3_n.jpg
flower_photos/daisy/4333085242_bbeb3e2841_m.jpg
flower_photos/daisy/517054463_036db655a1_m.jpg
flower_photos/daisy/2567033807_8e918c53d8_n.jpg
flower_photos/daisy/17357636476_1953c07aa4_n.jpg
flower_photos/daisy/2666572212_2caca8de9f_n.jpg
flower_photos/daisy/7066602021_2647457985_m.jpg
flower_photos/daisy/1355787476_32e9f2a30b.jpg
flower_photos/daisy/5673551_01d1ea993e_n.jpg
flower_photos/daisy/173350276_02817aa8d5.jpg
flower_photos/daisy/6323721068_3d3394af6d_n.jpg
flower_photos/daisy/5435513198_90ce39f1aa_n.jpg
flower_photos/daisy/10172636503_21bededa75_n.jpg
flower_photos/daisy/6910811638_aa6f17df23.jpg
flower_photos/daisy/512177035_70afc925c8.jpg
flower_photos/daisy/530738000_4df7e4786b.jpg
flower_photos/daisy/5944315415_2be8abeb2f_m.jpg
flower_photos/daisy/517054467_d82d323c33_m.jpg
flower_photos/daisy/5135131051_102d4878ca_n.jpg
flower_photos/daisy/5665838969_fe217988b9_m.jpg
flower_photos/daisy/3780380240_ef9ec1b737_m.jpg
flower_photos/daisy/5700781400_65761f3fce.jpg
flower_photos/daisy/7358085448_b317d11cd5.jpg
flower_photos/daisy/18203367608_07a04e98a4_n.jpg
flower_photos/daisy/2538504987_fe524b92a8_n.jpg
flower_photos/daisy/835750256_3f91a147ef_n.jpg
flower_photos/daisy/5665834973_76bd6c6523_m.jpg
flower_photos/daisy/20182559506_40a112f762.jpg
flower_photos/daisy/3611577717_f3a7a8c416_n.jpg
flower_photos/daisy/14088053307_1a13a0bf91_n.jpg
flower_photos/daisy/134372449_0f7166d96c_n.jpg
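
As a follow-up (my addition, not in the original notebook), you can display one of the misclassified photos listed above together with the classifier's probabilities, reusing the same pipeline as before. The path below is one of the files printed by the loop; utils, vgg, input_, inputs_, predicted, saver, and lb are assumed to still be in scope.

mis_path = 'flower_photos/sunflowers/2883115621_4837267ea1_m.jpg'

# Get the VGG relu6 code for the misclassified image
img = utils.load_image(mis_path)
with tf.Session() as sess:
    code = sess.run(vgg.relu6, feed_dict={input_: img.reshape((1, 224, 224, 3))})

# Classify the code with the trained network
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    prob = sess.run(predicted, feed_dict={inputs_: code}).squeeze()

# Show the image and the predicted class probabilities
plt.imshow(imread(mis_path))
plt.figure()
plt.barh(np.arange(5), prob)
_ = plt.yticks(np.arange(5), lb.classes_)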

In [ ]: