

Anna KaRNNa

In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch, along with some information from r2rt and from Sherjil Ozair's implementation on GitHub. Below is the general architecture of the character-wise RNN.


In [1]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use.


In [2]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
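As a quick sanity check, this mapping is invertible: decoding the integer array reproduces the original text. A minimal sketch on a toy string (note that because `vocab` is built from a `set`, the particular integer assigned to each character can vary between runs):

```python
import numpy as np

# Toy corpus standing in for the full text of anna.txt
text = "abcabd"

# Same lookup tables as above
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

# Decoding the integers recovers the original characters exactly
decoded = ''.join(int_to_vocab[i] for i in encoded)
```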

In [3]:
text[:100]


Out[3]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'

In [4]:
chars[:100]


Out[4]:
array([ 6, 55, 75,  1, 64, 12, 26,  8, 61, 73, 73, 73, 25, 75,  1,  1, 76,
        8, 62, 75,  7, 18, 82, 18, 12,  5,  8, 75, 26, 12,  8, 75, 82, 82,
        8, 75, 82, 18, 54, 12, 47,  8, 12, 32, 12, 26, 76,  8, 35, 63, 55,
       75,  1,  1, 76,  8, 62, 75,  7, 18, 82, 76,  8, 18,  5,  8, 35, 63,
       55, 75,  1,  1, 76,  8, 18, 63,  8, 18, 64,  5,  8, 29, 72, 63, 73,
       72, 75, 76, 19, 73, 73, 23, 32, 12, 26, 76, 64, 55, 18, 63], dtype=int32)

Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be whether the network can generate new text.

Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.

The idea here is to make a 2D matrix where the number of rows is equal to batch_size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword argument, keeping 90% of the batches in the training set and the other 10% in the validation set.
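The input/target relationship described above can be illustrated on a toy array before worrying about batching; this sketch just shows the shift-by-one:

```python
import numpy as np

# Toy sequence standing in for the encoded character array
chars = np.arange(10)

# Targets are the inputs shifted one character over:
# the network predicts the *next* character at every step
x = chars[:-1]
y = chars[1:]
```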


In [6]:
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    """ 
    Split character data into training and validation sets, inputs and targets for each set.
    
    Arguments
    ---------
    chars: character array
    batch_size: Number of sequences per batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set
    
    
    Returns train_x, train_y, val_x, val_y
    """
    
    slice_size = batch_size * num_steps
    n_batches = int(len(chars) / slice_size)
    
    # Drop the last few characters to make only full batches
    x = chars[: n_batches*slice_size]
    y = chars[1: n_batches*slice_size + 1]
    
    # Split the data into batch_size slices, then stack them into a 2D matrix 
    x = np.stack(np.split(x, batch_size))
    y = np.stack(np.split(y, batch_size))
    
    # Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    
    # Split into training and validation sets, keep the first split_frac of the batches for training
    split_idx = int(n_batches*split_frac)
    train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
    val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
    
    return train_x, train_y, val_x, val_y

In [7]:
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)

In [8]:
train_x.shape


Out[8]:
(10, 178400)

In [9]:
train_x[:,:10]


Out[9]:
array([[ 6, 55, 75,  1, 64, 12, 26,  8, 61, 73],
       [30, 63, 79,  8, 55, 12,  8,  7, 29, 32],
       [ 8, 53, 75, 64, 53, 55, 18, 63, 56,  8],
       [29, 64, 55, 12, 26,  8, 72, 29, 35, 82],
       [ 8, 64, 55, 12,  8, 82, 75, 63, 79, 46],
       [ 8, 33, 55, 26, 29, 35, 56, 55,  8, 82],
       [64,  8, 64, 29, 73, 79, 29, 19, 73, 73],
       [29,  8, 55, 12, 26,  5, 12, 82, 62, 48],
       [55, 75, 64,  8, 18,  5,  8, 64, 55, 12],
       [12, 26,  5, 12, 82, 62,  8, 75, 63, 79]], dtype=int32)

I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on sequences of 100 characters, num_steps = 100. For the next batch, we'll shift this window over by num_steps characters. In this way we can feed batches to the network, and the cell states will carry over from one batch to the next.


In [10]:
def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape
    
    n_batches = int(slice_size/num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
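As a usage sketch on toy arrays (the generator is repeated here so the example is self-contained), each yielded batch is a window of num_steps columns taken from every array:

```python
import numpy as np

def get_batch(arrs, num_steps):
    # Same generator as above: slide a window of num_steps columns
    # across each array in arrs, yielding one window per batch
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size / num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]

# Toy data: 2 rows of 6 columns, batched in windows of 3 steps
arr_x = np.arange(12).reshape(2, 6)
arr_y = arr_x + 1
batches = list(get_batch([arr_x, arr_y], num_steps=3))
```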

In [14]:
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
              learning_rate=0.001, grad_clip=5, sampling=False):
        
    # When sampling, feed one character at a time through the network
    if sampling:
        batch_size, num_steps = 1, 1

    tf.reset_default_graph()
    
    # Declare placeholders we'll feed into the graph
    with tf.name_scope('inputs'):
        inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
        x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
    
    with tf.name_scope('targets'):
        targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
        y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
        y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
    
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    
    # Build the RNN layers
    with tf.name_scope("RNN_cells"):
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
    
    with tf.name_scope("RNN_init_state"):
        initial_state = cell.zero_state(batch_size, tf.float32)

    # Run the data through the RNN layers
    with tf.name_scope("RNN_forward"):
        rnn_inputs = [tf.squeeze(i, axis=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
        outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
    
    final_state = state
    
    # Reshape output so it's a bunch of rows, one row for each cell output
    with tf.name_scope('sequence_reshape'):
        seq_output = tf.concat(outputs, axis=1, name='seq_output')
        output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
    
    # Now connect the RNN outputs to a softmax layer and calculate the cost
    with tf.name_scope('logits'):
        softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
                               name='softmax_w')
        softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
        logits = tf.matmul(output, softmax_w) + softmax_b
        tf.summary.histogram('softmax_w', softmax_w)
        tf.summary.histogram('softmax_b', softmax_b)

    with tf.name_scope('predictions'):
        preds = tf.nn.softmax(logits, name='predictions')
        tf.summary.histogram('predictions', preds)
    
    with tf.name_scope('cost'):
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
        cost = tf.reduce_mean(loss, name='cost')
        tf.summary.scalar('cost', cost)

    # Optimizer for training, using gradient clipping to control exploding gradients
    with tf.name_scope('train'):
        tvars = tf.trainable_variables()
        grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
        train_op = tf.train.AdamOptimizer(learning_rate)
        optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    merged = tf.summary.merge_all()
    
    # Export the nodes 
    export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
                    'keep_prob', 'cost', 'preds', 'optimizer', 'merged']
    Graph = namedtuple('Graph', export_nodes)
    local_dict = locals()
    graph = Graph(*[local_dict[each] for each in export_nodes])
    
    return graph
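The tf.one_hot step at the top of the graph can be mimicked in NumPy if you want to inspect what the network actually sees. This is a sketch using np.eye indexing, not the TensorFlow op itself:

```python
import numpy as np

def one_hot(indices, num_classes):
    # NumPy equivalent of tf.one_hot for integer index arrays:
    # each index selects a row of the identity matrix, adding a
    # trailing dimension of size num_classes
    return np.eye(num_classes)[indices]

# A (batch_size=2, num_steps=2) block of character IDs
x = np.array([[0, 2], [1, 0]])
encoded = one_hot(x, num_classes=3)  # shape (2, 2, 3)
```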

Hyperparameters

Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Making these larger will generally improve the network's performance, but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting; decrease the size of the network or decrease the dropout keep probability.


In [15]:
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001

Training

Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.


In [13]:
!mkdir -p checkpoints/anna

In [17]:
epochs = 10
save_every_n = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)

model = build_rnn(len(vocab), 
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=lstm_size,
                  num_layers=num_layers)

saver = tf.train.Saver(max_to_keep=100)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)
    test_writer = tf.summary.FileWriter('./logs/2/test')
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/anna20.ckpt')
    
    n_batches = int(train_x.shape[1]/num_steps)
    iterations = n_batches * epochs
    for e in range(epochs):
        
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
            iteration = e*n_batches + b
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: 0.5,
                    model.initial_state: new_state}
            summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost, 
                                                          model.final_state, model.optimizer], 
                                                          feed_dict=feed)
            loss += batch_loss
            end = time.time()
            print('Epoch {}/{} '.format(e+1, epochs),
                  'Iteration {}/{}'.format(iteration, iterations),
                  'Training loss: {:.4f}'.format(loss/b),
                  '{:.4f} sec/batch'.format((end-start)))
            
            train_writer.add_summary(summary, iteration)
        
            if (iteration%save_every_n == 0) or (iteration == iterations):
                # Check performance; note the dropout keep probability is set to 1
                val_loss = []
                new_state = sess.run(model.initial_state)
                for x, y in get_batch([val_x, val_y], num_steps):
                    feed = {model.inputs: x,
                            model.targets: y,
                            model.keep_prob: 1.,
                            model.initial_state: new_state}
                    summary, batch_loss, new_state = sess.run([model.merged, model.cost, 
                                                               model.final_state], feed_dict=feed)
                    val_loss.append(batch_loss)
                    
                test_writer.add_summary(summary, iteration)

                print('Validation loss:', np.mean(val_loss),
                      'Saving checkpoint!')
                #saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))


Epoch 1/10  Iteration 1/1780 Training loss: 4.4188 1.2876 sec/batch
Epoch 1/10  Iteration 2/1780 Training loss: 4.3775 0.1364 sec/batch
Epoch 1/10  Iteration 3/1780 Training loss: 4.2100 0.1310 sec/batch
Epoch 1/10  Iteration 4/1780 Training loss: 4.5256 0.1212 sec/batch
Epoch 1/10  Iteration 5/1780 Training loss: 4.4524 0.1271 sec/batch
Epoch 1/10  Iteration 6/1780 Training loss: 4.3496 0.1272 sec/batch
Epoch 1/10  Iteration 7/1780 Training loss: 4.2637 0.1260 sec/batch
Epoch 1/10  Iteration 8/1780 Training loss: 4.1856 0.1231 sec/batch
Epoch 1/10  Iteration 9/1780 Training loss: 4.1126 0.1210 sec/batch
Epoch 1/10  Iteration 10/1780 Training loss: 4.0469 0.1198 sec/batch
Epoch 1/10  Iteration 11/1780 Training loss: 3.9883 0.1211 sec/batch
Epoch 1/10  Iteration 12/1780 Training loss: 3.9390 0.1232 sec/batch
Epoch 1/10  Iteration 13/1780 Training loss: 3.8954 0.1352 sec/batch
Epoch 1/10  Iteration 14/1780 Training loss: 3.8584 0.1232 sec/batch
Epoch 1/10  Iteration 15/1780 Training loss: 3.8247 0.1217 sec/batch
Epoch 1/10  Iteration 16/1780 Training loss: 3.7941 0.1202 sec/batch
Epoch 1/10  Iteration 17/1780 Training loss: 3.7654 0.1205 sec/batch
Epoch 1/10  Iteration 18/1780 Training loss: 3.7406 0.1200 sec/batch
Epoch 1/10  Iteration 19/1780 Training loss: 3.7170 0.1227 sec/batch
Epoch 1/10  Iteration 20/1780 Training loss: 3.6936 0.1193 sec/batch
Epoch 1/10  Iteration 21/1780 Training loss: 3.6733 0.1201 sec/batch
Epoch 1/10  Iteration 22/1780 Training loss: 3.6542 0.1187 sec/batch
Epoch 1/10  Iteration 23/1780 Training loss: 3.6371 0.1194 sec/batch
Epoch 1/10  Iteration 24/1780 Training loss: 3.6212 0.1194 sec/batch
Epoch 1/10  Iteration 25/1780 Training loss: 3.6055 0.1203 sec/batch
Epoch 1/10  Iteration 26/1780 Training loss: 3.5918 0.1209 sec/batch
Epoch 1/10  Iteration 27/1780 Training loss: 3.5789 0.1189 sec/batch
Epoch 1/10  Iteration 28/1780 Training loss: 3.5657 0.1197 sec/batch
Epoch 1/10  Iteration 29/1780 Training loss: 3.5534 0.1242 sec/batch
Epoch 1/10  Iteration 30/1780 Training loss: 3.5425 0.1184 sec/batch
Epoch 1/10  Iteration 31/1780 Training loss: 3.5325 0.1204 sec/batch
Epoch 1/10  Iteration 32/1780 Training loss: 3.5224 0.1203 sec/batch
Epoch 1/10  Iteration 33/1780 Training loss: 3.5125 0.1236 sec/batch
Epoch 1/10  Iteration 34/1780 Training loss: 3.5037 0.1195 sec/batch
Epoch 1/10  Iteration 35/1780 Training loss: 3.4948 0.1202 sec/batch
Epoch 1/10  Iteration 36/1780 Training loss: 3.4867 0.1190 sec/batch
Epoch 1/10  Iteration 37/1780 Training loss: 3.4782 0.1226 sec/batch
Epoch 1/10  Iteration 38/1780 Training loss: 3.4702 0.1201 sec/batch
Epoch 1/10  Iteration 39/1780 Training loss: 3.4625 0.1223 sec/batch
Epoch 1/10  Iteration 40/1780 Training loss: 3.4553 0.1196 sec/batch
Epoch 1/10  Iteration 41/1780 Training loss: 3.4482 0.1200 sec/batch
Epoch 1/10  Iteration 42/1780 Training loss: 3.4415 0.1195 sec/batch
Epoch 1/10  Iteration 43/1780 Training loss: 3.4350 0.1209 sec/batch
Epoch 1/10  Iteration 44/1780 Training loss: 3.4287 0.1215 sec/batch
Epoch 1/10  Iteration 45/1780 Training loss: 3.4225 0.1255 sec/batch
Epoch 1/10  Iteration 46/1780 Training loss: 3.4170 0.1194 sec/batch
Epoch 1/10  Iteration 47/1780 Training loss: 3.4116 0.1194 sec/batch
Epoch 1/10  Iteration 48/1780 Training loss: 3.4067 0.1190 sec/batch
Epoch 1/10  Iteration 49/1780 Training loss: 3.4020 0.1215 sec/batch
Epoch 1/10  Iteration 50/1780 Training loss: 3.3972 0.1203 sec/batch
Epoch 1/10  Iteration 51/1780 Training loss: 3.3926 0.1199 sec/batch
Epoch 1/10  Iteration 52/1780 Training loss: 3.3878 0.1188 sec/batch
Epoch 1/10  Iteration 53/1780 Training loss: 3.3836 0.1214 sec/batch
Epoch 1/10  Iteration 54/1780 Training loss: 3.3791 0.1201 sec/batch
Epoch 1/10  Iteration 55/1780 Training loss: 3.3750 0.1199 sec/batch
Epoch 1/10  Iteration 56/1780 Training loss: 3.3707 0.1201 sec/batch
Epoch 1/10  Iteration 57/1780 Training loss: 3.3667 0.1234 sec/batch
Epoch 1/10  Iteration 58/1780 Training loss: 3.3630 0.1213 sec/batch
Epoch 1/10  Iteration 59/1780 Training loss: 3.3592 0.1229 sec/batch
Epoch 1/10  Iteration 60/1780 Training loss: 3.3557 0.1194 sec/batch
Epoch 1/10  Iteration 61/1780 Training loss: 3.3522 0.1205 sec/batch
Epoch 1/10  Iteration 62/1780 Training loss: 3.3493 0.1189 sec/batch
Epoch 1/10  Iteration 63/1780 Training loss: 3.3464 0.1201 sec/batch
Epoch 1/10  Iteration 64/1780 Training loss: 3.3429 0.1210 sec/batch
Epoch 1/10  Iteration 65/1780 Training loss: 3.3396 0.1213 sec/batch
Epoch 1/10  Iteration 66/1780 Training loss: 3.3368 0.1218 sec/batch
Epoch 1/10  Iteration 67/1780 Training loss: 3.3340 0.1202 sec/batch
Epoch 1/10  Iteration 68/1780 Training loss: 3.3306 0.1195 sec/batch
Epoch 1/10  Iteration 69/1780 Training loss: 3.3276 0.1225 sec/batch
Epoch 1/10  Iteration 70/1780 Training loss: 3.3249 0.1188 sec/batch
Epoch 1/10  Iteration 71/1780 Training loss: 3.3221 0.1208 sec/batch
Epoch 1/10  Iteration 72/1780 Training loss: 3.3197 0.1201 sec/batch
Epoch 1/10  Iteration 73/1780 Training loss: 3.3170 0.1206 sec/batch
Epoch 1/10  Iteration 74/1780 Training loss: 3.3145 0.1192 sec/batch
Epoch 1/10  Iteration 75/1780 Training loss: 3.3122 0.1233 sec/batch
Epoch 1/10  Iteration 76/1780 Training loss: 3.3099 0.1197 sec/batch
Epoch 1/10  Iteration 77/1780 Training loss: 3.3076 0.1204 sec/batch
Epoch 1/10  Iteration 78/1780 Training loss: 3.3053 0.1199 sec/batch
Epoch 1/10  Iteration 79/1780 Training loss: 3.3029 0.1232 sec/batch
Epoch 1/10  Iteration 80/1780 Training loss: 3.3004 0.1190 sec/batch
Epoch 1/10  Iteration 81/1780 Training loss: 3.2982 0.1201 sec/batch
Epoch 1/10  Iteration 82/1780 Training loss: 3.2961 0.1196 sec/batch
Epoch 1/10  Iteration 83/1780 Training loss: 3.2940 0.1213 sec/batch
Epoch 1/10  Iteration 84/1780 Training loss: 3.2919 0.1184 sec/batch
Epoch 1/10  Iteration 85/1780 Training loss: 3.2899 0.1199 sec/batch
Epoch 1/10  Iteration 86/1780 Training loss: 3.2881 0.1190 sec/batch
Epoch 1/10  Iteration 87/1780 Training loss: 3.2862 0.1201 sec/batch
Epoch 1/10  Iteration 88/1780 Training loss: 3.2843 0.1217 sec/batch
Epoch 1/10  Iteration 89/1780 Training loss: 3.2826 0.1199 sec/batch
Epoch 1/10  Iteration 90/1780 Training loss: 3.2809 0.1191 sec/batch
Epoch 1/10  Iteration 91/1780 Training loss: 3.2791 0.1204 sec/batch
Epoch 1/10  Iteration 92/1780 Training loss: 3.2773 0.1219 sec/batch
Epoch 1/10  Iteration 93/1780 Training loss: 3.2756 0.1192 sec/batch
Epoch 1/10  Iteration 94/1780 Training loss: 3.2738 0.1213 sec/batch
Epoch 1/10  Iteration 95/1780 Training loss: 3.2721 0.1207 sec/batch
Epoch 1/10  Iteration 96/1780 Training loss: 3.2703 0.1186 sec/batch
Epoch 1/10  Iteration 97/1780 Training loss: 3.2688 0.1207 sec/batch
Epoch 1/10  Iteration 98/1780 Training loss: 3.2670 0.1203 sec/batch
Epoch 1/10  Iteration 99/1780 Training loss: 3.2654 0.1201 sec/batch
Epoch 1/10  Iteration 100/1780 Training loss: 3.2637 0.1199 sec/batch
Validation loss: 3.05181 Saving checkpoint!
Epoch 1/10  Iteration 101/1780 Training loss: 3.2620 0.1184 sec/batch
Epoch 1/10  Iteration 102/1780 Training loss: 3.2603 0.1201 sec/batch
Epoch 1/10  Iteration 103/1780 Training loss: 3.2587 0.1201 sec/batch
Epoch 1/10  Iteration 104/1780 Training loss: 3.2569 0.1208 sec/batch
Epoch 1/10  Iteration 105/1780 Training loss: 3.2553 0.1187 sec/batch
Epoch 1/10  Iteration 106/1780 Training loss: 3.2536 0.1201 sec/batch
Epoch 1/10  Iteration 107/1780 Training loss: 3.2519 0.1227 sec/batch
Epoch 1/10  Iteration 108/1780 Training loss: 3.2502 0.1205 sec/batch
Epoch 1/10  Iteration 109/1780 Training loss: 3.2487 0.1224 sec/batch
Epoch 1/10  Iteration 110/1780 Training loss: 3.2469 0.1220 sec/batch
Epoch 1/10  Iteration 111/1780 Training loss: 3.2453 0.1191 sec/batch
Epoch 1/10  Iteration 112/1780 Training loss: 3.2437 0.1204 sec/batch
Epoch 1/10  Iteration 113/1780 Training loss: 3.2421 0.1191 sec/batch
Epoch 1/10  Iteration 114/1780 Training loss: 3.2404 0.1207 sec/batch
Epoch 1/10  Iteration 115/1780 Training loss: 3.2387 0.1202 sec/batch
Epoch 1/10  Iteration 116/1780 Training loss: 3.2371 0.1201 sec/batch
Epoch 1/10  Iteration 117/1780 Training loss: 3.2354 0.1195 sec/batch
Epoch 1/10  Iteration 118/1780 Training loss: 3.2340 0.1217 sec/batch
Epoch 1/10  Iteration 119/1780 Training loss: 3.2325 0.1211 sec/batch
Epoch 1/10  Iteration 120/1780 Training loss: 3.2309 0.1200 sec/batch
Epoch 1/10  Iteration 121/1780 Training loss: 3.2295 0.1187 sec/batch
Epoch 1/10  Iteration 122/1780 Training loss: 3.2280 0.1229 sec/batch
Epoch 1/10  Iteration 123/1780 Training loss: 3.2264 0.1189 sec/batch
Epoch 1/10  Iteration 124/1780 Training loss: 3.2249 0.1207 sec/batch
Epoch 1/10  Iteration 125/1780 Training loss: 3.2232 0.1194 sec/batch
Epoch 1/10  Iteration 126/1780 Training loss: 3.2214 0.1226 sec/batch
Epoch 1/10  Iteration 127/1780 Training loss: 3.2197 0.1201 sec/batch
Epoch 1/10  Iteration 128/1780 Training loss: 3.2181 0.1190 sec/batch
Epoch 1/10  Iteration 129/1780 Training loss: 3.2164 0.1223 sec/batch
Epoch 1/10  Iteration 130/1780 Training loss: 3.2148 0.1223 sec/batch
Epoch 1/10  Iteration 131/1780 Training loss: 3.2132 0.1215 sec/batch
Epoch 1/10  Iteration 132/1780 Training loss: 3.2114 0.1222 sec/batch
Epoch 1/10  Iteration 133/1780 Training loss: 3.2097 0.1211 sec/batch
Epoch 1/10  Iteration 134/1780 Training loss: 3.2079 0.1204 sec/batch
Epoch 1/10  Iteration 135/1780 Training loss: 3.2059 0.1228 sec/batch
Epoch 1/10  Iteration 136/1780 Training loss: 3.2039 0.1214 sec/batch
Epoch 1/10  Iteration 137/1780 Training loss: 3.2020 0.1199 sec/batch
Epoch 1/10  Iteration 138/1780 Training loss: 3.2000 0.1207 sec/batch
Epoch 1/10  Iteration 139/1780 Training loss: 3.1982 0.1205 sec/batch
Epoch 1/10  Iteration 140/1780 Training loss: 3.1961 0.1202 sec/batch
Epoch 1/10  Iteration 141/1780 Training loss: 3.1941 0.1209 sec/batch
Epoch 1/10  Iteration 142/1780 Training loss: 3.1921 0.1225 sec/batch
Epoch 1/10  Iteration 143/1780 Training loss: 3.1901 0.1191 sec/batch
Epoch 1/10  Iteration 144/1780 Training loss: 3.1880 0.1246 sec/batch
Epoch 1/10  Iteration 145/1780 Training loss: 3.1860 0.1200 sec/batch
Epoch 1/10  Iteration 146/1780 Training loss: 3.1840 0.1214 sec/batch
Epoch 1/10  Iteration 147/1780 Training loss: 3.1820 0.1289 sec/batch
Epoch 1/10  Iteration 148/1780 Training loss: 3.1800 0.1206 sec/batch
Epoch 1/10  Iteration 149/1780 Training loss: 3.1778 0.1210 sec/batch
Epoch 1/10  Iteration 150/1780 Training loss: 3.1756 0.1208 sec/batch
Epoch 1/10  Iteration 151/1780 Training loss: 3.1736 0.1197 sec/batch
Epoch 1/10  Iteration 152/1780 Training loss: 3.1716 0.1201 sec/batch
Epoch 1/10  Iteration 153/1780 Training loss: 3.1694 0.1216 sec/batch
Epoch 1/10  Iteration 154/1780 Training loss: 3.1671 0.1206 sec/batch
Epoch 1/10  Iteration 155/1780 Training loss: 3.1648 0.1193 sec/batch
Epoch 1/10  Iteration 156/1780 Training loss: 3.1624 0.1201 sec/batch
Epoch 1/10  Iteration 157/1780 Training loss: 3.1599 0.1191 sec/batch
Epoch 1/10  Iteration 158/1780 Training loss: 3.1574 0.1211 sec/batch
Epoch 1/10  Iteration 159/1780 Training loss: 3.1548 0.1318 sec/batch
Epoch 1/10  Iteration 160/1780 Training loss: 3.1523 0.1204 sec/batch
Epoch 1/10  Iteration 161/1780 Training loss: 3.1498 0.1213 sec/batch
Epoch 1/10  Iteration 162/1780 Training loss: 3.1471 0.1204 sec/batch
Epoch 1/10  Iteration 163/1780 Training loss: 3.1446 0.1221 sec/batch
Epoch 1/10  Iteration 164/1780 Training loss: 3.1430 0.1203 sec/batch
Epoch 1/10  Iteration 165/1780 Training loss: 3.1411 0.1189 sec/batch
Epoch 1/10  Iteration 166/1780 Training loss: 3.1390 0.1221 sec/batch
Epoch 1/10  Iteration 167/1780 Training loss: 3.1367 0.1196 sec/batch
Epoch 1/10  Iteration 168/1780 Training loss: 3.1346 0.1224 sec/batch
Epoch 1/10  Iteration 169/1780 Training loss: 3.1325 0.1187 sec/batch
Epoch 1/10  Iteration 170/1780 Training loss: 3.1301 0.1226 sec/batch
Epoch 1/10  Iteration 171/1780 Training loss: 3.1278 0.1188 sec/batch
Epoch 1/10  Iteration 172/1780 Training loss: 3.1258 0.1196 sec/batch
Epoch 1/10  Iteration 173/1780 Training loss: 3.1237 0.1192 sec/batch
Epoch 1/10  Iteration 174/1780 Training loss: 3.1215 0.1223 sec/batch
Epoch 1/10  Iteration 175/1780 Training loss: 3.1193 0.1186 sec/batch
Epoch 1/10  Iteration 176/1780 Training loss: 3.1179 0.1208 sec/batch
Epoch 1/10  Iteration 177/1780 Training loss: 3.1162 0.1187 sec/batch
Epoch 1/10  Iteration 178/1780 Training loss: 3.1137 0.1232 sec/batch
Epoch 2/10  Iteration 179/1780 Training loss: 2.6953 0.1210 sec/batch
Epoch 2/10  Iteration 180/1780 Training loss: 2.6538 0.1232 sec/batch
Epoch 2/10  Iteration 181/1780 Training loss: 2.6371 0.1197 sec/batch
Epoch 2/10  Iteration 182/1780 Training loss: 2.6328 0.1235 sec/batch
Epoch 2/10  Iteration 183/1780 Training loss: 2.6298 0.1185 sec/batch
Epoch 2/10  Iteration 184/1780 Training loss: 2.6251 0.1227 sec/batch
Epoch 2/10  Iteration 185/1780 Training loss: 2.6222 0.1192 sec/batch
Epoch 2/10  Iteration 186/1780 Training loss: 2.6206 0.1228 sec/batch
Epoch 2/10  Iteration 187/1780 Training loss: 2.6176 0.1232 sec/batch
Epoch 2/10  Iteration 188/1780 Training loss: 2.6138 0.1206 sec/batch
Epoch 2/10  Iteration 189/1780 Training loss: 2.6088 0.1204 sec/batch
Epoch 2/10  Iteration 190/1780 Training loss: 2.6067 0.1209 sec/batch
Epoch 2/10  Iteration 191/1780 Training loss: 2.6035 0.1196 sec/batch
Epoch 2/10  Iteration 192/1780 Training loss: 2.6023 0.1203 sec/batch
Epoch 2/10  Iteration 193/1780 Training loss: 2.5985 0.1229 sec/batch
Epoch 2/10  Iteration 194/1780 Training loss: 2.5957 0.1262 sec/batch
Epoch 2/10  Iteration 195/1780 Training loss: 2.5928 0.1223 sec/batch
Epoch 2/10  Iteration 196/1780 Training loss: 2.5922 0.1223 sec/batch
Epoch 2/10  Iteration 197/1780 Training loss: 2.5893 0.1192 sec/batch
Epoch 2/10  Iteration 198/1780 Training loss: 2.5853 0.1222 sec/batch
Epoch 2/10  Iteration 199/1780 Training loss: 2.5819 0.1228 sec/batch
Epoch 2/10  Iteration 200/1780 Training loss: 2.5808 0.1213 sec/batch
Validation loss: 2.43305 Saving checkpoint!
Epoch 2/10  Iteration 201/1780 Training loss: 2.5788 0.1208 sec/batch
Epoch 2/10  Iteration 202/1780 Training loss: 2.5758 0.1206 sec/batch
Epoch 2/10  Iteration 203/1780 Training loss: 2.5726 0.1197 sec/batch
Epoch 2/10  Iteration 204/1780 Training loss: 2.5701 0.1203 sec/batch
Epoch 2/10  Iteration 205/1780 Training loss: 2.5674 0.1191 sec/batch
Epoch 2/10  Iteration 206/1780 Training loss: 2.5649 0.1218 sec/batch
Epoch 2/10  Iteration 207/1780 Training loss: 2.5627 0.1205 sec/batch
Epoch 2/10  Iteration 208/1780 Training loss: 2.5605 0.1194 sec/batch
Epoch 2/10  Iteration 209/1780 Training loss: 2.5589 0.1231 sec/batch
Epoch 2/10  Iteration 210/1780 Training loss: 2.5562 0.1208 sec/batch
Epoch 2/10  Iteration 211/1780 Training loss: 2.5533 0.1237 sec/batch
Epoch 2/10  Iteration 212/1780 Training loss: 2.5509 0.1243 sec/batch
Epoch 2/10  Iteration 213/1780 Training loss: 2.5486 0.1192 sec/batch
Epoch 2/10  Iteration 214/1780 Training loss: 2.5464 0.1218 sec/batch
Epoch 2/10  Iteration 215/1780 Training loss: 2.5440 0.1228 sec/batch
Epoch 2/10  Iteration 216/1780 Training loss: 2.5412 0.1224 sec/batch
Epoch 2/10  Iteration 217/1780 Training loss: 2.5388 0.1195 sec/batch
Epoch 2/10  Iteration 218/1780 Training loss: 2.5362 0.1229 sec/batch
Epoch 2/10  Iteration 219/1780 Training loss: 2.5336 0.1219 sec/batch
Epoch 2/10  Iteration 220/1780 Training loss: 2.5310 0.1241 sec/batch
Epoch 2/10  Iteration 221/1780 Training loss: 2.5286 0.1194 sec/batch
Epoch 2/10  Iteration 222/1780 Training loss: 2.5260 0.1209 sec/batch
Epoch 2/10  Iteration 223/1780 Training loss: 2.5238 0.1195 sec/batch
Epoch 2/10  Iteration 224/1780 Training loss: 2.5209 0.1212 sec/batch
Epoch 2/10  Iteration 225/1780 Training loss: 2.5193 0.1191 sec/batch
Epoch 2/10  Iteration 226/1780 Training loss: 2.5171 0.1196 sec/batch
Epoch 2/10  Iteration 227/1780 Training loss: 2.5150 0.1202 sec/batch
Epoch 2/10  Iteration 228/1780 Training loss: 2.5135 0.1234 sec/batch
Epoch 2/10  Iteration 229/1780 Training loss: 2.5115 0.1213 sec/batch
Epoch 2/10  Iteration 230/1780 Training loss: 2.5097 0.1203 sec/batch
Epoch 2/10  Iteration 231/1780 Training loss: 2.5077 0.1210 sec/batch
Epoch 2/10  Iteration 232/1780 Training loss: 2.5057 0.1202 sec/batch
Epoch 2/10  Iteration 233/1780 Training loss: 2.5035 0.1194 sec/batch
Epoch 2/10  Iteration 234/1780 Training loss: 2.5019 0.1208 sec/batch
Epoch 2/10  Iteration 235/1780 Training loss: 2.5001 0.1209 sec/batch
Epoch 2/10  Iteration 236/1780 Training loss: 2.4982 0.1326 sec/batch
Epoch 2/10  Iteration 237/1780 Training loss: 2.4963 0.1190 sec/batch
Epoch 2/10  Iteration 238/1780 Training loss: 2.4948 0.1222 sec/batch
Epoch 2/10  Iteration 239/1780 Training loss: 2.4930 0.1195 sec/batch
Epoch 2/10  Iteration 240/1780 Training loss: 2.4915 0.1190 sec/batch
Epoch 2/10  Iteration 241/1780 Training loss: 2.4902 0.1215 sec/batch
Epoch 2/10  Iteration 242/1780 Training loss: 2.4885 0.1208 sec/batch
Epoch 2/10  Iteration 243/1780 Training loss: 2.4867 0.1213 sec/batch
Epoch 2/10  Iteration 244/1780 Training loss: 2.4853 0.1208 sec/batch
Epoch 2/10  Iteration 245/1780 Training loss: 2.4836 0.1193 sec/batch
Epoch 2/10  Iteration 246/1780 Training loss: 2.4816 0.1196 sec/batch
Epoch 2/10  Iteration 247/1780 Training loss: 2.4796 0.1220 sec/batch
Epoch 2/10  Iteration 248/1780 Training loss: 2.4781 0.1227 sec/batch
Epoch 2/10  Iteration 249/1780 Training loss: 2.4767 0.1215 sec/batch
Epoch 2/10  Iteration 250/1780 Training loss: 2.4754 0.1240 sec/batch
Epoch 2/10  Iteration 251/1780 Training loss: 2.4740 0.1215 sec/batch
Epoch 2/10  Iteration 252/1780 Training loss: 2.4723 0.1198 sec/batch
Epoch 2/10  Iteration 253/1780 Training loss: 2.4707 0.1199 sec/batch
Epoch 2/10  Iteration 254/1780 Training loss: 2.4696 0.1210 sec/batch
Epoch 2/10  Iteration 255/1780 Training loss: 2.4681 0.1215 sec/batch
Epoch 2/10  Iteration 256/1780 Training loss: 2.4667 0.1201 sec/batch
...
Epoch 2/10  Iteration 299/1780 Training loss: 2.4068 0.1187 sec/batch
Epoch 2/10  Iteration 300/1780 Training loss: 2.4054 0.1198 sec/batch
Validation loss: 2.16109 Saving checkpoint!
Epoch 2/10  Iteration 301/1780 Training loss: 2.4042 0.1188 sec/batch
Epoch 2/10  Iteration 302/1780 Training loss: 2.4030 0.1222 sec/batch
...
Epoch 2/10  Iteration 355/1780 Training loss: 2.3428 0.1294 sec/batch
Epoch 2/10  Iteration 356/1780 Training loss: 2.3417 0.1208 sec/batch
Epoch 3/10  Iteration 357/1780 Training loss: 2.2072 0.1212 sec/batch
Epoch 3/10  Iteration 358/1780 Training loss: 2.1648 0.1217 sec/batch
...
Epoch 3/10  Iteration 399/1780 Training loss: 2.1106 0.1202 sec/batch
Epoch 3/10  Iteration 400/1780 Training loss: 2.1093 0.1228 sec/batch
Validation loss: 1.97191 Saving checkpoint!
Epoch 3/10  Iteration 401/1780 Training loss: 2.1092 0.1196 sec/batch
Epoch 3/10  Iteration 402/1780 Training loss: 2.1071 0.1222 sec/batch
...
Epoch 3/10  Iteration 499/1780 Training loss: 2.0343 0.1196 sec/batch
Epoch 3/10  Iteration 500/1780 Training loss: 2.0337 0.1215 sec/batch
Validation loss: 1.84066 Saving checkpoint!
Epoch 3/10  Iteration 501/1780 Training loss: 2.0332 0.1197 sec/batch
Epoch 3/10  Iteration 502/1780 Training loss: 2.0326 0.1207 sec/batch
...
Epoch 3/10  Iteration 533/1780 Training loss: 2.0146 0.1200 sec/batch
Epoch 3/10  Iteration 534/1780 Training loss: 2.0141 0.1202 sec/batch
Epoch 4/10  Iteration 535/1780 Training loss: 1.9760 0.1208 sec/batch
Epoch 4/10  Iteration 536/1780 Training loss: 1.9361 0.1223 sec/batch
...
Epoch 4/10  Iteration 599/1780 Training loss: 1.8720 0.1207 sec/batch
Epoch 4/10  Iteration 600/1780 Training loss: 1.8718 0.1195 sec/batch
Validation loss: 1.73093 Saving checkpoint!
Epoch 4/10  Iteration 601/1780 Training loss: 1.8722 0.1200 sec/batch
Epoch 4/10  Iteration 602/1780 Training loss: 1.8713 0.1218 sec/batch
...
Epoch 4/10  Iteration 699/1780 Training loss: 1.8248 0.1210 sec/batch
Epoch 4/10  Iteration 700/1780 Training loss: 1.8243 0.1202 sec/batch
Validation loss: 1.65231 Saving checkpoint!
Epoch 4/10  Iteration 701/1780 Training loss: 1.8245 0.1202 sec/batch
Epoch 4/10  Iteration 702/1780 Training loss: 1.8245 0.1223 sec/batch
...
Epoch 4/10  Iteration 711/1780 Training loss: 1.8208 0.1219 sec/batch
Epoch 4/10  Iteration 712/1780 Training loss: 1.8206 0.1209 sec/batch
Epoch 5/10  Iteration 713/1780 Training loss: 1.8258 0.1203 sec/batch
Epoch 5/10  Iteration 714/1780 Training loss: 1.7858 0.1202 sec/batch
...
Epoch 5/10  Iteration 799/1780 Training loss: 1.7191 0.1198 sec/batch
Epoch 5/10  Iteration 800/1780 Training loss: 1.7186 0.1205 sec/batch
Validation loss: 1.57561 Saving checkpoint!
Epoch 5/10  Iteration 801/1780 Training loss: 1.7186 0.1210 sec/batch
Epoch 5/10  Iteration 802/1780 Training loss: 1.7185 0.1207 sec/batch
...
Epoch 5/10  Iteration 889/1780 Training loss: 1.6885 0.1205 sec/batch
Epoch 5/10  Iteration 890/1780 Training loss: 1.6883 0.1202 sec/batch
Epoch 6/10  Iteration 891/1780 Training loss: 1.7285 0.1223 sec/batch
Epoch 6/10  Iteration 892/1780 Training loss: 1.6840 0.1218 sec/batch
Epoch 6/10  Iteration 893/1780 Training loss: 1.6686 0.1203 sec/batch
Epoch 6/10  Iteration 894/1780 Training loss: 1.6615 0.1204 sec/batch
Epoch 6/10  Iteration 895/1780 Training loss: 1.6549 0.1203 sec/batch
Epoch 6/10  Iteration 896/1780 Training loss: 1.6431 0.1222 sec/batch
Epoch 6/10  Iteration 897/1780 Training loss: 1.6436 0.1213 sec/batch
Epoch 6/10  Iteration 898/1780 Training loss: 1.6423 0.1211 sec/batch
Epoch 6/10  Iteration 899/1780 Training loss: 1.6439 0.1205 sec/batch
Epoch 6/10  Iteration 900/1780 Training loss: 1.6417 0.1204 sec/batch
Validation loss: 1.51374 Saving checkpoint!
Epoch 6/10  Iteration 901/1780 Training loss: 1.6435 0.1198 sec/batch
Epoch 6/10  Iteration 902/1780 Training loss: 1.6409 0.1217 sec/batch
...
Epoch 6/10  Iteration 999/1780 Training loss: 1.6079 0.1252 sec/batch
Epoch 6/10  Iteration 1000/1780 Training loss: 1.6078 0.1274 sec/batch
Validation loss: 1.46721 Saving checkpoint!
Epoch 6/10  Iteration 1001/1780 Training loss: 1.6081 0.1217 sec/batch
Epoch 6/10  Iteration 1002/1780 Training loss: 1.6078 0.1249 sec/batch
...
Epoch 6/10  Iteration 1067/1780 Training loss: 1.5935 0.1240 sec/batch
Epoch 6/10  Iteration 1068/1780 Training loss: 1.5934 0.1240 sec/batch
Epoch 7/10  Iteration 1069/1780 Training loss: 1.6372 0.1212 sec/batch
Epoch 7/10  Iteration 1070/1780 Training loss: 1.5995 0.1223 sec/batch
...
Epoch 7/10  Iteration 1099/1780 Training loss: 1.5511 0.1266 sec/batch
Epoch 7/10  Iteration 1100/1780 Training loss: 1.5501 0.1268 sec/batch
Validation loss: 1.43194 Saving checkpoint!
Epoch 7/10  Iteration 1101/1780 Training loss: 1.5527 0.1234 sec/batch
Epoch 7/10  Iteration 1102/1780 Training loss: 1.5528 0.1254 sec/batch
...
Epoch 7/10  Iteration 1199/1780 Training loss: 1.5197 0.1251 sec/batch
Epoch 7/10  Iteration 1200/1780 Training loss: 1.5192 0.1241 sec/batch
Validation loss: 1.37934 Saving checkpoint!
Epoch 7/10  Iteration 1201/1780 Training loss: 1.5201 0.1215 sec/batch
...
Epoch 7/10  Iteration 1246/1780 Training loss: 1.5141 0.1222 sec/batch
Epoch 8/10  Iteration 1247/1780 Training loss: 1.5910 0.1253 sec/batch
...
Epoch 8/10  Iteration 1300/1780 Training loss: 1.4664 0.1240 sec/batch
Validation loss: 1.34587 Saving checkpoint!
Epoch 8/10  Iteration 1301/1780 Training loss: 1.4681 0.1217 sec/batch
...
Epoch 8/10  Iteration 1400/1780 Training loss: 1.4504 0.1224 sec/batch
Validation loss: 1.3216 Saving checkpoint!
Epoch 8/10  Iteration 1401/1780 Training loss: 1.4511 0.1220 sec/batch
...
Epoch 8/10  Iteration 1424/1780 Training loss: 1.4494 0.1255 sec/batch
Epoch 9/10  Iteration 1425/1780 Training loss: 1.5353 0.1220 sec/batch
...
Epoch 9/10  Iteration 1500/1780 Training loss: 1.4132 0.1231 sec/batch
Validation loss: 1.29403 Saving checkpoint!
Epoch 9/10  Iteration 1501/1780 Training loss: 1.4146 0.1211 sec/batch
...
Epoch 9/10  Iteration 1600/1780 Training loss: 1.3999 0.1200 sec/batch
Validation loss: 1.27288 Saving checkpoint!
Epoch 9/10  Iteration 1601/1780 Training loss: 1.4005 0.1202 sec/batch
Epoch 9/10  Iteration 1602/1780 Training loss: 1.4007 0.1246 sec/batch
Epoch 10/10  Iteration 1603/1780 Training loss: 1.5037 0.1222 sec/batch
Epoch 10/10  Iteration 1604/1780 Training loss: 1.4527 0.1217 sec/batch
Epoch 10/10  Iteration 1605/1780 Training loss: 1.4277 0.1252 sec/batch
Epoch 10/10  Iteration 1606/1780 Training loss: 1.4221 0.1206 sec/batch
Epoch 10/10  Iteration 1607/1780 Training loss: 1.4116 0.1220 sec/batch
Epoch 10/10  Iteration 1608/1780 Training loss: 1.3979 0.1219 sec/batch
Epoch 10/10  Iteration 1609/1780 Training loss: 1.3973 0.1243 sec/batch
Epoch 10/10  Iteration 1610/1780 Training loss: 1.3954 0.1233 sec/batch
Epoch 10/10  Iteration 1611/1780 Training loss: 1.3955 0.1269 sec/batch
Epoch 10/10  Iteration 1612/1780 Training loss: 1.3939 0.1219 sec/batch
...
Epoch 10/10  Iteration 1700/1780 Training loss: 1.3691 0.1220 sec/batch
Validation loss: 1.25628 Saving checkpoint!
Epoch 10/10  Iteration 1701/1780 Training loss: 1.3703 0.1237 sec/batch
...
Epoch 10/10  Iteration 1778/1780 Training loss: 1.3627 0.1205 sec/batch
Epoch 10/10  Iteration 1779/1780 Training loss: 1.3625 0.1228 sec/batch
Epoch 10/10  Iteration 1780/1780 Training loss: 1.3626 0.1239 sec/batch
Validation loss: 1.24267 Saving checkpoint!

In [35]:
tf.train.get_checkpoint_state('checkpoints/anna')


Out[35]:
model_checkpoint_path: "checkpoints/anna/i3560_l512_1.122.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i200_l512_2.432.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i400_l512_1.980.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i600_l512_1.750.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i800_l512_1.595.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1000_l512_1.484.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1200_l512_1.407.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1400_l512_1.349.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1600_l512_1.292.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1800_l512_1.255.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2000_l512_1.224.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2200_l512_1.204.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2400_l512_1.187.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2600_l512_1.172.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2800_l512_1.160.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3000_l512_1.148.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3200_l512_1.137.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3400_l512_1.129.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3560_l512_1.122.ckpt"

Sampling

Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. We then feed that prediction back in to get the character after that, and keep repeating the process to generate arbitrarily long text. I also included some functionality to prime the network with a string, building up the hidden state from that text first.
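The generation loop described above can be sketched independently of TensorFlow. Here `predict_next` is a hypothetical stand-in for one step of the trained RNN (it just returns a made-up distribution); the structure of the loop is what matters: prime the state with the seed characters, then repeatedly feed the last sampled character back in.

```python
import numpy as np

def predict_next(state, char_id, vocab_size=83):
    # Hypothetical stand-in for one RNN step: returns (new_state, preds).
    rng = np.random.default_rng(char_id + state)
    preds = rng.random(vocab_size)
    return state + 1, preds / preds.sum()

def generate(prime_ids, n_samples, vocab_size=83):
    state = 0
    samples = list(prime_ids)
    # Prime the state by running the network over the seed characters.
    for c in prime_ids:
        state, preds = predict_next(state, c, vocab_size)
    # Then repeatedly feed the last prediction back in as the next input.
    for _ in range(n_samples):
        c = int(np.argmax(preds))  # greedy choice, for the sketch
        samples.append(c)
        state, preds = predict_next(state, c, vocab_size)
    return samples

out = generate([1, 2, 3], 5)
# out keeps the 3 prime ids and appends 5 generated ids
```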

The network gives us a predicted probability for each character. To reduce noise and make the output a little less random, I'm only going to choose the new character from the top N most likely characters.


In [17]:
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    # Zero out everything except the top_n most likely characters
    p[np.argsort(p)[:-top_n]] = 0
    # Renormalize so the remaining probabilities sum to 1
    p = p / np.sum(p)
    # Sample the next character from the reduced distribution
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
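To see what this does, here's a quick sketch on a made-up distribution over a 10-character vocabulary: with `top_n=3`, every character outside the three most likely gets probability zero, so the sampled index is always one of those three.

```python
import numpy as np

# Made-up prediction distribution over a 10-character vocabulary
p = np.array([0.01, 0.3, 0.02, 0.25, 0.05, 0.2, 0.02, 0.1, 0.03, 0.02])

top_n = 3
p[np.argsort(p)[:-top_n]] = 0   # zero all but the 3 highest-probability entries
p = p / np.sum(p)               # renormalize

c = np.random.choice(10, 1, p=p)[0]
# c is always one of the three most likely indices: 1, 3, or 5
```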

In [41]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state], 
                                         feed_dict=feed)

        c = pick_top_n(preds, vocab_size)
        samples.append(int_to_vocab[c])

        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state], 
                                         feed_dict=feed)

            c = pick_top_n(preds, vocab_size)
            samples.append(int_to_vocab[c])
        
    return ''.join(samples)

In [44]:
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)


Farlathit that if had so
like it that it were. He could not trouble to his wife, and there was
anything in them of the side of his weaky in the creature at his forteren
to him.

"What is it? I can't bread to those," said Stepan Arkadyevitch. "It's not
my children, and there is an almost this arm, true it mays already,
and tell you what I have say to you, and was not looking at the peasant,
why is, I don't know him out, and she doesn't speak to me immediately, as
you would say the countess and the more frest an angelembre, and time and
things's silent, but I was not in my stand that is in my head. But if he
say, and was so feeling with his soul. A child--in his soul of his
soul of his soul. He should not see that any of that sense of. Here he
had not been so composed and to speak for as in a whole picture, but
all the setting and her excellent and society, who had been delighted
and see to anywing had been being troed to thousand words on them,
we liked him.

That set in her money at the table, he came into the party. The capable
of his she could not be as an old composure.

"That's all something there will be down becime by throe is
such a silent, as in a countess, I should state it out and divorct.
The discussion is not for me. I was that something was simply they are
all three manshess of a sensitions of mind it all."

"No," he thought, shouted and lifting his soul. "While it might see your
honser and she, I could burst. And I had been a midelity. And I had a
marnief are through the countess," he said, looking at him, a chosing
which they had been carried out and still solied, and there was a sen that
was to be completely, and that this matter of all the seconds of it, and
a concipation were to her husband, who came up and conscaously, that he
was not the station. All his fourse she was always at the country,,
to speak oft, and though they were to hear the delightful throom and
whether they came towards the morning, and his living and a coller and
hold--the children. 

In [43]:
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Farnt him oste wha sorind thans tout thint asd an sesand an hires on thime sind thit aled, ban thand and out hore as the ter hos ton ho te that, was tis tart al the hand sostint him sore an tit an son thes, win he se ther san ther hher tas tarereng,.

Anl at an ades in ond hesiln, ad hhe torers teans, wast tar arering tho this sos alten sorer has hhas an siton ther him he had sin he ard ate te anling the sosin her ans and
arins asd and ther ale te tot an tand tanginge wath and ho ald, so sot th asend sat hare sother horesinnd, he hesense wing ante her so tith tir sherinn, anded and to the toul anderin he sorit he torsith she se atere an ting ot hand and thit hhe so the te wile har
ens ont in the sersise, and we he seres tar aterer, to ato tat or has he he wan ton here won and sen heren he sosering, to to theer oo adent har herere the wosh oute, was serild ward tous hed astend..

I's sint on alt in har tor tit her asd hade shithans ored he talereng an soredendere tim tot hees. Tise sor and 

In [46]:
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Fard as astice her said he celatice of to seress in the raice, and to be the some and sere allats to that said to that the sark and a cast a the wither ald the pacinesse of her had astition, he said to the sount as she west at hissele. Af the cond it he was a fact onthis astisarianing.


"Or a ton to to be that's a more at aspestale as the sont of anstiring as
thours and trey.

The same wo dangring the
raterst, who sore and somethy had ast out an of his book. "We had's beane were that, and a morted a thay he had to tere. Then to
her homent andertersed his his ancouted to the pirsted, the soution for of the pirsice inthirgest and stenciol, with the hard and and
a colrice of to be oneres,
the song to this anderssad.
The could ounterss the said to serom of
soment a carsed of sheres of she
torded
har and want in their of hould, but
her told in that in he tad a the same to her. Serghing an her has and with the seed, and the camt ont his about of the
sail, the her then all houg ant or to hus to 

In [47]:
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Farrat, his felt has at it.

"When the pose ther hor exceed
to his sheant was," weat a sime of his sounsed. The coment and the facily that which had began terede a marilicaly whice whether the pose of his hand, at she was alligated herself the same on she had to
taiking to his forthing and streath how to hand
began in a lang at some at it, this he cholded not set all her. "Wo love that is setthing. Him anstering as seen that."

"Yes in the man that say the mare a crances is it?" said Sergazy Ivancatching. "You doon think were somether is ifficult of a mone of
though the most at the countes that the
mean on the come to say the most, to
his feesing of
a man she, whilo he
sained and well, that he would still at to said. He wind at his for the sore in the most
of hoss and almoved to see him. They have betine the sumper into at he his stire, and what he was that at the so steate of the
sound, and shin should have a geest of shall feet on the conderation to she had been at that imporsing the dre