In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch. I also drew on information from r2rt and from Sherjil Ozair's implementation on GitHub. Below is the general architecture of the character-wise RNN.
In [1]:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
First we'll load the text file and convert it into integers for our network to use.
In [2]:
with open('anna.txt', 'r') as f:
    text = f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
In [3]:
text[:100]
Out[3]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'
In [4]:
chars[:100]
Out[4]:
array([71, 19, 45, 77, 52, 26, 11, 56, 43, 12, 12, 12, 42, 45, 77, 77, 79,
56, 76, 45, 21, 38, 10, 38, 26, 66, 56, 45, 11, 26, 56, 45, 10, 10,
56, 45, 10, 38, 64, 26, 48, 56, 26, 24, 26, 11, 79, 56, 31, 70, 19,
45, 77, 77, 79, 56, 76, 45, 21, 38, 10, 79, 56, 38, 66, 56, 31, 70,
19, 45, 77, 77, 79, 56, 38, 70, 56, 38, 52, 66, 56, 82, 75, 70, 12,
75, 45, 79, 54, 12, 12, 3, 24, 26, 11, 79, 52, 19, 38, 70], dtype=int32)
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword argument. This will keep 90% of the batches in the training set, and the other 10% in the validation set.
In [5]:
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    """
    Split character data into training and validation sets, inputs and targets for each set.

    Arguments
    ---------
    chars: character array
    batch_size: Number of sequences per batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set

    Returns train_x, train_y, val_x, val_y
    """
    slice_size = batch_size * num_steps
    n_batches = int(len(chars) / slice_size)

    # Drop the last few characters to make only full batches
    x = chars[: n_batches*slice_size]
    y = chars[1: n_batches*slice_size + 1]

    # Split the data into batch_size slices, then stack them into a 2D matrix
    x = np.stack(np.split(x, batch_size))
    y = np.stack(np.split(y, batch_size))

    # Now x and y are arrays with dimensions batch_size x n_batches*num_steps

    # Split into training and validation sets, keep the first split_frac batches for training
    split_idx = int(n_batches*split_frac)
    train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
    val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]

    return train_x, train_y, val_x, val_y
In [6]:
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
In [7]:
train_x.shape
Out[7]:
(10, 178400)
In [8]:
train_x[:,:10]
Out[8]:
array([[71, 19, 45, 77, 52, 26, 11, 56, 43, 12],
[53, 70, 22, 56, 19, 26, 56, 21, 82, 24],
[56, 68, 45, 52, 68, 19, 38, 70, 49, 56],
[82, 52, 19, 26, 11, 56, 75, 82, 31, 10],
[56, 52, 19, 26, 56, 10, 45, 70, 22, 29],
[56, 16, 19, 11, 82, 31, 49, 19, 56, 10],
[52, 56, 52, 82, 12, 22, 82, 54, 12, 12],
[82, 56, 19, 26, 11, 66, 26, 10, 76, 78],
[19, 45, 52, 56, 38, 66, 56, 52, 19, 26],
[26, 11, 66, 26, 10, 76, 56, 45, 70, 22]], dtype=int32)
I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
In [9]:
def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape

    n_batches = int(slice_size/num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
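As a quick sanity check, we can peek at the first window get_batch yields and confirm each batch has shape batch_size x num_steps. This is just a sketch using the train_x and train_y arrays created above:

# Sketch: grab the first training batch and check its shape
x, y = next(get_batch([train_x, train_y], 200))
print(x.shape, y.shape)   # expected: (10, 200) (10, 200)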
In [10]:
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
              learning_rate=0.001, grad_clip=5, sampling=False):
    # When using the network for sampling later, we'll pass in one character at a time
    if sampling:
        batch_size, num_steps = 1, 1

    tf.reset_default_graph()

    # Declare placeholders we'll feed into the graph
    with tf.name_scope('inputs'):
        inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
        x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')

    with tf.name_scope('targets'):
        targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
        y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
        y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])

    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    # Build the RNN layers
    with tf.name_scope("RNN_cells"):
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)

    with tf.name_scope("RNN_init_state"):
        initial_state = cell.zero_state(batch_size, tf.float32)

    # Run the data through the RNN layers
    with tf.name_scope("RNN_forward"):
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)

    final_state = state

    # Reshape output so it's a bunch of rows, one row for each cell output
    with tf.name_scope('sequence_reshape'):
        seq_output = tf.concat(outputs, axis=1, name='seq_output')
        output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')

    # Now connect the RNN outputs to a softmax layer and calculate the cost
    with tf.name_scope('logits'):
        softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
                                name='softmax_w')
        softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
        logits = tf.matmul(output, softmax_w) + softmax_b
        tf.summary.histogram('softmax_w', softmax_w)
        tf.summary.histogram('softmax_b', softmax_b)

    with tf.name_scope('predictions'):
        preds = tf.nn.softmax(logits, name='predictions')
        tf.summary.histogram('predictions', preds)

    with tf.name_scope('cost'):
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
        cost = tf.reduce_mean(loss, name='cost')
        tf.summary.scalar('cost', cost)

    # Optimizer for training, using gradient clipping to control exploding gradients
    with tf.name_scope('train'):
        tvars = tf.trainable_variables()
        grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
        train_op = tf.train.AdamOptimizer(learning_rate)
        optimizer = train_op.apply_gradients(zip(grads, tvars))

    merged = tf.summary.merge_all()

    # Export the nodes
    export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
                    'keep_prob', 'cost', 'preds', 'optimizer', 'merged']
    Graph = namedtuple('Graph', export_nodes)
    local_dict = locals()
    graph = Graph(*[local_dict[each] for each in export_nodes])

    return graph
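One portability note: on some newer TensorFlow 1.x releases, reusing the same wrapped cell object for every layer via [drop] * num_layers can raise a variable-reuse error. If you hit that, a common workaround is to build a fresh cell per layer. This is only a sketch of a drop-in replacement for the cell construction inside the RNN_cells scope, not part of the original code:

def build_cell(lstm_size, keep_prob):
    # Create one LSTM cell wrapped in dropout; called once per layer
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

cell = tf.contrib.rnn.MultiRNNCell(
    [build_cell(lstm_size, keep_prob) for _ in range(num_layers)])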
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
In [11]:
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
In [12]:
!mkdir -p checkpoints/anna
In [13]:
def train(model, epochs, file_writer):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Use the line below to load a checkpoint and resume training
        #saver.restore(sess, 'checkpoints/anna20.ckpt')

        n_batches = int(train_x.shape[1]/num_steps)
        iterations = n_batches * epochs
        for e in range(epochs):
            # Train network
            new_state = sess.run(model.initial_state)
            loss = 0
            for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
                iteration = e*n_batches + b
                start = time.time()
                feed = {model.inputs: x,
                        model.targets: y,
                        model.keep_prob: 0.5,
                        model.initial_state: new_state}
                summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost,
                                                              model.final_state, model.optimizer],
                                                             feed_dict=feed)
                loss += batch_loss
                end = time.time()
                print('Epoch {}/{} '.format(e+1, epochs),
                      'Iteration {}/{}'.format(iteration, iterations),
                      'Training loss: {:.4f}'.format(loss/b),
                      '{:.4f} sec/batch'.format((end-start)))

                file_writer.add_summary(summary, iteration)
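The loop above only logs the training loss to TensorBoard. A minimal sketch of the validation-and-checkpoint step mentioned earlier could go at the bottom of the batch loop; save_every_n, a tf.train.Saver() named saver, and the val_x/val_y arrays are assumptions here, not variables defined in the function above:

# Sketch: run every save_every_n iterations inside the batch loop (assumed names)
if iteration % save_every_n == 0:
    # Validation pass: run over the validation set with dropout turned off
    val_state = sess.run(model.initial_state)
    val_losses = []
    for vx, vy in get_batch([val_x, val_y], num_steps):
        feed = {model.inputs: vx,
                model.targets: vy,
                model.keep_prob: 1.0,
                model.initial_state: val_state}
        val_loss, val_state = sess.run([model.cost, model.final_state], feed_dict=feed)
        val_losses.append(val_loss)
    print('Validation loss:', np.mean(val_losses))
    # Save a checkpoint so training can be resumed later
    saver.save(sess, 'checkpoints/anna/i{}_l{}.ckpt'.format(iteration, lstm_size))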
In [ ]:
epochs = 20
batch_size = 100
num_steps = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)

for lstm_size in [128, 256, 512]:
    for num_layers in [1, 2]:
        for learning_rate in [0.002, 0.001]:
            log_string = 'logs/4/lr={},rl={},ru={}'.format(learning_rate, num_layers, lstm_size)
            writer = tf.summary.FileWriter(log_string)
            model = build_rnn(len(vocab),
                              batch_size=batch_size,
                              num_steps=num_steps,
                              learning_rate=learning_rate,
                              lstm_size=lstm_size,
                              num_layers=num_layers)
            train(model, epochs, writer)
Epoch 1/20 Iteration 1/3560 Training loss: 4.4250 0.5152 sec/batch
Epoch 1/20 Iteration 2/3560 Training loss: 4.4116 0.4735 sec/batch
Epoch 1/20 Iteration 3/3560 Training loss: 4.3954 0.4831 sec/batch
Epoch 1/20 Iteration 4/3560 Training loss: 4.3717 0.4779 sec/batch
Epoch 1/20 Iteration 5/3560 Training loss: 4.3236 0.4768 sec/batch
Epoch 1/20 Iteration 6/3560 Training loss: 4.2405 0.4789 sec/batch
Epoch 1/20 Iteration 7/3560 Training loss: 4.1540 0.4889 sec/batch
Epoch 1/20 Iteration 8/3560 Training loss: 4.0791 0.4894 sec/batch
Epoch 1/20 Iteration 9/3560 Training loss: 4.0118 0.4733 sec/batch
Epoch 1/20 Iteration 10/3560 Training loss: 3.9532 0.5164 sec/batch
Epoch 1/20 Iteration 11/3560 Training loss: 3.8998 0.6066 sec/batch
Epoch 1/20 Iteration 12/3560 Training loss: 3.8533 0.6195 sec/batch
Epoch 1/20 Iteration 13/3560 Training loss: 3.8129 0.7032 sec/batch
Epoch 1/20 Iteration 14/3560 Training loss: 3.7777 0.7130 sec/batch
Epoch 1/20 Iteration 15/3560 Training loss: 3.7463 0.6845 sec/batch
Epoch 1/20 Iteration 16/3560 Training loss: 3.7173 0.6561 sec/batch
Epoch 1/20 Iteration 17/3560 Training loss: 3.6903 0.6642 sec/batch
Epoch 1/20 Iteration 18/3560 Training loss: 3.6676 0.6665 sec/batch
Epoch 1/20 Iteration 19/3560 Training loss: 3.6456 0.6543 sec/batch
Epoch 1/20 Iteration 20/3560 Training loss: 3.6244 0.6598 sec/batch
Epoch 1/20 Iteration 21/3560 Training loss: 3.6057 0.6463 sec/batch
Epoch 1/20 Iteration 22/3560 Training loss: 3.5883 0.6515 sec/batch
Epoch 1/20 Iteration 23/3560 Training loss: 3.5718 0.5752 sec/batch
Epoch 1/20 Iteration 24/3560 Training loss: 3.5566 0.5345 sec/batch
Epoch 1/20 Iteration 25/3560 Training loss: 3.5420 0.5244 sec/batch
Epoch 1/20 Iteration 26/3560 Training loss: 3.5294 0.5259 sec/batch
Epoch 1/20 Iteration 27/3560 Training loss: 3.5177 0.5364 sec/batch
Epoch 1/20 Iteration 28/3560 Training loss: 3.5052 0.6455 sec/batch
Epoch 1/20 Iteration 29/3560 Training loss: 3.4938 0.6564 sec/batch
Epoch 1/20 Iteration 30/3560 Training loss: 3.4835 0.6409 sec/batch
Epoch 1/20 Iteration 31/3560 Training loss: 3.4743 0.6434 sec/batch
Epoch 1/20 Iteration 32/3560 Training loss: 3.4647 0.6278 sec/batch
Epoch 1/20 Iteration 33/3560 Training loss: 3.4554 0.5325 sec/batch
Epoch 1/20 Iteration 34/3560 Training loss: 3.4471 0.5378 sec/batch
Epoch 1/20 Iteration 35/3560 Training loss: 3.4388 0.5332 sec/batch
Epoch 1/20 Iteration 36/3560 Training loss: 3.4311 0.5342 sec/batch
Epoch 1/20 Iteration 37/3560 Training loss: 3.4229 0.5442 sec/batch
Epoch 1/20 Iteration 38/3560 Training loss: 3.4154 0.5373 sec/batch
Epoch 1/20 Iteration 39/3560 Training loss: 3.4081 0.5315 sec/batch
Epoch 1/20 Iteration 40/3560 Training loss: 3.4012 0.6371 sec/batch
Epoch 1/20 Iteration 41/3560 Training loss: 3.3945 0.6812 sec/batch
Epoch 1/20 Iteration 42/3560 Training loss: 3.3883 0.5359 sec/batch
Epoch 1/20 Iteration 43/3560 Training loss: 3.3821 0.5342 sec/batch
Epoch 1/20 Iteration 44/3560 Training loss: 3.3760 0.5426 sec/batch
Epoch 1/20 Iteration 45/3560 Training loss: 3.3700 0.5286 sec/batch
Epoch 1/20 Iteration 46/3560 Training loss: 3.3647 0.5445 sec/batch
Epoch 1/20 Iteration 47/3560 Training loss: 3.3596 0.5478 sec/batch
Epoch 1/20 Iteration 48/3560 Training loss: 3.3548 0.5508 sec/batch
Epoch 1/20 Iteration 49/3560 Training loss: 3.3503 0.5395 sec/batch
Epoch 1/20 Iteration 50/3560 Training loss: 3.3458 0.5437 sec/batch
Epoch 1/20 Iteration 51/3560 Training loss: 3.3414 0.5359 sec/batch
Epoch 1/20 Iteration 52/3560 Training loss: 3.3369 0.5383 sec/batch
Epoch 1/20 Iteration 53/3560 Training loss: 3.3327 0.5462 sec/batch
Epoch 1/20 Iteration 54/3560 Training loss: 3.3284 0.5285 sec/batch
Epoch 1/20 Iteration 55/3560 Training loss: 3.3244 0.5364 sec/batch
Epoch 1/20 Iteration 56/3560 Training loss: 3.3202 0.5339 sec/batch
Epoch 1/20 Iteration 57/3560 Training loss: 3.3163 0.5439 sec/batch
Epoch 1/20 Iteration 58/3560 Training loss: 3.3125 0.5401 sec/batch
Epoch 1/20 Iteration 59/3560 Training loss: 3.3087 0.5419 sec/batch
Epoch 1/20 Iteration 60/3560 Training loss: 3.3052 0.5406 sec/batch
Epoch 1/20 Iteration 61/3560 Training loss: 3.3017 0.5471 sec/batch
Epoch 1/20 Iteration 62/3560 Training loss: 3.2986 0.5978 sec/batch
Epoch 1/20 Iteration 63/3560 Training loss: 3.2957 0.6459 sec/batch
Epoch 1/20 Iteration 64/3560 Training loss: 3.2922 0.5890 sec/batch
Epoch 1/20 Iteration 65/3560 Training loss: 3.2888 0.7620 sec/batch
Epoch 1/20 Iteration 66/3560 Training loss: 3.2858 0.7204 sec/batch
Epoch 1/20 Iteration 67/3560 Training loss: 3.2830 0.5847 sec/batch
Epoch 1/20 Iteration 68/3560 Training loss: 3.2794 0.6171 sec/batch
Epoch 1/20 Iteration 69/3560 Training loss: 3.2763 0.5969 sec/batch
Epoch 1/20 Iteration 70/3560 Training loss: 3.2735 0.5768 sec/batch
Epoch 1/20 Iteration 71/3560 Training loss: 3.2707 0.5477 sec/batch
Epoch 1/20 Iteration 72/3560 Training loss: 3.2682 0.7168 sec/batch
Epoch 1/20 Iteration 73/3560 Training loss: 3.2655 0.6382 sec/batch
Epoch 1/20 Iteration 74/3560 Training loss: 3.2628 0.5408 sec/batch
Epoch 1/20 Iteration 75/3560 Training loss: 3.2603 0.6133 sec/batch
Epoch 1/20 Iteration 76/3560 Training loss: 3.2580 0.6265 sec/batch
Epoch 1/20 Iteration 77/3560 Training loss: 3.2556 0.6538 sec/batch
Epoch 1/20 Iteration 78/3560 Training loss: 3.2531 0.6914 sec/batch
Epoch 1/20 Iteration 79/3560 Training loss: 3.2505 0.6255 sec/batch
Epoch 1/20 Iteration 80/3560 Training loss: 3.2479 0.6163 sec/batch
Epoch 1/20 Iteration 81/3560 Training loss: 3.2454 0.6443 sec/batch
Epoch 1/20 Iteration 82/3560 Training loss: 3.2431 0.5403 sec/batch
Epoch 1/20 Iteration 83/3560 Training loss: 3.2408 0.6097 sec/batch
Epoch 1/20 Iteration 84/3560 Training loss: 3.2383 0.5970 sec/batch
Epoch 1/20 Iteration 85/3560 Training loss: 3.2357 0.6050 sec/batch
Epoch 1/20 Iteration 86/3560 Training loss: 3.2333 0.6037 sec/batch
Epoch 1/20 Iteration 87/3560 Training loss: 3.2308 0.7296 sec/batch
Epoch 1/20 Iteration 88/3560 Training loss: 3.2284 0.6626 sec/batch
Epoch 1/20 Iteration 89/3560 Training loss: 3.2261 0.6678 sec/batch
Epoch 1/20 Iteration 90/3560 Training loss: 3.2238 0.6552 sec/batch
Epoch 1/20 Iteration 91/3560 Training loss: 3.2216 0.6721 sec/batch
Epoch 1/20 Iteration 92/3560 Training loss: 3.2192 0.6654 sec/batch
Epoch 1/20 Iteration 93/3560 Training loss: 3.2169 0.6458 sec/batch
Epoch 1/20 Iteration 94/3560 Training loss: 3.2146 0.6749 sec/batch
Epoch 1/20 Iteration 95/3560 Training loss: 3.2122 0.6473 sec/batch
Epoch 1/20 Iteration 96/3560 Training loss: 3.2098 0.6612 sec/batch
Epoch 1/20 Iteration 97/3560 Training loss: 3.2076 0.6552 sec/batch
Epoch 1/20 Iteration 98/3560 Training loss: 3.2052 0.6553 sec/batch
Epoch 1/20 Iteration 99/3560 Training loss: 3.2028 0.6663 sec/batch
Epoch 1/20 Iteration 100/3560 Training loss: 3.2004 0.6580 sec/batch
Epoch 1/20 Iteration 101/3560 Training loss: 3.1982 0.6599 sec/batch
Epoch 1/20 Iteration 102/3560 Training loss: 3.1958 0.5290 sec/batch
Epoch 1/20 Iteration 103/3560 Training loss: 3.1935 0.5444 sec/batch
Epoch 1/20 Iteration 104/3560 Training loss: 3.1911 0.5452 sec/batch
Epoch 1/20 Iteration 105/3560 Training loss: 3.1887 0.5353 sec/batch
Epoch 1/20 Iteration 106/3560 Training loss: 3.1863 0.5454 sec/batch
Epoch 1/20 Iteration 107/3560 Training loss: 3.1837 0.7041 sec/batch
Epoch 1/20 Iteration 108/3560 Training loss: 3.1812 0.5379 sec/batch
Epoch 1/20 Iteration 109/3560 Training loss: 3.1789 0.5643 sec/batch
Epoch 1/20 Iteration 110/3560 Training loss: 3.1761 0.5048 sec/batch
Epoch 1/20 Iteration 111/3560 Training loss: 3.1737 0.6427 sec/batch
Epoch 1/20 Iteration 112/3560 Training loss: 3.1712 0.6223 sec/batch
Epoch 1/20 Iteration 113/3560 Training loss: 3.1687 0.5435 sec/batch
Epoch 1/20 Iteration 114/3560 Training loss: 3.1659 0.5360 sec/batch
Epoch 1/20 Iteration 115/3560 Training loss: 3.1633 0.6699 sec/batch
Epoch 1/20 Iteration 116/3560 Training loss: 3.1606 0.6429 sec/batch
Epoch 1/20 Iteration 117/3560 Training loss: 3.1580 0.5056 sec/batch
Epoch 1/20 Iteration 118/3560 Training loss: 3.1556 0.6208 sec/batch
Epoch 1/20 Iteration 119/3560 Training loss: 3.1532 0.4763 sec/batch
Epoch 1/20 Iteration 120/3560 Training loss: 3.1506 0.5628 sec/batch
Epoch 1/20 Iteration 121/3560 Training loss: 3.1483 0.4950 sec/batch
Epoch 1/20 Iteration 122/3560 Training loss: 3.1458 0.5634 sec/batch
Epoch 1/20 Iteration 123/3560 Training loss: 3.1432 0.6508 sec/batch
Epoch 1/20 Iteration 124/3560 Training loss: 3.1407 0.6897 sec/batch
Epoch 1/20 Iteration 125/3560 Training loss: 3.1382 0.7233 sec/batch
Epoch 1/20 Iteration 126/3560 Training loss: 3.1353 0.6474 sec/batch
Epoch 1/20 Iteration 127/3560 Training loss: 3.1328 0.6321 sec/batch
Epoch 1/20 Iteration 128/3560 Training loss: 3.1303 0.6550 sec/batch
Epoch 1/20 Iteration 129/3560 Training loss: 3.1277 0.6528 sec/batch
Epoch 1/20 Iteration 130/3560 Training loss: 3.1251 0.6556 sec/batch
Epoch 1/20 Iteration 131/3560 Training loss: 3.1225 0.6827 sec/batch
Epoch 1/20 Iteration 132/3560 Training loss: 3.1198 0.6819 sec/batch
Epoch 1/20 Iteration 133/3560 Training loss: 3.1173 0.6636 sec/batch
Epoch 1/20 Iteration 134/3560 Training loss: 3.1147 0.5959 sec/batch
Epoch 1/20 Iteration 135/3560 Training loss: 3.1118 0.5994 sec/batch
Epoch 1/20 Iteration 136/3560 Training loss: 3.1090 0.6555 sec/batch
Epoch 1/20 Iteration 137/3560 Training loss: 3.1063 0.5476 sec/batch
Epoch 1/20 Iteration 138/3560 Training loss: 3.1036 0.6923 sec/batch
Epoch 1/20 Iteration 139/3560 Training loss: 3.1011 0.6848 sec/batch
Epoch 1/20 Iteration 140/3560 Training loss: 3.0983 0.7210 sec/batch
Epoch 1/20 Iteration 141/3560 Training loss: 3.0957 0.6850 sec/batch
Epoch 1/20 Iteration 142/3560 Training loss: 3.0930 0.6398 sec/batch
Epoch 1/20 Iteration 143/3560 Training loss: 3.0903 0.6747 sec/batch
Epoch 1/20 Iteration 144/3560 Training loss: 3.0876 0.7121 sec/batch
Epoch 1/20 Iteration 145/3560 Training loss: 3.0849 0.5913 sec/batch
Epoch 1/20 Iteration 146/3560 Training loss: 3.0823 0.5373 sec/batch
Epoch 1/20 Iteration 147/3560 Training loss: 3.0796 0.5323 sec/batch
Epoch 1/20 Iteration 148/3560 Training loss: 3.0771 0.5483 sec/batch
Epoch 1/20 Iteration 149/3560 Training loss: 3.0744 0.5647 sec/batch
Epoch 1/20 Iteration 150/3560 Training loss: 3.0717 0.5446 sec/batch
Epoch 1/20 Iteration 151/3560 Training loss: 3.0692 0.5550 sec/batch
Epoch 1/20 Iteration 152/3560 Training loss: 3.0668 0.5408 sec/batch
Epoch 1/20 Iteration 153/3560 Training loss: 3.0643 0.5385 sec/batch
Epoch 1/20 Iteration 154/3560 Training loss: 3.0617 0.5380 sec/batch
Epoch 1/20 Iteration 155/3560 Training loss: 3.0590 0.5312 sec/batch
Epoch 1/20 Iteration 156/3560 Training loss: 3.0564 0.5506 sec/batch
Epoch 1/20 Iteration 157/3560 Training loss: 3.0537 0.5440 sec/batch
Epoch 1/20 Iteration 158/3560 Training loss: 3.0510 0.5555 sec/batch
Epoch 1/20 Iteration 159/3560 Training loss: 3.0483 0.5417 sec/batch
Epoch 1/20 Iteration 160/3560 Training loss: 3.0457 0.5360 sec/batch
Epoch 1/20 Iteration 161/3560 Training loss: 3.0432 0.5394 sec/batch
Epoch 1/20 Iteration 162/3560 Training loss: 3.0404 0.5605 sec/batch
Epoch 1/20 Iteration 163/3560 Training loss: 3.0377 0.5584 sec/batch
Epoch 1/20 Iteration 164/3560 Training loss: 3.0350 0.5869 sec/batch
Epoch 1/20 Iteration 165/3560 Training loss: 3.0325 0.5496 sec/batch
Epoch 1/20 Iteration 166/3560 Training loss: 3.0299 0.5450 sec/batch
Epoch 1/20 Iteration 167/3560 Training loss: 3.0274 0.5372 sec/batch
Epoch 1/20 Iteration 168/3560 Training loss: 3.0249 0.5418 sec/batch
Epoch 1/20 Iteration 169/3560 Training loss: 3.0225 0.5468 sec/batch
Epoch 1/20 Iteration 170/3560 Training loss: 3.0198 0.5457 sec/batch
Epoch 1/20 Iteration 171/3560 Training loss: 3.0173 0.5420 sec/batch
Epoch 1/20 Iteration 172/3560 Training loss: 3.0151 0.5572 sec/batch
Epoch 1/20 Iteration 173/3560 Training loss: 3.0128 0.5395 sec/batch
Epoch 1/20 Iteration 174/3560 Training loss: 3.0106 0.5487 sec/batch
Epoch 1/20 Iteration 175/3560 Training loss: 3.0083 0.5456 sec/batch
Epoch 1/20 Iteration 176/3560 Training loss: 3.0058 0.5410 sec/batch
Epoch 1/20 Iteration 177/3560 Training loss: 3.0033 0.5302 sec/batch
Epoch 1/20 Iteration 178/3560 Training loss: 3.0007 0.5473 sec/batch
Epoch 2/20 Iteration 179/3560 Training loss: 2.6048 0.5387 sec/batch
Epoch 2/20 Iteration 180/3560 Training loss: 2.5641 0.5623 sec/batch
Epoch 2/20 Iteration 181/3560 Training loss: 2.5562 0.5316 sec/batch
Epoch 2/20 Iteration 182/3560 Training loss: 2.5528 0.5336 sec/batch
Epoch 2/20 Iteration 183/3560 Training loss: 2.5515 0.5526 sec/batch
Epoch 2/20 Iteration 184/3560 Training loss: 2.5497 0.5607 sec/batch
Epoch 2/20 Iteration 185/3560 Training loss: 2.5496 0.5541 sec/batch
Epoch 2/20 Iteration 186/3560 Training loss: 2.5496 0.5438 sec/batch
Epoch 2/20 Iteration 187/3560 Training loss: 2.5490 0.5387 sec/batch
Epoch 2/20 Iteration 188/3560 Training loss: 2.5470 0.5377 sec/batch
Epoch 2/20 Iteration 189/3560 Training loss: 2.5453 0.5435 sec/batch
Epoch 2/20 Iteration 190/3560 Training loss: 2.5444 0.5365 sec/batch
Epoch 2/20 Iteration 191/3560 Training loss: 2.5430 0.5397 sec/batch
Epoch 2/20 Iteration 192/3560 Training loss: 2.5445 0.5550 sec/batch
Epoch 2/20 Iteration 193/3560 Training loss: 2.5432 0.5371 sec/batch
Epoch 2/20 Iteration 194/3560 Training loss: 2.5427 0.5324 sec/batch
Epoch 2/20 Iteration 195/3560 Training loss: 2.5421 0.5283 sec/batch
Epoch 2/20 Iteration 196/3560 Training loss: 2.5426 0.5302 sec/batch
Epoch 2/20 Iteration 197/3560 Training loss: 2.5415 0.5463 sec/batch
Epoch 2/20 Iteration 198/3560 Training loss: 2.5396 0.5353 sec/batch
Epoch 2/20 Iteration 199/3560 Training loss: 2.5386 0.5443 sec/batch
Epoch 2/20 Iteration 200/3560 Training loss: 2.5385 0.6069 sec/batch
Epoch 2/20 Iteration 201/3560 Training loss: 2.5370 0.6892 sec/batch
Epoch 2/20 Iteration 202/3560 Training loss: 2.5360 0.5440 sec/batch
Epoch 2/20 Iteration 203/3560 Training loss: 2.5343 0.5346 sec/batch
Epoch 2/20 Iteration 204/3560 Training loss: 2.5333 0.5425 sec/batch
Epoch 2/20 Iteration 205/3560 Training loss: 2.5319 0.5443 sec/batch
Epoch 2/20 Iteration 206/3560 Training loss: 2.5310 0.5569 sec/batch
Epoch 2/20 Iteration 207/3560 Training loss: 2.5302 0.5487 sec/batch
Epoch 2/20 Iteration 208/3560 Training loss: 2.5290 0.5298 sec/batch
Epoch 2/20 Iteration 209/3560 Training loss: 2.5288 0.5405 sec/batch
Epoch 2/20 Iteration 210/3560 Training loss: 2.5275 0.5356 sec/batch
Epoch 2/20 Iteration 211/3560 Training loss: 2.5257 0.5441 sec/batch
Epoch 2/20 Iteration 212/3560 Training loss: 2.5247 0.5393 sec/batch
Epoch 2/20 Iteration 213/3560 Training loss: 2.5233 0.5421 sec/batch
Epoch 2/20 Iteration 214/3560 Training loss: 2.5225 0.5320 sec/batch
Epoch 2/20 Iteration 215/3560 Training loss: 2.5209 0.5335 sec/batch
Epoch 2/20 Iteration 216/3560 Training loss: 2.5191 0.5573 sec/batch
Epoch 2/20 Iteration 217/3560 Training loss: 2.5177 0.5354 sec/batch
Epoch 2/20 Iteration 218/3560 Training loss: 2.5164 0.5380 sec/batch
Epoch 2/20 Iteration 219/3560 Training loss: 2.5149 0.5375 sec/batch
Epoch 2/20 Iteration 220/3560 Training loss: 2.5133 0.5437 sec/batch
Epoch 2/20 Iteration 221/3560 Training loss: 2.5120 0.5531 sec/batch
Epoch 2/20 Iteration 222/3560 Training loss: 2.5107 0.5455 sec/batch
Epoch 2/20 Iteration 223/3560 Training loss: 2.5092 0.5358 sec/batch
Epoch 2/20 Iteration 224/3560 Training loss: 2.5074 0.5356 sec/batch
Epoch 2/20 Iteration 225/3560 Training loss: 2.5066 0.5551 sec/batch
Epoch 2/20 Iteration 226/3560 Training loss: 2.5056 0.5391 sec/batch
Epoch 2/20 Iteration 227/3560 Training loss: 2.5045 0.5358 sec/batch
Epoch 2/20 Iteration 228/3560 Training loss: 2.5038 0.5319 sec/batch
Epoch 2/20 Iteration 229/3560 Training loss: 2.5027 0.6389 sec/batch
Epoch 2/20 Iteration 230/3560 Training loss: 2.5017 0.7699 sec/batch
Epoch 2/20 Iteration 231/3560 Training loss: 2.5005 0.5404 sec/batch
Epoch 2/20 Iteration 232/3560 Training loss: 2.4992 0.5349 sec/batch
Epoch 2/20 Iteration 233/3560 Training loss: 2.4982 0.5677 sec/batch
Epoch 2/20 Iteration 234/3560 Training loss: 2.4973 0.6430 sec/batch
Epoch 2/20 Iteration 235/3560 Training loss: 2.4964 0.7779 sec/batch
Epoch 2/20 Iteration 236/3560 Training loss: 2.4953 0.7735 sec/batch
Epoch 2/20 Iteration 237/3560 Training loss: 2.4944 0.7228 sec/batch
Epoch 2/20 Iteration 238/3560 Training loss: 2.4937 0.6986 sec/batch
Epoch 2/20 Iteration 239/3560 Training loss: 2.4927 0.6438 sec/batch
Epoch 2/20 Iteration 240/3560 Training loss: 2.4920 0.6501 sec/batch
Epoch 2/20 Iteration 241/3560 Training loss: 2.4915 0.5655 sec/batch
Epoch 2/20 Iteration 242/3560 Training loss: 2.4905 0.5380 sec/batch
Epoch 2/20 Iteration 243/3560 Training loss: 2.4893 0.6681 sec/batch
Epoch 2/20 Iteration 244/3560 Training loss: 2.4887 0.8183 sec/batch
Epoch 2/20 Iteration 245/3560 Training loss: 2.4877 0.6544 sec/batch
Epoch 2/20 Iteration 246/3560 Training loss: 2.4864 0.7415 sec/batch
Epoch 2/20 Iteration 247/3560 Training loss: 2.4854 0.7331 sec/batch
Epoch 2/20 Iteration 248/3560 Training loss: 2.4848 0.5659 sec/batch
Epoch 2/20 Iteration 249/3560 Training loss: 2.4840 0.6391 sec/batch
Epoch 2/20 Iteration 250/3560 Training loss: 2.4834 0.7326 sec/batch
Epoch 2/20 Iteration 251/3560 Training loss: 2.4826 0.6841 sec/batch
Epoch 2/20 Iteration 252/3560 Training loss: 2.4816 0.6198 sec/batch
Epoch 2/20 Iteration 253/3560 Training loss: 2.4808 0.5384 sec/batch
Epoch 2/20 Iteration 254/3560 Training loss: 2.4805 0.5467 sec/batch
Epoch 2/20 Iteration 255/3560 Training loss: 2.4795 0.5492 sec/batch
Epoch 2/20 Iteration 256/3560 Training loss: 2.4789 0.5306 sec/batch
Epoch 2/20 Iteration 257/3560 Training loss: 2.4780 0.5420 sec/batch
Epoch 2/20 Iteration 258/3560 Training loss: 2.4771 0.5448 sec/batch
Epoch 2/20 Iteration 259/3560 Training loss: 2.4762 0.5424 sec/batch
Epoch 2/20 Iteration 260/3560 Training loss: 2.4757 0.5314 sec/batch
Epoch 2/20 Iteration 261/3560 Training loss: 2.4748 0.5391 sec/batch
Epoch 2/20 Iteration 262/3560 Training loss: 2.4738 0.5446 sec/batch
Epoch 2/20 Iteration 263/3560 Training loss: 2.4726 0.5413 sec/batch
Epoch 2/20 Iteration 264/3560 Training loss: 2.4717 0.5321 sec/batch
Epoch 2/20 Iteration 265/3560 Training loss: 2.4708 0.5365 sec/batch
Epoch 2/20 Iteration 266/3560 Training loss: 2.4699 0.5313 sec/batch
Epoch 2/20 Iteration 267/3560 Training loss: 2.4691 0.5331 sec/batch
Epoch 2/20 Iteration 268/3560 Training loss: 2.4684 0.5645 sec/batch
Epoch 2/20 Iteration 269/3560 Training loss: 2.4677 0.5517 sec/batch
Epoch 2/20 Iteration 270/3560 Training loss: 2.4670 0.5505 sec/batch
Epoch 2/20 Iteration 271/3560 Training loss: 2.4662 0.5337 sec/batch
Epoch 2/20 Iteration 272/3560 Training loss: 2.4653 0.5260 sec/batch
Epoch 2/20 Iteration 273/3560 Training loss: 2.4644 0.5261 sec/batch
Epoch 2/20 Iteration 274/3560 Training loss: 2.4636 0.5357 sec/batch
Epoch 2/20 Iteration 275/3560 Training loss: 2.4628 0.5218 sec/batch
Epoch 2/20 Iteration 276/3560 Training loss: 2.4620 0.5440 sec/batch
Epoch 2/20 Iteration 277/3560 Training loss: 2.4612 0.5318 sec/batch
Epoch 2/20 Iteration 278/3560 Training loss: 2.4604 0.5335 sec/batch
Epoch 2/20 Iteration 279/3560 Training loss: 2.4598 0.5355 sec/batch
Epoch 2/20 Iteration 280/3560 Training loss: 2.4592 0.5440 sec/batch
Epoch 2/20 Iteration 281/3560 Training loss: 2.4583 0.5522 sec/batch
Epoch 2/20 Iteration 282/3560 Training loss: 2.4575 0.6096 sec/batch
Epoch 2/20 Iteration 283/3560 Training loss: 2.4567 0.6537 sec/batch
Epoch 2/20 Iteration 284/3560 Training loss: 2.4560 0.6413 sec/batch
Epoch 2/20 Iteration 285/3560 Training loss: 2.4552 0.6503 sec/batch
Epoch 2/20 Iteration 286/3560 Training loss: 2.4548 0.6636 sec/batch
Epoch 2/20 Iteration 287/3560 Training loss: 2.4543 0.6072 sec/batch
Epoch 2/20 Iteration 288/3560 Training loss: 2.4535 0.5343 sec/batch
Epoch 2/20 Iteration 289/3560 Training loss: 2.4529 0.5396 sec/batch
Epoch 2/20 Iteration 290/3560 Training loss: 2.4524 0.5408 sec/batch
Epoch 2/20 Iteration 291/3560 Training loss: 2.4516 0.5420 sec/batch
Epoch 2/20 Iteration 292/3560 Training loss: 2.4509 0.5484 sec/batch
Epoch 2/20 Iteration 293/3560 Training loss: 2.4501 0.5546 sec/batch
Epoch 2/20 Iteration 294/3560 Training loss: 2.4492 0.5465 sec/batch
Epoch 2/20 Iteration 295/3560 Training loss: 2.4486 0.5498 sec/batch
Epoch 2/20 Iteration 296/3560 Training loss: 2.4480 0.5327 sec/batch
Epoch 2/20 Iteration 297/3560 Training loss: 2.4476 0.5400 sec/batch
Epoch 2/20 Iteration 298/3560 Training loss: 2.4470 0.5375 sec/batch
Epoch 2/20 Iteration 299/3560 Training loss: 2.4466 0.5397 sec/batch
Epoch 2/20 Iteration 300/3560 Training loss: 2.4460 0.5614 sec/batch
Epoch 2/20 Iteration 301/3560 Training loss: 2.4453 0.5349 sec/batch
Epoch 2/20 Iteration 302/3560 Training loss: 2.4448 0.5347 sec/batch
Epoch 2/20 Iteration 303/3560 Training loss: 2.4442 0.5808 sec/batch
Epoch 2/20 Iteration 304/3560 Training loss: 2.4434 0.7345 sec/batch
Epoch 2/20 Iteration 305/3560 Training loss: 2.4429 0.6916 sec/batch
Epoch 2/20 Iteration 306/3560 Training loss: 2.4424 0.6908 sec/batch
Epoch 2/20 Iteration 307/3560 Training loss: 2.4418 0.6475 sec/batch
Epoch 2/20 Iteration 308/3560 Training loss: 2.4412 0.5976 sec/batch
Epoch 2/20 Iteration 309/3560 Training loss: 2.4406 0.6060 sec/batch
Epoch 2/20 Iteration 310/3560 Training loss: 2.4399 0.5475 sec/batch
Epoch 2/20 Iteration 311/3560 Training loss: 2.4393 0.5565 sec/batch
Epoch 2/20 Iteration 312/3560 Training loss: 2.4389 0.5482 sec/batch
Epoch 2/20 Iteration 313/3560 Training loss: 2.4382 0.6476 sec/batch
Epoch 2/20 Iteration 314/3560 Training loss: 2.4376 0.7010 sec/batch
Epoch 2/20 Iteration 315/3560 Training loss: 2.4370 0.5695 sec/batch
Epoch 2/20 Iteration 316/3560 Training loss: 2.4365 0.5495 sec/batch
Epoch 2/20 Iteration 317/3560 Training loss: 2.4361 0.5456 sec/batch
Epoch 2/20 Iteration 318/3560 Training loss: 2.4355 0.5297 sec/batch
Epoch 2/20 Iteration 319/3560 Training loss: 2.4351 0.5511 sec/batch
Epoch 2/20 Iteration 320/3560 Training loss: 2.4346 0.5932 sec/batch
Epoch 2/20 Iteration 321/3560 Training loss: 2.4341 0.5479 sec/batch
Epoch 2/20 Iteration 322/3560 Training loss: 2.4335 0.5450 sec/batch
Epoch 2/20 Iteration 323/3560 Training loss: 2.4330 0.5668 sec/batch
Epoch 2/20 Iteration 324/3560 Training loss: 2.4326 0.5413 sec/batch
Epoch 2/20 Iteration 325/3560 Training loss: 2.4321 0.5470 sec/batch
Epoch 2/20 Iteration 326/3560 Training loss: 2.4317 0.5463 sec/batch
Epoch 2/20 Iteration 327/3560 Training loss: 2.4311 0.5348 sec/batch
Epoch 2/20 Iteration 328/3560 Training loss: 2.4305 0.5338 sec/batch
Epoch 2/20 Iteration 329/3560 Training loss: 2.4302 0.5415 sec/batch
Epoch 2/20 Iteration 330/3560 Training loss: 2.4299 0.5508 sec/batch
Epoch 2/20 Iteration 331/3560 Training loss: 2.4295 0.5817 sec/batch
Epoch 2/20 Iteration 332/3560 Training loss: 2.4291 0.5550 sec/batch
Epoch 2/20 Iteration 333/3560 Training loss: 2.4285 0.5343 sec/batch
Epoch 2/20 Iteration 334/3560 Training loss: 2.4279 0.5482 sec/batch
Epoch 2/20 Iteration 335/3560 Training loss: 2.4273 0.5349 sec/batch
Epoch 2/20 Iteration 336/3560 Training loss: 2.4268 0.5500 sec/batch
Epoch 2/20 Iteration 337/3560 Training loss: 2.4262 0.5340 sec/batch
Epoch 2/20 Iteration 338/3560 Training loss: 2.4259 0.5488 sec/batch
Epoch 2/20 Iteration 339/3560 Training loss: 2.4256 0.5625 sec/batch
Epoch 2/20 Iteration 340/3560 Training loss: 2.4250 0.5421 sec/batch
Epoch 2/20 Iteration 341/3560 Training loss: 2.4244 0.5389 sec/batch
Epoch 2/20 Iteration 342/3560 Training loss: 2.4239 0.5420 sec/batch
Epoch 2/20 Iteration 343/3560 Training loss: 2.4235 0.5382 sec/batch
Epoch 2/20 Iteration 344/3560 Training loss: 2.4229 0.5490 sec/batch
Epoch 2/20 Iteration 345/3560 Training loss: 2.4225 0.5311 sec/batch
Epoch 2/20 Iteration 346/3560 Training loss: 2.4221 0.5298 sec/batch
Epoch 2/20 Iteration 347/3560 Training loss: 2.4217 0.5413 sec/batch
Epoch 2/20 Iteration 348/3560 Training loss: 2.4211 0.5536 sec/batch
Epoch 2/20 Iteration 349/3560 Training loss: 2.4207 0.5539 sec/batch
Epoch 2/20 Iteration 350/3560 Training loss: 2.4204 0.5470 sec/batch
Epoch 2/20 Iteration 351/3560 Training loss: 2.4203 0.5453 sec/batch
Epoch 2/20 Iteration 352/3560 Training loss: 2.4201 0.5439 sec/batch
Epoch 2/20 Iteration 353/3560 Training loss: 2.4199 0.5576 sec/batch
Epoch 2/20 Iteration 354/3560 Training loss: 2.4194 0.5588 sec/batch
Epoch 2/20 Iteration 355/3560 Training loss: 2.4189 0.5421 sec/batch
Epoch 2/20 Iteration 356/3560 Training loss: 2.4183 0.5464 sec/batch
Epoch 3/20 Iteration 357/3560 Training loss: 2.3786 0.5486 sec/batch
Epoch 3/20 Iteration 358/3560 Training loss: 2.3386 0.5448 sec/batch
Epoch 3/20 Iteration 359/3560 Training loss: 2.3260 0.5391 sec/batch
Epoch 3/20 Iteration 360/3560 Training loss: 2.3242 0.5460 sec/batch
Epoch 3/20 Iteration 361/3560 Training loss: 2.3241 0.5391 sec/batch
Epoch 3/20 Iteration 362/3560 Training loss: 2.3224 0.5447 sec/batch
Epoch 3/20 Iteration 363/3560 Training loss: 2.3227 0.5531 sec/batch
Epoch 3/20 Iteration 364/3560 Training loss: 2.3248 0.5434 sec/batch
Epoch 3/20 Iteration 365/3560 Training loss: 2.3253 0.5454 sec/batch
Epoch 3/20 Iteration 366/3560 Training loss: 2.3256 0.5470 sec/batch
Epoch 3/20 Iteration 367/3560 Training loss: 2.3245 0.5439 sec/batch
Epoch 3/20 Iteration 368/3560 Training loss: 2.3241 0.5371 sec/batch
Epoch 3/20 Iteration 369/3560 Training loss: 2.3242 0.5428 sec/batch
Epoch 3/20 Iteration 370/3560 Training loss: 2.3264 0.5371 sec/batch
Epoch 3/20 Iteration 371/3560 Training loss: 2.3256 0.5400 sec/batch
Epoch 3/20 Iteration 372/3560 Training loss: 2.3249 0.5462 sec/batch
Epoch 3/20 Iteration 373/3560 Training loss: 2.3246 0.5413 sec/batch
Epoch 3/20 Iteration 374/3560 Training loss: 2.3267 0.5457 sec/batch
Epoch 3/20 Iteration 375/3560 Training loss: 2.3269 0.5389 sec/batch
Epoch 3/20 Iteration 376/3560 Training loss: 2.3253 0.5331 sec/batch
Epoch 3/20 Iteration 377/3560 Training loss: 2.3244 0.5349 sec/batch
Epoch 3/20 Iteration 378/3560 Training loss: 2.3256 0.5320 sec/batch
Epoch 3/20 Iteration 379/3560 Training loss: 2.3252 0.5279 sec/batch
Epoch 3/20 Iteration 380/3560 Training loss: 2.3246 0.5430 sec/batch
Epoch 3/20 Iteration 381/3560 Training loss: 2.3240 0.5766 sec/batch
Epoch 3/20 Iteration 382/3560 Training loss: 2.3229 0.6609 sec/batch
Epoch 3/20 Iteration 383/3560 Training loss: 2.3221 0.6672 sec/batch
Epoch 3/20 Iteration 384/3560 Training loss: 2.3217 0.6706 sec/batch
Epoch 3/20 Iteration 385/3560 Training loss: 2.3219 0.6744 sec/batch
Epoch 3/20 Iteration 386/3560 Training loss: 2.3218 0.6582 sec/batch
Epoch 3/20 Iteration 387/3560 Training loss: 2.3217 0.6860 sec/batch
Epoch 3/20 Iteration 388/3560 Training loss: 2.3210 0.6771 sec/batch
Epoch 3/20 Iteration 389/3560 Training loss: 2.3203 0.6572 sec/batch
Epoch 3/20 Iteration 390/3560 Training loss: 2.3204 0.6833 sec/batch
Epoch 3/20 Iteration 391/3560 Training loss: 2.3200 0.6497 sec/batch
Epoch 3/20 Iteration 392/3560 Training loss: 2.3198 0.7196 sec/batch
Epoch 3/20 Iteration 393/3560 Training loss: 2.3193 0.6447 sec/batch
Epoch 3/20 Iteration 394/3560 Training loss: 2.3182 0.6543 sec/batch
Epoch 3/20 Iteration 395/3560 Training loss: 2.3174 0.6600 sec/batch
Epoch 3/20 Iteration 396/3560 Training loss: 2.3165 0.6567 sec/batch
Epoch 3/20 Iteration 397/3560 Training loss: 2.3160 0.6711 sec/batch
Epoch 3/20 Iteration 398/3560 Training loss: 2.3152 0.6596 sec/batch
Epoch 3/20 Iteration 399/3560 Training loss: 2.3144 0.6567 sec/batch
Epoch 3/20 Iteration 400/3560 Training loss: 2.3137 0.5572 sec/batch
Epoch 3/20 Iteration 401/3560 Training loss: 2.3130 0.5422 sec/batch
Epoch 3/20 Iteration 402/3560 Training loss: 2.3116 0.5319 sec/batch
Epoch 3/20 Iteration 403/3560 Training loss: 2.3116 0.5418 sec/batch
Epoch 3/20 Iteration 404/3560 Training loss: 2.3111 0.5397 sec/batch
Epoch 3/20 Iteration 405/3560 Training loss: 2.3108 0.5367 sec/batch
Epoch 3/20 Iteration 406/3560 Training loss: 2.3110 0.5304 sec/batch
Epoch 3/20 Iteration 407/3560 Training loss: 2.3103 0.5551 sec/batch
Epoch 3/20 Iteration 408/3560 Training loss: 2.3102 0.5553 sec/batch
Epoch 3/20 Iteration 409/3560 Training loss: 2.3095 0.5446 sec/batch
Epoch 3/20 Iteration 410/3560 Training loss: 2.3090 0.5411 sec/batch
Epoch 3/20 Iteration 411/3560 Training loss: 2.3085 0.5464 sec/batch
Epoch 3/20 Iteration 412/3560 Training loss: 2.3083 0.5719 sec/batch
Epoch 3/20 Iteration 413/3560 Training loss: 2.3080 0.5467 sec/batch
Epoch 3/20 Iteration 414/3560 Training loss: 2.3075 0.5633 sec/batch
Epoch 3/20 Iteration 415/3560 Training loss: 2.3070 0.5504 sec/batch
Epoch 3/20 Iteration 416/3560 Training loss: 2.3071 0.5334 sec/batch
Epoch 3/20 Iteration 417/3560 Training loss: 2.3066 0.5404 sec/batch
Epoch 3/20 Iteration 418/3560 Training loss: 2.3066 0.5464 sec/batch
Epoch 3/20 Iteration 419/3560 Training loss: 2.3066 0.6434 sec/batch
Epoch 3/20 Iteration 420/3560 Training loss: 2.3062 0.6575 sec/batch
Epoch 3/20 Iteration 421/3560 Training loss: 2.3055 0.6596 sec/batch
Epoch 3/20 Iteration 422/3560 Training loss: 2.3053 0.6560 sec/batch
Epoch 3/20 Iteration 423/3560 Training loss: 2.3051 0.6559 sec/batch
Epoch 3/20 Iteration 424/3560 Training loss: 2.3042 0.6579 sec/batch
Epoch 3/20 Iteration 425/3560 Training loss: 2.3037 0.6803 sec/batch
Epoch 3/20 Iteration 426/3560 Training loss: 2.3034 0.6689 sec/batch
Epoch 3/20 Iteration 427/3560 Training loss: 2.3032 0.6900 sec/batch
Epoch 3/20 Iteration 428/3560 Training loss: 2.3031 0.6541 sec/batch
Epoch 3/20 Iteration 429/3560 Training loss: 2.3029 0.6546 sec/batch
Epoch 3/20 Iteration 430/3560 Training loss: 2.3024 0.5470 sec/batch
Epoch 3/20 Iteration 431/3560 Training loss: 2.3020 0.5544 sec/batch
Epoch 3/20 Iteration 432/3560 Training loss: 2.3021 0.5452 sec/batch
Epoch 3/20 Iteration 433/3560 Training loss: 2.3017 0.5561 sec/batch
Epoch 3/20 Iteration 434/3560 Training loss: 2.3016 0.5588 sec/batch
Epoch 3/20 Iteration 435/3560 Training loss: 2.3010 0.5392 sec/batch
Epoch 3/20 Iteration 436/3560 Training loss: 2.3006 0.6590 sec/batch
Epoch 3/20 Iteration 437/3560 Training loss: 2.3000 0.6718 sec/batch
Epoch 3/20 Iteration 438/3560 Training loss: 2.2998 0.6781 sec/batch
Epoch 3/20 Iteration 439/3560 Training loss: 2.2993 0.6563 sec/batch
Epoch 3/20 Iteration 440/3560 Training loss: 2.2987 0.6595 sec/batch
Epoch 3/20 Iteration 441/3560 Training loss: 2.2978 0.5343 sec/batch
Epoch 3/20 Iteration 442/3560 Training loss: 2.2973 0.5428 sec/batch
Epoch 3/20 Iteration 443/3560 Training loss: 2.2969 0.5455 sec/batch
Epoch 3/20 Iteration 444/3560 Training loss: 2.2965 0.5388 sec/batch
Epoch 3/20 Iteration 445/3560 Training loss: 2.2960 0.5355 sec/batch
Epoch 3/20 Iteration 446/3560 Training loss: 2.2958 0.5489 sec/batch
Epoch 3/20 Iteration 447/3560 Training loss: 2.2954 0.5377 sec/batch
Epoch 3/20 Iteration 448/3560 Training loss: 2.2951 0.5369 sec/batch
Epoch 3/20 Iteration 449/3560 Training loss: 2.2946 0.5812 sec/batch
Epoch 3/20 Iteration 450/3560 Training loss: 2.2941 0.6599 sec/batch
Epoch 3/20 Iteration 451/3560 Training loss: 2.2935 0.6626 sec/batch
Epoch 3/20 Iteration 452/3560 Training loss: 2.2931 0.6605 sec/batch
Epoch 3/20 Iteration 453/3560 Training loss: 2.2927 0.6657 sec/batch
Epoch 3/20 Iteration 454/3560 Training loss: 2.2923 0.6499 sec/batch
Epoch 3/20 Iteration 455/3560 Training loss: 2.2918 0.5962 sec/batch
Epoch 3/20 Iteration 456/3560 Training loss: 2.2913 0.6077 sec/batch
Epoch 3/20 Iteration 457/3560 Training loss: 2.2911 0.6036 sec/batch
Epoch 3/20 Iteration 458/3560 Training loss: 2.2908 0.5642 sec/batch
Epoch 3/20 Iteration 459/3560 Training loss: 2.2903 0.5808 sec/batch
Epoch 3/20 Iteration 460/3560 Training loss: 2.2900 0.6384 sec/batch
Epoch 3/20 Iteration 461/3560 Training loss: 2.2895 0.5806 sec/batch
Epoch 3/20 Iteration 462/3560 Training loss: 2.2891 0.5683 sec/batch
Epoch 3/20 Iteration 463/3560 Training loss: 2.2888 0.5765 sec/batch
Epoch 3/20 Iteration 464/3560 Training loss: 2.2885 0.5583 sec/batch
Epoch 3/20 Iteration 465/3560 Training loss: 2.2883 0.5418 sec/batch
Epoch 3/20 Iteration 466/3560 Training loss: 2.2877 0.5329 sec/batch
Epoch 3/20 Iteration 467/3560 Training loss: 2.2874 0.5600 sec/batch
Epoch 3/20 Iteration 468/3560 Training loss: 2.2872 0.7368 sec/batch
Epoch 3/20 Iteration 469/3560 Training loss: 2.2868 0.6307 sec/batch
Epoch 3/20 Iteration 470/3560 Training loss: 2.2863 0.5579 sec/batch
Epoch 3/20 Iteration 471/3560 Training loss: 2.2859 0.6233 sec/batch
Epoch 3/20 Iteration 472/3560 Training loss: 2.2853 0.6138 sec/batch
Epoch 3/20 Iteration 473/3560 Training loss: 2.2850 0.6204 sec/batch
Epoch 3/20 Iteration 474/3560 Training loss: 2.2847 0.5948 sec/batch
Epoch 3/20 Iteration 475/3560 Training loss: 2.2846 0.6270 sec/batch
Epoch 3/20 Iteration 476/3560 Training loss: 2.2844 0.6477 sec/batch
Epoch 3/20 Iteration 477/3560 Training loss: 2.2842 0.6346 sec/batch
Epoch 3/20 Iteration 478/3560 Training loss: 2.2838 0.6377 sec/batch
Epoch 3/20 Iteration 479/3560 Training loss: 2.2834 0.5346 sec/batch
Epoch 3/20 Iteration 480/3560 Training loss: 2.2832 0.5575 sec/batch
Epoch 3/20 Iteration 481/3560 Training loss: 2.2828 0.5876 sec/batch
Epoch 3/20 Iteration 482/3560 Training loss: 2.2824 0.5117 sec/batch
Epoch 3/20 Iteration 483/3560 Training loss: 2.2822 0.5148 sec/batch
Epoch 3/20 Iteration 484/3560 Training loss: 2.2820 0.5068 sec/batch
Epoch 3/20 Iteration 485/3560 Training loss: 2.2817 0.5376 sec/batch
Epoch 3/20 Iteration 486/3560 Training loss: 2.2815 0.5499 sec/batch
Epoch 3/20 Iteration 487/3560 Training loss: 2.2811 0.7284 sec/batch
Epoch 3/20 Iteration 488/3560 Training loss: 2.2807 0.6275 sec/batch
Epoch 3/20 Iteration 489/3560 Training loss: 2.2804 0.6949 sec/batch
Epoch 3/20 Iteration 490/3560 Training loss: 2.2802 0.6704 sec/batch
Epoch 3/20 Iteration 491/3560 Training loss: 2.2799 0.7057 sec/batch
Epoch 3/20 Iteration 492/3560 Training loss: 2.2797 0.5496 sec/batch
Epoch 3/20 Iteration 493/3560 Training loss: 2.2794 0.5565 sec/batch
Epoch 3/20 Iteration 494/3560 Training loss: 2.2792 0.5438 sec/batch
Epoch 3/20 Iteration 495/3560 Training loss: 2.2791 0.5376 sec/batch
Epoch 3/20 Iteration 496/3560 Training loss: 2.2788 0.5403 sec/batch
Epoch 3/20 Iteration 497/3560 Training loss: 2.2788 0.5356 sec/batch
Epoch 3/20 Iteration 498/3560 Training loss: 2.2784 0.5321 sec/batch
Epoch 3/20 Iteration 499/3560 Training loss: 2.2782 0.5388 sec/batch
Epoch 3/20 Iteration 500/3560 Training loss: 2.2779 0.5490 sec/batch
Epoch 3/20 Iteration 501/3560 Training loss: 2.2776 0.5504 sec/batch
Epoch 3/20 Iteration 502/3560 Training loss: 2.2775 0.5384 sec/batch
Epoch 3/20 Iteration 503/3560 Training loss: 2.2773 0.5456 sec/batch
Epoch 3/20 Iteration 504/3560 Training loss: 2.2771 0.5418 sec/batch
Epoch 3/20 Iteration 505/3560 Training loss: 2.2768 0.5451 sec/batch
Epoch 3/20 Iteration 506/3560 Training loss: 2.2764 0.5390 sec/batch
Epoch 3/20 Iteration 507/3560 Training loss: 2.2762 0.5331 sec/batch
Epoch 3/20 Iteration 508/3560 Training loss: 2.2762 0.5428 sec/batch
Epoch 3/20 Iteration 509/3560 Training loss: 2.2760 0.5295 sec/batch
Epoch 3/20 Iteration 510/3560 Training loss: 2.2759 0.5424 sec/batch
Epoch 3/20 Iteration 511/3560 Training loss: 2.2755 0.5438 sec/batch
Epoch 3/20 Iteration 512/3560 Training loss: 2.2753 0.5416 sec/batch
Epoch 3/20 Iteration 513/3560 Training loss: 2.2750 0.5469 sec/batch
Epoch 3/20 Iteration 514/3560 Training loss: 2.2746 0.5486 sec/batch
Epoch 3/20 Iteration 515/3560 Training loss: 2.2742 0.5437 sec/batch
Epoch 3/20 Iteration 516/3560 Training loss: 2.2741 0.5658 sec/batch
Epoch 3/20 Iteration 517/3560 Training loss: 2.2740 0.5358 sec/batch
Epoch 3/20 Iteration 518/3560 Training loss: 2.2736 0.5382 sec/batch
Epoch 3/20 Iteration 519/3560 Training loss: 2.2733 0.5467 sec/batch
Epoch 3/20 Iteration 520/3560 Training loss: 2.2730 0.5388 sec/batch
Epoch 3/20 Iteration 521/3560 Training loss: 2.2728 0.5400 sec/batch
Epoch 3/20 Iteration 522/3560 Training loss: 2.2726 0.5462 sec/batch
Epoch 3/20 Iteration 523/3560 Training loss: 2.2724 0.5419 sec/batch
Epoch 3/20 Iteration 524/3560 Training loss: 2.2723 0.5427 sec/batch
Epoch 3/20 Iteration 525/3560 Training loss: 2.2721 0.5613 sec/batch
Epoch 3/20 Iteration 526/3560 Training loss: 2.2718 0.5364 sec/batch
Epoch 3/20 Iteration 527/3560 Training loss: 2.2715 0.5429 sec/batch
Epoch 3/20 Iteration 528/3560 Training loss: 2.2714 0.5460 sec/batch
Epoch 3/20 Iteration 529/3560 Training loss: 2.2714 0.5987 sec/batch
Epoch 3/20 Iteration 530/3560 Training loss: 2.2713 0.5475 sec/batch
Epoch 3/20 Iteration 531/3560 Training loss: 2.2713 0.5453 sec/batch
Epoch 3/20 Iteration 532/3560 Training loss: 2.2711 0.5368 sec/batch
Epoch 3/20 Iteration 533/3560 Training loss: 2.2708 0.5381 sec/batch
Epoch 3/20 Iteration 534/3560 Training loss: 2.2705 0.5511 sec/batch
Epoch 4/20 Iteration 535/3560 Training loss: 2.2747 0.5338 sec/batch
Epoch 4/20 Iteration 536/3560 Training loss: 2.2345 0.5588 sec/batch
Epoch 4/20 Iteration 537/3560 Training loss: 2.2206 0.5353 sec/batch
Epoch 4/20 Iteration 538/3560 Training loss: 2.2185 0.5471 sec/batch
Epoch 4/20 Iteration 539/3560 Training loss: 2.2181 0.5436 sec/batch
Epoch 4/20 Iteration 540/3560 Training loss: 2.2157 0.5354 sec/batch
Epoch 4/20 Iteration 541/3560 Training loss: 2.2163 0.5373 sec/batch
Epoch 4/20 Iteration 542/3560 Training loss: 2.2182 0.5516 sec/batch
Epoch 4/20 Iteration 543/3560 Training loss: 2.2213 0.5302 sec/batch
Epoch 4/20 Iteration 544/3560 Training loss: 2.2212 0.5384 sec/batch
Epoch 4/20 Iteration 545/3560 Training loss: 2.2205 0.5506 sec/batch
Epoch 4/20 Iteration 546/3560 Training loss: 2.2190 0.5332 sec/batch
Epoch 4/20 Iteration 547/3560 Training loss: 2.2187 0.5356 sec/batch
Epoch 4/20 Iteration 548/3560 Training loss: 2.2202 0.5497 sec/batch
Epoch 4/20 Iteration 549/3560 Training loss: 2.2197 0.5447 sec/batch
Epoch 4/20 Iteration 550/3560 Training loss: 2.2186 0.5615 sec/batch
Epoch 4/20 Iteration 551/3560 Training loss: 2.2183 0.5410 sec/batch
Epoch 4/20 Iteration 552/3560 Training loss: 2.2201 0.5406 sec/batch
Epoch 4/20 Iteration 553/3560 Training loss: 2.2199 0.5420 sec/batch
Epoch 4/20 Iteration 554/3560 Training loss: 2.2191 0.5432 sec/batch
Epoch 4/20 Iteration 555/3560 Training loss: 2.2181 0.5345 sec/batch
Epoch 4/20 Iteration 556/3560 Training loss: 2.2191 0.5358 sec/batch
Epoch 4/20 Iteration 557/3560 Training loss: 2.2185 0.5281 sec/batch
Epoch 4/20 Iteration 558/3560 Training loss: 2.2175 0.5468 sec/batch
Epoch 4/20 Iteration 559/3560 Training loss: 2.2169 0.5441 sec/batch
Epoch 4/20 Iteration 560/3560 Training loss: 2.2161 0.5353 sec/batch
Epoch 4/20 Iteration 561/3560 Training loss: 2.2152 0.5398 sec/batch
Epoch 4/20 Iteration 562/3560 Training loss: 2.2150 0.5490 sec/batch
Epoch 4/20 Iteration 563/3560 Training loss: 2.2154 0.5325 sec/batch
Epoch 4/20 Iteration 564/3560 Training loss: 2.2158 0.5519 sec/batch
Epoch 4/20 Iteration 565/3560 Training loss: 2.2157 0.5434 sec/batch
Epoch 4/20 Iteration 566/3560 Training loss: 2.2149 0.5428 sec/batch
Epoch 4/20 Iteration 567/3560 Training loss: 2.2144 0.5298 sec/batch
Epoch 4/20 Iteration 568/3560 Training loss: 2.2147 0.5522 sec/batch
Epoch 4/20 Iteration 569/3560 Training loss: 2.2140 0.5364 sec/batch
Epoch 4/20 Iteration 570/3560 Training loss: 2.2139 0.5391 sec/batch
Epoch 4/20 Iteration 571/3560 Training loss: 2.2134 0.5343 sec/batch
Epoch 4/20 Iteration 572/3560 Training loss: 2.2123 0.5309 sec/batch
Epoch 4/20 Iteration 573/3560 Training loss: 2.2115 0.5511 sec/batch
Epoch 4/20 Iteration 574/3560 Training loss: 2.2108 0.5601 sec/batch
Epoch 4/20 Iteration 575/3560 Training loss: 2.2101 0.5372 sec/batch
Epoch 4/20 Iteration 576/3560 Training loss: 2.2097 0.5358 sec/batch
Epoch 4/20 Iteration 577/3560 Training loss: 2.2089 0.5341 sec/batch
Epoch 4/20 Iteration 578/3560 Training loss: 2.2080 0.5425 sec/batch
Epoch 4/20 Iteration 579/3560 Training loss: 2.2075 0.5421 sec/batch
Epoch 4/20 Iteration 580/3560 Training loss: 2.2060 0.5398 sec/batch
Epoch 4/20 Iteration 581/3560 Training loss: 2.2060 0.5279 sec/batch
Epoch 4/20 Iteration 582/3560 Training loss: 2.2052 0.5483 sec/batch
Epoch 4/20 Iteration 583/3560 Training loss: 2.2048 0.5427 sec/batch
Epoch 4/20 Iteration 584/3560 Training loss: 2.2050 0.5432 sec/batch
Epoch 4/20 Iteration 585/3560 Training loss: 2.2044 0.5275 sec/batch
Epoch 4/20 Iteration 586/3560 Training loss: 2.2045 0.5347 sec/batch
Epoch 4/20 Iteration 587/3560 Training loss: 2.2041 0.5489 sec/batch
Epoch 4/20 Iteration 588/3560 Training loss: 2.2036 0.5433 sec/batch
Epoch 4/20 Iteration 589/3560 Training loss: 2.2030 0.5364 sec/batch
Epoch 4/20 Iteration 590/3560 Training loss: 2.2027 0.5380 sec/batch
Epoch 4/20 Iteration 591/3560 Training loss: 2.2025 0.5374 sec/batch
Epoch 4/20 Iteration 592/3560 Training loss: 2.2020 0.5433 sec/batch
Epoch 4/20 Iteration 593/3560 Training loss: 2.2015 0.5525 sec/batch
Epoch 4/20 Iteration 594/3560 Training loss: 2.2017 0.5497 sec/batch
Epoch 4/20 Iteration 595/3560 Training loss: 2.2021 0.5516 sec/batch
Epoch 4/20 Iteration 596/3560 Training loss: 2.2028 0.5373 sec/batch
Epoch 4/20 Iteration 597/3560 Training loss: 2.2039 0.5460 sec/batch
Epoch 4/20 Iteration 598/3560 Training loss: 2.2045 0.5382 sec/batch
Epoch 4/20 Iteration 599/3560 Training loss: 2.2049 0.5439 sec/batch
Epoch 4/20 Iteration 600/3560 Training loss: 2.2054 0.5470 sec/batch
Epoch 4/20 Iteration 601/3560 Training loss: 2.2060 0.5439 sec/batch
Epoch 4/20 Iteration 602/3560 Training loss: 2.2059 0.5398 sec/batch
Epoch 4/20 Iteration 603/3560 Training loss: 2.2054 0.5517 sec/batch
Epoch 4/20 Iteration 604/3560 Training loss: 2.2051 0.5506 sec/batch
Epoch 4/20 Iteration 605/3560 Training loss: 2.2050 0.5352 sec/batch
Epoch 4/20 Iteration 606/3560 Training loss: 2.2047 0.5594 sec/batch
Epoch 4/20 Iteration 607/3560 Training loss: 2.2047 0.5467 sec/batch
Epoch 4/20 Iteration 608/3560 Training loss: 2.2043 0.5444 sec/batch
Epoch 4/20 Iteration 609/3560 Training loss: 2.2039 0.5434 sec/batch
Epoch 4/20 Iteration 610/3560 Training loss: 2.2040 0.5393 sec/batch
Epoch 4/20 Iteration 611/3560 Training loss: 2.2037 0.5465 sec/batch
Epoch 4/20 Iteration 612/3560 Training loss: 2.2036 0.5400 sec/batch
Epoch 4/20 Iteration 613/3560 Training loss: 2.2030 0.5519 sec/batch
Epoch 4/20 Iteration 614/3560 Training loss: 2.2027 0.5397 sec/batch
Epoch 4/20 Iteration 615/3560 Training loss: 2.2021 0.5488 sec/batch
Epoch 4/20 Iteration 616/3560 Training loss: 2.2019 0.5420 sec/batch
Epoch 4/20 Iteration 617/3560 Training loss: 2.2014 0.5421 sec/batch
Epoch 4/20 Iteration 618/3560 Training loss: 2.2009 0.5569 sec/batch
Epoch 4/20 Iteration 619/3560 Training loss: 2.2000 0.5379 sec/batch
Epoch 4/20 Iteration 620/3560 Training loss: 2.1995 0.5515 sec/batch
Epoch 4/20 Iteration 621/3560 Training loss: 2.1992 0.5370 sec/batch
Epoch 4/20 Iteration 622/3560 Training loss: 2.1986 0.5545 sec/batch
Epoch 4/20 Iteration 623/3560 Training loss: 2.1981 0.5441 sec/batch
Epoch 4/20 Iteration 624/3560 Training loss: 2.1980 0.5257 sec/batch
Epoch 4/20 Iteration 625/3560 Training loss: 2.1976 0.5344 sec/batch
Epoch 4/20 Iteration 626/3560 Training loss: 2.1972 0.5303 sec/batch
Epoch 4/20 Iteration 627/3560 Training loss: 2.1967 0.5371 sec/batch
Epoch 4/20 Iteration 628/3560 Training loss: 2.1963 0.5465 sec/batch
Epoch 4/20 Iteration 629/3560 Training loss: 2.1958 0.5278 sec/batch
Epoch 4/20 Iteration 630/3560 Training loss: 2.1954 0.5446 sec/batch
Epoch 4/20 Iteration 631/3560 Training loss: 2.1950 0.5534 sec/batch
Epoch 4/20 Iteration 632/3560 Training loss: 2.1945 0.5468 sec/batch
Epoch 4/20 Iteration 633/3560 Training loss: 2.1941 0.5338 sec/batch
Epoch 4/20 Iteration 634/3560 Training loss: 2.1936 0.5383 sec/batch
Epoch 4/20 Iteration 635/3560 Training loss: 2.1934 0.5505 sec/batch
Epoch 4/20 Iteration 636/3560 Training loss: 2.1932 0.5490 sec/batch
Epoch 4/20 Iteration 637/3560 Training loss: 2.1927 0.5642 sec/batch
Epoch 4/20 Iteration 638/3560 Training loss: 2.1924 0.5440 sec/batch
Epoch 4/20 Iteration 639/3560 Training loss: 2.1920 0.5524 sec/batch
Epoch 4/20 Iteration 640/3560 Training loss: 2.1917 0.5329 sec/batch
Epoch 4/20 Iteration 641/3560 Training loss: 2.1913 0.5390 sec/batch
Epoch 4/20 Iteration 642/3560 Training loss: 2.1911 0.5438 sec/batch
Epoch 4/20 Iteration 643/3560 Training loss: 2.1909 0.5378 sec/batch
Epoch 4/20 Iteration 644/3560 Training loss: 2.1905 0.5674 sec/batch
Epoch 4/20 Iteration 645/3560 Training loss: 2.1903 0.5467 sec/batch
Epoch 4/20 Iteration 646/3560 Training loss: 2.1900 0.5482 sec/batch
Epoch 4/20 Iteration 647/3560 Training loss: 2.1896 0.5424 sec/batch
Epoch 4/20 Iteration 648/3560 Training loss: 2.1892 0.5394 sec/batch
Epoch 4/20 Iteration 649/3560 Training loss: 2.1888 0.5366 sec/batch
Epoch 4/20 Iteration 650/3560 Training loss: 2.1881 0.5517 sec/batch
Epoch 4/20 Iteration 651/3560 Training loss: 2.1879 0.5483 sec/batch
Epoch 4/20 Iteration 652/3560 Training loss: 2.1876 0.5443 sec/batch
Epoch 4/20 Iteration 653/3560 Training loss: 2.1875 0.5421 sec/batch
Epoch 4/20 Iteration 654/3560 Training loss: 2.1872 0.5302 sec/batch
Epoch 4/20 Iteration 655/3560 Training loss: 2.1870 0.5581 sec/batch
Epoch 4/20 Iteration 656/3560 Training loss: 2.1867 0.5413 sec/batch
Epoch 4/20 Iteration 657/3560 Training loss: 2.1863 0.5421 sec/batch
Epoch 4/20 Iteration 658/3560 Training loss: 2.1862 0.5559 sec/batch
Epoch 4/20 Iteration 659/3560 Training loss: 2.1859 0.5367 sec/batch
Epoch 4/20 Iteration 660/3560 Training loss: 2.1854 0.5344 sec/batch
Epoch 4/20 Iteration 661/3560 Training loss: 2.1853 0.5402 sec/batch
Epoch 4/20 Iteration 662/3560 Training loss: 2.1851 0.5365 sec/batch
Epoch 4/20 Iteration 663/3560 Training loss: 2.1849 0.5329 sec/batch
Epoch 4/20 Iteration 664/3560 Training loss: 2.1847 0.5369 sec/batch
Epoch 4/20 Iteration 665/3560 Training loss: 2.1844 0.5414 sec/batch
Epoch 4/20 Iteration 666/3560 Training loss: 2.1838 0.5371 sec/batch
Epoch 4/20 Iteration 667/3560 Training loss: 2.1837 0.5352 sec/batch
Epoch 4/20 Iteration 668/3560 Training loss: 2.1835 0.5440 sec/batch
Epoch 4/20 Iteration 669/3560 Training loss: 2.1833 0.5488 sec/batch
Epoch 4/20 Iteration 670/3560 Training loss: 2.1831 0.5460 sec/batch
Epoch 4/20 Iteration 671/3560 Training loss: 2.1829 0.5489 sec/batch
Epoch 4/20 Iteration 672/3560 Training loss: 2.1827 0.5446 sec/batch
Epoch 4/20 Iteration 673/3560 Training loss: 2.1827 0.5453 sec/batch
Epoch 4/20 Iteration 674/3560 Training loss: 2.1823 0.5431 sec/batch
Epoch 4/20 Iteration 675/3560 Training loss: 2.1823 0.5391 sec/batch
Epoch 4/20 Iteration 676/3560 Training loss: 2.1820 0.5410 sec/batch
Epoch 4/20 Iteration 677/3560 Training loss: 2.1817 0.5349 sec/batch
Epoch 4/20 Iteration 678/3560 Training loss: 2.1815 0.5318 sec/batch
Epoch 4/20 Iteration 679/3560 Training loss: 2.1812 0.5335 sec/batch
Epoch 4/20 Iteration 680/3560 Training loss: 2.1811 0.5449 sec/batch
Epoch 4/20 Iteration 681/3560 Training loss: 2.1809 0.5501 sec/batch
Epoch 4/20 Iteration 682/3560 Training loss: 2.1808 0.6183 sec/batch
Epoch 4/20 Iteration 683/3560 Training loss: 2.1806 0.6682 sec/batch
Epoch 4/20 Iteration 684/3560 Training loss: 2.1803 0.6797 sec/batch
Epoch 4/20 Iteration 685/3560 Training loss: 2.1800 0.6619 sec/batch
Epoch 4/20 Iteration 686/3560 Training loss: 2.1800 0.6590 sec/batch
Epoch 4/20 Iteration 687/3560 Training loss: 2.1799 0.6677 sec/batch
Epoch 4/20 Iteration 688/3560 Training loss: 2.1797 0.6626 sec/batch
Epoch 4/20 Iteration 689/3560 Training loss: 2.1795 0.6561 sec/batch
Epoch 4/20 Iteration 690/3560 Training loss: 2.1793 0.7165 sec/batch
Epoch 4/20 Iteration 691/3560 Training loss: 2.1790 0.5545 sec/batch
Epoch 4/20 Iteration 692/3560 Training loss: 2.1788 0.6855 sec/batch
Epoch 4/20 Iteration 693/3560 Training loss: 2.1784 0.6682 sec/batch
Epoch 4/20 Iteration 694/3560 Training loss: 2.1784 0.5877 sec/batch
Epoch 4/20 Iteration 695/3560 Training loss: 2.1784 0.5438 sec/batch
Epoch 4/20 Iteration 696/3560 Training loss: 2.1780 0.5898 sec/batch
Epoch 4/20 Iteration 697/3560 Training loss: 2.1779 0.6159 sec/batch
Epoch 4/20 Iteration 698/3560 Training loss: 2.1776 0.6057 sec/batch
Epoch 4/20 Iteration 699/3560 Training loss: 2.1775 0.5988 sec/batch
Epoch 4/20 Iteration 700/3560 Training loss: 2.1772 0.5604 sec/batch
Epoch 4/20 Iteration 701/3560 Training loss: 2.1771 0.5930 sec/batch
Epoch 4/20 Iteration 702/3560 Training loss: 2.1771 0.5899 sec/batch
Epoch 4/20 Iteration 703/3560 Training loss: 2.1770 0.5685 sec/batch
Epoch 4/20 Iteration 704/3560 Training loss: 2.1768 0.5512 sec/batch
Epoch 4/20 Iteration 705/3560 Training loss: 2.1765 0.5432 sec/batch
Epoch 4/20 Iteration 706/3560 Training loss: 2.1763 0.5555 sec/batch
Epoch 4/20 Iteration 707/3560 Training loss: 2.1762 0.6077 sec/batch
Epoch 4/20 Iteration 708/3560 Training loss: 2.1761 0.5816 sec/batch
Epoch 4/20 Iteration 709/3560 Training loss: 2.1760 0.5517 sec/batch
Epoch 4/20 Iteration 710/3560 Training loss: 2.1759 0.6025 sec/batch
Epoch 4/20 Iteration 711/3560 Training loss: 2.1756 0.5574 sec/batch
Epoch 4/20 Iteration 712/3560 Training loss: 2.1753 0.5851 sec/batch
Epoch 5/20 Iteration 713/3560 Training loss: 2.1963 0.5507 sec/batch
Epoch 5/20 Iteration 714/3560 Training loss: 2.1520 0.7041 sec/batch
Epoch 5/20 Iteration 715/3560 Training loss: 2.1401 0.6797 sec/batch
Epoch 5/20 Iteration 716/3560 Training loss: 2.1323 0.6544 sec/batch
Epoch 5/20 Iteration 717/3560 Training loss: 2.1325 0.6474 sec/batch
Epoch 5/20 Iteration 718/3560 Training loss: 2.1301 0.5465 sec/batch
Epoch 5/20 Iteration 719/3560 Training loss: 2.1318 0.5416 sec/batch
Epoch 5/20 Iteration 720/3560 Training loss: 2.1317 0.5349 sec/batch
Epoch 5/20 Iteration 721/3560 Training loss: 2.1342 0.5470 sec/batch
Epoch 5/20 Iteration 722/3560 Training loss: 2.1341 0.5585 sec/batch
Epoch 5/20 Iteration 723/3560 Training loss: 2.1320 0.5756 sec/batch
Epoch 5/20 Iteration 724/3560 Training loss: 2.1301 0.6716 sec/batch
Epoch 5/20 Iteration 725/3560 Training loss: 2.1307 0.5888 sec/batch
Epoch 5/20 Iteration 726/3560 Training loss: 2.1328 0.6237 sec/batch
Epoch 5/20 Iteration 727/3560 Training loss: 2.1318 0.6503 sec/batch
Epoch 5/20 Iteration 728/3560 Training loss: 2.1306 0.6683 sec/batch
Epoch 5/20 Iteration 729/3560 Training loss: 2.1302 0.6498 sec/batch
Epoch 5/20 Iteration 730/3560 Training loss: 2.1323 0.7188 sec/batch
Epoch 5/20 Iteration 731/3560 Training loss: 2.1323 0.6541 sec/batch
Epoch 5/20 Iteration 732/3560 Training loss: 2.1317 0.6629 sec/batch
Epoch 5/20 Iteration 733/3560 Training loss: 2.1307 0.6286 sec/batch
Epoch 5/20 Iteration 734/3560 Training loss: 2.1318 0.5428 sec/batch
Epoch 5/20 Iteration 735/3560 Training loss: 2.1315 0.5399 sec/batch
Epoch 5/20 Iteration 736/3560 Training loss: 2.1310 0.5972 sec/batch
Epoch 5/20 Iteration 737/3560 Training loss: 2.1304 0.6708 sec/batch
Epoch 5/20 Iteration 738/3560 Training loss: 2.1300 0.6777 sec/batch
Epoch 5/20 Iteration 739/3560 Training loss: 2.1293 0.6852 sec/batch
Epoch 5/20 Iteration 740/3560 Training loss: 2.1293 0.6708 sec/batch
Epoch 5/20 Iteration 741/3560 Training loss: 2.1303 0.6564 sec/batch
Epoch 5/20 Iteration 742/3560 Training loss: 2.1305 0.6938 sec/batch
Epoch 5/20 Iteration 743/3560 Training loss: 2.1306 0.7044 sec/batch
Epoch 5/20 Iteration 744/3560 Training loss: 2.1297 0.5594 sec/batch
Epoch 5/20 Iteration 745/3560 Training loss: 2.1294 0.5373 sec/batch
Epoch 5/20 Iteration 746/3560 Training loss: 2.1302 0.5818 sec/batch
Epoch 5/20 Iteration 747/3560 Training loss: 2.1296 0.5408 sec/batch
Epoch 5/20 Iteration 748/3560 Training loss: 2.1293 0.5408 sec/batch
Epoch 5/20 Iteration 749/3560 Training loss: 2.1289 0.5366 sec/batch
Epoch 5/20 Iteration 750/3560 Training loss: 2.1275 0.5552 sec/batch
Epoch 5/20 Iteration 751/3560 Training loss: 2.1266 0.5343 sec/batch
Epoch 5/20 Iteration 752/3560 Training loss: 2.1260 0.5437 sec/batch
Epoch 5/20 Iteration 753/3560 Training loss: 2.1252 0.5671 sec/batch
Epoch 5/20 Iteration 754/3560 Training loss: 2.1247 0.6561 sec/batch
Epoch 5/20 Iteration 755/3560 Training loss: 2.1239 0.6587 sec/batch
Epoch 5/20 Iteration 756/3560 Training loss: 2.1230 0.6543 sec/batch
Epoch 5/20 Iteration 757/3560 Training loss: 2.1228 0.6680 sec/batch
Epoch 5/20 Iteration 758/3560 Training loss: 2.1213 0.6632 sec/batch
Epoch 5/20 Iteration 759/3560 Training loss: 2.1212 0.6919 sec/batch
Epoch 5/20 Iteration 760/3560 Training loss: 2.1206 0.6570 sec/batch
Epoch 5/20 Iteration 761/3560 Training loss: 2.1202 0.6473 sec/batch
Epoch 5/20 Iteration 762/3560 Training loss: 2.1207 0.6504 sec/batch
Epoch 5/20 Iteration 763/3560 Training loss: 2.1201 0.6759 sec/batch
Epoch 5/20 Iteration 764/3560 Training loss: 2.1203 0.6584 sec/batch
Epoch 5/20 Iteration 765/3560 Training loss: 2.1198 0.7075 sec/batch
Epoch 5/20 Iteration 766/3560 Training loss: 2.1195 0.7051 sec/batch
Epoch 5/20 Iteration 767/3560 Training loss: 2.1190 0.6233 sec/batch
Epoch 5/20 Iteration 768/3560 Training loss: 2.1190 0.5776 sec/batch
Epoch 5/20 Iteration 769/3560 Training loss: 2.1190 0.5132 sec/batch
Epoch 5/20 Iteration 770/3560 Training loss: 2.1187 0.5619 sec/batch
Epoch 5/20 Iteration 771/3560 Training loss: 2.1183 0.7029 sec/batch
Epoch 5/20 Iteration 772/3560 Training loss: 2.1186 0.6744 sec/batch
Epoch 5/20 Iteration 773/3560 Training loss: 2.1184 0.5128 sec/batch
Epoch 5/20 Iteration 774/3560 Training loss: 2.1187 0.6634 sec/batch
Epoch 5/20 Iteration 775/3560 Training loss: 2.1190 0.6877 sec/batch
Epoch 5/20 Iteration 776/3560 Training loss: 2.1188 0.6518 sec/batch
Epoch 5/20 Iteration 777/3560 Training loss: 2.1185 0.8216 sec/batch
Epoch 5/20 Iteration 778/3560 Training loss: 2.1186 0.8238 sec/batch
Epoch 5/20 Iteration 779/3560 Training loss: 2.1187 0.8502 sec/batch
Epoch 5/20 Iteration 780/3560 Training loss: 2.1179 0.8083 sec/batch
Epoch 5/20 Iteration 781/3560 Training loss: 2.1176 0.8422 sec/batch
Epoch 5/20 Iteration 782/3560 Training loss: 2.1174 0.6079 sec/batch
Epoch 5/20 Iteration 783/3560 Training loss: 2.1175 0.8440 sec/batch
Epoch 5/20 Iteration 784/3560 Training loss: 2.1174 0.7716 sec/batch
Epoch 5/20 Iteration 785/3560 Training loss: 2.1175 0.6995 sec/batch
Epoch 5/20 Iteration 786/3560 Training loss: 2.1171 0.6540 sec/batch
Epoch 5/20 Iteration 787/3560 Training loss: 2.1168 0.7104 sec/batch
Epoch 5/20 Iteration 788/3560 Training loss: 2.1171 0.6381 sec/batch
Epoch 5/20 Iteration 789/3560 Training loss: 2.1168 0.5722 sec/batch
Epoch 5/20 Iteration 790/3560 Training loss: 2.1167 0.6352 sec/batch
Epoch 5/20 Iteration 791/3560 Training loss: 2.1162 0.6422 sec/batch
Epoch 5/20 Iteration 792/3560 Training loss: 2.1159 0.7804 sec/batch
Epoch 5/20 Iteration 793/3560 Training loss: 2.1153 0.8337 sec/batch
Epoch 5/20 Iteration 794/3560 Training loss: 2.1153 0.7801 sec/batch
Epoch 5/20 Iteration 795/3560 Training loss: 2.1147 0.8827 sec/batch
Epoch 5/20 Iteration 796/3560 Training loss: 2.1144 0.8307 sec/batch
Epoch 5/20 Iteration 797/3560 Training loss: 2.1137 0.7307 sec/batch
Epoch 5/20 Iteration 798/3560 Training loss: 2.1132 0.6183 sec/batch
Epoch 5/20 Iteration 799/3560 Training loss: 2.1129 0.6526 sec/batch
Epoch 5/20 Iteration 800/3560 Training loss: 2.1126 0.8109 sec/batch
Epoch 5/20 Iteration 801/3560 Training loss: 2.1120 0.6676 sec/batch
Epoch 5/20 Iteration 802/3560 Training loss: 2.1119 0.5447 sec/batch
Epoch 5/20 Iteration 803/3560 Training loss: 2.1116 0.6747 sec/batch
Epoch 5/20 Iteration 804/3560 Training loss: 2.1114 0.7463 sec/batch
Epoch 5/20 Iteration 805/3560 Training loss: 2.1109 0.7445 sec/batch
Epoch 5/20 Iteration 806/3560 Training loss: 2.1104 0.7337 sec/batch
Epoch 5/20 Iteration 807/3560 Training loss: 2.1100 0.7692 sec/batch
Epoch 5/20 Iteration 808/3560 Training loss: 2.1097 0.7049 sec/batch
Epoch 5/20 Iteration 809/3560 Training loss: 2.1095 0.7533 sec/batch
Epoch 5/20 Iteration 810/3560 Training loss: 2.1092 0.8236 sec/batch
Epoch 5/20 Iteration 811/3560 Training loss: 2.1088 0.7419 sec/batch
Epoch 5/20 Iteration 812/3560 Training loss: 2.1084 0.6525 sec/batch
Epoch 5/20 Iteration 813/3560 Training loss: 2.1082 0.6738 sec/batch
Epoch 5/20 Iteration 814/3560 Training loss: 2.1081 0.5877 sec/batch
Epoch 5/20 Iteration 815/3560 Training loss: 2.1077 0.6560 sec/batch
Epoch 5/20 Iteration 816/3560 Training loss: 2.1075 0.7295 sec/batch
Epoch 5/20 Iteration 817/3560 Training loss: 2.1071 0.7907 sec/batch
Epoch 5/20 Iteration 818/3560 Training loss: 2.1070 0.7256 sec/batch
Epoch 5/20 Iteration 819/3560 Training loss: 2.1067 0.8022 sec/batch
Epoch 5/20 Iteration 820/3560 Training loss: 2.1065 0.8264 sec/batch
Epoch 5/20 Iteration 821/3560 Training loss: 2.1063 0.8176 sec/batch
Epoch 5/20 Iteration 822/3560 Training loss: 2.1060 0.7369 sec/batch
Epoch 5/20 Iteration 823/3560 Training loss: 2.1060 0.8584 sec/batch
Epoch 5/20 Iteration 824/3560 Training loss: 2.1057 0.8382 sec/batch
Epoch 5/20 Iteration 825/3560 Training loss: 2.1055 0.8535 sec/batch
Epoch 5/20 Iteration 826/3560 Training loss: 2.1052 0.8526 sec/batch
Epoch 5/20 Iteration 827/3560 Training loss: 2.1049 0.7349 sec/batch
Epoch 5/20 Iteration 828/3560 Training loss: 2.1044 0.7396 sec/batch
Epoch 5/20 Iteration 829/3560 Training loss: 2.1042 0.8221 sec/batch
Epoch 5/20 Iteration 830/3560 Training loss: 2.1039 0.7597 sec/batch
Epoch 5/20 Iteration 831/3560 Training loss: 2.1039 0.6665 sec/batch
Epoch 5/20 Iteration 832/3560 Training loss: 2.1037 0.5823 sec/batch
Epoch 5/20 Iteration 833/3560 Training loss: 2.1037 0.7218 sec/batch
Epoch 5/20 Iteration 834/3560 Training loss: 2.1034 0.8250 sec/batch
Epoch 5/20 Iteration 835/3560 Training loss: 2.1030 0.8699 sec/batch
Epoch 5/20 Iteration 836/3560 Training loss: 2.1031 0.6857 sec/batch
Epoch 5/20 Iteration 837/3560 Training loss: 2.1028 0.7816 sec/batch
Epoch 5/20 Iteration 838/3560 Training loss: 2.1024 0.7388 sec/batch
Epoch 5/20 Iteration 839/3560 Training loss: 2.1024 0.7359 sec/batch
Epoch 5/20 Iteration 840/3560 Training loss: 2.1023 0.6497 sec/batch
Epoch 5/20 Iteration 841/3560 Training loss: 2.1021 0.6771 sec/batch
Epoch 5/20 Iteration 842/3560 Training loss: 2.1020 0.7191 sec/batch
Epoch 5/20 Iteration 843/3560 Training loss: 2.1018 0.6819 sec/batch
Epoch 5/20 Iteration 844/3560 Training loss: 2.1013 0.6248 sec/batch
Epoch 5/20 Iteration 845/3560 Training loss: 2.1012 0.5041 sec/batch
Epoch 5/20 Iteration 846/3560 Training loss: 2.1012 0.6261 sec/batch
Epoch 5/20 Iteration 847/3560 Training loss: 2.1010 0.6891 sec/batch
Epoch 5/20 Iteration 848/3560 Training loss: 2.1009 0.7164 sec/batch
Epoch 5/20 Iteration 849/3560 Training loss: 2.1007 0.5222 sec/batch
Epoch 5/20 Iteration 850/3560 Training loss: 2.1006 0.5837 sec/batch
Epoch 5/20 Iteration 851/3560 Training loss: 2.1007 0.5783 sec/batch
Epoch 5/20 Iteration 852/3560 Training loss: 2.1004 0.6088 sec/batch
Epoch 5/20 Iteration 853/3560 Training loss: 2.1004 0.5002 sec/batch
Epoch 5/20 Iteration 854/3560 Training loss: 2.1001 0.5063 sec/batch
Epoch 5/20 Iteration 855/3560 Training loss: 2.1000 0.7317 sec/batch
Epoch 5/20 Iteration 856/3560 Training loss: 2.0998 0.4984 sec/batch
Epoch 5/20 Iteration 857/3560 Training loss: 2.0995 0.4717 sec/batch
Epoch 5/20 Iteration 858/3560 Training loss: 2.0994 0.6590 sec/batch
Epoch 5/20 Iteration 859/3560 Training loss: 2.0993 0.4755 sec/batch
Epoch 5/20 Iteration 860/3560 Training loss: 2.0993 0.4693 sec/batch
Epoch 5/20 Iteration 861/3560 Training loss: 2.0991 0.6712 sec/batch
Epoch 5/20 Iteration 862/3560 Training loss: 2.0988 0.7028 sec/batch
Epoch 5/20 Iteration 863/3560 Training loss: 2.0986 0.5631 sec/batch
Epoch 5/20 Iteration 864/3560 Training loss: 2.0986 0.7241 sec/batch
Epoch 5/20 Iteration 865/3560 Training loss: 2.0985 0.6981 sec/batch
Epoch 5/20 Iteration 866/3560 Training loss: 2.0984 0.8094 sec/batch
Epoch 5/20 Iteration 867/3560 Training loss: 2.0981 0.8366 sec/batch
Epoch 5/20 Iteration 868/3560 Training loss: 2.0979 0.8190 sec/batch
Epoch 5/20 Iteration 869/3560 Training loss: 2.0977 0.6194 sec/batch
Epoch 5/20 Iteration 870/3560 Training loss: 2.0975 0.5412 sec/batch
Epoch 5/20 Iteration 871/3560 Training loss: 2.0973 0.5124 sec/batch
Epoch 5/20 Iteration 872/3560 Training loss: 2.0973 0.4931 sec/batch
Epoch 5/20 Iteration 873/3560 Training loss: 2.0973 0.6018 sec/batch
Epoch 5/20 Iteration 874/3560 Training loss: 2.0971 0.4927 sec/batch
Epoch 5/20 Iteration 875/3560 Training loss: 2.0970 0.4740 sec/batch
Epoch 5/20 Iteration 876/3560 Training loss: 2.0969 0.5840 sec/batch
Epoch 5/20 Iteration 877/3560 Training loss: 2.0967 0.6230 sec/batch
Epoch 5/20 Iteration 878/3560 Training loss: 2.0965 0.6092 sec/batch
Epoch 5/20 Iteration 879/3560 Training loss: 2.0964 0.6155 sec/batch
Epoch 5/20 Iteration 880/3560 Training loss: 2.0965 0.7263 sec/batch
Epoch 5/20 Iteration 881/3560 Training loss: 2.0963 0.6119 sec/batch
Epoch 5/20 Iteration 882/3560 Training loss: 2.0961 0.5937 sec/batch
Epoch 5/20 Iteration 883/3560 Training loss: 2.0959 0.4978 sec/batch
Epoch 5/20 Iteration 884/3560 Training loss: 2.0958 0.4983 sec/batch
Epoch 5/20 Iteration 885/3560 Training loss: 2.0958 0.6884 sec/batch
Epoch 5/20 Iteration 886/3560 Training loss: 2.0957 0.4864 sec/batch
Epoch 5/20 Iteration 887/3560 Training loss: 2.0956 0.4758 sec/batch
Epoch 5/20 Iteration 888/3560 Training loss: 2.0955 0.4663 sec/batch
Epoch 5/20 Iteration 889/3560 Training loss: 2.0953 0.4768 sec/batch
Epoch 5/20 Iteration 890/3560 Training loss: 2.0951 0.4809 sec/batch
Epoch 6/20 Iteration 891/3560 Training loss: 2.1301 0.4690 sec/batch
Epoch 6/20 Iteration 892/3560 Training loss: 2.0899 0.4830 sec/batch
Epoch 6/20 Iteration 893/3560 Training loss: 2.0767 0.4728 sec/batch
Epoch 6/20 Iteration 894/3560 Training loss: 2.0697 0.4788 sec/batch
Epoch 6/20 Iteration 895/3560 Training loss: 2.0657 0.4696 sec/batch
Epoch 6/20 Iteration 896/3560 Training loss: 2.0608 0.4757 sec/batch
Epoch 6/20 Iteration 897/3560 Training loss: 2.0618 0.4691 sec/batch
Epoch 6/20 Iteration 898/3560 Training loss: 2.0616 0.4728 sec/batch
Epoch 6/20 Iteration 899/3560 Training loss: 2.0644 0.4718 sec/batch
Epoch 6/20 Iteration 900/3560 Training loss: 2.0645 0.4942 sec/batch
Epoch 6/20 Iteration 901/3560 Training loss: 2.0623 0.5131 sec/batch
Epoch 6/20 Iteration 902/3560 Training loss: 2.0610 0.5197 sec/batch
Epoch 6/20 Iteration 903/3560 Training loss: 2.0614 0.5297 sec/batch
Epoch 6/20 Iteration 904/3560 Training loss: 2.0631 0.6411 sec/batch
Epoch 6/20 Iteration 905/3560 Training loss: 2.0620 0.7261 sec/batch
Epoch 6/20 Iteration 906/3560 Training loss: 2.0607 0.5011 sec/batch
Epoch 6/20 Iteration 907/3560 Training loss: 2.0616 0.5138 sec/batch
Epoch 6/20 Iteration 908/3560 Training loss: 2.0642 0.6043 sec/batch
Epoch 6/20 Iteration 909/3560 Training loss: 2.0638 0.6580 sec/batch
Epoch 6/20 Iteration 910/3560 Training loss: 2.0637 0.4851 sec/batch
Epoch 6/20 Iteration 911/3560 Training loss: 2.0631 0.5844 sec/batch
Epoch 6/20 Iteration 912/3560 Training loss: 2.0640 0.4883 sec/batch
Epoch 6/20 Iteration 913/3560 Training loss: 2.0634 0.5029 sec/batch
Epoch 6/20 Iteration 914/3560 Training loss: 2.0629 0.4849 sec/batch
Epoch 6/20 Iteration 915/3560 Training loss: 2.0625 0.6117 sec/batch
Epoch 6/20 Iteration 916/3560 Training loss: 2.0617 0.5243 sec/batch
Epoch 6/20 Iteration 917/3560 Training loss: 2.0609 0.5095 sec/batch
Epoch 6/20 Iteration 918/3560 Training loss: 2.0612 0.5694 sec/batch
Epoch 6/20 Iteration 919/3560 Training loss: 2.0622 0.7942 sec/batch
Epoch 6/20 Iteration 920/3560 Training loss: 2.0624 0.7410 sec/batch
Epoch 6/20 Iteration 921/3560 Training loss: 2.0625 0.6716 sec/batch
Epoch 6/20 Iteration 922/3560 Training loss: 2.0618 0.6638 sec/batch
Epoch 6/20 Iteration 923/3560 Training loss: 2.0616 0.6563 sec/batch
Epoch 6/20 Iteration 924/3560 Training loss: 2.0623 0.7036 sec/batch
Epoch 6/20 Iteration 925/3560 Training loss: 2.0619 0.7871 sec/batch
Epoch 6/20 Iteration 926/3560 Training loss: 2.0618 0.7708 sec/batch
Epoch 6/20 Iteration 927/3560 Training loss: 2.0615 0.7195 sec/batch
Epoch 6/20 Iteration 928/3560 Training loss: 2.0602 0.7582 sec/batch
Epoch 6/20 Iteration 929/3560 Training loss: 2.0592 0.7652 sec/batch
Epoch 6/20 Iteration 930/3560 Training loss: 2.0585 0.7372 sec/batch
Epoch 6/20 Iteration 931/3560 Training loss: 2.0578 0.7497 sec/batch
Epoch 6/20 Iteration 932/3560 Training loss: 2.0574 0.8447 sec/batch
Epoch 6/20 Iteration 933/3560 Training loss: 2.0567 0.8349 sec/batch
Epoch 6/20 Iteration 934/3560 Training loss: 2.0556 0.7579 sec/batch
Epoch 6/20 Iteration 935/3560 Training loss: 2.0555 0.8342 sec/batch
Epoch 6/20 Iteration 936/3560 Training loss: 2.0543 0.7261 sec/batch
Epoch 6/20 Iteration 937/3560 Training loss: 2.0542 0.5769 sec/batch
Epoch 6/20 Iteration 938/3560 Training loss: 2.0537 0.4812 sec/batch
Epoch 6/20 Iteration 939/3560 Training loss: 2.0534 0.5753 sec/batch
Epoch 6/20 Iteration 940/3560 Training loss: 2.0538 0.5219 sec/batch
Epoch 6/20 Iteration 941/3560 Training loss: 2.0531 0.5026 sec/batch
Epoch 6/20 Iteration 942/3560 Training loss: 2.0536 0.4763 sec/batch
Epoch 6/20 Iteration 943/3560 Training loss: 2.0532 0.4933 sec/batch
Epoch 6/20 Iteration 944/3560 Training loss: 2.0530 0.5202 sec/batch
Epoch 6/20 Iteration 945/3560 Training loss: 2.0525 0.5047 sec/batch
Epoch 6/20 Iteration 946/3560 Training loss: 2.0524 0.6547 sec/batch
Epoch 6/20 Iteration 947/3560 Training loss: 2.0524 0.5060 sec/batch
Epoch 6/20 Iteration 948/3560 Training loss: 2.0520 0.6058 sec/batch
Epoch 6/20 Iteration 949/3560 Training loss: 2.0515 0.4759 sec/batch
Epoch 6/20 Iteration 950/3560 Training loss: 2.0517 0.4822 sec/batch
Epoch 6/20 Iteration 951/3560 Training loss: 2.0515 0.4816 sec/batch
Epoch 6/20 Iteration 952/3560 Training loss: 2.0519 0.5679 sec/batch
Epoch 6/20 Iteration 953/3560 Training loss: 2.0522 0.5720 sec/batch
Epoch 6/20 Iteration 954/3560 Training loss: 2.0523 0.5479 sec/batch
Epoch 6/20 Iteration 955/3560 Training loss: 2.0519 0.6508 sec/batch
Epoch 6/20 Iteration 956/3560 Training loss: 2.0522 0.6269 sec/batch
Epoch 6/20 Iteration 957/3560 Training loss: 2.0523 0.4817 sec/batch
Epoch 6/20 Iteration 958/3560 Training loss: 2.0518 0.4731 sec/batch
Epoch 6/20 Iteration 959/3560 Training loss: 2.0515 0.4738 sec/batch
Epoch 6/20 Iteration 960/3560 Training loss: 2.0513 0.4801 sec/batch
Epoch 6/20 Iteration 961/3560 Training loss: 2.0514 0.4797 sec/batch
Epoch 6/20 Iteration 962/3560 Training loss: 2.0514 0.4732 sec/batch
Epoch 6/20 Iteration 963/3560 Training loss: 2.0515 0.4845 sec/batch
Epoch 6/20 Iteration 964/3560 Training loss: 2.0511 0.4737 sec/batch
Epoch 6/20 Iteration 965/3560 Training loss: 2.0510 0.4810 sec/batch
Epoch 6/20 Iteration 966/3560 Training loss: 2.0513 0.4783 sec/batch
Epoch 6/20 Iteration 967/3560 Training loss: 2.0511 0.4787 sec/batch
Epoch 6/20 Iteration 968/3560 Training loss: 2.0511 0.5109 sec/batch
Epoch 6/20 Iteration 969/3560 Training loss: 2.0505 0.4782 sec/batch
Epoch 6/20 Iteration 970/3560 Training loss: 2.0502 0.4765 sec/batch
Epoch 6/20 Iteration 971/3560 Training loss: 2.0496 0.6619 sec/batch
Epoch 6/20 Iteration 972/3560 Training loss: 2.0495 0.6368 sec/batch
Epoch 6/20 Iteration 973/3560 Training loss: 2.0490 0.5636 sec/batch
Epoch 6/20 Iteration 974/3560 Training loss: 2.0487 0.5350 sec/batch
Epoch 6/20 Iteration 975/3560 Training loss: 2.0480 0.4957 sec/batch
Epoch 6/20 Iteration 976/3560 Training loss: 2.0476 0.5243 sec/batch
Epoch 6/20 Iteration 977/3560 Training loss: 2.0474 0.6178 sec/batch
Epoch 6/20 Iteration 978/3560 Training loss: 2.0469 0.5884 sec/batch
Epoch 6/20 Iteration 979/3560 Training loss: 2.0465 0.4663 sec/batch
Epoch 6/20 Iteration 980/3560 Training loss: 2.0465 0.5910 sec/batch
Epoch 6/20 Iteration 981/3560 Training loss: 2.0462 0.6191 sec/batch
Epoch 6/20 Iteration 982/3560 Training loss: 2.0461 0.5808 sec/batch
Epoch 6/20 Iteration 983/3560 Training loss: 2.0456 0.6125 sec/batch
Epoch 6/20 Iteration 984/3560 Training loss: 2.0452 0.4705 sec/batch
Epoch 6/20 Iteration 985/3560 Training loss: 2.0449 0.4807 sec/batch
Epoch 6/20 Iteration 986/3560 Training loss: 2.0447 0.4782 sec/batch
Epoch 6/20 Iteration 987/3560 Training loss: 2.0444 0.5382 sec/batch
Epoch 6/20 Iteration 988/3560 Training loss: 2.0441 0.4794 sec/batch
Epoch 6/20 Iteration 989/3560 Training loss: 2.0437 0.4799 sec/batch
Epoch 6/20 Iteration 990/3560 Training loss: 2.0432 0.4866 sec/batch
Epoch 6/20 Iteration 991/3560 Training loss: 2.0431 0.4808 sec/batch
Epoch 6/20 Iteration 992/3560 Training loss: 2.0430 0.4761 sec/batch
Epoch 6/20 Iteration 993/3560 Training loss: 2.0426 0.4725 sec/batch
Epoch 6/20 Iteration 994/3560 Training loss: 2.0423 0.4697 sec/batch
Epoch 6/20 Iteration 995/3560 Training loss: 2.0420 0.4669 sec/batch
Epoch 6/20 Iteration 996/3560 Training loss: 2.0418 0.4766 sec/batch
Epoch 6/20 Iteration 997/3560 Training loss: 2.0416 0.4715 sec/batch
Epoch 6/20 Iteration 998/3560 Training loss: 2.0415 0.4667 sec/batch
Epoch 6/20 Iteration 999/3560 Training loss: 2.0413 0.4718 sec/batch
Epoch 6/20 Iteration 1000/3560 Training loss: 2.0411 0.4833 sec/batch
Epoch 6/20 Iteration 1001/3560 Training loss: 2.0410 0.6071 sec/batch
Epoch 6/20 Iteration 1002/3560 Training loss: 2.0408 0.4788 sec/batch
Epoch 6/20 Iteration 1003/3560 Training loss: 2.0406 0.4746 sec/batch
Epoch 6/20 Iteration 1004/3560 Training loss: 2.0404 0.4710 sec/batch
Epoch 6/20 Iteration 1005/3560 Training loss: 2.0401 0.4727 sec/batch
Epoch 6/20 Iteration 1006/3560 Training loss: 2.0397 0.5682 sec/batch
Epoch 6/20 Iteration 1007/3560 Training loss: 2.0395 0.4790 sec/batch
Epoch 6/20 Iteration 1008/3560 Training loss: 2.0393 0.4792 sec/batch
Epoch 6/20 Iteration 1009/3560 Training loss: 2.0393 0.4749 sec/batch
Epoch 6/20 Iteration 1010/3560 Training loss: 2.0391 0.5318 sec/batch
Epoch 6/20 Iteration 1011/3560 Training loss: 2.0391 0.5289 sec/batch
Epoch 6/20 Iteration 1012/3560 Training loss: 2.0387 0.5167 sec/batch
Epoch 6/20 Iteration 1013/3560 Training loss: 2.0385 0.5559 sec/batch
Epoch 6/20 Iteration 1014/3560 Training loss: 2.0385 0.5094 sec/batch
Epoch 6/20 Iteration 1015/3560 Training loss: 2.0383 0.5310 sec/batch
Epoch 6/20 Iteration 1016/3560 Training loss: 2.0379 0.4713 sec/batch
Epoch 6/20 Iteration 1017/3560 Training loss: 2.0379 0.5120 sec/batch
Epoch 6/20 Iteration 1018/3560 Training loss: 2.0377 0.5058 sec/batch
Epoch 6/20 Iteration 1019/3560 Training loss: 2.0376 0.5367 sec/batch
Epoch 6/20 Iteration 1020/3560 Training loss: 2.0375 0.4703 sec/batch
Epoch 6/20 Iteration 1021/3560 Training loss: 2.0372 0.5329 sec/batch
Epoch 6/20 Iteration 1022/3560 Training loss: 2.0369 0.5585 sec/batch
Epoch 6/20 Iteration 1023/3560 Training loss: 2.0368 0.4819 sec/batch
Epoch 6/20 Iteration 1024/3560 Training loss: 2.0367 0.6130 sec/batch
Epoch 6/20 Iteration 1025/3560 Training loss: 2.0365 0.5535 sec/batch
Epoch 6/20 Iteration 1026/3560 Training loss: 2.0364 0.5943 sec/batch
Epoch 6/20 Iteration 1027/3560 Training loss: 2.0364 0.5338 sec/batch
Epoch 6/20 Iteration 1028/3560 Training loss: 2.0364 0.5266 sec/batch
Epoch 6/20 Iteration 1029/3560 Training loss: 2.0365 0.5029 sec/batch
Epoch 6/20 Iteration 1030/3560 Training loss: 2.0362 0.5068 sec/batch
Epoch 6/20 Iteration 1031/3560 Training loss: 2.0363 0.4851 sec/batch
Epoch 6/20 Iteration 1032/3560 Training loss: 2.0361 0.4857 sec/batch
Epoch 6/20 Iteration 1033/3560 Training loss: 2.0361 0.4691 sec/batch
Epoch 6/20 Iteration 1034/3560 Training loss: 2.0359 0.4673 sec/batch
Epoch 6/20 Iteration 1035/3560 Training loss: 2.0357 0.4780 sec/batch
Epoch 6/20 Iteration 1036/3560 Training loss: 2.0357 0.4681 sec/batch
Epoch 6/20 Iteration 1037/3560 Training loss: 2.0357 0.4719 sec/batch
Epoch 6/20 Iteration 1038/3560 Training loss: 2.0357 0.4700 sec/batch
Epoch 6/20 Iteration 1039/3560 Training loss: 2.0355 0.4715 sec/batch
Epoch 6/20 Iteration 1040/3560 Training loss: 2.0353 0.4736 sec/batch
Epoch 6/20 Iteration 1041/3560 Training loss: 2.0350 0.4743 sec/batch
Epoch 6/20 Iteration 1042/3560 Training loss: 2.0351 0.4679 sec/batch
Epoch 6/20 Iteration 1043/3560 Training loss: 2.0351 0.4767 sec/batch
Epoch 6/20 Iteration 1044/3560 Training loss: 2.0350 0.4837 sec/batch
Epoch 6/20 Iteration 1045/3560 Training loss: 2.0348 0.5809 sec/batch
Epoch 6/20 Iteration 1046/3560 Training loss: 2.0346 0.4867 sec/batch
Epoch 6/20 Iteration 1047/3560 Training loss: 2.0346 0.4738 sec/batch
Epoch 6/20 Iteration 1048/3560 Training loss: 2.0345 0.4774 sec/batch
Epoch 6/20 Iteration 1049/3560 Training loss: 2.0342 0.5036 sec/batch
Epoch 6/20 Iteration 1050/3560 Training loss: 2.0343 0.5506 sec/batch
Epoch 6/20 Iteration 1051/3560 Training loss: 2.0342 0.4663 sec/batch
Epoch 6/20 Iteration 1052/3560 Training loss: 2.0341 0.5518 sec/batch
Epoch 6/20 Iteration 1053/3560 Training loss: 2.0340 0.4944 sec/batch
Epoch 6/20 Iteration 1054/3560 Training loss: 2.0339 0.5768 sec/batch
Epoch 6/20 Iteration 1055/3560 Training loss: 2.0338 0.5630 sec/batch
Epoch 6/20 Iteration 1056/3560 Training loss: 2.0337 0.5217 sec/batch
Epoch 6/20 Iteration 1057/3560 Training loss: 2.0336 0.5885 sec/batch
Epoch 6/20 Iteration 1058/3560 Training loss: 2.0337 0.5437 sec/batch
Epoch 6/20 Iteration 1059/3560 Training loss: 2.0336 0.4743 sec/batch
Epoch 6/20 Iteration 1060/3560 Training loss: 2.0335 0.5000 sec/batch
Epoch 6/20 Iteration 1061/3560 Training loss: 2.0333 0.5837 sec/batch
Epoch 6/20 Iteration 1062/3560 Training loss: 2.0331 0.4738 sec/batch
Epoch 6/20 Iteration 1063/3560 Training loss: 2.0331 0.4737 sec/batch
Epoch 6/20 Iteration 1064/3560 Training loss: 2.0331 0.5732 sec/batch
Epoch 6/20 Iteration 1065/3560 Training loss: 2.0331 0.4822 sec/batch
Epoch 6/20 Iteration 1066/3560 Training loss: 2.0330 0.4752 sec/batch
Epoch 6/20 Iteration 1067/3560 Training loss: 2.0328 0.4690 sec/batch
Epoch 6/20 Iteration 1068/3560 Training loss: 2.0327 0.5532 sec/batch
Epoch 7/20 Iteration 1069/3560 Training loss: 2.0778 0.4714 sec/batch
Epoch 7/20 Iteration 1070/3560 Training loss: 2.0358 0.4771 sec/batch
Epoch 7/20 Iteration 1071/3560 Training loss: 2.0254 0.4741 sec/batch
Epoch 7/20 Iteration 1072/3560 Training loss: 2.0187 0.4794 sec/batch
Epoch 7/20 Iteration 1073/3560 Training loss: 2.0158 0.4693 sec/batch
Epoch 7/20 Iteration 1074/3560 Training loss: 2.0094 0.5142 sec/batch
Epoch 7/20 Iteration 1075/3560 Training loss: 2.0092 0.4800 sec/batch
Epoch 7/20 Iteration 1076/3560 Training loss: 2.0086 0.5755 sec/batch
Epoch 7/20 Iteration 1077/3560 Training loss: 2.0115 0.4701 sec/batch
Epoch 7/20 Iteration 1078/3560 Training loss: 2.0114 0.4885 sec/batch
Epoch 7/20 Iteration 1079/3560 Training loss: 2.0088 0.5682 sec/batch
Epoch 7/20 Iteration 1080/3560 Training loss: 2.0069 0.5096 sec/batch
Epoch 7/20 Iteration 1081/3560 Training loss: 2.0069 0.5704 sec/batch
Epoch 7/20 Iteration 1082/3560 Training loss: 2.0085 0.5015 sec/batch
Epoch 7/20 Iteration 1083/3560 Training loss: 2.0076 0.5961 sec/batch
Epoch 7/20 Iteration 1084/3560 Training loss: 2.0065 0.5921 sec/batch
Epoch 7/20 Iteration 1085/3560 Training loss: 2.0063 0.5075 sec/batch
Epoch 7/20 Iteration 1086/3560 Training loss: 2.0083 0.5758 sec/batch
Epoch 7/20 Iteration 1087/3560 Training loss: 2.0081 0.5627 sec/batch
Epoch 7/20 Iteration 1088/3560 Training loss: 2.0077 0.4885 sec/batch
Epoch 7/20 Iteration 1089/3560 Training loss: 2.0064 0.4695 sec/batch
Epoch 7/20 Iteration 1090/3560 Training loss: 2.0074 0.4842 sec/batch
Epoch 7/20 Iteration 1091/3560 Training loss: 2.0068 0.4797 sec/batch
Epoch 7/20 Iteration 1092/3560 Training loss: 2.0063 0.4834 sec/batch
Epoch 7/20 Iteration 1093/3560 Training loss: 2.0063 0.4700 sec/batch
Epoch 7/20 Iteration 1094/3560 Training loss: 2.0055 0.4653 sec/batch
Epoch 7/20 Iteration 1095/3560 Training loss: 2.0047 0.4730 sec/batch
Epoch 7/20 Iteration 1096/3560 Training loss: 2.0052 0.4759 sec/batch
Epoch 7/20 Iteration 1097/3560 Training loss: 2.0063 0.4823 sec/batch
Epoch 7/20 Iteration 1098/3560 Training loss: 2.0065 0.5669 sec/batch
Epoch 7/20 Iteration 1099/3560 Training loss: 2.0063 0.5886 sec/batch
Epoch 7/20 Iteration 1100/3560 Training loss: 2.0059 0.6158 sec/batch
Epoch 7/20 Iteration 1101/3560 Training loss: 2.0059 0.5001 sec/batch
Epoch 7/20 Iteration 1102/3560 Training loss: 2.0067 0.4888 sec/batch
Epoch 7/20 Iteration 1103/3560 Training loss: 2.0061 0.5800 sec/batch
Epoch 7/20 Iteration 1104/3560 Training loss: 2.0060 0.4747 sec/batch
Epoch 7/20 Iteration 1105/3560 Training loss: 2.0057 0.4733 sec/batch
Epoch 7/20 Iteration 1106/3560 Training loss: 2.0046 0.4751 sec/batch
Epoch 7/20 Iteration 1107/3560 Training loss: 2.0036 0.4700 sec/batch
Epoch 7/20 Iteration 1108/3560 Training loss: 2.0030 0.4703 sec/batch
Epoch 7/20 Iteration 1109/3560 Training loss: 2.0026 0.4768 sec/batch
Epoch 7/20 Iteration 1110/3560 Training loss: 2.0024 0.4795 sec/batch
Epoch 7/20 Iteration 1111/3560 Training loss: 2.0018 0.4796 sec/batch
Epoch 7/20 Iteration 1112/3560 Training loss: 2.0009 0.4720 sec/batch
Epoch 7/20 Iteration 1113/3560 Training loss: 2.0007 0.4811 sec/batch
Epoch 7/20 Iteration 1114/3560 Training loss: 1.9994 0.4873 sec/batch
Epoch 7/20 Iteration 1115/3560 Training loss: 1.9993 0.4748 sec/batch
Epoch 7/20 Iteration 1116/3560 Training loss: 1.9988 0.5790 sec/batch
Epoch 7/20 Iteration 1117/3560 Training loss: 1.9985 0.5294 sec/batch
Epoch 7/20 Iteration 1118/3560 Training loss: 1.9991 0.6580 sec/batch
Epoch 7/20 Iteration 1119/3560 Training loss: 1.9984 0.7794 sec/batch
Epoch 7/20 Iteration 1120/3560 Training loss: 1.9990 0.7055 sec/batch
Epoch 7/20 Iteration 1121/3560 Training loss: 1.9987 0.6982 sec/batch
Epoch 7/20 Iteration 1122/3560 Training loss: 1.9984 0.5658 sec/batch
Epoch 7/20 Iteration 1123/3560 Training loss: 1.9979 0.4815 sec/batch
Epoch 7/20 Iteration 1124/3560 Training loss: 1.9979 0.4980 sec/batch
Epoch 7/20 Iteration 1125/3560 Training loss: 1.9981 0.5635 sec/batch
Epoch 7/20 Iteration 1126/3560 Training loss: 1.9978 0.4905 sec/batch
Epoch 7/20 Iteration 1127/3560 Training loss: 1.9975 0.5784 sec/batch
Epoch 7/20 Iteration 1128/3560 Training loss: 1.9978 0.4880 sec/batch
Epoch 7/20 Iteration 1129/3560 Training loss: 1.9978 0.5542 sec/batch
Epoch 7/20 Iteration 1130/3560 Training loss: 1.9984 0.4893 sec/batch
Epoch 7/20 Iteration 1131/3560 Training loss: 1.9987 0.5051 sec/batch
Epoch 7/20 Iteration 1132/3560 Training loss: 1.9988 0.5149 sec/batch
Epoch 7/20 Iteration 1133/3560 Training loss: 1.9986 0.4891 sec/batch
Epoch 7/20 Iteration 1134/3560 Training loss: 1.9988 0.4937 sec/batch
Epoch 7/20 Iteration 1135/3560 Training loss: 1.9989 0.4690 sec/batch
Epoch 7/20 Iteration 1136/3560 Training loss: 1.9983 0.4727 sec/batch
Epoch 7/20 Iteration 1137/3560 Training loss: 1.9981 0.5012 sec/batch
Epoch 7/20 Iteration 1138/3560 Training loss: 1.9980 0.4814 sec/batch
Epoch 7/20 Iteration 1139/3560 Training loss: 1.9982 0.4731 sec/batch
Epoch 7/20 Iteration 1140/3560 Training loss: 1.9983 0.4814 sec/batch
Epoch 7/20 Iteration 1141/3560 Training loss: 1.9985 0.5525 sec/batch
Epoch 7/20 Iteration 1142/3560 Training loss: 1.9981 0.5260 sec/batch
Epoch 7/20 Iteration 1143/3560 Training loss: 1.9980 0.4823 sec/batch
Epoch 7/20 Iteration 1144/3560 Training loss: 1.9982 0.4693 sec/batch
Epoch 7/20 Iteration 1145/3560 Training loss: 1.9980 0.4772 sec/batch
Epoch 7/20 Iteration 1146/3560 Training loss: 1.9981 0.5589 sec/batch
Epoch 7/20 Iteration 1147/3560 Training loss: 1.9976 0.5688 sec/batch
Epoch 7/20 Iteration 1148/3560 Training loss: 1.9975 0.5634 sec/batch
Epoch 7/20 Iteration 1149/3560 Training loss: 1.9970 0.5113 sec/batch
Epoch 7/20 Iteration 1150/3560 Training loss: 1.9971 0.6236 sec/batch
Epoch 7/20 Iteration 1151/3560 Training loss: 1.9966 0.5070 sec/batch
Epoch 7/20 Iteration 1152/3560 Training loss: 1.9964 0.5143 sec/batch
Epoch 7/20 Iteration 1153/3560 Training loss: 1.9960 0.4766 sec/batch
Epoch 7/20 Iteration 1154/3560 Training loss: 1.9955 0.5478 sec/batch
Epoch 7/20 Iteration 1155/3560 Training loss: 1.9953 0.5318 sec/batch
Epoch 7/20 Iteration 1156/3560 Training loss: 1.9950 0.4837 sec/batch
Epoch 7/20 Iteration 1157/3560 Training loss: 1.9945 0.5306 sec/batch
Epoch 7/20 Iteration 1158/3560 Training loss: 1.9945 0.4771 sec/batch
Epoch 7/20 Iteration 1159/3560 Training loss: 1.9942 0.5668 sec/batch
Epoch 7/20 Iteration 1160/3560 Training loss: 1.9940 0.4741 sec/batch
Epoch 7/20 Iteration 1161/3560 Training loss: 1.9934 0.4729 sec/batch
Epoch 7/20 Iteration 1162/3560 Training loss: 1.9931 0.4762 sec/batch
Epoch 7/20 Iteration 1163/3560 Training loss: 1.9927 0.4676 sec/batch
Epoch 7/20 Iteration 1164/3560 Training loss: 1.9926 0.5079 sec/batch
Epoch 7/20 Iteration 1165/3560 Training loss: 1.9923 0.5199 sec/batch
Epoch 7/20 Iteration 1166/3560 Training loss: 1.9920 0.4780 sec/batch
Epoch 7/20 Iteration 1167/3560 Training loss: 1.9916 0.5021 sec/batch
Epoch 7/20 Iteration 1168/3560 Training loss: 1.9912 0.5453 sec/batch
Epoch 7/20 Iteration 1169/3560 Training loss: 1.9911 0.4665 sec/batch
Epoch 7/20 Iteration 1170/3560 Training loss: 1.9910 0.4878 sec/batch
Epoch 7/20 Iteration 1171/3560 Training loss: 1.9907 0.4791 sec/batch
Epoch 7/20 Iteration 1172/3560 Training loss: 1.9904 0.5489 sec/batch
Epoch 7/20 Iteration 1173/3560 Training loss: 1.9901 0.4715 sec/batch
Epoch 7/20 Iteration 1174/3560 Training loss: 1.9898 0.4776 sec/batch
Epoch 7/20 Iteration 1175/3560 Training loss: 1.9896 0.5387 sec/batch
Epoch 7/20 Iteration 1176/3560 Training loss: 1.9895 0.4864 sec/batch
Epoch 7/20 Iteration 1177/3560 Training loss: 1.9894 0.4685 sec/batch
Epoch 7/20 Iteration 1178/3560 Training loss: 1.9892 0.4850 sec/batch
Epoch 7/20 Iteration 1179/3560 Training loss: 1.9890 0.4771 sec/batch
Epoch 7/20 Iteration 1180/3560 Training loss: 1.9889 0.4798 sec/batch
Epoch 7/20 Iteration 1181/3560 Training loss: 1.9887 0.4662 sec/batch
Epoch 7/20 Iteration 1182/3560 Training loss: 1.9884 0.4810 sec/batch
Epoch 7/20 Iteration 1183/3560 Training loss: 1.9881 0.4679 sec/batch
Epoch 7/20 Iteration 1184/3560 Training loss: 1.9876 0.4780 sec/batch
Epoch 7/20 Iteration 1185/3560 Training loss: 1.9874 0.4747 sec/batch
Epoch 7/20 Iteration 1186/3560 Training loss: 1.9873 0.4753 sec/batch
Epoch 7/20 Iteration 1187/3560 Training loss: 1.9873 0.4686 sec/batch
Epoch 7/20 Iteration 1188/3560 Training loss: 1.9871 0.4720 sec/batch
Epoch 7/20 Iteration 1189/3560 Training loss: 1.9870 0.4777 sec/batch
Epoch 7/20 Iteration 1190/3560 Training loss: 1.9867 0.4761 sec/batch
Epoch 7/20 Iteration 1191/3560 Training loss: 1.9865 0.5213 sec/batch
Epoch 7/20 Iteration 1192/3560 Training loss: 1.9866 0.4918 sec/batch
Epoch 7/20 Iteration 1193/3560 Training loss: 1.9864 0.5315 sec/batch
Epoch 7/20 Iteration 1194/3560 Training loss: 1.9860 0.5271 sec/batch
Epoch 7/20 Iteration 1195/3560 Training loss: 1.9861 0.5144 sec/batch
Epoch 7/20 Iteration 1196/3560 Training loss: 1.9860 0.4693 sec/batch
Epoch 7/20 Iteration 1197/3560 Training loss: 1.9859 0.6087 sec/batch
Epoch 7/20 Iteration 1198/3560 Training loss: 1.9858 0.4925 sec/batch
Epoch 7/20 Iteration 1199/3560 Training loss: 1.9855 0.4761 sec/batch
Epoch 7/20 Iteration 1200/3560 Training loss: 1.9852 0.4756 sec/batch
Epoch 7/20 Iteration 1201/3560 Training loss: 1.9852 0.4720 sec/batch
Epoch 7/20 Iteration 1202/3560 Training loss: 1.9852 0.6031 sec/batch
Epoch 7/20 Iteration 1203/3560 Training loss: 1.9850 0.5602 sec/batch
Epoch 7/20 Iteration 1204/3560 Training loss: 1.9850 0.4944 sec/batch
Epoch 7/20 Iteration 1205/3560 Training loss: 1.9849 0.4844 sec/batch
Epoch 7/20 Iteration 1206/3560 Training loss: 1.9849 0.4749 sec/batch
Epoch 7/20 Iteration 1207/3560 Training loss: 1.9850 0.4715 sec/batch
Epoch 7/20 Iteration 1208/3560 Training loss: 1.9847 0.4763 sec/batch
Epoch 7/20 Iteration 1209/3560 Training loss: 1.9848 0.4748 sec/batch
Epoch 7/20 Iteration 1210/3560 Training loss: 1.9846 0.4712 sec/batch
Epoch 7/20 Iteration 1211/3560 Training loss: 1.9845 0.5555 sec/batch
Epoch 7/20 Iteration 1212/3560 Training loss: 1.9845 0.4790 sec/batch
Epoch 7/20 Iteration 1213/3560 Training loss: 1.9843 0.5896 sec/batch
Epoch 7/20 Iteration 1214/3560 Training loss: 1.9843 0.4759 sec/batch
Epoch 7/20 Iteration 1215/3560 Training loss: 1.9843 0.4875 sec/batch
Epoch 7/20 Iteration 1216/3560 Training loss: 1.9843 0.4763 sec/batch
Epoch 7/20 Iteration 1217/3560 Training loss: 1.9842 0.4666 sec/batch
Epoch 7/20 Iteration 1218/3560 Training loss: 1.9840 0.4765 sec/batch
Epoch 7/20 Iteration 1219/3560 Training loss: 1.9838 0.4707 sec/batch
Epoch 7/20 Iteration 1220/3560 Training loss: 1.9839 0.4734 sec/batch
Epoch 7/20 Iteration 1221/3560 Training loss: 1.9838 0.5783 sec/batch
Epoch 7/20 Iteration 1222/3560 Training loss: 1.9837 0.4764 sec/batch
Epoch 7/20 Iteration 1223/3560 Training loss: 1.9835 0.4808 sec/batch
Epoch 7/20 Iteration 1224/3560 Training loss: 1.9834 0.5330 sec/batch
Epoch 7/20 Iteration 1225/3560 Training loss: 1.9832 0.4759 sec/batch
Epoch 7/20 Iteration 1226/3560 Training loss: 1.9831 0.5200 sec/batch
Epoch 7/20 Iteration 1227/3560 Training loss: 1.9828 0.5927 sec/batch
Epoch 7/20 Iteration 1228/3560 Training loss: 1.9829 0.5758 sec/batch
Epoch 7/20 Iteration 1229/3560 Training loss: 1.9829 0.6433 sec/batch
Epoch 7/20 Iteration 1230/3560 Training loss: 1.9827 0.5295 sec/batch
Epoch 7/20 Iteration 1231/3560 Training loss: 1.9827 0.5366 sec/batch
Epoch 7/20 Iteration 1232/3560 Training loss: 1.9826 0.4782 sec/batch
Epoch 7/20 Iteration 1233/3560 Training loss: 1.9825 0.4780 sec/batch
Epoch 7/20 Iteration 1234/3560 Training loss: 1.9824 0.4760 sec/batch
Epoch 7/20 Iteration 1235/3560 Training loss: 1.9824 0.4649 sec/batch
Epoch 7/20 Iteration 1236/3560 Training loss: 1.9826 0.4838 sec/batch
Epoch 7/20 Iteration 1237/3560 Training loss: 1.9825 0.4656 sec/batch
Epoch 7/20 Iteration 1238/3560 Training loss: 1.9823 0.4805 sec/batch
Epoch 7/20 Iteration 1239/3560 Training loss: 1.9822 0.5283 sec/batch
Epoch 7/20 Iteration 1240/3560 Training loss: 1.9820 0.5378 sec/batch
Epoch 7/20 Iteration 1241/3560 Training loss: 1.9820 0.4843 sec/batch
Epoch 7/20 Iteration 1242/3560 Training loss: 1.9821 0.4649 sec/batch
Epoch 7/20 Iteration 1243/3560 Training loss: 1.9820 0.5718 sec/batch
Epoch 7/20 Iteration 1244/3560 Training loss: 1.9819 0.4726 sec/batch
Epoch 7/20 Iteration 1245/3560 Training loss: 1.9817 0.5061 sec/batch
Epoch 7/20 Iteration 1246/3560 Training loss: 1.9817 0.4938 sec/batch
Epoch 8/20 Iteration 1247/3560 Training loss: 2.0205 0.5418 sec/batch
Epoch 8/20 Iteration 1248/3560 Training loss: 1.9837 0.4914 sec/batch
Epoch 8/20 Iteration 1249/3560 Training loss: 1.9722 0.5243 sec/batch
Epoch 8/20 Iteration 1250/3560 Training loss: 1.9668 0.5573 sec/batch
Epoch 8/20 Iteration 1251/3560 Training loss: 1.9653 0.6264 sec/batch
Epoch 8/20 Iteration 1252/3560 Training loss: 1.9601 0.5145 sec/batch
Epoch 8/20 Iteration 1253/3560 Training loss: 1.9606 0.6358 sec/batch
Epoch 8/20 Iteration 1254/3560 Training loss: 1.9606 0.6496 sec/batch
Epoch 8/20 Iteration 1255/3560 Training loss: 1.9633 0.5028 sec/batch
Epoch 8/20 Iteration 1256/3560 Training loss: 1.9633 0.4748 sec/batch
Epoch 8/20 Iteration 1257/3560 Training loss: 1.9604 0.6465 sec/batch
Epoch 8/20 Iteration 1258/3560 Training loss: 1.9586 0.6867 sec/batch
Epoch 8/20 Iteration 1259/3560 Training loss: 1.9590 0.7608 sec/batch
Epoch 8/20 Iteration 1260/3560 Training loss: 1.9610 0.7797 sec/batch
Epoch 8/20 Iteration 1261/3560 Training loss: 1.9598 0.8638 sec/batch
Epoch 8/20 Iteration 1262/3560 Training loss: 1.9581 0.7473 sec/batch
Epoch 8/20 Iteration 1263/3560 Training loss: 1.9577 0.6795 sec/batch
Epoch 8/20 Iteration 1264/3560 Training loss: 1.9602 0.6495 sec/batch
Epoch 8/20 Iteration 1265/3560 Training loss: 1.9600 0.5757 sec/batch
Epoch 8/20 Iteration 1266/3560 Training loss: 1.9604 0.5485 sec/batch
Epoch 8/20 Iteration 1267/3560 Training loss: 1.9594 0.5405 sec/batch
Epoch 8/20 Iteration 1268/3560 Training loss: 1.9601 0.5758 sec/batch
Epoch 8/20 Iteration 1269/3560 Training loss: 1.9595 0.5806 sec/batch
Epoch 8/20 Iteration 1270/3560 Training loss: 1.9593 0.6409 sec/batch
Epoch 8/20 Iteration 1271/3560 Training loss: 1.9592 0.6141 sec/batch
Epoch 8/20 Iteration 1272/3560 Training loss: 1.9581 0.6383 sec/batch
Epoch 8/20 Iteration 1273/3560 Training loss: 1.9574 0.5939 sec/batch
Epoch 8/20 Iteration 1274/3560 Training loss: 1.9576 0.6259 sec/batch
Epoch 8/20 Iteration 1275/3560 Training loss: 1.9584 0.6325 sec/batch
Epoch 8/20 Iteration 1276/3560 Training loss: 1.9584 0.6314 sec/batch
Epoch 8/20 Iteration 1277/3560 Training loss: 1.9584 0.6065 sec/batch
Epoch 8/20 Iteration 1278/3560 Training loss: 1.9577 0.5517 sec/batch
Epoch 8/20 Iteration 1279/3560 Training loss: 1.9577 0.5645 sec/batch
Epoch 8/20 Iteration 1280/3560 Training loss: 1.9586 0.5649 sec/batch
Epoch 8/20 Iteration 1281/3560 Training loss: 1.9582 0.5671 sec/batch
Epoch 8/20 Iteration 1282/3560 Training loss: 1.9580 0.5582 sec/batch
Epoch 8/20 Iteration 1283/3560 Training loss: 1.9575 0.5423 sec/batch
Epoch 8/20 Iteration 1284/3560 Training loss: 1.9564 0.5422 sec/batch
Epoch 8/20 Iteration 1285/3560 Training loss: 1.9554 0.5651 sec/batch
Epoch 8/20 Iteration 1286/3560 Training loss: 1.9546 0.5670 sec/batch
Epoch 8/20 Iteration 1287/3560 Training loss: 1.9541 0.5620 sec/batch
Epoch 8/20 Iteration 1288/3560 Training loss: 1.9540 0.5728 sec/batch
Epoch 8/20 Iteration 1289/3560 Training loss: 1.9534 0.5779 sec/batch
Epoch 8/20 Iteration 1290/3560 Training loss: 1.9524 0.6267 sec/batch
Epoch 8/20 Iteration 1291/3560 Training loss: 1.9527 0.6592 sec/batch
Epoch 8/20 Iteration 1292/3560 Training loss: 1.9513 0.5914 sec/batch
Epoch 8/20 Iteration 1293/3560 Training loss: 1.9514 0.5878 sec/batch
Epoch 8/20 Iteration 1294/3560 Training loss: 1.9508 0.7145 sec/batch
Epoch 8/20 Iteration 1295/3560 Training loss: 1.9508 0.5730 sec/batch
Epoch 8/20 Iteration 1296/3560 Training loss: 1.9515 0.5973 sec/batch
Epoch 8/20 Iteration 1297/3560 Training loss: 1.9509 0.6667 sec/batch
Epoch 8/20 Iteration 1298/3560 Training loss: 1.9515 0.5855 sec/batch
Epoch 8/20 Iteration 1299/3560 Training loss: 1.9511 0.6452 sec/batch
Epoch 8/20 Iteration 1300/3560 Training loss: 1.9508 0.6605 sec/batch
Epoch 8/20 Iteration 1301/3560 Training loss: 1.9504 0.6347 sec/batch
Epoch 8/20 Iteration 1302/3560 Training loss: 1.9506 0.6251 sec/batch
Epoch 8/20 Iteration 1303/3560 Training loss: 1.9507 0.6010 sec/batch
Epoch 8/20 Iteration 1304/3560 Training loss: 1.9506 0.6123 sec/batch
Epoch 8/20 Iteration 1305/3560 Training loss: 1.9501 0.5680 sec/batch
Epoch 8/20 Iteration 1306/3560 Training loss: 1.9505 0.5630 sec/batch
Epoch 8/20 Iteration 1307/3560 Training loss: 1.9504 0.5678 sec/batch
Epoch 8/20 Iteration 1308/3560 Training loss: 1.9509 0.5661 sec/batch
Epoch 8/20 Iteration 1309/3560 Training loss: 1.9511 0.5801 sec/batch
Epoch 8/20 Iteration 1310/3560 Training loss: 1.9512 0.5596 sec/batch
Epoch 8/20 Iteration 1311/3560 Training loss: 1.9509 0.5625 sec/batch
Epoch 8/20 Iteration 1312/3560 Training loss: 1.9513 0.5896 sec/batch
Epoch 8/20 Iteration 1313/3560 Training loss: 1.9514 0.5753 sec/batch
Epoch 8/20 Iteration 1314/3560 Training loss: 1.9507 0.5615 sec/batch
Epoch 8/20 Iteration 1315/3560 Training loss: 1.9505 0.5701 sec/batch
Epoch 8/20 Iteration 1316/3560 Training loss: 1.9503 0.5456 sec/batch
Epoch 8/20 Iteration 1317/3560 Training loss: 1.9507 0.5556 sec/batch
Epoch 8/20 Iteration 1318/3560 Training loss: 1.9508 0.5733 sec/batch
Epoch 8/20 Iteration 1319/3560 Training loss: 1.9510 0.6123 sec/batch
Epoch 8/20 Iteration 1320/3560 Training loss: 1.9507 0.5831 sec/batch
Epoch 8/20 Iteration 1321/3560 Training loss: 1.9506 0.5703 sec/batch
Epoch 8/20 Iteration 1322/3560 Training loss: 1.9509 0.5647 sec/batch
Epoch 8/20 Iteration 1323/3560 Training loss: 1.9507 0.5812 sec/batch
Epoch 8/20 Iteration 1324/3560 Training loss: 1.9508 0.5708 sec/batch
Epoch 8/20 Iteration 1325/3560 Training loss: 1.9503 0.5897 sec/batch
Epoch 8/20 Iteration 1326/3560 Training loss: 1.9502 0.7378 sec/batch
Epoch 8/20 Iteration 1327/3560 Training loss: 1.9498 0.6647 sec/batch
Epoch 8/20 Iteration 1328/3560 Training loss: 1.9498 0.6447 sec/batch
Epoch 8/20 Iteration 1329/3560 Training loss: 1.9493 0.6327 sec/batch
Epoch 8/20 Iteration 1330/3560 Training loss: 1.9491 0.5767 sec/batch
Epoch 8/20 Iteration 1331/3560 Training loss: 1.9487 0.5732 sec/batch
Epoch 8/20 Iteration 1332/3560 Training loss: 1.9484 0.5870 sec/batch
Epoch 8/20 Iteration 1333/3560 Training loss: 1.9483 0.7338 sec/batch
Epoch 8/20 Iteration 1334/3560 Training loss: 1.9480 0.6211 sec/batch
Epoch 8/20 Iteration 1335/3560 Training loss: 1.9474 0.6270 sec/batch
Epoch 8/20 Iteration 1336/3560 Training loss: 1.9474 0.6826 sec/batch
Epoch 8/20 Iteration 1337/3560 Training loss: 1.9471 0.5977 sec/batch
Epoch 8/20 Iteration 1338/3560 Training loss: 1.9469 0.5819 sec/batch
Epoch 8/20 Iteration 1339/3560 Training loss: 1.9465 0.5657 sec/batch
Epoch 8/20 Iteration 1340/3560 Training loss: 1.9460 0.6548 sec/batch
Epoch 8/20 Iteration 1341/3560 Training loss: 1.9458 0.7232 sec/batch
Epoch 8/20 Iteration 1342/3560 Training loss: 1.9456 0.5965 sec/batch
Epoch 8/20 Iteration 1343/3560 Training loss: 1.9455 0.6548 sec/batch
Epoch 8/20 Iteration 1344/3560 Training loss: 1.9451 0.8250 sec/batch
Epoch 8/20 Iteration 1345/3560 Training loss: 1.9446 0.7135 sec/batch
Epoch 8/20 Iteration 1346/3560 Training loss: 1.9441 0.7523 sec/batch
Epoch 8/20 Iteration 1347/3560 Training loss: 1.9442 0.7531 sec/batch
Epoch 8/20 Iteration 1348/3560 Training loss: 1.9442 0.7474 sec/batch
Epoch 8/20 Iteration 1349/3560 Training loss: 1.9438 0.7389 sec/batch
Epoch 8/20 Iteration 1350/3560 Training loss: 1.9436 0.6311 sec/batch
Epoch 8/20 Iteration 1351/3560 Training loss: 1.9433 0.5685 sec/batch
Epoch 8/20 Iteration 1352/3560 Training loss: 1.9431 0.5616 sec/batch
Epoch 8/20 Iteration 1353/3560 Training loss: 1.9429 0.5454 sec/batch
Epoch 8/20 Iteration 1354/3560 Training loss: 1.9428 0.5774 sec/batch
Epoch 8/20 Iteration 1355/3560 Training loss: 1.9429 0.5889 sec/batch
Epoch 8/20 Iteration 1356/3560 Training loss: 1.9428 0.5826 sec/batch
Epoch 8/20 Iteration 1357/3560 Training loss: 1.9427 0.6285 sec/batch
Epoch 8/20 Iteration 1358/3560 Training loss: 1.9426 0.7569 sec/batch
Epoch 8/20 Iteration 1359/3560 Training loss: 1.9423 0.8134 sec/batch
Epoch 8/20 Iteration 1360/3560 Training loss: 1.9421 0.6205 sec/batch
Epoch 8/20 Iteration 1361/3560 Training loss: 1.9419 0.6466 sec/batch
Epoch 8/20 Iteration 1362/3560 Training loss: 1.9414 0.6655 sec/batch
Epoch 8/20 Iteration 1363/3560 Training loss: 1.9413 0.6417 sec/batch
Epoch 8/20 Iteration 1364/3560 Training loss: 1.9411 0.5978 sec/batch
Epoch 8/20 Iteration 1365/3560 Training loss: 1.9411 0.6164 sec/batch
Epoch 8/20 Iteration 1366/3560 Training loss: 1.9410 0.5447 sec/batch
Epoch 8/20 Iteration 1367/3560 Training loss: 1.9409 0.5700 sec/batch
Epoch 8/20 Iteration 1368/3560 Training loss: 1.9406 0.5595 sec/batch
Epoch 8/20 Iteration 1369/3560 Training loss: 1.9403 0.5799 sec/batch
Epoch 8/20 Iteration 1370/3560 Training loss: 1.9404 0.5545 sec/batch
Epoch 8/20 Iteration 1371/3560 Training loss: 1.9402 0.5508 sec/batch
Epoch 8/20 Iteration 1372/3560 Training loss: 1.9399 0.5761 sec/batch
Epoch 8/20 Iteration 1373/3560 Training loss: 1.9400 0.5712 sec/batch
Epoch 8/20 Iteration 1374/3560 Training loss: 1.9399 0.5655 sec/batch
Epoch 8/20 Iteration 1375/3560 Training loss: 1.9398 0.5570 sec/batch
Epoch 8/20 Iteration 1376/3560 Training loss: 1.9398 0.5726 sec/batch
Epoch 8/20 Iteration 1377/3560 Training loss: 1.9395 0.6305 sec/batch
Epoch 8/20 Iteration 1378/3560 Training loss: 1.9393 0.6600 sec/batch
Epoch 8/20 Iteration 1379/3560 Training loss: 1.9393 0.5998 sec/batch
Epoch 8/20 Iteration 1380/3560 Training loss: 1.9392 0.6071 sec/batch
Epoch 8/20 Iteration 1381/3560 Training loss: 1.9391 0.5634 sec/batch
Epoch 8/20 Iteration 1382/3560 Training loss: 1.9391 0.5821 sec/batch
Epoch 8/20 Iteration 1383/3560 Training loss: 1.9391 0.5980 sec/batch
Epoch 8/20 Iteration 1384/3560 Training loss: 1.9391 0.6606 sec/batch
Epoch 8/20 Iteration 1385/3560 Training loss: 1.9393 0.6136 sec/batch
Epoch 8/20 Iteration 1386/3560 Training loss: 1.9390 0.6303 sec/batch
Epoch 8/20 Iteration 1387/3560 Training loss: 1.9391 0.6008 sec/batch
Epoch 8/20 Iteration 1388/3560 Training loss: 1.9390 0.5833 sec/batch
Epoch 8/20 Iteration 1389/3560 Training loss: 1.9390 0.6006 sec/batch
Epoch 8/20 Iteration 1390/3560 Training loss: 1.9390 0.5646 sec/batch
Epoch 8/20 Iteration 1391/3560 Training loss: 1.9388 0.5884 sec/batch
Epoch 8/20 Iteration 1392/3560 Training loss: 1.9388 0.5743 sec/batch
Epoch 8/20 Iteration 1393/3560 Training loss: 1.9387 0.5606 sec/batch
Epoch 8/20 Iteration 1394/3560 Training loss: 1.9389 0.5720 sec/batch
Epoch 8/20 Iteration 1395/3560 Training loss: 1.9388 0.5598 sec/batch
Epoch 8/20 Iteration 1396/3560 Training loss: 1.9387 0.5692 sec/batch
Epoch 8/20 Iteration 1397/3560 Training loss: 1.9384 0.5624 sec/batch
Epoch 8/20 Iteration 1398/3560 Training loss: 1.9385 0.5714 sec/batch
Epoch 8/20 Iteration 1399/3560 Training loss: 1.9384 0.5495 sec/batch
Epoch 8/20 Iteration 1400/3560 Training loss: 1.9384 0.5668 sec/batch
Epoch 8/20 Iteration 1401/3560 Training loss: 1.9383 0.5836 sec/batch
Epoch 8/20 Iteration 1402/3560 Training loss: 1.9382 0.5969 sec/batch
Epoch 8/20 Iteration 1403/3560 Training loss: 1.9382 0.5835 sec/batch
Epoch 8/20 Iteration 1404/3560 Training loss: 1.9381 0.6008 sec/batch
Epoch 8/20 Iteration 1405/3560 Training loss: 1.9377 0.5921 sec/batch
Epoch 8/20 Iteration 1406/3560 Training loss: 1.9378 0.6018 sec/batch
Epoch 8/20 Iteration 1407/3560 Training loss: 1.9378 0.5993 sec/batch
Epoch 8/20 Iteration 1408/3560 Training loss: 1.9377 0.5697 sec/batch
Epoch 8/20 Iteration 1409/3560 Training loss: 1.9376 0.5848 sec/batch
Epoch 8/20 Iteration 1410/3560 Training loss: 1.9376 0.5960 sec/batch
Epoch 8/20 Iteration 1411/3560 Training loss: 1.9375 0.6019 sec/batch
Epoch 8/20 Iteration 1412/3560 Training loss: 1.9374 0.6202 sec/batch
Epoch 8/20 Iteration 1413/3560 Training loss: 1.9374 0.5599 sec/batch
Epoch 8/20 Iteration 1414/3560 Training loss: 1.9377 0.5680 sec/batch
Epoch 8/20 Iteration 1415/3560 Training loss: 1.9376 0.5611 sec/batch
Epoch 8/20 Iteration 1416/3560 Training loss: 1.9375 0.5629 sec/batch
Epoch 8/20 Iteration 1417/3560 Training loss: 1.9373 0.5535 sec/batch
Epoch 8/20 Iteration 1418/3560 Training loss: 1.9372 0.5659 sec/batch
Epoch 8/20 Iteration 1419/3560 Training loss: 1.9373 0.5658 sec/batch
Epoch 8/20 Iteration 1420/3560 Training loss: 1.9374 0.6106 sec/batch
Epoch 8/20 Iteration 1421/3560 Training loss: 1.9373 0.6492 sec/batch
Epoch 8/20 Iteration 1422/3560 Training loss: 1.9371 0.6853 sec/batch
Epoch 8/20 Iteration 1423/3560 Training loss: 1.9369 0.6178 sec/batch
Epoch 8/20 Iteration 1424/3560 Training loss: 1.9369 0.6178 sec/batch
Epoch 9/20 Iteration 1425/3560 Training loss: 1.9867 0.5664 sec/batch
Epoch 9/20 Iteration 1426/3560 Training loss: 1.9522 0.5990 sec/batch
Epoch 9/20 Iteration 1427/3560 Training loss: 1.9380 0.6184 sec/batch
Epoch 9/20 Iteration 1428/3560 Training loss: 1.9286 0.5901 sec/batch
Epoch 9/20 Iteration 1429/3560 Training loss: 1.9248 0.5662 sec/batch
Epoch 9/20 Iteration 1430/3560 Training loss: 1.9187 0.5692 sec/batch
Epoch 9/20 Iteration 1431/3560 Training loss: 1.9194 0.5594 sec/batch
Epoch 9/20 Iteration 1432/3560 Training loss: 1.9200 0.5760 sec/batch
Epoch 9/20 Iteration 1433/3560 Training loss: 1.9231 0.5566 sec/batch
Epoch 9/20 Iteration 1434/3560 Training loss: 1.9225 0.5662 sec/batch
Epoch 9/20 Iteration 1435/3560 Training loss: 1.9195 0.5683 sec/batch
Epoch 9/20 Iteration 1436/3560 Training loss: 1.9170 0.5723 sec/batch
Epoch 9/20 Iteration 1437/3560 Training loss: 1.9169 0.5580 sec/batch
Epoch 9/20 Iteration 1438/3560 Training loss: 1.9195 0.5616 sec/batch
Epoch 9/20 Iteration 1439/3560 Training loss: 1.9185 0.5796 sec/batch
Epoch 9/20 Iteration 1440/3560 Training loss: 1.9172 0.6164 sec/batch
Epoch 9/20 Iteration 1441/3560 Training loss: 1.9168 0.6649 sec/batch
Epoch 9/20 Iteration 1442/3560 Training loss: 1.9189 0.5749 sec/batch
Epoch 9/20 Iteration 1443/3560 Training loss: 1.9187 0.5568 sec/batch
Epoch 9/20 Iteration 1444/3560 Training loss: 1.9192 0.5557 sec/batch
Epoch 9/20 Iteration 1445/3560 Training loss: 1.9186 0.5553 sec/batch
Epoch 9/20 Iteration 1446/3560 Training loss: 1.9191 0.5520 sec/batch
Epoch 9/20 Iteration 1447/3560 Training loss: 1.9184 0.5704 sec/batch
Epoch 9/20 Iteration 1448/3560 Training loss: 1.9181 0.5981 sec/batch
Epoch 9/20 Iteration 1449/3560 Training loss: 1.9176 0.6084 sec/batch
Epoch 9/20 Iteration 1450/3560 Training loss: 1.9166 0.5824 sec/batch
Epoch 9/20 Iteration 1451/3560 Training loss: 1.9158 0.5837 sec/batch
Epoch 9/20 Iteration 1452/3560 Training loss: 1.9162 0.5919 sec/batch
Epoch 9/20 Iteration 1453/3560 Training loss: 1.9171 0.5842 sec/batch
Epoch 9/20 Iteration 1454/3560 Training loss: 1.9171 0.6063 sec/batch
Epoch 9/20 Iteration 1455/3560 Training loss: 1.9168 0.6047 sec/batch
Epoch 9/20 Iteration 1456/3560 Training loss: 1.9162 0.6027 sec/batch
Epoch 9/20 Iteration 1457/3560 Training loss: 1.9163 0.6162 sec/batch
Epoch 9/20 Iteration 1458/3560 Training loss: 1.9172 0.8007 sec/batch
Epoch 9/20 Iteration 1459/3560 Training loss: 1.9167 0.6836 sec/batch
Epoch 9/20 Iteration 1460/3560 Training loss: 1.9163 0.6356 sec/batch
Epoch 9/20 Iteration 1461/3560 Training loss: 1.9157 0.5615 sec/batch
Epoch 9/20 Iteration 1462/3560 Training loss: 1.9144 0.5616 sec/batch
Epoch 9/20 Iteration 1463/3560 Training loss: 1.9132 0.5633 sec/batch
Epoch 9/20 Iteration 1464/3560 Training loss: 1.9125 0.5574 sec/batch
Epoch 9/20 Iteration 1465/3560 Training loss: 1.9123 0.5872 sec/batch
Epoch 9/20 Iteration 1466/3560 Training loss: 1.9124 0.6104 sec/batch
Epoch 9/20 Iteration 1467/3560 Training loss: 1.9121 0.6113 sec/batch
Epoch 9/20 Iteration 1468/3560 Training loss: 1.9111 0.5995 sec/batch
Epoch 9/20 Iteration 1469/3560 Training loss: 1.9110 0.6180 sec/batch
Epoch 9/20 Iteration 1470/3560 Training loss: 1.9098 0.6004 sec/batch
Epoch 9/20 Iteration 1471/3560 Training loss: 1.9095 0.5926 sec/batch
Epoch 9/20 Iteration 1472/3560 Training loss: 1.9090 0.5892 sec/batch
Epoch 9/20 Iteration 1473/3560 Training loss: 1.9088 0.5896 sec/batch
Epoch 9/20 Iteration 1474/3560 Training loss: 1.9099 0.5980 sec/batch
Epoch 9/20 Iteration 1475/3560 Training loss: 1.9093 0.5565 sec/batch
Epoch 9/20 Iteration 1476/3560 Training loss: 1.9101 0.5433 sec/batch
Epoch 9/20 Iteration 1477/3560 Training loss: 1.9100 0.5728 sec/batch
Epoch 9/20 Iteration 1478/3560 Training loss: 1.9101 0.5766 sec/batch
Epoch 9/20 Iteration 1479/3560 Training loss: 1.9097 0.5574 sec/batch
Epoch 9/20 Iteration 1480/3560 Training loss: 1.9098 0.5580 sec/batch
Epoch 9/20 Iteration 1481/3560 Training loss: 1.9099 0.5636 sec/batch
Epoch 9/20 Iteration 1482/3560 Training loss: 1.9096 0.5945 sec/batch
Epoch 9/20 Iteration 1483/3560 Training loss: 1.9092 0.6111 sec/batch
Epoch 9/20 Iteration 1484/3560 Training loss: 1.9096 0.6072 sec/batch
Epoch 9/20 Iteration 1485/3560 Training loss: 1.9093 0.6020 sec/batch
Epoch 9/20 Iteration 1486/3560 Training loss: 1.9098 0.5798 sec/batch
Epoch 9/20 Iteration 1487/3560 Training loss: 1.9101 0.6295 sec/batch
Epoch 9/20 Iteration 1488/3560 Training loss: 1.9102 0.6171 sec/batch
Epoch 9/20 Iteration 1489/3560 Training loss: 1.9101 0.6133 sec/batch
Epoch 9/20 Iteration 1490/3560 Training loss: 1.9104 0.5890 sec/batch
Epoch 9/20 Iteration 1491/3560 Training loss: 1.9106 0.5648 sec/batch
Epoch 9/20 Iteration 1492/3560 Training loss: 1.9101 0.5794 sec/batch
Epoch 9/20 Iteration 1493/3560 Training loss: 1.9100 0.5639 sec/batch
Epoch 9/20 Iteration 1494/3560 Training loss: 1.9099 0.5585 sec/batch
Epoch 9/20 Iteration 1495/3560 Training loss: 1.9102 0.5642 sec/batch
Epoch 9/20 Iteration 1496/3560 Training loss: 1.9103 0.5599 sec/batch
Epoch 9/20 Iteration 1497/3560 Training loss: 1.9105 0.5553 sec/batch
Epoch 9/20 Iteration 1498/3560 Training loss: 1.9102 0.5535 sec/batch
Epoch 9/20 Iteration 1499/3560 Training loss: 1.9101 0.5812 sec/batch
Epoch 9/20 Iteration 1500/3560 Training loss: 1.9105 0.6075 sec/batch
Epoch 9/20 Iteration 1501/3560 Training loss: 1.9104 0.5797 sec/batch
Epoch 9/20 Iteration 1502/3560 Training loss: 1.9104 0.5768 sec/batch
Epoch 9/20 Iteration 1503/3560 Training loss: 1.9100 0.5862 sec/batch
Epoch 9/20 Iteration 1504/3560 Training loss: 1.9099 0.6048 sec/batch
Epoch 9/20 Iteration 1505/3560 Training loss: 1.9095 0.5744 sec/batch
Epoch 9/20 Iteration 1506/3560 Training loss: 1.9095 0.5824 sec/batch
Epoch 9/20 Iteration 1507/3560 Training loss: 1.9092 0.5791 sec/batch
Epoch 9/20 Iteration 1508/3560 Training loss: 1.9090 0.5866 sec/batch
Epoch 9/20 Iteration 1509/3560 Training loss: 1.9084 0.5684 sec/batch
Epoch 9/20 Iteration 1510/3560 Training loss: 1.9079 0.5986 sec/batch
Epoch 9/20 Iteration 1511/3560 Training loss: 1.9077 0.5430 sec/batch
Epoch 9/20 Iteration 1512/3560 Training loss: 1.9072 0.5733 sec/batch
Epoch 9/20 Iteration 1513/3560 Training loss: 1.9068 0.5720 sec/batch
Epoch 9/20 Iteration 1514/3560 Training loss: 1.9068 0.6332 sec/batch
Epoch 9/20 Iteration 1515/3560 Training loss: 1.9065 0.5855 sec/batch
Epoch 9/20 Iteration 1516/3560 Training loss: 1.9064 0.6046 sec/batch
Epoch 9/20 Iteration 1517/3560 Training loss: 1.9058 0.6083 sec/batch
Epoch 9/20 Iteration 1518/3560 Training loss: 1.9055 0.5865 sec/batch
Epoch 9/20 Iteration 1519/3560 Training loss: 1.9053 0.6093 sec/batch
Epoch 9/20 Iteration 1520/3560 Training loss: 1.9052 0.6156 sec/batch
Epoch 9/20 Iteration 1521/3560 Training loss: 1.9051 0.5956 sec/batch
Epoch 9/20 Iteration 1522/3560 Training loss: 1.9047 0.6144 sec/batch
Epoch 9/20 Iteration 1523/3560 Training loss: 1.9043 0.6302 sec/batch
Epoch 9/20 Iteration 1524/3560 Training loss: 1.9039 0.5734 sec/batch
Epoch 9/20 Iteration 1525/3560 Training loss: 1.9039 0.5765 sec/batch
Epoch 9/20 Iteration 1526/3560 Training loss: 1.9037 0.6127 sec/batch
Epoch 9/20 Iteration 1527/3560 Training loss: 1.9034 0.6435 sec/batch
Epoch 9/20 Iteration 1528/3560 Training loss: 1.9033 0.6664 sec/batch
Epoch 9/20 Iteration 1529/3560 Training loss: 1.9029 0.6097 sec/batch
Epoch 9/20 Iteration 1530/3560 Training loss: 1.9027 0.5861 sec/batch
Epoch 9/20 Iteration 1531/3560 Training loss: 1.9026 0.6160 sec/batch
Epoch 9/20 Iteration 1532/3560 Training loss: 1.9025 0.6024 sec/batch
Epoch 9/20 Iteration 1533/3560 Training loss: 1.9025 0.6135 sec/batch
Epoch 9/20 Iteration 1534/3560 Training loss: 1.9024 0.5615 sec/batch
Epoch 9/20 Iteration 1535/3560 Training loss: 1.9022 0.5651 sec/batch
Epoch 9/20 Iteration 1536/3560 Training loss: 1.9021 0.5796 sec/batch
Epoch 9/20 Iteration 1537/3560 Training loss: 1.9019 0.5714 sec/batch
Epoch 9/20 Iteration 1538/3560 Training loss: 1.9016 0.5615 sec/batch
Epoch 9/20 Iteration 1539/3560 Training loss: 1.9015 0.5563 sec/batch
Epoch 9/20 Iteration 1540/3560 Training loss: 1.9011 0.5966 sec/batch
Epoch 9/20 Iteration 1541/3560 Training loss: 1.9010 0.5817 sec/batch
Epoch 9/20 Iteration 1542/3560 Training loss: 1.9009 0.5825 sec/batch
Epoch 9/20 Iteration 1543/3560 Training loss: 1.9009 0.5717 sec/batch
Epoch 9/20 Iteration 1544/3560 Training loss: 1.9007 0.5752 sec/batch
Epoch 9/20 Iteration 1545/3560 Training loss: 1.9007 0.5706 sec/batch
Epoch 9/20 Iteration 1546/3560 Training loss: 1.9004 0.5803 sec/batch
Epoch 9/20 Iteration 1547/3560 Training loss: 1.9001 0.6058 sec/batch
Epoch 9/20 Iteration 1548/3560 Training loss: 1.9002 0.6142 sec/batch
Epoch 9/20 Iteration 1549/3560 Training loss: 1.9000 0.6244 sec/batch
Epoch 9/20 Iteration 1550/3560 Training loss: 1.8997 0.5798 sec/batch
Epoch 9/20 Iteration 1551/3560 Training loss: 1.8997 0.5665 sec/batch
Epoch 9/20 Iteration 1552/3560 Training loss: 1.8997 0.5779 sec/batch
Epoch 9/20 Iteration 1553/3560 Training loss: 1.8996 0.5767 sec/batch
Epoch 9/20 Iteration 1554/3560 Training loss: 1.8995 0.5703 sec/batch
Epoch 9/20 Iteration 1555/3560 Training loss: 1.8992 0.5796 sec/batch
Epoch 9/20 Iteration 1556/3560 Training loss: 1.8989 0.5616 sec/batch
Epoch 9/20 Iteration 1557/3560 Training loss: 1.8990 0.5669 sec/batch
Epoch 9/20 Iteration 1558/3560 Training loss: 1.8989 0.5830 sec/batch
Epoch 9/20 Iteration 1559/3560 Training loss: 1.8989 0.5747 sec/batch
Epoch 9/20 Iteration 1560/3560 Training loss: 1.8989 0.5761 sec/batch
Epoch 9/20 Iteration 1561/3560 Training loss: 1.8990 0.5883 sec/batch
Epoch 9/20 Iteration 1562/3560 Training loss: 1.8990 0.5874 sec/batch
Epoch 9/20 Iteration 1563/3560 Training loss: 1.8992 0.5992 sec/batch
Epoch 9/20 Iteration 1564/3560 Training loss: 1.8990 0.6048 sec/batch
Epoch 9/20 Iteration 1565/3560 Training loss: 1.8992 0.6482 sec/batch
Epoch 9/20 Iteration 1566/3560 Training loss: 1.8991 0.6617 sec/batch
Epoch 9/20 Iteration 1567/3560 Training loss: 1.8990 0.5773 sec/batch
Epoch 9/20 Iteration 1568/3560 Training loss: 1.8990 0.5711 sec/batch
Epoch 9/20 Iteration 1569/3560 Training loss: 1.8988 0.5527 sec/batch
Epoch 9/20 Iteration 1570/3560 Training loss: 1.8989 0.5721 sec/batch
Epoch 9/20 Iteration 1571/3560 Training loss: 1.8989 0.5714 sec/batch
Epoch 9/20 Iteration 1572/3560 Training loss: 1.8990 0.5649 sec/batch
Epoch 9/20 Iteration 1573/3560 Training loss: 1.8989 0.5615 sec/batch
Epoch 9/20 Iteration 1574/3560 Training loss: 1.8987 0.6231 sec/batch
Epoch 9/20 Iteration 1575/3560 Training loss: 1.8985 0.5627 sec/batch
Epoch 9/20 Iteration 1576/3560 Training loss: 1.8986 0.5928 sec/batch
Epoch 9/20 Iteration 1577/3560 Training loss: 1.8986 0.5813 sec/batch
Epoch 9/20 Iteration 1578/3560 Training loss: 1.8986 0.5682 sec/batch
Epoch 9/20 Iteration 1579/3560 Training loss: 1.8985 0.5782 sec/batch
Epoch 9/20 Iteration 1580/3560 Training loss: 1.8984 0.5711 sec/batch
Epoch 9/20 Iteration 1581/3560 Training loss: 1.8984 0.5642 sec/batch
Epoch 9/20 Iteration 1582/3560 Training loss: 1.8984 0.5721 sec/batch
Epoch 9/20 Iteration 1583/3560 Training loss: 1.8982 0.5701 sec/batch
Epoch 9/20 Iteration 1584/3560 Training loss: 1.8982 0.5678 sec/batch
Epoch 9/20 Iteration 1585/3560 Training loss: 1.8983 0.5606 sec/batch
Epoch 9/20 Iteration 1586/3560 Training loss: 1.8982 0.5726 sec/batch
Epoch 9/20 Iteration 1587/3560 Training loss: 1.8983 0.5686 sec/batch
Epoch 9/20 Iteration 1588/3560 Training loss: 1.8982 0.5641 sec/batch
Epoch 9/20 Iteration 1589/3560 Training loss: 1.8982 0.5575 sec/batch
Epoch 9/20 Iteration 1590/3560 Training loss: 1.8981 0.5614 sec/batch
Epoch 9/20 Iteration 1591/3560 Training loss: 1.8980 0.5687 sec/batch
Epoch 9/20 Iteration 1592/3560 Training loss: 1.8983 0.5699 sec/batch
Epoch 9/20 Iteration 1593/3560 Training loss: 1.8982 0.5578 sec/batch
Epoch 9/20 Iteration 1594/3560 Training loss: 1.8981 0.5757 sec/batch
Epoch 9/20 Iteration 1595/3560 Training loss: 1.8980 0.5736 sec/batch
Epoch 9/20 Iteration 1596/3560 Training loss: 1.8979 0.5865 sec/batch
Epoch 9/20 Iteration 1597/3560 Training loss: 1.8979 0.5754 sec/batch
Epoch 9/20 Iteration 1598/3560 Training loss: 1.8979 0.5685 sec/batch
Epoch 9/20 Iteration 1599/3560 Training loss: 1.8979 0.6011 sec/batch
Epoch 9/20 Iteration 1600/3560 Training loss: 1.8978 0.5686 sec/batch
Epoch 9/20 Iteration 1601/3560 Training loss: 1.8976 0.5702 sec/batch
Epoch 9/20 Iteration 1602/3560 Training loss: 1.8976 0.5600 sec/batch
Epoch 10/20 Iteration 1603/3560 Training loss: 1.9488 0.5679 sec/batch
Epoch 10/20 Iteration 1604/3560 Training loss: 1.9142 0.5786 sec/batch
Epoch 10/20 Iteration 1605/3560 Training loss: 1.9010 0.5725 sec/batch
Epoch 10/20 Iteration 1606/3560 Training loss: 1.8924 0.5569 sec/batch
Epoch 10/20 Iteration 1607/3560 Training loss: 1.8897 0.5626 sec/batch
Epoch 10/20 Iteration 1608/3560 Training loss: 1.8826 0.5559 sec/batch
Epoch 10/20 Iteration 1609/3560 Training loss: 1.8853 0.5642 sec/batch
Epoch 10/20 Iteration 1610/3560 Training loss: 1.8834 0.5673 sec/batch
Epoch 10/20 Iteration 1611/3560 Training loss: 1.8857 0.5747 sec/batch
Epoch 10/20 Iteration 1612/3560 Training loss: 1.8856 0.5878 sec/batch
Epoch 10/20 Iteration 1613/3560 Training loss: 1.8830 0.5675 sec/batch
Epoch 10/20 Iteration 1614/3560 Training loss: 1.8814 0.5634 sec/batch
Epoch 10/20 Iteration 1615/3560 Training loss: 1.8813 0.5784 sec/batch
Epoch 10/20 Iteration 1616/3560 Training loss: 1.8833 0.5759 sec/batch
Epoch 10/20 Iteration 1617/3560 Training loss: 1.8824 0.5682 sec/batch
Epoch 10/20 Iteration 1618/3560 Training loss: 1.8811 0.5809 sec/batch
Epoch 10/20 Iteration 1619/3560 Training loss: 1.8811 0.6165 sec/batch
Epoch 10/20 Iteration 1620/3560 Training loss: 1.8826 0.5906 sec/batch
Epoch 10/20 Iteration 1621/3560 Training loss: 1.8826 0.5570 sec/batch
Epoch 10/20 Iteration 1622/3560 Training loss: 1.8826 0.5555 sec/batch
Epoch 10/20 Iteration 1623/3560 Training loss: 1.8819 0.5489 sec/batch
Epoch 10/20 Iteration 1624/3560 Training loss: 1.8826 0.5590 sec/batch
Epoch 10/20 Iteration 1625/3560 Training loss: 1.8822 0.5447 sec/batch
Epoch 10/20 Iteration 1626/3560 Training loss: 1.8815 0.5677 sec/batch
Epoch 10/20 Iteration 1627/3560 Training loss: 1.8813 0.5659 sec/batch
Epoch 10/20 Iteration 1628/3560 Training loss: 1.8803 0.5697 sec/batch
Epoch 10/20 Iteration 1629/3560 Training loss: 1.8796 0.5670 sec/batch
Epoch 10/20 Iteration 1630/3560 Training loss: 1.8798 0.5690 sec/batch
Epoch 10/20 Iteration 1631/3560 Training loss: 1.8805 0.5667 sec/batch
Epoch 10/20 Iteration 1632/3560 Training loss: 1.8805 0.5719 sec/batch
Epoch 10/20 Iteration 1633/3560 Training loss: 1.8803 0.5704 sec/batch
Epoch 10/20 Iteration 1634/3560 Training loss: 1.8799 0.5766 sec/batch
Epoch 10/20 Iteration 1635/3560 Training loss: 1.8799 0.5683 sec/batch
Epoch 10/20 Iteration 1636/3560 Training loss: 1.8807 0.5641 sec/batch
Epoch 10/20 Iteration 1637/3560 Training loss: 1.8803 0.5644 sec/batch
Epoch 10/20 Iteration 1638/3560 Training loss: 1.8801 0.5622 sec/batch
Epoch 10/20 Iteration 1639/3560 Training loss: 1.8796 0.5553 sec/batch
Epoch 10/20 Iteration 1640/3560 Training loss: 1.8784 0.5812 sec/batch
Epoch 10/20 Iteration 1641/3560 Training loss: 1.8775 0.5849 sec/batch
Epoch 10/20 Iteration 1642/3560 Training loss: 1.8769 0.7758 sec/batch
Epoch 10/20 Iteration 1643/3560 Training loss: 1.8765 0.7605 sec/batch
Epoch 10/20 Iteration 1644/3560 Training loss: 1.8767 0.7250 sec/batch
Epoch 10/20 Iteration 1645/3560 Training loss: 1.8760 0.7644 sec/batch
Epoch 10/20 Iteration 1646/3560 Training loss: 1.8750 0.6392 sec/batch
Epoch 10/20 Iteration 1647/3560 Training loss: 1.8750 0.5545 sec/batch
Epoch 10/20 Iteration 1648/3560 Training loss: 1.8738 0.6007 sec/batch
Epoch 10/20 Iteration 1649/3560 Training loss: 1.8738 0.5611 sec/batch
Epoch 10/20 Iteration 1650/3560 Training loss: 1.8734 0.6055 sec/batch
Epoch 10/20 Iteration 1651/3560 Training loss: 1.8732 0.5684 sec/batch
Epoch 10/20 Iteration 1652/3560 Training loss: 1.8741 0.5500 sec/batch
Epoch 10/20 Iteration 1653/3560 Training loss: 1.8735 0.5610 sec/batch
Epoch 10/20 Iteration 1654/3560 Training loss: 1.8743 0.5813 sec/batch
Epoch 10/20 Iteration 1655/3560 Training loss: 1.8742 0.5658 sec/batch
Epoch 10/20 Iteration 1656/3560 Training loss: 1.8741 0.5825 sec/batch
Epoch 10/20 Iteration 1657/3560 Training loss: 1.8738 0.5501 sec/batch
Epoch 10/20 Iteration 1658/3560 Training loss: 1.8739 0.5585 sec/batch
Epoch 10/20 Iteration 1659/3560 Training loss: 1.8742 0.5683 sec/batch
Epoch 10/20 Iteration 1660/3560 Training loss: 1.8739 0.5882 sec/batch
Epoch 10/20 Iteration 1661/3560 Training loss: 1.8737 0.5688 sec/batch
Epoch 10/20 Iteration 1662/3560 Training loss: 1.8743 0.5841 sec/batch
Epoch 10/20 Iteration 1663/3560 Training loss: 1.8740 0.5671 sec/batch
Epoch 10/20 Iteration 1664/3560 Training loss: 1.8748 0.5734 sec/batch
Epoch 10/20 Iteration 1665/3560 Training loss: 1.8752 0.5754 sec/batch
Epoch 10/20 Iteration 1666/3560 Training loss: 1.8755 0.5749 sec/batch
Epoch 10/20 Iteration 1667/3560 Training loss: 1.8754 0.5764 sec/batch
Epoch 10/20 Iteration 1668/3560 Training loss: 1.8757 0.5691 sec/batch
Epoch 10/20 Iteration 1669/3560 Training loss: 1.8758 0.5503 sec/batch
Epoch 10/20 Iteration 1670/3560 Training loss: 1.8753 0.5599 sec/batch
Epoch 10/20 Iteration 1671/3560 Training loss: 1.8750 0.5699 sec/batch
Epoch 10/20 Iteration 1672/3560 Training loss: 1.8748 0.5618 sec/batch
Epoch 10/20 Iteration 1673/3560 Training loss: 1.8753 0.5608 sec/batch
Epoch 10/20 Iteration 1674/3560 Training loss: 1.8754 0.5866 sec/batch
Epoch 10/20 Iteration 1675/3560 Training loss: 1.8757 0.6203 sec/batch
Epoch 10/20 Iteration 1676/3560 Training loss: 1.8754 0.5852 sec/batch
Epoch 10/20 Iteration 1677/3560 Training loss: 1.8753 0.5937 sec/batch
Epoch 10/20 Iteration 1678/3560 Training loss: 1.8756 0.5922 sec/batch
Epoch 10/20 Iteration 1679/3560 Training loss: 1.8756 0.5753 sec/batch
Epoch 10/20 Iteration 1680/3560 Training loss: 1.8757 0.5633 sec/batch
Epoch 10/20 Iteration 1681/3560 Training loss: 1.8753 0.5608 sec/batch
Epoch 10/20 Iteration 1682/3560 Training loss: 1.8752 0.5711 sec/batch
Epoch 10/20 Iteration 1683/3560 Training loss: 1.8747 0.5751 sec/batch
Epoch 10/20 Iteration 1684/3560 Training loss: 1.8748 0.5605 sec/batch
Epoch 10/20 Iteration 1685/3560 Training loss: 1.8744 0.5739 sec/batch
Epoch 10/20 Iteration 1686/3560 Training loss: 1.8743 0.5764 sec/batch
Epoch 10/20 Iteration 1687/3560 Training loss: 1.8738 0.5704 sec/batch
Epoch 10/20 Iteration 1688/3560 Training loss: 1.8735 0.5780 sec/batch
Epoch 10/20 Iteration 1689/3560 Training loss: 1.8733 0.5900 sec/batch
Epoch 10/20 Iteration 1690/3560 Training loss: 1.8729 0.6586 sec/batch
Epoch 10/20 Iteration 1691/3560 Training loss: 1.8725 0.6029 sec/batch
Epoch 10/20 Iteration 1692/3560 Training loss: 1.8725 0.5617 sec/batch
Epoch 10/20 Iteration 1693/3560 Training loss: 1.8722 0.5705 sec/batch
Epoch 10/20 Iteration 1694/3560 Training loss: 1.8720 0.5945 sec/batch
Epoch 10/20 Iteration 1695/3560 Training loss: 1.8716 0.5726 sec/batch
Epoch 10/20 Iteration 1696/3560 Training loss: 1.8712 0.5725 sec/batch
Epoch 10/20 Iteration 1697/3560 Training loss: 1.8709 0.5656 sec/batch
Epoch 10/20 Iteration 1698/3560 Training loss: 1.8709 0.5613 sec/batch
Epoch 10/20 Iteration 1699/3560 Training loss: 1.8708 0.5611 sec/batch
Epoch 10/20 Iteration 1700/3560 Training loss: 1.8703 0.5831 sec/batch
Epoch 10/20 Iteration 1701/3560 Training loss: 1.8699 0.5572 sec/batch
Epoch 10/20 Iteration 1702/3560 Training loss: 1.8694 0.5658 sec/batch
Epoch 10/20 Iteration 1703/3560 Training loss: 1.8694 0.5598 sec/batch
Epoch 10/20 Iteration 1704/3560 Training loss: 1.8692 0.5407 sec/batch
Epoch 10/20 Iteration 1705/3560 Training loss: 1.8690 0.5676 sec/batch
Epoch 10/20 Iteration 1706/3560 Training loss: 1.8688 0.7405 sec/batch
Epoch 10/20 Iteration 1707/3560 Training loss: 1.8685 0.7448 sec/batch
Epoch 10/20 Iteration 1708/3560 Training loss: 1.8684 0.7723 sec/batch
Epoch 10/20 Iteration 1709/3560 Training loss: 1.8682 0.7180 sec/batch
Epoch 10/20 Iteration 1710/3560 Training loss: 1.8681 0.5844 sec/batch
Epoch 10/20 Iteration 1711/3560 Training loss: 1.8681 0.5586 sec/batch
Epoch 10/20 Iteration 1712/3560 Training loss: 1.8681 0.5783 sec/batch
Epoch 10/20 Iteration 1713/3560 Training loss: 1.8680 0.5684 sec/batch
Epoch 10/20 Iteration 1714/3560 Training loss: 1.8678 0.5635 sec/batch
Epoch 10/20 Iteration 1715/3560 Training loss: 1.8676 0.5685 sec/batch
Epoch 10/20 Iteration 1716/3560 Training loss: 1.8675 0.7866 sec/batch
Epoch 10/20 Iteration 1717/3560 Training loss: 1.8671 0.7231 sec/batch
Epoch 10/20 Iteration 1718/3560 Training loss: 1.8668 0.7844 sec/batch
Epoch 10/20 Iteration 1719/3560 Training loss: 1.8667 0.7152 sec/batch
Epoch 10/20 Iteration 1720/3560 Training loss: 1.8665 0.6513 sec/batch
Epoch 10/20 Iteration 1721/3560 Training loss: 1.8665 0.7289 sec/batch
Epoch 10/20 Iteration 1722/3560 Training loss: 1.8664 0.7254 sec/batch
Epoch 10/20 Iteration 1723/3560 Training loss: 1.8664 0.7539 sec/batch
Epoch 10/20 Iteration 1724/3560 Training loss: 1.8661 0.7315 sec/batch
Epoch 10/20 Iteration 1725/3560 Training loss: 1.8658 0.5739 sec/batch
Epoch 10/20 Iteration 1726/3560 Training loss: 1.8659 0.5608 sec/batch
Epoch 10/20 Iteration 1727/3560 Training loss: 1.8658 0.5692 sec/batch
Epoch 10/20 Iteration 1728/3560 Training loss: 1.8655 0.5670 sec/batch
Epoch 10/20 Iteration 1729/3560 Training loss: 1.8656 0.5709 sec/batch
Epoch 10/20 Iteration 1730/3560 Training loss: 1.8656 0.5586 sec/batch
Epoch 10/20 Iteration 1731/3560 Training loss: 1.8656 0.5606 sec/batch
Epoch 10/20 Iteration 1732/3560 Training loss: 1.8655 0.5774 sec/batch
Epoch 10/20 Iteration 1733/3560 Training loss: 1.8653 0.5687 sec/batch
Epoch 10/20 Iteration 1734/3560 Training loss: 1.8650 0.5615 sec/batch
Epoch 10/20 Iteration 1735/3560 Training loss: 1.8650 0.5682 sec/batch
Epoch 10/20 Iteration 1736/3560 Training loss: 1.8650 0.5586 sec/batch
Epoch 10/20 Iteration 1737/3560 Training loss: 1.8649 0.5604 sec/batch
Epoch 10/20 Iteration 1738/3560 Training loss: 1.8649 0.5839 sec/batch
Epoch 10/20 Iteration 1739/3560 Training loss: 1.8650 0.6008 sec/batch
Epoch 10/20 Iteration 1740/3560 Training loss: 1.8650 0.6295 sec/batch
Epoch 10/20 Iteration 1741/3560 Training loss: 1.8652 0.5619 sec/batch
Epoch 10/20 Iteration 1742/3560 Training loss: 1.8651 0.5708 sec/batch
Epoch 10/20 Iteration 1743/3560 Training loss: 1.8652 0.5686 sec/batch
Epoch 10/20 Iteration 1744/3560 Training loss: 1.8651 0.5633 sec/batch
Epoch 10/20 Iteration 1745/3560 Training loss: 1.8651 0.5563 sec/batch
Epoch 10/20 Iteration 1746/3560 Training loss: 1.8651 0.5372 sec/batch
Epoch 10/20 Iteration 1747/3560 Training loss: 1.8650 0.5396 sec/batch
Epoch 10/20 Iteration 1748/3560 Training loss: 1.8651 0.5425 sec/batch
Epoch 10/20 Iteration 1749/3560 Training loss: 1.8651 0.5141 sec/batch
Epoch 10/20 Iteration 1750/3560 Training loss: 1.8653 0.7435 sec/batch
Epoch 10/20 Iteration 1751/3560 Training loss: 1.8652 0.6762 sec/batch
Epoch 10/20 Iteration 1752/3560 Training loss: 1.8651 0.7159 sec/batch
Epoch 10/20 Iteration 1753/3560 Training loss: 1.8648 0.7746 sec/batch
Epoch 10/20 Iteration 1754/3560 Training loss: 1.8649 0.7214 sec/batch
Epoch 10/20 Iteration 1755/3560 Training loss: 1.8649 0.5376 sec/batch
Epoch 10/20 Iteration 1756/3560 Training loss: 1.8649 0.6585 sec/batch
Epoch 10/20 Iteration 1757/3560 Training loss: 1.8647 0.6330 sec/batch
Epoch 10/20 Iteration 1758/3560 Training loss: 1.8647 0.6946 sec/batch
Epoch 10/20 Iteration 1759/3560 Training loss: 1.8647 0.6027 sec/batch
Epoch 10/20 Iteration 1760/3560 Training loss: 1.8647 0.6029 sec/batch
Epoch 10/20 Iteration 1761/3560 Training loss: 1.8645 0.5855 sec/batch
Epoch 10/20 Iteration 1762/3560 Training loss: 1.8645 0.6703 sec/batch
Epoch 10/20 Iteration 1763/3560 Training loss: 1.8646 0.7414 sec/batch
Epoch 10/20 Iteration 1764/3560 Training loss: 1.8645 0.7537 sec/batch
Epoch 10/20 Iteration 1765/3560 Training loss: 1.8646 0.6594 sec/batch
Epoch 10/20 Iteration 1766/3560 Training loss: 1.8646 0.7909 sec/batch
Epoch 10/20 Iteration 1767/3560 Training loss: 1.8646 0.8468 sec/batch
Epoch 10/20 Iteration 1768/3560 Training loss: 1.8645 0.7948 sec/batch
Epoch 10/20 Iteration 1769/3560 Training loss: 1.8645 0.8077 sec/batch
Epoch 10/20 Iteration 1770/3560 Training loss: 1.8648 0.7816 sec/batch
Epoch 10/20 Iteration 1771/3560 Training loss: 1.8647 0.7290 sec/batch
Epoch 10/20 Iteration 1772/3560 Training loss: 1.8646 0.8130 sec/batch
Epoch 10/20 Iteration 1773/3560 Training loss: 1.8645 0.8026 sec/batch
Epoch 10/20 Iteration 1774/3560 Training loss: 1.8643 0.7412 sec/batch
Epoch 10/20 Iteration 1775/3560 Training loss: 1.8644 0.7401 sec/batch
Epoch 10/20 Iteration 1776/3560 Training loss: 1.8644 0.7922 sec/batch
Epoch 10/20 Iteration 1777/3560 Training loss: 1.8643 0.8610 sec/batch
Epoch 10/20 Iteration 1778/3560 Training loss: 1.8643 0.5965 sec/batch
Epoch 10/20 Iteration 1779/3560 Training loss: 1.8642 0.5425 sec/batch
Epoch 10/20 Iteration 1780/3560 Training loss: 1.8641 0.5298 sec/batch
Epoch 11/20 Iteration 1781/3560 Training loss: 1.9271 0.5602 sec/batch
Epoch 11/20 Iteration 1782/3560 Training loss: 1.8899 0.5364 sec/batch
Epoch 11/20 Iteration 1783/3560 Training loss: 1.8750 0.5270 sec/batch
Epoch 11/20 Iteration 1784/3560 Training loss: 1.8689 0.5313 sec/batch
Epoch 11/20 Iteration 1785/3560 Training loss: 1.8632 0.6743 sec/batch
Epoch 11/20 Iteration 1786/3560 Training loss: 1.8554 0.7175 sec/batch
Epoch 11/20 Iteration 1787/3560 Training loss: 1.8562 0.7176 sec/batch
Epoch 11/20 Iteration 1788/3560 Training loss: 1.8547 0.5852 sec/batch
Epoch 11/20 Iteration 1789/3560 Training loss: 1.8575 0.5272 sec/batch
Epoch 11/20 Iteration 1790/3560 Training loss: 1.8557 0.7525 sec/batch
Epoch 11/20 Iteration 1791/3560 Training loss: 1.8524 0.7355 sec/batch
Epoch 11/20 Iteration 1792/3560 Training loss: 1.8506 0.7921 sec/batch
Epoch 11/20 Iteration 1793/3560 Training loss: 1.8497 0.7915 sec/batch
Epoch 11/20 Iteration 1794/3560 Training loss: 1.8520 0.7479 sec/batch
Epoch 11/20 Iteration 1795/3560 Training loss: 1.8513 0.7923 sec/batch
Epoch 11/20 Iteration 1796/3560 Training loss: 1.8502 0.8288 sec/batch
Epoch 11/20 Iteration 1797/3560 Training loss: 1.8500 0.8581 sec/batch
Epoch 11/20 Iteration 1798/3560 Training loss: 1.8517 0.8216 sec/batch
Epoch 11/20 Iteration 1799/3560 Training loss: 1.8522 0.7455 sec/batch
Epoch 11/20 Iteration 1800/3560 Training loss: 1.8524 0.5430 sec/batch
Epoch 11/20 Iteration 1801/3560 Training loss: 1.8516 0.6073 sec/batch
Epoch 11/20 Iteration 1802/3560 Training loss: 1.8524 0.5942 sec/batch
Epoch 11/20 Iteration 1803/3560 Training loss: 1.8516 0.6418 sec/batch
Epoch 11/20 Iteration 1804/3560 Training loss: 1.8514 0.6054 sec/batch
Epoch 11/20 Iteration 1805/3560 Training loss: 1.8510 0.5262 sec/batch
Epoch 11/20 Iteration 1806/3560 Training loss: 1.8502 0.6491 sec/batch
Epoch 11/20 Iteration 1807/3560 Training loss: 1.8491 0.6497 sec/batch
Epoch 11/20 Iteration 1808/3560 Training loss: 1.8493 0.7204 sec/batch
Epoch 11/20 Iteration 1809/3560 Training loss: 1.8501 0.6649 sec/batch
Epoch 11/20 Iteration 1810/3560 Training loss: 1.8502 0.5637 sec/batch
Epoch 11/20 Iteration 1811/3560 Training loss: 1.8500 0.5867 sec/batch
Epoch 11/20 Iteration 1812/3560 Training loss: 1.8497 0.5545 sec/batch
Epoch 11/20 Iteration 1813/3560 Training loss: 1.8501 0.7245 sec/batch
Epoch 11/20 Iteration 1814/3560 Training loss: 1.8508 0.6162 sec/batch
Epoch 11/20 Iteration 1815/3560 Training loss: 1.8502 0.6055 sec/batch
Epoch 11/20 Iteration 1816/3560 Training loss: 1.8500 0.5304 sec/batch
Epoch 11/20 Iteration 1817/3560 Training loss: 1.8494 0.5389 sec/batch
Epoch 11/20 Iteration 1818/3560 Training loss: 1.8482 0.5316 sec/batch
Epoch 11/20 Iteration 1819/3560 Training loss: 1.8473 0.5329 sec/batch
Epoch 11/20 Iteration 1820/3560 Training loss: 1.8467 0.6976 sec/batch
Epoch 11/20 Iteration 1821/3560 Training loss: 1.8461 0.7764 sec/batch
Epoch 11/20 Iteration 1822/3560 Training loss: 1.8462 0.8076 sec/batch
Epoch 11/20 Iteration 1823/3560 Training loss: 1.8459 0.6930 sec/batch
Epoch 11/20 Iteration 1824/3560 Training loss: 1.8452 0.6503 sec/batch
Epoch 11/20 Iteration 1825/3560 Training loss: 1.8455 0.7717 sec/batch
Epoch 11/20 Iteration 1826/3560 Training loss: 1.8445 0.5915 sec/batch
Epoch 11/20 Iteration 1827/3560 Training loss: 1.8444 0.5175 sec/batch
Epoch 11/20 Iteration 1828/3560 Training loss: 1.8439 0.6944 sec/batch
Epoch 11/20 Iteration 1829/3560 Training loss: 1.8436 0.8346 sec/batch
Epoch 11/20 Iteration 1830/3560 Training loss: 1.8443 0.6703 sec/batch
Epoch 11/20 Iteration 1831/3560 Training loss: 1.8438 0.7505 sec/batch
Epoch 11/20 Iteration 1832/3560 Training loss: 1.8446 0.6291 sec/batch
Epoch 11/20 Iteration 1833/3560 Training loss: 1.8444 0.5253 sec/batch
Epoch 11/20 Iteration 1834/3560 Training loss: 1.8444 0.5397 sec/batch
Epoch 11/20 Iteration 1835/3560 Training loss: 1.8438 0.5600 sec/batch
Epoch 11/20 Iteration 1836/3560 Training loss: 1.8440 0.5844 sec/batch
Epoch 11/20 Iteration 1837/3560 Training loss: 1.8444 0.5545 sec/batch
Epoch 11/20 Iteration 1838/3560 Training loss: 1.8442 0.5932 sec/batch
Epoch 11/20 Iteration 1839/3560 Training loss: 1.8437 0.5196 sec/batch
Epoch 11/20 Iteration 1840/3560 Training loss: 1.8442 0.6129 sec/batch
Epoch 11/20 Iteration 1841/3560 Training loss: 1.8440 0.5687 sec/batch
Epoch 11/20 Iteration 1842/3560 Training loss: 1.8448 0.5605 sec/batch
Epoch 11/20 Iteration 1843/3560 Training loss: 1.8452 0.5880 sec/batch
Epoch 11/20 Iteration 1844/3560 Training loss: 1.8455 0.5509 sec/batch
Epoch 11/20 Iteration 1845/3560 Training loss: 1.8453 0.5332 sec/batch
Epoch 11/20 Iteration 1846/3560 Training loss: 1.8457 0.6697 sec/batch
Epoch 11/20 Iteration 1847/3560 Training loss: 1.8458 0.7300 sec/batch
Epoch 11/20 Iteration 1848/3560 Training loss: 1.8454 0.7244 sec/batch
Epoch 11/20 Iteration 1849/3560 Training loss: 1.8453 0.5435 sec/batch
Epoch 11/20 Iteration 1850/3560 Training loss: 1.8453 0.5892 sec/batch
Epoch 11/20 Iteration 1851/3560 Training loss: 1.8456 0.5587 sec/batch
Epoch 11/20 Iteration 1852/3560 Training loss: 1.8457 0.5285 sec/batch
Epoch 11/20 Iteration 1853/3560 Training loss: 1.8461 0.5522 sec/batch
Epoch 11/20 Iteration 1854/3560 Training loss: 1.8458 0.5224 sec/batch
Epoch 11/20 Iteration 1855/3560 Training loss: 1.8456 0.5378 sec/batch
Epoch 11/20 Iteration 1856/3560 Training loss: 1.8459 0.6768 sec/batch
Epoch 11/20 Iteration 1857/3560 Training loss: 1.8457 0.5824 sec/batch
Epoch 11/20 Iteration 1858/3560 Training loss: 1.8458 0.7207 sec/batch
Epoch 11/20 Iteration 1859/3560 Training loss: 1.8453 0.5531 sec/batch
Epoch 11/20 Iteration 1860/3560 Training loss: 1.8453 0.6034 sec/batch
Epoch 11/20 Iteration 1861/3560 Training loss: 1.8448 0.6991 sec/batch
Epoch 11/20 Iteration 1862/3560 Training loss: 1.8450 0.8564 sec/batch
Epoch 11/20 Iteration 1863/3560 Training loss: 1.8446 0.7699 sec/batch
Epoch 11/20 Iteration 1864/3560 Training loss: 1.8443 0.5839 sec/batch
Epoch 11/20 Iteration 1865/3560 Training loss: 1.8438 0.6232 sec/batch
Epoch 11/20 Iteration 1866/3560 Training loss: 1.8433 0.7907 sec/batch
Epoch 11/20 Iteration 1867/3560 Training loss: 1.8431 0.7928 sec/batch
Epoch 11/20 Iteration 1868/3560 Training loss: 1.8429 0.7643 sec/batch
Epoch 11/20 Iteration 1869/3560 Training loss: 1.8423 0.5261 sec/batch
Epoch 11/20 Iteration 1870/3560 Training loss: 1.8423 0.5509 sec/batch
Epoch 11/20 Iteration 1871/3560 Training loss: 1.8422 0.5208 sec/batch
Epoch 11/20 Iteration 1872/3560 Training loss: 1.8420 0.5540 sec/batch
Epoch 11/20 Iteration 1873/3560 Training loss: 1.8416 0.5637 sec/batch
Epoch 11/20 Iteration 1874/3560 Training loss: 1.8412 0.5395 sec/batch
Epoch 11/20 Iteration 1875/3560 Training loss: 1.8409 0.5509 sec/batch
Epoch 11/20 Iteration 1876/3560 Training loss: 1.8409 0.7564 sec/batch
Epoch 11/20 Iteration 1877/3560 Training loss: 1.8408 0.6679 sec/batch
Epoch 11/20 Iteration 1878/3560 Training loss: 1.8405 0.5764 sec/batch
Epoch 11/20 Iteration 1879/3560 Training loss: 1.8401 0.5759 sec/batch
Epoch 11/20 Iteration 1880/3560 Training loss: 1.8397 0.5517 sec/batch
Epoch 11/20 Iteration 1881/3560 Training loss: 1.8397 0.5125 sec/batch
Epoch 11/20 Iteration 1882/3560 Training loss: 1.8396 0.5301 sec/batch
Epoch 11/20 Iteration 1883/3560 Training loss: 1.8393 0.5366 sec/batch
Epoch 11/20 Iteration 1884/3560 Training loss: 1.8392 0.5095 sec/batch
Epoch 11/20 Iteration 1885/3560 Training loss: 1.8389 0.5670 sec/batch
Epoch 11/20 Iteration 1886/3560 Training loss: 1.8388 0.5478 sec/batch
Epoch 11/20 Iteration 1887/3560 Training loss: 1.8387 0.5527 sec/batch
Epoch 11/20 Iteration 1888/3560 Training loss: 1.8386 0.6198 sec/batch
Epoch 11/20 Iteration 1889/3560 Training loss: 1.8387 0.5609 sec/batch
Epoch 11/20 Iteration 1890/3560 Training loss: 1.8387 0.6841 sec/batch
Epoch 11/20 Iteration 1891/3560 Training loss: 1.8386 0.5631 sec/batch
Epoch 11/20 Iteration 1892/3560 Training loss: 1.8384 0.6622 sec/batch
Epoch 11/20 Iteration 1893/3560 Training loss: 1.8383 0.5372 sec/batch
Epoch 11/20 Iteration 1894/3560 Training loss: 1.8382 0.5183 sec/batch
Epoch 11/20 Iteration 1895/3560 Training loss: 1.8380 0.5635 sec/batch
Epoch 11/20 Iteration 1896/3560 Training loss: 1.8377 0.5920 sec/batch
Epoch 11/20 Iteration 1897/3560 Training loss: 1.8375 0.8595 sec/batch
Epoch 11/20 Iteration 1898/3560 Training loss: 1.8374 0.6116 sec/batch
Epoch 11/20 Iteration 1899/3560 Training loss: 1.8373 0.6512 sec/batch
Epoch 11/20 Iteration 1900/3560 Training loss: 1.8372 0.8462 sec/batch
Epoch 11/20 Iteration 1901/3560 Training loss: 1.8373 0.7597 sec/batch
Epoch 11/20 Iteration 1902/3560 Training loss: 1.8370 0.7500 sec/batch
Epoch 11/20 Iteration 1903/3560 Training loss: 1.8367 0.7122 sec/batch
Epoch 11/20 Iteration 1904/3560 Training loss: 1.8368 0.6001 sec/batch
Epoch 11/20 Iteration 1905/3560 Training loss: 1.8367 0.6237 sec/batch
Epoch 11/20 Iteration 1906/3560 Training loss: 1.8364 0.5622 sec/batch
Epoch 11/20 Iteration 1907/3560 Training loss: 1.8365 0.5921 sec/batch
Epoch 11/20 Iteration 1908/3560 Training loss: 1.8365 0.7154 sec/batch
Epoch 11/20 Iteration 1909/3560 Training loss: 1.8364 0.7853 sec/batch
Epoch 11/20 Iteration 1910/3560 Training loss: 1.8362 0.5923 sec/batch
Epoch 11/20 Iteration 1911/3560 Training loss: 1.8359 0.5424 sec/batch
Epoch 11/20 Iteration 1912/3560 Training loss: 1.8356 0.6289 sec/batch
Epoch 11/20 Iteration 1913/3560 Training loss: 1.8357 0.5334 sec/batch
Epoch 11/20 Iteration 1914/3560 Training loss: 1.8357 0.6582 sec/batch
Epoch 11/20 Iteration 1915/3560 Training loss: 1.8356 0.6869 sec/batch
Epoch 11/20 Iteration 1916/3560 Training loss: 1.8356 0.7967 sec/batch
Epoch 11/20 Iteration 1917/3560 Training loss: 1.8356 0.8654 sec/batch
Epoch 11/20 Iteration 1918/3560 Training loss: 1.8356 0.7935 sec/batch
Epoch 11/20 Iteration 1919/3560 Training loss: 1.8359 0.7713 sec/batch
Epoch 11/20 Iteration 1920/3560 Training loss: 1.8356 0.7480 sec/batch
Epoch 11/20 Iteration 1921/3560 Training loss: 1.8358 0.8180 sec/batch
Epoch 11/20 Iteration 1922/3560 Training loss: 1.8358 0.6820 sec/batch
Epoch 11/20 Iteration 1923/3560 Training loss: 1.8357 0.8031 sec/batch
Epoch 11/20 Iteration 1924/3560 Training loss: 1.8357 0.9022 sec/batch
Epoch 11/20 Iteration 1925/3560 Training loss: 1.8355 0.8340 sec/batch
Epoch 11/20 Iteration 1926/3560 Training loss: 1.8355 0.7634 sec/batch
Epoch 11/20 Iteration 1927/3560 Training loss: 1.8355 0.8123 sec/batch
Epoch 11/20 Iteration 1928/3560 Training loss: 1.8357 0.8318 sec/batch
Epoch 11/20 Iteration 1929/3560 Training loss: 1.8357 0.7950 sec/batch
Epoch 11/20 Iteration 1930/3560 Training loss: 1.8355 0.8479 sec/batch
Epoch 11/20 Iteration 1931/3560 Training loss: 1.8353 0.8108 sec/batch
Epoch 11/20 Iteration 1932/3560 Training loss: 1.8354 0.7935 sec/batch
Epoch 11/20 Iteration 1933/3560 Training loss: 1.8354 0.8481 sec/batch
Epoch 11/20 Iteration 1934/3560 Training loss: 1.8354 0.7638 sec/batch
Epoch 11/20 Iteration 1935/3560 Training loss: 1.8353 0.8332 sec/batch
Epoch 11/20 Iteration 1936/3560 Training loss: 1.8353 0.6770 sec/batch
Epoch 11/20 Iteration 1937/3560 Training loss: 1.8353 0.6342 sec/batch
Epoch 11/20 Iteration 1938/3560 Training loss: 1.8353 0.6993 sec/batch
Epoch 11/20 Iteration 1939/3560 Training loss: 1.8350 0.8074 sec/batch
Epoch 11/20 Iteration 1940/3560 Training loss: 1.8352 0.8043 sec/batch
Epoch 11/20 Iteration 1941/3560 Training loss: 1.8353 0.6168 sec/batch
Epoch 11/20 Iteration 1942/3560 Training loss: 1.8352 0.6244 sec/batch
Epoch 11/20 Iteration 1943/3560 Training loss: 1.8353 0.6050 sec/batch
Epoch 11/20 Iteration 1944/3560 Training loss: 1.8352 0.6265 sec/batch
Epoch 11/20 Iteration 1945/3560 Training loss: 1.8352 0.6405 sec/batch
Epoch 11/20 Iteration 1946/3560 Training loss: 1.8351 0.7490 sec/batch
Epoch 11/20 Iteration 1947/3560 Training loss: 1.8351 0.8007 sec/batch
Epoch 11/20 Iteration 1948/3560 Training loss: 1.8355 0.8352 sec/batch
Epoch 11/20 Iteration 1949/3560 Training loss: 1.8354 0.8091 sec/batch
Epoch 11/20 Iteration 1950/3560 Training loss: 1.8354 0.7200 sec/batch
Epoch 11/20 Iteration 1951/3560 Training loss: 1.8352 0.7966 sec/batch
Epoch 11/20 Iteration 1952/3560 Training loss: 1.8351 0.7879 sec/batch
Epoch 11/20 Iteration 1953/3560 Training loss: 1.8352 0.8154 sec/batch
Epoch 11/20 Iteration 1954/3560 Training loss: 1.8352 0.8485 sec/batch
Epoch 11/20 Iteration 1955/3560 Training loss: 1.8353 0.8452 sec/batch
Epoch 11/20 Iteration 1956/3560 Training loss: 1.8352 0.7100 sec/batch
Epoch 11/20 Iteration 1957/3560 Training loss: 1.8350 0.5785 sec/batch
Epoch 11/20 Iteration 1958/3560 Training loss: 1.8351 0.7113 sec/batch
Epoch 12/20 Iteration 1959/3560 Training loss: 1.8939 0.6745 sec/batch
Epoch 12/20 Iteration 1960/3560 Training loss: 1.8576 0.6738 sec/batch
Epoch 12/20 Iteration 1961/3560 Training loss: 1.8444 0.8021 sec/batch
Epoch 12/20 Iteration 1962/3560 Training loss: 1.8383 0.7601 sec/batch
Epoch 12/20 Iteration 1963/3560 Training loss: 1.8347 0.8070 sec/batch
Epoch 12/20 Iteration 1964/3560 Training loss: 1.8269 0.7712 sec/batch
Epoch 12/20 Iteration 1965/3560 Training loss: 1.8263 0.7986 sec/batch
Epoch 12/20 Iteration 1966/3560 Training loss: 1.8268 0.8481 sec/batch
Epoch 12/20 Iteration 1967/3560 Training loss: 1.8293 0.5381 sec/batch
Epoch 12/20 Iteration 1968/3560 Training loss: 1.8281 0.6104 sec/batch
Epoch 12/20 Iteration 1969/3560 Training loss: 1.8247 0.7351 sec/batch
Epoch 12/20 Iteration 1970/3560 Training loss: 1.8229 0.5474 sec/batch
Epoch 12/20 Iteration 1971/3560 Training loss: 1.8225 0.5433 sec/batch
Epoch 12/20 Iteration 1972/3560 Training loss: 1.8246 0.5203 sec/batch
Epoch 12/20 Iteration 1973/3560 Training loss: 1.8229 0.5709 sec/batch
Epoch 12/20 Iteration 1974/3560 Training loss: 1.8219 0.5370 sec/batch
Epoch 12/20 Iteration 1975/3560 Training loss: 1.8219 0.5082 sec/batch
Epoch 12/20 Iteration 1976/3560 Training loss: 1.8240 0.5387 sec/batch
Epoch 12/20 Iteration 1977/3560 Training loss: 1.8241 0.5628 sec/batch
Epoch 12/20 Iteration 1978/3560 Training loss: 1.8245 0.6552 sec/batch
Epoch 12/20 Iteration 1979/3560 Training loss: 1.8235 0.5400 sec/batch
Epoch 12/20 Iteration 1980/3560 Training loss: 1.8242 0.5749 sec/batch
Epoch 12/20 Iteration 1981/3560 Training loss: 1.8236 0.5879 sec/batch
Epoch 12/20 Iteration 1982/3560 Training loss: 1.8234 0.5449 sec/batch
Epoch 12/20 Iteration 1983/3560 Training loss: 1.8232 0.5905 sec/batch
Epoch 12/20 Iteration 1984/3560 Training loss: 1.8224 0.5451 sec/batch
Epoch 12/20 Iteration 1985/3560 Training loss: 1.8213 0.5505 sec/batch
Epoch 12/20 Iteration 1986/3560 Training loss: 1.8220 0.5317 sec/batch
Epoch 12/20 Iteration 1987/3560 Training loss: 1.8227 0.7350 sec/batch
Epoch 12/20 Iteration 1988/3560 Training loss: 1.8228 0.7958 sec/batch
Epoch 12/20 Iteration 1989/3560 Training loss: 1.8228 0.8279 sec/batch
Epoch 12/20 Iteration 1990/3560 Training loss: 1.8221 0.6075 sec/batch
Epoch 12/20 Iteration 1991/3560 Training loss: 1.8224 0.5504 sec/batch
Epoch 12/20 Iteration 1992/3560 Training loss: 1.8233 0.5172 sec/batch
Epoch 12/20 Iteration 1993/3560 Training loss: 1.8230 0.5054 sec/batch
Epoch 12/20 Iteration 1994/3560 Training loss: 1.8227 0.5413 sec/batch
Epoch 12/20 Iteration 1995/3560 Training loss: 1.8222 0.5833 sec/batch
Epoch 12/20 Iteration 1996/3560 Training loss: 1.8212 0.7831 sec/batch
Epoch 12/20 Iteration 1997/3560 Training loss: 1.8201 0.7595 sec/batch
Epoch 12/20 Iteration 1998/3560 Training loss: 1.8194 0.6039 sec/batch
Epoch 12/20 Iteration 1999/3560 Training loss: 1.8189 0.6960 sec/batch
Epoch 12/20 Iteration 2000/3560 Training loss: 1.8188 0.5855 sec/batch
Epoch 12/20 Iteration 2001/3560 Training loss: 1.8184 0.5242 sec/batch
Epoch 12/20 Iteration 2002/3560 Training loss: 1.8176 0.5714 sec/batch
Epoch 12/20 Iteration 2003/3560 Training loss: 1.8178 0.5621 sec/batch
Epoch 12/20 Iteration 2004/3560 Training loss: 1.8167 0.5943 sec/batch
Epoch 12/20 Iteration 2005/3560 Training loss: 1.8166 0.5516 sec/batch
Epoch 12/20 Iteration 2006/3560 Training loss: 1.8160 0.5378 sec/batch
Epoch 12/20 Iteration 2007/3560 Training loss: 1.8158 0.5423 sec/batch
Epoch 12/20 Iteration 2008/3560 Training loss: 1.8167 0.5130 sec/batch
Epoch 12/20 Iteration 2009/3560 Training loss: 1.8162 0.5524 sec/batch
Epoch 12/20 Iteration 2010/3560 Training loss: 1.8170 0.5756 sec/batch
Epoch 12/20 Iteration 2011/3560 Training loss: 1.8168 0.6619 sec/batch
Epoch 12/20 Iteration 2012/3560 Training loss: 1.8169 0.5771 sec/batch
Epoch 12/20 Iteration 2013/3560 Training loss: 1.8166 0.6093 sec/batch
Epoch 12/20 Iteration 2014/3560 Training loss: 1.8170 0.6864 sec/batch
Epoch 12/20 Iteration 2015/3560 Training loss: 1.8173 0.5395 sec/batch
Epoch 12/20 Iteration 2016/3560 Training loss: 1.8171 0.5847 sec/batch
Epoch 12/20 Iteration 2017/3560 Training loss: 1.8167 0.6734 sec/batch
Epoch 12/20 Iteration 2018/3560 Training loss: 1.8173 0.6030 sec/batch
Epoch 12/20 Iteration 2019/3560 Training loss: 1.8173 0.5695 sec/batch
Epoch 12/20 Iteration 2020/3560 Training loss: 1.8180 0.6654 sec/batch
Epoch 12/20 Iteration 2021/3560 Training loss: 1.8184 0.5865 sec/batch
Epoch 12/20 Iteration 2022/3560 Training loss: 1.8187 0.5476 sec/batch
Epoch 12/20 Iteration 2023/3560 Training loss: 1.8185 0.6213 sec/batch
Epoch 12/20 Iteration 2024/3560 Training loss: 1.8189 0.6100 sec/batch
Epoch 12/20 Iteration 2025/3560 Training loss: 1.8191 0.5922 sec/batch
Epoch 12/20 Iteration 2026/3560 Training loss: 1.8185 0.5308 sec/batch
Epoch 12/20 Iteration 2027/3560 Training loss: 1.8184 0.5133 sec/batch
Epoch 12/20 Iteration 2028/3560 Training loss: 1.8185 0.5238 sec/batch
Epoch 12/20 Iteration 2029/3560 Training loss: 1.8188 0.5405 sec/batch
Epoch 12/20 Iteration 2030/3560 Training loss: 1.8190 0.5766 sec/batch
Epoch 12/20 Iteration 2031/3560 Training loss: 1.8193 0.5589 sec/batch
Epoch 12/20 Iteration 2032/3560 Training loss: 1.8191 0.5275 sec/batch
Epoch 12/20 Iteration 2033/3560 Training loss: 1.8190 0.5024 sec/batch
Epoch 12/20 Iteration 2034/3560 Training loss: 1.8193 0.5959 sec/batch
Epoch 12/20 Iteration 2035/3560 Training loss: 1.8192 0.4930 sec/batch
Epoch 12/20 Iteration 2036/3560 Training loss: 1.8192 0.5251 sec/batch
Epoch 12/20 Iteration 2037/3560 Training loss: 1.8189 0.5635 sec/batch
Epoch 12/20 Iteration 2038/3560 Training loss: 1.8189 0.5367 sec/batch
Epoch 12/20 Iteration 2039/3560 Training loss: 1.8185 0.8673 sec/batch
Epoch 12/20 Iteration 2040/3560 Training loss: 1.8186 0.7848 sec/batch
Epoch 12/20 Iteration 2041/3560 Training loss: 1.8182 0.5436 sec/batch
Epoch 12/20 Iteration 2042/3560 Training loss: 1.8182 0.5208 sec/batch
Epoch 12/20 Iteration 2043/3560 Training loss: 1.8177 0.5192 sec/batch
Epoch 12/20 Iteration 2044/3560 Training loss: 1.8173 0.5162 sec/batch
Epoch 12/20 Iteration 2045/3560 Training loss: 1.8170 0.5220 sec/batch
Epoch 12/20 Iteration 2046/3560 Training loss: 1.8168 0.7651 sec/batch
Epoch 12/20 Iteration 2047/3560 Training loss: 1.8163 0.7713 sec/batch
Epoch 12/20 Iteration 2048/3560 Training loss: 1.8164 0.8308 sec/batch
Epoch 12/20 Iteration 2049/3560 Training loss: 1.8160 0.8665 sec/batch
Epoch 12/20 Iteration 2050/3560 Training loss: 1.8159 0.8179 sec/batch
Epoch 12/20 Iteration 2051/3560 Training loss: 1.8154 0.8194 sec/batch
Epoch 12/20 Iteration 2052/3560 Training loss: 1.8151 0.8337 sec/batch
Epoch 12/20 Iteration 2053/3560 Training loss: 1.8147 0.7846 sec/batch
Epoch 12/20 Iteration 2054/3560 Training loss: 1.8147 0.6979 sec/batch
Epoch 12/20 Iteration 2055/3560 Training loss: 1.8146 0.6661 sec/batch
Epoch 12/20 Iteration 2056/3560 Training loss: 1.8142 0.7473 sec/batch
Epoch 12/20 Iteration 2057/3560 Training loss: 1.8138 0.6988 sec/batch
Epoch 12/20 Iteration 2058/3560 Training loss: 1.8133 0.5862 sec/batch
Epoch 12/20 Iteration 2059/3560 Training loss: 1.8131 0.5168 sec/batch
Epoch 12/20 Iteration 2060/3560 Training loss: 1.8130 0.6759 sec/batch
Epoch 12/20 Iteration 2061/3560 Training loss: 1.8127 0.5799 sec/batch
Epoch 12/20 Iteration 2062/3560 Training loss: 1.8124 0.6155 sec/batch
Epoch 12/20 Iteration 2063/3560 Training loss: 1.8122 0.6147 sec/batch
Epoch 12/20 Iteration 2064/3560 Training loss: 1.8120 0.6338 sec/batch
Epoch 12/20 Iteration 2065/3560 Training loss: 1.8119 0.5758 sec/batch
Epoch 12/20 Iteration 2066/3560 Training loss: 1.8118 0.5816 sec/batch
Epoch 12/20 Iteration 2067/3560 Training loss: 1.8118 0.5478 sec/batch
Epoch 12/20 Iteration 2068/3560 Training loss: 1.8118 0.5925 sec/batch
Epoch 12/20 Iteration 2069/3560 Training loss: 1.8118 0.6881 sec/batch
Epoch 12/20 Iteration 2070/3560 Training loss: 1.8116 0.5885 sec/batch
Epoch 12/20 Iteration 2071/3560 Training loss: 1.8115 0.5460 sec/batch
Epoch 12/20 Iteration 2072/3560 Training loss: 1.8113 0.5321 sec/batch
Epoch 12/20 Iteration 2073/3560 Training loss: 1.8110 0.5935 sec/batch
Epoch 12/20 Iteration 2074/3560 Training loss: 1.8105 0.6417 sec/batch
Epoch 12/20 Iteration 2075/3560 Training loss: 1.8105 0.5436 sec/batch
Epoch 12/20 Iteration 2076/3560 Training loss: 1.8104 0.5402 sec/batch
Epoch 12/20 Iteration 2077/3560 Training loss: 1.8104 0.5952 sec/batch
Epoch 12/20 Iteration 2078/3560 Training loss: 1.8104 0.8274 sec/batch
Epoch 12/20 Iteration 2079/3560 Training loss: 1.8103 0.7089 sec/batch
Epoch 12/20 Iteration 2080/3560 Training loss: 1.8100 0.5661 sec/batch
Epoch 12/20 Iteration 2081/3560 Training loss: 1.8097 0.5361 sec/batch
Epoch 12/20 Iteration 2082/3560 Training loss: 1.8098 0.6054 sec/batch
Epoch 12/20 Iteration 2083/3560 Training loss: 1.8097 0.5958 sec/batch
Epoch 12/20 Iteration 2084/3560 Training loss: 1.8094 0.5237 sec/batch
Epoch 12/20 Iteration 2085/3560 Training loss: 1.8094 0.6118 sec/batch
Epoch 12/20 Iteration 2086/3560 Training loss: 1.8094 0.5239 sec/batch
Epoch 12/20 Iteration 2087/3560 Training loss: 1.8093 0.5138 sec/batch
Epoch 12/20 Iteration 2088/3560 Training loss: 1.8093 0.5232 sec/batch
Epoch 12/20 Iteration 2089/3560 Training loss: 1.8090 0.5166 sec/batch
Epoch 12/20 Iteration 2090/3560 Training loss: 1.8087 0.5214 sec/batch
Epoch 12/20 Iteration 2091/3560 Training loss: 1.8087 0.6465 sec/batch
Epoch 12/20 Iteration 2092/3560 Training loss: 1.8087 0.6493 sec/batch
Epoch 12/20 Iteration 2093/3560 Training loss: 1.8087 0.5851 sec/batch
Epoch 12/20 Iteration 2094/3560 Training loss: 1.8087 0.5432 sec/batch
Epoch 12/20 Iteration 2095/3560 Training loss: 1.8088 0.5475 sec/batch
Epoch 12/20 Iteration 2096/3560 Training loss: 1.8088 0.5360 sec/batch
Epoch 12/20 Iteration 2097/3560 Training loss: 1.8090 0.8465 sec/batch
Epoch 12/20 Iteration 2098/3560 Training loss: 1.8089 0.6092 sec/batch
Epoch 12/20 Iteration 2099/3560 Training loss: 1.8091 0.6890 sec/batch
Epoch 12/20 Iteration 2100/3560 Training loss: 1.8090 0.6377 sec/batch
Epoch 12/20 Iteration 2101/3560 Training loss: 1.8091 0.6521 sec/batch
Epoch 12/20 Iteration 2102/3560 Training loss: 1.8091 0.6781 sec/batch
Epoch 12/20 Iteration 2103/3560 Training loss: 1.8090 0.8104 sec/batch
Epoch 12/20 Iteration 2104/3560 Training loss: 1.8090 0.7901 sec/batch
Epoch 12/20 Iteration 2105/3560 Training loss: 1.8090 0.6561 sec/batch
Epoch 12/20 Iteration 2106/3560 Training loss: 1.8092 0.5306 sec/batch
Epoch 12/20 Iteration 2107/3560 Training loss: 1.8092 0.5375 sec/batch
Epoch 12/20 Iteration 2108/3560 Training loss: 1.8091 0.5430 sec/batch
Epoch 12/20 Iteration 2109/3560 Training loss: 1.8088 0.5418 sec/batch
Epoch 12/20 Iteration 2110/3560 Training loss: 1.8089 0.6634 sec/batch
Epoch 12/20 Iteration 2111/3560 Training loss: 1.8089 0.7962 sec/batch
Epoch 12/20 Iteration 2112/3560 Training loss: 1.8089 0.8254 sec/batch
Epoch 12/20 Iteration 2113/3560 Training loss: 1.8088 0.5901 sec/batch
Epoch 12/20 Iteration 2114/3560 Training loss: 1.8088 0.5113 sec/batch
Epoch 12/20 Iteration 2115/3560 Training loss: 1.8088 0.5746 sec/batch
Epoch 12/20 Iteration 2116/3560 Training loss: 1.8088 0.5631 sec/batch
Epoch 12/20 Iteration 2117/3560 Training loss: 1.8086 0.8012 sec/batch
Epoch 12/20 Iteration 2118/3560 Training loss: 1.8088 0.6763 sec/batch
Epoch 12/20 Iteration 2119/3560 Training loss: 1.8089 0.5408 sec/batch
Epoch 12/20 Iteration 2120/3560 Training loss: 1.8089 0.5403 sec/batch
Epoch 12/20 Iteration 2121/3560 Training loss: 1.8090 0.6586 sec/batch
Epoch 12/20 Iteration 2122/3560 Training loss: 1.8090 0.6755 sec/batch
Epoch 12/20 Iteration 2123/3560 Training loss: 1.8090 0.7585 sec/batch
Epoch 12/20 Iteration 2124/3560 Training loss: 1.8090 0.7197 sec/batch
Epoch 12/20 Iteration 2125/3560 Training loss: 1.8090 0.6912 sec/batch
Epoch 12/20 Iteration 2126/3560 Training loss: 1.8094 0.6197 sec/batch
Epoch 12/20 Iteration 2127/3560 Training loss: 1.8093 0.5982 sec/batch
Epoch 12/20 Iteration 2128/3560 Training loss: 1.8093 0.5380 sec/batch
Epoch 12/20 Iteration 2129/3560 Training loss: 1.8092 0.5173 sec/batch
In [35]:
tf.train.get_checkpoint_state('checkpoints/anna')
Out[35]:
model_checkpoint_path: "checkpoints/anna/i3560_l512_1.122.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i200_l512_2.432.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i400_l512_1.980.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i600_l512_1.750.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i800_l512_1.595.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1000_l512_1.484.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1200_l512_1.407.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1400_l512_1.349.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1600_l512_1.292.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i1800_l512_1.255.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2000_l512_1.224.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2200_l512_1.204.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2400_l512_1.187.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2600_l512_1.172.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i2800_l512_1.160.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3000_l512_1.148.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3200_l512_1.137.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3400_l512_1.129.ckpt"
all_model_checkpoint_paths: "checkpoints/anna/i3560_l512_1.122.ckpt"
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. We then feed that new character back in to predict the one after it, and keep going to generate all new text. I also included some functionality to prime the network with a bit of text, by passing in a string and building up a state from it.
The network gives us predictions for each character. To reduce noise and make the sampling a little less random, I'm only going to choose a new character from the top N most likely characters.
In [17]:
def pick_top_n(preds, vocab_size, top_n=5):
    # Keep only the top_n most likely characters, zero out the rest
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    # Renormalize and sample a character index from what's left
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
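As a quick illustration of what pick_top_n does, here's a toy distribution (made-up numbers, not real network output):

toy_preds = np.array([[0.05, 0.3, 0.1, 0.25, 0.2, 0.1]])  # hypothetical softmax output over 6 "characters"
# With top_n=3 only the three largest entries (0.3, 0.25, 0.2) remain possible;
# they're renormalized to sum to 1 before sampling, so this prints 1, 3, or 4.
print(pick_top_n(toy_preds, vocab_size=6, top_n=3))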
In [41]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    # Build the graph with batch size and sequence length of 1 for sampling
    model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        # Run the priming text through the network to build up the state
        for c in prime:
            x = np.zeros((1, 1))
            x[0, 0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state],
                                        feed_dict=feed)

        c = pick_top_n(preds, vocab_size)
        samples.append(int_to_vocab[c])

        # Feed each predicted character back in to get the next one
        for i in range(n_samples):
            x[0, 0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state],
                                        feed_dict=feed)

            c = pick_top_n(preds, vocab_size)
            samples.append(int_to_vocab[c])

    return ''.join(samples)
In [44]:
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
Farlathit that if had so
like it that it were. He could not trouble to his wife, and there was
anything in them of the side of his weaky in the creature at his forteren
to him.
"What is it? I can't bread to those," said Stepan Arkadyevitch. "It's not
my children, and there is an almost this arm, true it mays already,
and tell you what I have say to you, and was not looking at the peasant,
why is, I don't know him out, and she doesn't speak to me immediately, as
you would say the countess and the more frest an angelembre, and time and
things's silent, but I was not in my stand that is in my head. But if he
say, and was so feeling with his soul. A child--in his soul of his
soul of his soul. He should not see that any of that sense of. Here he
had not been so composed and to speak for as in a whole picture, but
all the setting and her excellent and society, who had been delighted
and see to anywing had been being troed to thousand words on them,
we liked him.
That set in her money at the table, he came into the party. The capable
of his she could not be as an old composure.
"That's all something there will be down becime by throe is
such a silent, as in a countess, I should state it out and divorct.
The discussion is not for me. I was that something was simply they are
all three manshess of a sensitions of mind it all."
"No," he thought, shouted and lifting his soul. "While it might see your
honser and she, I could burst. And I had been a midelity. And I had a
marnief are through the countess," he said, looking at him, a chosing
which they had been carried out and still solied, and there was a sen that
was to be completely, and that this matter of all the seconds of it, and
a concipation were to her husband, who came up and conscaously, that he
was not the station. All his fourse she was always at the country,,
to speak oft, and though they were to hear the delightful throom and
whether they came towards the morning, and his living and a coller and
hold--the children.
In [43]:
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Farnt him oste wha sorind thans tout thint asd an sesand an hires on thime sind thit aled, ban thand and out hore as the ter hos ton ho te that, was tis tart al the hand sostint him sore an tit an son thes, win he se ther san ther hher tas tarereng,.
Anl at an ades in ond hesiln, ad hhe torers teans, wast tar arering tho this sos alten sorer has hhas an siton ther him he had sin he ard ate te anling the sosin her ans and
arins asd and ther ale te tot an tand tanginge wath and ho ald, so sot th asend sat hare sother horesinnd, he hesense wing ante her so tith tir sherinn, anded and to the toul anderin he sorit he torsith she se atere an ting ot hand and thit hhe so the te wile har
ens ont in the sersise, and we he seres tar aterer, to ato tat or has he he wan ton here won and sen heren he sosering, to to theer oo adent har herere the wosh oute, was serild ward tous hed astend..
I's sint on alt in har tor tit her asd hade shithans ored he talereng an soredendere tim tot hees. Tise sor and
In [46]:
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Fard as astice her said he celatice of to seress in the raice, and to be the some and sere allats to that said to that the sark and a cast a the wither ald the pacinesse of her had astition, he said to the sount as she west at hissele. Af the cond it he was a fact onthis astisarianing.
"Or a ton to to be that's a more at aspestale as the sont of anstiring as
thours and trey.
The same wo dangring the
raterst, who sore and somethy had ast out an of his book. "We had's beane were that, and a morted a thay he had to tere. Then to
her homent andertersed his his ancouted to the pirsted, the soution for of the pirsice inthirgest and stenciol, with the hard and and
a colrice of to be oneres,
the song to this anderssad.
The could ounterss the said to serom of
soment a carsed of sheres of she
torded
har and want in their of hould, but
her told in that in he tad a the same to her. Serghing an her has and with the seed, and the camt ont his about of the
sail, the her then all houg ant or to hus to
In [47]:
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Farrat, his felt has at it.
"When the pose ther hor exceed
to his sheant was," weat a sime of his sounsed. The coment and the facily that which had began terede a marilicaly whice whether the pose of his hand, at she was alligated herself the same on she had to
taiking to his forthing and streath how to hand
began in a lang at some at it, this he cholded not set all her. "Wo love that is setthing. Him anstering as seen that."
"Yes in the man that say the mare a crances is it?" said Sergazy Ivancatching. "You doon think were somether is ifficult of a mone of
though the most at the countes that the
mean on the come to say the most, to
his feesing of
a man she, whilo he
sained and well, that he would still at to said. He wind at his for the sore in the most
of hoss and almoved to see him. They have betine the sumper into at he his stire, and what he was that at the so steate of the
sound, and shin should have a geest of shall feet on the conderation to she had been at that imporsing the dre