Anna KaRNNa

In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch. I also drew on some information from r2rt and from Sherjil Ozair's implementation on GitHub. Below is the general architecture of the character-wise RNN.


In [1]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple of dictionaries to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.


In [2]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

Let's check out the first 1000 characters to make sure everything is peachy. According to the American Book Review, the first line here is the 6th best first line of a book ever.


In [6]:
text[:1000]


Out[6]:
"Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverything was in confusion in the Oblonskys' house. The wife had\ndiscovered that the husband was carrying on an intrigue with a French\ngirl, who had been a governess in their family, and she had announced to\nher husband that she could not go on living in the same house with him.\nThis position of affairs had now lasted three days, and not only the\nhusband and wife themselves, but all the members of their family and\nhousehold, were painfully conscious of it. Every person in the house\nfelt that there was no sense in their living together, and that the\nstray people brought together by chance in any inn had more in common\nwith one another than they, the members of the family and household of\nthe Oblonskys. The wife did not leave her own room, the husband had not\nbeen at home for three days. The children ran wild all over the house;\nthe English governess quarreled with the housekeeper, and wrote to a\n"

And we can see the characters encoded as integers.


In [7]:
encoded[:100]


Out[7]:
array([18, 42, 20, 51, 36,  3, 68, 32, 26, 64, 64, 64, 53, 20, 51, 51, 45,
       32, 76, 20, 55,  0, 70,  0,  3, 15, 32, 20, 68,  3, 32, 20, 70, 70,
       32, 20, 70,  0, 43,  3, 79, 32,  3, 21,  3, 68, 45, 32, 54, 48, 42,
       20, 51, 51, 45, 32, 76, 20, 55,  0, 70, 45, 32,  0, 15, 32, 54, 48,
       42, 20, 51, 51, 45, 32,  0, 48, 32,  0, 36, 15, 32,  1, 58, 48, 64,
       58, 20, 45, 25, 64, 64, 44, 21,  3, 68, 45, 36, 42,  0, 48], dtype=int32)

Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.


In [8]:
len(vocab)


Out[8]:
83

Making training mini-batches

Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. As a simple example, with a batch size of 2 and 5 sequence steps, each batch would be a $2 \times 5$ window into the encoded text.


We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch. Once you know the number of batches and the number of characters per batch, you can get the total number of characters to keep.

After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder; it'll fill in the appropriate size for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
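To make the arithmetic concrete, here's a tiny worked example with made-up sizes (103 characters, $N = 2$ sequences, $M = 10$ steps):

import numpy as np

arr = np.arange(103)                            # stand-in for the encoded text
n_seqs, n_steps = 2, 10
characters_per_batch = n_seqs * n_steps         # 20 characters per batch
n_batches = len(arr) // characters_per_batch    # 103 // 20 = 5 full batches
arr = arr[:n_batches * characters_per_batch]    # keep the first 100 characters
print(arr.reshape((n_seqs, -1)).shape)          # (2, 50) == N x (M * K)

The three leftover characters that can't fill a full batch just get dropped.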

Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:

y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

where x is the input batch and y is the target batch.
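To see what that assignment does, here's a quick check on a toy array:

import numpy as np

x = np.array([[0, 1, 2, 3, 4]])
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
print(y)   # [[1 2 3 4 0]]: targets are the inputs shifted left, wrapping around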

The way I like to do this window is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.


In [9]:
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.
       
       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr)//characters_per_batch
    
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]
    
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))
    
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.


In [10]:
batches = get_batches(encoded, 10, 50)
x, y = next(batches)

In [11]:
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])


x
 [[18 42 20 51 36  3 68 32 26 64]
 [32 20 55 32 48  1 36 32 72  1]
 [21  0 48 25 64 64 75 49  3 15]
 [48 32 14 54 68  0 48 72 32 42]
 [32  0 36 32  0 15 81 32 15  0]
 [32 34 36 32 58 20 15 64  1 48]
 [42  3 48 32 80  1 55  3 32 76]
 [79 32 61 54 36 32 48  1 58 32]
 [36 32  0 15 48 27 36 25 32 37]
 [32 15 20  0 14 32 36  1 32 42]]

y
 [[42 20 51 36  3 68 32 26 64 64]
 [20 55 32 48  1 36 32 72  1  0]
 [ 0 48 25 64 64 75 49  3 15 81]
 [32 14 54 68  0 48 72 32 42  0]
 [ 0 36 32  0 15 81 32 15  0 68]
 [34 36 32 58 20 15 64  1 48 70]
 [ 3 48 32 80  1 55  3 32 76  1]
 [32 61 54 36 32 48  1 58 32 15]
 [32  0 15 48 27 36 25 32 37 42]
 [15 20  0 14 32 36  1 32 42  3]]

If you implemented get_batches correctly, the above output should look something like

x
 [[55 63 69 22  6 76 45  5 16 35]
 [ 5 69  1  5 12 52  6  5 56 52]
 [48 29 12 61 35 35  8 64 76 78]
 [12  5 24 39 45 29 12 56  5 63]
 [ 5 29  6  5 29 78 28  5 78 29]
 [ 5 13  6  5 36 69 78 35 52 12]
 [63 76 12  5 18 52  1 76  5 58]
 [34  5 73 39  6  5 12 52 36  5]
 [ 6  5 29 78 12 79  6 61  5 59]
 [ 5 78 69 29 24  5  6 52  5 63]]

y
 [[63 69 22  6 76 45  5 16 35 35]
 [69  1  5 12 52  6  5 56 52 29]
 [29 12 61 35 35  8 64 76 78 28]
 [ 5 24 39 45 29 12 56  5 63 29]
 [29  6  5 29 78 28  5 78 29 45]
 [13  6  5 36 69 78 35 52 12 43]
 [76 12  5 18 52  1 76  5 58 52]
 [ 5 73 39  6  5 12 52 36  5 78]
 [ 5 29 78 12 79  6 61  5 59 63]
 [78 69 29 24  5  6 52  5 63 76]]

although the exact numbers will be different. Check to make sure the data is shifted over one step for y.

Building the model

Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.

Inputs

First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.


In [12]:
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout 
    
        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
        
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    
    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    
    return inputs, targets, keep_prob

LSTM Cell

Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

lstm = tf.contrib.rnn.BasicLSTMCell(num_units)

where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with

tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this

tf.contrib.rnn.MultiRNNCell([cell]*num_layers)

This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like

def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])

Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so

initial_state = cell.zero_state(batch_size, tf.float32)

Below, we implement the build_lstm function to create these LSTM cells and the initial state.


In [13]:
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.
    
        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size

    '''
    ### Build the LSTM Cell
    
    def build_cell(lstm_size, keep_prob):
        # Use a basic LSTM cell
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        
        # Add dropout to the cell
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop
    
    
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
    
    return cell, initial_state

RNN Output

Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$; we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
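As a quick NumPy sketch of that reshape, with toy sizes $N=2$, $M=3$, $L=4$:

import numpy as np

outputs = np.arange(24).reshape((2, 3, 4))   # N x M x L stack of LSTM outputs
rows = outputs.reshape((-1, 4))              # one row per sequence and step
print(rows.shape)                            # (6, 4)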

Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.


In [14]:
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.
    
        Arguments
        ---------
        
        lstm_output: Input tensor, the outputs from the LSTM layer
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    
    '''

    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # That is, the shape should be batch_size*num_steps rows by lstm_size columns
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])
    
    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    
    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b
    
    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name='predictions')
    
    return out, logits

Training loss

Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.

Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.


In [15]:
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
    
        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
        
    '''
    
    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    
    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    return loss

Optimizer

Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. Here we'll use tf.clip_by_global_norm: whenever the combined (global) norm of all the gradients exceeds the threshold, it rescales them so their norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
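To make the rescaling concrete, here's a tiny NumPy sketch (toy numbers, not the actual TensorFlow implementation) of what tf.clip_by_global_norm does:

import numpy as np

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]   # global norm = sqrt(25 + 144) = 13
clip_norm = 5.0
global_norm = np.sqrt(sum(np.sum(g**2) for g in grads))
scale = min(1.0, clip_norm / global_norm)                # only ever shrinks, never grows
clipped = [g * scale for g in grads]
print(global_norm, clipped)                              # 13.0, gradients rescaled to norm 5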


In [16]:
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
    
        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold for clipping the global gradient norm
    
    '''
    
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    return optimizer

Build the network

Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before they go into the RNN.
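As a quick sanity check on the shapes (a sketch with toy sizes, reusing the build_lstm function above):

tf.reset_default_graph()
cell, initial_state = build_lstm(lstm_size=8, num_layers=2, batch_size=4, keep_prob=tf.constant(1.0))
x_one_hot = tf.one_hot(tf.zeros((4, 5), tf.int32), depth=10)   # 4 sequences, 5 steps, 10 classes
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
print(outputs.get_shape())   # (4, 5, 8): N x M x lstm_size, just what build_output expects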


In [17]:
class CharRNN:
    
    def __init__(self, num_classes, batch_size=64, num_steps=50, 
                       lstm_size=128, num_layers=2, learning_rate=0.001, 
                       grad_clip=5, sampling=False):
    
        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()
        
        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        
        # Run each sequence step through the RNN and collect the outputs
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state
        
        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        
        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)

Hyperparameters

Here I'm defining the hyperparameters for the network.

  • batch_size - Number of sequences running through the network in one pass.
  • num_steps - Number of characters in the sequence the network is trained on. Typically larger is better; the network can learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
  • lstm_size - The number of units in the hidden layers.
  • num_layers - Number of hidden LSTM layers to use
  • learning_rate - Learning rate for training
  • keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data, by default every 1000 iterations). In particular:

  • If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
  • If your training and validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer).

Approximate number of parameters

The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

  • The number of parameters in your model. This is printed when you start training.
  • The size of your dataset. A 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

  • I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
  • I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.

Best models strategy

The winning strategy to obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
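Relatedly, if you want to check the parameter count Karpathy mentions, here's a small sketch that sums the sizes of all trainable variables. It reuses the np and tf imports from the first cell; run it after constructing the CharRNN model below:

def count_parameters():
    # Total number of elements across all trainable variables in the current graph
    return sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())

# e.g. print('Parameters:', count_parameters()) after building the model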


In [18]:
batch_size = 100        # Sequences per batch
num_steps = 100         # Number of sequence steps per batch
lstm_size = 512         # Size of hidden layers in LSTMs
num_layers = 2          # Number of LSTM layers
learning_rate = 0.001   # Learning rate
keep_prob = 0.5         # Dropout keep probability

Time for training

This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.

Here I'm saving checkpoints with the format

i{iteration number}_l{# hidden layer units}.ckpt


In [20]:
epochs = 20
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers, 
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss, 
                                                 model.final_state, 
                                                 model.optimizer], 
                                                 feed_dict=feed)
            
            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))
        
            if counter % save_every_n == 0:
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))


Epoch: 1/20...  Training Step: 1...  Training loss: 4.4173...  2.8966 sec/batch
Epoch: 1/20...  Training Step: 2...  Training loss: 4.3117...  2.8363 sec/batch
Epoch: 1/20...  Training Step: 3...  Training loss: 3.8040...  2.7976 sec/batch
Epoch: 1/20...  Training Step: 4...  Training loss: 3.7290...  2.7731 sec/batch
Epoch: 1/20...  Training Step: 5...  Training loss: 3.5718...  2.8130 sec/batch
Epoch: 1/20...  Training Step: 6...  Training loss: 3.4458...  2.8004 sec/batch
Epoch: 1/20...  Training Step: 7...  Training loss: 3.4053...  2.8790 sec/batch
Epoch: 1/20...  Training Step: 8...  Training loss: 3.3477...  2.8635 sec/batch
Epoch: 1/20...  Training Step: 9...  Training loss: 3.3354...  2.8236 sec/batch
Epoch: 1/20...  Training Step: 10...  Training loss: 3.3199...  2.8802 sec/batch
Epoch: 1/20...  Training Step: 11...  Training loss: 3.2799...  2.8318 sec/batch
Epoch: 1/20...  Training Step: 12...  Training loss: 3.2594...  2.9179 sec/batch
Epoch: 1/20...  Training Step: 13...  Training loss: 3.2575...  2.9287 sec/batch
Epoch: 1/20...  Training Step: 14...  Training loss: 3.2564...  2.8114 sec/batch
Epoch: 1/20...  Training Step: 15...  Training loss: 3.2363...  2.7889 sec/batch
Epoch: 1/20...  Training Step: 16...  Training loss: 3.2421...  2.7993 sec/batch
Epoch: 1/20...  Training Step: 17...  Training loss: 3.2157...  2.7885 sec/batch
Epoch: 1/20...  Training Step: 18...  Training loss: 3.2523...  2.8069 sec/batch
Epoch: 1/20...  Training Step: 19...  Training loss: 3.2287...  2.8266 sec/batch
Epoch: 1/20...  Training Step: 20...  Training loss: 3.1734...  2.8862 sec/batch
Epoch: 1/20...  Training Step: 21...  Training loss: 3.1926...  3.0705 sec/batch
Epoch: 1/20...  Training Step: 22...  Training loss: 3.1947...  2.9195 sec/batch
Epoch: 1/20...  Training Step: 23...  Training loss: 3.1941...  2.7994 sec/batch
Epoch: 1/20...  Training Step: 24...  Training loss: 3.1947...  2.8154 sec/batch
Epoch: 1/20...  Training Step: 25...  Training loss: 3.1801...  2.8380 sec/batch
Epoch: 1/20...  Training Step: 26...  Training loss: 3.1923...  2.7932 sec/batch
Epoch: 1/20...  Training Step: 27...  Training loss: 3.1939...  2.7717 sec/batch
Epoch: 1/20...  Training Step: 28...  Training loss: 3.1647...  2.7743 sec/batch
Epoch: 1/20...  Training Step: 29...  Training loss: 3.1748...  2.7549 sec/batch
Epoch: 1/20...  Training Step: 30...  Training loss: 3.1795...  2.7786 sec/batch
Epoch: 1/20...  Training Step: 31...  Training loss: 3.1924...  2.7741 sec/batch
Epoch: 1/20...  Training Step: 32...  Training loss: 3.1657...  2.7691 sec/batch
Epoch: 1/20...  Training Step: 33...  Training loss: 3.1555...  2.8139 sec/batch
Epoch: 1/20...  Training Step: 34...  Training loss: 3.1800...  2.7920 sec/batch
Epoch: 1/20...  Training Step: 35...  Training loss: 3.1498...  2.7767 sec/batch
Epoch: 1/20...  Training Step: 36...  Training loss: 3.1743...  2.8192 sec/batch
Epoch: 1/20...  Training Step: 37...  Training loss: 3.1360...  2.7785 sec/batch
Epoch: 1/20...  Training Step: 38...  Training loss: 3.1433...  2.8003 sec/batch
Epoch: 1/20...  Training Step: 39...  Training loss: 3.1338...  2.7971 sec/batch
Epoch: 1/20...  Training Step: 40...  Training loss: 3.1456...  2.7640 sec/batch
Epoch: 1/20...  Training Step: 41...  Training loss: 3.1371...  2.7735 sec/batch
Epoch: 1/20...  Training Step: 42...  Training loss: 3.1415...  2.8196 sec/batch
Epoch: 1/20...  Training Step: 43...  Training loss: 3.1290...  2.7928 sec/batch
Epoch: 1/20...  Training Step: 44...  Training loss: 3.1311...  2.7705 sec/batch
Epoch: 1/20...  Training Step: 45...  Training loss: 3.1182...  2.7753 sec/batch
Epoch: 1/20...  Training Step: 46...  Training loss: 3.1283...  2.7687 sec/batch
Epoch: 1/20...  Training Step: 47...  Training loss: 3.1326...  2.7761 sec/batch
Epoch: 1/20...  Training Step: 48...  Training loss: 3.1447...  2.7959 sec/batch
Epoch: 1/20...  Training Step: 49...  Training loss: 3.1335...  2.7842 sec/batch
Epoch: 1/20...  Training Step: 50...  Training loss: 3.1328...  2.8006 sec/batch
Epoch: 1/20...  Training Step: 51...  Training loss: 3.1239...  2.7632 sec/batch
Epoch: 1/20...  Training Step: 52...  Training loss: 3.1212...  2.7734 sec/batch
Epoch: 1/20...  Training Step: 53...  Training loss: 3.1218...  2.7845 sec/batch
Epoch: 1/20...  Training Step: 54...  Training loss: 3.1012...  2.8156 sec/batch
Epoch: 1/20...  Training Step: 55...  Training loss: 3.1160...  2.8108 sec/batch
Epoch: 1/20...  Training Step: 56...  Training loss: 3.0958...  2.7846 sec/batch
Epoch: 1/20...  Training Step: 57...  Training loss: 3.1069...  2.7716 sec/batch
Epoch: 1/20...  Training Step: 58...  Training loss: 3.0985...  2.7436 sec/batch
Epoch: 1/20...  Training Step: 59...  Training loss: 3.0861...  2.7912 sec/batch
Epoch: 1/20...  Training Step: 60...  Training loss: 3.2611...  2.7648 sec/batch
Epoch: 1/20...  Training Step: 61...  Training loss: 3.2624...  2.7650 sec/batch
Epoch: 1/20...  Training Step: 62...  Training loss: 3.1153...  2.7360 sec/batch
Epoch: 1/20...  Training Step: 63...  Training loss: 3.1301...  2.7732 sec/batch
Epoch: 1/20...  Training Step: 64...  Training loss: 3.0809...  2.7595 sec/batch
Epoch: 1/20...  Training Step: 65...  Training loss: 3.0813...  2.7682 sec/batch
Epoch: 1/20...  Training Step: 66...  Training loss: 3.1121...  2.7815 sec/batch
Epoch: 1/20...  Training Step: 67...  Training loss: 3.0956...  2.7670 sec/batch
Epoch: 1/20...  Training Step: 68...  Training loss: 3.0602...  2.7687 sec/batch
Epoch: 1/20...  Training Step: 69...  Training loss: 3.0742...  2.8251 sec/batch
Epoch: 1/20...  Training Step: 70...  Training loss: 3.0900...  2.7612 sec/batch
Epoch: 1/20...  Training Step: 71...  Training loss: 3.0809...  2.7492 sec/batch
Epoch: 1/20...  Training Step: 72...  Training loss: 3.1013...  2.7569 sec/batch
Epoch: 1/20...  Training Step: 73...  Training loss: 3.0770...  2.7801 sec/batch
Epoch: 1/20...  Training Step: 74...  Training loss: 3.0766...  2.8164 sec/batch
Epoch: 1/20...  Training Step: 75...  Training loss: 3.0844...  2.7720 sec/batch
Epoch: 1/20...  Training Step: 76...  Training loss: 3.0906...  2.8203 sec/batch
Epoch: 1/20...  Training Step: 77...  Training loss: 3.0737...  2.7610 sec/batch
Epoch: 1/20...  Training Step: 78...  Training loss: 3.0704...  2.7873 sec/batch
Epoch: 1/20...  Training Step: 79...  Training loss: 3.0583...  2.7673 sec/batch
Epoch: 1/20...  Training Step: 80...  Training loss: 3.0439...  2.7955 sec/batch
Epoch: 1/20...  Training Step: 81...  Training loss: 3.0529...  2.8159 sec/batch
Epoch: 1/20...  Training Step: 82...  Training loss: 3.0609...  2.7611 sec/batch
Epoch: 1/20...  Training Step: 83...  Training loss: 3.0585...  2.8007 sec/batch
Epoch: 1/20...  Training Step: 84...  Training loss: 3.0414...  2.7594 sec/batch
Epoch: 1/20...  Training Step: 85...  Training loss: 3.0206...  2.7713 sec/batch
Epoch: 1/20...  Training Step: 86...  Training loss: 3.0259...  2.8102 sec/batch
Epoch: 1/20...  Training Step: 87...  Training loss: 3.0166...  2.8653 sec/batch
Epoch: 1/20...  Training Step: 88...  Training loss: 3.0182...  2.7892 sec/batch
Epoch: 1/20...  Training Step: 89...  Training loss: 3.0290...  2.7789 sec/batch
Epoch: 1/20...  Training Step: 90...  Training loss: 3.0239...  2.7983 sec/batch
Epoch: 1/20...  Training Step: 91...  Training loss: 3.0114...  2.7946 sec/batch
Epoch: 1/20...  Training Step: 92...  Training loss: 2.9962...  2.7871 sec/batch
Epoch: 1/20...  Training Step: 93...  Training loss: 2.9894...  2.7633 sec/batch
Epoch: 1/20...  Training Step: 94...  Training loss: 2.9847...  2.8123 sec/batch
Epoch: 1/20...  Training Step: 95...  Training loss: 2.9699...  2.7743 sec/batch
Epoch: 1/20...  Training Step: 96...  Training loss: 3.0274...  2.7543 sec/batch
Epoch: 1/20...  Training Step: 97...  Training loss: 2.9838...  2.8084 sec/batch
Epoch: 1/20...  Training Step: 98...  Training loss: 2.9930...  2.8510 sec/batch
Epoch: 1/20...  Training Step: 99...  Training loss: 2.9782...  2.7769 sec/batch
Epoch: 1/20...  Training Step: 100...  Training loss: 2.9622...  2.7788 sec/batch
Epoch: 1/20...  Training Step: 101...  Training loss: 2.9705...  2.7716 sec/batch
Epoch: 1/20...  Training Step: 102...  Training loss: 2.9572...  2.7702 sec/batch
Epoch: 1/20...  Training Step: 103...  Training loss: 2.9442...  3.8446 sec/batch
Epoch: 1/20...  Training Step: 104...  Training loss: 2.9279...  2.7688 sec/batch
Epoch: 1/20...  Training Step: 105...  Training loss: 2.9214...  2.7809 sec/batch
Epoch: 1/20...  Training Step: 106...  Training loss: 2.9228...  2.7837 sec/batch
Epoch: 1/20...  Training Step: 107...  Training loss: 2.8931...  2.7742 sec/batch
Epoch: 1/20...  Training Step: 108...  Training loss: 2.8988...  2.9545 sec/batch
Epoch: 1/20...  Training Step: 109...  Training loss: 2.8972...  2.8006 sec/batch
Epoch: 1/20...  Training Step: 110...  Training loss: 2.8522...  2.7550 sec/batch
Epoch: 1/20...  Training Step: 111...  Training loss: 2.8582...  2.7659 sec/batch
Epoch: 1/20...  Training Step: 112...  Training loss: 2.8598...  2.7547 sec/batch
Epoch: 1/20...  Training Step: 113...  Training loss: 2.8364...  2.7824 sec/batch
Epoch: 1/20...  Training Step: 114...  Training loss: 2.8134...  2.8041 sec/batch
Epoch: 1/20...  Training Step: 115...  Training loss: 2.8067...  2.7728 sec/batch
Epoch: 1/20...  Training Step: 116...  Training loss: 2.7887...  2.7766 sec/batch
Epoch: 1/20...  Training Step: 117...  Training loss: 2.7934...  2.7607 sec/batch
Epoch: 1/20...  Training Step: 118...  Training loss: 2.7960...  2.7557 sec/batch
Epoch: 1/20...  Training Step: 119...  Training loss: 2.7969...  2.8146 sec/batch
Epoch: 1/20...  Training Step: 120...  Training loss: 2.7700...  2.7502 sec/batch
Epoch: 1/20...  Training Step: 121...  Training loss: 2.7939...  3.3374 sec/batch
Epoch: 1/20...  Training Step: 122...  Training loss: 2.7503...  3.5664 sec/batch
Epoch: 1/20...  Training Step: 123...  Training loss: 2.7383...  3.1818 sec/batch
Epoch: 1/20...  Training Step: 124...  Training loss: 2.7486...  3.3094 sec/batch
Epoch: 1/20...  Training Step: 125...  Training loss: 2.7126...  3.1449 sec/batch
Epoch: 1/20...  Training Step: 126...  Training loss: 2.7077...  3.2198 sec/batch
Epoch: 1/20...  Training Step: 127...  Training loss: 2.7848...  3.1555 sec/batch
Epoch: 1/20...  Training Step: 128...  Training loss: 2.7445...  3.0606 sec/batch
Epoch: 1/20...  Training Step: 129...  Training loss: 2.7201...  3.1332 sec/batch
Epoch: 1/20...  Training Step: 130...  Training loss: 2.7276...  3.1819 sec/batch
Epoch: 1/20...  Training Step: 131...  Training loss: 2.7089...  3.1098 sec/batch
Epoch: 1/20...  Training Step: 132...  Training loss: 2.6825...  3.1309 sec/batch
Epoch: 1/20...  Training Step: 133...  Training loss: 2.7002...  3.0713 sec/batch
Epoch: 1/20...  Training Step: 134...  Training loss: 2.6870...  3.3047 sec/batch
Epoch: 1/20...  Training Step: 135...  Training loss: 2.6414...  3.0939 sec/batch
Epoch: 1/20...  Training Step: 136...  Training loss: 2.6361...  3.4532 sec/batch
Epoch: 1/20...  Training Step: 137...  Training loss: 2.6505...  3.2114 sec/batch
Epoch: 1/20...  Training Step: 138...  Training loss: 2.6446...  2.9402 sec/batch
Epoch: 1/20...  Training Step: 139...  Training loss: 2.6483...  2.8333 sec/batch
Epoch: 1/20...  Training Step: 140...  Training loss: 2.6208...  2.7831 sec/batch
Epoch: 1/20...  Training Step: 141...  Training loss: 2.6244...  2.7842 sec/batch
Epoch: 1/20...  Training Step: 142...  Training loss: 2.5926...  2.7807 sec/batch
Epoch: 1/20...  Training Step: 143...  Training loss: 2.6032...  2.7981 sec/batch
Epoch: 1/20...  Training Step: 144...  Training loss: 2.5784...  2.7917 sec/batch
Epoch: 1/20...  Training Step: 145...  Training loss: 2.5881...  2.8161 sec/batch
Epoch: 1/20...  Training Step: 146...  Training loss: 2.6076...  2.7757 sec/batch
Epoch: 1/20...  Training Step: 147...  Training loss: 2.5812...  2.8228 sec/batch
Epoch: 1/20...  Training Step: 148...  Training loss: 2.5994...  2.8808 sec/batch
Epoch: 1/20...  Training Step: 149...  Training loss: 2.5601...  2.8773 sec/batch
Epoch: 1/20...  Training Step: 150...  Training loss: 2.5542...  2.7817 sec/batch
Epoch: 1/20...  Training Step: 151...  Training loss: 2.6072...  2.8291 sec/batch
Epoch: 1/20...  Training Step: 152...  Training loss: 2.6047...  2.7772 sec/batch
Epoch: 1/20...  Training Step: 153...  Training loss: 2.5585...  2.7945 sec/batch
Epoch: 1/20...  Training Step: 154...  Training loss: 2.5611...  2.7593 sec/batch
Epoch: 1/20...  Training Step: 155...  Training loss: 2.5434...  2.7779 sec/batch
Epoch: 1/20...  Training Step: 156...  Training loss: 2.5355...  2.7735 sec/batch
Epoch: 1/20...  Training Step: 157...  Training loss: 2.5234...  2.7951 sec/batch
Epoch: 1/20...  Training Step: 158...  Training loss: 2.5184...  2.8272 sec/batch
Epoch: 1/20...  Training Step: 159...  Training loss: 2.4975...  2.8198 sec/batch
Epoch: 1/20...  Training Step: 160...  Training loss: 2.5479...  2.7803 sec/batch
Epoch: 1/20...  Training Step: 161...  Training loss: 2.5268...  2.7546 sec/batch
Epoch: 1/20...  Training Step: 162...  Training loss: 2.4940...  2.8233 sec/batch
Epoch: 1/20...  Training Step: 163...  Training loss: 2.4904...  2.8255 sec/batch
Epoch: 1/20...  Training Step: 164...  Training loss: 2.5108...  2.8539 sec/batch
Epoch: 1/20...  Training Step: 165...  Training loss: 2.5131...  2.8287 sec/batch
Epoch: 1/20...  Training Step: 166...  Training loss: 2.5044...  2.7883 sec/batch
Epoch: 1/20...  Training Step: 167...  Training loss: 2.5036...  2.7942 sec/batch
Epoch: 1/20...  Training Step: 168...  Training loss: 2.4975...  2.8088 sec/batch
Epoch: 1/20...  Training Step: 169...  Training loss: 2.5025...  2.7924 sec/batch
Epoch: 1/20...  Training Step: 170...  Training loss: 2.4660...  3.1194 sec/batch
Epoch: 1/20...  Training Step: 171...  Training loss: 2.4911...  2.9557 sec/batch
Epoch: 1/20...  Training Step: 172...  Training loss: 2.5114...  2.8134 sec/batch
Epoch: 1/20...  Training Step: 173...  Training loss: 2.5680...  2.7941 sec/batch
Epoch: 1/20...  Training Step: 174...  Training loss: 2.5563...  3.1147 sec/batch
Epoch: 1/20...  Training Step: 175...  Training loss: 2.5257...  2.8290 sec/batch
Epoch: 1/20...  Training Step: 176...  Training loss: 2.4884...  2.8079 sec/batch
Epoch: 1/20...  Training Step: 177...  Training loss: 2.4709...  2.7909 sec/batch
Epoch: 1/20...  Training Step: 178...  Training loss: 2.4569...  2.8611 sec/batch
Epoch: 1/20...  Training Step: 179...  Training loss: 2.4524...  2.7795 sec/batch
Epoch: 1/20...  Training Step: 180...  Training loss: 2.4486...  2.8833 sec/batch
Epoch: 1/20...  Training Step: 181...  Training loss: 2.4520...  3.0222 sec/batch
Epoch: 1/20...  Training Step: 182...  Training loss: 2.4678...  2.8530 sec/batch
Epoch: 1/20...  Training Step: 183...  Training loss: 2.4460...  2.8200 sec/batch
Epoch: 1/20...  Training Step: 184...  Training loss: 2.4748...  2.8213 sec/batch
Epoch: 1/20...  Training Step: 185...  Training loss: 2.5008...  2.7990 sec/batch
Epoch: 1/20...  Training Step: 186...  Training loss: 2.4578...  2.8290 sec/batch
Epoch: 1/20...  Training Step: 187...  Training loss: 2.4289...  2.9105 sec/batch
Epoch: 1/20...  Training Step: 188...  Training loss: 2.4268...  3.0415 sec/batch
Epoch: 1/20...  Training Step: 189...  Training loss: 2.4246...  3.0628 sec/batch
Epoch: 1/20...  Training Step: 190...  Training loss: 2.4322...  2.9593 sec/batch
Epoch: 1/20...  Training Step: 191...  Training loss: 2.4488...  2.9461 sec/batch
Epoch: 1/20...  Training Step: 192...  Training loss: 2.4014...  2.9305 sec/batch
Epoch: 1/20...  Training Step: 193...  Training loss: 2.4325...  2.8137 sec/batch
Epoch: 1/20...  Training Step: 194...  Training loss: 2.4265...  2.8873 sec/batch
Epoch: 1/20...  Training Step: 195...  Training loss: 2.4163...  2.8281 sec/batch
Epoch: 1/20...  Training Step: 196...  Training loss: 2.4053...  2.9337 sec/batch
Epoch: 1/20...  Training Step: 197...  Training loss: 2.4022...  3.0063 sec/batch
Epoch: 1/20...  Training Step: 198...  Training loss: 2.4017...  2.8359 sec/batch
Epoch: 2/20...  Training Step: 199...  Training loss: 2.4657...  2.8382 sec/batch
Epoch: 2/20...  Training Step: 200...  Training loss: 2.3832...  2.9728 sec/batch
Epoch: 2/20...  Training Step: 201...  Training loss: 2.3827...  3.0405 sec/batch
Epoch: 2/20...  Training Step: 202...  Training loss: 2.4054...  3.0270 sec/batch
Epoch: 2/20...  Training Step: 203...  Training loss: 2.4031...  2.8261 sec/batch
Epoch: 2/20...  Training Step: 204...  Training loss: 2.3974...  2.8433 sec/batch
Epoch: 2/20...  Training Step: 205...  Training loss: 2.3943...  2.7861 sec/batch
Epoch: 2/20...  Training Step: 206...  Training loss: 2.4074...  2.8638 sec/batch
Epoch: 2/20...  Training Step: 207...  Training loss: 2.4175...  2.8474 sec/batch
Epoch: 2/20...  Training Step: 208...  Training loss: 2.3886...  2.7453 sec/batch
Epoch: 2/20...  Training Step: 209...  Training loss: 2.3715...  2.8596 sec/batch
Epoch: 2/20...  Training Step: 210...  Training loss: 2.3841...  2.8101 sec/batch
Epoch: 2/20...  Training Step: 211...  Training loss: 2.3845...  2.8163 sec/batch
Epoch: 2/20...  Training Step: 212...  Training loss: 2.4153...  2.8771 sec/batch
Epoch: 2/20...  Training Step: 213...  Training loss: 2.3875...  2.7883 sec/batch
Epoch: 2/20...  Training Step: 214...  Training loss: 2.3771...  2.8993 sec/batch
Epoch: 2/20...  Training Step: 215...  Training loss: 2.3686...  3.0331 sec/batch
Epoch: 2/20...  Training Step: 216...  Training loss: 2.4062...  2.9120 sec/batch
Epoch: 2/20...  Training Step: 217...  Training loss: 2.3826...  2.8200 sec/batch
Epoch: 2/20...  Training Step: 218...  Training loss: 2.3443...  2.8584 sec/batch
Epoch: 2/20...  Training Step: 219...  Training loss: 2.3564...  2.8731 sec/batch
Epoch: 2/20...  Training Step: 220...  Training loss: 2.4050...  2.8235 sec/batch
Epoch: 2/20...  Training Step: 221...  Training loss: 2.3665...  2.9339 sec/batch
Epoch: 2/20...  Training Step: 222...  Training loss: 2.3495...  2.9527 sec/batch
Epoch: 2/20...  Training Step: 223...  Training loss: 2.3403...  2.9323 sec/batch
Epoch: 2/20...  Training Step: 224...  Training loss: 2.3425...  2.9392 sec/batch
Epoch: 2/20...  Training Step: 225...  Training loss: 2.3397...  2.8130 sec/batch
Epoch: 2/20...  Training Step: 226...  Training loss: 2.3498...  2.8908 sec/batch
Epoch: 2/20...  Training Step: 227...  Training loss: 2.3677...  3.0088 sec/batch
Epoch: 2/20...  Training Step: 228...  Training loss: 2.3533...  2.9533 sec/batch
Epoch: 2/20...  Training Step: 229...  Training loss: 2.3573...  2.8362 sec/batch
Epoch: 2/20...  Training Step: 230...  Training loss: 2.3305...  2.8372 sec/batch
Epoch: 2/20...  Training Step: 231...  Training loss: 2.3202...  3.1668 sec/batch
Epoch: 2/20...  Training Step: 232...  Training loss: 2.3456...  3.0945 sec/batch
Epoch: 2/20...  Training Step: 233...  Training loss: 2.3201...  2.9297 sec/batch
Epoch: 2/20...  Training Step: 234...  Training loss: 2.3432...  2.9278 sec/batch
Epoch: 2/20...  Training Step: 235...  Training loss: 2.3337...  2.8150 sec/batch
Epoch: 2/20...  Training Step: 236...  Training loss: 2.2989...  2.9028 sec/batch
Epoch: 2/20...  Training Step: 237...  Training loss: 2.3014...  2.9511 sec/batch
Epoch: 2/20...  Training Step: 238...  Training loss: 2.2969...  2.8690 sec/batch
Epoch: 2/20...  Training Step: 239...  Training loss: 2.2943...  2.9241 sec/batch
Epoch: 2/20...  Training Step: 240...  Training loss: 2.3030...  2.8887 sec/batch
Epoch: 2/20...  Training Step: 241...  Training loss: 2.2862...  2.8795 sec/batch
Epoch: 2/20...  Training Step: 242...  Training loss: 2.2922...  2.9633 sec/batch
Epoch: 2/20...  Training Step: 243...  Training loss: 2.2926...  2.7776 sec/batch
Epoch: 2/20...  Training Step: 244...  Training loss: 2.2466...  2.8324 sec/batch
Epoch: 2/20...  Training Step: 245...  Training loss: 2.3069...  2.8481 sec/batch
Epoch: 2/20...  Training Step: 246...  Training loss: 2.2795...  2.8231 sec/batch
Epoch: 2/20...  Training Step: 247...  Training loss: 2.2828...  2.7652 sec/batch
Epoch: 2/20...  Training Step: 248...  Training loss: 2.3154...  2.8968 sec/batch
Epoch: 2/20...  Training Step: 249...  Training loss: 2.2570...  2.9288 sec/batch
Epoch: 2/20...  Training Step: 250...  Training loss: 2.3008...  2.7647 sec/batch
Epoch: 2/20...  Training Step: 251...  Training loss: 2.2742...  2.8861 sec/batch
Epoch: 2/20...  Training Step: 252...  Training loss: 2.2692...  2.9438 sec/batch
Epoch: 2/20...  Training Step: 253...  Training loss: 2.2675...  2.9761 sec/batch
Epoch: 2/20...  Training Step: 254...  Training loss: 2.2805...  2.8803 sec/batch
Epoch: 2/20...  Training Step: 255...  Training loss: 2.2649...  2.8826 sec/batch
Epoch: 2/20...  Training Step: 256...  Training loss: 2.2555...  2.8014 sec/batch
Epoch: 2/20...  Training Step: 257...  Training loss: 2.2526...  2.8860 sec/batch
Epoch: 2/20...  Training Step: 258...  Training loss: 2.2886...  2.9743 sec/batch
Epoch: 2/20...  Training Step: 259...  Training loss: 2.2584...  2.9884 sec/batch
Epoch: 2/20...  Training Step: 260...  Training loss: 2.2807...  2.8213 sec/batch
Epoch: 2/20...  Training Step: 261...  Training loss: 2.2845...  3.0079 sec/batch
Epoch: 2/20...  Training Step: 262...  Training loss: 2.2452...  2.8365 sec/batch
Epoch: 2/20...  Training Step: 263...  Training loss: 2.2373...  2.8564 sec/batch
Epoch: 2/20...  Training Step: 264...  Training loss: 2.2639...  2.7858 sec/batch
Epoch: 2/20...  Training Step: 265...  Training loss: 2.2582...  3.0323 sec/batch
Epoch: 2/20...  Training Step: 266...  Training loss: 2.2192...  2.9927 sec/batch
Epoch: 2/20...  Training Step: 267...  Training loss: 2.2192...  2.7966 sec/batch
Epoch: 2/20...  Training Step: 268...  Training loss: 2.2434...  2.8186 sec/batch
Epoch: 2/20...  Training Step: 269...  Training loss: 2.2652...  2.8411 sec/batch
Epoch: 2/20...  Training Step: 270...  Training loss: 2.2462...  2.7620 sec/batch
Epoch: 2/20...  Training Step: 271...  Training loss: 2.2402...  2.7716 sec/batch
Epoch: 2/20...  Training Step: 272...  Training loss: 2.2239...  2.8540 sec/batch
Epoch: 2/20...  Training Step: 273...  Training loss: 2.2228...  2.9072 sec/batch
Epoch: 2/20...  Training Step: 274...  Training loss: 2.2703...  2.8609 sec/batch
Epoch: 2/20...  Training Step: 275...  Training loss: 2.2255...  2.8419 sec/batch
Epoch: 2/20...  Training Step: 276...  Training loss: 2.2366...  2.8948 sec/batch
Epoch: 2/20...  Training Step: 277...  Training loss: 2.2062...  2.8683 sec/batch
Epoch: 2/20...  Training Step: 278...  Training loss: 2.2059...  2.8140 sec/batch
Epoch: 2/20...  Training Step: 279...  Training loss: 2.1985...  2.7903 sec/batch
Epoch: 2/20...  Training Step: 280...  Training loss: 2.2448...  2.8794 sec/batch
Epoch: 2/20...  Training Step: 281...  Training loss: 2.1903...  2.7766 sec/batch
Epoch: 2/20...  Training Step: 282...  Training loss: 2.1921...  2.7767 sec/batch
Epoch: 2/20...  Training Step: 283...  Training loss: 2.1738...  2.8136 sec/batch
Epoch: 2/20...  Training Step: 284...  Training loss: 2.1910...  2.9697 sec/batch
Epoch: 2/20...  Training Step: 285...  Training loss: 2.2084...  2.7845 sec/batch
Epoch: 2/20...  Training Step: 286...  Training loss: 2.1964...  2.7991 sec/batch
Epoch: 2/20...  Training Step: 287...  Training loss: 2.1876...  2.8801 sec/batch
Epoch: 2/20...  Training Step: 288...  Training loss: 2.2082...  2.7615 sec/batch
Epoch: 2/20...  Training Step: 289...  Training loss: 2.1779...  2.7884 sec/batch
Epoch: 2/20...  Training Step: 290...  Training loss: 2.1989...  2.7728 sec/batch
Epoch: 2/20...  Training Step: 291...  Training loss: 2.1662...  2.7964 sec/batch
Epoch: 2/20...  Training Step: 292...  Training loss: 2.1804...  2.8181 sec/batch
Epoch: 2/20...  Training Step: 293...  Training loss: 2.1612...  2.8081 sec/batch
Epoch: 2/20...  Training Step: 294...  Training loss: 2.1743...  2.7685 sec/batch
Epoch: 2/20...  Training Step: 295...  Training loss: 2.1771...  2.8007 sec/batch
Epoch: 2/20...  Training Step: 296...  Training loss: 2.1746...  2.7748 sec/batch
Epoch: 2/20...  Training Step: 297...  Training loss: 2.1579...  2.8973 sec/batch
Epoch: 2/20...  Training Step: 298...  Training loss: 2.1370...  2.8143 sec/batch
Epoch: 2/20...  Training Step: 299...  Training loss: 2.1934...  2.7769 sec/batch
Epoch: 2/20...  Training Step: 300...  Training loss: 2.1790...  2.7724 sec/batch
Epoch: 2/20...  Training Step: 301...  Training loss: 2.1559...  2.7621 sec/batch
Epoch: 2/20...  Training Step: 302...  Training loss: 2.1632...  2.7743 sec/batch
Epoch: 2/20...  Training Step: 303...  Training loss: 2.1567...  2.8460 sec/batch
Epoch: 2/20...  Training Step: 304...  Training loss: 2.1676...  2.8215 sec/batch
Epoch: 2/20...  Training Step: 305...  Training loss: 2.1592...  2.8463 sec/batch
Epoch: 2/20...  Training Step: 306...  Training loss: 2.1828...  2.8022 sec/batch
Epoch: 2/20...  Training Step: 307...  Training loss: 2.1710...  2.8033 sec/batch
Epoch: 2/20...  Training Step: 308...  Training loss: 2.1489...  2.9592 sec/batch
Epoch: 2/20...  Training Step: 309...  Training loss: 2.1633...  2.7580 sec/batch
Epoch: 2/20...  Training Step: 310...  Training loss: 2.1581...  2.7781 sec/batch
Epoch: 2/20...  Training Step: 311...  Training loss: 2.1441...  2.9087 sec/batch
Epoch: 2/20...  Training Step: 312...  Training loss: 2.1384...  2.7790 sec/batch
Epoch: 2/20...  Training Step: 313...  Training loss: 2.1320...  2.9106 sec/batch
Epoch: 2/20...  Training Step: 314...  Training loss: 2.1042...  2.9873 sec/batch
Epoch: 2/20...  Training Step: 315...  Training loss: 2.1562...  3.3027 sec/batch
Epoch: 2/20...  Training Step: 316...  Training loss: 2.1354...  3.0076 sec/batch
Epoch: 2/20...  Training Step: 317...  Training loss: 2.1556...  2.8664 sec/batch
Epoch: 2/20...  Training Step: 318...  Training loss: 2.1428...  2.7758 sec/batch
Epoch: 2/20...  Training Step: 319...  Training loss: 2.1578...  2.9017 sec/batch
Epoch: 2/20...  Training Step: 320...  Training loss: 2.1229...  2.8569 sec/batch
Epoch: 2/20...  Training Step: 321...  Training loss: 2.1089...  2.8812 sec/batch
Epoch: 2/20...  Training Step: 322...  Training loss: 2.1509...  2.8321 sec/batch
Epoch: 2/20...  Training Step: 323...  Training loss: 2.1394...  2.9336 sec/batch
Epoch: 2/20...  Training Step: 324...  Training loss: 2.0948...  2.9070 sec/batch
Epoch: 2/20...  Training Step: 325...  Training loss: 2.1503...  2.9164 sec/batch
Epoch: 2/20...  Training Step: 326...  Training loss: 2.1424...  2.9447 sec/batch
Epoch: 2/20...  Training Step: 327...  Training loss: 2.1228...  2.9375 sec/batch
Epoch: 2/20...  Training Step: 328...  Training loss: 2.1358...  2.7784 sec/batch
Epoch: 2/20...  Training Step: 329...  Training loss: 2.1080...  2.7468 sec/batch
Epoch: 2/20...  Training Step: 330...  Training loss: 2.1015...  2.9882 sec/batch
Epoch: 2/20...  Training Step: 331...  Training loss: 2.1350...  2.8036 sec/batch
Epoch: 2/20...  Training Step: 332...  Training loss: 2.1271...  2.9321 sec/batch
Epoch: 2/20...  Training Step: 333...  Training loss: 2.1111...  3.0220 sec/batch
Epoch: 2/20...  Training Step: 334...  Training loss: 2.1319...  2.8791 sec/batch
Epoch: 2/20...  Training Step: 335...  Training loss: 2.1174...  3.0156 sec/batch
Epoch: 2/20...  Training Step: 336...  Training loss: 2.1182...  2.8627 sec/batch
Epoch: 2/20...  Training Step: 337...  Training loss: 2.1467...  2.8321 sec/batch
Epoch: 2/20...  Training Step: 338...  Training loss: 2.1094...  2.8500 sec/batch
Epoch: 2/20...  Training Step: 339...  Training loss: 2.1234...  2.8669 sec/batch
Epoch: 2/20...  Training Step: 340...  Training loss: 2.1075...  2.8668 sec/batch
Epoch: 2/20...  Training Step: 341...  Training loss: 2.1149...  2.9441 sec/batch
Epoch: 2/20...  Training Step: 342...  Training loss: 2.0940...  2.7967 sec/batch
Epoch: 2/20...  Training Step: 343...  Training loss: 2.0943...  2.7866 sec/batch
Epoch: 2/20...  Training Step: 344...  Training loss: 2.1150...  3.1184 sec/batch
Epoch: 2/20...  Training Step: 345...  Training loss: 2.1153...  3.0297 sec/batch
Epoch: 2/20...  Training Step: 346...  Training loss: 2.1214...  2.7921 sec/batch
...
Epoch: 2/20...  Training Step: 396...  Training loss: 2.0123...  3.3056 sec/batch
Epoch: 3/20...  Training Step: 397...  Training loss: 2.0857...  3.3070 sec/batch
...
Epoch: 3/20...  Training Step: 594...  Training loss: 1.7989...  2.7599 sec/batch
Epoch: 4/20...  Training Step: 595...  Training loss: 1.8811...  2.7942 sec/batch
...
Epoch: 4/20...  Training Step: 792...  Training loss: 1.6391...  2.8548 sec/batch
Epoch: 5/20...  Training Step: 793...  Training loss: 1.7553...  2.8126 sec/batch
...
Epoch: 5/20...  Training Step: 990...  Training loss: 1.5541...  2.7910 sec/batch
Epoch: 6/20...  Training Step: 991...  Training loss: 1.6613...  2.7826 sec/batch
...
Epoch: 6/20...  Training Step: 1125...  Training loss: 1.5199...  2.7946 sec/batch
Epoch: 6/20...  Training Step: 1126...  Training loss: 1.5149...  2.7849 sec/batch
Epoch: 6/20...  Training Step: 1127...  Training loss: 1.5516...  2.8133 sec/batch
Epoch: 6/20...  Training Step: 1128...  Training loss: 1.5169...  2.7885 sec/batch
Epoch: 6/20...  Training Step: 1129...  Training loss: 1.5269...  2.7720 sec/batch
Epoch: 6/20...  Training Step: 1130...  Training loss: 1.5156...  2.7810 sec/batch
Epoch: 6/20...  Training Step: 1131...  Training loss: 1.5717...  2.8146 sec/batch
Epoch: 6/20...  Training Step: 1132...  Training loss: 1.5139...  2.8039 sec/batch
Epoch: 6/20...  Training Step: 1133...  Training loss: 1.4980...  2.8448 sec/batch
Epoch: 6/20...  Training Step: 1134...  Training loss: 1.5398...  2.8451 sec/batch
Epoch: 6/20...  Training Step: 1135...  Training loss: 1.4851...  2.8000 sec/batch
Epoch: 6/20...  Training Step: 1136...  Training loss: 1.5286...  2.7645 sec/batch
Epoch: 6/20...  Training Step: 1137...  Training loss: 1.5133...  2.7991 sec/batch
Epoch: 6/20...  Training Step: 1138...  Training loss: 1.5492...  2.7904 sec/batch
Epoch: 6/20...  Training Step: 1139...  Training loss: 1.5277...  2.7616 sec/batch
Epoch: 6/20...  Training Step: 1140...  Training loss: 1.5000...  2.7841 sec/batch
Epoch: 6/20...  Training Step: 1141...  Training loss: 1.4730...  2.8203 sec/batch
Epoch: 6/20...  Training Step: 1142...  Training loss: 1.4987...  2.8125 sec/batch
Epoch: 6/20...  Training Step: 1143...  Training loss: 1.5328...  2.8401 sec/batch
Epoch: 6/20...  Training Step: 1144...  Training loss: 1.5122...  2.8051 sec/batch
Epoch: 6/20...  Training Step: 1145...  Training loss: 1.5138...  2.7949 sec/batch
Epoch: 6/20...  Training Step: 1146...  Training loss: 1.5088...  2.8057 sec/batch
Epoch: 6/20...  Training Step: 1147...  Training loss: 1.5313...  2.7794 sec/batch
Epoch: 6/20...  Training Step: 1148...  Training loss: 1.5133...  2.7618 sec/batch
Epoch: 6/20...  Training Step: 1149...  Training loss: 1.4747...  2.7928 sec/batch
Epoch: 6/20...  Training Step: 1150...  Training loss: 1.5310...  2.8049 sec/batch
Epoch: 6/20...  Training Step: 1151...  Training loss: 1.5452...  2.7685 sec/batch
Epoch: 6/20...  Training Step: 1152...  Training loss: 1.5097...  2.7757 sec/batch
Epoch: 6/20...  Training Step: 1153...  Training loss: 1.5144...  2.7600 sec/batch
Epoch: 6/20...  Training Step: 1154...  Training loss: 1.5106...  2.7664 sec/batch
Epoch: 6/20...  Training Step: 1155...  Training loss: 1.5081...  2.7959 sec/batch
Epoch: 6/20...  Training Step: 1156...  Training loss: 1.5106...  2.8129 sec/batch
Epoch: 6/20...  Training Step: 1157...  Training loss: 1.5349...  3.2753 sec/batch
Epoch: 6/20...  Training Step: 1158...  Training loss: 1.5798...  2.7777 sec/batch
Epoch: 6/20...  Training Step: 1159...  Training loss: 1.5143...  2.8177 sec/batch
Epoch: 6/20...  Training Step: 1160...  Training loss: 1.5001...  2.7874 sec/batch
Epoch: 6/20...  Training Step: 1161...  Training loss: 1.5015...  2.7613 sec/batch
Epoch: 6/20...  Training Step: 1162...  Training loss: 1.4913...  2.8032 sec/batch
Epoch: 6/20...  Training Step: 1163...  Training loss: 1.5453...  2.7933 sec/batch
Epoch: 6/20...  Training Step: 1164...  Training loss: 1.5010...  2.8173 sec/batch
Epoch: 6/20...  Training Step: 1165...  Training loss: 1.5227...  2.7843 sec/batch
Epoch: 6/20...  Training Step: 1166...  Training loss: 1.4907...  2.7888 sec/batch
Epoch: 6/20...  Training Step: 1167...  Training loss: 1.4768...  2.8153 sec/batch
Epoch: 6/20...  Training Step: 1168...  Training loss: 1.5397...  2.7698 sec/batch
Epoch: 6/20...  Training Step: 1169...  Training loss: 1.4832...  2.8152 sec/batch
Epoch: 6/20...  Training Step: 1170...  Training loss: 1.4749...  2.7697 sec/batch
Epoch: 6/20...  Training Step: 1171...  Training loss: 1.4829...  2.7802 sec/batch
Epoch: 6/20...  Training Step: 1172...  Training loss: 1.4978...  2.7873 sec/batch
Epoch: 6/20...  Training Step: 1173...  Training loss: 1.5006...  2.7656 sec/batch
Epoch: 6/20...  Training Step: 1174...  Training loss: 1.5004...  2.7761 sec/batch
Epoch: 6/20...  Training Step: 1175...  Training loss: 1.4966...  2.7638 sec/batch
Epoch: 6/20...  Training Step: 1176...  Training loss: 1.4846...  2.7913 sec/batch
Epoch: 6/20...  Training Step: 1177...  Training loss: 1.5244...  2.7984 sec/batch
Epoch: 6/20...  Training Step: 1178...  Training loss: 1.4897...  2.7885 sec/batch
Epoch: 6/20...  Training Step: 1179...  Training loss: 1.5067...  2.7726 sec/batch
Epoch: 6/20...  Training Step: 1180...  Training loss: 1.4947...  2.8068 sec/batch
Epoch: 6/20...  Training Step: 1181...  Training loss: 1.4892...  2.7865 sec/batch
Epoch: 6/20...  Training Step: 1182...  Training loss: 1.4750...  2.7753 sec/batch
Epoch: 6/20...  Training Step: 1183...  Training loss: 1.5048...  2.7832 sec/batch
Epoch: 6/20...  Training Step: 1184...  Training loss: 1.4700...  2.7971 sec/batch
Epoch: 6/20...  Training Step: 1185...  Training loss: 1.4683...  2.8402 sec/batch
Epoch: 6/20...  Training Step: 1186...  Training loss: 1.5065...  2.7868 sec/batch
Epoch: 6/20...  Training Step: 1187...  Training loss: 1.4818...  2.7900 sec/batch
Epoch: 6/20...  Training Step: 1188...  Training loss: 1.4820...  2.7940 sec/batch
Epoch: 7/20...  Training Step: 1189...  Training loss: 1.5946...  2.7993 sec/batch
Epoch: 7/20...  Training Step: 1190...  Training loss: 1.5051...  2.7698 sec/batch
...
Epoch: 7/20...  Training Step: 1385...  Training loss: 1.4409...  2.7716 sec/batch
Epoch: 7/20...  Training Step: 1386...  Training loss: 1.4341...  2.7862 sec/batch
Epoch: 8/20...  Training Step: 1387...  Training loss: 1.5477...  2.7811 sec/batch
Epoch: 8/20...  Training Step: 1388...  Training loss: 1.4486...  2.8106 sec/batch
...
Epoch: 8/20...  Training Step: 1583...  Training loss: 1.3826...  2.7877 sec/batch
Epoch: 8/20...  Training Step: 1584...  Training loss: 1.3853...  2.8143 sec/batch
Epoch: 9/20...  Training Step: 1585...  Training loss: 1.5109...  2.7859 sec/batch
Epoch: 9/20...  Training Step: 1586...  Training loss: 1.4060...  2.7807 sec/batch
...
Epoch: 9/20...  Training Step: 1781...  Training loss: 1.3562...  2.8468 sec/batch
Epoch: 9/20...  Training Step: 1782...  Training loss: 1.3605...  2.7514 sec/batch
Epoch: 10/20...  Training Step: 1783...  Training loss: 1.5174...  2.7834 sec/batch
Epoch: 10/20...  Training Step: 1784...  Training loss: 1.3765...  2.7879 sec/batch
...
Epoch: 10/20...  Training Step: 1980...  Training loss: 1.3239...  2.7756 sec/batch
Epoch: 11/20...  Training Step: 1981...  Training loss: 1.4831...  2.8705 sec/batch
Epoch: 11/20...  Training Step: 1982...  Training loss: 1.3437...  2.8147 sec/batch
...
Epoch: 11/20...  Training Step: 2178...  Training loss: 1.3021...  2.9999 sec/batch
Epoch: 12/20...  Training Step: 2179...  Training loss: 1.4881...  2.7620 sec/batch
Epoch: 12/20...  Training Step: 2180...  Training loss: 1.3254...  2.8229 sec/batch
...
Epoch: 12/20...  Training Step: 2376...  Training loss: 1.2813...  2.7786 sec/batch
Epoch: 13/20...  Training Step: 2377...  Training loss: 1.4407...  2.8136 sec/batch
Epoch: 13/20...  Training Step: 2378...  Training loss: 1.3125...  2.7887 sec/batch
...
Epoch: 13/20...  Training Step: 2489...  Training loss: 1.2876...  2.8158 sec/batch
Epoch: 13/20...  Training Step: 2490...  Training loss: 1.2731...  2.7548 sec/batch
Epoch: 13/20...  Training Step: 2491...  Training loss: 1.2560...  2.8049 sec/batch
Epoch: 13/20...  Training Step: 2492...  Training loss: 1.2480...  2.7757 sec/batch
Epoch: 13/20...  Training Step: 2493...  Training loss: 1.2948...  2.8066 sec/batch
Epoch: 13/20...  Training Step: 2494...  Training loss: 1.2884...  2.7870 sec/batch
Epoch: 13/20...  Training Step: 2495...  Training loss: 1.2793...  2.7884 sec/batch
Epoch: 13/20...  Training Step: 2496...  Training loss: 1.2816...  2.8184 sec/batch
Epoch: 13/20...  Training Step: 2497...  Training loss: 1.2900...  2.8002 sec/batch
Epoch: 13/20...  Training Step: 2498...  Training loss: 1.2507...  2.8448 sec/batch
Epoch: 13/20...  Training Step: 2499...  Training loss: 1.2355...  2.7848 sec/batch
Epoch: 13/20...  Training Step: 2500...  Training loss: 1.2765...  2.8124 sec/batch
Epoch: 13/20...  Training Step: 2501...  Training loss: 1.2618...  2.8058 sec/batch
Epoch: 13/20...  Training Step: 2502...  Training loss: 1.2561...  2.7852 sec/batch
Epoch: 13/20...  Training Step: 2503...  Training loss: 1.2913...  2.7607 sec/batch
Epoch: 13/20...  Training Step: 2504...  Training loss: 1.2963...  2.8372 sec/batch
Epoch: 13/20...  Training Step: 2505...  Training loss: 1.2624...  2.7983 sec/batch
Epoch: 13/20...  Training Step: 2506...  Training loss: 1.2401...  2.8443 sec/batch
Epoch: 13/20...  Training Step: 2507...  Training loss: 1.2326...  2.7965 sec/batch
Epoch: 13/20...  Training Step: 2508...  Training loss: 1.2656...  2.7661 sec/batch
Epoch: 13/20...  Training Step: 2509...  Training loss: 1.3044...  2.7898 sec/batch
Epoch: 13/20...  Training Step: 2510...  Training loss: 1.2887...  2.7773 sec/batch
Epoch: 13/20...  Training Step: 2511...  Training loss: 1.2914...  2.7557 sec/batch
Epoch: 13/20...  Training Step: 2512...  Training loss: 1.2779...  2.7843 sec/batch
Epoch: 13/20...  Training Step: 2513...  Training loss: 1.3121...  3.7765 sec/batch
Epoch: 13/20...  Training Step: 2514...  Training loss: 1.3057...  2.7845 sec/batch
Epoch: 13/20...  Training Step: 2515...  Training loss: 1.2873...  2.9060 sec/batch
Epoch: 13/20...  Training Step: 2516...  Training loss: 1.2974...  2.7773 sec/batch
Epoch: 13/20...  Training Step: 2517...  Training loss: 1.3315...  2.8333 sec/batch
Epoch: 13/20...  Training Step: 2518...  Training loss: 1.2922...  2.8086 sec/batch
Epoch: 13/20...  Training Step: 2519...  Training loss: 1.2754...  2.7987 sec/batch
Epoch: 13/20...  Training Step: 2520...  Training loss: 1.3135...  2.8014 sec/batch
Epoch: 13/20...  Training Step: 2521...  Training loss: 1.2680...  2.8013 sec/batch
Epoch: 13/20...  Training Step: 2522...  Training loss: 1.3095...  2.8525 sec/batch
Epoch: 13/20...  Training Step: 2523...  Training loss: 1.2966...  2.7945 sec/batch
Epoch: 13/20...  Training Step: 2524...  Training loss: 1.3119...  2.8011 sec/batch
Epoch: 13/20...  Training Step: 2525...  Training loss: 1.3095...  2.7722 sec/batch
Epoch: 13/20...  Training Step: 2526...  Training loss: 1.2750...  2.7492 sec/batch
Epoch: 13/20...  Training Step: 2527...  Training loss: 1.2489...  2.8301 sec/batch
Epoch: 13/20...  Training Step: 2528...  Training loss: 1.2575...  2.8111 sec/batch
Epoch: 13/20...  Training Step: 2529...  Training loss: 1.2906...  2.7877 sec/batch
Epoch: 13/20...  Training Step: 2530...  Training loss: 1.2885...  2.7727 sec/batch
Epoch: 13/20...  Training Step: 2531...  Training loss: 1.2843...  2.7596 sec/batch
Epoch: 13/20...  Training Step: 2532...  Training loss: 1.2840...  2.7918 sec/batch
Epoch: 13/20...  Training Step: 2533...  Training loss: 1.2832...  2.8083 sec/batch
Epoch: 13/20...  Training Step: 2534...  Training loss: 1.2711...  2.7756 sec/batch
Epoch: 13/20...  Training Step: 2535...  Training loss: 1.2507...  2.8019 sec/batch
Epoch: 13/20...  Training Step: 2536...  Training loss: 1.3074...  2.7964 sec/batch
Epoch: 13/20...  Training Step: 2537...  Training loss: 1.3038...  2.8657 sec/batch
Epoch: 13/20...  Training Step: 2538...  Training loss: 1.2906...  2.8241 sec/batch
Epoch: 13/20...  Training Step: 2539...  Training loss: 1.2866...  2.7709 sec/batch
Epoch: 13/20...  Training Step: 2540...  Training loss: 1.2829...  2.7847 sec/batch
Epoch: 13/20...  Training Step: 2541...  Training loss: 1.2823...  2.8110 sec/batch
Epoch: 13/20...  Training Step: 2542...  Training loss: 1.2842...  2.7633 sec/batch
Epoch: 13/20...  Training Step: 2543...  Training loss: 1.3148...  2.8144 sec/batch
Epoch: 13/20...  Training Step: 2544...  Training loss: 1.3505...  2.7786 sec/batch
Epoch: 13/20...  Training Step: 2545...  Training loss: 1.2839...  2.7973 sec/batch
Epoch: 13/20...  Training Step: 2546...  Training loss: 1.2830...  2.9392 sec/batch
Epoch: 13/20...  Training Step: 2547...  Training loss: 1.2833...  2.7973 sec/batch
Epoch: 13/20...  Training Step: 2548...  Training loss: 1.2652...  2.7849 sec/batch
Epoch: 13/20...  Training Step: 2549...  Training loss: 1.3051...  2.8052 sec/batch
Epoch: 13/20...  Training Step: 2550...  Training loss: 1.2770...  2.8292 sec/batch
Epoch: 13/20...  Training Step: 2551...  Training loss: 1.2972...  2.7884 sec/batch
Epoch: 13/20...  Training Step: 2552...  Training loss: 1.2484...  2.7878 sec/batch
Epoch: 13/20...  Training Step: 2553...  Training loss: 1.2760...  2.7883 sec/batch
Epoch: 13/20...  Training Step: 2554...  Training loss: 1.3206...  2.7788 sec/batch
Epoch: 13/20...  Training Step: 2555...  Training loss: 1.2587...  2.8134 sec/batch
Epoch: 13/20...  Training Step: 2556...  Training loss: 1.2640...  2.7935 sec/batch
Epoch: 13/20...  Training Step: 2557...  Training loss: 1.2623...  2.8143 sec/batch
Epoch: 13/20...  Training Step: 2558...  Training loss: 1.2774...  2.8088 sec/batch
Epoch: 13/20...  Training Step: 2559...  Training loss: 1.2766...  2.8395 sec/batch
Epoch: 13/20...  Training Step: 2560...  Training loss: 1.2787...  2.7852 sec/batch
Epoch: 13/20...  Training Step: 2561...  Training loss: 1.2730...  2.7773 sec/batch
Epoch: 13/20...  Training Step: 2562...  Training loss: 1.2503...  2.7870 sec/batch
Epoch: 13/20...  Training Step: 2563...  Training loss: 1.3016...  2.7694 sec/batch
Epoch: 13/20...  Training Step: 2564...  Training loss: 1.2713...  2.7677 sec/batch
Epoch: 13/20...  Training Step: 2565...  Training loss: 1.2850...  2.7689 sec/batch
Epoch: 13/20...  Training Step: 2566...  Training loss: 1.2729...  2.8076 sec/batch
Epoch: 13/20...  Training Step: 2567...  Training loss: 1.2566...  2.8510 sec/batch
Epoch: 13/20...  Training Step: 2568...  Training loss: 1.2659...  2.8065 sec/batch
Epoch: 13/20...  Training Step: 2569...  Training loss: 1.2783...  2.7641 sec/batch
Epoch: 13/20...  Training Step: 2570...  Training loss: 1.2564...  2.9809 sec/batch
Epoch: 13/20...  Training Step: 2571...  Training loss: 1.2390...  2.7794 sec/batch
Epoch: 13/20...  Training Step: 2572...  Training loss: 1.2825...  2.7810 sec/batch
Epoch: 13/20...  Training Step: 2573...  Training loss: 1.2742...  2.8240 sec/batch
Epoch: 13/20...  Training Step: 2574...  Training loss: 1.2732...  2.7904 sec/batch
Epoch: 14/20...  Training Step: 2575...  Training loss: 1.4211...  2.7831 sec/batch
Epoch: 14/20...  Training Step: 2576...  Training loss: 1.3152...  2.7731 sec/batch
Epoch: 14/20...  Training Step: 2577...  Training loss: 1.2862...  2.7997 sec/batch
Epoch: 14/20...  Training Step: 2578...  Training loss: 1.2998...  2.8261 sec/batch
Epoch: 14/20...  Training Step: 2579...  Training loss: 1.2662...  2.8019 sec/batch
Epoch: 14/20...  Training Step: 2580...  Training loss: 1.2446...  2.8032 sec/batch
Epoch: 14/20...  Training Step: 2581...  Training loss: 1.2814...  2.8110 sec/batch
Epoch: 14/20...  Training Step: 2582...  Training loss: 1.2865...  2.7389 sec/batch
Epoch: 14/20...  Training Step: 2583...  Training loss: 1.2917...  2.7936 sec/batch
Epoch: 14/20...  Training Step: 2584...  Training loss: 1.2722...  2.7810 sec/batch
Epoch: 14/20...  Training Step: 2585...  Training loss: 1.2667...  2.7599 sec/batch
Epoch: 14/20...  Training Step: 2586...  Training loss: 1.2810...  2.7906 sec/batch
Epoch: 14/20...  Training Step: 2587...  Training loss: 1.2831...  2.7663 sec/batch
Epoch: 14/20...  Training Step: 2588...  Training loss: 1.2973...  2.8013 sec/batch
Epoch: 14/20...  Training Step: 2589...  Training loss: 1.2720...  2.8143 sec/batch
Epoch: 14/20...  Training Step: 2590...  Training loss: 1.2572...  2.7853 sec/batch
Epoch: 14/20...  Training Step: 2591...  Training loss: 1.2853...  2.8010 sec/batch
Epoch: 14/20...  Training Step: 2592...  Training loss: 1.3092...  2.8046 sec/batch
Epoch: 14/20...  Training Step: 2593...  Training loss: 1.2765...  2.7706 sec/batch
Epoch: 14/20...  Training Step: 2594...  Training loss: 1.3055...  2.7644 sec/batch
Epoch: 14/20...  Training Step: 2595...  Training loss: 1.2774...  2.8055 sec/batch
Epoch: 14/20...  Training Step: 2596...  Training loss: 1.2843...  2.8040 sec/batch
Epoch: 14/20...  Training Step: 2597...  Training loss: 1.2799...  2.7832 sec/batch
Epoch: 14/20...  Training Step: 2598...  Training loss: 1.3002...  2.7807 sec/batch
Epoch: 14/20...  Training Step: 2599...  Training loss: 1.2815...  2.7586 sec/batch
Epoch: 14/20...  Training Step: 2600...  Training loss: 1.2427...  2.8116 sec/batch
Epoch: 14/20...  Training Step: 2601...  Training loss: 1.2525...  2.7770 sec/batch
Epoch: 14/20...  Training Step: 2602...  Training loss: 1.2978...  2.8069 sec/batch
Epoch: 14/20...  Training Step: 2603...  Training loss: 1.2903...  2.7958 sec/batch
Epoch: 14/20...  Training Step: 2604...  Training loss: 1.2957...  2.8143 sec/batch
Epoch: 14/20...  Training Step: 2605...  Training loss: 1.2666...  2.7892 sec/batch
Epoch: 14/20...  Training Step: 2606...  Training loss: 1.2582...  2.7832 sec/batch
Epoch: 14/20...  Training Step: 2607...  Training loss: 1.2908...  2.8015 sec/batch
Epoch: 14/20...  Training Step: 2608...  Training loss: 1.2948...  2.8102 sec/batch
Epoch: 14/20...  Training Step: 2609...  Training loss: 1.2658...  2.8033 sec/batch
Epoch: 14/20...  Training Step: 2610...  Training loss: 1.2874...  2.7864 sec/batch
Epoch: 14/20...  Training Step: 2611...  Training loss: 1.2603...  2.7702 sec/batch
Epoch: 14/20...  Training Step: 2612...  Training loss: 1.2446...  2.7933 sec/batch
Epoch: 14/20...  Training Step: 2613...  Training loss: 1.2447...  2.8595 sec/batch
Epoch: 14/20...  Training Step: 2614...  Training loss: 1.2686...  2.7946 sec/batch
Epoch: 14/20...  Training Step: 2615...  Training loss: 1.2622...  2.8239 sec/batch
Epoch: 14/20...  Training Step: 2616...  Training loss: 1.3146...  2.7848 sec/batch
Epoch: 14/20...  Training Step: 2617...  Training loss: 1.2604...  2.7896 sec/batch
Epoch: 14/20...  Training Step: 2618...  Training loss: 1.2553...  2.7779 sec/batch
Epoch: 14/20...  Training Step: 2619...  Training loss: 1.2758...  2.7972 sec/batch
Epoch: 14/20...  Training Step: 2620...  Training loss: 1.2527...  2.7742 sec/batch
Epoch: 14/20...  Training Step: 2621...  Training loss: 1.2737...  2.7822 sec/batch
Epoch: 14/20...  Training Step: 2622...  Training loss: 1.2702...  2.8708 sec/batch
Epoch: 14/20...  Training Step: 2623...  Training loss: 1.2760...  2.7826 sec/batch
Epoch: 14/20...  Training Step: 2624...  Training loss: 1.2950...  2.8234 sec/batch
Epoch: 14/20...  Training Step: 2625...  Training loss: 1.2515...  2.8192 sec/batch
Epoch: 14/20...  Training Step: 2626...  Training loss: 1.3186...  2.8085 sec/batch
Epoch: 14/20...  Training Step: 2627...  Training loss: 1.2838...  2.7903 sec/batch
Epoch: 14/20...  Training Step: 2628...  Training loss: 1.2941...  2.8235 sec/batch
Epoch: 14/20...  Training Step: 2629...  Training loss: 1.2622...  2.8043 sec/batch
Epoch: 14/20...  Training Step: 2630...  Training loss: 1.2683...  2.8144 sec/batch
Epoch: 14/20...  Training Step: 2631...  Training loss: 1.2880...  2.7724 sec/batch
Epoch: 14/20...  Training Step: 2632...  Training loss: 1.2571...  2.7949 sec/batch
Epoch: 14/20...  Training Step: 2633...  Training loss: 1.2561...  2.7789 sec/batch
Epoch: 14/20...  Training Step: 2634...  Training loss: 1.3099...  2.8224 sec/batch
Epoch: 14/20...  Training Step: 2635...  Training loss: 1.2845...  2.8355 sec/batch
Epoch: 14/20...  Training Step: 2636...  Training loss: 1.3176...  2.7908 sec/batch
Epoch: 14/20...  Training Step: 2637...  Training loss: 1.3015...  2.7960 sec/batch
Epoch: 14/20...  Training Step: 2638...  Training loss: 1.2785...  2.8214 sec/batch
Epoch: 14/20...  Training Step: 2639...  Training loss: 1.2729...  2.7959 sec/batch
Epoch: 14/20...  Training Step: 2640...  Training loss: 1.2785...  2.7916 sec/batch
Epoch: 14/20...  Training Step: 2641...  Training loss: 1.2928...  2.7508 sec/batch
Epoch: 14/20...  Training Step: 2642...  Training loss: 1.2693...  2.7717 sec/batch
Epoch: 14/20...  Training Step: 2643...  Training loss: 1.2894...  2.7895 sec/batch
Epoch: 14/20...  Training Step: 2644...  Training loss: 1.2626...  2.8105 sec/batch
Epoch: 14/20...  Training Step: 2645...  Training loss: 1.3184...  2.8236 sec/batch
Epoch: 14/20...  Training Step: 2646...  Training loss: 1.2990...  2.7608 sec/batch
Epoch: 14/20...  Training Step: 2647...  Training loss: 1.3076...  2.7821 sec/batch
Epoch: 14/20...  Training Step: 2648...  Training loss: 1.2618...  2.8247 sec/batch
Epoch: 14/20...  Training Step: 2649...  Training loss: 1.2682...  2.8280 sec/batch
Epoch: 14/20...  Training Step: 2650...  Training loss: 1.2949...  2.7949 sec/batch
Epoch: 14/20...  Training Step: 2651...  Training loss: 1.2799...  2.8149 sec/batch
Epoch: 14/20...  Training Step: 2652...  Training loss: 1.2684...  2.7870 sec/batch
Epoch: 14/20...  Training Step: 2653...  Training loss: 1.2419...  2.7933 sec/batch
Epoch: 14/20...  Training Step: 2654...  Training loss: 1.2757...  2.8053 sec/batch
Epoch: 14/20...  Training Step: 2655...  Training loss: 1.2442...  2.7650 sec/batch
Epoch: 14/20...  Training Step: 2656...  Training loss: 1.2697...  2.8468 sec/batch
Epoch: 14/20...  Training Step: 2657...  Training loss: 1.2412...  2.8294 sec/batch
Epoch: 14/20...  Training Step: 2658...  Training loss: 1.2751...  2.7891 sec/batch
Epoch: 14/20...  Training Step: 2659...  Training loss: 1.2517...  2.7940 sec/batch
Epoch: 14/20...  Training Step: 2660...  Training loss: 1.2729...  2.8150 sec/batch
Epoch: 14/20...  Training Step: 2661...  Training loss: 1.2460...  2.7937 sec/batch
Epoch: 14/20...  Training Step: 2662...  Training loss: 1.2489...  3.0150 sec/batch
Epoch: 14/20...  Training Step: 2663...  Training loss: 1.2444...  2.7777 sec/batch
Epoch: 14/20...  Training Step: 2664...  Training loss: 1.2749...  2.8054 sec/batch
Epoch: 14/20...  Training Step: 2665...  Training loss: 1.2590...  3.1416 sec/batch
Epoch: 14/20...  Training Step: 2666...  Training loss: 1.2649...  2.8467 sec/batch
Epoch: 14/20...  Training Step: 2667...  Training loss: 1.2495...  2.8487 sec/batch
Epoch: 14/20...  Training Step: 2668...  Training loss: 1.2437...  2.7780 sec/batch
Epoch: 14/20...  Training Step: 2669...  Training loss: 1.2601...  2.8186 sec/batch
Epoch: 14/20...  Training Step: 2670...  Training loss: 1.2733...  2.7675 sec/batch
Epoch: 14/20...  Training Step: 2671...  Training loss: 1.2830...  2.7817 sec/batch
Epoch: 14/20...  Training Step: 2672...  Training loss: 1.2380...  2.7789 sec/batch
Epoch: 14/20...  Training Step: 2673...  Training loss: 1.2577...  2.7873 sec/batch
Epoch: 14/20...  Training Step: 2674...  Training loss: 1.2534...  2.7886 sec/batch
Epoch: 14/20...  Training Step: 2675...  Training loss: 1.2649...  2.7707 sec/batch
Epoch: 14/20...  Training Step: 2676...  Training loss: 1.2604...  2.8240 sec/batch
Epoch: 14/20...  Training Step: 2677...  Training loss: 1.2728...  2.7767 sec/batch
Epoch: 14/20...  Training Step: 2678...  Training loss: 1.2592...  2.7749 sec/batch
Epoch: 14/20...  Training Step: 2679...  Training loss: 1.2648...  2.7748 sec/batch
Epoch: 14/20...  Training Step: 2680...  Training loss: 1.2629...  2.7859 sec/batch
Epoch: 14/20...  Training Step: 2681...  Training loss: 1.2650...  2.7850 sec/batch
Epoch: 14/20...  Training Step: 2682...  Training loss: 1.2716...  3.3849 sec/batch
Epoch: 14/20...  Training Step: 2683...  Training loss: 1.2596...  2.8331 sec/batch
Epoch: 14/20...  Training Step: 2684...  Training loss: 1.2806...  2.8328 sec/batch
Epoch: 14/20...  Training Step: 2685...  Training loss: 1.2630...  2.8161 sec/batch
Epoch: 14/20...  Training Step: 2686...  Training loss: 1.2843...  2.7722 sec/batch
Epoch: 14/20...  Training Step: 2687...  Training loss: 1.2715...  2.7774 sec/batch
Epoch: 14/20...  Training Step: 2688...  Training loss: 1.2706...  2.8284 sec/batch
Epoch: 14/20...  Training Step: 2689...  Training loss: 1.2597...  2.7449 sec/batch
Epoch: 14/20...  Training Step: 2690...  Training loss: 1.2320...  2.7780 sec/batch
Epoch: 14/20...  Training Step: 2691...  Training loss: 1.2701...  2.7603 sec/batch
Epoch: 14/20...  Training Step: 2692...  Training loss: 1.2752...  2.7880 sec/batch
Epoch: 14/20...  Training Step: 2693...  Training loss: 1.2618...  2.7804 sec/batch
Epoch: 14/20...  Training Step: 2694...  Training loss: 1.2679...  2.8478 sec/batch
Epoch: 14/20...  Training Step: 2695...  Training loss: 1.2716...  2.7920 sec/batch
Epoch: 14/20...  Training Step: 2696...  Training loss: 1.2353...  2.7932 sec/batch
Epoch: 14/20...  Training Step: 2697...  Training loss: 1.2274...  2.7899 sec/batch
Epoch: 14/20...  Training Step: 2698...  Training loss: 1.2678...  2.8177 sec/batch
Epoch: 14/20...  Training Step: 2699...  Training loss: 1.2562...  2.8132 sec/batch
Epoch: 14/20...  Training Step: 2700...  Training loss: 1.2348...  2.8103 sec/batch
Epoch: 14/20...  Training Step: 2701...  Training loss: 1.2752...  2.7749 sec/batch
Epoch: 14/20...  Training Step: 2702...  Training loss: 1.2756...  2.7868 sec/batch
Epoch: 14/20...  Training Step: 2703...  Training loss: 1.2519...  2.7983 sec/batch
Epoch: 14/20...  Training Step: 2704...  Training loss: 1.2270...  2.7977 sec/batch
Epoch: 14/20...  Training Step: 2705...  Training loss: 1.2186...  2.8022 sec/batch
Epoch: 14/20...  Training Step: 2706...  Training loss: 1.2484...  2.7931 sec/batch
Epoch: 14/20...  Training Step: 2707...  Training loss: 1.2917...  2.7648 sec/batch
Epoch: 14/20...  Training Step: 2708...  Training loss: 1.2828...  2.8180 sec/batch
Epoch: 14/20...  Training Step: 2709...  Training loss: 1.2751...  3.5939 sec/batch
Epoch: 14/20...  Training Step: 2710...  Training loss: 1.2715...  2.7998 sec/batch
Epoch: 14/20...  Training Step: 2711...  Training loss: 1.2950...  2.8084 sec/batch
Epoch: 14/20...  Training Step: 2712...  Training loss: 1.2905...  2.7461 sec/batch
Epoch: 14/20...  Training Step: 2713...  Training loss: 1.2724...  2.8055 sec/batch
Epoch: 14/20...  Training Step: 2714...  Training loss: 1.2727...  2.7936 sec/batch
Epoch: 14/20...  Training Step: 2715...  Training loss: 1.3215...  2.7574 sec/batch
Epoch: 14/20...  Training Step: 2716...  Training loss: 1.2938...  2.7782 sec/batch
Epoch: 14/20...  Training Step: 2717...  Training loss: 1.2517...  2.8037 sec/batch
Epoch: 14/20...  Training Step: 2718...  Training loss: 1.2921...  2.7912 sec/batch
Epoch: 14/20...  Training Step: 2719...  Training loss: 1.2596...  2.7500 sec/batch
Epoch: 14/20...  Training Step: 2720...  Training loss: 1.2969...  2.8049 sec/batch
Epoch: 14/20...  Training Step: 2721...  Training loss: 1.2842...  2.7925 sec/batch
Epoch: 14/20...  Training Step: 2722...  Training loss: 1.3061...  2.7751 sec/batch
Epoch: 14/20...  Training Step: 2723...  Training loss: 1.3057...  2.7879 sec/batch
Epoch: 14/20...  Training Step: 2724...  Training loss: 1.2592...  2.7928 sec/batch
Epoch: 14/20...  Training Step: 2725...  Training loss: 1.2390...  2.7976 sec/batch
Epoch: 14/20...  Training Step: 2726...  Training loss: 1.2470...  2.8407 sec/batch
Epoch: 14/20...  Training Step: 2727...  Training loss: 1.2788...  2.7973 sec/batch
Epoch: 14/20...  Training Step: 2728...  Training loss: 1.2605...  2.9587 sec/batch
Epoch: 14/20...  Training Step: 2729...  Training loss: 1.2746...  2.9037 sec/batch
Epoch: 14/20...  Training Step: 2730...  Training loss: 1.2701...  2.8643 sec/batch
Epoch: 14/20...  Training Step: 2731...  Training loss: 1.2641...  2.7903 sec/batch
Epoch: 14/20...  Training Step: 2732...  Training loss: 1.2649...  2.8235 sec/batch
Epoch: 14/20...  Training Step: 2733...  Training loss: 1.2343...  2.7820 sec/batch
Epoch: 14/20...  Training Step: 2734...  Training loss: 1.2856...  2.7856 sec/batch
Epoch: 14/20...  Training Step: 2735...  Training loss: 1.2972...  2.7773 sec/batch
Epoch: 14/20...  Training Step: 2736...  Training loss: 1.2760...  2.7773 sec/batch
Epoch: 14/20...  Training Step: 2737...  Training loss: 1.2686...  2.8251 sec/batch
Epoch: 14/20...  Training Step: 2738...  Training loss: 1.2755...  2.7936 sec/batch
Epoch: 14/20...  Training Step: 2739...  Training loss: 1.2731...  2.7855 sec/batch
Epoch: 14/20...  Training Step: 2740...  Training loss: 1.2722...  2.7679 sec/batch
Epoch: 14/20...  Training Step: 2741...  Training loss: 1.2955...  2.9096 sec/batch
Epoch: 14/20...  Training Step: 2742...  Training loss: 1.3350...  2.7782 sec/batch
Epoch: 14/20...  Training Step: 2743...  Training loss: 1.2713...  2.7770 sec/batch
Epoch: 14/20...  Training Step: 2744...  Training loss: 1.2706...  2.8620 sec/batch
Epoch: 14/20...  Training Step: 2745...  Training loss: 1.2556...  2.8183 sec/batch
Epoch: 14/20...  Training Step: 2746...  Training loss: 1.2532...  2.7831 sec/batch
Epoch: 14/20...  Training Step: 2747...  Training loss: 1.3012...  2.7904 sec/batch
Epoch: 14/20...  Training Step: 2748...  Training loss: 1.2653...  2.8428 sec/batch
Epoch: 14/20...  Training Step: 2749...  Training loss: 1.2766...  2.8103 sec/batch
Epoch: 14/20...  Training Step: 2750...  Training loss: 1.2391...  2.7860 sec/batch
Epoch: 14/20...  Training Step: 2751...  Training loss: 1.2593...  2.7994 sec/batch
Epoch: 14/20...  Training Step: 2752...  Training loss: 1.2997...  3.0841 sec/batch
Epoch: 14/20...  Training Step: 2753...  Training loss: 1.2527...  2.8265 sec/batch
Epoch: 14/20...  Training Step: 2754...  Training loss: 1.2451...  2.8859 sec/batch
Epoch: 14/20...  Training Step: 2755...  Training loss: 1.2453...  2.7836 sec/batch
Epoch: 14/20...  Training Step: 2756...  Training loss: 1.2667...  2.8122 sec/batch
Epoch: 14/20...  Training Step: 2757...  Training loss: 1.2590...  2.7667 sec/batch
Epoch: 14/20...  Training Step: 2758...  Training loss: 1.2604...  2.8029 sec/batch
Epoch: 14/20...  Training Step: 2759...  Training loss: 1.2624...  2.8376 sec/batch
Epoch: 14/20...  Training Step: 2760...  Training loss: 1.2406...  2.7471 sec/batch
Epoch: 14/20...  Training Step: 2761...  Training loss: 1.2925...  2.8241 sec/batch
Epoch: 14/20...  Training Step: 2762...  Training loss: 1.2598...  2.8379 sec/batch
Epoch: 14/20...  Training Step: 2763...  Training loss: 1.2550...  2.7893 sec/batch
Epoch: 14/20...  Training Step: 2764...  Training loss: 1.2755...  2.7860 sec/batch
Epoch: 14/20...  Training Step: 2765...  Training loss: 1.2435...  2.7871 sec/batch
Epoch: 14/20...  Training Step: 2766...  Training loss: 1.2511...  2.7993 sec/batch
Epoch: 14/20...  Training Step: 2767...  Training loss: 1.2610...  2.8349 sec/batch
Epoch: 14/20...  Training Step: 2768...  Training loss: 1.2583...  2.8510 sec/batch
Epoch: 14/20...  Training Step: 2769...  Training loss: 1.2260...  2.8009 sec/batch
Epoch: 14/20...  Training Step: 2770...  Training loss: 1.2683...  2.8164 sec/batch
Epoch: 14/20...  Training Step: 2771...  Training loss: 1.2627...  2.7727 sec/batch
Epoch: 14/20...  Training Step: 2772...  Training loss: 1.2578...  2.8363 sec/batch
Epoch: 15/20...  Training Step: 2773...  Training loss: 1.3933...  2.8138 sec/batch
Epoch: 15/20...  Training Step: 2774...  Training loss: 1.2880...  2.7984 sec/batch
Epoch: 15/20...  Training Step: 2775...  Training loss: 1.2537...  2.8174 sec/batch
Epoch: 15/20...  Training Step: 2776...  Training loss: 1.2820...  2.7790 sec/batch
Epoch: 15/20...  Training Step: 2777...  Training loss: 1.2420...  2.8351 sec/batch
Epoch: 15/20...  Training Step: 2778...  Training loss: 1.2312...  2.7721 sec/batch
Epoch: 15/20...  Training Step: 2779...  Training loss: 1.2627...  2.7787 sec/batch
Epoch: 15/20...  Training Step: 2780...  Training loss: 1.2598...  2.7600 sec/batch
Epoch: 15/20...  Training Step: 2781...  Training loss: 1.2656...  2.7797 sec/batch
Epoch: 15/20...  Training Step: 2782...  Training loss: 1.2570...  2.7624 sec/batch
Epoch: 15/20...  Training Step: 2783...  Training loss: 1.2466...  2.8171 sec/batch
Epoch: 15/20...  Training Step: 2784...  Training loss: 1.2581...  2.7874 sec/batch
Epoch: 15/20...  Training Step: 2785...  Training loss: 1.2668...  2.7791 sec/batch
Epoch: 15/20...  Training Step: 2786...  Training loss: 1.2838...  2.7789 sec/batch
Epoch: 15/20...  Training Step: 2787...  Training loss: 1.2496...  2.7796 sec/batch
Epoch: 15/20...  Training Step: 2788...  Training loss: 1.2480...  2.7892 sec/batch
Epoch: 15/20...  Training Step: 2789...  Training loss: 1.2669...  2.7943 sec/batch
Epoch: 15/20...  Training Step: 2790...  Training loss: 1.2913...  2.7922 sec/batch
Epoch: 15/20...  Training Step: 2791...  Training loss: 1.2665...  2.7892 sec/batch
Epoch: 15/20...  Training Step: 2792...  Training loss: 1.2981...  2.7819 sec/batch
Epoch: 15/20...  Training Step: 2793...  Training loss: 1.2593...  2.7941 sec/batch
Epoch: 15/20...  Training Step: 2794...  Training loss: 1.2856...  2.8249 sec/batch
Epoch: 15/20...  Training Step: 2795...  Training loss: 1.2565...  2.7893 sec/batch
Epoch: 15/20...  Training Step: 2796...  Training loss: 1.2804...  2.8039 sec/batch
Epoch: 15/20...  Training Step: 2797...  Training loss: 1.2709...  2.8062 sec/batch
Epoch: 15/20...  Training Step: 2798...  Training loss: 1.2243...  2.8278 sec/batch
Epoch: 15/20...  Training Step: 2799...  Training loss: 1.2399...  2.8086 sec/batch
Epoch: 15/20...  Training Step: 2800...  Training loss: 1.2800...  2.7717 sec/batch
Epoch: 15/20...  Training Step: 2801...  Training loss: 1.2768...  2.7843 sec/batch
Epoch: 15/20...  Training Step: 2802...  Training loss: 1.2800...  2.7818 sec/batch
Epoch: 15/20...  Training Step: 2803...  Training loss: 1.2438...  2.8035 sec/batch
Epoch: 15/20...  Training Step: 2804...  Training loss: 1.2443...  2.8098 sec/batch
Epoch: 15/20...  Training Step: 2805...  Training loss: 1.2745...  2.8050 sec/batch
Epoch: 15/20...  Training Step: 2806...  Training loss: 1.2666...  3.0980 sec/batch
Epoch: 15/20...  Training Step: 2807...  Training loss: 1.2537...  2.9751 sec/batch
Epoch: 15/20...  Training Step: 2808...  Training loss: 1.2754...  2.9137 sec/batch
Epoch: 15/20...  Training Step: 2809...  Training loss: 1.2446...  2.9016 sec/batch
Epoch: 15/20...  Training Step: 2810...  Training loss: 1.2189...  2.8826 sec/batch
Epoch: 15/20...  Training Step: 2811...  Training loss: 1.2215...  2.8920 sec/batch
Epoch: 15/20...  Training Step: 2812...  Training loss: 1.2499...  2.8816 sec/batch
Epoch: 15/20...  Training Step: 2813...  Training loss: 1.2382...  3.4783 sec/batch
Epoch: 15/20...  Training Step: 2814...  Training loss: 1.2933...  2.8848 sec/batch
Epoch: 15/20...  Training Step: 2815...  Training loss: 1.2544...  2.9505 sec/batch
Epoch: 15/20...  Training Step: 2816...  Training loss: 1.2378...  2.8804 sec/batch
Epoch: 15/20...  Training Step: 2817...  Training loss: 1.2767...  2.8804 sec/batch
Epoch: 15/20...  Training Step: 2818...  Training loss: 1.2455...  2.9294 sec/batch
Epoch: 15/20...  Training Step: 2819...  Training loss: 1.2545...  2.9008 sec/batch
Epoch: 15/20...  Training Step: 2820...  Training loss: 1.2691...  2.9695 sec/batch
Epoch: 15/20...  Training Step: 2821...  Training loss: 1.2674...  2.9129 sec/batch
Epoch: 15/20...  Training Step: 2822...  Training loss: 1.2804...  2.9229 sec/batch
Epoch: 15/20...  Training Step: 2823...  Training loss: 1.2414...  2.9064 sec/batch
Epoch: 15/20...  Training Step: 2824...  Training loss: 1.3064...  2.8865 sec/batch
Epoch: 15/20...  Training Step: 2825...  Training loss: 1.2712...  3.0039 sec/batch
Epoch: 15/20...  Training Step: 2826...  Training loss: 1.2742...  2.8682 sec/batch
Epoch: 15/20...  Training Step: 2827...  Training loss: 1.2489...  2.9012 sec/batch
Epoch: 15/20...  Training Step: 2828...  Training loss: 1.2651...  2.8912 sec/batch
Epoch: 15/20...  Training Step: 2829...  Training loss: 1.2838...  2.8616 sec/batch
Epoch: 15/20...  Training Step: 2830...  Training loss: 1.2580...  2.8932 sec/batch
Epoch: 15/20...  Training Step: 2831...  Training loss: 1.2452...  2.8827 sec/batch
Epoch: 15/20...  Training Step: 2832...  Training loss: 1.3042...  2.9336 sec/batch
Epoch: 15/20...  Training Step: 2833...  Training loss: 1.2668...  2.9282 sec/batch
Epoch: 15/20...  Training Step: 2834...  Training loss: 1.3030...  2.8977 sec/batch
Epoch: 15/20...  Training Step: 2835...  Training loss: 1.2923...  2.8801 sec/batch
Epoch: 15/20...  Training Step: 2836...  Training loss: 1.2667...  3.3677 sec/batch
Epoch: 15/20...  Training Step: 2837...  Training loss: 1.2682...  2.8849 sec/batch
Epoch: 15/20...  Training Step: 2838...  Training loss: 1.2787...  2.8684 sec/batch
Epoch: 15/20...  Training Step: 2839...  Training loss: 1.2786...  2.9455 sec/batch
Epoch: 15/20...  Training Step: 2840...  Training loss: 1.2502...  2.8941 sec/batch
Epoch: 15/20...  Training Step: 2841...  Training loss: 1.2668...  2.9166 sec/batch
Epoch: 15/20...  Training Step: 2842...  Training loss: 1.2569...  2.8761 sec/batch
Epoch: 15/20...  Training Step: 2843...  Training loss: 1.3078...  2.8946 sec/batch
Epoch: 15/20...  Training Step: 2844...  Training loss: 1.2795...  2.9879 sec/batch
Epoch: 15/20...  Training Step: 2845...  Training loss: 1.2910...  2.9269 sec/batch
Epoch: 15/20...  Training Step: 2846...  Training loss: 1.2406...  3.5049 sec/batch
Epoch: 15/20...  Training Step: 2847...  Training loss: 1.2684...  2.8862 sec/batch
Epoch: 15/20...  Training Step: 2848...  Training loss: 1.2807...  2.9073 sec/batch
Epoch: 15/20...  Training Step: 2849...  Training loss: 1.2555...  2.9234 sec/batch
Epoch: 15/20...  Training Step: 2850...  Training loss: 1.2533...  2.8703 sec/batch
Epoch: 15/20...  Training Step: 2851...  Training loss: 1.2194...  2.8762 sec/batch
Epoch: 15/20...  Training Step: 2852...  Training loss: 1.2524...  2.9148 sec/batch
Epoch: 15/20...  Training Step: 2853...  Training loss: 1.2307...  2.8826 sec/batch
Epoch: 15/20...  Training Step: 2854...  Training loss: 1.2703...  2.9077 sec/batch
Epoch: 15/20...  Training Step: 2855...  Training loss: 1.2254...  2.8670 sec/batch
Epoch: 15/20...  Training Step: 2856...  Training loss: 1.2560...  2.9280 sec/batch
Epoch: 15/20...  Training Step: 2857...  Training loss: 1.2378...  2.9019 sec/batch
Epoch: 15/20...  Training Step: 2858...  Training loss: 1.2617...  2.8889 sec/batch
Epoch: 15/20...  Training Step: 2859...  Training loss: 1.2412...  2.8980 sec/batch
Epoch: 15/20...  Training Step: 2860...  Training loss: 1.2364...  2.9348 sec/batch
Epoch: 15/20...  Training Step: 2861...  Training loss: 1.2275...  2.9343 sec/batch
Epoch: 15/20...  Training Step: 2862...  Training loss: 1.2721...  2.8918 sec/batch
Epoch: 15/20...  Training Step: 2863...  Training loss: 1.2433...  2.9064 sec/batch
Epoch: 15/20...  Training Step: 2864...  Training loss: 1.2499...  2.9477 sec/batch
Epoch: 15/20...  Training Step: 2865...  Training loss: 1.2293...  2.8836 sec/batch
Epoch: 15/20...  Training Step: 2866...  Training loss: 1.2300...  2.9137 sec/batch
Epoch: 15/20...  Training Step: 2867...  Training loss: 1.2383...  2.8523 sec/batch
Epoch: 15/20...  Training Step: 2868...  Training loss: 1.2745...  2.9293 sec/batch
Epoch: 15/20...  Training Step: 2869...  Training loss: 1.2613...  2.8640 sec/batch
Epoch: 15/20...  Training Step: 2870...  Training loss: 1.2280...  2.9416 sec/batch
Epoch: 15/20...  Training Step: 2871...  Training loss: 1.2407...  2.7834 sec/batch
Epoch: 15/20...  Training Step: 2872...  Training loss: 1.2390...  2.7821 sec/batch
Epoch: 15/20...  Training Step: 2873...  Training loss: 1.2606...  2.8018 sec/batch
Epoch: 15/20...  Training Step: 2874...  Training loss: 1.2498...  2.8274 sec/batch
Epoch: 15/20...  Training Step: 2875...  Training loss: 1.2634...  2.7713 sec/batch
Epoch: 15/20...  Training Step: 2876...  Training loss: 1.2445...  2.7791 sec/batch
Epoch: 15/20...  Training Step: 2877...  Training loss: 1.2536...  2.7879 sec/batch
Epoch: 15/20...  Training Step: 2878...  Training loss: 1.2525...  2.8006 sec/batch
Epoch: 15/20...  Training Step: 2879...  Training loss: 1.2713...  2.7982 sec/batch
Epoch: 15/20...  Training Step: 2880...  Training loss: 1.2648...  2.7912 sec/batch
Epoch: 15/20...  Training Step: 2881...  Training loss: 1.2487...  2.8000 sec/batch
Epoch: 15/20...  Training Step: 2882...  Training loss: 1.2709...  2.7735 sec/batch
Epoch: 15/20...  Training Step: 2883...  Training loss: 1.2488...  2.7790 sec/batch
Epoch: 15/20...  Training Step: 2884...  Training loss: 1.2623...  2.7834 sec/batch
Epoch: 15/20...  Training Step: 2885...  Training loss: 1.2671...  2.8055 sec/batch
Epoch: 15/20...  Training Step: 2886...  Training loss: 1.2555...  2.7963 sec/batch
Epoch: 15/20...  Training Step: 2887...  Training loss: 1.2313...  2.8402 sec/batch
Epoch: 15/20...  Training Step: 2888...  Training loss: 1.2252...  2.7992 sec/batch
Epoch: 15/20...  Training Step: 2889...  Training loss: 1.2628...  2.8000 sec/batch
Epoch: 15/20...  Training Step: 2890...  Training loss: 1.2559...  2.7816 sec/batch
Epoch: 15/20...  Training Step: 2891...  Training loss: 1.2607...  2.7957 sec/batch
Epoch: 15/20...  Training Step: 2892...  Training loss: 1.2549...  2.8287 sec/batch
Epoch: 15/20...  Training Step: 2893...  Training loss: 1.2595...  2.8121 sec/batch
Epoch: 15/20...  Training Step: 2894...  Training loss: 1.2300...  2.8173 sec/batch
Epoch: 15/20...  Training Step: 2895...  Training loss: 1.2151...  2.8102 sec/batch
Epoch: 15/20...  Training Step: 2896...  Training loss: 1.2570...  2.8196 sec/batch
Epoch: 15/20...  Training Step: 2897...  Training loss: 1.2338...  2.7943 sec/batch
Epoch: 15/20...  Training Step: 2898...  Training loss: 1.2105...  2.8633 sec/batch
Epoch: 15/20...  Training Step: 2899...  Training loss: 1.2715...  2.8088 sec/batch
Epoch: 15/20...  Training Step: 2900...  Training loss: 1.2539...  2.7901 sec/batch
Epoch: 15/20...  Training Step: 2901...  Training loss: 1.2347...  2.7900 sec/batch
Epoch: 15/20...  Training Step: 2902...  Training loss: 1.2215...  2.7923 sec/batch
Epoch: 15/20...  Training Step: 2903...  Training loss: 1.2152...  2.8024 sec/batch
Epoch: 15/20...  Training Step: 2904...  Training loss: 1.2398...  2.7644 sec/batch
Epoch: 15/20...  Training Step: 2905...  Training loss: 1.2735...  2.7668 sec/batch
Epoch: 15/20...  Training Step: 2906...  Training loss: 1.2525...  2.8531 sec/batch
Epoch: 15/20...  Training Step: 2907...  Training loss: 1.2562...  3.2413 sec/batch
Epoch: 15/20...  Training Step: 2908...  Training loss: 1.2545...  3.1664 sec/batch
Epoch: 15/20...  Training Step: 2909...  Training loss: 1.2809...  3.1960 sec/batch
Epoch: 15/20...  Training Step: 2910...  Training loss: 1.2693...  3.2303 sec/batch
Epoch: 15/20...  Training Step: 2911...  Training loss: 1.2640...  3.2527 sec/batch
Epoch: 15/20...  Training Step: 2912...  Training loss: 1.2584...  3.2580 sec/batch
Epoch: 15/20...  Training Step: 2913...  Training loss: 1.3128...  3.1533 sec/batch
Epoch: 15/20...  Training Step: 2914...  Training loss: 1.2694...  3.2149 sec/batch
Epoch: 15/20...  Training Step: 2915...  Training loss: 1.2496...  3.1813 sec/batch
Epoch: 15/20...  Training Step: 2916...  Training loss: 1.2905...  3.1797 sec/batch
Epoch: 15/20...  Training Step: 2917...  Training loss: 1.2411...  3.1675 sec/batch
Epoch: 15/20...  Training Step: 2918...  Training loss: 1.2804...  3.3483 sec/batch
Epoch: 15/20...  Training Step: 2919...  Training loss: 1.2658...  3.2114 sec/batch
Epoch: 15/20...  Training Step: 2920...  Training loss: 1.2935...  3.2074 sec/batch
Epoch: 15/20...  Training Step: 2921...  Training loss: 1.2893...  3.1836 sec/batch
Epoch: 15/20...  Training Step: 2922...  Training loss: 1.2503...  3.0811 sec/batch
Epoch: 15/20...  Training Step: 2923...  Training loss: 1.2311...  3.3255 sec/batch
Epoch: 15/20...  Training Step: 2924...  Training loss: 1.2426...  2.9952 sec/batch
Epoch: 15/20...  Training Step: 2925...  Training loss: 1.2709...  3.0765 sec/batch
Epoch: 15/20...  Training Step: 2926...  Training loss: 1.2466...  3.1781 sec/batch
Epoch: 15/20...  Training Step: 2927...  Training loss: 1.2378...  3.1685 sec/batch
Epoch: 15/20...  Training Step: 2928...  Training loss: 1.2493...  3.2062 sec/batch
Epoch: 15/20...  Training Step: 2929...  Training loss: 1.2513...  3.1342 sec/batch
Epoch: 15/20...  Training Step: 2930...  Training loss: 1.2535...  3.2811 sec/batch
Epoch: 15/20...  Training Step: 2931...  Training loss: 1.2236...  3.1493 sec/batch
Epoch: 15/20...  Training Step: 2932...  Training loss: 1.2810...  3.1897 sec/batch
Epoch: 15/20...  Training Step: 2933...  Training loss: 1.2823...  3.0315 sec/batch
Epoch: 15/20...  Training Step: 2934...  Training loss: 1.2591...  3.0527 sec/batch
Epoch: 15/20...  Training Step: 2935...  Training loss: 1.2540...  3.0465 sec/batch
Epoch: 15/20...  Training Step: 2936...  Training loss: 1.2541...  3.3791 sec/batch
Epoch: 15/20...  Training Step: 2937...  Training loss: 1.2568...  3.1379 sec/batch
Epoch: 15/20...  Training Step: 2938...  Training loss: 1.2613...  3.2309 sec/batch
Epoch: 15/20...  Training Step: 2939...  Training loss: 1.2831...  3.1204 sec/batch
Epoch: 15/20...  Training Step: 2940...  Training loss: 1.3243...  2.7660 sec/batch
Epoch: 15/20...  Training Step: 2941...  Training loss: 1.2676...  2.7889 sec/batch
Epoch: 15/20...  Training Step: 2942...  Training loss: 1.2647...  2.7683 sec/batch
Epoch: 15/20...  Training Step: 2943...  Training loss: 1.2467...  2.7846 sec/batch
Epoch: 15/20...  Training Step: 2944...  Training loss: 1.2435...  2.7987 sec/batch
Epoch: 15/20...  Training Step: 2945...  Training loss: 1.2979...  2.7824 sec/batch
Epoch: 15/20...  Training Step: 2946...  Training loss: 1.2568...  2.8311 sec/batch
Epoch: 15/20...  Training Step: 2947...  Training loss: 1.2692...  2.8618 sec/batch
Epoch: 15/20...  Training Step: 2948...  Training loss: 1.2403...  2.7875 sec/batch
Epoch: 15/20...  Training Step: 2949...  Training loss: 1.2394...  2.8003 sec/batch
Epoch: 15/20...  Training Step: 2950...  Training loss: 1.2962...  2.8291 sec/batch
Epoch: 15/20...  Training Step: 2951...  Training loss: 1.2400...  2.7919 sec/batch
Epoch: 15/20...  Training Step: 2952...  Training loss: 1.2451...  2.7869 sec/batch
Epoch: 15/20...  Training Step: 2953...  Training loss: 1.2363...  2.8011 sec/batch
Epoch: 15/20...  Training Step: 2954...  Training loss: 1.2566...  2.8289 sec/batch
Epoch: 15/20...  Training Step: 2955...  Training loss: 1.2544...  2.7992 sec/batch
Epoch: 15/20...  Training Step: 2956...  Training loss: 1.2521...  2.7906 sec/batch
Epoch: 15/20...  Training Step: 2957...  Training loss: 1.2444...  3.0265 sec/batch
Epoch: 15/20...  Training Step: 2958...  Training loss: 1.2297...  2.8664 sec/batch
Epoch: 15/20...  Training Step: 2959...  Training loss: 1.2821...  2.8042 sec/batch
Epoch: 15/20...  Training Step: 2960...  Training loss: 1.2491...  2.8055 sec/batch
Epoch: 15/20...  Training Step: 2961...  Training loss: 1.2480...  2.7925 sec/batch
Epoch: 15/20...  Training Step: 2962...  Training loss: 1.2528...  2.7888 sec/batch
Epoch: 15/20...  Training Step: 2963...  Training loss: 1.2324...  2.7934 sec/batch
Epoch: 15/20...  Training Step: 2964...  Training loss: 1.2386...  2.8070 sec/batch
Epoch: 15/20...  Training Step: 2965...  Training loss: 1.2583...  2.7995 sec/batch
Epoch: 15/20...  Training Step: 2966...  Training loss: 1.2488...  2.8140 sec/batch
Epoch: 15/20...  Training Step: 2967...  Training loss: 1.2088...  2.7821 sec/batch
Epoch: 15/20...  Training Step: 2968...  Training loss: 1.2588...  2.7463 sec/batch
Epoch: 15/20...  Training Step: 2969...  Training loss: 1.2480...  2.8008 sec/batch
Epoch: 15/20...  Training Step: 2970...  Training loss: 1.2313...  2.8494 sec/batch
Epoch: 16/20...  Training Step: 2971...  Training loss: 1.3813...  2.7632 sec/batch
Epoch: 16/20...  Training Step: 2972...  Training loss: 1.2744...  2.8290 sec/batch
Epoch: 16/20...  Training Step: 2973...  Training loss: 1.2552...  2.7935 sec/batch
Epoch: 16/20...  Training Step: 2974...  Training loss: 1.2816...  2.7823 sec/batch
Epoch: 16/20...  Training Step: 2975...  Training loss: 1.2335...  2.8027 sec/batch
Epoch: 16/20...  Training Step: 2976...  Training loss: 1.2175...  2.8007 sec/batch
Epoch: 16/20...  Training Step: 2977...  Training loss: 1.2597...  2.8054 sec/batch
Epoch: 16/20...  Training Step: 2978...  Training loss: 1.2564...  2.7644 sec/batch
Epoch: 16/20...  Training Step: 2979...  Training loss: 1.2483...  2.7886 sec/batch
Epoch: 16/20...  Training Step: 2980...  Training loss: 1.2490...  2.8184 sec/batch
Epoch: 16/20...  Training Step: 2981...  Training loss: 1.2343...  2.7830 sec/batch
Epoch: 16/20...  Training Step: 2982...  Training loss: 1.2501...  2.7658 sec/batch
Epoch: 16/20...  Training Step: 2983...  Training loss: 1.2509...  2.7910 sec/batch
Epoch: 16/20...  Training Step: 2984...  Training loss: 1.2762...  2.7754 sec/batch
Epoch: 16/20...  Training Step: 2985...  Training loss: 1.2430...  2.7926 sec/batch
Epoch: 16/20...  Training Step: 2986...  Training loss: 1.2322...  2.8035 sec/batch
Epoch: 16/20...  Training Step: 2987...  Training loss: 1.2713...  2.7945 sec/batch
Epoch: 16/20...  Training Step: 2988...  Training loss: 1.2728...  2.8250 sec/batch
Epoch: 16/20...  Training Step: 2989...  Training loss: 1.2472...  2.7871 sec/batch
Epoch: 16/20...  Training Step: 2990...  Training loss: 1.2822...  2.8936 sec/batch
Epoch: 16/20...  Training Step: 2991...  Training loss: 1.2480...  2.8189 sec/batch
Epoch: 16/20...  Training Step: 2992...  Training loss: 1.2532...  2.8189 sec/batch
Epoch: 16/20...  Training Step: 2993...  Training loss: 1.2458...  2.7932 sec/batch
Epoch: 16/20...  Training Step: 2994...  Training loss: 1.2783...  2.7899 sec/batch
Epoch: 16/20...  Training Step: 2995...  Training loss: 1.2564...  2.7889 sec/batch
Epoch: 16/20...  Training Step: 2996...  Training loss: 1.2103...  2.8088 sec/batch
Epoch: 16/20...  Training Step: 2997...  Training loss: 1.2233...  2.7778 sec/batch
Epoch: 16/20...  Training Step: 2998...  Training loss: 1.2721...  2.8579 sec/batch
Epoch: 16/20...  Training Step: 2999...  Training loss: 1.2585...  3.7733 sec/batch
Epoch: 16/20...  Training Step: 3000...  Training loss: 1.2800...  2.8217 sec/batch
Epoch: 16/20...  Training Step: 3001...  Training loss: 1.2420...  2.7861 sec/batch
Epoch: 16/20...  Training Step: 3002...  Training loss: 1.2290...  2.7847 sec/batch
Epoch: 16/20...  Training Step: 3003...  Training loss: 1.2588...  2.8394 sec/batch
Epoch: 16/20...  Training Step: 3004...  Training loss: 1.2571...  2.8257 sec/batch
Epoch: 16/20...  Training Step: 3005...  Training loss: 1.2417...  2.7925 sec/batch
Epoch: 16/20...  Training Step: 3006...  Training loss: 1.2586...  2.7925 sec/batch
Epoch: 16/20...  Training Step: 3007...  Training loss: 1.2397...  2.7739 sec/batch
Epoch: 16/20...  Training Step: 3008...  Training loss: 1.2144...  2.8210 sec/batch
Epoch: 16/20...  Training Step: 3009...  Training loss: 1.2100...  2.8150 sec/batch
Epoch: 16/20...  Training Step: 3010...  Training loss: 1.2387...  2.8251 sec/batch
Epoch: 16/20...  Training Step: 3011...  Training loss: 1.2302...  2.8406 sec/batch
Epoch: 16/20...  Training Step: 3012...  Training loss: 1.2946...  2.7829 sec/batch
Epoch: 16/20...  Training Step: 3013...  Training loss: 1.2413...  2.8044 sec/batch
Epoch: 16/20...  Training Step: 3014...  Training loss: 1.2227...  2.8061 sec/batch
Epoch: 16/20...  Training Step: 3015...  Training loss: 1.2617...  2.7891 sec/batch
Epoch: 16/20...  Training Step: 3016...  Training loss: 1.2332...  2.9353 sec/batch
Epoch: 16/20...  Training Step: 3017...  Training loss: 1.2500...  2.8220 sec/batch
Epoch: 16/20...  Training Step: 3018...  Training loss: 1.2525...  2.8400 sec/batch
Epoch: 16/20...  Training Step: 3019...  Training loss: 1.2473...  2.7916 sec/batch
Epoch: 16/20...  Training Step: 3020...  Training loss: 1.2682...  2.7982 sec/batch
Epoch: 16/20...  Training Step: 3021...  Training loss: 1.2317...  2.8271 sec/batch
Epoch: 16/20...  Training Step: 3022...  Training loss: 1.2902...  2.8858 sec/batch
Epoch: 16/20...  Training Step: 3023...  Training loss: 1.2501...  2.7930 sec/batch
Epoch: 16/20...  Training Step: 3024...  Training loss: 1.2735...  2.7816 sec/batch
Epoch: 16/20...  Training Step: 3025...  Training loss: 1.2370...  2.7954 sec/batch
Epoch: 16/20...  Training Step: 3026...  Training loss: 1.2522...  2.8152 sec/batch
Epoch: 16/20...  Training Step: 3027...  Training loss: 1.2645...  2.8227 sec/batch
Epoch: 16/20...  Training Step: 3028...  Training loss: 1.2364...  2.7693 sec/batch
Epoch: 16/20...  Training Step: 3029...  Training loss: 1.2277...  2.8093 sec/batch
Epoch: 16/20...  Training Step: 3030...  Training loss: 1.2876...  2.8143 sec/batch
Epoch: 16/20...  Training Step: 3031...  Training loss: 1.2617...  2.7659 sec/batch
Epoch: 16/20...  Training Step: 3032...  Training loss: 1.2942...  2.8408 sec/batch
Epoch: 16/20...  Training Step: 3033...  Training loss: 1.2712...  2.7686 sec/batch
Epoch: 16/20...  Training Step: 3034...  Training loss: 1.2529...  2.8175 sec/batch
Epoch: 16/20...  Training Step: 3035...  Training loss: 1.2599...  2.8119 sec/batch
Epoch: 16/20...  Training Step: 3036...  Training loss: 1.2733...  2.8181 sec/batch
Epoch: 16/20...  Training Step: 3037...  Training loss: 1.2772...  2.7969 sec/batch
Epoch: 16/20...  Training Step: 3038...  Training loss: 1.2433...  2.8109 sec/batch
Epoch: 16/20...  Training Step: 3039...  Training loss: 1.2579...  2.7743 sec/batch
Epoch: 16/20...  Training Step: 3040...  Training loss: 1.2339...  2.7972 sec/batch
Epoch: 16/20...  Training Step: 3041...  Training loss: 1.2873...  2.7877 sec/batch
Epoch: 16/20...  Training Step: 3042...  Training loss: 1.2749...  2.8406 sec/batch
Epoch: 16/20...  Training Step: 3043...  Training loss: 1.2689...  2.8406 sec/batch
Epoch: 16/20...  Training Step: 3044...  Training loss: 1.2323...  2.7940 sec/batch
Epoch: 16/20...  Training Step: 3045...  Training loss: 1.2517...  2.8271 sec/batch
Epoch: 16/20...  Training Step: 3046...  Training loss: 1.2733...  2.7801 sec/batch
Epoch: 16/20...  Training Step: 3047...  Training loss: 1.2471...  2.7881 sec/batch
Epoch: 16/20...  Training Step: 3048...  Training loss: 1.2413...  2.8135 sec/batch
Epoch: 16/20...  Training Step: 3049...  Training loss: 1.2178...  2.7807 sec/batch
Epoch: 16/20...  Training Step: 3050...  Training loss: 1.2459...  2.8514 sec/batch
Epoch: 16/20...  Training Step: 3051...  Training loss: 1.2092...  2.7936 sec/batch
Epoch: 16/20...  Training Step: 3052...  Training loss: 1.2456...  2.8216 sec/batch
Epoch: 16/20...  Training Step: 3053...  Training loss: 1.2154...  2.7878 sec/batch
Epoch: 16/20...  Training Step: 3054...  Training loss: 1.2385...  2.8619 sec/batch
Epoch: 16/20...  Training Step: 3055...  Training loss: 1.2258...  2.7894 sec/batch
Epoch: 16/20...  Training Step: 3056...  Training loss: 1.2541...  2.7957 sec/batch
Epoch: 16/20...  Training Step: 3057...  Training loss: 1.2264...  2.7882 sec/batch
Epoch: 16/20...  Training Step: 3058...  Training loss: 1.2247...  2.8614 sec/batch
Epoch: 16/20...  Training Step: 3059...  Training loss: 1.2206...  2.7894 sec/batch
Epoch: 16/20...  Training Step: 3060...  Training loss: 1.2522...  2.7947 sec/batch
Epoch: 16/20...  Training Step: 3061...  Training loss: 1.2322...  2.7965 sec/batch
Epoch: 16/20...  Training Step: 3062...  Training loss: 1.2430...  2.7763 sec/batch
Epoch: 16/20...  Training Step: 3063...  Training loss: 1.2262...  2.7689 sec/batch
Epoch: 16/20...  Training Step: 3064...  Training loss: 1.2211...  2.8120 sec/batch
Epoch: 16/20...  Training Step: 3065...  Training loss: 1.2287...  2.8212 sec/batch
Epoch: 16/20...  Training Step: 3066...  Training loss: 1.2591...  2.7872 sec/batch
Epoch: 16/20...  Training Step: 3067...  Training loss: 1.2558...  2.8310 sec/batch
Epoch: 16/20...  Training Step: 3068...  Training loss: 1.2155...  2.8186 sec/batch
Epoch: 16/20...  Training Step: 3069...  Training loss: 1.2334...  2.7650 sec/batch
Epoch: 16/20...  Training Step: 3070...  Training loss: 1.2229...  2.8311 sec/batch
Epoch: 16/20...  Training Step: 3071...  Training loss: 1.2469...  2.8251 sec/batch
Epoch: 16/20...  Training Step: 3072...  Training loss: 1.2407...  2.7965 sec/batch
Epoch: 16/20...  Training Step: 3073...  Training loss: 1.2447...  2.7920 sec/batch
Epoch: 16/20...  Training Step: 3074...  Training loss: 1.2317...  2.8127 sec/batch
Epoch: 16/20...  Training Step: 3075...  Training loss: 1.2411...  2.8407 sec/batch
Epoch: 16/20...  Training Step: 3076...  Training loss: 1.2369...  2.7663 sec/batch
Epoch: 16/20...  Training Step: 3077...  Training loss: 1.2506...  2.8102 sec/batch
Epoch: 16/20...  Training Step: 3078...  Training loss: 1.2513...  2.8005 sec/batch
Epoch: 16/20...  Training Step: 3079...  Training loss: 1.2421...  3.2152 sec/batch
Epoch: 16/20...  Training Step: 3080...  Training loss: 1.2661...  2.8229 sec/batch
Epoch: 16/20...  Training Step: 3081...  Training loss: 1.2356...  2.8001 sec/batch
Epoch: 16/20...  Training Step: 3082...  Training loss: 1.2540...  2.7824 sec/batch
...  [steps 3083-3167 elided; training loss fluctuates between roughly 1.20 and 1.30 at about 2.8 sec/batch]
Epoch: 16/20...  Training Step: 3168...  Training loss: 1.2290...  2.7666 sec/batch
Epoch: 17/20...  Training Step: 3169...  Training loss: 1.3740...  2.7740 sec/batch
...  [steps 3170-3365 elided; training loss fluctuates between roughly 1.18 and 1.30]
Epoch: 17/20...  Training Step: 3366...  Training loss: 1.2155...  2.8247 sec/batch
Epoch: 18/20...  Training Step: 3367...  Training loss: 1.3603...  2.8052 sec/batch
...  [steps 3368-3563 elided; training loss fluctuates between roughly 1.17 and 1.29]
Epoch: 18/20...  Training Step: 3564...  Training loss: 1.2093...  2.8620 sec/batch
Epoch: 19/20...  Training Step: 3565...  Training loss: 1.3437...  2.8144 sec/batch
...  [steps 3566-3761 elided; training loss fluctuates between roughly 1.17 and 1.28]
Epoch: 19/20...  Training Step: 3762...  Training loss: 1.1957...  2.8057 sec/batch
Epoch: 20/20...  Training Step: 3763...  Training loss: 1.3340...  2.7986 sec/batch
...  [steps 3764-3844 elided; training loss fluctuates between roughly 1.17 and 1.26]
Epoch: 20/20...  Training Step: 3845...  Training loss: 1.1934...  2.8167 sec/batch
Epoch: 20/20...  Training Step: 3846...  Training loss: 1.2161...  3.3749 sec/batch
Epoch: 20/20...  Training Step: 3847...  Training loss: 1.1885...  2.8369 sec/batch
Epoch: 20/20...  Training Step: 3848...  Training loss: 1.2134...  2.7888 sec/batch
Epoch: 20/20...  Training Step: 3849...  Training loss: 1.1929...  2.7898 sec/batch
Epoch: 20/20...  Training Step: 3850...  Training loss: 1.1943...  2.7779 sec/batch
Epoch: 20/20...  Training Step: 3851...  Training loss: 1.1824...  2.8032 sec/batch
Epoch: 20/20...  Training Step: 3852...  Training loss: 1.2250...  2.7860 sec/batch
Epoch: 20/20...  Training Step: 3853...  Training loss: 1.2012...  2.7673 sec/batch
Epoch: 20/20...  Training Step: 3854...  Training loss: 1.2007...  2.7771 sec/batch
Epoch: 20/20...  Training Step: 3855...  Training loss: 1.1792...  2.8063 sec/batch
Epoch: 20/20...  Training Step: 3856...  Training loss: 1.1859...  2.8518 sec/batch
Epoch: 20/20...  Training Step: 3857...  Training loss: 1.1926...  2.7963 sec/batch
Epoch: 20/20...  Training Step: 3858...  Training loss: 1.2176...  2.8107 sec/batch
Epoch: 20/20...  Training Step: 3859...  Training loss: 1.2100...  2.7942 sec/batch
Epoch: 20/20...  Training Step: 3860...  Training loss: 1.1743...  2.8340 sec/batch
Epoch: 20/20...  Training Step: 3861...  Training loss: 1.1981...  2.7905 sec/batch
Epoch: 20/20...  Training Step: 3862...  Training loss: 1.1970...  2.7948 sec/batch
Epoch: 20/20...  Training Step: 3863...  Training loss: 1.2171...  2.7816 sec/batch
Epoch: 20/20...  Training Step: 3864...  Training loss: 1.1974...  2.7978 sec/batch
Epoch: 20/20...  Training Step: 3865...  Training loss: 1.2128...  2.7906 sec/batch
Epoch: 20/20...  Training Step: 3866...  Training loss: 1.2032...  2.8057 sec/batch
Epoch: 20/20...  Training Step: 3867...  Training loss: 1.2066...  2.8242 sec/batch
Epoch: 20/20...  Training Step: 3868...  Training loss: 1.1953...  2.7919 sec/batch
Epoch: 20/20...  Training Step: 3869...  Training loss: 1.2138...  2.8701 sec/batch
Epoch: 20/20...  Training Step: 3870...  Training loss: 1.2131...  2.7780 sec/batch
Epoch: 20/20...  Training Step: 3871...  Training loss: 1.1928...  2.7992 sec/batch
Epoch: 20/20...  Training Step: 3872...  Training loss: 1.2333...  2.8172 sec/batch
Epoch: 20/20...  Training Step: 3873...  Training loss: 1.1983...  2.7753 sec/batch
Epoch: 20/20...  Training Step: 3874...  Training loss: 1.2221...  2.7927 sec/batch
Epoch: 20/20...  Training Step: 3875...  Training loss: 1.2090...  2.8316 sec/batch
Epoch: 20/20...  Training Step: 3876...  Training loss: 1.1922...  2.8678 sec/batch
Epoch: 20/20...  Training Step: 3877...  Training loss: 1.1957...  2.7697 sec/batch
Epoch: 20/20...  Training Step: 3878...  Training loss: 1.1842...  2.7754 sec/batch
Epoch: 20/20...  Training Step: 3879...  Training loss: 1.2163...  2.7981 sec/batch
Epoch: 20/20...  Training Step: 3880...  Training loss: 1.2220...  2.8555 sec/batch
Epoch: 20/20...  Training Step: 3881...  Training loss: 1.2072...  2.7707 sec/batch
Epoch: 20/20...  Training Step: 3882...  Training loss: 1.2112...  2.7934 sec/batch
Epoch: 20/20...  Training Step: 3883...  Training loss: 1.2077...  2.8403 sec/batch
Epoch: 20/20...  Training Step: 3884...  Training loss: 1.1699...  2.8617 sec/batch
Epoch: 20/20...  Training Step: 3885...  Training loss: 1.1682...  2.7941 sec/batch
Epoch: 20/20...  Training Step: 3886...  Training loss: 1.2067...  2.7595 sec/batch
Epoch: 20/20...  Training Step: 3887...  Training loss: 1.2006...  2.7677 sec/batch
Epoch: 20/20...  Training Step: 3888...  Training loss: 1.1823...  2.7890 sec/batch
Epoch: 20/20...  Training Step: 3889...  Training loss: 1.2031...  2.8946 sec/batch
Epoch: 20/20...  Training Step: 3890...  Training loss: 1.2094...  2.8426 sec/batch
Epoch: 20/20...  Training Step: 3891...  Training loss: 1.1878...  2.7975 sec/batch
Epoch: 20/20...  Training Step: 3892...  Training loss: 1.1699...  2.8147 sec/batch
Epoch: 20/20...  Training Step: 3893...  Training loss: 1.1647...  2.8562 sec/batch
Epoch: 20/20...  Training Step: 3894...  Training loss: 1.2007...  2.8702 sec/batch
Epoch: 20/20...  Training Step: 3895...  Training loss: 1.2241...  2.7810 sec/batch
Epoch: 20/20...  Training Step: 3896...  Training loss: 1.2126...  2.7744 sec/batch
Epoch: 20/20...  Training Step: 3897...  Training loss: 1.2115...  2.8274 sec/batch
Epoch: 20/20...  Training Step: 3898...  Training loss: 1.2181...  2.7944 sec/batch
Epoch: 20/20...  Training Step: 3899...  Training loss: 1.2352...  2.8362 sec/batch
Epoch: 20/20...  Training Step: 3900...  Training loss: 1.2290...  2.7948 sec/batch
Epoch: 20/20...  Training Step: 3901...  Training loss: 1.2149...  2.8377 sec/batch
Epoch: 20/20...  Training Step: 3902...  Training loss: 1.2136...  2.8688 sec/batch
Epoch: 20/20...  Training Step: 3903...  Training loss: 1.2526...  2.8443 sec/batch
Epoch: 20/20...  Training Step: 3904...  Training loss: 1.2201...  2.7952 sec/batch
Epoch: 20/20...  Training Step: 3905...  Training loss: 1.2037...  2.8209 sec/batch
Epoch: 20/20...  Training Step: 3906...  Training loss: 1.2401...  2.8758 sec/batch
Epoch: 20/20...  Training Step: 3907...  Training loss: 1.1922...  2.8100 sec/batch
Epoch: 20/20...  Training Step: 3908...  Training loss: 1.2370...  2.8016 sec/batch
Epoch: 20/20...  Training Step: 3909...  Training loss: 1.2280...  2.7936 sec/batch
Epoch: 20/20...  Training Step: 3910...  Training loss: 1.2365...  2.8211 sec/batch
Epoch: 20/20...  Training Step: 3911...  Training loss: 1.2479...  2.8238 sec/batch
Epoch: 20/20...  Training Step: 3912...  Training loss: 1.1969...  2.8032 sec/batch
Epoch: 20/20...  Training Step: 3913...  Training loss: 1.1857...  2.7956 sec/batch
Epoch: 20/20...  Training Step: 3914...  Training loss: 1.1774...  2.7838 sec/batch
Epoch: 20/20...  Training Step: 3915...  Training loss: 1.2291...  2.7957 sec/batch
Epoch: 20/20...  Training Step: 3916...  Training loss: 1.2070...  2.7687 sec/batch
Epoch: 20/20...  Training Step: 3917...  Training loss: 1.1898...  2.8088 sec/batch
Epoch: 20/20...  Training Step: 3918...  Training loss: 1.2138...  2.8118 sec/batch
Epoch: 20/20...  Training Step: 3919...  Training loss: 1.2135...  2.8471 sec/batch
Epoch: 20/20...  Training Step: 3920...  Training loss: 1.1999...  2.7966 sec/batch
Epoch: 20/20...  Training Step: 3921...  Training loss: 1.1765...  2.8150 sec/batch
Epoch: 20/20...  Training Step: 3922...  Training loss: 1.2271...  2.8916 sec/batch
Epoch: 20/20...  Training Step: 3923...  Training loss: 1.2260...  2.7927 sec/batch
Epoch: 20/20...  Training Step: 3924...  Training loss: 1.2213...  2.8154 sec/batch
Epoch: 20/20...  Training Step: 3925...  Training loss: 1.2112...  2.8133 sec/batch
Epoch: 20/20...  Training Step: 3926...  Training loss: 1.2172...  2.8284 sec/batch
Epoch: 20/20...  Training Step: 3927...  Training loss: 1.2102...  2.8404 sec/batch
Epoch: 20/20...  Training Step: 3928...  Training loss: 1.2034...  2.7907 sec/batch
Epoch: 20/20...  Training Step: 3929...  Training loss: 1.2324...  2.7826 sec/batch
Epoch: 20/20...  Training Step: 3930...  Training loss: 1.2720...  2.8139 sec/batch
Epoch: 20/20...  Training Step: 3931...  Training loss: 1.2202...  3.1487 sec/batch
Epoch: 20/20...  Training Step: 3932...  Training loss: 1.2167...  2.7893 sec/batch
Epoch: 20/20...  Training Step: 3933...  Training loss: 1.2081...  2.8175 sec/batch
Epoch: 20/20...  Training Step: 3934...  Training loss: 1.2086...  2.8676 sec/batch
Epoch: 20/20...  Training Step: 3935...  Training loss: 1.2443...  2.7925 sec/batch
Epoch: 20/20...  Training Step: 3936...  Training loss: 1.2128...  2.7856 sec/batch
Epoch: 20/20...  Training Step: 3937...  Training loss: 1.2184...  2.7914 sec/batch
Epoch: 20/20...  Training Step: 3938...  Training loss: 1.1891...  2.7940 sec/batch
Epoch: 20/20...  Training Step: 3939...  Training loss: 1.2098...  2.8031 sec/batch
Epoch: 20/20...  Training Step: 3940...  Training loss: 1.2467...  2.8219 sec/batch
Epoch: 20/20...  Training Step: 3941...  Training loss: 1.1977...  2.8181 sec/batch
Epoch: 20/20...  Training Step: 3942...  Training loss: 1.1897...  2.7797 sec/batch
Epoch: 20/20...  Training Step: 3943...  Training loss: 1.1910...  2.8621 sec/batch
Epoch: 20/20...  Training Step: 3944...  Training loss: 1.2024...  2.8203 sec/batch
Epoch: 20/20...  Training Step: 3945...  Training loss: 1.1994...  2.7819 sec/batch
Epoch: 20/20...  Training Step: 3946...  Training loss: 1.1968...  2.7977 sec/batch
Epoch: 20/20...  Training Step: 3947...  Training loss: 1.1944...  2.7982 sec/batch
Epoch: 20/20...  Training Step: 3948...  Training loss: 1.1819...  2.8208 sec/batch
Epoch: 20/20...  Training Step: 3949...  Training loss: 1.2378...  2.8070 sec/batch
Epoch: 20/20...  Training Step: 3950...  Training loss: 1.2010...  3.2539 sec/batch
Epoch: 20/20...  Training Step: 3951...  Training loss: 1.2020...  2.8400 sec/batch
Epoch: 20/20...  Training Step: 3952...  Training loss: 1.2116...  2.8520 sec/batch
Epoch: 20/20...  Training Step: 3953...  Training loss: 1.1863...  3.0417 sec/batch
Epoch: 20/20...  Training Step: 3954...  Training loss: 1.1973...  2.9535 sec/batch
Epoch: 20/20...  Training Step: 3955...  Training loss: 1.2019...  2.9578 sec/batch
Epoch: 20/20...  Training Step: 3956...  Training loss: 1.1940...  2.9122 sec/batch
Epoch: 20/20...  Training Step: 3957...  Training loss: 1.1726...  2.8976 sec/batch
Epoch: 20/20...  Training Step: 3958...  Training loss: 1.2087...  2.9483 sec/batch
Epoch: 20/20...  Training Step: 3959...  Training loss: 1.1956...  2.8921 sec/batch
Epoch: 20/20...  Training Step: 3960...  Training loss: 1.2013...  2.9540 sec/batch

Saved checkpoints

Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
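
For reference, the checkpoints above were written with the standard tf.train.Saver pattern. The cell below is only a minimal sketch of that pattern, not the training cell itself; step and lstm_size stand in for the training loop's own variables.


In [ ]:
# Minimal save/restore sketch (assumes a graph with variables has already
# been built; `step` and `lstm_size` are placeholders standing in for the
# training loop's counter and hidden size).
saver = tf.train.Saver(max_to_keep=100)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training happens here ...
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(step, lstm_size))

# Later, restore the most recent checkpoint in the directory:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))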


In [22]:
tf.train.get_checkpoint_state('checkpoints')


Out[22]:
model_checkpoint_path: "checkpoints/i3960_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1000_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2000_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3000_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3960_l512.ckpt"

Sampling

Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. We then feed that new character back in to predict the one after it, and keep repeating this to generate all new text. I also included some functionality to prime the network with a passage by passing in a string and building up a hidden state from it.

The network gives us predictions for each character. To reduce noise and make the output a little less random, I'm only going to choose the next character from the top N most likely characters.


In [23]:
def pick_top_n(preds, vocab_size, top_n=5):
    # Flatten the prediction array to 1-D, copying so we don't
    # modify the caller's array in place
    p = np.squeeze(preds).copy()
    # Zero out everything except the top_n most likely characters
    p[np.argsort(p)[:-top_n]] = 0
    # Renormalize so the remaining probabilities sum to 1
    p = p / np.sum(p)
    # Sample one character index from the reduced distribution
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
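
As a quick sanity check (the numbers below are made up for illustration), here's what pick_top_n does to a toy five-character distribution: with top_n=2 only the two largest probabilities survive, and they're renormalized before sampling.


In [ ]:
# Hypothetical toy distribution over a 5-character "vocabulary"
toy_preds = np.array([[0.05, 0.10, 0.60, 0.20, 0.05]])
# Only indices 2 and 3 survive top_n=2; after renormalizing they are
# sampled with probabilities 0.75 and 0.25 respectively
print(pick_top_n(toy_preds, vocab_size=5, top_n=2))  # prints 2 or 3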

In [24]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    # Build the model in sampling mode: batch size and sequence length of 1
    model = CharRNN(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        # Prime the network: run every character of the prime string
        # through it to build up the hidden state
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

        # The first generated character comes from the primed predictions
        c = pick_top_n(preds, vocab_size)
        samples.append(int_to_vocab[c])

        # Feed each sampled character back in as the next input
        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

            c = pick_top_n(preds, vocab_size)
            samples.append(int_to_vocab[c])

    return ''.join(samples)
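
Notice that each sess.run call returns model.final_state, which gets fed back in as model.initial_state on the next step; that's what lets the LSTM generate one character at a time while remembering everything it has produced so far. keep_prob is pinned at 1.0 because we don't want dropout active while sampling.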

Here, we pass in the path to a checkpoint and sample from the network.


In [25]:
tf.train.latest_checkpoint('checkpoints')


Out[25]:
'checkpoints/i3960_l512.ckpt'

In [26]:
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i3960_l512.ckpt
Farment, he saw, a laty at love, all of all the place were contracted
of two marnifices, and he would be a spling in which there was on
anything were so forced and done all this to suffering. All that
she would see them about herself as he, and went out of all suppositions
to see them, and he depressed her and there become the people. He had
told the place in the muddle to the portrait, his face was an approaching
shirt. But the desire to have a secret same feeling of the commenter
of her prriess and her. They had been answered of her serfance,
though his wife still see in the desire to tell her; his barrel
showed that she caught a positive, think of the colonel, and this
shouting and a short attempt and state of the counting has sent
to the papirator, his frather and all that in his brother, as though
he was that to all that had been a chance to see her waiting at him, and would
have been so successful to the sound of this man. They're a late of
marshals. And how he has never had been dearer about the throw and the
man and his matter.

She saw her takon over with horse at any mistake. "And then, to
be indecised, it is something at this suppose. If I won't say," he said
to her sisten.

"They could have the sort, I want to say it with makes a profects; but a
man so touching anything than impossible, and we shall not be to be and
short to the peasants.

He did not see that it was it all, and I can't help she has surried.
There are sent a success."

"What was the same?"

"Oh, what doesn't I heard?" he said again.

"I am visible in the meeting or society. I can't care all, and I stand
to tell him all the performance of all," answered the sound hangly
companing; as he had now seen the side of the sight of his father, and
he was sitting on his face with her eyes.

"Ah, always, as they were to mertion and does. I want to be done, and the
man, a care for me, I can't be in a first man."

"Why, all I doubt in the carriage? That's a sisting out and servoch,
and he's being absorbed

In [27]:
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i200_l512.ckpt
Farry oo sar ind antitos tot on wing hint the thors hor, sedent at war has he hosered. She has hed wong hed when tho ses on ton hint and oul the
sore thon tor terson thot so mise too leritg tote ant thar saser sas toun sat he wistor toot to he sins antitg at oud att ins toun thit too the mas tha tounth te ansitg on thet on whe he soud sand she son to wot of tare an arere tas the sithers ont and, sade ho cant out thins, sere ton
the thith alsan has soreride, and he the he sas on he ans on at at he sorsete, af and th the
shesthererd, had hiss anes tout ho sore she wat of at her anse sas the the couthonser ouded that thar wher anserint ons tithe tishe and sound whis he the the cans the the te wers ote whe the sho sart he ceon on tha cimo norentit the
con at he harend and to mame tim thet ore sher sot the there sher hant and the cos at al hes the wass aled tin he hes ansing oud.
"
I ther he sathens sor afinthe, the sat sore to tile, ant th thath soters hor ald, whe tho sone wondes of hems ont 

In [28]:
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i600_l512.ckpt
Farying, and see thay alloved to and have been to him of she had been how he some
to hin the same of aloust her said, she said of cherterest, the could ot the werl thenese the seres and the heare, said it a cartunces, and the sand with him, and he said to a compention what he calling to the pase the carlon to brather and thought would her then she was say the sand and she wistle who were sompling three atthant his went of she seid and the seeting to seilien offilly to her his stold to his bonding. She heading him, all the corsting had her has ase thay she tanded ofter the carsion.

"The chally of the post on a lighting out, to there wored the ment into the sand of her," see he sand to his allower, and hum another andicent of the stracked of theme of seep of the sores of the samperss, was, and hasts andwhing
to her the was of having aten her wout and werling hom."

"Yos,'re sair, atden," see he saud, and her then han streem of the cererom of his chare and trisked, and went a mince sere of s

In [29]:
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i1200_l512.ckpt
Farring
hurrs to ser the care and straight to the same as it was to say the string of the
something, but something that he distrace time, with
his words, the poret and sen of serious than and with him he could not see the steps of hap a first at the
monert of her.

"That say to may he
would be some so interesting it, it's tire her than another," said Vassily.

"And what a cemtail it." she said to the marred to
seeing the secrie of
the pleasure, and with sere the sector, that she would have been in her hand, she was at her fine of setitions, taken indown and stringing to
the could say that so should be worked
and heated all them
the sens of her and to tear her haping of the chief always with her, and had tellous the potition and houses and his works he cared to a most side a steps.

The since
had not stopped he was say, and the same shoulder and treatle
were a place in white houre as though he had always
trought them that he he done his way, but the minsten would see
hims and at
that was th

In [ ]: