Anna KaRNNa

In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch, with additional information from r2rt and from Sherjil Ozair's implementation on GitHub. Below is the general architecture of the character-wise RNN.


In [1]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple of dictionaries to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.


In [3]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

Let's check out the first 100 characters and make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.


In [4]:
text[:100]


Out[4]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'

And we can see the characters encoded as integers.


In [5]:
encoded[:100]


Out[5]:
array([72, 16, 74, 63, 77, 78, 41, 12, 64,  7,  7,  7, 15, 74, 63, 63, 30,
       12, 68, 74, 29, 75, 61, 75, 78, 50, 12, 74, 41, 78, 12, 74, 61, 61,
       12, 74, 61, 75, 42, 78, 79, 12, 78, 13, 78, 41, 30, 12, 51, 20, 16,
       74, 63, 63, 30, 12, 68, 74, 29, 75, 61, 30, 12, 75, 50, 12, 51, 20,
       16, 74, 63, 63, 30, 12, 75, 20, 12, 75, 77, 50, 12, 69, 14, 20,  7,
       14, 74, 30, 66,  7,  7,  1, 13, 78, 41, 30, 77, 16, 75, 20], dtype=int32)

Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.


In [6]:
len(vocab)


Out[6]:
83

Making training mini-batches

Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:


We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.

After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size; it'll fill in the appropriate size for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
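
To make this concrete, here's a minimal sketch of the discard-and-reshape step on a toy 15-element array (the names mirror the function below; the toy numbers are just for illustration):

import numpy as np

arr = np.arange(15)                  # pretend these are 15 encoded characters
n_seqs, n_steps = 2, 3               # N = 2 sequences, M = 3 steps

batch_size = n_seqs * n_steps        # 6 characters per batch
n_batches = len(arr) // batch_size   # 2 full batches
arr = arr[:batch_size * n_batches]   # discard the 3 leftover characters
arr = arr.reshape((n_seqs, -1))      # shape (2, 6), i.e. N x (M * K)
print(arr)
# [[ 0  1  2  3  4  5]
#  [ 6  7  8  9 10 11]]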

Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:

y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

where x is the input batch and y is the target batch.

The way I like to do this windowing is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
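
Continuing the toy example from above, the windowing (and the target shift) looks like this:

for n in range(0, arr.shape[1], n_steps):
    x = arr[:, n:n+n_steps]                   # an N x M window of inputs
    y = np.zeros_like(x)
    y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]   # targets: inputs shifted by one
    print(x, y)
# First window: x = [[0 1 2], [6 7 8]], y = [[1 2 0], [7 8 6]]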

Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here; type out the solution code yourself.

In terms of the image above:

$N$ = 2 = n_seqs

$M$ = 3 = n_steps


In [7]:
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.
       
       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the batch size and number of batches we can make
    batch_size = n_seqs * n_steps # size of each batch we return (i.e., how many characters are in one batch)
    n_batches = len(arr) // batch_size # number of full batches we can make
    
    # Keep only enough characters to make full batches
    arr = arr[:batch_size * n_batches]
    
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))
    
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] # You'll usually see the first input character used as the last target character
        yield x, y

Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.


In [8]:
batches = get_batches(encoded, 10, 50)
x, y = next(batches)

In [9]:
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])


x
 [[58 13  0  5 82 52 80 75 50  1]
 [75  0 27 75 48 66 82 75 23 66]
 [21  6 48 72  1  1 73 19 52 56]
 [48 75 77 61 80  6 48 23 75 13]
 [75  6 82 75  6 56  8 75 56  6]
 [75 24 82 75 76  0 56  1 66 48]
 [13 52 48 75 68 66 27 52 75 38]
 [59 75 51 61 82 75 48 66 76 75]
 [82 75  6 56 48 49 82 72 75 36]
 [75 56  0  6 77 75 82 66 75 13]]

y
 [[13  0  5 82 52 80 75 50  1  1]
 [ 0 27 75 48 66 82 75 23 66  6]
 [ 6 48 72  1  1 73 19 52 56  8]
 [75 77 61 80  6 48 23 75 13  6]
 [ 6 82 75  6 56  8 75 56  6 80]
 [24 82 75 76  0 56  1 66 48 37]
 [52 48 75 68 66 27 52 75 38 66]
 [75 51 61 82 75 48 66 76 75 56]
 [75  6 56 48 49 82 72 75 36 13]
 [56  0  6 77 75 82 66 75 13 52]]

If you implemented get_batches correctly, the above output should look something like

x
 [[55 63 69 22  6 76 45  5 16 35]
 [ 5 69  1  5 12 52  6  5 56 52]
 [48 29 12 61 35 35  8 64 76 78]
 [12  5 24 39 45 29 12 56  5 63]
 [ 5 29  6  5 29 78 28  5 78 29]
 [ 5 13  6  5 36 69 78 35 52 12]
 [63 76 12  5 18 52  1 76  5 58]
 [34  5 73 39  6  5 12 52 36  5]
 [ 6  5 29 78 12 79  6 61  5 59]
 [ 5 78 69 29 24  5  6 52  5 63]]

y
 [[63 69 22  6 76 45  5 16 35 35]
 [69  1  5 12 52  6  5 56 52 29]
 [29 12 61 35 35  8 64 76 78 28]
 [ 5 24 39 45 29 12 56  5 63 29]
 [29  6  5 29 78 28  5 78 29 45]
 [13  6  5 36 69 78 35 52 12 43]
 [76 12  5 18 52  1 76  5 58 52]
 [ 5 73 39  6  5 12 52 36  5 78]
 [ 5 29 78 12 79  6 61  5 59 63]
 [78 69 29 24  5  6 52  5 63 76]]

although the exact numbers will be different. Check to make sure the data is shifted over one step for y.
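
One quick way to verify the shift is to assert the relationship directly on the batch from above (a minimal sanity check, using the x and y we just printed):

assert np.array_equal(y[:, :-1], x[:, 1:])  # targets are inputs shifted by one
assert np.array_equal(y[:, -1], x[:, 0])    # first input wraps to the last target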

Building the model

Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.

Inputs

First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is, a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.

Exercise: Create the input placeholders in the function below.


In [10]:
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout 
    
        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
        
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    
    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    
    return inputs, targets, keep_prob

LSTM Cell

Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

lstm = tf.contrib.rnn.BasicLSTMCell(num_units)

where num_units is the number of units in the hidden layer of the cell. Then we can add dropout by wrapping it with

tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. You'll sometimes see this written as

tf.contrib.rnn.MultiRNNCell([cell]*num_layers)

This might look a little weird if you know Python well because this creates a list of the same cell object. In older versions of TensorFlow that was fine, since TensorFlow would still create different weight matrices for each layer; in newer versions each layer needs to be its own cell object, which is why the code below builds a separate cell for every layer. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so

initial_state = cell.zero_state(batch_size, tf.float32)

Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.


In [11]:
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.
    
        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size

    '''
    ### Build the LSTM Cell
    # Use a basic LSTM cell
    lstms = [tf.contrib.rnn.BasicLSTMCell(lstm_size) for _ in range(num_layers)]
    
    # Add dropout to the cell outputs
    drops = [tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) for lstm in lstms]
    
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell(drops)
    initial_state = cell.zero_state(batch_size, tf.float32)
    
    return cell, initial_state

RNN Output

Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$; we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.

Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.

Exercise: Implement the output layer in the function below.


In [12]:
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.
    
        Arguments
        ---------
        
        lstm_output: List of output tensors from the LSTM layer
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    
    '''

    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # Concatenate lstm_output over axis 1 (the columns)
    seq_output = tf.concat(lstm_output, axis=1)
    
    # Reshape seq_output to a 2D tensor with lstm_size columns
    x = tf.reshape(seq_output, [-1, in_size])
    
    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        # Create the weight and bias variables here
        softmax_w = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    
    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.add(tf.matmul(x, softmax_w), softmax_b)
    
    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name="predictions")
    
    return out, logits

Training loss

Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.

Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.

Exercise: Implement the loss calculation in the function below.


In [13]:
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
    
        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
        
    '''
    
    # One-hot encode targets and reshape to match logits, one row per sequence per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())  # equivalently: tf.reshape(y_one_hot, [-1, num_classes])
    
    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_reshaped, logits=logits)
    loss = tf.reduce_mean(loss)
    return loss

Optimizer

Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. Here we use tf.clip_by_global_norm, which rescales the gradients so that their combined (global) norm is at most grad_clip; this ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.


In [14]:
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
    
        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold for clipping gradients by their global norm
    
    '''
    
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    return optimizer

Build the network

Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.

Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.


In [15]:
class CharRNN:
    
    def __init__(self, num_classes, batch_size=64, num_steps=50, 
                       lstm_size=128, num_layers=2, learning_rate=0.001, 
                       grad_clip=5, sampling=False):
    
        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()
        
        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        
        # Run each sequence step through the RNN with tf.nn.dynamic_rnn 
        outputs, state = tf.nn.dynamic_rnn(cell, inputs=x_one_hot, initial_state=self.initial_state)
        self.final_state = state
        
        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        
        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) 
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)

Hyperparameters

Here are the hyperparameters for the network.

  • batch_size - Number of sequences running through the network in one pass.
  • num_steps - Number of characters in the sequence the network is trained on. Typically larger is better; the network will learn more long-range dependencies, but it takes longer to train. 100 is typically a good number here.
  • lstm_size - The number of units in the hidden layers.
  • num_layers - Number of hidden LSTM layers to use
  • learning_rate - Learning rate for training
  • keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data, by default every 1000 iterations). In particular:

  • If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
  • If your training/validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer).

Approximate number of parameters

The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

  • The number of parameters in your model. This is printed when you start training.
  • The size of your dataset. A 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

  • I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
  • I have a 10MB dataset and am running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
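
This notebook doesn't print the parameter count during training, but once the model below is built you can compute it from the graph yourself. A minimal sketch (assuming model = CharRNN(...) has already been constructed, so tf.trainable_variables() is populated):

# Sum the element counts of all trainable variables in the default graph
n_params = int(sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()))
print('Trainable parameters: {:,}'.format(n_params))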

Best models strategy

The winning strategy to obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
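
The training loop below trains on all of the text. If you want a validation set to monitor, a simple split of the encoded array works; here's a sketch (the 90/10 fraction is just a common choice, not something this notebook uses):

split_frac = 0.9                                   # assumption: 90% train, 10% validation
split_idx = int(len(encoded) * split_frac)
train_data, val_data = encoded[:split_idx], encoded[split_idx:]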


In [16]:
batch_size = 100         # Sequences per batch
num_steps = 100          # Number of sequence steps per batch
lstm_size = 512         # Size of hidden layers in LSTMs
num_layers = 2          # Number of LSTM layers
learning_rate = 0.001    # Learning rate
keep_prob = 0.5         # Dropout keep probability

Time for training

This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.

Here I'm saving checkpoints with the format

i{iteration number}_l{# hidden layer units}.ckpt

Exercise: Set the hyperparameters above to train the network. Watch the training loss; it should be consistently dropping. Also, I highly advise running this on a GPU.


In [17]:
epochs = 20
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers, 
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss, 
                                                 model.final_state, 
                                                 model.optimizer], 
                                                 feed_dict=feed)
            
            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))
        
            if counter % save_every_n == 0:
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))


Epoch: 1/20...  Training Step: 1...  Training loss: 4.4195...  0.6459 sec/batch
Epoch: 1/20...  Training Step: 2...  Training loss: 4.3401...  0.3539 sec/batch
Epoch: 1/20...  Training Step: 3...  Training loss: 3.9019...  0.3150 sec/batch
Epoch: 1/20...  Training Step: 4...  Training loss: 5.6678...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 5...  Training loss: 4.1009...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 6...  Training loss: 3.7862...  0.3094 sec/batch
Epoch: 1/20...  Training Step: 7...  Training loss: 3.6988...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 8...  Training loss: 3.5835...  0.3083 sec/batch
Epoch: 1/20...  Training Step: 9...  Training loss: 3.4850...  0.3086 sec/batch
Epoch: 1/20...  Training Step: 10...  Training loss: 3.4325...  0.3088 sec/batch
Epoch: 1/20...  Training Step: 11...  Training loss: 3.4017...  0.3165 sec/batch
Epoch: 1/20...  Training Step: 12...  Training loss: 3.4008...  0.3082 sec/batch
Epoch: 1/20...  Training Step: 13...  Training loss: 3.3686...  0.3092 sec/batch
Epoch: 1/20...  Training Step: 14...  Training loss: 3.3664...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 15...  Training loss: 3.3359...  0.3092 sec/batch
Epoch: 1/20...  Training Step: 16...  Training loss: 3.3075...  0.3092 sec/batch
Epoch: 1/20...  Training Step: 17...  Training loss: 3.2896...  0.3092 sec/batch
Epoch: 1/20...  Training Step: 18...  Training loss: 3.3252...  0.3092 sec/batch
Epoch: 1/20...  Training Step: 19...  Training loss: 3.2860...  0.3088 sec/batch
Epoch: 1/20...  Training Step: 20...  Training loss: 3.2381...  0.3087 sec/batch
Epoch: 1/20...  Training Step: 21...  Training loss: 3.2671...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 22...  Training loss: 3.2495...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 23...  Training loss: 3.2410...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 24...  Training loss: 3.2421...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 25...  Training loss: 3.2294...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 26...  Training loss: 3.2307...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 27...  Training loss: 3.2308...  0.3072 sec/batch
Epoch: 1/20...  Training Step: 28...  Training loss: 3.2009...  0.3087 sec/batch
Epoch: 1/20...  Training Step: 29...  Training loss: 3.2106...  0.3086 sec/batch
Epoch: 1/20...  Training Step: 30...  Training loss: 3.2156...  0.3091 sec/batch
Epoch: 1/20...  Training Step: 31...  Training loss: 3.2397...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 32...  Training loss: 3.2040...  0.3096 sec/batch
Epoch: 1/20...  Training Step: 33...  Training loss: 3.1844...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 34...  Training loss: 3.2138...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 35...  Training loss: 3.1942...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 36...  Training loss: 3.2055...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 37...  Training loss: 3.1710...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 38...  Training loss: 3.1758...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 39...  Training loss: 3.1732...  0.3107 sec/batch
Epoch: 1/20...  Training Step: 40...  Training loss: 3.1780...  0.3162 sec/batch
Epoch: 1/20...  Training Step: 41...  Training loss: 3.1642...  0.3091 sec/batch
Epoch: 1/20...  Training Step: 42...  Training loss: 3.1624...  0.3096 sec/batch
Epoch: 1/20...  Training Step: 43...  Training loss: 3.1577...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 44...  Training loss: 3.1605...  0.3083 sec/batch
Epoch: 1/20...  Training Step: 45...  Training loss: 3.1573...  0.3089 sec/batch
Epoch: 1/20...  Training Step: 46...  Training loss: 3.1695...  0.3094 sec/batch
Epoch: 1/20...  Training Step: 47...  Training loss: 3.1685...  0.3088 sec/batch
Epoch: 1/20...  Training Step: 48...  Training loss: 3.1729...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 49...  Training loss: 3.1675...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 50...  Training loss: 3.1650...  0.3104 sec/batch
Epoch: 1/20...  Training Step: 51...  Training loss: 3.1597...  0.3091 sec/batch
Epoch: 1/20...  Training Step: 52...  Training loss: 3.1488...  0.3090 sec/batch
Epoch: 1/20...  Training Step: 53...  Training loss: 3.1638...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 54...  Training loss: 3.1414...  0.3111 sec/batch
Epoch: 1/20...  Training Step: 55...  Training loss: 3.1560...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 56...  Training loss: 3.1300...  0.3088 sec/batch
Epoch: 1/20...  Training Step: 57...  Training loss: 3.1449...  0.3096 sec/batch
Epoch: 1/20...  Training Step: 58...  Training loss: 3.1456...  0.3090 sec/batch
Epoch: 1/20...  Training Step: 59...  Training loss: 3.1395...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 60...  Training loss: 3.1436...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 61...  Training loss: 3.1468...  0.3078 sec/batch
Epoch: 1/20...  Training Step: 62...  Training loss: 3.1554...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 63...  Training loss: 3.1615...  0.3116 sec/batch
Epoch: 1/20...  Training Step: 64...  Training loss: 3.1194...  0.3100 sec/batch
Epoch: 1/20...  Training Step: 65...  Training loss: 3.1242...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 66...  Training loss: 3.1448...  0.3088 sec/batch
Epoch: 1/20...  Training Step: 67...  Training loss: 3.1419...  0.3091 sec/batch
Epoch: 1/20...  Training Step: 68...  Training loss: 3.1002...  0.3087 sec/batch
Epoch: 1/20...  Training Step: 69...  Training loss: 3.1160...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 70...  Training loss: 3.1395...  0.3107 sec/batch
Epoch: 1/20...  Training Step: 71...  Training loss: 3.1288...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 72...  Training loss: 3.1446...  0.3091 sec/batch
Epoch: 1/20...  Training Step: 73...  Training loss: 3.1199...  0.3100 sec/batch
Epoch: 1/20...  Training Step: 74...  Training loss: 3.1286...  0.3097 sec/batch
Epoch: 1/20...  Training Step: 75...  Training loss: 3.1333...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 76...  Training loss: 3.1411...  0.3106 sec/batch
Epoch: 1/20...  Training Step: 77...  Training loss: 3.1311...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 78...  Training loss: 3.1185...  0.3107 sec/batch
Epoch: 1/20...  Training Step: 79...  Training loss: 3.1114...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 80...  Training loss: 3.1019...  0.3094 sec/batch
Epoch: 1/20...  Training Step: 81...  Training loss: 3.1057...  0.3116 sec/batch
Epoch: 1/20...  Training Step: 82...  Training loss: 3.1120...  0.3097 sec/batch
Epoch: 1/20...  Training Step: 83...  Training loss: 3.1143...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 84...  Training loss: 3.1008...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 85...  Training loss: 3.0850...  0.3135 sec/batch
Epoch: 1/20...  Training Step: 86...  Training loss: 3.0911...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 87...  Training loss: 3.0856...  0.3091 sec/batch
Epoch: 1/20...  Training Step: 88...  Training loss: 3.0885...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 89...  Training loss: 3.1031...  0.3086 sec/batch
Epoch: 1/20...  Training Step: 90...  Training loss: 3.0962...  0.3105 sec/batch
Epoch: 1/20...  Training Step: 91...  Training loss: 3.0930...  0.3104 sec/batch
Epoch: 1/20...  Training Step: 92...  Training loss: 3.0836...  0.3097 sec/batch
Epoch: 1/20...  Training Step: 93...  Training loss: 3.0824...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 94...  Training loss: 3.0827...  0.3094 sec/batch
Epoch: 1/20...  Training Step: 95...  Training loss: 3.0682...  0.3108 sec/batch
Epoch: 1/20...  Training Step: 96...  Training loss: 3.0580...  0.3090 sec/batch
Epoch: 1/20...  Training Step: 97...  Training loss: 3.1326...  0.3110 sec/batch
Epoch: 1/20...  Training Step: 98...  Training loss: 3.0998...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 99...  Training loss: 3.0934...  0.3089 sec/batch
Epoch: 1/20...  Training Step: 100...  Training loss: 3.0800...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 101...  Training loss: 3.0753...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 102...  Training loss: 3.0788...  0.3104 sec/batch
Epoch: 1/20...  Training Step: 103...  Training loss: 3.0787...  0.3108 sec/batch
Epoch: 1/20...  Training Step: 104...  Training loss: 3.0679...  0.3107 sec/batch
Epoch: 1/20...  Training Step: 105...  Training loss: 3.0694...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 106...  Training loss: 3.0609...  0.3108 sec/batch
Epoch: 1/20...  Training Step: 107...  Training loss: 3.0376...  0.3112 sec/batch
Epoch: 1/20...  Training Step: 108...  Training loss: 3.0473...  0.3106 sec/batch
Epoch: 1/20...  Training Step: 109...  Training loss: 3.0523...  0.3114 sec/batch
Epoch: 1/20...  Training Step: 110...  Training loss: 3.0053...  0.3111 sec/batch
Epoch: 1/20...  Training Step: 111...  Training loss: 3.0217...  0.3114 sec/batch
Epoch: 1/20...  Training Step: 112...  Training loss: 3.2015...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 113...  Training loss: 3.0650...  0.3106 sec/batch
Epoch: 1/20...  Training Step: 114...  Training loss: 3.0334...  0.3100 sec/batch
Epoch: 1/20...  Training Step: 115...  Training loss: 3.0280...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 116...  Training loss: 3.0348...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 117...  Training loss: 3.0309...  0.3093 sec/batch
Epoch: 1/20...  Training Step: 118...  Training loss: 3.0395...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 119...  Training loss: 3.0408...  0.3111 sec/batch
Epoch: 1/20...  Training Step: 120...  Training loss: 3.0188...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 121...  Training loss: 3.0466...  0.3095 sec/batch
Epoch: 1/20...  Training Step: 122...  Training loss: 3.0238...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 123...  Training loss: 3.0209...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 124...  Training loss: 3.0266...  0.3114 sec/batch
Epoch: 1/20...  Training Step: 125...  Training loss: 2.9944...  0.3108 sec/batch
Epoch: 1/20...  Training Step: 126...  Training loss: 2.9711...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 127...  Training loss: 2.9945...  0.3113 sec/batch
Epoch: 1/20...  Training Step: 128...  Training loss: 3.0004...  0.3089 sec/batch
Epoch: 1/20...  Training Step: 129...  Training loss: 2.9756...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 130...  Training loss: 2.9791...  0.3138 sec/batch
Epoch: 1/20...  Training Step: 131...  Training loss: 2.9786...  0.3112 sec/batch
Epoch: 1/20...  Training Step: 132...  Training loss: 2.9457...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 133...  Training loss: 2.9606...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 134...  Training loss: 2.9420...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 135...  Training loss: 2.9120...  0.3092 sec/batch
Epoch: 1/20...  Training Step: 136...  Training loss: 2.9013...  0.3120 sec/batch
Epoch: 1/20...  Training Step: 137...  Training loss: 2.9050...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 138...  Training loss: 2.8917...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 139...  Training loss: 2.9035...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 140...  Training loss: 2.8879...  0.3111 sec/batch
Epoch: 1/20...  Training Step: 141...  Training loss: 2.8996...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 142...  Training loss: 2.8822...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 143...  Training loss: 2.8819...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 144...  Training loss: 2.8542...  0.3100 sec/batch
Epoch: 1/20...  Training Step: 145...  Training loss: 2.8749...  0.3106 sec/batch
Epoch: 1/20...  Training Step: 146...  Training loss: 2.8715...  0.3119 sec/batch
Epoch: 1/20...  Training Step: 147...  Training loss: 2.8666...  0.3091 sec/batch
Epoch: 1/20...  Training Step: 148...  Training loss: 2.8624...  0.3118 sec/batch
Epoch: 1/20...  Training Step: 149...  Training loss: 2.8229...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 150...  Training loss: 2.8465...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 151...  Training loss: 2.8502...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 152...  Training loss: 2.8649...  0.3111 sec/batch
Epoch: 1/20...  Training Step: 153...  Training loss: 2.8269...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 154...  Training loss: 2.8217...  0.3104 sec/batch
Epoch: 1/20...  Training Step: 155...  Training loss: 2.7886...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 156...  Training loss: 2.7786...  0.3097 sec/batch
Epoch: 1/20...  Training Step: 157...  Training loss: 2.7555...  0.3100 sec/batch
Epoch: 1/20...  Training Step: 158...  Training loss: 2.7665...  0.3107 sec/batch
Epoch: 1/20...  Training Step: 159...  Training loss: 2.7414...  0.3100 sec/batch
Epoch: 1/20...  Training Step: 160...  Training loss: 2.7832...  0.3097 sec/batch
Epoch: 1/20...  Training Step: 161...  Training loss: 2.7535...  0.3111 sec/batch
Epoch: 1/20...  Training Step: 162...  Training loss: 2.7116...  0.3106 sec/batch
Epoch: 1/20...  Training Step: 163...  Training loss: 2.7027...  0.3097 sec/batch
Epoch: 1/20...  Training Step: 164...  Training loss: 2.7200...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 165...  Training loss: 2.7203...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 166...  Training loss: 2.6995...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 167...  Training loss: 2.7029...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 168...  Training loss: 2.6896...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 169...  Training loss: 2.6950...  0.3113 sec/batch
Epoch: 1/20...  Training Step: 170...  Training loss: 2.6597...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 171...  Training loss: 2.6734...  0.3104 sec/batch
Epoch: 1/20...  Training Step: 172...  Training loss: 2.7057...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 173...  Training loss: 2.7052...  0.3110 sec/batch
Epoch: 1/20...  Training Step: 174...  Training loss: 2.6917...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 175...  Training loss: 2.6749...  0.3098 sec/batch
Epoch: 1/20...  Training Step: 176...  Training loss: 2.6465...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 177...  Training loss: 2.6340...  0.3106 sec/batch
Epoch: 1/20...  Training Step: 178...  Training loss: 2.6047...  0.3096 sec/batch
Epoch: 1/20...  Training Step: 179...  Training loss: 2.6045...  0.3141 sec/batch
Epoch: 1/20...  Training Step: 180...  Training loss: 2.5936...  0.3105 sec/batch
Epoch: 1/20...  Training Step: 181...  Training loss: 2.6006...  0.3106 sec/batch
Epoch: 1/20...  Training Step: 182...  Training loss: 2.6128...  0.3109 sec/batch
Epoch: 1/20...  Training Step: 183...  Training loss: 2.5863...  0.3120 sec/batch
Epoch: 1/20...  Training Step: 184...  Training loss: 2.6085...  0.3108 sec/batch
Epoch: 1/20...  Training Step: 185...  Training loss: 2.6661...  0.3108 sec/batch
Epoch: 1/20...  Training Step: 186...  Training loss: 2.6376...  0.3101 sec/batch
Epoch: 1/20...  Training Step: 187...  Training loss: 2.5639...  0.3102 sec/batch
Epoch: 1/20...  Training Step: 188...  Training loss: 2.5555...  0.3104 sec/batch
Epoch: 1/20...  Training Step: 189...  Training loss: 2.5538...  0.3124 sec/batch
Epoch: 1/20...  Training Step: 190...  Training loss: 2.5537...  0.3144 sec/batch
Epoch: 1/20...  Training Step: 191...  Training loss: 2.5613...  0.3107 sec/batch
Epoch: 1/20...  Training Step: 192...  Training loss: 2.5290...  0.3099 sec/batch
Epoch: 1/20...  Training Step: 193...  Training loss: 2.5533...  0.3103 sec/batch
Epoch: 1/20...  Training Step: 194...  Training loss: 2.5404...  0.3094 sec/batch
Epoch: 1/20...  Training Step: 195...  Training loss: 2.5315...  0.3108 sec/batch
Epoch: 1/20...  Training Step: 196...  Training loss: 2.5224...  0.3119 sec/batch
Epoch: 1/20...  Training Step: 197...  Training loss: 2.5304...  0.3113 sec/batch
Epoch: 1/20...  Training Step: 198...  Training loss: 2.5133...  0.3105 sec/batch
Epoch: 2/20...  Training Step: 199...  Training loss: 2.5983...  0.3115 sec/batch
Epoch: 2/20...  Training Step: 200...  Training loss: 2.4972...  0.3102 sec/batch
Epoch: 2/20...  Training Step: 201...  Training loss: 2.5034...  0.3112 sec/batch
Epoch: 2/20...  Training Step: 202...  Training loss: 2.5206...  0.3109 sec/batch
Epoch: 2/20...  Training Step: 203...  Training loss: 2.5077...  0.3111 sec/batch
Epoch: 2/20...  Training Step: 204...  Training loss: 2.5090...  0.3146 sec/batch
Epoch: 2/20...  Training Step: 205...  Training loss: 2.5138...  0.3107 sec/batch
Epoch: 2/20...  Training Step: 206...  Training loss: 2.5150...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 207...  Training loss: 2.5230...  0.3103 sec/batch
Epoch: 2/20...  Training Step: 208...  Training loss: 2.4909...  0.3111 sec/batch
Epoch: 2/20...  Training Step: 209...  Training loss: 2.4913...  0.3116 sec/batch
Epoch: 2/20...  Training Step: 210...  Training loss: 2.5047...  0.3103 sec/batch
Epoch: 2/20...  Training Step: 211...  Training loss: 2.4949...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 212...  Training loss: 2.5227...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 213...  Training loss: 2.4872...  0.3146 sec/batch
Epoch: 2/20...  Training Step: 214...  Training loss: 2.4907...  0.3140 sec/batch
Epoch: 2/20...  Training Step: 215...  Training loss: 2.4859...  0.3110 sec/batch
Epoch: 2/20...  Training Step: 216...  Training loss: 2.5204...  0.3095 sec/batch
Epoch: 2/20...  Training Step: 217...  Training loss: 2.4833...  0.3115 sec/batch
Epoch: 2/20...  Training Step: 218...  Training loss: 2.4565...  0.3107 sec/batch
Epoch: 2/20...  Training Step: 219...  Training loss: 2.4550...  0.3114 sec/batch
Epoch: 2/20...  Training Step: 220...  Training loss: 2.5051...  0.3134 sec/batch
Epoch: 2/20...  Training Step: 221...  Training loss: 2.4728...  0.3105 sec/batch
Epoch: 2/20...  Training Step: 222...  Training loss: 2.4606...  0.3139 sec/batch
Epoch: 2/20...  Training Step: 223...  Training loss: 2.4570...  0.3101 sec/batch
Epoch: 2/20...  Training Step: 224...  Training loss: 2.4627...  0.3111 sec/batch
Epoch: 2/20...  Training Step: 225...  Training loss: 2.4409...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 226...  Training loss: 2.4557...  0.3108 sec/batch
Epoch: 2/20...  Training Step: 227...  Training loss: 2.4661...  0.3151 sec/batch
Epoch: 2/20...  Training Step: 228...  Training loss: 2.4605...  0.3121 sec/batch
Epoch: 2/20...  Training Step: 229...  Training loss: 2.4716...  0.3153 sec/batch
Epoch: 2/20...  Training Step: 230...  Training loss: 2.4321...  0.3100 sec/batch
Epoch: 2/20...  Training Step: 231...  Training loss: 2.4270...  0.3107 sec/batch
Epoch: 2/20...  Training Step: 232...  Training loss: 2.4526...  0.3127 sec/batch
Epoch: 2/20...  Training Step: 233...  Training loss: 2.4311...  0.3102 sec/batch
Epoch: 2/20...  Training Step: 234...  Training loss: 2.4499...  0.3102 sec/batch
Epoch: 2/20...  Training Step: 235...  Training loss: 2.4265...  0.3107 sec/batch
Epoch: 2/20...  Training Step: 236...  Training loss: 2.3988...  0.3104 sec/batch
Epoch: 2/20...  Training Step: 237...  Training loss: 2.4139...  0.3117 sec/batch
Epoch: 2/20...  Training Step: 238...  Training loss: 2.4193...  0.3121 sec/batch
Epoch: 2/20...  Training Step: 239...  Training loss: 2.4136...  0.3139 sec/batch
Epoch: 2/20...  Training Step: 240...  Training loss: 2.4085...  0.3104 sec/batch
Epoch: 2/20...  Training Step: 241...  Training loss: 2.4054...  0.3114 sec/batch
Epoch: 2/20...  Training Step: 242...  Training loss: 2.4033...  0.3121 sec/batch
Epoch: 2/20...  Training Step: 243...  Training loss: 2.4070...  0.3096 sec/batch
Epoch: 2/20...  Training Step: 244...  Training loss: 2.3762...  0.3103 sec/batch
Epoch: 2/20...  Training Step: 245...  Training loss: 2.4259...  0.3110 sec/batch
Epoch: 2/20...  Training Step: 246...  Training loss: 2.3999...  0.3130 sec/batch
Epoch: 2/20...  Training Step: 247...  Training loss: 2.4018...  0.3143 sec/batch
Epoch: 2/20...  Training Step: 248...  Training loss: 2.4305...  0.3114 sec/batch
Epoch: 2/20...  Training Step: 249...  Training loss: 2.3874...  0.3116 sec/batch
Epoch: 2/20...  Training Step: 250...  Training loss: 2.4108...  0.3112 sec/batch
Epoch: 2/20...  Training Step: 251...  Training loss: 2.4005...  0.3120 sec/batch
Epoch: 2/20...  Training Step: 252...  Training loss: 2.3850...  0.3107 sec/batch
Epoch: 2/20...  Training Step: 253...  Training loss: 2.3834...  0.3110 sec/batch
Epoch: 2/20...  Training Step: 254...  Training loss: 2.3985...  0.3118 sec/batch
Epoch: 2/20...  Training Step: 255...  Training loss: 2.3855...  0.3166 sec/batch
Epoch: 2/20...  Training Step: 256...  Training loss: 2.3740...  0.3146 sec/batch
Epoch: 2/20...  Training Step: 257...  Training loss: 2.3716...  0.3131 sec/batch
Epoch: 2/20...  Training Step: 258...  Training loss: 2.4069...  0.3115 sec/batch
Epoch: 2/20...  Training Step: 259...  Training loss: 2.3841...  0.3114 sec/batch
Epoch: 2/20...  Training Step: 260...  Training loss: 2.3981...  0.3109 sec/batch
Epoch: 2/20...  Training Step: 261...  Training loss: 2.4058...  0.3131 sec/batch
Epoch: 2/20...  Training Step: 262...  Training loss: 2.3770...  0.3133 sec/batch
Epoch: 2/20...  Training Step: 263...  Training loss: 2.3591...  0.3137 sec/batch
Epoch: 2/20...  Training Step: 264...  Training loss: 2.3894...  0.3117 sec/batch
Epoch: 2/20...  Training Step: 265...  Training loss: 2.3695...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 266...  Training loss: 2.3406...  0.3113 sec/batch
Epoch: 2/20...  Training Step: 267...  Training loss: 2.3452...  0.3111 sec/batch
Epoch: 2/20...  Training Step: 268...  Training loss: 2.3714...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 269...  Training loss: 2.3780...  0.3107 sec/batch
Epoch: 2/20...  Training Step: 270...  Training loss: 2.3706...  0.3112 sec/batch
Epoch: 2/20...  Training Step: 271...  Training loss: 2.3546...  0.3102 sec/batch
Epoch: 2/20...  Training Step: 272...  Training loss: 2.3498...  0.3171 sec/batch
Epoch: 2/20...  Training Step: 273...  Training loss: 2.3493...  0.3110 sec/batch
Epoch: 2/20...  Training Step: 274...  Training loss: 2.3910...  0.3127 sec/batch
Epoch: 2/20...  Training Step: 275...  Training loss: 2.3527...  0.3107 sec/batch
Epoch: 2/20...  Training Step: 276...  Training loss: 2.3612...  0.3128 sec/batch
Epoch: 2/20...  Training Step: 277...  Training loss: 2.3265...  0.3110 sec/batch
Epoch: 2/20...  Training Step: 278...  Training loss: 2.3301...  0.3112 sec/batch
Epoch: 2/20...  Training Step: 279...  Training loss: 2.3140...  0.3153 sec/batch
Epoch: 2/20...  Training Step: 280...  Training loss: 2.3639...  0.3145 sec/batch
Epoch: 2/20...  Training Step: 281...  Training loss: 2.3317...  0.3143 sec/batch
Epoch: 2/20...  Training Step: 282...  Training loss: 2.3079...  0.3129 sec/batch
Epoch: 2/20...  Training Step: 283...  Training loss: 2.2851...  0.3125 sec/batch
Epoch: 2/20...  Training Step: 284...  Training loss: 2.3252...  0.3133 sec/batch
Epoch: 2/20...  Training Step: 285...  Training loss: 2.3308...  0.3125 sec/batch
Epoch: 2/20...  Training Step: 286...  Training loss: 2.3179...  0.3139 sec/batch
Epoch: 2/20...  Training Step: 287...  Training loss: 2.3056...  0.3109 sec/batch
Epoch: 2/20...  Training Step: 288...  Training loss: 2.3318...  0.3143 sec/batch
Epoch: 2/20...  Training Step: 289...  Training loss: 2.3084...  0.3149 sec/batch
Epoch: 2/20...  Training Step: 290...  Training loss: 2.3210...  0.3141 sec/batch
Epoch: 2/20...  Training Step: 291...  Training loss: 2.2994...  0.3114 sec/batch
Epoch: 2/20...  Training Step: 292...  Training loss: 2.3027...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 293...  Training loss: 2.2949...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 294...  Training loss: 2.2885...  0.3147 sec/batch
Epoch: 2/20...  Training Step: 295...  Training loss: 2.3061...  0.3151 sec/batch
Epoch: 2/20...  Training Step: 296...  Training loss: 2.2937...  0.3112 sec/batch
Epoch: 2/20...  Training Step: 297...  Training loss: 2.2885...  0.3127 sec/batch
Epoch: 2/20...  Training Step: 298...  Training loss: 2.2774...  0.3130 sec/batch
Epoch: 2/20...  Training Step: 299...  Training loss: 2.3153...  0.3119 sec/batch
Epoch: 2/20...  Training Step: 300...  Training loss: 2.3001...  0.3134 sec/batch
Epoch: 2/20...  Training Step: 301...  Training loss: 2.2698...  0.3116 sec/batch
Epoch: 2/20...  Training Step: 302...  Training loss: 2.2875...  0.3159 sec/batch
Epoch: 2/20...  Training Step: 303...  Training loss: 2.2717...  0.3147 sec/batch
Epoch: 2/20...  Training Step: 304...  Training loss: 2.2821...  0.3150 sec/batch
Epoch: 2/20...  Training Step: 305...  Training loss: 2.2869...  0.3125 sec/batch
Epoch: 2/20...  Training Step: 306...  Training loss: 2.3045...  0.3108 sec/batch
Epoch: 2/20...  Training Step: 307...  Training loss: 2.2847...  0.3116 sec/batch
Epoch: 2/20...  Training Step: 308...  Training loss: 2.2730...  0.3147 sec/batch
Epoch: 2/20...  Training Step: 309...  Training loss: 2.2765...  0.3143 sec/batch
Epoch: 2/20...  Training Step: 310...  Training loss: 2.3003...  0.3139 sec/batch
Epoch: 2/20...  Training Step: 311...  Training loss: 2.2770...  0.3111 sec/batch
Epoch: 2/20...  Training Step: 312...  Training loss: 2.2594...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 313...  Training loss: 2.2637...  0.3161 sec/batch
Epoch: 2/20...  Training Step: 314...  Training loss: 2.2286...  0.3138 sec/batch
Epoch: 2/20...  Training Step: 315...  Training loss: 2.2785...  0.3126 sec/batch
Epoch: 2/20...  Training Step: 316...  Training loss: 2.2570...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 317...  Training loss: 2.2906...  0.3150 sec/batch
Epoch: 2/20...  Training Step: 318...  Training loss: 2.2711...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 319...  Training loss: 2.2842...  0.3147 sec/batch
Epoch: 2/20...  Training Step: 320...  Training loss: 2.2472...  0.3126 sec/batch
Epoch: 2/20...  Training Step: 321...  Training loss: 2.2497...  0.3115 sec/batch
Epoch: 2/20...  Training Step: 322...  Training loss: 2.2792...  0.3136 sec/batch
Epoch: 2/20...  Training Step: 323...  Training loss: 2.2588...  0.3129 sec/batch
Epoch: 2/20...  Training Step: 324...  Training loss: 2.2289...  0.3149 sec/batch
Epoch: 2/20...  Training Step: 325...  Training loss: 2.2658...  0.3137 sec/batch
Epoch: 2/20...  Training Step: 326...  Training loss: 2.2589...  0.3106 sec/batch
Epoch: 2/20...  Training Step: 327...  Training loss: 2.2649...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 328...  Training loss: 2.2589...  0.3128 sec/batch
Epoch: 2/20...  Training Step: 329...  Training loss: 2.2478...  0.3124 sec/batch
Epoch: 2/20...  Training Step: 330...  Training loss: 2.2272...  0.3162 sec/batch
Epoch: 2/20...  Training Step: 331...  Training loss: 2.2560...  0.3154 sec/batch
Epoch: 2/20...  Training Step: 332...  Training loss: 2.2612...  0.3149 sec/batch
Epoch: 2/20...  Training Step: 333...  Training loss: 2.2394...  0.3147 sec/batch
Epoch: 2/20...  Training Step: 334...  Training loss: 2.2484...  0.3114 sec/batch
Epoch: 2/20...  Training Step: 335...  Training loss: 2.2416...  0.3147 sec/batch
Epoch: 2/20...  Training Step: 336...  Training loss: 2.2367...  0.3125 sec/batch
Epoch: 2/20...  Training Step: 337...  Training loss: 2.2685...  0.3118 sec/batch
Epoch: 2/20...  Training Step: 338...  Training loss: 2.2317...  0.3151 sec/batch
Epoch: 2/20...  Training Step: 339...  Training loss: 2.2506...  0.3144 sec/batch
Epoch: 2/20...  Training Step: 340...  Training loss: 2.2294...  0.3146 sec/batch
Epoch: 2/20...  Training Step: 341...  Training loss: 2.2248...  0.3137 sec/batch
Epoch: 2/20...  Training Step: 342...  Training loss: 2.2250...  0.3120 sec/batch
Epoch: 2/20...  Training Step: 343...  Training loss: 2.2234...  0.3159 sec/batch
Epoch: 2/20...  Training Step: 344...  Training loss: 2.2463...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 345...  Training loss: 2.2302...  0.3170 sec/batch
Epoch: 2/20...  Training Step: 346...  Training loss: 2.2377...  0.3166 sec/batch
Epoch: 2/20...  Training Step: 347...  Training loss: 2.2201...  0.3153 sec/batch
Epoch: 2/20...  Training Step: 348...  Training loss: 2.2081...  0.3150 sec/batch
Epoch: 2/20...  Training Step: 349...  Training loss: 2.2316...  0.3130 sec/batch
Epoch: 2/20...  Training Step: 350...  Training loss: 2.2488...  0.3139 sec/batch
Epoch: 2/20...  Training Step: 351...  Training loss: 2.2294...  0.3121 sec/batch
Epoch: 2/20...  Training Step: 352...  Training loss: 2.2256...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 353...  Training loss: 2.1980...  0.3147 sec/batch
Epoch: 2/20...  Training Step: 354...  Training loss: 2.2027...  0.3156 sec/batch
Epoch: 2/20...  Training Step: 355...  Training loss: 2.1969...  0.3128 sec/batch
Epoch: 2/20...  Training Step: 356...  Training loss: 2.2018...  0.3132 sec/batch
Epoch: 2/20...  Training Step: 357...  Training loss: 2.1780...  0.3146 sec/batch
Epoch: 2/20...  Training Step: 358...  Training loss: 2.2418...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 359...  Training loss: 2.2113...  0.3161 sec/batch
Epoch: 2/20...  Training Step: 360...  Training loss: 2.1855...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 361...  Training loss: 2.1955...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 362...  Training loss: 2.1965...  0.3159 sec/batch
Epoch: 2/20...  Training Step: 363...  Training loss: 2.1993...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 364...  Training loss: 2.1929...  0.3144 sec/batch
Epoch: 2/20...  Training Step: 365...  Training loss: 2.2012...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 366...  Training loss: 2.2197...  0.3157 sec/batch
Epoch: 2/20...  Training Step: 367...  Training loss: 2.1947...  0.3150 sec/batch
Epoch: 2/20...  Training Step: 368...  Training loss: 2.1701...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 369...  Training loss: 2.1902...  0.3138 sec/batch
Epoch: 2/20...  Training Step: 370...  Training loss: 2.2118...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 371...  Training loss: 2.2152...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 372...  Training loss: 2.2023...  0.3132 sec/batch
Epoch: 2/20...  Training Step: 373...  Training loss: 2.2068...  0.3172 sec/batch
Epoch: 2/20...  Training Step: 374...  Training loss: 2.1883...  0.3146 sec/batch
Epoch: 2/20...  Training Step: 375...  Training loss: 2.1711...  0.3150 sec/batch
Epoch: 2/20...  Training Step: 376...  Training loss: 2.1722...  0.3123 sec/batch
Epoch: 2/20...  Training Step: 377...  Training loss: 2.1586...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 378...  Training loss: 2.1432...  0.3136 sec/batch
Epoch: 2/20...  Training Step: 379...  Training loss: 2.1592...  0.3158 sec/batch
Epoch: 2/20...  Training Step: 380...  Training loss: 2.1796...  0.3139 sec/batch
Epoch: 2/20...  Training Step: 381...  Training loss: 2.1813...  0.3169 sec/batch
Epoch: 2/20...  Training Step: 382...  Training loss: 2.1890...  0.3158 sec/batch
Epoch: 2/20...  Training Step: 383...  Training loss: 2.1833...  0.3137 sec/batch
Epoch: 2/20...  Training Step: 384...  Training loss: 2.1632...  0.3129 sec/batch
Epoch: 2/20...  Training Step: 385...  Training loss: 2.1690...  0.3154 sec/batch
Epoch: 2/20...  Training Step: 386...  Training loss: 2.1429...  0.3152 sec/batch
Epoch: 2/20...  Training Step: 387...  Training loss: 2.1514...  0.3162 sec/batch
Epoch: 2/20...  Training Step: 388...  Training loss: 2.1617...  0.3149 sec/batch
Epoch: 2/20...  Training Step: 389...  Training loss: 2.1779...  0.3145 sec/batch
Epoch: 2/20...  Training Step: 390...  Training loss: 2.1296...  0.3145 sec/batch
Epoch: 2/20...  Training Step: 391...  Training loss: 2.1583...  0.3157 sec/batch
Epoch: 2/20...  Training Step: 392...  Training loss: 2.1545...  0.3164 sec/batch
Epoch: 2/20...  Training Step: 393...  Training loss: 2.1315...  0.3167 sec/batch
Epoch: 2/20...  Training Step: 394...  Training loss: 2.1467...  0.3148 sec/batch
Epoch: 2/20...  Training Step: 395...  Training loss: 2.1460...  0.3146 sec/batch
Epoch: 2/20...  Training Step: 396...  Training loss: 2.1304...  0.3129 sec/batch
Epoch: 3/20...  Training Step: 397...  Training loss: 2.2240...  0.3136 sec/batch
Epoch: 3/20...  Training Step: 398...  Training loss: 2.1135...  0.3160 sec/batch
Epoch: 3/20...  Training Step: 399...  Training loss: 2.1234...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 400...  Training loss: 2.1306...  0.3146 sec/batch
Epoch: 3/20...  Training Step: 401...  Training loss: 2.1311...  0.3146 sec/batch
Epoch: 3/20...  Training Step: 402...  Training loss: 2.1072...  0.3121 sec/batch
Epoch: 3/20...  Training Step: 403...  Training loss: 2.1345...  0.3148 sec/batch
Epoch: 3/20...  Training Step: 404...  Training loss: 2.1379...  0.3135 sec/batch
Epoch: 3/20...  Training Step: 405...  Training loss: 2.1629...  0.3143 sec/batch
Epoch: 3/20...  Training Step: 406...  Training loss: 2.1251...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 407...  Training loss: 2.1125...  0.3159 sec/batch
Epoch: 3/20...  Training Step: 408...  Training loss: 2.1111...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 409...  Training loss: 2.1308...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 410...  Training loss: 2.1600...  0.3123 sec/batch
Epoch: 3/20...  Training Step: 411...  Training loss: 2.1203...  0.3145 sec/batch
Epoch: 3/20...  Training Step: 412...  Training loss: 2.1173...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 413...  Training loss: 2.1099...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 414...  Training loss: 2.1637...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 415...  Training loss: 2.1262...  0.3162 sec/batch
Epoch: 3/20...  Training Step: 416...  Training loss: 2.1194...  0.3149 sec/batch
Epoch: 3/20...  Training Step: 417...  Training loss: 2.1049...  0.3145 sec/batch
Epoch: 3/20...  Training Step: 418...  Training loss: 2.1547...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 419...  Training loss: 2.1168...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 420...  Training loss: 2.1018...  0.3143 sec/batch
Epoch: 3/20...  Training Step: 421...  Training loss: 2.1099...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 422...  Training loss: 2.0908...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 423...  Training loss: 2.0879...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 424...  Training loss: 2.1205...  0.3146 sec/batch
Epoch: 3/20...  Training Step: 425...  Training loss: 2.1399...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 426...  Training loss: 2.1277...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 427...  Training loss: 2.1074...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 428...  Training loss: 2.0835...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 429...  Training loss: 2.1033...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 430...  Training loss: 2.1403...  0.3148 sec/batch
Epoch: 3/20...  Training Step: 431...  Training loss: 2.0862...  0.3170 sec/batch
Epoch: 3/20...  Training Step: 432...  Training loss: 2.0995...  0.3165 sec/batch
Epoch: 3/20...  Training Step: 433...  Training loss: 2.1008...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 434...  Training loss: 2.0603...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 435...  Training loss: 2.0617...  0.3161 sec/batch
Epoch: 3/20...  Training Step: 436...  Training loss: 2.0692...  0.3140 sec/batch
Epoch: 3/20...  Training Step: 437...  Training loss: 2.0885...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 438...  Training loss: 2.1095...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 439...  Training loss: 2.0632...  0.3162 sec/batch
Epoch: 3/20...  Training Step: 440...  Training loss: 2.0671...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 441...  Training loss: 2.0893...  0.3140 sec/batch
Epoch: 3/20...  Training Step: 442...  Training loss: 2.0200...  0.3149 sec/batch
Epoch: 3/20...  Training Step: 443...  Training loss: 2.0833...  0.3130 sec/batch
Epoch: 3/20...  Training Step: 444...  Training loss: 2.0724...  0.3203 sec/batch
Epoch: 3/20...  Training Step: 445...  Training loss: 2.0687...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 446...  Training loss: 2.1103...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 447...  Training loss: 2.0414...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 448...  Training loss: 2.1268...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 449...  Training loss: 2.0696...  0.3160 sec/batch
Epoch: 3/20...  Training Step: 450...  Training loss: 2.0658...  0.3167 sec/batch
Epoch: 3/20...  Training Step: 451...  Training loss: 2.0632...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 452...  Training loss: 2.0825...  0.3147 sec/batch
Epoch: 3/20...  Training Step: 453...  Training loss: 2.0755...  0.3145 sec/batch
Epoch: 3/20...  Training Step: 454...  Training loss: 2.0597...  0.3160 sec/batch
Epoch: 3/20...  Training Step: 455...  Training loss: 2.0544...  0.3147 sec/batch
Epoch: 3/20...  Training Step: 456...  Training loss: 2.0900...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 457...  Training loss: 2.0660...  0.3164 sec/batch
Epoch: 3/20...  Training Step: 458...  Training loss: 2.1003...  0.3149 sec/batch
Epoch: 3/20...  Training Step: 459...  Training loss: 2.1017...  0.3149 sec/batch
Epoch: 3/20...  Training Step: 460...  Training loss: 2.0718...  0.3169 sec/batch
Epoch: 3/20...  Training Step: 461...  Training loss: 2.0518...  0.3134 sec/batch
Epoch: 3/20...  Training Step: 462...  Training loss: 2.0928...  0.3128 sec/batch
Epoch: 3/20...  Training Step: 463...  Training loss: 2.0645...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 464...  Training loss: 2.0314...  0.3165 sec/batch
Epoch: 3/20...  Training Step: 465...  Training loss: 2.0410...  0.3146 sec/batch
Epoch: 3/20...  Training Step: 466...  Training loss: 2.0564...  0.3143 sec/batch
Epoch: 3/20...  Training Step: 467...  Training loss: 2.0827...  0.3149 sec/batch
Epoch: 3/20...  Training Step: 468...  Training loss: 2.0533...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 469...  Training loss: 2.0658...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 470...  Training loss: 2.0401...  0.3162 sec/batch
Epoch: 3/20...  Training Step: 471...  Training loss: 2.0460...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 472...  Training loss: 2.0879...  0.3209 sec/batch
Epoch: 3/20...  Training Step: 473...  Training loss: 2.0425...  0.3142 sec/batch
Epoch: 3/20...  Training Step: 474...  Training loss: 2.0516...  0.3139 sec/batch
Epoch: 3/20...  Training Step: 475...  Training loss: 2.0109...  0.3148 sec/batch
Epoch: 3/20...  Training Step: 476...  Training loss: 2.0274...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 477...  Training loss: 2.0173...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 478...  Training loss: 2.0632...  0.3164 sec/batch
Epoch: 3/20...  Training Step: 479...  Training loss: 2.0046...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 480...  Training loss: 2.0361...  0.3160 sec/batch
Epoch: 3/20...  Training Step: 481...  Training loss: 1.9961...  0.3137 sec/batch
Epoch: 3/20...  Training Step: 482...  Training loss: 2.0186...  0.3165 sec/batch
Epoch: 3/20...  Training Step: 483...  Training loss: 2.0207...  0.3165 sec/batch
Epoch: 3/20...  Training Step: 484...  Training loss: 2.0208...  0.3163 sec/batch
Epoch: 3/20...  Training Step: 485...  Training loss: 1.9984...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 486...  Training loss: 2.0442...  0.3146 sec/batch
Epoch: 3/20...  Training Step: 487...  Training loss: 2.0093...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 488...  Training loss: 2.0189...  0.3145 sec/batch
Epoch: 3/20...  Training Step: 489...  Training loss: 1.9922...  0.3185 sec/batch
Epoch: 3/20...  Training Step: 490...  Training loss: 1.9920...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 491...  Training loss: 2.0020...  0.3159 sec/batch
Epoch: 3/20...  Training Step: 492...  Training loss: 2.0171...  0.3162 sec/batch
Epoch: 3/20...  Training Step: 493...  Training loss: 2.0157...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 494...  Training loss: 1.9968...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 495...  Training loss: 1.9944...  0.3168 sec/batch
Epoch: 3/20...  Training Step: 496...  Training loss: 1.9767...  0.3172 sec/batch
Epoch: 3/20...  Training Step: 497...  Training loss: 2.0337...  0.3171 sec/batch
Epoch: 3/20...  Training Step: 498...  Training loss: 2.0213...  0.3173 sec/batch
Epoch: 3/20...  Training Step: 499...  Training loss: 1.9925...  0.3142 sec/batch
Epoch: 3/20...  Training Step: 500...  Training loss: 2.0013...  0.3118 sec/batch
Epoch: 3/20...  Training Step: 501...  Training loss: 1.9972...  0.3159 sec/batch
Epoch: 3/20...  Training Step: 502...  Training loss: 2.0098...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 503...  Training loss: 2.0124...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 504...  Training loss: 2.0220...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 505...  Training loss: 2.0295...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 506...  Training loss: 2.0131...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 507...  Training loss: 2.0046...  0.3149 sec/batch
Epoch: 3/20...  Training Step: 508...  Training loss: 2.0042...  0.3164 sec/batch
Epoch: 3/20...  Training Step: 509...  Training loss: 2.0011...  0.3183 sec/batch
Epoch: 3/20...  Training Step: 510...  Training loss: 1.9811...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 511...  Training loss: 1.9813...  0.3146 sec/batch
Epoch: 3/20...  Training Step: 512...  Training loss: 1.9684...  0.3160 sec/batch
Epoch: 3/20...  Training Step: 513...  Training loss: 2.0037...  0.3188 sec/batch
Epoch: 3/20...  Training Step: 514...  Training loss: 1.9819...  0.3192 sec/batch
Epoch: 3/20...  Training Step: 515...  Training loss: 2.0029...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 516...  Training loss: 1.9927...  0.3167 sec/batch
Epoch: 3/20...  Training Step: 517...  Training loss: 2.0036...  0.3170 sec/batch
Epoch: 3/20...  Training Step: 518...  Training loss: 1.9670...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 519...  Training loss: 1.9811...  0.3162 sec/batch
Epoch: 3/20...  Training Step: 520...  Training loss: 2.0182...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 521...  Training loss: 1.9939...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 522...  Training loss: 1.9465...  0.3163 sec/batch
Epoch: 3/20...  Training Step: 523...  Training loss: 1.9993...  0.3167 sec/batch
Epoch: 3/20...  Training Step: 524...  Training loss: 2.0004...  0.3176 sec/batch
Epoch: 3/20...  Training Step: 525...  Training loss: 1.9991...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 526...  Training loss: 1.9919...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 527...  Training loss: 1.9686...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 528...  Training loss: 1.9639...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 529...  Training loss: 1.9987...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 530...  Training loss: 2.0015...  0.3165 sec/batch
Epoch: 3/20...  Training Step: 531...  Training loss: 1.9859...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 532...  Training loss: 2.0009...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 533...  Training loss: 1.9875...  0.3164 sec/batch
Epoch: 3/20...  Training Step: 534...  Training loss: 1.9843...  0.3170 sec/batch
Epoch: 3/20...  Training Step: 535...  Training loss: 2.0148...  0.3160 sec/batch
Epoch: 3/20...  Training Step: 536...  Training loss: 1.9774...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 537...  Training loss: 2.0052...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 538...  Training loss: 1.9722...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 539...  Training loss: 1.9705...  0.3159 sec/batch
Epoch: 3/20...  Training Step: 540...  Training loss: 1.9760...  0.3161 sec/batch
Epoch: 3/20...  Training Step: 541...  Training loss: 1.9653...  0.3160 sec/batch
Epoch: 3/20...  Training Step: 542...  Training loss: 1.9935...  0.3169 sec/batch
Epoch: 3/20...  Training Step: 543...  Training loss: 1.9794...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 544...  Training loss: 2.0006...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 545...  Training loss: 1.9814...  0.3147 sec/batch
Epoch: 3/20...  Training Step: 546...  Training loss: 1.9630...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 547...  Training loss: 1.9681...  0.3152 sec/batch
Epoch: 3/20...  Training Step: 548...  Training loss: 2.0091...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 549...  Training loss: 1.9724...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 550...  Training loss: 1.9800...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 551...  Training loss: 1.9615...  0.3148 sec/batch
Epoch: 3/20...  Training Step: 552...  Training loss: 1.9722...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 553...  Training loss: 1.9643...  0.3168 sec/batch
Epoch: 3/20...  Training Step: 554...  Training loss: 1.9545...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 555...  Training loss: 1.9391...  0.3164 sec/batch
Epoch: 3/20...  Training Step: 556...  Training loss: 2.0007...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 557...  Training loss: 1.9831...  0.3167 sec/batch
Epoch: 3/20...  Training Step: 558...  Training loss: 1.9503...  0.3172 sec/batch
Epoch: 3/20...  Training Step: 559...  Training loss: 1.9767...  0.3196 sec/batch
Epoch: 3/20...  Training Step: 560...  Training loss: 1.9631...  0.3172 sec/batch
Epoch: 3/20...  Training Step: 561...  Training loss: 1.9545...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 562...  Training loss: 1.9487...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 563...  Training loss: 1.9734...  0.3149 sec/batch
Epoch: 3/20...  Training Step: 564...  Training loss: 2.0034...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 565...  Training loss: 1.9512...  0.3183 sec/batch
Epoch: 3/20...  Training Step: 566...  Training loss: 1.9536...  0.3162 sec/batch
Epoch: 3/20...  Training Step: 567...  Training loss: 1.9344...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 568...  Training loss: 1.9429...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 569...  Training loss: 1.9713...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 570...  Training loss: 1.9587...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 571...  Training loss: 1.9510...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 572...  Training loss: 1.9452...  0.3159 sec/batch
Epoch: 3/20...  Training Step: 573...  Training loss: 1.9357...  0.3147 sec/batch
Epoch: 3/20...  Training Step: 574...  Training loss: 1.9516...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 575...  Training loss: 1.9134...  0.3159 sec/batch
Epoch: 3/20...  Training Step: 576...  Training loss: 1.9127...  0.3151 sec/batch
Epoch: 3/20...  Training Step: 577...  Training loss: 1.9163...  0.3156 sec/batch
Epoch: 3/20...  Training Step: 578...  Training loss: 1.9396...  0.3157 sec/batch
Epoch: 3/20...  Training Step: 579...  Training loss: 1.9509...  0.3164 sec/batch
Epoch: 3/20...  Training Step: 580...  Training loss: 1.9619...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 581...  Training loss: 1.9388...  0.3145 sec/batch
Epoch: 3/20...  Training Step: 582...  Training loss: 1.9271...  0.3150 sec/batch
Epoch: 3/20...  Training Step: 583...  Training loss: 1.9379...  0.3155 sec/batch
Epoch: 3/20...  Training Step: 584...  Training loss: 1.9088...  0.3180 sec/batch
Epoch: 3/20...  Training Step: 585...  Training loss: 1.9297...  0.3191 sec/batch
Epoch: 3/20...  Training Step: 586...  Training loss: 1.9357...  0.3153 sec/batch
Epoch: 3/20...  Training Step: 587...  Training loss: 1.9531...  0.3166 sec/batch
Epoch: 3/20...  Training Step: 588...  Training loss: 1.9128...  0.3159 sec/batch
Epoch: 3/20...  Training Step: 589...  Training loss: 1.9298...  0.3158 sec/batch
Epoch: 3/20...  Training Step: 590...  Training loss: 1.9143...  0.3191 sec/batch
Epoch: 3/20...  Training Step: 591...  Training loss: 1.9072...  0.3183 sec/batch
Epoch: 3/20...  Training Step: 592...  Training loss: 1.9267...  0.3141 sec/batch
Epoch: 3/20...  Training Step: 593...  Training loss: 1.9231...  0.3154 sec/batch
Epoch: 3/20...  Training Step: 594...  Training loss: 1.9079...  0.3156 sec/batch
Epoch: 4/20...  Training Step: 595...  Training loss: 2.0122...  0.3213 sec/batch
Epoch: 4/20...  Training Step: 596...  Training loss: 1.9058...  0.3183 sec/batch
Epoch: 4/20...  Training Step: 597...  Training loss: 1.8981...  0.3167 sec/batch
Epoch: 4/20...  Training Step: 598...  Training loss: 1.9235...  0.3152 sec/batch
Epoch: 4/20...  Training Step: 599...  Training loss: 1.9137...  0.3157 sec/batch
Epoch: 4/20...  Training Step: 600...  Training loss: 1.8775...  0.3170 sec/batch
Epoch: 4/20...  Training Step: 601...  Training loss: 1.9134...  0.3192 sec/batch
Epoch: 4/20...  Training Step: 602...  Training loss: 1.9097...  0.3141 sec/batch
Epoch: 4/20...  Training Step: 603...  Training loss: 1.9479...  0.3192 sec/batch
Epoch: 4/20...  Training Step: 604...  Training loss: 1.9132...  0.3153 sec/batch
Epoch: 4/20...  Training Step: 605...  Training loss: 1.8913...  0.3157 sec/batch
Epoch: 4/20...  Training Step: 606...  Training loss: 1.8876...  0.3145 sec/batch
Epoch: 4/20...  Training Step: 607...  Training loss: 1.9116...  0.3177 sec/batch
Epoch: 4/20...  Training Step: 608...  Training loss: 1.9501...  0.3198 sec/batch
Epoch: 4/20...  Training Step: 609...  Training loss: 1.8996...  0.3171 sec/batch
Epoch: 4/20...  Training Step: 610...  Training loss: 1.8838...  0.3150 sec/batch
Epoch: 4/20...  Training Step: 611...  Training loss: 1.9080...  0.3159 sec/batch
Epoch: 4/20...  Training Step: 612...  Training loss: 1.9422...  0.3155 sec/batch
Epoch: 4/20...  Training Step: 613...  Training loss: 1.9136...  0.3152 sec/batch
Epoch: 4/20...  Training Step: 614...  Training loss: 1.9068...  0.3189 sec/batch
Epoch: 4/20...  Training Step: 615...  Training loss: 1.8921...  0.3179 sec/batch
Epoch: 4/20...  Training Step: 616...  Training loss: 1.9515...  0.3162 sec/batch
Epoch: 4/20...  Training Step: 617...  Training loss: 1.8936...  0.3168 sec/batch
Epoch: 4/20...  Training Step: 618...  Training loss: 1.9087...  0.3153 sec/batch
Epoch: 4/20...  Training Step: 619...  Training loss: 1.8914...  0.3230 sec/batch
Epoch: 4/20...  Training Step: 620...  Training loss: 1.8725...  0.3156 sec/batch
Epoch: 4/20...  Training Step: 621...  Training loss: 1.8743...  0.3192 sec/batch
Epoch: 4/20...  Training Step: 622...  Training loss: 1.9123...  0.3156 sec/batch
Epoch: 4/20...  Training Step: 623...  Training loss: 1.9341...  0.3161 sec/batch
Epoch: 4/20...  Training Step: 624...  Training loss: 1.9064...  0.3150 sec/batch
Epoch: 4/20...  Training Step: 625...  Training loss: 1.8987...  0.3213 sec/batch
Epoch: 4/20...  Training Step: 626...  Training loss: 1.8785...  0.3188 sec/batch
Epoch: 4/20...  Training Step: 627...  Training loss: 1.9005...  0.3171 sec/batch
Epoch: 4/20...  Training Step: 628...  Training loss: 1.9339...  0.3163 sec/batch
Epoch: 4/20...  Training Step: 629...  Training loss: 1.8787...  0.3165 sec/batch
Epoch: 4/20...  Training Step: 630...  Training loss: 1.8955...  0.3174 sec/batch
Epoch: 4/20...  Training Step: 631...  Training loss: 1.8848...  0.3162 sec/batch
Epoch: 4/20...  Training Step: 632...  Training loss: 1.8569...  0.3178 sec/batch
Epoch: 4/20...  Training Step: 633...  Training loss: 1.8487...  0.3326 sec/batch
Epoch: 4/20...  Training Step: 634...  Training loss: 1.8546...  0.3147 sec/batch
Epoch: 4/20...  Training Step: 635...  Training loss: 1.8637...  0.3215 sec/batch
Epoch: 4/20...  Training Step: 636...  Training loss: 1.8901...  0.3197 sec/batch
Epoch: 4/20...  Training Step: 637...  Training loss: 1.8696...  0.3164 sec/batch
Epoch: 4/20...  Training Step: 638...  Training loss: 1.8582...  0.3196 sec/batch
Epoch: 4/20...  Training Step: 639...  Training loss: 1.8937...  0.3159 sec/batch
Epoch: 4/20...  Training Step: 640...  Training loss: 1.8310...  0.3166 sec/batch
Epoch: 4/20...  Training Step: 641...  Training loss: 1.8745...  0.3163 sec/batch
Epoch: 4/20...  Training Step: 642...  Training loss: 1.8633...  0.3199 sec/batch
Epoch: 4/20...  Training Step: 643...  Training loss: 1.8629...  0.3160 sec/batch
Epoch: 4/20...  Training Step: 644...  Training loss: 1.9146...  0.3170 sec/batch
Epoch: 4/20...  Training Step: 645...  Training loss: 1.8382...  0.3178 sec/batch
Epoch: 4/20...  Training Step: 646...  Training loss: 1.9287...  0.3169 sec/batch
Epoch: 4/20...  Training Step: 647...  Training loss: 1.8750...  0.3199 sec/batch
Epoch: 4/20...  Training Step: 648...  Training loss: 1.8810...  0.3173 sec/batch
Epoch: 4/20...  Training Step: 649...  Training loss: 1.8719...  0.3183 sec/batch
Epoch: 4/20...  Training Step: 650...  Training loss: 1.8842...  0.3143 sec/batch
Epoch: 4/20...  Training Step: 651...  Training loss: 1.8879...  0.3158 sec/batch
Epoch: 4/20...  Training Step: 652...  Training loss: 1.8626...  0.3159 sec/batch
Epoch: 4/20...  Training Step: 653...  Training loss: 1.8517...  0.3202 sec/batch
Epoch: 4/20...  Training Step: 654...  Training loss: 1.9133...  0.3197 sec/batch
Epoch: 4/20...  Training Step: 655...  Training loss: 1.8659...  0.3177 sec/batch
Epoch: 4/20...  Training Step: 656...  Training loss: 1.9204...  0.3167 sec/batch
Epoch: 4/20...  Training Step: 657...  Training loss: 1.8979...  0.3165 sec/batch
Epoch: 4/20...  Training Step: 658...  Training loss: 1.8836...  0.3150 sec/batch
Epoch: 4/20...  Training Step: 659...  Training loss: 1.8606...  0.3183 sec/batch
Epoch: 4/20...  Training Step: 660...  Training loss: 1.8958...  0.3160 sec/batch
Epoch: 4/20...  Training Step: 661...  Training loss: 1.8881...  0.3189 sec/batch
Epoch: 4/20...  Training Step: 662...  Training loss: 1.8484...  0.3178 sec/batch
Epoch: 4/20...  Training Step: 663...  Training loss: 1.8535...  0.3154 sec/batch
Epoch: 4/20...  Training Step: 664...  Training loss: 1.8688...  0.3160 sec/batch
Epoch: 4/20...  Training Step: 665...  Training loss: 1.9097...  0.3206 sec/batch
Epoch: 4/20...  Training Step: 666...  Training loss: 1.8713...  0.3188 sec/batch
Epoch: 4/20...  Training Step: 667...  Training loss: 1.8850...  0.3170 sec/batch
Epoch: 4/20...  Training Step: 668...  Training loss: 1.8527...  0.3149 sec/batch
Epoch: 4/20...  Training Step: 669...  Training loss: 1.8567...  0.3160 sec/batch
Epoch: 4/20...  Training Step: 670...  Training loss: 1.8910...  0.3158 sec/batch
Epoch: 4/20...  Training Step: 671...  Training loss: 1.8643...  0.3200 sec/batch
Epoch: 4/20...  Training Step: 672...  Training loss: 1.8630...  0.3166 sec/batch
Epoch: 4/20...  Training Step: 673...  Training loss: 1.8188...  0.3186 sec/batch
Epoch: 4/20...  Training Step: 674...  Training loss: 1.8497...  0.3174 sec/batch
Epoch: 4/20...  Training Step: 675...  Training loss: 1.8222...  0.3163 sec/batch
Epoch: 4/20...  Training Step: 676...  Training loss: 1.8717...  0.3196 sec/batch
Epoch: 4/20...  Training Step: 677...  Training loss: 1.8272...  0.3164 sec/batch
Epoch: 4/20...  Training Step: 678...  Training loss: 1.8480...  0.3202 sec/batch
Epoch: 4/20...  Training Step: 679...  Training loss: 1.8154...  0.3163 sec/batch
Epoch: 4/20...  Training Step: 680...  Training loss: 1.8212...  0.3168 sec/batch
Epoch: 4/20...  Training Step: 681...  Training loss: 1.8289...  0.3167 sec/batch
Epoch: 4/20...  Training Step: 682...  Training loss: 1.8237...  0.3206 sec/batch
Epoch: 4/20...  Training Step: 683...  Training loss: 1.8059...  0.3197 sec/batch
Epoch: 4/20...  Training Step: 684...  Training loss: 1.8690...  0.3172 sec/batch
Epoch: 4/20...  Training Step: 685...  Training loss: 1.8178...  0.3156 sec/batch
Epoch: 4/20...  Training Step: 686...  Training loss: 1.8436...  0.3186 sec/batch
Epoch: 4/20...  Training Step: 687...  Training loss: 1.8020...  0.3194 sec/batch
Epoch: 4/20...  Training Step: 688...  Training loss: 1.8193...  0.3244 sec/batch
Epoch: 4/20...  Training Step: 689...  Training loss: 1.8179...  0.3167 sec/batch
Epoch: 4/20...  Training Step: 690...  Training loss: 1.8472...  0.3187 sec/batch
Epoch: 4/20...  Training Step: 691...  Training loss: 1.8327...  0.3160 sec/batch
Epoch: 4/20...  Training Step: 692...  Training loss: 1.8080...  0.3194 sec/batch
Epoch: 4/20...  Training Step: 693...  Training loss: 1.8188...  0.3170 sec/batch
Epoch: 4/20...  Training Step: 694...  Training loss: 1.7942...  0.3161 sec/batch
Epoch: 4/20...  Training Step: 695...  Training loss: 1.8441...  0.3187 sec/batch
Epoch: 4/20...  Training Step: 696...  Training loss: 1.8375...  0.3152 sec/batch
Epoch: 4/20...  Training Step: 697...  Training loss: 1.8309...  0.3163 sec/batch
Epoch: 4/20...  Training Step: 698...  Training loss: 1.8176...  0.3208 sec/batch
Epoch: 4/20...  Training Step: 699...  Training loss: 1.8209...  0.3182 sec/batch
Epoch: 4/20...  Training Step: 700...  Training loss: 1.8306...  0.3153 sec/batch
Epoch: 4/20...  Training Step: 701...  Training loss: 1.8347...  0.3159 sec/batch
Epoch: 4/20...  Training Step: 702...  Training loss: 1.8387...  0.3169 sec/batch
Epoch: 4/20...  Training Step: 703...  Training loss: 1.8438...  0.3149 sec/batch
Epoch: 4/20...  Training Step: 704...  Training loss: 1.8364...  0.3199 sec/batch
Epoch: 4/20...  Training Step: 705...  Training loss: 1.8262...  0.3191 sec/batch
Epoch: 4/20...  Training Step: 706...  Training loss: 1.8164...  0.3180 sec/batch
Epoch: 4/20...  Training Step: 707...  Training loss: 1.8212...  0.3156 sec/batch
Epoch: 4/20...  Training Step: 708...  Training loss: 1.8170...  0.3172 sec/batch
Epoch: 4/20...  Training Step: 709...  Training loss: 1.8060...  0.3172 sec/batch
Epoch: 4/20...  Training Step: 710...  Training loss: 1.7837...  0.3209 sec/batch
Epoch: 4/20...  Training Step: 711...  Training loss: 1.8291...  0.3191 sec/batch
Epoch: 4/20...  Training Step: 712...  Training loss: 1.8111...  0.3160 sec/batch
Epoch: 4/20...  Training Step: 713...  Training loss: 1.8141...  0.3166 sec/batch
Epoch: 4/20...  Training Step: 714...  Training loss: 1.8070...  0.3174 sec/batch
Epoch: 4/20...  Training Step: 715...  Training loss: 1.8311...  0.3175 sec/batch
Epoch: 4/20...  Training Step: 716...  Training loss: 1.7913...  0.3216 sec/batch
Epoch: 4/20...  Training Step: 717...  Training loss: 1.7942...  0.3188 sec/batch
Epoch: 4/20...  Training Step: 718...  Training loss: 1.8385...  0.3163 sec/batch
Epoch: 4/20...  Training Step: 719...  Training loss: 1.8179...  0.3176 sec/batch
Epoch: 4/20...  Training Step: 720...  Training loss: 1.7629...  0.3196 sec/batch
Epoch: 4/20...  Training Step: 721...  Training loss: 1.8293...  0.3203 sec/batch
Epoch: 4/20...  Training Step: 722...  Training loss: 1.8328...  0.3177 sec/batch
Epoch: 4/20...  Training Step: 723...  Training loss: 1.8023...  0.3175 sec/batch
Epoch: 4/20...  Training Step: 724...  Training loss: 1.8046...  0.3169 sec/batch
Epoch: 4/20...  Training Step: 725...  Training loss: 1.7802...  0.3178 sec/batch
Epoch: 4/20...  Training Step: 726...  Training loss: 1.7820...  0.3193 sec/batch
Epoch: 4/20...  Training Step: 727...  Training loss: 1.8193...  0.3201 sec/batch
Epoch: 4/20...  Training Step: 728...  Training loss: 1.8269...  0.3185 sec/batch
Epoch: 4/20...  Training Step: 729...  Training loss: 1.8127...  0.3165 sec/batch
Epoch: 4/20...  Training Step: 730...  Training loss: 1.8165...  0.3156 sec/batch
Epoch: 4/20...  Training Step: 731...  Training loss: 1.8288...  0.3163 sec/batch
Epoch: 4/20...  Training Step: 732...  Training loss: 1.8150...  0.3187 sec/batch
Epoch: 4/20...  Training Step: 733...  Training loss: 1.8494...  0.3168 sec/batch
Epoch: 4/20...  Training Step: 734...  Training loss: 1.8037...  0.3200 sec/batch
Epoch: 4/20...  Training Step: 735...  Training loss: 1.8462...  0.3172 sec/batch
Epoch: 4/20...  Training Step: 736...  Training loss: 1.8012...  0.3157 sec/batch
Epoch: 4/20...  Training Step: 737...  Training loss: 1.8000...  0.3226 sec/batch
Epoch: 4/20...  Training Step: 738...  Training loss: 1.8099...  0.3200 sec/batch
Epoch: 4/20...  Training Step: 739...  Training loss: 1.7824...  0.3182 sec/batch
Epoch: 4/20...  Training Step: 740...  Training loss: 1.8140...  0.3166 sec/batch
Epoch: 4/20...  Training Step: 741...  Training loss: 1.8180...  0.3191 sec/batch
Epoch: 4/20...  Training Step: 742...  Training loss: 1.8325...  0.3166 sec/batch
Epoch: 4/20...  Training Step: 743...  Training loss: 1.8126...  0.3197 sec/batch
Epoch: 4/20...  Training Step: 744...  Training loss: 1.7947...  0.3155 sec/batch
Epoch: 4/20...  Training Step: 745...  Training loss: 1.7834...  0.3194 sec/batch
Epoch: 4/20...  Training Step: 746...  Training loss: 1.8309...  0.3175 sec/batch
Epoch: 4/20...  Training Step: 747...  Training loss: 1.8118...  0.3170 sec/batch
Epoch: 4/20...  Training Step: 748...  Training loss: 1.8221...  0.3208 sec/batch
Epoch: 4/20...  Training Step: 749...  Training loss: 1.8002...  0.3178 sec/batch
Epoch: 4/20...  Training Step: 750...  Training loss: 1.7937...  0.3169 sec/batch
Epoch: 4/20...  Training Step: 751...  Training loss: 1.8063...  0.3201 sec/batch
Epoch: 4/20...  Training Step: 752...  Training loss: 1.8001...  0.3166 sec/batch
Epoch: 4/20...  Training Step: 753...  Training loss: 1.7630...  0.3190 sec/batch
Epoch: 4/20...  Training Step: 754...  Training loss: 1.8197...  0.3158 sec/batch
Epoch: 4/20...  Training Step: 755...  Training loss: 1.8187...  0.3207 sec/batch
Epoch: 4/20...  Training Step: 756...  Training loss: 1.7976...  0.3191 sec/batch
Epoch: 4/20...  Training Step: 757...  Training loss: 1.8098...  0.3169 sec/batch
Epoch: 4/20...  Training Step: 758...  Training loss: 1.7998...  0.3153 sec/batch
Epoch: 4/20...  Training Step: 759...  Training loss: 1.7929...  0.3195 sec/batch
Epoch: 4/20...  Training Step: 760...  Training loss: 1.7902...  0.3179 sec/batch
Epoch: 4/20...  Training Step: 761...  Training loss: 1.8141...  0.3165 sec/batch
Epoch: 4/20...  Training Step: 762...  Training loss: 1.8533...  0.3199 sec/batch
Epoch: 4/20...  Training Step: 763...  Training loss: 1.7925...  0.3173 sec/batch
Epoch: 4/20...  Training Step: 764...  Training loss: 1.7946...  0.3172 sec/batch
Epoch: 4/20...  Training Step: 765...  Training loss: 1.7746...  0.3199 sec/batch
Epoch: 4/20...  Training Step: 766...  Training loss: 1.7867...  0.3208 sec/batch
Epoch: 4/20...  Training Step: 767...  Training loss: 1.8085...  0.3173 sec/batch
Epoch: 4/20...  Training Step: 768...  Training loss: 1.7980...  0.3153 sec/batch
Epoch: 4/20...  Training Step: 769...  Training loss: 1.8008...  0.3169 sec/batch
Epoch: 4/20...  Training Step: 770...  Training loss: 1.7871...  0.3203 sec/batch
Epoch: 4/20...  Training Step: 771...  Training loss: 1.7792...  0.3173 sec/batch
Epoch: 4/20...  Training Step: 772...  Training loss: 1.8148...  0.3161 sec/batch
Epoch: 4/20...  Training Step: 773...  Training loss: 1.7607...  0.3180 sec/batch
Epoch: 4/20...  Training Step: 774...  Training loss: 1.7569...  0.3170 sec/batch
Epoch: 4/20...  Training Step: 775...  Training loss: 1.7618...  0.3174 sec/batch
Epoch: 4/20...  Training Step: 776...  Training loss: 1.7768...  0.3195 sec/batch
Epoch: 4/20...  Training Step: 777...  Training loss: 1.7825...  0.3159 sec/batch
Epoch: 4/20...  Training Step: 778...  Training loss: 1.8059...  0.3198 sec/batch
Epoch: 4/20...  Training Step: 779...  Training loss: 1.7770...  0.3172 sec/batch
Epoch: 4/20...  Training Step: 780...  Training loss: 1.7691...  0.3172 sec/batch
Epoch: 4/20...  Training Step: 781...  Training loss: 1.8011...  0.3199 sec/batch
Epoch: 4/20...  Training Step: 782...  Training loss: 1.7666...  0.3176 sec/batch
Epoch: 4/20...  Training Step: 783...  Training loss: 1.7846...  0.3176 sec/batch
Epoch: 4/20...  Training Step: 784...  Training loss: 1.7872...  0.3173 sec/batch
Epoch: 4/20...  Training Step: 785...  Training loss: 1.7902...  0.3154 sec/batch
Epoch: 4/20...  Training Step: 786...  Training loss: 1.7566...  0.3156 sec/batch
Epoch: 4/20...  Training Step: 787...  Training loss: 1.7793...  0.3193 sec/batch
Epoch: 4/20...  Training Step: 788...  Training loss: 1.7559...  0.3198 sec/batch
Epoch: 4/20...  Training Step: 789...  Training loss: 1.7521...  0.3198 sec/batch
Epoch: 4/20...  Training Step: 790...  Training loss: 1.7880...  0.3182 sec/batch
Epoch: 4/20...  Training Step: 791...  Training loss: 1.7699...  0.3166 sec/batch
Epoch: 4/20...  Training Step: 792...  Training loss: 1.7609...  0.3173 sec/batch
Epoch: 5/20...  Training Step: 793...  Training loss: 1.8718...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 794...  Training loss: 1.7559...  0.3188 sec/batch
Epoch: 5/20...  Training Step: 795...  Training loss: 1.7507...  0.3158 sec/batch
Epoch: 5/20...  Training Step: 796...  Training loss: 1.7726...  0.3163 sec/batch
Epoch: 5/20...  Training Step: 797...  Training loss: 1.7669...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 798...  Training loss: 1.7292...  0.3173 sec/batch
Epoch: 5/20...  Training Step: 799...  Training loss: 1.7645...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 800...  Training loss: 1.7468...  0.3178 sec/batch
Epoch: 5/20...  Training Step: 801...  Training loss: 1.7889...  0.3146 sec/batch
Epoch: 5/20...  Training Step: 802...  Training loss: 1.7570...  0.3171 sec/batch
Epoch: 5/20...  Training Step: 803...  Training loss: 1.7433...  0.3253 sec/batch
Epoch: 5/20...  Training Step: 804...  Training loss: 1.7415...  0.3190 sec/batch
Epoch: 5/20...  Training Step: 805...  Training loss: 1.7581...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 806...  Training loss: 1.7999...  0.3172 sec/batch
Epoch: 5/20...  Training Step: 807...  Training loss: 1.7608...  0.3159 sec/batch
Epoch: 5/20...  Training Step: 808...  Training loss: 1.7553...  0.3209 sec/batch
Epoch: 5/20...  Training Step: 809...  Training loss: 1.7625...  0.3184 sec/batch
Epoch: 5/20...  Training Step: 810...  Training loss: 1.7990...  0.3166 sec/batch
Epoch: 5/20...  Training Step: 811...  Training loss: 1.7688...  0.3158 sec/batch
Epoch: 5/20...  Training Step: 812...  Training loss: 1.7810...  0.3166 sec/batch
Epoch: 5/20...  Training Step: 813...  Training loss: 1.7512...  0.3151 sec/batch
Epoch: 5/20...  Training Step: 814...  Training loss: 1.8003...  0.3199 sec/batch
Epoch: 5/20...  Training Step: 815...  Training loss: 1.7482...  0.3207 sec/batch
Epoch: 5/20...  Training Step: 816...  Training loss: 1.7657...  0.3173 sec/batch
Epoch: 5/20...  Training Step: 817...  Training loss: 1.7624...  0.3158 sec/batch
Epoch: 5/20...  Training Step: 818...  Training loss: 1.7197...  0.3171 sec/batch
Epoch: 5/20...  Training Step: 819...  Training loss: 1.7147...  0.3164 sec/batch
Epoch: 5/20...  Training Step: 820...  Training loss: 1.7626...  0.3195 sec/batch
Epoch: 5/20...  Training Step: 821...  Training loss: 1.7784...  0.3181 sec/batch
Epoch: 5/20...  Training Step: 822...  Training loss: 1.7789...  0.3154 sec/batch
Epoch: 5/20...  Training Step: 823...  Training loss: 1.7571...  0.3150 sec/batch
Epoch: 5/20...  Training Step: 824...  Training loss: 1.7283...  0.3278 sec/batch
Epoch: 5/20...  Training Step: 825...  Training loss: 1.7723...  0.3163 sec/batch
Epoch: 5/20...  Training Step: 826...  Training loss: 1.7826...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 827...  Training loss: 1.7339...  0.3178 sec/batch
Epoch: 5/20...  Training Step: 828...  Training loss: 1.7503...  0.3151 sec/batch
Epoch: 5/20...  Training Step: 829...  Training loss: 1.7282...  0.3169 sec/batch
Epoch: 5/20...  Training Step: 830...  Training loss: 1.7059...  0.3158 sec/batch
Epoch: 5/20...  Training Step: 831...  Training loss: 1.7084...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 832...  Training loss: 1.7255...  0.3195 sec/batch
Epoch: 5/20...  Training Step: 833...  Training loss: 1.7225...  0.3163 sec/batch
Epoch: 5/20...  Training Step: 834...  Training loss: 1.7602...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 835...  Training loss: 1.7245...  0.3177 sec/batch
Epoch: 5/20...  Training Step: 836...  Training loss: 1.7091...  0.3203 sec/batch
Epoch: 5/20...  Training Step: 837...  Training loss: 1.7451...  0.3213 sec/batch
Epoch: 5/20...  Training Step: 838...  Training loss: 1.6950...  0.3190 sec/batch
Epoch: 5/20...  Training Step: 839...  Training loss: 1.7409...  0.3180 sec/batch
Epoch: 5/20...  Training Step: 840...  Training loss: 1.7260...  0.3206 sec/batch
Epoch: 5/20...  Training Step: 841...  Training loss: 1.7081...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 842...  Training loss: 1.7759...  0.3186 sec/batch
Epoch: 5/20...  Training Step: 843...  Training loss: 1.7019...  0.3194 sec/batch
Epoch: 5/20...  Training Step: 844...  Training loss: 1.7823...  0.3174 sec/batch
Epoch: 5/20...  Training Step: 845...  Training loss: 1.7335...  0.3167 sec/batch
Epoch: 5/20...  Training Step: 846...  Training loss: 1.7394...  0.3178 sec/batch
Epoch: 5/20...  Training Step: 847...  Training loss: 1.7305...  0.3199 sec/batch
Epoch: 5/20...  Training Step: 848...  Training loss: 1.7391...  0.3199 sec/batch
Epoch: 5/20...  Training Step: 849...  Training loss: 1.7595...  0.3173 sec/batch
Epoch: 5/20...  Training Step: 850...  Training loss: 1.7276...  0.3161 sec/batch
Epoch: 5/20...  Training Step: 851...  Training loss: 1.7179...  0.3180 sec/batch
Epoch: 5/20...  Training Step: 852...  Training loss: 1.7685...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 853...  Training loss: 1.7393...  0.3187 sec/batch
Epoch: 5/20...  Training Step: 854...  Training loss: 1.7797...  0.3167 sec/batch
Epoch: 5/20...  Training Step: 855...  Training loss: 1.7686...  0.3192 sec/batch
Epoch: 5/20...  Training Step: 856...  Training loss: 1.7506...  0.3166 sec/batch
Epoch: 5/20...  Training Step: 857...  Training loss: 1.7196...  0.3158 sec/batch
Epoch: 5/20...  Training Step: 858...  Training loss: 1.7672...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 859...  Training loss: 1.7493...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 860...  Training loss: 1.7119...  0.3173 sec/batch
Epoch: 5/20...  Training Step: 861...  Training loss: 1.7166...  0.3163 sec/batch
Epoch: 5/20...  Training Step: 862...  Training loss: 1.7254...  0.3193 sec/batch
Epoch: 5/20...  Training Step: 863...  Training loss: 1.7670...  0.3187 sec/batch
Epoch: 5/20...  Training Step: 864...  Training loss: 1.7421...  0.3167 sec/batch
Epoch: 5/20...  Training Step: 865...  Training loss: 1.7592...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 866...  Training loss: 1.7108...  0.3183 sec/batch
Epoch: 5/20...  Training Step: 867...  Training loss: 1.7229...  0.3186 sec/batch
Epoch: 5/20...  Training Step: 868...  Training loss: 1.7540...  0.3172 sec/batch
Epoch: 5/20...  Training Step: 869...  Training loss: 1.7248...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 870...  Training loss: 1.7193...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 871...  Training loss: 1.6809...  0.3167 sec/batch
Epoch: 5/20...  Training Step: 872...  Training loss: 1.7178...  0.3171 sec/batch
Epoch: 5/20...  Training Step: 873...  Training loss: 1.6781...  0.3163 sec/batch
Epoch: 5/20...  Training Step: 874...  Training loss: 1.7315...  0.3204 sec/batch
Epoch: 5/20...  Training Step: 875...  Training loss: 1.6783...  0.3192 sec/batch
Epoch: 5/20...  Training Step: 876...  Training loss: 1.7097...  0.3183 sec/batch
Epoch: 5/20...  Training Step: 877...  Training loss: 1.7096...  0.3177 sec/batch
Epoch: 5/20...  Training Step: 878...  Training loss: 1.7118...  0.3155 sec/batch
Epoch: 5/20...  Training Step: 879...  Training loss: 1.7137...  0.3175 sec/batch
Epoch: 5/20...  Training Step: 880...  Training loss: 1.7120...  0.3199 sec/batch
Epoch: 5/20...  Training Step: 881...  Training loss: 1.6911...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 882...  Training loss: 1.7358...  0.3185 sec/batch
Epoch: 5/20...  Training Step: 883...  Training loss: 1.6981...  0.3188 sec/batch
Epoch: 5/20...  Training Step: 884...  Training loss: 1.7062...  0.3173 sec/batch
Epoch: 5/20...  Training Step: 885...  Training loss: 1.7057...  0.3202 sec/batch
Epoch: 5/20...  Training Step: 886...  Training loss: 1.7078...  0.3191 sec/batch
Epoch: 5/20...  Training Step: 887...  Training loss: 1.6889...  0.3175 sec/batch
Epoch: 5/20...  Training Step: 888...  Training loss: 1.7317...  0.3191 sec/batch
Epoch: 5/20...  Training Step: 889...  Training loss: 1.7175...  0.3169 sec/batch
Epoch: 5/20...  Training Step: 890...  Training loss: 1.6854...  0.3209 sec/batch
Epoch: 5/20...  Training Step: 891...  Training loss: 1.6924...  0.3198 sec/batch
Epoch: 5/20...  Training Step: 892...  Training loss: 1.6704...  0.3194 sec/batch
Epoch: 5/20...  Training Step: 893...  Training loss: 1.7114...  0.3175 sec/batch
Epoch: 5/20...  Training Step: 894...  Training loss: 1.7078...  0.3162 sec/batch
Epoch: 5/20...  Training Step: 895...  Training loss: 1.7005...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 896...  Training loss: 1.7004...  0.3198 sec/batch
Epoch: 5/20...  Training Step: 897...  Training loss: 1.6955...  0.3164 sec/batch
Epoch: 5/20...  Training Step: 898...  Training loss: 1.6979...  0.3195 sec/batch
Epoch: 5/20...  Training Step: 899...  Training loss: 1.6889...  0.3182 sec/batch
Epoch: 5/20...  Training Step: 900...  Training loss: 1.7148...  0.3163 sec/batch
Epoch: 5/20...  Training Step: 901...  Training loss: 1.7191...  0.3202 sec/batch
Epoch: 5/20...  Training Step: 902...  Training loss: 1.7153...  0.3198 sec/batch
Epoch: 5/20...  Training Step: 903...  Training loss: 1.6989...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 904...  Training loss: 1.6951...  0.3180 sec/batch
Epoch: 5/20...  Training Step: 905...  Training loss: 1.7049...  0.3171 sec/batch
Epoch: 5/20...  Training Step: 906...  Training loss: 1.6905...  0.3203 sec/batch
Epoch: 5/20...  Training Step: 907...  Training loss: 1.6771...  0.3192 sec/batch
Epoch: 5/20...  Training Step: 908...  Training loss: 1.6618...  0.3158 sec/batch
Epoch: 5/20...  Training Step: 909...  Training loss: 1.7090...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 910...  Training loss: 1.6945...  0.3171 sec/batch
Epoch: 5/20...  Training Step: 911...  Training loss: 1.6857...  0.3204 sec/batch
Epoch: 5/20...  Training Step: 912...  Training loss: 1.6916...  0.3211 sec/batch
Epoch: 5/20...  Training Step: 913...  Training loss: 1.7064...  0.3209 sec/batch
Epoch: 5/20...  Training Step: 914...  Training loss: 1.6624...  0.3185 sec/batch
Epoch: 5/20...  Training Step: 915...  Training loss: 1.6665...  0.3157 sec/batch
Epoch: 5/20...  Training Step: 916...  Training loss: 1.7097...  0.3177 sec/batch
Epoch: 5/20...  Training Step: 917...  Training loss: 1.6960...  0.3203 sec/batch
Epoch: 5/20...  Training Step: 918...  Training loss: 1.6551...  0.3182 sec/batch
Epoch: 5/20...  Training Step: 919...  Training loss: 1.7121...  0.3167 sec/batch
Epoch: 5/20...  Training Step: 920...  Training loss: 1.7024...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 921...  Training loss: 1.6885...  0.3254 sec/batch
Epoch: 5/20...  Training Step: 922...  Training loss: 1.6714...  0.3191 sec/batch
Epoch: 5/20...  Training Step: 923...  Training loss: 1.6626...  0.3203 sec/batch
Epoch: 5/20...  Training Step: 924...  Training loss: 1.6615...  0.3204 sec/batch
Epoch: 5/20...  Training Step: 925...  Training loss: 1.7086...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 926...  Training loss: 1.7070...  0.3182 sec/batch
Epoch: 5/20...  Training Step: 927...  Training loss: 1.6944...  0.3202 sec/batch
Epoch: 5/20...  Training Step: 928...  Training loss: 1.6939...  0.3190 sec/batch
Epoch: 5/20...  Training Step: 929...  Training loss: 1.7085...  0.3186 sec/batch
Epoch: 5/20...  Training Step: 930...  Training loss: 1.6950...  0.3180 sec/batch
Epoch: 5/20...  Training Step: 931...  Training loss: 1.7068...  0.3198 sec/batch
Epoch: 5/20...  Training Step: 932...  Training loss: 1.6836...  0.3209 sec/batch
Epoch: 5/20...  Training Step: 933...  Training loss: 1.7343...  0.3209 sec/batch
Epoch: 5/20...  Training Step: 934...  Training loss: 1.6884...  0.3200 sec/batch
Epoch: 5/20...  Training Step: 935...  Training loss: 1.6859...  0.3208 sec/batch
Epoch: 5/20...  Training Step: 936...  Training loss: 1.7052...  0.3189 sec/batch
Epoch: 5/20...  Training Step: 937...  Training loss: 1.6744...  0.3201 sec/batch
Epoch: 5/20...  Training Step: 938...  Training loss: 1.6987...  0.3203 sec/batch
Epoch: 5/20...  Training Step: 939...  Training loss: 1.6978...  0.3198 sec/batch
Epoch: 5/20...  Training Step: 940...  Training loss: 1.7233...  0.3209 sec/batch
Epoch: 5/20...  Training Step: 941...  Training loss: 1.7061...  0.3153 sec/batch
Epoch: 5/20...  Training Step: 942...  Training loss: 1.6768...  0.3209 sec/batch
Epoch: 5/20...  Training Step: 943...  Training loss: 1.6559...  0.3193 sec/batch
Epoch: 5/20...  Training Step: 944...  Training loss: 1.7057...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 945...  Training loss: 1.6962...  0.3168 sec/batch
Epoch: 5/20...  Training Step: 946...  Training loss: 1.6905...  0.3179 sec/batch
Epoch: 5/20...  Training Step: 947...  Training loss: 1.6915...  0.3183 sec/batch
Epoch: 5/20...  Training Step: 948...  Training loss: 1.6813...  0.3194 sec/batch
Epoch: 5/20...  Training Step: 949...  Training loss: 1.7015...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 950...  Training loss: 1.6859...  0.3178 sec/batch
Epoch: 5/20...  Training Step: 951...  Training loss: 1.6448...  0.3163 sec/batch
Epoch: 5/20...  Training Step: 952...  Training loss: 1.7059...  0.3195 sec/batch
Epoch: 5/20...  Training Step: 953...  Training loss: 1.7140...  0.3202 sec/batch
Epoch: 5/20...  Training Step: 954...  Training loss: 1.6892...  0.3179 sec/batch
Epoch: 5/20...  Training Step: 955...  Training loss: 1.6959...  0.3169 sec/batch
Epoch: 5/20...  Training Step: 956...  Training loss: 1.6827...  0.3203 sec/batch
Epoch: 5/20...  Training Step: 957...  Training loss: 1.6871...  0.3172 sec/batch
Epoch: 5/20...  Training Step: 958...  Training loss: 1.6783...  0.3195 sec/batch
Epoch: 5/20...  Training Step: 959...  Training loss: 1.6943...  0.3205 sec/batch
Epoch: 5/20...  Training Step: 960...  Training loss: 1.7535...  0.3182 sec/batch
Epoch: 5/20...  Training Step: 961...  Training loss: 1.6701...  0.3176 sec/batch
Epoch: 5/20...  Training Step: 962...  Training loss: 1.6766...  0.3191 sec/batch
Epoch: 5/20...  Training Step: 963...  Training loss: 1.6773...  0.3171 sec/batch
Epoch: 5/20...  Training Step: 964...  Training loss: 1.6553...  0.3208 sec/batch
Epoch: 5/20...  Training Step: 965...  Training loss: 1.6932...  0.3192 sec/batch
Epoch: 5/20...  Training Step: 966...  Training loss: 1.6715...  0.3177 sec/batch
Epoch: 5/20...  Training Step: 967...  Training loss: 1.6878...  0.3194 sec/batch
Epoch: 5/20...  Training Step: 968...  Training loss: 1.6682...  0.3184 sec/batch
Epoch: 5/20...  Training Step: 969...  Training loss: 1.6702...  0.3207 sec/batch
Epoch: 5/20...  Training Step: 970...  Training loss: 1.7020...  0.3195 sec/batch
Epoch: 5/20...  Training Step: 971...  Training loss: 1.6430...  0.3204 sec/batch
Epoch: 5/20...  Training Step: 972...  Training loss: 1.6466...  0.3170 sec/batch
Epoch: 5/20...  Training Step: 973...  Training loss: 1.6483...  0.3173 sec/batch
Epoch: 5/20...  Training Step: 974...  Training loss: 1.6615...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 975...  Training loss: 1.6724...  0.3197 sec/batch
Epoch: 5/20...  Training Step: 976...  Training loss: 1.6903...  0.3178 sec/batch
Epoch: 5/20...  Training Step: 977...  Training loss: 1.6674...  0.3161 sec/batch
Epoch: 5/20...  Training Step: 978...  Training loss: 1.6518...  0.3196 sec/batch
Epoch: 5/20...  Training Step: 979...  Training loss: 1.6818...  0.3206 sec/batch
Epoch: 5/20...  Training Step: 980...  Training loss: 1.6498...  0.3194 sec/batch
Epoch: 5/20...  Training Step: 981...  Training loss: 1.6631...  0.3198 sec/batch
Epoch: 5/20...  Training Step: 982...  Training loss: 1.6682...  0.3181 sec/batch
Epoch: 5/20...  Training Step: 983...  Training loss: 1.6649...  0.3168 sec/batch
Epoch: 5/20...  Training Step: 984...  Training loss: 1.6355...  0.3195 sec/batch
Epoch: 5/20...  Training Step: 985...  Training loss: 1.6624...  0.3204 sec/batch
Epoch: 5/20...  Training Step: 986...  Training loss: 1.6436...  0.3200 sec/batch
Epoch: 5/20...  Training Step: 987...  Training loss: 1.6338...  0.3178 sec/batch
Epoch: 5/20...  Training Step: 988...  Training loss: 1.6763...  0.3199 sec/batch
Epoch: 5/20...  Training Step: 989...  Training loss: 1.6554...  0.3165 sec/batch
Epoch: 5/20...  Training Step: 990...  Training loss: 1.6509...  0.3200 sec/batch
Epoch: 6/20...  Training Step: 991...  Training loss: 1.7648...  0.3197 sec/batch
Epoch: 6/20...  Training Step: 992...  Training loss: 1.6568...  0.3204 sec/batch
Epoch: 6/20...  Training Step: 993...  Training loss: 1.6578...  0.3174 sec/batch
Epoch: 6/20...  Training Step: 994...  Training loss: 1.6659...  0.3180 sec/batch
Epoch: 6/20...  Training Step: 995...  Training loss: 1.6492...  0.3207 sec/batch
Epoch: 6/20...  Training Step: 996...  Training loss: 1.6104...  0.3208 sec/batch
Epoch: 6/20...  Training Step: 997...  Training loss: 1.6652...  0.3195 sec/batch
Epoch: 6/20...  Training Step: 998...  Training loss: 1.6426...  0.3184 sec/batch
Epoch: 6/20...  Training Step: 999...  Training loss: 1.6750...  0.3211 sec/batch
Epoch: 6/20...  Training Step: 1000...  Training loss: 1.6399...  0.3202 sec/batch
Epoch: 6/20...  Training Step: 1001...  Training loss: 1.6330...  0.3187 sec/batch
Epoch: 6/20...  Training Step: 1002...  Training loss: 1.6375...  0.3169 sec/batch
Epoch: 6/20...  Training Step: 1003...  Training loss: 1.6452...  0.3198 sec/batch
Epoch: 6/20...  Training Step: 1004...  Training loss: 1.6807...  0.3209 sec/batch
Epoch: 6/20...  Training Step: 1005...  Training loss: 1.6468...  0.3180 sec/batch
Epoch: 6/20...  Training Step: 1006...  Training loss: 1.6308...  0.3180 sec/batch
Epoch: 6/20...  Training Step: 1007...  Training loss: 1.6608...  0.3211 sec/batch
Epoch: 6/20...  Training Step: 1008...  Training loss: 1.6804...  0.3199 sec/batch
Epoch: 6/20...  Training Step: 1009...  Training loss: 1.6582...  0.3163 sec/batch
Epoch: 6/20...  Training Step: 1010...  Training loss: 1.6710...  0.3207 sec/batch
Epoch: 6/20...  Training Step: 1011...  Training loss: 1.6357...  0.3176 sec/batch
Epoch: 6/20...  Training Step: 1012...  Training loss: 1.6822...  0.3165 sec/batch
Epoch: 6/20...  Training Step: 1013...  Training loss: 1.6433...  0.3213 sec/batch
Epoch: 6/20...  Training Step: 1014...  Training loss: 1.6518...  0.3193 sec/batch
Epoch: 6/20...  Training Step: 1015...  Training loss: 1.6603...  0.3183 sec/batch
Epoch: 6/20...  Training Step: 1016...  Training loss: 1.6121...  0.3176 sec/batch
Epoch: 6/20...  Training Step: 1017...  Training loss: 1.6104...  0.3201 sec/batch
Epoch: 6/20...  Training Step: 1018...  Training loss: 1.6713...  0.3197 sec/batch
Epoch: 6/20...  Training Step: 1019...  Training loss: 1.6803...  0.3181 sec/batch
Epoch: 6/20...  Training Step: 1020...  Training loss: 1.6506...  0.3170 sec/batch
Epoch: 6/20...  Training Step: 1021...  Training loss: 1.6524...  0.3196 sec/batch
Epoch: 6/20...  Training Step: 1022...  Training loss: 1.6212...  0.3224 sec/batch
Epoch: 6/20...  Training Step: 1023...  Training loss: 1.6642...  0.3199 sec/batch
Epoch: 6/20...  Training Step: 1024...  Training loss: 1.6633...  0.3200 sec/batch
Epoch: 6/20...  Training Step: 1025...  Training loss: 1.6324...  0.3199 sec/batch
Epoch: 6/20...  Training Step: 1026...  Training loss: 1.6436...  0.3178 sec/batch
Epoch: 6/20...  Training Step: 1027...  Training loss: 1.6230...  0.3173 sec/batch
Epoch: 6/20...  Training Step: 1028...  Training loss: 1.6090...  0.3211 sec/batch
Epoch: 6/20...  Training Step: 1029...  Training loss: 1.5989...  0.3180 sec/batch
Epoch: 6/20...  Training Step: 1030...  Training loss: 1.6271...  0.3174 sec/batch
Epoch: 6/20...  Training Step: 1031...  Training loss: 1.6184...  0.3202 sec/batch
Epoch: 6/20...  Training Step: 1032...  Training loss: 1.6730...  0.3187 sec/batch
Epoch: 6/20...  Training Step: 1033...  Training loss: 1.6197...  0.3200 sec/batch
Epoch: 6/20...  Training Step: 1034...  Training loss: 1.6071...  0.3204 sec/batch
Epoch: 6/20...  Training Step: 1035...  Training loss: 1.6418...  0.3205 sec/batch
Epoch: 6/20...  Training Step: 1036...  Training loss: 1.6035...  0.3176 sec/batch
Epoch: 6/20...  Training Step: 1037...  Training loss: 1.6393...  0.3173 sec/batch
Epoch: 6/20...  Training Step: 1038...  Training loss: 1.6263...  0.3198 sec/batch
Epoch: 6/20...  Training Step: 1039...  Training loss: 1.6250...  0.3198 sec/batch
...
Epoch: 6/20...  Training Step: 1188...  Training loss: 1.5693...  0.3169 sec/batch
Epoch: 7/20...  Training Step: 1189...  Training loss: 1.6892...  0.3191 sec/batch
...
Epoch: 7/20...  Training Step: 1386...  Training loss: 1.4842...  0.3206 sec/batch
Epoch: 8/20...  Training Step: 1387...  Training loss: 1.6442...  0.3183 sec/batch
...
Epoch: 8/20...  Training Step: 1584...  Training loss: 1.4440...  0.3230 sec/batch
Epoch: 9/20...  Training Step: 1585...  Training loss: 1.5792...  0.3198 sec/batch
...
Epoch: 9/20...  Training Step: 1782...  Training loss: 1.3954...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1783...  Training loss: 1.5659...  0.3190 sec/batch
Epoch: 10/20...  Training Step: 1784...  Training loss: 1.4509...  0.3218 sec/batch
Epoch: 10/20...  Training Step: 1785...  Training loss: 1.4496...  0.3189 sec/batch
Epoch: 10/20...  Training Step: 1786...  Training loss: 1.4475...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1787...  Training loss: 1.3959...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1788...  Training loss: 1.3880...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1789...  Training loss: 1.4260...  0.3203 sec/batch
Epoch: 10/20...  Training Step: 1790...  Training loss: 1.4142...  0.3199 sec/batch
Epoch: 10/20...  Training Step: 1791...  Training loss: 1.4295...  0.3181 sec/batch
Epoch: 10/20...  Training Step: 1792...  Training loss: 1.4119...  0.3210 sec/batch
Epoch: 10/20...  Training Step: 1793...  Training loss: 1.4064...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1794...  Training loss: 1.4043...  0.3202 sec/batch
Epoch: 10/20...  Training Step: 1795...  Training loss: 1.4087...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1796...  Training loss: 1.4358...  0.3190 sec/batch
Epoch: 10/20...  Training Step: 1797...  Training loss: 1.4024...  0.3201 sec/batch
Epoch: 10/20...  Training Step: 1798...  Training loss: 1.3925...  0.3203 sec/batch
Epoch: 10/20...  Training Step: 1799...  Training loss: 1.4254...  0.3206 sec/batch
Epoch: 10/20...  Training Step: 1800...  Training loss: 1.4315...  0.3183 sec/batch
Epoch: 10/20...  Training Step: 1801...  Training loss: 1.4163...  0.3172 sec/batch
Epoch: 10/20...  Training Step: 1802...  Training loss: 1.4309...  0.3185 sec/batch
Epoch: 10/20...  Training Step: 1803...  Training loss: 1.4127...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1804...  Training loss: 1.4366...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1805...  Training loss: 1.4014...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1806...  Training loss: 1.4263...  0.3187 sec/batch
Epoch: 10/20...  Training Step: 1807...  Training loss: 1.4051...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1808...  Training loss: 1.3711...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1809...  Training loss: 1.3827...  0.3191 sec/batch
Epoch: 10/20...  Training Step: 1810...  Training loss: 1.4300...  0.3182 sec/batch
Epoch: 10/20...  Training Step: 1811...  Training loss: 1.4276...  0.3170 sec/batch
Epoch: 10/20...  Training Step: 1812...  Training loss: 1.4199...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1813...  Training loss: 1.4072...  0.3199 sec/batch
Epoch: 10/20...  Training Step: 1814...  Training loss: 1.3819...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1815...  Training loss: 1.4251...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1816...  Training loss: 1.4200...  0.3173 sec/batch
Epoch: 10/20...  Training Step: 1817...  Training loss: 1.4036...  0.3175 sec/batch
Epoch: 10/20...  Training Step: 1818...  Training loss: 1.4096...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1819...  Training loss: 1.3814...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1820...  Training loss: 1.3643...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1821...  Training loss: 1.3561...  0.3181 sec/batch
Epoch: 10/20...  Training Step: 1822...  Training loss: 1.3836...  0.3179 sec/batch
Epoch: 10/20...  Training Step: 1823...  Training loss: 1.3768...  0.3207 sec/batch
Epoch: 10/20...  Training Step: 1824...  Training loss: 1.4357...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1825...  Training loss: 1.3923...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1826...  Training loss: 1.3876...  0.3176 sec/batch
Epoch: 10/20...  Training Step: 1827...  Training loss: 1.4141...  0.3177 sec/batch
Epoch: 10/20...  Training Step: 1828...  Training loss: 1.3655...  0.3204 sec/batch
Epoch: 10/20...  Training Step: 1829...  Training loss: 1.3967...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1830...  Training loss: 1.3851...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1831...  Training loss: 1.3933...  0.3212 sec/batch
Epoch: 10/20...  Training Step: 1832...  Training loss: 1.4191...  0.3212 sec/batch
Epoch: 10/20...  Training Step: 1833...  Training loss: 1.3857...  0.3204 sec/batch
Epoch: 10/20...  Training Step: 1834...  Training loss: 1.4443...  0.3201 sec/batch
Epoch: 10/20...  Training Step: 1835...  Training loss: 1.4092...  0.3209 sec/batch
Epoch: 10/20...  Training Step: 1836...  Training loss: 1.4155...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1837...  Training loss: 1.3933...  0.3168 sec/batch
Epoch: 10/20...  Training Step: 1838...  Training loss: 1.4057...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1839...  Training loss: 1.4326...  0.3204 sec/batch
Epoch: 10/20...  Training Step: 1840...  Training loss: 1.3891...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1841...  Training loss: 1.3860...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1842...  Training loss: 1.4426...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1843...  Training loss: 1.4099...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1844...  Training loss: 1.4556...  0.3202 sec/batch
Epoch: 10/20...  Training Step: 1845...  Training loss: 1.4289...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1846...  Training loss: 1.4053...  0.3187 sec/batch
Epoch: 10/20...  Training Step: 1847...  Training loss: 1.3915...  0.3166 sec/batch
Epoch: 10/20...  Training Step: 1848...  Training loss: 1.4187...  0.3199 sec/batch
Epoch: 10/20...  Training Step: 1849...  Training loss: 1.4153...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1850...  Training loss: 1.3877...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1851...  Training loss: 1.4134...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1852...  Training loss: 1.3794...  0.3184 sec/batch
Epoch: 10/20...  Training Step: 1853...  Training loss: 1.4545...  0.3204 sec/batch
Epoch: 10/20...  Training Step: 1854...  Training loss: 1.4255...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1855...  Training loss: 1.4460...  0.3190 sec/batch
Epoch: 10/20...  Training Step: 1856...  Training loss: 1.3838...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1857...  Training loss: 1.3964...  0.3219 sec/batch
Epoch: 10/20...  Training Step: 1858...  Training loss: 1.4240...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1859...  Training loss: 1.3925...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1860...  Training loss: 1.3929...  0.3207 sec/batch
Epoch: 10/20...  Training Step: 1861...  Training loss: 1.3534...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1862...  Training loss: 1.3968...  0.3165 sec/batch
Epoch: 10/20...  Training Step: 1863...  Training loss: 1.3580...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1864...  Training loss: 1.3953...  0.3202 sec/batch
Epoch: 10/20...  Training Step: 1865...  Training loss: 1.3664...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1866...  Training loss: 1.4035...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1867...  Training loss: 1.3716...  0.3186 sec/batch
Epoch: 10/20...  Training Step: 1868...  Training loss: 1.3976...  0.3176 sec/batch
Epoch: 10/20...  Training Step: 1869...  Training loss: 1.3729...  0.3201 sec/batch
Epoch: 10/20...  Training Step: 1870...  Training loss: 1.3803...  0.3204 sec/batch
Epoch: 10/20...  Training Step: 1871...  Training loss: 1.3620...  0.3189 sec/batch
Epoch: 10/20...  Training Step: 1872...  Training loss: 1.4071...  0.3171 sec/batch
Epoch: 10/20...  Training Step: 1873...  Training loss: 1.3789...  0.3203 sec/batch
Epoch: 10/20...  Training Step: 1874...  Training loss: 1.3897...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1875...  Training loss: 1.3601...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1876...  Training loss: 1.3708...  0.3208 sec/batch
Epoch: 10/20...  Training Step: 1877...  Training loss: 1.3808...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1878...  Training loss: 1.4092...  0.3201 sec/batch
Epoch: 10/20...  Training Step: 1879...  Training loss: 1.4032...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1880...  Training loss: 1.3642...  0.3209 sec/batch
Epoch: 10/20...  Training Step: 1881...  Training loss: 1.3747...  0.3184 sec/batch
Epoch: 10/20...  Training Step: 1882...  Training loss: 1.3674...  0.3173 sec/batch
Epoch: 10/20...  Training Step: 1883...  Training loss: 1.3914...  0.3192 sec/batch
Epoch: 10/20...  Training Step: 1884...  Training loss: 1.3900...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1885...  Training loss: 1.3875...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1886...  Training loss: 1.3890...  0.3206 sec/batch
Epoch: 10/20...  Training Step: 1887...  Training loss: 1.3827...  0.3191 sec/batch
Epoch: 10/20...  Training Step: 1888...  Training loss: 1.3923...  0.3172 sec/batch
Epoch: 10/20...  Training Step: 1889...  Training loss: 1.3972...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1890...  Training loss: 1.4013...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1891...  Training loss: 1.3779...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1892...  Training loss: 1.4004...  0.3180 sec/batch
Epoch: 10/20...  Training Step: 1893...  Training loss: 1.3794...  0.3167 sec/batch
Epoch: 10/20...  Training Step: 1894...  Training loss: 1.3813...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1895...  Training loss: 1.3894...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1896...  Training loss: 1.3839...  0.3189 sec/batch
Epoch: 10/20...  Training Step: 1897...  Training loss: 1.3644...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1898...  Training loss: 1.3571...  0.3167 sec/batch
Epoch: 10/20...  Training Step: 1899...  Training loss: 1.4039...  0.3167 sec/batch
Epoch: 10/20...  Training Step: 1900...  Training loss: 1.3981...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1901...  Training loss: 1.3779...  0.3206 sec/batch
Epoch: 10/20...  Training Step: 1902...  Training loss: 1.3795...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1903...  Training loss: 1.3831...  0.3170 sec/batch
Epoch: 10/20...  Training Step: 1904...  Training loss: 1.3493...  0.3172 sec/batch
Epoch: 10/20...  Training Step: 1905...  Training loss: 1.3401...  0.3203 sec/batch
Epoch: 10/20...  Training Step: 1906...  Training loss: 1.4014...  0.3187 sec/batch
Epoch: 10/20...  Training Step: 1907...  Training loss: 1.3797...  0.3171 sec/batch
Epoch: 10/20...  Training Step: 1908...  Training loss: 1.3491...  0.3186 sec/batch
Epoch: 10/20...  Training Step: 1909...  Training loss: 1.3904...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1910...  Training loss: 1.3967...  0.3202 sec/batch
Epoch: 10/20...  Training Step: 1911...  Training loss: 1.3681...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1912...  Training loss: 1.3638...  0.3206 sec/batch
Epoch: 10/20...  Training Step: 1913...  Training loss: 1.3414...  0.3201 sec/batch
Epoch: 10/20...  Training Step: 1914...  Training loss: 1.3685...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1915...  Training loss: 1.4108...  0.3190 sec/batch
Epoch: 10/20...  Training Step: 1916...  Training loss: 1.4077...  0.3200 sec/batch
Epoch: 10/20...  Training Step: 1917...  Training loss: 1.4013...  0.3191 sec/batch
Epoch: 10/20...  Training Step: 1918...  Training loss: 1.3959...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1919...  Training loss: 1.4157...  0.3162 sec/batch
Epoch: 10/20...  Training Step: 1920...  Training loss: 1.4091...  0.3203 sec/batch
Epoch: 10/20...  Training Step: 1921...  Training loss: 1.4043...  0.3207 sec/batch
Epoch: 10/20...  Training Step: 1922...  Training loss: 1.3964...  0.3189 sec/batch
Epoch: 10/20...  Training Step: 1923...  Training loss: 1.4367...  0.3203 sec/batch
Epoch: 10/20...  Training Step: 1924...  Training loss: 1.3867...  0.3193 sec/batch
Epoch: 10/20...  Training Step: 1925...  Training loss: 1.3752...  0.3176 sec/batch
Epoch: 10/20...  Training Step: 1926...  Training loss: 1.4260...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1927...  Training loss: 1.3765...  0.3187 sec/batch
Epoch: 10/20...  Training Step: 1928...  Training loss: 1.4150...  0.3176 sec/batch
Epoch: 10/20...  Training Step: 1929...  Training loss: 1.3965...  0.3166 sec/batch
Epoch: 10/20...  Training Step: 1930...  Training loss: 1.4260...  0.3187 sec/batch
Epoch: 10/20...  Training Step: 1931...  Training loss: 1.4107...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1932...  Training loss: 1.3783...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1933...  Training loss: 1.3514...  0.3204 sec/batch
Epoch: 10/20...  Training Step: 1934...  Training loss: 1.3726...  0.3180 sec/batch
Epoch: 10/20...  Training Step: 1935...  Training loss: 1.3966...  0.3165 sec/batch
Epoch: 10/20...  Training Step: 1936...  Training loss: 1.3821...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1937...  Training loss: 1.3834...  0.3201 sec/batch
Epoch: 10/20...  Training Step: 1938...  Training loss: 1.3801...  0.3167 sec/batch
Epoch: 10/20...  Training Step: 1939...  Training loss: 1.3813...  0.3165 sec/batch
Epoch: 10/20...  Training Step: 1940...  Training loss: 1.3804...  0.3192 sec/batch
Epoch: 10/20...  Training Step: 1941...  Training loss: 1.3557...  0.3176 sec/batch
Epoch: 10/20...  Training Step: 1942...  Training loss: 1.4164...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1943...  Training loss: 1.4227...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1944...  Training loss: 1.3865...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1945...  Training loss: 1.3915...  0.3182 sec/batch
Epoch: 10/20...  Training Step: 1946...  Training loss: 1.3789...  0.3167 sec/batch
Epoch: 10/20...  Training Step: 1947...  Training loss: 1.3841...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1948...  Training loss: 1.3934...  0.3209 sec/batch
Epoch: 10/20...  Training Step: 1949...  Training loss: 1.4128...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1950...  Training loss: 1.4537...  0.3191 sec/batch
Epoch: 10/20...  Training Step: 1951...  Training loss: 1.3869...  0.3170 sec/batch
Epoch: 10/20...  Training Step: 1952...  Training loss: 1.3969...  0.3176 sec/batch
Epoch: 10/20...  Training Step: 1953...  Training loss: 1.3829...  0.3197 sec/batch
Epoch: 10/20...  Training Step: 1954...  Training loss: 1.3624...  0.3209 sec/batch
Epoch: 10/20...  Training Step: 1955...  Training loss: 1.4119...  0.3209 sec/batch
Epoch: 10/20...  Training Step: 1956...  Training loss: 1.3856...  0.3178 sec/batch
Epoch: 10/20...  Training Step: 1957...  Training loss: 1.3950...  0.3166 sec/batch
Epoch: 10/20...  Training Step: 1958...  Training loss: 1.3621...  0.3206 sec/batch
Epoch: 10/20...  Training Step: 1959...  Training loss: 1.3627...  0.3199 sec/batch
Epoch: 10/20...  Training Step: 1960...  Training loss: 1.4054...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1961...  Training loss: 1.3563...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1962...  Training loss: 1.3562...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1963...  Training loss: 1.3550...  0.3195 sec/batch
Epoch: 10/20...  Training Step: 1964...  Training loss: 1.3679...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1965...  Training loss: 1.3806...  0.3201 sec/batch
Epoch: 10/20...  Training Step: 1966...  Training loss: 1.3785...  0.3177 sec/batch
Epoch: 10/20...  Training Step: 1967...  Training loss: 1.3734...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1968...  Training loss: 1.3715...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1969...  Training loss: 1.4074...  0.3203 sec/batch
Epoch: 10/20...  Training Step: 1970...  Training loss: 1.3706...  0.3182 sec/batch
Epoch: 10/20...  Training Step: 1971...  Training loss: 1.3782...  0.3168 sec/batch
Epoch: 10/20...  Training Step: 1972...  Training loss: 1.3785...  0.3198 sec/batch
Epoch: 10/20...  Training Step: 1973...  Training loss: 1.3651...  0.3212 sec/batch
Epoch: 10/20...  Training Step: 1974...  Training loss: 1.3622...  0.3205 sec/batch
Epoch: 10/20...  Training Step: 1975...  Training loss: 1.3864...  0.3196 sec/batch
Epoch: 10/20...  Training Step: 1976...  Training loss: 1.3655...  0.3202 sec/batch
Epoch: 10/20...  Training Step: 1977...  Training loss: 1.3505...  0.3165 sec/batch
Epoch: 10/20...  Training Step: 1978...  Training loss: 1.3918...  0.3212 sec/batch
Epoch: 10/20...  Training Step: 1979...  Training loss: 1.3648...  0.3194 sec/batch
Epoch: 10/20...  Training Step: 1980...  Training loss: 1.3717...  0.3192 sec/batch
Epoch: 11/20...  Training Step: 1981...  Training loss: 1.5110...  0.3189 sec/batch
Epoch: 11/20...  Training Step: 1982...  Training loss: 1.4038...  0.3167 sec/batch
Epoch: 11/20...  Training Step: 1983...  Training loss: 1.4029...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 1984...  Training loss: 1.4168...  0.3208 sec/batch
Epoch: 11/20...  Training Step: 1985...  Training loss: 1.3615...  0.3229 sec/batch
Epoch: 11/20...  Training Step: 1986...  Training loss: 1.3604...  0.3173 sec/batch
Epoch: 11/20...  Training Step: 1987...  Training loss: 1.3920...  0.3193 sec/batch
Epoch: 11/20...  Training Step: 1988...  Training loss: 1.3828...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 1989...  Training loss: 1.3914...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 1990...  Training loss: 1.3858...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 1991...  Training loss: 1.3717...  0.3185 sec/batch
Epoch: 11/20...  Training Step: 1992...  Training loss: 1.3807...  0.3176 sec/batch
Epoch: 11/20...  Training Step: 1993...  Training loss: 1.3888...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 1994...  Training loss: 1.4008...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 1995...  Training loss: 1.3732...  0.3198 sec/batch
Epoch: 11/20...  Training Step: 1996...  Training loss: 1.3612...  0.3185 sec/batch
Epoch: 11/20...  Training Step: 1997...  Training loss: 1.3882...  0.3170 sec/batch
Epoch: 11/20...  Training Step: 1998...  Training loss: 1.4032...  0.3189 sec/batch
Epoch: 11/20...  Training Step: 1999...  Training loss: 1.3893...  0.3222 sec/batch
Epoch: 11/20...  Training Step: 2000...  Training loss: 1.3979...  0.3209 sec/batch
Epoch: 11/20...  Training Step: 2001...  Training loss: 1.3816...  0.3166 sec/batch
Epoch: 11/20...  Training Step: 2002...  Training loss: 1.4037...  0.3156 sec/batch
Epoch: 11/20...  Training Step: 2003...  Training loss: 1.3722...  0.3198 sec/batch
Epoch: 11/20...  Training Step: 2004...  Training loss: 1.3915...  0.3204 sec/batch
Epoch: 11/20...  Training Step: 2005...  Training loss: 1.3799...  0.3208 sec/batch
Epoch: 11/20...  Training Step: 2006...  Training loss: 1.3363...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2007...  Training loss: 1.3594...  0.3171 sec/batch
Epoch: 11/20...  Training Step: 2008...  Training loss: 1.3886...  0.3195 sec/batch
Epoch: 11/20...  Training Step: 2009...  Training loss: 1.3917...  0.3198 sec/batch
Epoch: 11/20...  Training Step: 2010...  Training loss: 1.4019...  0.3196 sec/batch
Epoch: 11/20...  Training Step: 2011...  Training loss: 1.3685...  0.3195 sec/batch
Epoch: 11/20...  Training Step: 2012...  Training loss: 1.3479...  0.3180 sec/batch
Epoch: 11/20...  Training Step: 2013...  Training loss: 1.3792...  0.3168 sec/batch
Epoch: 11/20...  Training Step: 2014...  Training loss: 1.3880...  0.3272 sec/batch
Epoch: 11/20...  Training Step: 2015...  Training loss: 1.3629...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2016...  Training loss: 1.3812...  0.3192 sec/batch
Epoch: 11/20...  Training Step: 2017...  Training loss: 1.3492...  0.3173 sec/batch
Epoch: 11/20...  Training Step: 2018...  Training loss: 1.3382...  0.3190 sec/batch
Epoch: 11/20...  Training Step: 2019...  Training loss: 1.3270...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 2020...  Training loss: 1.3581...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2021...  Training loss: 1.3592...  0.3196 sec/batch
Epoch: 11/20...  Training Step: 2022...  Training loss: 1.4102...  0.3192 sec/batch
Epoch: 11/20...  Training Step: 2023...  Training loss: 1.3573...  0.3198 sec/batch
Epoch: 11/20...  Training Step: 2024...  Training loss: 1.3437...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2025...  Training loss: 1.3785...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2026...  Training loss: 1.3389...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2027...  Training loss: 1.3696...  0.3185 sec/batch
Epoch: 11/20...  Training Step: 2028...  Training loss: 1.3622...  0.3189 sec/batch
Epoch: 11/20...  Training Step: 2029...  Training loss: 1.3643...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2030...  Training loss: 1.3906...  0.3196 sec/batch
Epoch: 11/20...  Training Step: 2031...  Training loss: 1.3423...  0.3215 sec/batch
Epoch: 11/20...  Training Step: 2032...  Training loss: 1.4217...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2033...  Training loss: 1.3805...  0.3190 sec/batch
Epoch: 11/20...  Training Step: 2034...  Training loss: 1.3811...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 2035...  Training loss: 1.3643...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 2036...  Training loss: 1.3766...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2037...  Training loss: 1.3901...  0.3193 sec/batch
Epoch: 11/20...  Training Step: 2038...  Training loss: 1.3604...  0.3179 sec/batch
Epoch: 11/20...  Training Step: 2039...  Training loss: 1.3458...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2040...  Training loss: 1.4053...  0.3208 sec/batch
Epoch: 11/20...  Training Step: 2041...  Training loss: 1.3892...  0.3213 sec/batch
Epoch: 11/20...  Training Step: 2042...  Training loss: 1.4117...  0.3211 sec/batch
Epoch: 11/20...  Training Step: 2043...  Training loss: 1.4004...  0.3173 sec/batch
Epoch: 11/20...  Training Step: 2044...  Training loss: 1.3852...  0.3180 sec/batch
Epoch: 11/20...  Training Step: 2045...  Training loss: 1.3677...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 2046...  Training loss: 1.3788...  0.3205 sec/batch
Epoch: 11/20...  Training Step: 2047...  Training loss: 1.3906...  0.3180 sec/batch
Epoch: 11/20...  Training Step: 2048...  Training loss: 1.3601...  0.3174 sec/batch
Epoch: 11/20...  Training Step: 2049...  Training loss: 1.3811...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2050...  Training loss: 1.3629...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2051...  Training loss: 1.4134...  0.3190 sec/batch
Epoch: 11/20...  Training Step: 2052...  Training loss: 1.3900...  0.3201 sec/batch
Epoch: 11/20...  Training Step: 2053...  Training loss: 1.4127...  0.3181 sec/batch
Epoch: 11/20...  Training Step: 2054...  Training loss: 1.3604...  0.3192 sec/batch
Epoch: 11/20...  Training Step: 2055...  Training loss: 1.3708...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2056...  Training loss: 1.4000...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2057...  Training loss: 1.3606...  0.3198 sec/batch
Epoch: 11/20...  Training Step: 2058...  Training loss: 1.3636...  0.3238 sec/batch
Epoch: 11/20...  Training Step: 2059...  Training loss: 1.3285...  0.3177 sec/batch
Epoch: 11/20...  Training Step: 2060...  Training loss: 1.3720...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2061...  Training loss: 1.3310...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 2062...  Training loss: 1.3654...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 2063...  Training loss: 1.3317...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2064...  Training loss: 1.3796...  0.3150 sec/batch
Epoch: 11/20...  Training Step: 2065...  Training loss: 1.3475...  0.3210 sec/batch
Epoch: 11/20...  Training Step: 2066...  Training loss: 1.3689...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2067...  Training loss: 1.3405...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2068...  Training loss: 1.3563...  0.3196 sec/batch
Epoch: 11/20...  Training Step: 2069...  Training loss: 1.3372...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2070...  Training loss: 1.3791...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2071...  Training loss: 1.3451...  0.3188 sec/batch
Epoch: 11/20...  Training Step: 2072...  Training loss: 1.3543...  0.3206 sec/batch
Epoch: 11/20...  Training Step: 2073...  Training loss: 1.3399...  0.3188 sec/batch
Epoch: 11/20...  Training Step: 2074...  Training loss: 1.3442...  0.3165 sec/batch
Epoch: 11/20...  Training Step: 2075...  Training loss: 1.3419...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2076...  Training loss: 1.3835...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2077...  Training loss: 1.3695...  0.3193 sec/batch
Epoch: 11/20...  Training Step: 2078...  Training loss: 1.3413...  0.3193 sec/batch
Epoch: 11/20...  Training Step: 2079...  Training loss: 1.3441...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2080...  Training loss: 1.3339...  0.3205 sec/batch
Epoch: 11/20...  Training Step: 2081...  Training loss: 1.3725...  0.3205 sec/batch
Epoch: 11/20...  Training Step: 2082...  Training loss: 1.3585...  0.3204 sec/batch
Epoch: 11/20...  Training Step: 2083...  Training loss: 1.3547...  0.3188 sec/batch
Epoch: 11/20...  Training Step: 2084...  Training loss: 1.3579...  0.3193 sec/batch
Epoch: 11/20...  Training Step: 2085...  Training loss: 1.3514...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2086...  Training loss: 1.3705...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 2087...  Training loss: 1.3655...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2088...  Training loss: 1.3666...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2089...  Training loss: 1.3629...  0.3181 sec/batch
Epoch: 11/20...  Training Step: 2090...  Training loss: 1.3839...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2091...  Training loss: 1.3448...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2092...  Training loss: 1.3660...  0.3195 sec/batch
Epoch: 11/20...  Training Step: 2093...  Training loss: 1.3532...  0.3190 sec/batch
Epoch: 11/20...  Training Step: 2094...  Training loss: 1.3397...  0.3187 sec/batch
Epoch: 11/20...  Training Step: 2095...  Training loss: 1.3427...  0.3179 sec/batch
Epoch: 11/20...  Training Step: 2096...  Training loss: 1.3229...  0.3198 sec/batch
Epoch: 11/20...  Training Step: 2097...  Training loss: 1.3662...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2098...  Training loss: 1.3683...  0.3187 sec/batch
Epoch: 11/20...  Training Step: 2099...  Training loss: 1.3524...  0.3172 sec/batch
Epoch: 11/20...  Training Step: 2100...  Training loss: 1.3463...  0.3210 sec/batch
Epoch: 11/20...  Training Step: 2101...  Training loss: 1.3527...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2102...  Training loss: 1.3200...  0.3192 sec/batch
Epoch: 11/20...  Training Step: 2103...  Training loss: 1.3086...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2104...  Training loss: 1.3709...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 2105...  Training loss: 1.3575...  0.3190 sec/batch
Epoch: 11/20...  Training Step: 2106...  Training loss: 1.3194...  0.3208 sec/batch
Epoch: 11/20...  Training Step: 2107...  Training loss: 1.3820...  0.3189 sec/batch
Epoch: 11/20...  Training Step: 2108...  Training loss: 1.3723...  0.3190 sec/batch
Epoch: 11/20...  Training Step: 2109...  Training loss: 1.3458...  0.3216 sec/batch
Epoch: 11/20...  Training Step: 2110...  Training loss: 1.3350...  0.3177 sec/batch
Epoch: 11/20...  Training Step: 2111...  Training loss: 1.3171...  0.3256 sec/batch
Epoch: 11/20...  Training Step: 2112...  Training loss: 1.3560...  0.3196 sec/batch
Epoch: 11/20...  Training Step: 2113...  Training loss: 1.3882...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2114...  Training loss: 1.3664...  0.3181 sec/batch
Epoch: 11/20...  Training Step: 2115...  Training loss: 1.3780...  0.3183 sec/batch
Epoch: 11/20...  Training Step: 2116...  Training loss: 1.3762...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2117...  Training loss: 1.3954...  0.3212 sec/batch
Epoch: 11/20...  Training Step: 2118...  Training loss: 1.3791...  0.3206 sec/batch
Epoch: 11/20...  Training Step: 2119...  Training loss: 1.3650...  0.3186 sec/batch
Epoch: 11/20...  Training Step: 2120...  Training loss: 1.3673...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2121...  Training loss: 1.4136...  0.3204 sec/batch
Epoch: 11/20...  Training Step: 2122...  Training loss: 1.3641...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 2123...  Training loss: 1.3594...  0.3206 sec/batch
Epoch: 11/20...  Training Step: 2124...  Training loss: 1.3911...  0.3201 sec/batch
Epoch: 11/20...  Training Step: 2125...  Training loss: 1.3471...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2126...  Training loss: 1.3900...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 2127...  Training loss: 1.3650...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2128...  Training loss: 1.3900...  0.3196 sec/batch
Epoch: 11/20...  Training Step: 2129...  Training loss: 1.3811...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2130...  Training loss: 1.3516...  0.3189 sec/batch
Epoch: 11/20...  Training Step: 2131...  Training loss: 1.3281...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 2132...  Training loss: 1.3452...  0.3196 sec/batch
Epoch: 11/20...  Training Step: 2133...  Training loss: 1.3709...  0.3198 sec/batch
Epoch: 11/20...  Training Step: 2134...  Training loss: 1.3527...  0.3204 sec/batch
Epoch: 11/20...  Training Step: 2135...  Training loss: 1.3587...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2136...  Training loss: 1.3542...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2137...  Training loss: 1.3665...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2138...  Training loss: 1.3608...  0.3202 sec/batch
Epoch: 11/20...  Training Step: 2139...  Training loss: 1.3261...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2140...  Training loss: 1.3703...  0.3200 sec/batch
Epoch: 11/20...  Training Step: 2141...  Training loss: 1.3927...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 2142...  Training loss: 1.3720...  0.3204 sec/batch
Epoch: 11/20...  Training Step: 2143...  Training loss: 1.3688...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 2144...  Training loss: 1.3548...  0.3185 sec/batch
Epoch: 11/20...  Training Step: 2145...  Training loss: 1.3647...  0.3187 sec/batch
Epoch: 11/20...  Training Step: 2146...  Training loss: 1.3615...  0.3215 sec/batch
Epoch: 11/20...  Training Step: 2147...  Training loss: 1.3849...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2148...  Training loss: 1.4338...  0.3193 sec/batch
Epoch: 11/20...  Training Step: 2149...  Training loss: 1.3703...  0.3193 sec/batch
Epoch: 11/20...  Training Step: 2150...  Training loss: 1.3642...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2151...  Training loss: 1.3540...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 2152...  Training loss: 1.3439...  0.3206 sec/batch
Epoch: 11/20...  Training Step: 2153...  Training loss: 1.3868...  0.3205 sec/batch
Epoch: 11/20...  Training Step: 2154...  Training loss: 1.3688...  0.3181 sec/batch
Epoch: 11/20...  Training Step: 2155...  Training loss: 1.3736...  0.3168 sec/batch
Epoch: 11/20...  Training Step: 2156...  Training loss: 1.3359...  0.3205 sec/batch
Epoch: 11/20...  Training Step: 2157...  Training loss: 1.3474...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 2158...  Training loss: 1.3994...  0.3191 sec/batch
Epoch: 11/20...  Training Step: 2159...  Training loss: 1.3295...  0.3201 sec/batch
Epoch: 11/20...  Training Step: 2160...  Training loss: 1.3321...  0.3194 sec/batch
Epoch: 11/20...  Training Step: 2161...  Training loss: 1.3476...  0.3215 sec/batch
Epoch: 11/20...  Training Step: 2162...  Training loss: 1.3483...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 2163...  Training loss: 1.3564...  0.3209 sec/batch
Epoch: 11/20...  Training Step: 2164...  Training loss: 1.3567...  0.3223 sec/batch
Epoch: 11/20...  Training Step: 2165...  Training loss: 1.3568...  0.3201 sec/batch
Epoch: 11/20...  Training Step: 2166...  Training loss: 1.3459...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 2167...  Training loss: 1.3862...  0.3195 sec/batch
Epoch: 11/20...  Training Step: 2168...  Training loss: 1.3624...  0.3199 sec/batch
Epoch: 11/20...  Training Step: 2169...  Training loss: 1.3613...  0.3187 sec/batch
Epoch: 11/20...  Training Step: 2170...  Training loss: 1.3532...  0.3167 sec/batch
Epoch: 11/20...  Training Step: 2171...  Training loss: 1.3432...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 2172...  Training loss: 1.3356...  0.3207 sec/batch
Epoch: 11/20...  Training Step: 2173...  Training loss: 1.3620...  0.3197 sec/batch
Epoch: 11/20...  Training Step: 2174...  Training loss: 1.3406...  0.3211 sec/batch
Epoch: 11/20...  Training Step: 2175...  Training loss: 1.3221...  0.3179 sec/batch
Epoch: 11/20...  Training Step: 2176...  Training loss: 1.3652...  0.3195 sec/batch
Epoch: 11/20...  Training Step: 2177...  Training loss: 1.3432...  0.3203 sec/batch
Epoch: 11/20...  Training Step: 2178...  Training loss: 1.3378...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2179...  Training loss: 1.4944...  0.3186 sec/batch
Epoch: 12/20...  Training Step: 2180...  Training loss: 1.3816...  0.3168 sec/batch
Epoch: 12/20...  Training Step: 2181...  Training loss: 1.3731...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2182...  Training loss: 1.3796...  0.3194 sec/batch
Epoch: 12/20...  Training Step: 2183...  Training loss: 1.3398...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2184...  Training loss: 1.3261...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2185...  Training loss: 1.3634...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2186...  Training loss: 1.3432...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2187...  Training loss: 1.3675...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2188...  Training loss: 1.3458...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2189...  Training loss: 1.3368...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2190...  Training loss: 1.3604...  0.3162 sec/batch
Epoch: 12/20...  Training Step: 2191...  Training loss: 1.3627...  0.3194 sec/batch
Epoch: 12/20...  Training Step: 2192...  Training loss: 1.3712...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2193...  Training loss: 1.3425...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2194...  Training loss: 1.3381...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2195...  Training loss: 1.3676...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2196...  Training loss: 1.3847...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2197...  Training loss: 1.3569...  0.3202 sec/batch
Epoch: 12/20...  Training Step: 2198...  Training loss: 1.3779...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2199...  Training loss: 1.3508...  0.3204 sec/batch
Epoch: 12/20...  Training Step: 2200...  Training loss: 1.3732...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2201...  Training loss: 1.3538...  0.3129 sec/batch
Epoch: 12/20...  Training Step: 2202...  Training loss: 1.3771...  0.3194 sec/batch
Epoch: 12/20...  Training Step: 2203...  Training loss: 1.3539...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2204...  Training loss: 1.3086...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2205...  Training loss: 1.3217...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2206...  Training loss: 1.3703...  0.3206 sec/batch
Epoch: 12/20...  Training Step: 2207...  Training loss: 1.3639...  0.3204 sec/batch
Epoch: 12/20...  Training Step: 2208...  Training loss: 1.3681...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2209...  Training loss: 1.3414...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2210...  Training loss: 1.3164...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2211...  Training loss: 1.3609...  0.3185 sec/batch
Epoch: 12/20...  Training Step: 2212...  Training loss: 1.3589...  0.3213 sec/batch
Epoch: 12/20...  Training Step: 2213...  Training loss: 1.3481...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2214...  Training loss: 1.3636...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2215...  Training loss: 1.3270...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2216...  Training loss: 1.3049...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2217...  Training loss: 1.3072...  0.3202 sec/batch
Epoch: 12/20...  Training Step: 2218...  Training loss: 1.3380...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2219...  Training loss: 1.3312...  0.3189 sec/batch
Epoch: 12/20...  Training Step: 2220...  Training loss: 1.3797...  0.3204 sec/batch
Epoch: 12/20...  Training Step: 2221...  Training loss: 1.3405...  0.3177 sec/batch
Epoch: 12/20...  Training Step: 2222...  Training loss: 1.3270...  0.3208 sec/batch
Epoch: 12/20...  Training Step: 2223...  Training loss: 1.3591...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2224...  Training loss: 1.3181...  0.3191 sec/batch
Epoch: 12/20...  Training Step: 2225...  Training loss: 1.3416...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2226...  Training loss: 1.3433...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2227...  Training loss: 1.3411...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2228...  Training loss: 1.3652...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2229...  Training loss: 1.3232...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2230...  Training loss: 1.3881...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2231...  Training loss: 1.3513...  0.3194 sec/batch
Epoch: 12/20...  Training Step: 2232...  Training loss: 1.3541...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2233...  Training loss: 1.3416...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2234...  Training loss: 1.3456...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2235...  Training loss: 1.3689...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2236...  Training loss: 1.3468...  0.3188 sec/batch
Epoch: 12/20...  Training Step: 2237...  Training loss: 1.3349...  0.3205 sec/batch
Epoch: 12/20...  Training Step: 2238...  Training loss: 1.3927...  0.3206 sec/batch
Epoch: 12/20...  Training Step: 2239...  Training loss: 1.3577...  0.3194 sec/batch
Epoch: 12/20...  Training Step: 2240...  Training loss: 1.3984...  0.3182 sec/batch
Epoch: 12/20...  Training Step: 2241...  Training loss: 1.3740...  0.3186 sec/batch
Epoch: 12/20...  Training Step: 2242...  Training loss: 1.3611...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2243...  Training loss: 1.3416...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2244...  Training loss: 1.3603...  0.3206 sec/batch
Epoch: 12/20...  Training Step: 2245...  Training loss: 1.3600...  0.3204 sec/batch
Epoch: 12/20...  Training Step: 2246...  Training loss: 1.3330...  0.3185 sec/batch
Epoch: 12/20...  Training Step: 2247...  Training loss: 1.3553...  0.3215 sec/batch
Epoch: 12/20...  Training Step: 2248...  Training loss: 1.3320...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2249...  Training loss: 1.3931...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2250...  Training loss: 1.3640...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2251...  Training loss: 1.3881...  0.3173 sec/batch
Epoch: 12/20...  Training Step: 2252...  Training loss: 1.3248...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2253...  Training loss: 1.3501...  0.3212 sec/batch
Epoch: 12/20...  Training Step: 2254...  Training loss: 1.3645...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2255...  Training loss: 1.3388...  0.3191 sec/batch
Epoch: 12/20...  Training Step: 2256...  Training loss: 1.3459...  0.3187 sec/batch
Epoch: 12/20...  Training Step: 2257...  Training loss: 1.3032...  0.3169 sec/batch
Epoch: 12/20...  Training Step: 2258...  Training loss: 1.3484...  0.3212 sec/batch
Epoch: 12/20...  Training Step: 2259...  Training loss: 1.3036...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2260...  Training loss: 1.3446...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2261...  Training loss: 1.3082...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2262...  Training loss: 1.3450...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2263...  Training loss: 1.3185...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2264...  Training loss: 1.3544...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2265...  Training loss: 1.3128...  0.3237 sec/batch
Epoch: 12/20...  Training Step: 2266...  Training loss: 1.3206...  0.3171 sec/batch
Epoch: 12/20...  Training Step: 2267...  Training loss: 1.3094...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2268...  Training loss: 1.3474...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2269...  Training loss: 1.3230...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2270...  Training loss: 1.3272...  0.3181 sec/batch
Epoch: 12/20...  Training Step: 2271...  Training loss: 1.3156...  0.3168 sec/batch
Epoch: 12/20...  Training Step: 2272...  Training loss: 1.3259...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2273...  Training loss: 1.3244...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2274...  Training loss: 1.3494...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2275...  Training loss: 1.3490...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2276...  Training loss: 1.3059...  0.3186 sec/batch
Epoch: 12/20...  Training Step: 2277...  Training loss: 1.3148...  0.3176 sec/batch
Epoch: 12/20...  Training Step: 2278...  Training loss: 1.3071...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2279...  Training loss: 1.3433...  0.3207 sec/batch
Epoch: 12/20...  Training Step: 2280...  Training loss: 1.3313...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2281...  Training loss: 1.3374...  0.3178 sec/batch
Epoch: 12/20...  Training Step: 2282...  Training loss: 1.3357...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2283...  Training loss: 1.3338...  0.3207 sec/batch
Epoch: 12/20...  Training Step: 2284...  Training loss: 1.3309...  0.3206 sec/batch
Epoch: 12/20...  Training Step: 2285...  Training loss: 1.3378...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2286...  Training loss: 1.3502...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2287...  Training loss: 1.3337...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2288...  Training loss: 1.3563...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2289...  Training loss: 1.3225...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2290...  Training loss: 1.3378...  0.3204 sec/batch
Epoch: 12/20...  Training Step: 2291...  Training loss: 1.3496...  0.3184 sec/batch
Epoch: 12/20...  Training Step: 2292...  Training loss: 1.3286...  0.3173 sec/batch
Epoch: 12/20...  Training Step: 2293...  Training loss: 1.3084...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2294...  Training loss: 1.3017...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2295...  Training loss: 1.3474...  0.3212 sec/batch
Epoch: 12/20...  Training Step: 2296...  Training loss: 1.3480...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2297...  Training loss: 1.3358...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2298...  Training loss: 1.3315...  0.3175 sec/batch
Epoch: 12/20...  Training Step: 2299...  Training loss: 1.3335...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2300...  Training loss: 1.2991...  0.3178 sec/batch
Epoch: 12/20...  Training Step: 2301...  Training loss: 1.2957...  0.3167 sec/batch
Epoch: 12/20...  Training Step: 2302...  Training loss: 1.3354...  0.3188 sec/batch
Epoch: 12/20...  Training Step: 2303...  Training loss: 1.3252...  0.3176 sec/batch
Epoch: 12/20...  Training Step: 2304...  Training loss: 1.2996...  0.3208 sec/batch
Epoch: 12/20...  Training Step: 2305...  Training loss: 1.3479...  0.3207 sec/batch
Epoch: 12/20...  Training Step: 2306...  Training loss: 1.3357...  0.3219 sec/batch
Epoch: 12/20...  Training Step: 2307...  Training loss: 1.3220...  0.3166 sec/batch
Epoch: 12/20...  Training Step: 2308...  Training loss: 1.2989...  0.3166 sec/batch
Epoch: 12/20...  Training Step: 2309...  Training loss: 1.2895...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2310...  Training loss: 1.3113...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2311...  Training loss: 1.3682...  0.3177 sec/batch
Epoch: 12/20...  Training Step: 2312...  Training loss: 1.3493...  0.3170 sec/batch
Epoch: 12/20...  Training Step: 2313...  Training loss: 1.3449...  0.3249 sec/batch
Epoch: 12/20...  Training Step: 2314...  Training loss: 1.3396...  0.3221 sec/batch
Epoch: 12/20...  Training Step: 2315...  Training loss: 1.3670...  0.3207 sec/batch
Epoch: 12/20...  Training Step: 2316...  Training loss: 1.3425...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2317...  Training loss: 1.3343...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2318...  Training loss: 1.3374...  0.3169 sec/batch
Epoch: 12/20...  Training Step: 2319...  Training loss: 1.3929...  0.3221 sec/batch
Epoch: 12/20...  Training Step: 2320...  Training loss: 1.3414...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2321...  Training loss: 1.3275...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2322...  Training loss: 1.3665...  0.3189 sec/batch
Epoch: 12/20...  Training Step: 2323...  Training loss: 1.3235...  0.3176 sec/batch
Epoch: 12/20...  Training Step: 2324...  Training loss: 1.3601...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2325...  Training loss: 1.3479...  0.3192 sec/batch
Epoch: 12/20...  Training Step: 2326...  Training loss: 1.3731...  0.3185 sec/batch
Epoch: 12/20...  Training Step: 2327...  Training loss: 1.3646...  0.3166 sec/batch
Epoch: 12/20...  Training Step: 2328...  Training loss: 1.3230...  0.3191 sec/batch
Epoch: 12/20...  Training Step: 2329...  Training loss: 1.3093...  0.3185 sec/batch
Epoch: 12/20...  Training Step: 2330...  Training loss: 1.3139...  0.3194 sec/batch
Epoch: 12/20...  Training Step: 2331...  Training loss: 1.3500...  0.3204 sec/batch
Epoch: 12/20...  Training Step: 2332...  Training loss: 1.3252...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2333...  Training loss: 1.3348...  0.3177 sec/batch
Epoch: 12/20...  Training Step: 2334...  Training loss: 1.3246...  0.3177 sec/batch
Epoch: 12/20...  Training Step: 2335...  Training loss: 1.3412...  0.3189 sec/batch
Epoch: 12/20...  Training Step: 2336...  Training loss: 1.3314...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2337...  Training loss: 1.2985...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2338...  Training loss: 1.3656...  0.3183 sec/batch
Epoch: 12/20...  Training Step: 2339...  Training loss: 1.3683...  0.3196 sec/batch
Epoch: 12/20...  Training Step: 2340...  Training loss: 1.3356...  0.3199 sec/batch
Epoch: 12/20...  Training Step: 2341...  Training loss: 1.3390...  0.3206 sec/batch
Epoch: 12/20...  Training Step: 2342...  Training loss: 1.3366...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2343...  Training loss: 1.3345...  0.3177 sec/batch
Epoch: 12/20...  Training Step: 2344...  Training loss: 1.3356...  0.3185 sec/batch
Epoch: 12/20...  Training Step: 2345...  Training loss: 1.3571...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2346...  Training loss: 1.4000...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2347...  Training loss: 1.3379...  0.3208 sec/batch
Epoch: 12/20...  Training Step: 2348...  Training loss: 1.3412...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2349...  Training loss: 1.3295...  0.3179 sec/batch
Epoch: 12/20...  Training Step: 2350...  Training loss: 1.3194...  0.3219 sec/batch
Epoch: 12/20...  Training Step: 2351...  Training loss: 1.3662...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2352...  Training loss: 1.3387...  0.3203 sec/batch
Epoch: 12/20...  Training Step: 2353...  Training loss: 1.3416...  0.3188 sec/batch
Epoch: 12/20...  Training Step: 2354...  Training loss: 1.3174...  0.3177 sec/batch
Epoch: 12/20...  Training Step: 2355...  Training loss: 1.3241...  0.3213 sec/batch
Epoch: 12/20...  Training Step: 2356...  Training loss: 1.3626...  0.3198 sec/batch
Epoch: 12/20...  Training Step: 2357...  Training loss: 1.3112...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2358...  Training loss: 1.3044...  0.3190 sec/batch
Epoch: 12/20...  Training Step: 2359...  Training loss: 1.3196...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2360...  Training loss: 1.3336...  0.3197 sec/batch
Epoch: 12/20...  Training Step: 2361...  Training loss: 1.3389...  0.3209 sec/batch
Epoch: 12/20...  Training Step: 2362...  Training loss: 1.3381...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2363...  Training loss: 1.3240...  0.3202 sec/batch
Epoch: 12/20...  Training Step: 2364...  Training loss: 1.3212...  0.3193 sec/batch
Epoch: 12/20...  Training Step: 2365...  Training loss: 1.3558...  0.3215 sec/batch
Epoch: 12/20...  Training Step: 2366...  Training loss: 1.3322...  0.3202 sec/batch
Epoch: 12/20...  Training Step: 2367...  Training loss: 1.3262...  0.3201 sec/batch
Epoch: 12/20...  Training Step: 2368...  Training loss: 1.3382...  0.3208 sec/batch
Epoch: 12/20...  Training Step: 2369...  Training loss: 1.3239...  0.3178 sec/batch
Epoch: 12/20...  Training Step: 2370...  Training loss: 1.3109...  0.3214 sec/batch
Epoch: 12/20...  Training Step: 2371...  Training loss: 1.3453...  0.3200 sec/batch
Epoch: 12/20...  Training Step: 2372...  Training loss: 1.3176...  0.3202 sec/batch
Epoch: 12/20...  Training Step: 2373...  Training loss: 1.2993...  0.3180 sec/batch
Epoch: 12/20...  Training Step: 2374...  Training loss: 1.3396...  0.3195 sec/batch
Epoch: 12/20...  Training Step: 2375...  Training loss: 1.3316...  0.3208 sec/batch
Epoch: 12/20...  Training Step: 2376...  Training loss: 1.3250...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2377...  Training loss: 1.4674...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2378...  Training loss: 1.3479...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2379...  Training loss: 1.3287...  0.3181 sec/batch
Epoch: 13/20...  Training Step: 2380...  Training loss: 1.3514...  0.3183 sec/batch
Epoch: 13/20...  Training Step: 2381...  Training loss: 1.3105...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2382...  Training loss: 1.2967...  0.3190 sec/batch
Epoch: 13/20...  Training Step: 2383...  Training loss: 1.3376...  0.3192 sec/batch
Epoch: 13/20...  Training Step: 2384...  Training loss: 1.3345...  0.3179 sec/batch
Epoch: 13/20...  Training Step: 2385...  Training loss: 1.3403...  0.3206 sec/batch
Epoch: 13/20...  Training Step: 2386...  Training loss: 1.3292...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2387...  Training loss: 1.3179...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2388...  Training loss: 1.3361...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2389...  Training loss: 1.3325...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2390...  Training loss: 1.3427...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2391...  Training loss: 1.3185...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2392...  Training loss: 1.3150...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2393...  Training loss: 1.3504...  0.3190 sec/batch
Epoch: 13/20...  Training Step: 2394...  Training loss: 1.3459...  0.3178 sec/batch
Epoch: 13/20...  Training Step: 2395...  Training loss: 1.3348...  0.3204 sec/batch
Epoch: 13/20...  Training Step: 2396...  Training loss: 1.3601...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2397...  Training loss: 1.3191...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2398...  Training loss: 1.3503...  0.3189 sec/batch
Epoch: 13/20...  Training Step: 2399...  Training loss: 1.3281...  0.3190 sec/batch
Epoch: 13/20...  Training Step: 2400...  Training loss: 1.3485...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2401...  Training loss: 1.3326...  0.3182 sec/batch
Epoch: 13/20...  Training Step: 2402...  Training loss: 1.2924...  0.3173 sec/batch
Epoch: 13/20...  Training Step: 2403...  Training loss: 1.3002...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2404...  Training loss: 1.3489...  0.3172 sec/batch
Epoch: 13/20...  Training Step: 2405...  Training loss: 1.3460...  0.3193 sec/batch
Epoch: 13/20...  Training Step: 2406...  Training loss: 1.3369...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2407...  Training loss: 1.3207...  0.3199 sec/batch
Epoch: 13/20...  Training Step: 2408...  Training loss: 1.3026...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2409...  Training loss: 1.3404...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2410...  Training loss: 1.3372...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2411...  Training loss: 1.3112...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2412...  Training loss: 1.3321...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2413...  Training loss: 1.3013...  0.3192 sec/batch
Epoch: 13/20...  Training Step: 2414...  Training loss: 1.2827...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2415...  Training loss: 1.2887...  0.3179 sec/batch
Epoch: 13/20...  Training Step: 2416...  Training loss: 1.3090...  0.3216 sec/batch
Epoch: 13/20...  Training Step: 2417...  Training loss: 1.3009...  0.3204 sec/batch
Epoch: 13/20...  Training Step: 2418...  Training loss: 1.3682...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2419...  Training loss: 1.3127...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2420...  Training loss: 1.3057...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2421...  Training loss: 1.3392...  0.3206 sec/batch
Epoch: 13/20...  Training Step: 2422...  Training loss: 1.2935...  0.3188 sec/batch
Epoch: 13/20...  Training Step: 2423...  Training loss: 1.3150...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2424...  Training loss: 1.3077...  0.3214 sec/batch
Epoch: 13/20...  Training Step: 2425...  Training loss: 1.3203...  0.3278 sec/batch
Epoch: 13/20...  Training Step: 2426...  Training loss: 1.3402...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2427...  Training loss: 1.2965...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2428...  Training loss: 1.3627...  0.3193 sec/batch
Epoch: 13/20...  Training Step: 2429...  Training loss: 1.3275...  0.3168 sec/batch
Epoch: 13/20...  Training Step: 2430...  Training loss: 1.3377...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2431...  Training loss: 1.3141...  0.3199 sec/batch
Epoch: 13/20...  Training Step: 2432...  Training loss: 1.3191...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2433...  Training loss: 1.3465...  0.3202 sec/batch
Epoch: 13/20...  Training Step: 2434...  Training loss: 1.3067...  0.3172 sec/batch
Epoch: 13/20...  Training Step: 2435...  Training loss: 1.3093...  0.3164 sec/batch
Epoch: 13/20...  Training Step: 2436...  Training loss: 1.3608...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2437...  Training loss: 1.3302...  0.3206 sec/batch
Epoch: 13/20...  Training Step: 2438...  Training loss: 1.3612...  0.3205 sec/batch
Epoch: 13/20...  Training Step: 2439...  Training loss: 1.3534...  0.3199 sec/batch
Epoch: 13/20...  Training Step: 2440...  Training loss: 1.3400...  0.3184 sec/batch
Epoch: 13/20...  Training Step: 2441...  Training loss: 1.3227...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2442...  Training loss: 1.3349...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2443...  Training loss: 1.3485...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2444...  Training loss: 1.3135...  0.3206 sec/batch
Epoch: 13/20...  Training Step: 2445...  Training loss: 1.3346...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2446...  Training loss: 1.3133...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2447...  Training loss: 1.3657...  0.3186 sec/batch
Epoch: 13/20...  Training Step: 2448...  Training loss: 1.3492...  0.3205 sec/batch
Epoch: 13/20...  Training Step: 2449...  Training loss: 1.3574...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2450...  Training loss: 1.3101...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2451...  Training loss: 1.3298...  0.3206 sec/batch
Epoch: 13/20...  Training Step: 2452...  Training loss: 1.3336...  0.3216 sec/batch
Epoch: 13/20...  Training Step: 2453...  Training loss: 1.3190...  0.3186 sec/batch
Epoch: 13/20...  Training Step: 2454...  Training loss: 1.3220...  0.3199 sec/batch
Epoch: 13/20...  Training Step: 2455...  Training loss: 1.2862...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2456...  Training loss: 1.3286...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2457...  Training loss: 1.2870...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2458...  Training loss: 1.3235...  0.3191 sec/batch
Epoch: 13/20...  Training Step: 2459...  Training loss: 1.2936...  0.3205 sec/batch
Epoch: 13/20...  Training Step: 2460...  Training loss: 1.3276...  0.3175 sec/batch
Epoch: 13/20...  Training Step: 2461...  Training loss: 1.2992...  0.3207 sec/batch
Epoch: 13/20...  Training Step: 2462...  Training loss: 1.3166...  0.3204 sec/batch
Epoch: 13/20...  Training Step: 2463...  Training loss: 1.2939...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2464...  Training loss: 1.3082...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2465...  Training loss: 1.2952...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2466...  Training loss: 1.3270...  0.3209 sec/batch
Epoch: 13/20...  Training Step: 2467...  Training loss: 1.3035...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2468...  Training loss: 1.3014...  0.3187 sec/batch
Epoch: 13/20...  Training Step: 2469...  Training loss: 1.2885...  0.3212 sec/batch
Epoch: 13/20...  Training Step: 2470...  Training loss: 1.2990...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2471...  Training loss: 1.3059...  0.3202 sec/batch
Epoch: 13/20...  Training Step: 2472...  Training loss: 1.3393...  0.3206 sec/batch
Epoch: 13/20...  Training Step: 2473...  Training loss: 1.3201...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2474...  Training loss: 1.2858...  0.3207 sec/batch
Epoch: 13/20...  Training Step: 2475...  Training loss: 1.2990...  0.3175 sec/batch
Epoch: 13/20...  Training Step: 2476...  Training loss: 1.3039...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2477...  Training loss: 1.3203...  0.3209 sec/batch
Epoch: 13/20...  Training Step: 2478...  Training loss: 1.3078...  0.3215 sec/batch
Epoch: 13/20...  Training Step: 2479...  Training loss: 1.3166...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2480...  Training loss: 1.3072...  0.3202 sec/batch
Epoch: 13/20...  Training Step: 2481...  Training loss: 1.3104...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2482...  Training loss: 1.3137...  0.3205 sec/batch
Epoch: 13/20...  Training Step: 2483...  Training loss: 1.3220...  0.3207 sec/batch
Epoch: 13/20...  Training Step: 2484...  Training loss: 1.3345...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2485...  Training loss: 1.3015...  0.3176 sec/batch
Epoch: 13/20...  Training Step: 2486...  Training loss: 1.3317...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2487...  Training loss: 1.2927...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2488...  Training loss: 1.3226...  0.3192 sec/batch
Epoch: 13/20...  Training Step: 2489...  Training loss: 1.3218...  0.3190 sec/batch
Epoch: 13/20...  Training Step: 2490...  Training loss: 1.3055...  0.3168 sec/batch
Epoch: 13/20...  Training Step: 2491...  Training loss: 1.2956...  0.3204 sec/batch
Epoch: 13/20...  Training Step: 2492...  Training loss: 1.2877...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2493...  Training loss: 1.3262...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2494...  Training loss: 1.3299...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2495...  Training loss: 1.3107...  0.3184 sec/batch
Epoch: 13/20...  Training Step: 2496...  Training loss: 1.3077...  0.3159 sec/batch
Epoch: 13/20...  Training Step: 2497...  Training loss: 1.3097...  0.3222 sec/batch
Epoch: 13/20...  Training Step: 2498...  Training loss: 1.2880...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2499...  Training loss: 1.2747...  0.3202 sec/batch
Epoch: 13/20...  Training Step: 2500...  Training loss: 1.3131...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2501...  Training loss: 1.3089...  0.3199 sec/batch
Epoch: 13/20...  Training Step: 2502...  Training loss: 1.2764...  0.3211 sec/batch
Epoch: 13/20...  Training Step: 2503...  Training loss: 1.3162...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2504...  Training loss: 1.3154...  0.3185 sec/batch
Epoch: 13/20...  Training Step: 2505...  Training loss: 1.3015...  0.3193 sec/batch
Epoch: 13/20...  Training Step: 2506...  Training loss: 1.2866...  0.3174 sec/batch
Epoch: 13/20...  Training Step: 2507...  Training loss: 1.2636...  0.3205 sec/batch
Epoch: 13/20...  Training Step: 2508...  Training loss: 1.2953...  0.3189 sec/batch
Epoch: 13/20...  Training Step: 2509...  Training loss: 1.3470...  0.3213 sec/batch
Epoch: 13/20...  Training Step: 2510...  Training loss: 1.3210...  0.3192 sec/batch
Epoch: 13/20...  Training Step: 2511...  Training loss: 1.3178...  0.3176 sec/batch
Epoch: 13/20...  Training Step: 2512...  Training loss: 1.3164...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2513...  Training loss: 1.3398...  0.3188 sec/batch
Epoch: 13/20...  Training Step: 2514...  Training loss: 1.3400...  0.3202 sec/batch
Epoch: 13/20...  Training Step: 2515...  Training loss: 1.3275...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2516...  Training loss: 1.3209...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2517...  Training loss: 1.3676...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2518...  Training loss: 1.3186...  0.3191 sec/batch
Epoch: 13/20...  Training Step: 2519...  Training loss: 1.3024...  0.3192 sec/batch
Epoch: 13/20...  Training Step: 2520...  Training loss: 1.3515...  0.3196 sec/batch
Epoch: 13/20...  Training Step: 2521...  Training loss: 1.3042...  0.3188 sec/batch
Epoch: 13/20...  Training Step: 2522...  Training loss: 1.3294...  0.3239 sec/batch
Epoch: 13/20...  Training Step: 2523...  Training loss: 1.3301...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2524...  Training loss: 1.3645...  0.3207 sec/batch
Epoch: 13/20...  Training Step: 2525...  Training loss: 1.3444...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2526...  Training loss: 1.3143...  0.3193 sec/batch
Epoch: 13/20...  Training Step: 2527...  Training loss: 1.2889...  0.3204 sec/batch
Epoch: 13/20...  Training Step: 2528...  Training loss: 1.3031...  0.3208 sec/batch
Epoch: 13/20...  Training Step: 2529...  Training loss: 1.3300...  0.3180 sec/batch
Epoch: 13/20...  Training Step: 2530...  Training loss: 1.3094...  0.3167 sec/batch
Epoch: 13/20...  Training Step: 2531...  Training loss: 1.3119...  0.3204 sec/batch
Epoch: 13/20...  Training Step: 2532...  Training loss: 1.3114...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2533...  Training loss: 1.3201...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2534...  Training loss: 1.3010...  0.3191 sec/batch
Epoch: 13/20...  Training Step: 2535...  Training loss: 1.2819...  0.3175 sec/batch
Epoch: 13/20...  Training Step: 2536...  Training loss: 1.3345...  0.3207 sec/batch
Epoch: 13/20...  Training Step: 2537...  Training loss: 1.3426...  0.3208 sec/batch
Epoch: 13/20...  Training Step: 2538...  Training loss: 1.3185...  0.3208 sec/batch
Epoch: 13/20...  Training Step: 2539...  Training loss: 1.3147...  0.3206 sec/batch
Epoch: 13/20...  Training Step: 2540...  Training loss: 1.3197...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2541...  Training loss: 1.3137...  0.3191 sec/batch
Epoch: 13/20...  Training Step: 2542...  Training loss: 1.3156...  0.3189 sec/batch
Epoch: 13/20...  Training Step: 2543...  Training loss: 1.3374...  0.3202 sec/batch
Epoch: 13/20...  Training Step: 2544...  Training loss: 1.3808...  0.3207 sec/batch
Epoch: 13/20...  Training Step: 2545...  Training loss: 1.3295...  0.3184 sec/batch
Epoch: 13/20...  Training Step: 2546...  Training loss: 1.3122...  0.3170 sec/batch
Epoch: 13/20...  Training Step: 2547...  Training loss: 1.3012...  0.3200 sec/batch
Epoch: 13/20...  Training Step: 2548...  Training loss: 1.2985...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2549...  Training loss: 1.3463...  0.3193 sec/batch
Epoch: 13/20...  Training Step: 2550...  Training loss: 1.3147...  0.3197 sec/batch
Epoch: 13/20...  Training Step: 2551...  Training loss: 1.3350...  0.3185 sec/batch
Epoch: 13/20...  Training Step: 2552...  Training loss: 1.2899...  0.3178 sec/batch
Epoch: 13/20...  Training Step: 2553...  Training loss: 1.3044...  0.3198 sec/batch
Epoch: 13/20...  Training Step: 2554...  Training loss: 1.3539...  0.3191 sec/batch
Epoch: 13/20...  Training Step: 2555...  Training loss: 1.3049...  0.3191 sec/batch
Epoch: 13/20...  Training Step: 2556...  Training loss: 1.2992...  0.3176 sec/batch
Epoch: 13/20...  Training Step: 2557...  Training loss: 1.3033...  0.3195 sec/batch
Epoch: 13/20...  Training Step: 2558...  Training loss: 1.3093...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2559...  Training loss: 1.3162...  0.3203 sec/batch
Epoch: 13/20...  Training Step: 2560...  Training loss: 1.3058...  0.3204 sec/batch
Epoch: 13/20...  Training Step: 2561...  Training loss: 1.3092...  0.3181 sec/batch
Epoch: 13/20...  Training Step: 2562...  Training loss: 1.2953...  0.3167 sec/batch
Epoch: 13/20...  Training Step: 2563...  Training loss: 1.3353...  0.3193 sec/batch
Epoch: 13/20...  Training Step: 2564...  Training loss: 1.3064...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2565...  Training loss: 1.3138...  0.3202 sec/batch
Epoch: 13/20...  Training Step: 2566...  Training loss: 1.3021...  0.3182 sec/batch
Epoch: 13/20...  Training Step: 2567...  Training loss: 1.3026...  0.3165 sec/batch
Epoch: 13/20...  Training Step: 2568...  Training loss: 1.3028...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2569...  Training loss: 1.3189...  0.3210 sec/batch
Epoch: 13/20...  Training Step: 2570...  Training loss: 1.3040...  0.3194 sec/batch
Epoch: 13/20...  Training Step: 2571...  Training loss: 1.2758...  0.3201 sec/batch
Epoch: 13/20...  Training Step: 2572...  Training loss: 1.3189...  0.3185 sec/batch
Epoch: 13/20...  Training Step: 2573...  Training loss: 1.3057...  0.3181 sec/batch
Epoch: 13/20...  Training Step: 2574...  Training loss: 1.3054...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2575...  Training loss: 1.4329...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2576...  Training loss: 1.3259...  0.3192 sec/batch
Epoch: 14/20...  Training Step: 2577...  Training loss: 1.3250...  0.3191 sec/batch
Epoch: 14/20...  Training Step: 2578...  Training loss: 1.3412...  0.3170 sec/batch
Epoch: 14/20...  Training Step: 2579...  Training loss: 1.2993...  0.3204 sec/batch
Epoch: 14/20...  Training Step: 2580...  Training loss: 1.2822...  0.3217 sec/batch
Epoch: 14/20...  Training Step: 2581...  Training loss: 1.3156...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2582...  Training loss: 1.3004...  0.3171 sec/batch
Epoch: 14/20...  Training Step: 2583...  Training loss: 1.3175...  0.3182 sec/batch
Epoch: 14/20...  Training Step: 2584...  Training loss: 1.3056...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2585...  Training loss: 1.2900...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2586...  Training loss: 1.3035...  0.3189 sec/batch
Epoch: 14/20...  Training Step: 2587...  Training loss: 1.3046...  0.3169 sec/batch
Epoch: 14/20...  Training Step: 2588...  Training loss: 1.3265...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2589...  Training loss: 1.2966...  0.3211 sec/batch
Epoch: 14/20...  Training Step: 2590...  Training loss: 1.2857...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2591...  Training loss: 1.3308...  0.3192 sec/batch
Epoch: 14/20...  Training Step: 2592...  Training loss: 1.3322...  0.3209 sec/batch
Epoch: 14/20...  Training Step: 2593...  Training loss: 1.3131...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2594...  Training loss: 1.3375...  0.3207 sec/batch
Epoch: 14/20...  Training Step: 2595...  Training loss: 1.3163...  0.3207 sec/batch
Epoch: 14/20...  Training Step: 2596...  Training loss: 1.3284...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2597...  Training loss: 1.3106...  0.3175 sec/batch
Epoch: 14/20...  Training Step: 2598...  Training loss: 1.3256...  0.3172 sec/batch
Epoch: 14/20...  Training Step: 2599...  Training loss: 1.3141...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2600...  Training loss: 1.2691...  0.3254 sec/batch
Epoch: 14/20...  Training Step: 2601...  Training loss: 1.2829...  0.3141 sec/batch
Epoch: 14/20...  Training Step: 2602...  Training loss: 1.3186...  0.3167 sec/batch
Epoch: 14/20...  Training Step: 2603...  Training loss: 1.3252...  0.3182 sec/batch
Epoch: 14/20...  Training Step: 2604...  Training loss: 1.3319...  0.3168 sec/batch
Epoch: 14/20...  Training Step: 2605...  Training loss: 1.2992...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2606...  Training loss: 1.2827...  0.3212 sec/batch
Epoch: 14/20...  Training Step: 2607...  Training loss: 1.3240...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2608...  Training loss: 1.3197...  0.3205 sec/batch
Epoch: 14/20...  Training Step: 2609...  Training loss: 1.2947...  0.3191 sec/batch
Epoch: 14/20...  Training Step: 2610...  Training loss: 1.3133...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2611...  Training loss: 1.2870...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2612...  Training loss: 1.2641...  0.3192 sec/batch
Epoch: 14/20...  Training Step: 2613...  Training loss: 1.2708...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2614...  Training loss: 1.3003...  0.3203 sec/batch
Epoch: 14/20...  Training Step: 2615...  Training loss: 1.2836...  0.3183 sec/batch
Epoch: 14/20...  Training Step: 2616...  Training loss: 1.3455...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2617...  Training loss: 1.2959...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2618...  Training loss: 1.2836...  0.3209 sec/batch
Epoch: 14/20...  Training Step: 2619...  Training loss: 1.3158...  0.3175 sec/batch
Epoch: 14/20...  Training Step: 2620...  Training loss: 1.2816...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2621...  Training loss: 1.3034...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2622...  Training loss: 1.2886...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2623...  Training loss: 1.3060...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2624...  Training loss: 1.3119...  0.3175 sec/batch
Epoch: 14/20...  Training Step: 2625...  Training loss: 1.2884...  0.3168 sec/batch
Epoch: 14/20...  Training Step: 2626...  Training loss: 1.3403...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2627...  Training loss: 1.3213...  0.3194 sec/batch
Epoch: 14/20...  Training Step: 2628...  Training loss: 1.3174...  0.3185 sec/batch
Epoch: 14/20...  Training Step: 2629...  Training loss: 1.2910...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2630...  Training loss: 1.3061...  0.3192 sec/batch
Epoch: 14/20...  Training Step: 2631...  Training loss: 1.3214...  0.3178 sec/batch
Epoch: 14/20...  Training Step: 2632...  Training loss: 1.2973...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2633...  Training loss: 1.2931...  0.3200 sec/batch
Epoch: 14/20...  Training Step: 2634...  Training loss: 1.3411...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2635...  Training loss: 1.3122...  0.3177 sec/batch
Epoch: 14/20...  Training Step: 2636...  Training loss: 1.3494...  0.3178 sec/batch
Epoch: 14/20...  Training Step: 2637...  Training loss: 1.3390...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2638...  Training loss: 1.3209...  0.3203 sec/batch
Epoch: 14/20...  Training Step: 2639...  Training loss: 1.3096...  0.3191 sec/batch
Epoch: 14/20...  Training Step: 2640...  Training loss: 1.3208...  0.3181 sec/batch
Epoch: 14/20...  Training Step: 2641...  Training loss: 1.3243...  0.3186 sec/batch
Epoch: 14/20...  Training Step: 2642...  Training loss: 1.2998...  0.3200 sec/batch
Epoch: 14/20...  Training Step: 2643...  Training loss: 1.3131...  0.3203 sec/batch
Epoch: 14/20...  Training Step: 2644...  Training loss: 1.2865...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2645...  Training loss: 1.3480...  0.3194 sec/batch
Epoch: 14/20...  Training Step: 2646...  Training loss: 1.3374...  0.3178 sec/batch
Epoch: 14/20...  Training Step: 2647...  Training loss: 1.3424...  0.3185 sec/batch
Epoch: 14/20...  Training Step: 2648...  Training loss: 1.2912...  0.3204 sec/batch
Epoch: 14/20...  Training Step: 2649...  Training loss: 1.3171...  0.3203 sec/batch
Epoch: 14/20...  Training Step: 2650...  Training loss: 1.3298...  0.3189 sec/batch
Epoch: 14/20...  Training Step: 2651...  Training loss: 1.2976...  0.3174 sec/batch
Epoch: 14/20...  Training Step: 2652...  Training loss: 1.2899...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2653...  Training loss: 1.2640...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2654...  Training loss: 1.3135...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2655...  Training loss: 1.2735...  0.3204 sec/batch
Epoch: 14/20...  Training Step: 2656...  Training loss: 1.3101...  0.3182 sec/batch
Epoch: 14/20...  Training Step: 2657...  Training loss: 1.2675...  0.3171 sec/batch
Epoch: 14/20...  Training Step: 2658...  Training loss: 1.3037...  0.3211 sec/batch
Epoch: 14/20...  Training Step: 2659...  Training loss: 1.2818...  0.3194 sec/batch
Epoch: 14/20...  Training Step: 2660...  Training loss: 1.3025...  0.3179 sec/batch
Epoch: 14/20...  Training Step: 2661...  Training loss: 1.2754...  0.3164 sec/batch
Epoch: 14/20...  Training Step: 2662...  Training loss: 1.2854...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2663...  Training loss: 1.2754...  0.3206 sec/batch
Epoch: 14/20...  Training Step: 2664...  Training loss: 1.3072...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2665...  Training loss: 1.2772...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2666...  Training loss: 1.2962...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2667...  Training loss: 1.2817...  0.3172 sec/batch
Epoch: 14/20...  Training Step: 2668...  Training loss: 1.2782...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2669...  Training loss: 1.2867...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2670...  Training loss: 1.3097...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2671...  Training loss: 1.3003...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2672...  Training loss: 1.2681...  0.3175 sec/batch
Epoch: 14/20...  Training Step: 2673...  Training loss: 1.2795...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2674...  Training loss: 1.2830...  0.3194 sec/batch
Epoch: 14/20...  Training Step: 2675...  Training loss: 1.3054...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2676...  Training loss: 1.2945...  0.3207 sec/batch
Epoch: 14/20...  Training Step: 2677...  Training loss: 1.2990...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2678...  Training loss: 1.2883...  0.3214 sec/batch
Epoch: 14/20...  Training Step: 2679...  Training loss: 1.2916...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2680...  Training loss: 1.3051...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2681...  Training loss: 1.3066...  0.3216 sec/batch
Epoch: 14/20...  Training Step: 2682...  Training loss: 1.2984...  0.3180 sec/batch
Epoch: 14/20...  Training Step: 2683...  Training loss: 1.2910...  0.3180 sec/batch
Epoch: 14/20...  Training Step: 2684...  Training loss: 1.3134...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2685...  Training loss: 1.2784...  0.3204 sec/batch
Epoch: 14/20...  Training Step: 2686...  Training loss: 1.3038...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2687...  Training loss: 1.3077...  0.3171 sec/batch
Epoch: 14/20...  Training Step: 2688...  Training loss: 1.2996...  0.3208 sec/batch
Epoch: 14/20...  Training Step: 2689...  Training loss: 1.2693...  0.3212 sec/batch
Epoch: 14/20...  Training Step: 2690...  Training loss: 1.2605...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2691...  Training loss: 1.3066...  0.3189 sec/batch
Epoch: 14/20...  Training Step: 2692...  Training loss: 1.3082...  0.3194 sec/batch
Epoch: 14/20...  Training Step: 2693...  Training loss: 1.2926...  0.3192 sec/batch
Epoch: 14/20...  Training Step: 2694...  Training loss: 1.2999...  0.3207 sec/batch
Epoch: 14/20...  Training Step: 2695...  Training loss: 1.2952...  0.3253 sec/batch
Epoch: 14/20...  Training Step: 2696...  Training loss: 1.2678...  0.3181 sec/batch
Epoch: 14/20...  Training Step: 2697...  Training loss: 1.2568...  0.3173 sec/batch
Epoch: 14/20...  Training Step: 2698...  Training loss: 1.2988...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2699...  Training loss: 1.2836...  0.3191 sec/batch
Epoch: 14/20...  Training Step: 2700...  Training loss: 1.2530...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2701...  Training loss: 1.3080...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2702...  Training loss: 1.2981...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2703...  Training loss: 1.2746...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2704...  Training loss: 1.2670...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2705...  Training loss: 1.2492...  0.3192 sec/batch
Epoch: 14/20...  Training Step: 2706...  Training loss: 1.2843...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2707...  Training loss: 1.3242...  0.3194 sec/batch
Epoch: 14/20...  Training Step: 2708...  Training loss: 1.2988...  0.3179 sec/batch
Epoch: 14/20...  Training Step: 2709...  Training loss: 1.3036...  0.3213 sec/batch
Epoch: 14/20...  Training Step: 2710...  Training loss: 1.3043...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2711...  Training loss: 1.3181...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2712...  Training loss: 1.3178...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2713...  Training loss: 1.3082...  0.3182 sec/batch
Epoch: 14/20...  Training Step: 2714...  Training loss: 1.3017...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2715...  Training loss: 1.3429...  0.3191 sec/batch
Epoch: 14/20...  Training Step: 2716...  Training loss: 1.3054...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2717...  Training loss: 1.2998...  0.3186 sec/batch
Epoch: 14/20...  Training Step: 2718...  Training loss: 1.3386...  0.3177 sec/batch
Epoch: 14/20...  Training Step: 2719...  Training loss: 1.2807...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2720...  Training loss: 1.3237...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2721...  Training loss: 1.3076...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2722...  Training loss: 1.3399...  0.3177 sec/batch
Epoch: 14/20...  Training Step: 2723...  Training loss: 1.3274...  0.3204 sec/batch
Epoch: 14/20...  Training Step: 2724...  Training loss: 1.2995...  0.3206 sec/batch
Epoch: 14/20...  Training Step: 2725...  Training loss: 1.2625...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2726...  Training loss: 1.2846...  0.3218 sec/batch
Epoch: 14/20...  Training Step: 2727...  Training loss: 1.3124...  0.3177 sec/batch
Epoch: 14/20...  Training Step: 2728...  Training loss: 1.2927...  0.3180 sec/batch
Epoch: 14/20...  Training Step: 2729...  Training loss: 1.2900...  0.3199 sec/batch
Epoch: 14/20...  Training Step: 2730...  Training loss: 1.2937...  0.3214 sec/batch
Epoch: 14/20...  Training Step: 2731...  Training loss: 1.2939...  0.3187 sec/batch
Epoch: 14/20...  Training Step: 2732...  Training loss: 1.2896...  0.3176 sec/batch
Epoch: 14/20...  Training Step: 2733...  Training loss: 1.2699...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2734...  Training loss: 1.3078...  0.3168 sec/batch
Epoch: 14/20...  Training Step: 2735...  Training loss: 1.3330...  0.3195 sec/batch
Epoch: 14/20...  Training Step: 2736...  Training loss: 1.3064...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2737...  Training loss: 1.2923...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2738...  Training loss: 1.2989...  0.3188 sec/batch
Epoch: 14/20...  Training Step: 2739...  Training loss: 1.3010...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2740...  Training loss: 1.2979...  0.3203 sec/batch
Epoch: 14/20...  Training Step: 2741...  Training loss: 1.3276...  0.3190 sec/batch
Epoch: 14/20...  Training Step: 2742...  Training loss: 1.3664...  0.3190 sec/batch
Epoch: 14/20...  Training Step: 2743...  Training loss: 1.3030...  0.3180 sec/batch
Epoch: 14/20...  Training Step: 2744...  Training loss: 1.3027...  0.3181 sec/batch
Epoch: 14/20...  Training Step: 2745...  Training loss: 1.2932...  0.3190 sec/batch
Epoch: 14/20...  Training Step: 2746...  Training loss: 1.2808...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2747...  Training loss: 1.3332...  0.3192 sec/batch
Epoch: 14/20...  Training Step: 2748...  Training loss: 1.3032...  0.3196 sec/batch
Epoch: 14/20...  Training Step: 2749...  Training loss: 1.3061...  0.3206 sec/batch
Epoch: 14/20...  Training Step: 2750...  Training loss: 1.2706...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2751...  Training loss: 1.2914...  0.3203 sec/batch
Epoch: 14/20...  Training Step: 2752...  Training loss: 1.3338...  0.3201 sec/batch
Epoch: 14/20...  Training Step: 2753...  Training loss: 1.2797...  0.3179 sec/batch
Epoch: 14/20...  Training Step: 2754...  Training loss: 1.2754...  0.3170 sec/batch
Epoch: 14/20...  Training Step: 2755...  Training loss: 1.2762...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2756...  Training loss: 1.2966...  0.3198 sec/batch
Epoch: 14/20...  Training Step: 2757...  Training loss: 1.2934...  0.3183 sec/batch
Epoch: 14/20...  Training Step: 2758...  Training loss: 1.2929...  0.3169 sec/batch
Epoch: 14/20...  Training Step: 2759...  Training loss: 1.2784...  0.3202 sec/batch
Epoch: 14/20...  Training Step: 2760...  Training loss: 1.2907...  0.3189 sec/batch
Epoch: 14/20...  Training Step: 2761...  Training loss: 1.3333...  0.3218 sec/batch
Epoch: 14/20...  Training Step: 2762...  Training loss: 1.2907...  0.3200 sec/batch
Epoch: 14/20...  Training Step: 2763...  Training loss: 1.2978...  0.3197 sec/batch
Epoch: 14/20...  Training Step: 2764...  Training loss: 1.2956...  0.3173 sec/batch
Epoch: 14/20...  Training Step: 2765...  Training loss: 1.2772...  0.3212 sec/batch
Epoch: 14/20...  Training Step: 2766...  Training loss: 1.2822...  0.3205 sec/batch
Epoch: 14/20...  Training Step: 2767...  Training loss: 1.2924...  0.3209 sec/batch
Epoch: 14/20...  Training Step: 2768...  Training loss: 1.2824...  0.3178 sec/batch
Epoch: 14/20...  Training Step: 2769...  Training loss: 1.2688...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2770...  Training loss: 1.2948...  0.3193 sec/batch
Epoch: 14/20...  Training Step: 2771...  Training loss: 1.2893...  0.3204 sec/batch
Epoch: 14/20...  Training Step: 2772...  Training loss: 1.2840...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2773...  Training loss: 1.4155...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2774...  Training loss: 1.3124...  0.3177 sec/batch
Epoch: 15/20...  Training Step: 2775...  Training loss: 1.2968...  0.3207 sec/batch
Epoch: 15/20...  Training Step: 2776...  Training loss: 1.3226...  0.3220 sec/batch
Epoch: 15/20...  Training Step: 2777...  Training loss: 1.2834...  0.3178 sec/batch
Epoch: 15/20...  Training Step: 2778...  Training loss: 1.2682...  0.3165 sec/batch
Epoch: 15/20...  Training Step: 2779...  Training loss: 1.3081...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2780...  Training loss: 1.2995...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2781...  Training loss: 1.3111...  0.3203 sec/batch
Epoch: 15/20...  Training Step: 2782...  Training loss: 1.2942...  0.3192 sec/batch
Epoch: 15/20...  Training Step: 2783...  Training loss: 1.2844...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2784...  Training loss: 1.2992...  0.3186 sec/batch
Epoch: 15/20...  Training Step: 2785...  Training loss: 1.2934...  0.3177 sec/batch
Epoch: 15/20...  Training Step: 2786...  Training loss: 1.3145...  0.3208 sec/batch
Epoch: 15/20...  Training Step: 2787...  Training loss: 1.2868...  0.3202 sec/batch
Epoch: 15/20...  Training Step: 2788...  Training loss: 1.2827...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2789...  Training loss: 1.3151...  0.3164 sec/batch
Epoch: 15/20...  Training Step: 2790...  Training loss: 1.3199...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2791...  Training loss: 1.3091...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2792...  Training loss: 1.3191...  0.3206 sec/batch
Epoch: 15/20...  Training Step: 2793...  Training loss: 1.2921...  0.3188 sec/batch
Epoch: 15/20...  Training Step: 2794...  Training loss: 1.3123...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2795...  Training loss: 1.2883...  0.3182 sec/batch
Epoch: 15/20...  Training Step: 2796...  Training loss: 1.3205...  0.3176 sec/batch
Epoch: 15/20...  Training Step: 2797...  Training loss: 1.3005...  0.3226 sec/batch
Epoch: 15/20...  Training Step: 2798...  Training loss: 1.2486...  0.3188 sec/batch
Epoch: 15/20...  Training Step: 2799...  Training loss: 1.2692...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2800...  Training loss: 1.3109...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2801...  Training loss: 1.3054...  0.3149 sec/batch
Epoch: 15/20...  Training Step: 2802...  Training loss: 1.3117...  0.3161 sec/batch
Epoch: 15/20...  Training Step: 2803...  Training loss: 1.2774...  0.3208 sec/batch
Epoch: 15/20...  Training Step: 2804...  Training loss: 1.2675...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2805...  Training loss: 1.3119...  0.3178 sec/batch
Epoch: 15/20...  Training Step: 2806...  Training loss: 1.3043...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2807...  Training loss: 1.2796...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2808...  Training loss: 1.2995...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2809...  Training loss: 1.2605...  0.3189 sec/batch
Epoch: 15/20...  Training Step: 2810...  Training loss: 1.2554...  0.3175 sec/batch
Epoch: 15/20...  Training Step: 2811...  Training loss: 1.2524...  0.3171 sec/batch
Epoch: 15/20...  Training Step: 2812...  Training loss: 1.2723...  0.3183 sec/batch
Epoch: 15/20...  Training Step: 2813...  Training loss: 1.2682...  0.3205 sec/batch
Epoch: 15/20...  Training Step: 2814...  Training loss: 1.3259...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2815...  Training loss: 1.2727...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2816...  Training loss: 1.2626...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2817...  Training loss: 1.3040...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2818...  Training loss: 1.2640...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2819...  Training loss: 1.2742...  0.3203 sec/batch
Epoch: 15/20...  Training Step: 2820...  Training loss: 1.2896...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2821...  Training loss: 1.2909...  0.3177 sec/batch
Epoch: 15/20...  Training Step: 2822...  Training loss: 1.3057...  0.3176 sec/batch
Epoch: 15/20...  Training Step: 2823...  Training loss: 1.2594...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2824...  Training loss: 1.3293...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2825...  Training loss: 1.3048...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2826...  Training loss: 1.3024...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2827...  Training loss: 1.2863...  0.3191 sec/batch
Epoch: 15/20...  Training Step: 2828...  Training loss: 1.2921...  0.3187 sec/batch
Epoch: 15/20...  Training Step: 2829...  Training loss: 1.2967...  0.3203 sec/batch
Epoch: 15/20...  Training Step: 2830...  Training loss: 1.2894...  0.3192 sec/batch
Epoch: 15/20...  Training Step: 2831...  Training loss: 1.2740...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2832...  Training loss: 1.3250...  0.3178 sec/batch
Epoch: 15/20...  Training Step: 2833...  Training loss: 1.2986...  0.3191 sec/batch
Epoch: 15/20...  Training Step: 2834...  Training loss: 1.3333...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2835...  Training loss: 1.3206...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2836...  Training loss: 1.3014...  0.3208 sec/batch
Epoch: 15/20...  Training Step: 2837...  Training loss: 1.2789...  0.3184 sec/batch
Epoch: 15/20...  Training Step: 2838...  Training loss: 1.2962...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2839...  Training loss: 1.3096...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2840...  Training loss: 1.2724...  0.3201 sec/batch
Epoch: 15/20...  Training Step: 2841...  Training loss: 1.2980...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2842...  Training loss: 1.2697...  0.3181 sec/batch
Epoch: 15/20...  Training Step: 2843...  Training loss: 1.3348...  0.3171 sec/batch
Epoch: 15/20...  Training Step: 2844...  Training loss: 1.3113...  0.3205 sec/batch
Epoch: 15/20...  Training Step: 2845...  Training loss: 1.3320...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2846...  Training loss: 1.2727...  0.3190 sec/batch
Epoch: 15/20...  Training Step: 2847...  Training loss: 1.2954...  0.3174 sec/batch
Epoch: 15/20...  Training Step: 2848...  Training loss: 1.3109...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2849...  Training loss: 1.2809...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2850...  Training loss: 1.2821...  0.3206 sec/batch
Epoch: 15/20...  Training Step: 2851...  Training loss: 1.2401...  0.3208 sec/batch
Epoch: 15/20...  Training Step: 2852...  Training loss: 1.2923...  0.3205 sec/batch
Epoch: 15/20...  Training Step: 2853...  Training loss: 1.2499...  0.3168 sec/batch
Epoch: 15/20...  Training Step: 2854...  Training loss: 1.2858...  0.3211 sec/batch
Epoch: 15/20...  Training Step: 2855...  Training loss: 1.2630...  0.3203 sec/batch
Epoch: 15/20...  Training Step: 2856...  Training loss: 1.2904...  0.3190 sec/batch
Epoch: 15/20...  Training Step: 2857...  Training loss: 1.2585...  0.3162 sec/batch
Epoch: 15/20...  Training Step: 2858...  Training loss: 1.2838...  0.3186 sec/batch
Epoch: 15/20...  Training Step: 2859...  Training loss: 1.2620...  0.3168 sec/batch
Epoch: 15/20...  Training Step: 2860...  Training loss: 1.2638...  0.3203 sec/batch
Epoch: 15/20...  Training Step: 2861...  Training loss: 1.2588...  0.3185 sec/batch
Epoch: 15/20...  Training Step: 2862...  Training loss: 1.2970...  0.3163 sec/batch
Epoch: 15/20...  Training Step: 2863...  Training loss: 1.2734...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2864...  Training loss: 1.2804...  0.3188 sec/batch
Epoch: 15/20...  Training Step: 2865...  Training loss: 1.2572...  0.3201 sec/batch
Epoch: 15/20...  Training Step: 2866...  Training loss: 1.2658...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2867...  Training loss: 1.2677...  0.3186 sec/batch
Epoch: 15/20...  Training Step: 2868...  Training loss: 1.2966...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2869...  Training loss: 1.2971...  0.3165 sec/batch
Epoch: 15/20...  Training Step: 2870...  Training loss: 1.2578...  0.3218 sec/batch
Epoch: 15/20...  Training Step: 2871...  Training loss: 1.2657...  0.3202 sec/batch
Epoch: 15/20...  Training Step: 2872...  Training loss: 1.2572...  0.3205 sec/batch
Epoch: 15/20...  Training Step: 2873...  Training loss: 1.2898...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2874...  Training loss: 1.2876...  0.3176 sec/batch
Epoch: 15/20...  Training Step: 2875...  Training loss: 1.2812...  0.3202 sec/batch
Epoch: 15/20...  Training Step: 2876...  Training loss: 1.2744...  0.3184 sec/batch
Epoch: 15/20...  Training Step: 2877...  Training loss: 1.2766...  0.3234 sec/batch
Epoch: 15/20...  Training Step: 2878...  Training loss: 1.2813...  0.3174 sec/batch
Epoch: 15/20...  Training Step: 2879...  Training loss: 1.2819...  0.3180 sec/batch
Epoch: 15/20...  Training Step: 2880...  Training loss: 1.2919...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2881...  Training loss: 1.2653...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2882...  Training loss: 1.3062...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2883...  Training loss: 1.2683...  0.3172 sec/batch
Epoch: 15/20...  Training Step: 2884...  Training loss: 1.2951...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2885...  Training loss: 1.2890...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2886...  Training loss: 1.2769...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2887...  Training loss: 1.2523...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2888...  Training loss: 1.2542...  0.3206 sec/batch
Epoch: 15/20...  Training Step: 2889...  Training loss: 1.3009...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2890...  Training loss: 1.2914...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2891...  Training loss: 1.2828...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2892...  Training loss: 1.2840...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2893...  Training loss: 1.2750...  0.3192 sec/batch
Epoch: 15/20...  Training Step: 2894...  Training loss: 1.2594...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2895...  Training loss: 1.2475...  0.3190 sec/batch
Epoch: 15/20...  Training Step: 2896...  Training loss: 1.2842...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2897...  Training loss: 1.2768...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2898...  Training loss: 1.2356...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2899...  Training loss: 1.2964...  0.3175 sec/batch
Epoch: 15/20...  Training Step: 2900...  Training loss: 1.2865...  0.3181 sec/batch
Epoch: 15/20...  Training Step: 2901...  Training loss: 1.2667...  0.3213 sec/batch
Epoch: 15/20...  Training Step: 2902...  Training loss: 1.2416...  0.3229 sec/batch
Epoch: 15/20...  Training Step: 2903...  Training loss: 1.2334...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2904...  Training loss: 1.2580...  0.3189 sec/batch
Epoch: 15/20...  Training Step: 2905...  Training loss: 1.3037...  0.3180 sec/batch
Epoch: 15/20...  Training Step: 2906...  Training loss: 1.2894...  0.3206 sec/batch
Epoch: 15/20...  Training Step: 2907...  Training loss: 1.2864...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2908...  Training loss: 1.2880...  0.3175 sec/batch
Epoch: 15/20...  Training Step: 2909...  Training loss: 1.3118...  0.3206 sec/batch
Epoch: 15/20...  Training Step: 2910...  Training loss: 1.2985...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2911...  Training loss: 1.2952...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2912...  Training loss: 1.2865...  0.3207 sec/batch
Epoch: 15/20...  Training Step: 2913...  Training loss: 1.3355...  0.3178 sec/batch
Epoch: 15/20...  Training Step: 2914...  Training loss: 1.2949...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2915...  Training loss: 1.2780...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2916...  Training loss: 1.3049...  0.3207 sec/batch
Epoch: 15/20...  Training Step: 2917...  Training loss: 1.2761...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2918...  Training loss: 1.3016...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2919...  Training loss: 1.2962...  0.3192 sec/batch
Epoch: 15/20...  Training Step: 2920...  Training loss: 1.3161...  0.3192 sec/batch
Epoch: 15/20...  Training Step: 2921...  Training loss: 1.3024...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2922...  Training loss: 1.2849...  0.3204 sec/batch
Epoch: 15/20...  Training Step: 2923...  Training loss: 1.2473...  0.3191 sec/batch
Epoch: 15/20...  Training Step: 2924...  Training loss: 1.2570...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2925...  Training loss: 1.2911...  0.3192 sec/batch
Epoch: 15/20...  Training Step: 2926...  Training loss: 1.2727...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2927...  Training loss: 1.2774...  0.3203 sec/batch
Epoch: 15/20...  Training Step: 2928...  Training loss: 1.2850...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2929...  Training loss: 1.2944...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2930...  Training loss: 1.2698...  0.3182 sec/batch
Epoch: 15/20...  Training Step: 2931...  Training loss: 1.2510...  0.3205 sec/batch
Epoch: 15/20...  Training Step: 2932...  Training loss: 1.3007...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2933...  Training loss: 1.3181...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2934...  Training loss: 1.2899...  0.3182 sec/batch
Epoch: 15/20...  Training Step: 2935...  Training loss: 1.2755...  0.3170 sec/batch
Epoch: 15/20...  Training Step: 2936...  Training loss: 1.2810...  0.3201 sec/batch
Epoch: 15/20...  Training Step: 2937...  Training loss: 1.2848...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2938...  Training loss: 1.2783...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2939...  Training loss: 1.3175...  0.3186 sec/batch
Epoch: 15/20...  Training Step: 2940...  Training loss: 1.3444...  0.3171 sec/batch
Epoch: 15/20...  Training Step: 2941...  Training loss: 1.2899...  0.3202 sec/batch
Epoch: 15/20...  Training Step: 2942...  Training loss: 1.2968...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2943...  Training loss: 1.2777...  0.3202 sec/batch
Epoch: 15/20...  Training Step: 2944...  Training loss: 1.2695...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2945...  Training loss: 1.3234...  0.3180 sec/batch
Epoch: 15/20...  Training Step: 2946...  Training loss: 1.2871...  0.3172 sec/batch
Epoch: 15/20...  Training Step: 2947...  Training loss: 1.2893...  0.3206 sec/batch
Epoch: 15/20...  Training Step: 2948...  Training loss: 1.2576...  0.3209 sec/batch
Epoch: 15/20...  Training Step: 2949...  Training loss: 1.2677...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2950...  Training loss: 1.3211...  0.3178 sec/batch
Epoch: 15/20...  Training Step: 2951...  Training loss: 1.2693...  0.3179 sec/batch
Epoch: 15/20...  Training Step: 2952...  Training loss: 1.2530...  0.3208 sec/batch
Epoch: 15/20...  Training Step: 2953...  Training loss: 1.2652...  0.3200 sec/batch
Epoch: 15/20...  Training Step: 2954...  Training loss: 1.2778...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2955...  Training loss: 1.2788...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2956...  Training loss: 1.2774...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2957...  Training loss: 1.2754...  0.3197 sec/batch
Epoch: 15/20...  Training Step: 2958...  Training loss: 1.2663...  0.3193 sec/batch
Epoch: 15/20...  Training Step: 2959...  Training loss: 1.3031...  0.3192 sec/batch
Epoch: 15/20...  Training Step: 2960...  Training loss: 1.2690...  0.3196 sec/batch
Epoch: 15/20...  Training Step: 2961...  Training loss: 1.2686...  0.3176 sec/batch
Epoch: 15/20...  Training Step: 2962...  Training loss: 1.2837...  0.3199 sec/batch
Epoch: 15/20...  Training Step: 2963...  Training loss: 1.2572...  0.3201 sec/batch
Epoch: 15/20...  Training Step: 2964...  Training loss: 1.2617...  0.3194 sec/batch
Epoch: 15/20...  Training Step: 2965...  Training loss: 1.2799...  0.3201 sec/batch
Epoch: 15/20...  Training Step: 2966...  Training loss: 1.2547...  0.3183 sec/batch
Epoch: 15/20...  Training Step: 2967...  Training loss: 1.2483...  0.3195 sec/batch
Epoch: 15/20...  Training Step: 2968...  Training loss: 1.2851...  0.3208 sec/batch
Epoch: 15/20...  Training Step: 2969...  Training loss: 1.2692...  0.3198 sec/batch
Epoch: 15/20...  Training Step: 2970...  Training loss: 1.2684...  0.3183 sec/batch
Epoch: 16/20...  Training Step: 2971...  Training loss: 1.4092...  0.3175 sec/batch
Epoch: 16/20...  Training Step: 2972...  Training loss: 1.3063...  0.3194 sec/batch
Epoch: 16/20...  Training Step: 2973...  Training loss: 1.2855...  0.3209 sec/batch
Epoch: 16/20...  Training Step: 2974...  Training loss: 1.3133...  0.3202 sec/batch
Epoch: 16/20...  Training Step: 2975...  Training loss: 1.2645...  0.3189 sec/batch
Epoch: 16/20...  Training Step: 2976...  Training loss: 1.2476...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 2977...  Training loss: 1.2862...  0.3200 sec/batch
Epoch: 16/20...  Training Step: 2978...  Training loss: 1.2815...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 2979...  Training loss: 1.2840...  0.3222 sec/batch
Epoch: 16/20...  Training Step: 2980...  Training loss: 1.2726...  0.3203 sec/batch
Epoch: 16/20...  Training Step: 2981...  Training loss: 1.2609...  0.3164 sec/batch
Epoch: 16/20...  Training Step: 2982...  Training loss: 1.2830...  0.3205 sec/batch
Epoch: 16/20...  Training Step: 2983...  Training loss: 1.2853...  0.3211 sec/batch
Epoch: 16/20...  Training Step: 2984...  Training loss: 1.2922...  0.3204 sec/batch
Epoch: 16/20...  Training Step: 2985...  Training loss: 1.2781...  0.3192 sec/batch
Epoch: 16/20...  Training Step: 2986...  Training loss: 1.2616...  0.3172 sec/batch
Epoch: 16/20...  Training Step: 2987...  Training loss: 1.3026...  0.3200 sec/batch
Epoch: 16/20...  Training Step: 2988...  Training loss: 1.2974...  0.3191 sec/batch
Epoch: 16/20...  Training Step: 2989...  Training loss: 1.2920...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 2990...  Training loss: 1.2964...  0.3209 sec/batch
Epoch: 16/20...  Training Step: 2991...  Training loss: 1.2840...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 2992...  Training loss: 1.3012...  0.3200 sec/batch
Epoch: 16/20...  Training Step: 2993...  Training loss: 1.2640...  0.3205 sec/batch
Epoch: 16/20...  Training Step: 2994...  Training loss: 1.3056...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 2995...  Training loss: 1.2845...  0.3195 sec/batch
Epoch: 16/20...  Training Step: 2996...  Training loss: 1.2437...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 2997...  Training loss: 1.2507...  0.3178 sec/batch
Epoch: 16/20...  Training Step: 2998...  Training loss: 1.2958...  0.3206 sec/batch
Epoch: 16/20...  Training Step: 2999...  Training loss: 1.2971...  0.3208 sec/batch
Epoch: 16/20...  Training Step: 3000...  Training loss: 1.2959...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 3001...  Training loss: 1.2696...  0.3173 sec/batch
Epoch: 16/20...  Training Step: 3002...  Training loss: 1.2518...  0.3152 sec/batch
Epoch: 16/20...  Training Step: 3003...  Training loss: 1.2900...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 3004...  Training loss: 1.2833...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3005...  Training loss: 1.2684...  0.3210 sec/batch
Epoch: 16/20...  Training Step: 3006...  Training loss: 1.2826...  0.3193 sec/batch
Epoch: 16/20...  Training Step: 3007...  Training loss: 1.2550...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 3008...  Training loss: 1.2391...  0.3191 sec/batch
Epoch: 16/20...  Training Step: 3009...  Training loss: 1.2340...  0.3206 sec/batch
Epoch: 16/20...  Training Step: 3010...  Training loss: 1.2656...  0.3203 sec/batch
Epoch: 16/20...  Training Step: 3011...  Training loss: 1.2443...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 3012...  Training loss: 1.3238...  0.3205 sec/batch
Epoch: 16/20...  Training Step: 3013...  Training loss: 1.2576...  0.3193 sec/batch
Epoch: 16/20...  Training Step: 3014...  Training loss: 1.2517...  0.3180 sec/batch
Epoch: 16/20...  Training Step: 3015...  Training loss: 1.2771...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 3016...  Training loss: 1.2519...  0.3190 sec/batch
Epoch: 16/20...  Training Step: 3017...  Training loss: 1.2613...  0.3211 sec/batch
Epoch: 16/20...  Training Step: 3018...  Training loss: 1.2593...  0.3197 sec/batch
Epoch: 16/20...  Training Step: 3019...  Training loss: 1.2700...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3020...  Training loss: 1.2978...  0.3213 sec/batch
Epoch: 16/20...  Training Step: 3021...  Training loss: 1.2487...  0.3209 sec/batch
Epoch: 16/20...  Training Step: 3022...  Training loss: 1.3093...  0.3216 sec/batch
Epoch: 16/20...  Training Step: 3023...  Training loss: 1.2720...  0.3190 sec/batch
Epoch: 16/20...  Training Step: 3024...  Training loss: 1.2810...  0.3170 sec/batch
Epoch: 16/20...  Training Step: 3025...  Training loss: 1.2721...  0.3210 sec/batch
Epoch: 16/20...  Training Step: 3026...  Training loss: 1.2792...  0.3190 sec/batch
Epoch: 16/20...  Training Step: 3027...  Training loss: 1.2899...  0.3163 sec/batch
Epoch: 16/20...  Training Step: 3028...  Training loss: 1.2743...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 3029...  Training loss: 1.2562...  0.3175 sec/batch
Epoch: 16/20...  Training Step: 3030...  Training loss: 1.3197...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3031...  Training loss: 1.2929...  0.3193 sec/batch
Epoch: 16/20...  Training Step: 3032...  Training loss: 1.3090...  0.3208 sec/batch
Epoch: 16/20...  Training Step: 3033...  Training loss: 1.3014...  0.3191 sec/batch
Epoch: 16/20...  Training Step: 3034...  Training loss: 1.2898...  0.3182 sec/batch
Epoch: 16/20...  Training Step: 3035...  Training loss: 1.2717...  0.3202 sec/batch
Epoch: 16/20...  Training Step: 3036...  Training loss: 1.2771...  0.3194 sec/batch
Epoch: 16/20...  Training Step: 3037...  Training loss: 1.3050...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 3038...  Training loss: 1.2611...  0.3194 sec/batch
Epoch: 16/20...  Training Step: 3039...  Training loss: 1.2832...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 3040...  Training loss: 1.2600...  0.3212 sec/batch
Epoch: 16/20...  Training Step: 3041...  Training loss: 1.3221...  0.3195 sec/batch
Epoch: 16/20...  Training Step: 3042...  Training loss: 1.2944...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3043...  Training loss: 1.3225...  0.3184 sec/batch
Epoch: 16/20...  Training Step: 3044...  Training loss: 1.2568...  0.3170 sec/batch
Epoch: 16/20...  Training Step: 3045...  Training loss: 1.2856...  0.3200 sec/batch
Epoch: 16/20...  Training Step: 3046...  Training loss: 1.2918...  0.3195 sec/batch
Epoch: 16/20...  Training Step: 3047...  Training loss: 1.2783...  0.3212 sec/batch
Epoch: 16/20...  Training Step: 3048...  Training loss: 1.2625...  0.3184 sec/batch
Epoch: 16/20...  Training Step: 3049...  Training loss: 1.2340...  0.3176 sec/batch
Epoch: 16/20...  Training Step: 3050...  Training loss: 1.2699...  0.3208 sec/batch
Epoch: 16/20...  Training Step: 3051...  Training loss: 1.2467...  0.3194 sec/batch
Epoch: 16/20...  Training Step: 3052...  Training loss: 1.2718...  0.3193 sec/batch
Epoch: 16/20...  Training Step: 3053...  Training loss: 1.2405...  0.3204 sec/batch
Epoch: 16/20...  Training Step: 3054...  Training loss: 1.2675...  0.3168 sec/batch
Epoch: 16/20...  Training Step: 3055...  Training loss: 1.2456...  0.3189 sec/batch
Epoch: 16/20...  Training Step: 3056...  Training loss: 1.2764...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 3057...  Training loss: 1.2488...  0.3203 sec/batch
Epoch: 16/20...  Training Step: 3058...  Training loss: 1.2610...  0.3178 sec/batch
Epoch: 16/20...  Training Step: 3059...  Training loss: 1.2415...  0.3177 sec/batch
Epoch: 16/20...  Training Step: 3060...  Training loss: 1.2815...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3061...  Training loss: 1.2511...  0.3220 sec/batch
Epoch: 16/20...  Training Step: 3062...  Training loss: 1.2680...  0.3201 sec/batch
Epoch: 16/20...  Training Step: 3063...  Training loss: 1.2466...  0.3204 sec/batch
Epoch: 16/20...  Training Step: 3064...  Training loss: 1.2422...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 3065...  Training loss: 1.2626...  0.3196 sec/batch
Epoch: 16/20...  Training Step: 3066...  Training loss: 1.2847...  0.3194 sec/batch
Epoch: 16/20...  Training Step: 3067...  Training loss: 1.2826...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 3068...  Training loss: 1.2343...  0.3185 sec/batch
Epoch: 16/20...  Training Step: 3069...  Training loss: 1.2502...  0.3164 sec/batch
Epoch: 16/20...  Training Step: 3070...  Training loss: 1.2463...  0.3188 sec/batch
Epoch: 16/20...  Training Step: 3071...  Training loss: 1.2757...  0.3198 sec/batch
Epoch: 16/20...  Training Step: 3072...  Training loss: 1.2605...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3073...  Training loss: 1.2732...  0.3208 sec/batch
Epoch: 16/20...  Training Step: 3074...  Training loss: 1.2619...  0.3184 sec/batch
Epoch: 16/20...  Training Step: 3075...  Training loss: 1.2630...  0.3191 sec/batch
Epoch: 16/20...  Training Step: 3076...  Training loss: 1.2651...  0.3201 sec/batch
Epoch: 16/20...  Training Step: 3077...  Training loss: 1.2709...  0.3194 sec/batch
Epoch: 16/20...  Training Step: 3078...  Training loss: 1.2698...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3079...  Training loss: 1.2557...  0.3199 sec/batch
Epoch: 16/20...  Training Step: 3080...  Training loss: 1.2865...  0.3194 sec/batch
Epoch: 16/20...  Training Step: 3081...  Training loss: 1.2522...  0.3226 sec/batch
...
Epoch: 16/20...  Training Step: 3168...  Training loss: 1.2545...  0.3213 sec/batch
Epoch: 17/20...  Training Step: 3169...  Training loss: 1.3885...  0.3196 sec/batch
...
Epoch: 17/20...  Training Step: 3366...  Training loss: 1.2493...  0.3192 sec/batch
Epoch: 18/20...  Training Step: 3367...  Training loss: 1.3749...  0.3174 sec/batch
...
Epoch: 18/20...  Training Step: 3564...  Training loss: 1.2367...  0.3213 sec/batch
Epoch: 19/20...  Training Step: 3565...  Training loss: 1.3608...  0.3209 sec/batch
...
Epoch: 19/20...  Training Step: 3762...  Training loss: 1.2241...  0.3201 sec/batch
Epoch: 20/20...  Training Step: 3763...  Training loss: 1.3408...  0.3200 sec/batch
...
Epoch: 20/20...  Training Step: 3844...  Training loss: 1.2359...  0.3196 sec/batch
Epoch: 20/20...  Training Step: 3845...  Training loss: 1.2035...  0.3224 sec/batch
Epoch: 20/20...  Training Step: 3846...  Training loss: 1.2207...  0.3184 sec/batch
Epoch: 20/20...  Training Step: 3847...  Training loss: 1.2058...  0.3193 sec/batch
Epoch: 20/20...  Training Step: 3848...  Training loss: 1.2246...  0.3194 sec/batch
Epoch: 20/20...  Training Step: 3849...  Training loss: 1.1986...  0.3200 sec/batch
Epoch: 20/20...  Training Step: 3850...  Training loss: 1.2169...  0.3196 sec/batch
Epoch: 20/20...  Training Step: 3851...  Training loss: 1.1933...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3852...  Training loss: 1.2312...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3853...  Training loss: 1.2044...  0.3251 sec/batch
Epoch: 20/20...  Training Step: 3854...  Training loss: 1.2173...  0.3176 sec/batch
Epoch: 20/20...  Training Step: 3855...  Training loss: 1.2075...  0.3203 sec/batch
Epoch: 20/20...  Training Step: 3856...  Training loss: 1.2074...  0.3203 sec/batch
Epoch: 20/20...  Training Step: 3857...  Training loss: 1.2163...  0.3193 sec/batch
Epoch: 20/20...  Training Step: 3858...  Training loss: 1.2399...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3859...  Training loss: 1.2364...  0.3190 sec/batch
Epoch: 20/20...  Training Step: 3860...  Training loss: 1.2033...  0.3204 sec/batch
Epoch: 20/20...  Training Step: 3861...  Training loss: 1.2053...  0.3207 sec/batch
Epoch: 20/20...  Training Step: 3862...  Training loss: 1.2021...  0.3198 sec/batch
Epoch: 20/20...  Training Step: 3863...  Training loss: 1.2316...  0.3192 sec/batch
Epoch: 20/20...  Training Step: 3864...  Training loss: 1.2203...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3865...  Training loss: 1.2407...  0.3208 sec/batch
Epoch: 20/20...  Training Step: 3866...  Training loss: 1.2193...  0.3195 sec/batch
Epoch: 20/20...  Training Step: 3867...  Training loss: 1.2201...  0.3192 sec/batch
Epoch: 20/20...  Training Step: 3868...  Training loss: 1.2285...  0.3204 sec/batch
Epoch: 20/20...  Training Step: 3869...  Training loss: 1.2238...  0.3192 sec/batch
Epoch: 20/20...  Training Step: 3870...  Training loss: 1.2420...  0.3203 sec/batch
Epoch: 20/20...  Training Step: 3871...  Training loss: 1.2193...  0.3204 sec/batch
Epoch: 20/20...  Training Step: 3872...  Training loss: 1.2430...  0.3229 sec/batch
Epoch: 20/20...  Training Step: 3873...  Training loss: 1.2098...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3874...  Training loss: 1.2215...  0.3206 sec/batch
Epoch: 20/20...  Training Step: 3875...  Training loss: 1.2424...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3876...  Training loss: 1.2205...  0.3201 sec/batch
Epoch: 20/20...  Training Step: 3877...  Training loss: 1.2039...  0.3176 sec/batch
Epoch: 20/20...  Training Step: 3878...  Training loss: 1.1998...  0.3187 sec/batch
Epoch: 20/20...  Training Step: 3879...  Training loss: 1.2318...  0.3196 sec/batch
Epoch: 20/20...  Training Step: 3880...  Training loss: 1.2352...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3881...  Training loss: 1.2255...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3882...  Training loss: 1.2337...  0.3194 sec/batch
Epoch: 20/20...  Training Step: 3883...  Training loss: 1.2317...  0.3198 sec/batch
Epoch: 20/20...  Training Step: 3884...  Training loss: 1.2095...  0.3193 sec/batch
Epoch: 20/20...  Training Step: 3885...  Training loss: 1.1935...  0.3199 sec/batch
Epoch: 20/20...  Training Step: 3886...  Training loss: 1.2240...  0.3199 sec/batch
Epoch: 20/20...  Training Step: 3887...  Training loss: 1.2199...  0.3195 sec/batch
Epoch: 20/20...  Training Step: 3888...  Training loss: 1.1887...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3889...  Training loss: 1.2271...  0.3199 sec/batch
Epoch: 20/20...  Training Step: 3890...  Training loss: 1.2264...  0.3193 sec/batch
Epoch: 20/20...  Training Step: 3891...  Training loss: 1.2238...  0.3199 sec/batch
Epoch: 20/20...  Training Step: 3892...  Training loss: 1.1819...  0.3198 sec/batch
Epoch: 20/20...  Training Step: 3893...  Training loss: 1.1773...  0.3212 sec/batch
Epoch: 20/20...  Training Step: 3894...  Training loss: 1.2171...  0.3203 sec/batch
Epoch: 20/20...  Training Step: 3895...  Training loss: 1.2552...  0.3207 sec/batch
Epoch: 20/20...  Training Step: 3896...  Training loss: 1.2275...  0.3222 sec/batch
Epoch: 20/20...  Training Step: 3897...  Training loss: 1.2378...  0.3206 sec/batch
Epoch: 20/20...  Training Step: 3898...  Training loss: 1.2200...  0.3206 sec/batch
Epoch: 20/20...  Training Step: 3899...  Training loss: 1.2544...  0.3211 sec/batch
Epoch: 20/20...  Training Step: 3900...  Training loss: 1.2421...  0.3253 sec/batch
Epoch: 20/20...  Training Step: 3901...  Training loss: 1.2372...  0.3210 sec/batch
Epoch: 20/20...  Training Step: 3902...  Training loss: 1.2347...  0.3192 sec/batch
Epoch: 20/20...  Training Step: 3903...  Training loss: 1.2689...  0.3185 sec/batch
Epoch: 20/20...  Training Step: 3904...  Training loss: 1.2354...  0.3204 sec/batch
Epoch: 20/20...  Training Step: 3905...  Training loss: 1.2200...  0.3218 sec/batch
Epoch: 20/20...  Training Step: 3906...  Training loss: 1.2565...  0.3200 sec/batch
Epoch: 20/20...  Training Step: 3907...  Training loss: 1.2061...  0.3183 sec/batch
Epoch: 20/20...  Training Step: 3908...  Training loss: 1.2463...  0.3192 sec/batch
Epoch: 20/20...  Training Step: 3909...  Training loss: 1.2383...  0.3198 sec/batch
Epoch: 20/20...  Training Step: 3910...  Training loss: 1.2538...  0.3219 sec/batch
Epoch: 20/20...  Training Step: 3911...  Training loss: 1.2539...  0.3200 sec/batch
Epoch: 20/20...  Training Step: 3912...  Training loss: 1.2241...  0.3196 sec/batch
Epoch: 20/20...  Training Step: 3913...  Training loss: 1.1850...  0.3200 sec/batch
Epoch: 20/20...  Training Step: 3914...  Training loss: 1.2113...  0.3194 sec/batch
Epoch: 20/20...  Training Step: 3915...  Training loss: 1.2398...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3916...  Training loss: 1.2274...  0.3196 sec/batch
Epoch: 20/20...  Training Step: 3917...  Training loss: 1.2094...  0.3192 sec/batch
Epoch: 20/20...  Training Step: 3918...  Training loss: 1.2239...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3919...  Training loss: 1.2332...  0.3190 sec/batch
Epoch: 20/20...  Training Step: 3920...  Training loss: 1.2212...  0.3198 sec/batch
Epoch: 20/20...  Training Step: 3921...  Training loss: 1.1954...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3922...  Training loss: 1.2469...  0.3213 sec/batch
Epoch: 20/20...  Training Step: 3923...  Training loss: 1.2598...  0.3191 sec/batch
Epoch: 20/20...  Training Step: 3924...  Training loss: 1.2368...  0.3196 sec/batch
Epoch: 20/20...  Training Step: 3925...  Training loss: 1.2143...  0.3206 sec/batch
Epoch: 20/20...  Training Step: 3926...  Training loss: 1.2306...  0.3200 sec/batch
Epoch: 20/20...  Training Step: 3927...  Training loss: 1.2257...  0.3193 sec/batch
Epoch: 20/20...  Training Step: 3928...  Training loss: 1.2214...  0.3189 sec/batch
Epoch: 20/20...  Training Step: 3929...  Training loss: 1.2514...  0.3173 sec/batch
Epoch: 20/20...  Training Step: 3930...  Training loss: 1.2826...  0.3206 sec/batch
Epoch: 20/20...  Training Step: 3931...  Training loss: 1.2519...  0.3195 sec/batch
Epoch: 20/20...  Training Step: 3932...  Training loss: 1.2328...  0.3196 sec/batch
Epoch: 20/20...  Training Step: 3933...  Training loss: 1.2222...  0.3182 sec/batch
Epoch: 20/20...  Training Step: 3934...  Training loss: 1.2178...  0.3201 sec/batch
Epoch: 20/20...  Training Step: 3935...  Training loss: 1.2530...  0.3213 sec/batch
Epoch: 20/20...  Training Step: 3936...  Training loss: 1.2372...  0.3206 sec/batch
Epoch: 20/20...  Training Step: 3937...  Training loss: 1.2347...  0.3210 sec/batch
Epoch: 20/20...  Training Step: 3938...  Training loss: 1.2021...  0.3210 sec/batch
Epoch: 20/20...  Training Step: 3939...  Training loss: 1.2200...  0.3189 sec/batch
Epoch: 20/20...  Training Step: 3940...  Training loss: 1.2661...  0.3203 sec/batch
Epoch: 20/20...  Training Step: 3941...  Training loss: 1.2109...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3942...  Training loss: 1.2055...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3943...  Training loss: 1.2183...  0.3199 sec/batch
Epoch: 20/20...  Training Step: 3944...  Training loss: 1.2198...  0.3199 sec/batch
Epoch: 20/20...  Training Step: 3945...  Training loss: 1.2191...  0.3207 sec/batch
Epoch: 20/20...  Training Step: 3946...  Training loss: 1.2218...  0.3198 sec/batch
Epoch: 20/20...  Training Step: 3947...  Training loss: 1.2204...  0.3188 sec/batch
Epoch: 20/20...  Training Step: 3948...  Training loss: 1.2045...  0.3182 sec/batch
Epoch: 20/20...  Training Step: 3949...  Training loss: 1.2558...  0.3182 sec/batch
Epoch: 20/20...  Training Step: 3950...  Training loss: 1.2174...  0.3203 sec/batch
Epoch: 20/20...  Training Step: 3951...  Training loss: 1.2181...  0.3193 sec/batch
Epoch: 20/20...  Training Step: 3952...  Training loss: 1.2241...  0.3195 sec/batch
Epoch: 20/20...  Training Step: 3953...  Training loss: 1.2031...  0.3190 sec/batch
Epoch: 20/20...  Training Step: 3954...  Training loss: 1.2173...  0.3191 sec/batch
Epoch: 20/20...  Training Step: 3955...  Training loss: 1.2271...  0.3206 sec/batch
Epoch: 20/20...  Training Step: 3956...  Training loss: 1.2100...  0.3197 sec/batch
Epoch: 20/20...  Training Step: 3957...  Training loss: 1.1931...  0.3205 sec/batch
Epoch: 20/20...  Training Step: 3958...  Training loss: 1.2240...  0.3210 sec/batch
Epoch: 20/20...  Training Step: 3959...  Training loss: 1.2201...  0.3202 sec/batch
Epoch: 20/20...  Training Step: 3960...  Training loss: 1.2115...  0.3198 sec/batch

Saved checkpoints

Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
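
This is just TensorFlow 1.x's tf.train.Saver API. As a minimal sketch (assuming the graph has already been built, as it is inside the sample function below), restoring the most recently saved weights looks like:

saver = tf.train.Saver()
with tf.Session() as sess:
    # Load the most recently saved weights into this session
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))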


In [18]:
tf.train.get_checkpoint_state('checkpoints')


Out[18]:
model_checkpoint_path: "checkpoints/i3960_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1000_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i1800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2000_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i2800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3000_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3800_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i3960_l512.ckpt"

Sampling

Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. We then feed that new character back in to predict the one after it, and keep repeating the process to generate as much new text as we like. I also included some functionality to prime the network by passing in a string first and building up a hidden state from it.

The network gives us a probability for every character in the vocabulary. To reduce noise and make the output a little less random, I'm going to choose the next character only from the top N most likely characters. With top_n=5, for example, all but the five most probable characters get zero probability and the rest are renormalized before sampling.


In [19]:
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    # Zero out all but the top_n most likely characters
    p[np.argsort(p)[:-top_n]] = 0
    # Renormalize the remaining probabilities so they sum to 1
    p = p / np.sum(p)
    # Sample one character index from the reduced distribution
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
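
As a quick sanity check (not a cell from the original notebook, just an illustration with a made-up 4-character vocabulary), only the two most probable indices should ever come back when top_n=2:

toy_preds = np.array([[0.05, 0.55, 0.30, 0.10]])
print([pick_top_n(toy_preds, vocab_size=4, top_n=2) for _ in range(10)])
# Only indices 1 and 2 (the two largest probabilities) can be sampled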

In [20]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    # Build the network in sampling mode so we can feed one character at a time
    model = CharRNN(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Load the trained weights from the checkpoint
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        # Prime the network: run the prime string through it to build up state
        for c in prime:
            x = np.zeros((1, 1))
            x[0, 0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

        # Sample the first new character from the predictions for the prime
        c = pick_top_n(preds, vocab_size)
        samples.append(int_to_vocab[c])

        # Feed each sampled character back in to get the one after it
        for i in range(n_samples):
            x[0, 0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

            c = pick_top_n(preds, vocab_size)
            samples.append(int_to_vocab[c])

    return ''.join(samples)
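
One design note: building the CharRNN with sampling=True lets us run a single character at a time (that's why the input x is a 1x1 array) while restoring the same trained weights from the checkpoint, and we carry the LSTM state forward by hand between session calls.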

Here, we pass in the path to a checkpoint and sample from the network.


In [21]:
tf.train.latest_checkpoint('checkpoints')


Out[21]:
'checkpoints/i3960_l512.ckpt'

In [22]:
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)


Fartity, they should go, and that he cared to talk to him, he had
beet at a people who had bought herself only to him the staincies
of her. He had an offer of the candles were simply stranging in the
same three face. Though those words of her mother, who had sent
the children, and all the courtion of a carriage, that had a sterching he
had to be struck this was to be considered it and take a thousands about
that. She saw in any one with a mushroom in this words all the were there
and all at the table and the conversation was still any dark another at
the position, who would be thinkings, but he could shoot her to be a
suncases.

"Yes, it's, that's taking my crowds and talk again," the colonel of
the marsh were as thougan to care to see him; he said that he had
natesated this. And such as he was not seeing them. And she was as how
he was too stared. He was angry, and he will be no course about the
sense of her passion.

At faring the point to this terrible second towards he had seen
a moment, that the sight there was no dead, and the peasants took the
secretary as she went out of the following that who saw it.

Alexey Alexandrovitch would have seen being butter oft, and he cried
to her, he was a lady and too he could not be all of the place, he was
to be sone of his hand, and as though she came into how they were
a man who had no repeating in the second same in the stude.

"I won't because you thret man to make a time to see it all to him, I don't
know, and that's to be in a conversation," he thought.

"Why, I don't know why, I don't want to go out of her.


Conter too like them?" she said, smiling, stood at the most important face
that she was in hostest that he could not be any will be she forgotten
him to be the face well, and how he was as the same signs of the carriage.
She would not have recalled her, and had said.

"I am not interested in her tone," said Stepan Arkadyevitch.

"You don't know that the peasant has so many and all of it."

"Yes, I won't allew as that

In [23]:
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Farde.,
 I tilt hoe sore tha hesersod thas sher woun on tor to tang ant on the wot ather at thos
erang at os hee war teat he ar and tor ans san had wos at he thes wel tor an at orer seesso son asd sis ald the he tor he shire tis teere alde sot he tore ho the tha son te wh th the sirite, andere sot whe son ares ont the an oton as se tit the sh ans as and on ha te th teser ond tote the wer ha h erese ant hes arde tat te thee sore tins
the sis ate tit os the ho ansesed the whes
os hor asid ales and, ad wot an the son hil sins oot thered, to te se tot hor sha wer had
eongong an he so tot her than sos the ang whin her and wat hit he ansis an terered ath at hithin wan ot ting the hos an tins ant at hins sil tat hee sor sith th te hed ator tinsit he wites
he tand has he to he whis son whased
te ton te thas sas tou tot te thar, an os hiss ot th at so hh anger hite he sat he sand
innge ant at the tans sesand ato the ha whe ses ale tan ong hat hers and the sered ang the win an ang otir are to hant h

In [24]:
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Farnt at he shadd
all had the sere ou a chongisely, and she sood te conting to her the
poscorse with his seand has bag her to anco and sald to he
said to the stere the coldented to a meroot.
There wish as hid the chonte of the
stire. To caner of anthor and the pastions
of his wist her war a prompen a doontenss, as
her ther
was he whith hin har as though at ter intoust and and to the semple sain had to be and sort and was to her he shad to her.

"I said har eveny," sead
the some.".

"Yas seed and he was,
but to hin had all say, thic shile the mers im. Ald the welt al where he
said the here on there were treent an a the heares of hor
suppaned that him at he
cherted in of the
caspiless. To she was not on his were him, the saming all were his
shacked and her wese thay the
wers and ther that the her sood
seed. There to her sour the sere were.

"Te, you wind here word. The ceally, and as if the rould and tither, there aseened the morse on tely
sulling his and the postens, the soiging
assed. He h

In [25]:
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Farth had and their
tries was not intellest to be
his hand, shill astered and sont her hands.
"He has told you worl how he was not to be a same of it? And he wanted out into
still their word,
thinker would not be in one a dees of stick one the tame and this have been
to see you've still the soun and that the
mestions to he would he with the stepl then
work to a little
man that was that that how he was a most and the preceat who should be the merold that in though so to see the moshed," teord their sente in the simply of a thought, and the most she had
so the highes of hus own one to their
the than the started to
say think of the
solest into say to
and the
pritict, was not so some all the pointing and the sempersains at the chost treas,
he was stanting to say, and the mistre asterstand of his cancess with her all the some time that it welk on the sumple of the paces of his cheatal, and this aspense of horring, while her such trating, and she went on this seemed and wince and with his while,