Anna KaRNNa

In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch. I also drew on information from r2rt and from Sherjil Ozair's implementation on GitHub. Below is the general architecture of the character-wise RNN.


In [34]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple of dictionaries to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.


In [35]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

Let's check out the first 100 characters and make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.


In [36]:
text[:100]


Out[36]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'

And we can see the characters encoded as integers.


In [37]:
encoded[:100]


Out[37]:
array([68,  4,  1, 51, 78, 59, 49, 40,  5, 21, 21, 21, 15,  1, 51, 51,  9,
       40, 53,  1, 36, 27, 26, 27, 59, 54, 40,  1, 49, 59, 40,  1, 26, 26,
       40,  1, 26, 27, 28, 59, 46, 40, 59, 82, 59, 49,  9, 40,  2, 25,  4,
        1, 51, 51,  9, 40, 53,  1, 36, 27, 26,  9, 40, 27, 54, 40,  2, 25,
        4,  1, 51, 51,  9, 40, 27, 25, 40, 27, 78, 54, 40, 37, 30, 25, 21,
       30,  1,  9, 45, 21, 21, 58, 82, 59, 49,  9, 78,  4, 27, 25])

Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.


In [38]:
len(vocab)


Out[38]:
83

Making training mini-batches

Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:


We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch ($N \times M$). Once you know the number of batches and the number of characters per batch, you can get the total number of characters to keep.

After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder and NumPy will fill in that dimension for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.

Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:

y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

where x is the input batch and y is the target batch.

The way I like to do this window is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
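To make this concrete, here's a quick NumPy sketch with a tiny made-up array (the sizes are toy values, not the ones we'll actually train with) that walks through the trimming, the reshape, and the target shift:

import numpy as np

# Toy example: 20 "characters", batch size N=2, sequence length M=3
arr = np.arange(20)
n_seqs, n_steps = 2, 3

characters_per_batch = n_seqs * n_steps       # 6
n_batches = len(arr) // characters_per_batch  # 3 full batches
arr = arr[:n_batches * characters_per_batch]  # keep the first 18 characters
arr = arr.reshape((n_seqs, -1))               # shape (2, 9)

# First N x M window and its targets, shifted over by one
x = arr[:, 0:n_steps]
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
print(x)  # [[0, 1, 2], [9, 10, 11]]
print(y)  # [[1, 2, 0], [10, 11, 9]]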


In [39]:
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.
       
       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr)//characters_per_batch
    
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]
    
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))
    
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # print("x: ", x)
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        # print("y:",  y)
        yield x, y

Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.


In [40]:
batches = get_batches(encoded, 10, 50)
x, y = next(batches)

In [41]:
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])


x
 [[68  4  1 51 78 59 49 40  5 21]
 [40  1 36 40 25 37 78 40 24 37]
 [82 27 25 45 21 21 34 12 59 54]
 [25 40 42  2 49 27 25 24 40  4]
 [40 27 78 40 27 54 39 40 54 27]
 [40 32 78 40 30  1 54 21 37 25]
 [ 4 59 25 40 79 37 36 59 40 53]
 [46 40  6  2 78 40 25 37 30 40]
 [78 40 27 54 25 20 78 45 40 33]
 [40 54  1 27 42 40 78 37 40  4]]

y
 [[ 4  1 51 78 59 49 40  5 21 21]
 [ 1 36 40 25 37 78 40 24 37 27]
 [27 25 45 21 21 34 12 59 54 39]
 [40 42  2 49 27 25 24 40  4 27]
 [27 78 40 27 54 39 40 54 27 49]
 [32 78 40 30  1 54 21 37 25 26]
 [59 25 40 79 37 36 59 40 53 37]
 [40  6  2 78 40 25 37 30 40 54]
 [40 27 54 25 20 78 45 40 33  4]
 [54  1 27 42 40 78 37 40  4 59]]

If you implemented get_batches correctly, the above output should look something like

x
 [[55 63 69 22  6 76 45  5 16 35]
 [ 5 69  1  5 12 52  6  5 56 52]
 [48 29 12 61 35 35  8 64 76 78]
 [12  5 24 39 45 29 12 56  5 63]
 [ 5 29  6  5 29 78 28  5 78 29]
 [ 5 13  6  5 36 69 78 35 52 12]
 [63 76 12  5 18 52  1 76  5 58]
 [34  5 73 39  6  5 12 52 36  5]
 [ 6  5 29 78 12 79  6 61  5 59]
 [ 5 78 69 29 24  5  6 52  5 63]]

y
 [[63 69 22  6 76 45  5 16 35 35]
 [69  1  5 12 52  6  5 56 52 29]
 [29 12 61 35 35  8 64 76 78 28]
 [ 5 24 39 45 29 12 56  5 63 29]
 [29  6  5 29 78 28  5 78 29 45]
 [13  6  5 36 69 78 35 52 12 43]
 [76 12  5 18 52  1 76  5 58 52]
 [ 5 73 39  6  5 12 52 36  5 78]
 [ 5 29 78 12 79  6 61  5 59 63]
 [78 69 29 24  5  6 52  5 63 76]]

although the exact numbers will be different. Check to make sure the data is shifted over one step for y.

Building the model

Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.

Inputs

First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.


In [42]:
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout 
    
        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
        
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    
    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    
    return inputs, targets, keep_prob

LSTM Cell

Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

lstm = tf.contrib.rnn.BasicLSTMCell(num_units)

where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with

tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this

tf.contrib.rnn.MultiRNNCell([cell]*num_layers)

This might look a little weird if you know Python well, because it creates a list containing the same cell object multiple times. However, TensorFlow 1.0 would still create different weight matrices for each layer. Starting with TensorFlow 1.1, you actually need to create a new cell object for each layer in the list. To get it to work in TensorFlow 1.1, it should look like

def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])

Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so

initial_state = cell.zero_state(batch_size, tf.float32)

Below, we implement the build_lstm function to create these LSTM cells and the initial state.


In [43]:
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.
    
        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size

    '''
    ### Build the LSTM Cell
    
    def build_cell(lstm_size, keep_prob):
        # Use a basic LSTM cell
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        
        # Add dropout to the cell
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop
    
    
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
    
    return cell, initial_state

RNN Output

Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$; we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
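As a quick shape-only sanity check (using made-up toy sizes and NumPy rather than the actual network), the reshape and matrix multiplication look like this:

import numpy as np

N, M, L = 2, 3, 4                       # toy batch size, steps, and LSTM size
out_size = 5                            # toy "vocabulary" size
lstm_output = np.random.rand(N, M, L)   # stand-in for the RNN output tensor

# One row per character position, L columns
flat = lstm_output.reshape(-1, L)
print(flat.shape)        # (6, 4), i.e. (N*M, L)

# Multiplying by the softmax weights gives one row of logits per position
softmax_w = np.random.rand(L, out_size)
softmax_b = np.zeros(out_size)
logits = flat @ softmax_w + softmax_b
print(logits.shape)      # (6, 5), i.e. (N*M, out_size)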

Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.


In [44]:
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.
    
        Arguments
        ---------
        
        lstm_output: Output tensor from the LSTM layer
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    
    '''
    
    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # That is, the shape should be batch_size*num_steps rows by lstm_size columns
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])
    
    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    
    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b
    
    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name='predictions')
    
    return out, logits

Training loss

Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.

Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
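Here's a small NumPy sketch of just the shape bookkeeping (toy sizes, with np.eye standing in for tf.one_hot):

import numpy as np

N, M, C = 2, 3, 5                          # toy batch size, steps, number of classes
targets = np.random.randint(0, C, size=(N, M))

# One-hot encode: (N, M) -> (N, M, C), then reshape to match the logits
y_one_hot = np.eye(C)[targets]
y_reshaped = y_one_hot.reshape(-1, C)
print(y_one_hot.shape, y_reshaped.shape)   # (2, 3, 5) (6, 5)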


In [45]:
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
    
        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
        
    '''
    # Sanity-check the shapes: logits are (batch_size*num_steps, num_classes),
    # targets are (batch_size, num_steps)
    print(logits.shape, targets.shape, lstm_size, num_classes)  # (10000, 83) (100, 100) 512 83
    
    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, num_classes)
    print(y_one_hot.shape)  # (100, 100, 83)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    print(y_reshaped.shape)  # (10000, 83)
    
    # Softmax cross entropy loss, averaged over every character in the batch
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    return loss

Optimizer

Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we clip the gradients above some threshold: if the global norm of the gradients is larger than the threshold, they are all scaled down so that their combined norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
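Here's a tiny NumPy sketch of what global-norm clipping does; the actual code below relies on tf.clip_by_global_norm, this is just the idea with toy numbers:

import numpy as np

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # toy gradients from two variables
grad_clip = 5.0

# Global norm over all gradients: sqrt(9 + 16 + 144) = 13
global_norm = np.sqrt(sum(np.sum(g**2) for g in grads))
scale = min(1.0, grad_clip / global_norm)
clipped = [g * scale for g in grads]
print(global_norm, [c.round(2).tolist() for c in clipped])
# 13.0 [[1.15, 1.54], [4.62]] -- the combined norm is now exactly 5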


In [46]:
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optmizer for training, using gradient clipping.
    
        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold for clipping the global norm of the gradients
    
    '''
    
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    return optimizer

Build the network

Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before they go into the RNN.


In [47]:
class CharRNN:
    
    def __init__(self, num_classes, batch_size=64, num_steps=50, 
                       lstm_size=128, num_layers=2, learning_rate=0.001, 
                       grad_clip=5, sampling=False):
    
        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()
        
        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        
        # Run each sequence step through the RNN and collect the outputs
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state
        
        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        
        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)

Hyperparameters

Here I'm defining the hyperparameters for the network.

  • batch_size - Number of sequences running through the network in one pass.
  • num_steps - Number of characters in the sequence the network is trained on. Typically larger is better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
  • lstm_size - The number of units in the hidden layers.
  • num_layers - Number of hidden LSTM layers to use
  • learning_rate - Learning rate for training
  • keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:

  • If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
  • If your training/validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer)

Approximate number of parameters

The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

  • The number of parameters in your model. This is printed when you start training.
  • The size of your dataset. A 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

  • I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
  • I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
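For the settings used later in this notebook (lstm_size=512, num_layers=2, 83 character classes), here's a back-of-the-envelope parameter count. It's based on how BasicLSTMCell sizes its weight matrix, so treat it as an approximation rather than the exact number TensorFlow reports:

def lstm_params(input_size, lstm_size):
    # BasicLSTMCell: one kernel of shape (input_size + lstm_size, 4*lstm_size) plus 4*lstm_size biases
    return (input_size + lstm_size + 1) * 4 * lstm_size

vocab_size, lstm_size = 83, 512
layer1 = lstm_params(vocab_size, lstm_size)     # one-hot input into the first layer
layer2 = lstm_params(lstm_size, lstm_size)      # first layer into the second layer
softmax = lstm_size * vocab_size + vocab_size   # output layer weights + biases
print(layer1 + layer2 + softmax)                # about 3.4 million parameters

With Anna Karenina at roughly 2 million characters, that puts the model and the data on the same order of magnitude, which is about where Karpathy suggests they should be.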

Best models strategy

The winning strategy to obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.


In [48]:
batch_size = 100        # Sequences per batch
num_steps = 100         # Number of sequence steps per batch
lstm_size = 512         # Size of hidden layers in LSTMs
num_layers = 2          # Number of LSTM layers
learning_rate = 0.001   # Learning rate
keep_prob = 0.5         # Dropout keep probability

Time for training

This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.

Here I'm saving checkpoints with the format

i{iteration number}_l{# hidden layer units}.ckpt


In [49]:
epochs = 20
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers, 
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss, 
                                                 model.final_state, 
                                                 model.optimizer], 
                                                 feed_dict=feed)
            
            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))
        
            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))


(10000, 83) (100, 100) 512 83
(100, 100, 83)
(10000, 83)
Epoch: 1/20...  Training Step: 1...  Training loss: 4.4178...  10.0954 sec/batch
Epoch: 1/20...  Training Step: 2...  Training loss: 4.3261...  8.9448 sec/batch
Epoch: 1/20...  Training Step: 3...  Training loss: 3.8011...  9.2421 sec/batch
Epoch: 1/20...  Training Step: 4...  Training loss: 5.8763...  8.6986 sec/batch
Epoch: 1/20...  Training Step: 5...  Training loss: 4.1510...  8.7608 sec/batch
Epoch: 1/20...  Training Step: 6...  Training loss: 4.0745...  8.1808 sec/batch
Epoch: 1/20...  Training Step: 7...  Training loss: 3.9357...  8.5888 sec/batch
Epoch: 1/20...  Training Step: 8...  Training loss: 3.7423...  8.0414 sec/batch
Epoch: 1/20...  Training Step: 9...  Training loss: 3.5234...  8.0314 sec/batch
Epoch: 1/20...  Training Step: 10...  Training loss: 3.4406...  7.9837 sec/batch
Epoch: 1/20...  Training Step: 11...  Training loss: 3.3864...  7.9862 sec/batch
Epoch: 1/20...  Training Step: 12...  Training loss: 3.4183...  7.8744 sec/batch
Epoch: 1/20...  Training Step: 13...  Training loss: 3.3960...  8.8661 sec/batch
Epoch: 1/20...  Training Step: 14...  Training loss: 3.3862...  8.9278 sec/batch
Epoch: 1/20...  Training Step: 15...  Training loss: 3.3556...  8.7899 sec/batch
Epoch: 1/20...  Training Step: 16...  Training loss: 3.3317...  8.4440 sec/batch
Epoch: 1/20...  Training Step: 17...  Training loss: 3.3142...  8.5708 sec/batch
Epoch: 1/20...  Training Step: 18...  Training loss: 3.3330...  9.9720 sec/batch
Epoch: 1/20...  Training Step: 19...  Training loss: 3.2941...  8.4354 sec/batch
Epoch: 1/20...  Training Step: 20...  Training loss: 3.2695...  7.9095 sec/batch
Epoch: 1/20...  Training Step: 21...  Training loss: 3.2815...  7.8910 sec/batch
Epoch: 1/20...  Training Step: 22...  Training loss: 3.2721...  8.2550 sec/batch
Epoch: 1/20...  Training Step: 23...  Training loss: 3.2650...  9.1403 sec/batch
Epoch: 1/20...  Training Step: 24...  Training loss: 3.2615...  8.7292 sec/batch
Epoch: 1/20...  Training Step: 25...  Training loss: 3.2524...  8.8616 sec/batch
Epoch: 1/20...  Training Step: 26...  Training loss: 3.2625...  8.8220 sec/batch
Epoch: 1/20...  Training Step: 27...  Training loss: 3.2567...  8.6676 sec/batch
Epoch: 1/20...  Training Step: 28...  Training loss: 3.2261...  8.4730 sec/batch
Epoch: 1/20...  Training Step: 29...  Training loss: 3.2277...  8.9899 sec/batch
Epoch: 1/20...  Training Step: 30...  Training loss: 3.2350...  9.7419 sec/batch
Epoch: 1/20...  Training Step: 31...  Training loss: 3.2428...  8.2690 sec/batch
Epoch: 1/20...  Training Step: 32...  Training loss: 3.2245...  8.2971 sec/batch
Epoch: 1/20...  Training Step: 33...  Training loss: 3.2006...  8.3597 sec/batch
Epoch: 1/20...  Training Step: 34...  Training loss: 3.2297...  8.8495 sec/batch
Epoch: 1/20...  Training Step: 35...  Training loss: 3.1945...  8.0494 sec/batch
Epoch: 1/20...  Training Step: 36...  Training loss: 3.2102...  8.1376 sec/batch
Epoch: 1/20...  Training Step: 37...  Training loss: 3.1882...  8.8515 sec/batch
Epoch: 1/20...  Training Step: 38...  Training loss: 3.1938...  8.6259 sec/batch
Epoch: 1/20...  Training Step: 39...  Training loss: 3.1863...  8.6926 sec/batch
Epoch: 1/20...  Training Step: 40...  Training loss: 3.1819...  8.4234 sec/batch
Epoch: 1/20...  Training Step: 41...  Training loss: 3.1861...  8.8831 sec/batch
Epoch: 1/20...  Training Step: 42...  Training loss: 3.1826...  8.9009 sec/batch
Epoch: 1/20...  Training Step: 43...  Training loss: 3.1758...  8.4169 sec/batch
Epoch: 1/20...  Training Step: 44...  Training loss: 3.1764...  8.2509 sec/batch
Epoch: 1/20...  Training Step: 45...  Training loss: 3.1629...  8.1928 sec/batch
Epoch: 1/20...  Training Step: 46...  Training loss: 3.1765...  8.6666 sec/batch
Epoch: 1/20...  Training Step: 47...  Training loss: 3.1762...  9.2030 sec/batch
Epoch: 1/20...  Training Step: 48...  Training loss: 3.1830...  9.7359 sec/batch
Epoch: 1/20...  Training Step: 49...  Training loss: 3.1740...  8.8932 sec/batch
Epoch: 1/20...  Training Step: 50...  Training loss: 3.1756...  8.2038 sec/batch
Epoch: 1/20...  Training Step: 51...  Training loss: 3.1726...  8.1126 sec/batch
Epoch: 1/20...  Training Step: 52...  Training loss: 3.1590...  8.0604 sec/batch
Epoch: 1/20...  Training Step: 53...  Training loss: 3.1705...  8.1181 sec/batch
Epoch: 1/20...  Training Step: 54...  Training loss: 3.1483...  8.4149 sec/batch
Epoch: 1/20...  Training Step: 55...  Training loss: 3.1625...  9.9074 sec/batch
Epoch: 1/20...  Training Step: 56...  Training loss: 3.1428...  9.3930 sec/batch
Epoch: 1/20...  Training Step: 57...  Training loss: 3.1557...  10.6203 sec/batch
Epoch: 1/20...  Training Step: 58...  Training loss: 3.1505...  8.9298 sec/batch
Epoch: 1/20...  Training Step: 59...  Training loss: 3.1443...  8.0940 sec/batch
Epoch: 1/20...  Training Step: 60...  Training loss: 3.1537...  8.0855 sec/batch
Epoch: 1/20...  Training Step: 61...  Training loss: 3.1497...  8.1056 sec/batch
Epoch: 1/20...  Training Step: 62...  Training loss: 3.1638...  8.0790 sec/batch
Epoch: 1/20...  Training Step: 63...  Training loss: 3.1731...  9.0511 sec/batch
Epoch: 1/20...  Training Step: 64...  Training loss: 3.1271...  8.1291 sec/batch
Epoch: 1/20...  Training Step: 65...  Training loss: 3.1386...  8.0469 sec/batch
Epoch: 1/20...  Training Step: 66...  Training loss: 3.1611...  8.1507 sec/batch
Epoch: 1/20...  Training Step: 67...  Training loss: 3.1507...  8.0655 sec/batch
Epoch: 1/20...  Training Step: 68...  Training loss: 3.1086...  8.0810 sec/batch
Epoch: 1/20...  Training Step: 69...  Training loss: 3.1296...  8.2180 sec/batch
Epoch: 1/20...  Training Step: 70...  Training loss: 3.1446...  8.1286 sec/batch
Epoch: 1/20...  Training Step: 71...  Training loss: 3.1358...  8.1351 sec/batch
Epoch: 1/20...  Training Step: 72...  Training loss: 3.1569...  8.0479 sec/batch
Epoch: 1/20...  Training Step: 73...  Training loss: 3.1297...  8.1993 sec/batch
Epoch: 1/20...  Training Step: 74...  Training loss: 3.1368...  8.1211 sec/batch
Epoch: 1/20...  Training Step: 75...  Training loss: 3.1422...  8.2545 sec/batch
Epoch: 1/20...  Training Step: 76...  Training loss: 3.1455...  10.4498 sec/batch
Epoch: 1/20...  Training Step: 77...  Training loss: 3.1381...  9.9004 sec/batch
Epoch: 1/20...  Training Step: 78...  Training loss: 3.1304...  9.8001 sec/batch
Epoch: 1/20...  Training Step: 79...  Training loss: 3.1274...  9.3850 sec/batch
Epoch: 1/20...  Training Step: 80...  Training loss: 3.1080...  9.1513 sec/batch
Epoch: 1/20...  Training Step: 81...  Training loss: 3.1148...  8.4379 sec/batch
Epoch: 1/20...  Training Step: 82...  Training loss: 3.1347...  8.3417 sec/batch
Epoch: 1/20...  Training Step: 83...  Training loss: 3.1366...  8.3231 sec/batch
Epoch: 1/20...  Training Step: 84...  Training loss: 3.1205...  8.7503 sec/batch
Epoch: 1/20...  Training Step: 85...  Training loss: 3.1015...  8.6787 sec/batch
Epoch: 1/20...  Training Step: 86...  Training loss: 3.1217...  8.2028 sec/batch
Epoch: 1/20...  Training Step: 87...  Training loss: 3.1090...  8.2068 sec/batch
Epoch: 1/20...  Training Step: 88...  Training loss: 3.1120...  8.7894 sec/batch
Epoch: 1/20...  Training Step: 89...  Training loss: 3.1210...  8.0860 sec/batch
Epoch: 1/20...  Training Step: 90...  Training loss: 3.1199...  8.1031 sec/batch
Epoch: 1/20...  Training Step: 91...  Training loss: 3.1172...  8.1361 sec/batch
Epoch: 1/20...  Training Step: 92...  Training loss: 3.1060...  8.1687 sec/batch
Epoch: 1/20...  Training Step: 93...  Training loss: 3.1104...  8.1306 sec/batch
Epoch: 1/20...  Training Step: 94...  Training loss: 3.1113...  8.1326 sec/batch
Epoch: 1/20...  Training Step: 95...  Training loss: 3.1004...  8.4023 sec/batch
Epoch: 1/20...  Training Step: 96...  Training loss: 3.0960...  9.9916 sec/batch
Epoch: 1/20...  Training Step: 97...  Training loss: 3.1025...  8.8165 sec/batch
Epoch: 1/20...  Training Step: 98...  Training loss: 3.0923...  8.1823 sec/batch
Epoch: 1/20...  Training Step: 99...  Training loss: 3.0957...  8.1321 sec/batch
Epoch: 1/20...  Training Step: 100...  Training loss: 3.0811...  8.1848 sec/batch
Epoch: 1/20...  Training Step: 101...  Training loss: 3.0885...  8.5066 sec/batch
Epoch: 1/20...  Training Step: 102...  Training loss: 3.0869...  8.5593 sec/batch
Epoch: 1/20...  Training Step: 103...  Training loss: 3.0878...  9.8677 sec/batch
Epoch: 1/20...  Training Step: 104...  Training loss: 3.0722...  9.6186 sec/batch
Epoch: 1/20...  Training Step: 105...  Training loss: 3.0773...  9.1152 sec/batch
Epoch: 1/20...  Training Step: 106...  Training loss: 3.0710...  9.0362 sec/batch
Epoch: 1/20...  Training Step: 107...  Training loss: 3.0531...  9.8276 sec/batch
Epoch: 1/20...  Training Step: 108...  Training loss: 3.0560...  9.5596 sec/batch
Epoch: 1/20...  Training Step: 109...  Training loss: 3.0668...  10.6022 sec/batch
Epoch: 1/20...  Training Step: 110...  Training loss: 3.0292...  9.1047 sec/batch
Epoch: 1/20...  Training Step: 111...  Training loss: 3.0430...  9.0210 sec/batch
Epoch: 1/20...  Training Step: 112...  Training loss: 3.0508...  9.5895 sec/batch
Epoch: 1/20...  Training Step: 113...  Training loss: 3.0363...  9.8898 sec/batch
Epoch: 1/20...  Training Step: 114...  Training loss: 3.0130...  8.2850 sec/batch
Epoch: 1/20...  Training Step: 115...  Training loss: 3.0151...  8.1281 sec/batch
Epoch: 1/20...  Training Step: 116...  Training loss: 3.0624...  8.0810 sec/batch
Epoch: 1/20...  Training Step: 117...  Training loss: 3.0279...  8.1587 sec/batch
Epoch: 1/20...  Training Step: 118...  Training loss: 3.0372...  8.7267 sec/batch
Epoch: 1/20...  Training Step: 119...  Training loss: 3.0468...  9.9801 sec/batch
Epoch: 1/20...  Training Step: 120...  Training loss: 3.0070...  8.8581 sec/batch
Epoch: 1/20...  Training Step: 121...  Training loss: 3.0419...  8.9353 sec/batch
Epoch: 1/20...  Training Step: 122...  Training loss: 3.0233...  8.2585 sec/batch
Epoch: 1/20...  Training Step: 123...  Training loss: 3.0052...  8.2118 sec/batch
Epoch: 1/20...  Training Step: 124...  Training loss: 3.0132...  8.0850 sec/batch
Epoch: 1/20...  Training Step: 125...  Training loss: 2.9844...  8.0665 sec/batch
Epoch: 1/20...  Training Step: 126...  Training loss: 2.9546...  9.1002 sec/batch
Epoch: 1/20...  Training Step: 127...  Training loss: 2.9777...  8.1391 sec/batch
Epoch: 1/20...  Training Step: 128...  Training loss: 2.9793...  7.9326 sec/batch
Epoch: 1/20...  Training Step: 129...  Training loss: 2.9429...  7.9146 sec/batch
Epoch: 1/20...  Training Step: 130...  Training loss: 2.9450...  8.0995 sec/batch
Epoch: 1/20...  Training Step: 131...  Training loss: 2.9507...  8.1387 sec/batch
Epoch: 1/20...  Training Step: 132...  Training loss: 2.9110...  8.0743 sec/batch
Epoch: 1/20...  Training Step: 133...  Training loss: 2.9203...  7.9527 sec/batch
Epoch: 1/20...  Training Step: 134...  Training loss: 2.9009...  8.0133 sec/batch
Epoch: 1/20...  Training Step: 135...  Training loss: 2.8736...  8.0183 sec/batch
Epoch: 1/20...  Training Step: 136...  Training loss: 2.8711...  7.9842 sec/batch
Epoch: 1/20...  Training Step: 137...  Training loss: 2.8784...  8.0740 sec/batch
Epoch: 1/20...  Training Step: 138...  Training loss: 2.8660...  8.0890 sec/batch
Epoch: 1/20...  Training Step: 139...  Training loss: 2.8852...  8.0053 sec/batch
Epoch: 1/20...  Training Step: 140...  Training loss: 2.8600...  7.9421 sec/batch
Epoch: 1/20...  Training Step: 141...  Training loss: 2.8693...  7.9085 sec/batch
Epoch: 1/20...  Training Step: 142...  Training loss: 2.8299...  8.0765 sec/batch
Epoch: 1/20...  Training Step: 143...  Training loss: 2.8444...  9.9916 sec/batch
Epoch: 1/20...  Training Step: 144...  Training loss: 2.8274...  9.2219 sec/batch
Epoch: 1/20...  Training Step: 145...  Training loss: 2.8359...  9.0606 sec/batch
Epoch: 1/20...  Training Step: 146...  Training loss: 2.8282...  8.4796 sec/batch
Epoch: 1/20...  Training Step: 147...  Training loss: 2.8325...  9.8883 sec/batch
Epoch: 1/20...  Training Step: 148...  Training loss: 2.8301...  8.9638 sec/batch
Epoch: 1/20...  Training Step: 149...  Training loss: 2.8006...  9.5289 sec/batch
Epoch: 1/20...  Training Step: 150...  Training loss: 2.7867...  8.6585 sec/batch
Epoch: 1/20...  Training Step: 151...  Training loss: 2.8380...  8.0655 sec/batch
Epoch: 1/20...  Training Step: 152...  Training loss: 2.8172...  8.0980 sec/batch
Epoch: 1/20...  Training Step: 153...  Training loss: 2.7886...  8.1427 sec/batch
Epoch: 1/20...  Training Step: 154...  Training loss: 2.7753...  8.0018 sec/batch
Epoch: 1/20...  Training Step: 155...  Training loss: 2.7605...  8.0840 sec/batch
Epoch: 1/20...  Training Step: 156...  Training loss: 2.7562...  9.1935 sec/batch
Epoch: 1/20...  Training Step: 157...  Training loss: 2.7294...  8.9398 sec/batch
Epoch: 1/20...  Training Step: 158...  Training loss: 2.7265...  8.6435 sec/batch
Epoch: 1/20...  Training Step: 159...  Training loss: 2.6959...  9.6261 sec/batch
Epoch: 1/20...  Training Step: 160...  Training loss: 2.7242...  9.6421 sec/batch
Epoch: 1/20...  Training Step: 161...  Training loss: 2.7069...  9.5995 sec/batch
Epoch: 1/20...  Training Step: 162...  Training loss: 2.6690...  9.2656 sec/batch
Epoch: 1/20...  Training Step: 163...  Training loss: 2.6632...  8.8225 sec/batch
Epoch: 1/20...  Training Step: 164...  Training loss: 2.6853...  9.6467 sec/batch
Epoch: 1/20...  Training Step: 165...  Training loss: 2.7191...  9.7910 sec/batch
Epoch: 1/20...  Training Step: 166...  Training loss: 2.7105...  8.8500 sec/batch
Epoch: 1/20...  Training Step: 167...  Training loss: 2.6986...  8.7122 sec/batch
Epoch: 1/20...  Training Step: 168...  Training loss: 2.6642...  9.8301 sec/batch
Epoch: 1/20...  Training Step: 169...  Training loss: 2.6835...  9.6888 sec/batch
Epoch: 1/20...  Training Step: 170...  Training loss: 2.6420...  9.2611 sec/batch
Epoch: 1/20...  Training Step: 171...  Training loss: 2.6512...  9.5700 sec/batch
Epoch: 1/20...  Training Step: 172...  Training loss: 2.6756...  8.9981 sec/batch
Epoch: 1/20...  Training Step: 173...  Training loss: 2.6824...  7.7245 sec/batch
Epoch: 1/20...  Training Step: 174...  Training loss: 2.6626...  7.6403 sec/batch
Epoch: 1/20...  Training Step: 175...  Training loss: 2.7261...  8.1958 sec/batch
Epoch: 1/20...  Training Step: 176...  Training loss: 2.6812...  8.2875 sec/batch
Epoch: 1/20...  Training Step: 177...  Training loss: 2.6125...  7.6493 sec/batch
Epoch: 1/20...  Training Step: 178...  Training loss: 2.5812...  7.6709 sec/batch
Epoch: 1/20...  Training Step: 179...  Training loss: 2.5797...  7.6037 sec/batch
Epoch: 1/20...  Training Step: 180...  Training loss: 2.5744...  7.6784 sec/batch
Epoch: 1/20...  Training Step: 181...  Training loss: 2.5726...  7.6373 sec/batch
Epoch: 1/20...  Training Step: 182...  Training loss: 2.5789...  7.5952 sec/batch
Epoch: 1/20...  Training Step: 183...  Training loss: 2.5556...  7.6759 sec/batch
Epoch: 1/20...  Training Step: 184...  Training loss: 2.5820...  7.7005 sec/batch
Epoch: 1/20...  Training Step: 185...  Training loss: 2.5935...  7.7155 sec/batch
Epoch: 1/20...  Training Step: 186...  Training loss: 2.5489...  8.2169 sec/batch
Epoch: 1/20...  Training Step: 187...  Training loss: 2.5328...  9.2929 sec/batch
Epoch: 1/20...  Training Step: 188...  Training loss: 2.5218...  8.5790 sec/batch
Epoch: 1/20...  Training Step: 189...  Training loss: 2.5204...  7.8800 sec/batch
Epoch: 1/20...  Training Step: 190...  Training loss: 2.5246...  7.6353 sec/batch
Epoch: 1/20...  Training Step: 191...  Training loss: 2.5277...  7.6624 sec/batch
Epoch: 1/20...  Training Step: 192...  Training loss: 2.4981...  7.6549 sec/batch
Epoch: 1/20...  Training Step: 193...  Training loss: 2.5162...  7.8354 sec/batch
Epoch: 1/20...  Training Step: 194...  Training loss: 2.5051...  7.7451 sec/batch
Epoch: 1/20...  Training Step: 195...  Training loss: 2.5040...  7.7842 sec/batch
Epoch: 1/20...  Training Step: 196...  Training loss: 2.5067...  7.7849 sec/batch
Epoch: 1/20...  Training Step: 197...  Training loss: 2.4851...  7.7271 sec/batch
Epoch: 1/20...  Training Step: 198...  Training loss: 2.4851...  7.6804 sec/batch
Epoch: 2/20...  Training Step: 199...  Training loss: 2.5479...  7.8940 sec/batch
Epoch: 2/20...  Training Step: 200...  Training loss: 2.4602...  7.9321 sec/batch
Epoch: 2/20...  Training Step: 201...  Training loss: 2.4700...  8.0875 sec/batch
Epoch: 2/20...  Training Step: 202...  Training loss: 2.4784...  8.4289 sec/batch
Epoch: 2/20...  Training Step: 203...  Training loss: 2.4829...  7.8163 sec/batch
Epoch: 2/20...  Training Step: 204...  Training loss: 2.4756...  8.7784 sec/batch
Epoch: 2/20...  Training Step: 205...  Training loss: 2.4806...  8.2690 sec/batch
Epoch: 2/20...  Training Step: 206...  Training loss: 2.4850...  7.8584 sec/batch
Epoch: 2/20...  Training Step: 207...  Training loss: 2.4895...  7.9010 sec/batch
Epoch: 2/20...  Training Step: 208...  Training loss: 2.4536...  9.0890 sec/batch
Epoch: 2/20...  Training Step: 209...  Training loss: 2.4576...  9.0125 sec/batch
Epoch: 2/20...  Training Step: 210...  Training loss: 2.4631...  8.8420 sec/batch
Epoch: 2/20...  Training Step: 211...  Training loss: 2.4567...  8.3082 sec/batch
Epoch: 2/20...  Training Step: 212...  Training loss: 2.4974...  8.5517 sec/batch
Epoch: 2/20...  Training Step: 213...  Training loss: 2.4590...  7.8048 sec/batch
Epoch: 2/20...  Training Step: 214...  Training loss: 2.4508...  7.6037 sec/batch
Epoch: 2/20...  Training Step: 215...  Training loss: 2.4602...  7.5882 sec/batch
Epoch: 2/20...  Training Step: 216...  Training loss: 2.4862...  7.7060 sec/batch
Epoch: 2/20...  Training Step: 217...  Training loss: 2.4539...  7.6273 sec/batch
Epoch: 2/20...  Training Step: 218...  Training loss: 2.4257...  7.6017 sec/batch
Epoch: 2/20...  Training Step: 219...  Training loss: 2.4341...  7.6824 sec/batch
Epoch: 2/20...  Training Step: 220...  Training loss: 2.4707...  7.6153 sec/batch
Epoch: 2/20...  Training Step: 221...  Training loss: 2.4409...  9.4140 sec/batch
Epoch: 2/20...  Training Step: 222...  Training loss: 2.4254...  8.6781 sec/batch
Epoch: 2/20...  Training Step: 223...  Training loss: 2.4224...  7.6885 sec/batch
Epoch: 2/20...  Training Step: 224...  Training loss: 2.4203...  7.6293 sec/batch
Epoch: 2/20...  Training Step: 225...  Training loss: 2.4147...  7.9020 sec/batch
Epoch: 2/20...  Training Step: 226...  Training loss: 2.4126...  7.7982 sec/batch
Epoch: 2/20...  Training Step: 227...  Training loss: 2.4287...  7.8770 sec/batch
Epoch: 2/20...  Training Step: 228...  Training loss: 2.4266...  8.1720 sec/batch
Epoch: 2/20...  Training Step: 229...  Training loss: 2.4332...  7.9181 sec/batch
Epoch: 2/20...  Training Step: 230...  Training loss: 2.3938...  8.9495 sec/batch
Epoch: 2/20...  Training Step: 231...  Training loss: 2.3914...  9.8748 sec/batch
Epoch: 2/20...  Training Step: 232...  Training loss: 2.4181...  8.8230 sec/batch
Epoch: 2/20...  Training Step: 233...  Training loss: 2.3907...  9.2857 sec/batch
Epoch: 2/20...  Training Step: 234...  Training loss: 2.4032...  9.2857 sec/batch
Epoch: 2/20...  Training Step: 235...  Training loss: 2.3880...  9.7535 sec/batch
Epoch: 2/20...  Training Step: 236...  Training loss: 2.3613...  7.8389 sec/batch
Epoch: 2/20...  Training Step: 237...  Training loss: 2.3691...  7.6869 sec/batch
Epoch: 2/20...  Training Step: 238...  Training loss: 2.3684...  7.7060 sec/batch
Epoch: 2/20...  Training Step: 239...  Training loss: 2.3557...  7.8178 sec/batch
Epoch: 2/20...  Training Step: 240...  Training loss: 2.3608...  8.2845 sec/batch
Epoch: 2/20...  Training Step: 241...  Training loss: 2.3571...  8.5167 sec/batch
Epoch: 2/20...  Training Step: 242...  Training loss: 2.3557...  8.7874 sec/batch
Epoch: 2/20...  Training Step: 243...  Training loss: 2.3691...  7.8484 sec/batch
Epoch: 2/20...  Training Step: 244...  Training loss: 2.3216...  7.6128 sec/batch
Epoch: 2/20...  Training Step: 245...  Training loss: 2.3782...  7.6433 sec/batch
Epoch: 2/20...  Training Step: 246...  Training loss: 2.3581...  7.6288 sec/batch
Epoch: 2/20...  Training Step: 247...  Training loss: 2.3524...  7.5882 sec/batch
Epoch: 2/20...  Training Step: 248...  Training loss: 2.3831...  7.6559 sec/batch
Epoch: 2/20...  Training Step: 249...  Training loss: 2.3334...  7.7040 sec/batch
Epoch: 2/20...  Training Step: 250...  Training loss: 2.3633...  7.7511 sec/batch
Epoch: 2/20...  Training Step: 251...  Training loss: 2.3391...  7.9717 sec/batch
Epoch: 2/20...  Training Step: 252...  Training loss: 2.3369...  8.4665 sec/batch
Epoch: 2/20...  Training Step: 253...  Training loss: 2.3381...  8.7042 sec/batch
Epoch: 2/20...  Training Step: 254...  Training loss: 2.3507...  8.5903 sec/batch
Epoch: 2/20...  Training Step: 255...  Training loss: 2.3479...  7.5897 sec/batch
Epoch: 2/20...  Training Step: 256...  Training loss: 2.3253...  7.5571 sec/batch
Epoch: 2/20...  Training Step: 257...  Training loss: 2.3287...  7.7852 sec/batch
Epoch: 2/20...  Training Step: 258...  Training loss: 2.3505...  7.7055 sec/batch
Epoch: 2/20...  Training Step: 259...  Training loss: 2.3352...  7.7728 sec/batch
Epoch: 2/20...  Training Step: 260...  Training loss: 2.3330...  7.6323 sec/batch
Epoch: 2/20...  Training Step: 261...  Training loss: 2.3498...  7.6534 sec/batch
Epoch: 2/20...  Training Step: 262...  Training loss: 2.3242...  7.7847 sec/batch
Epoch: 2/20...  Training Step: 263...  Training loss: 2.3144...  7.6458 sec/batch
Epoch: 2/20...  Training Step: 264...  Training loss: 2.3373...  7.6584 sec/batch
Epoch: 2/20...  Training Step: 265...  Training loss: 2.3265...  7.6960 sec/batch
Epoch: 2/20...  Training Step: 266...  Training loss: 2.2990...  7.7050 sec/batch
Epoch: 2/20...  Training Step: 267...  Training loss: 2.2885...  7.6535 sec/batch
Epoch: 2/20...  Training Step: 268...  Training loss: 2.3203...  7.6684 sec/batch
Epoch: 2/20...  Training Step: 269...  Training loss: 2.3377...  7.6834 sec/batch
Epoch: 2/20...  Training Step: 270...  Training loss: 2.3191...  7.7867 sec/batch
Epoch: 2/20...  Training Step: 271...  Training loss: 2.3208...  7.6163 sec/batch
Epoch: 2/20...  Training Step: 272...  Training loss: 2.2921...  7.8093 sec/batch
Epoch: 2/20...  Training Step: 273...  Training loss: 2.3071...  7.7937 sec/batch
Epoch: 2/20...  Training Step: 274...  Training loss: 2.3420...  7.6564 sec/batch
Epoch: 2/20...  Training Step: 275...  Training loss: 2.2904...  7.6303 sec/batch
Epoch: 2/20...  Training Step: 276...  Training loss: 2.3155...  7.5854 sec/batch
Epoch: 2/20...  Training Step: 277...  Training loss: 2.2776...  7.6844 sec/batch
Epoch: 2/20...  Training Step: 278...  Training loss: 2.2874...  7.6519 sec/batch
Epoch: 2/20...  Training Step: 279...  Training loss: 2.2815...  7.6764 sec/batch
Epoch: 2/20...  Training Step: 280...  Training loss: 2.3043...  7.7441 sec/batch
Epoch: 2/20...  Training Step: 281...  Training loss: 2.2738...  7.6133 sec/batch
Epoch: 2/20...  Training Step: 282...  Training loss: 2.2585...  7.6519 sec/batch
Epoch: 2/20...  Training Step: 283...  Training loss: 2.2457...  7.6579 sec/batch
Epoch: 2/20...  Training Step: 284...  Training loss: 2.2637...  7.6273 sec/batch
Epoch: 2/20...  Training Step: 285...  Training loss: 2.2742...  7.7576 sec/batch
Epoch: 2/20...  Training Step: 286...  Training loss: 2.2748...  7.6037 sec/batch
Epoch: 2/20...  Training Step: 287...  Training loss: 2.2429...  7.6353 sec/batch
Epoch: 2/20...  Training Step: 288...  Training loss: 2.2771...  7.7165 sec/batch
Epoch: 2/20...  Training Step: 289...  Training loss: 2.2500...  7.8504 sec/batch
Epoch: 2/20...  Training Step: 290...  Training loss: 2.2686...  7.6117 sec/batch
Epoch: 2/20...  Training Step: 291...  Training loss: 2.2447...  7.5792 sec/batch
Epoch: 2/20...  Training Step: 292...  Training loss: 2.2428...  7.6263 sec/batch
Epoch: 2/20...  Training Step: 293...  Training loss: 2.2409...  7.6463 sec/batch
Epoch: 2/20...  Training Step: 294...  Training loss: 2.2367...  7.6203 sec/batch
Epoch: 2/20...  Training Step: 295...  Training loss: 2.2549...  7.6799 sec/batch
Epoch: 2/20...  Training Step: 296...  Training loss: 2.2552...  7.6809 sec/batch
Epoch: 2/20...  Training Step: 297...  Training loss: 2.2293...  7.5581 sec/batch
Epoch: 2/20...  Training Step: 298...  Training loss: 2.2344...  7.6729 sec/batch
Epoch: 2/20...  Training Step: 299...  Training loss: 2.2552...  7.6478 sec/batch
Epoch: 2/20...  Training Step: 300...  Training loss: 2.2471...  7.6614 sec/batch
Epoch: 2/20...  Training Step: 301...  Training loss: 2.2240...  7.6714 sec/batch
Epoch: 2/20...  Training Step: 302...  Training loss: 2.2232...  7.6318 sec/batch
Epoch: 2/20...  Training Step: 303...  Training loss: 2.2219...  7.6413 sec/batch
Epoch: 2/20...  Training Step: 304...  Training loss: 2.2353...  7.8363 sec/batch
Epoch: 2/20...  Training Step: 305...  Training loss: 2.2278...  7.6503 sec/batch
Epoch: 2/20...  Training Step: 306...  Training loss: 2.2486...  7.6438 sec/batch
Epoch: 2/20...  Training Step: 307...  Training loss: 2.2506...  7.6113 sec/batch
Epoch: 2/20...  Training Step: 308...  Training loss: 2.2172...  7.6228 sec/batch
Epoch: 2/20...  Training Step: 309...  Training loss: 2.2302...  7.6741 sec/batch
Epoch: 2/20...  Training Step: 310...  Training loss: 2.2386...  7.6849 sec/batch
Epoch: 2/20...  Training Step: 311...  Training loss: 2.2259...  7.7556 sec/batch
Epoch: 2/20...  Training Step: 312...  Training loss: 2.2025...  7.6508 sec/batch
Epoch: 2/20...  Training Step: 313...  Training loss: 2.2022...  7.6514 sec/batch
Epoch: 2/20...  Training Step: 314...  Training loss: 2.1757...  7.5927 sec/batch
Epoch: 2/20...  Training Step: 315...  Training loss: 2.2225...  7.6368 sec/batch
Epoch: 2/20...  Training Step: 316...  Training loss: 2.2084...  7.6910 sec/batch
Epoch: 2/20...  Training Step: 317...  Training loss: 2.2287...  7.6122 sec/batch
Epoch: 2/20...  Training Step: 318...  Training loss: 2.2127...  7.7601 sec/batch
Epoch: 2/20...  Training Step: 319...  Training loss: 2.2354...  7.6837 sec/batch
Epoch: 2/20...  Training Step: 320...  Training loss: 2.1917...  7.6263 sec/batch
Epoch: 2/20...  Training Step: 321...  Training loss: 2.1873...  7.6579 sec/batch
Epoch: 2/20...  Training Step: 322...  Training loss: 2.2194...  7.6143 sec/batch
Epoch: 2/20...  Training Step: 323...  Training loss: 2.1949...  7.5975 sec/batch
Epoch: 2/20...  Training Step: 324...  Training loss: 2.1702...  7.6338 sec/batch
Epoch: 2/20...  Training Step: 325...  Training loss: 2.2098...  7.5952 sec/batch
Epoch: 2/20...  Training Step: 326...  Training loss: 2.2097...  7.7952 sec/batch
Epoch: 2/20...  Training Step: 327...  Training loss: 2.2029...  7.6970 sec/batch
Epoch: 2/20...  Training Step: 328...  Training loss: 2.1959...  7.5526 sec/batch
Epoch: 2/20...  Training Step: 329...  Training loss: 2.1775...  7.5777 sec/batch
Epoch: 2/20...  Training Step: 330...  Training loss: 2.1621...  7.6072 sec/batch
Epoch: 2/20...  Training Step: 331...  Training loss: 2.1974...  7.7707 sec/batch
Epoch: 2/20...  Training Step: 332...  Training loss: 2.2024...  7.6133 sec/batch
Epoch: 2/20...  Training Step: 333...  Training loss: 2.1819...  7.6443 sec/batch
Epoch: 2/20...  Training Step: 334...  Training loss: 2.1976...  7.6248 sec/batch
Epoch: 2/20...  Training Step: 335...  Training loss: 2.1932...  7.6323 sec/batch
Epoch: 2/20...  Training Step: 336...  Training loss: 2.1789...  7.5872 sec/batch
Epoch: 2/20...  Training Step: 337...  Training loss: 2.2109...  7.5561 sec/batch
Epoch: 2/20...  Training Step: 338...  Training loss: 2.1708...  7.5972 sec/batch
Epoch: 2/20...  Training Step: 339...  Training loss: 2.1989...  7.5355 sec/batch
Epoch: 2/20...  Training Step: 340...  Training loss: 2.1734...  7.5065 sec/batch
Epoch: 2/20...  Training Step: 341...  Training loss: 2.1634...  7.6298 sec/batch
Epoch: 2/20...  Training Step: 342...  Training loss: 2.1690...  7.5817 sec/batch
Epoch: 2/20...  Training Step: 343...  Training loss: 2.1604...  7.5661 sec/batch
Epoch: 2/20...  Training Step: 344...  Training loss: 2.1909...  7.5601 sec/batch
Epoch: 2/20...  Training Step: 345...  Training loss: 2.1821...  7.5220 sec/batch
Epoch: 2/20...  Training Step: 346...  Training loss: 2.1844...  7.5371 sec/batch
Epoch: 2/20...  Training Step: 347...  Training loss: 2.1668...  7.5386 sec/batch
Epoch: 2/20...  Training Step: 348...  Training loss: 2.1497...  7.5932 sec/batch
Epoch: 2/20...  Training Step: 349...  Training loss: 2.1686...  7.7165 sec/batch
Epoch: 2/20...  Training Step: 350...  Training loss: 2.1985...  7.5586 sec/batch
Epoch: 2/20...  Training Step: 351...  Training loss: 2.1634...  7.5686 sec/batch
Epoch: 2/20...  Training Step: 352...  Training loss: 2.1733...  7.5646 sec/batch
Epoch: 2/20...  Training Step: 353...  Training loss: 2.1518...  7.5235 sec/batch
Epoch: 2/20...  Training Step: 354...  Training loss: 2.1560...  7.5421 sec/batch
Epoch: 2/20...  Training Step: 355...  Training loss: 2.1438...  7.5025 sec/batch
Epoch: 2/20...  Training Step: 356...  Training loss: 2.1480...  7.4789 sec/batch
Epoch: 2/20...  Training Step: 357...  Training loss: 2.1268...  7.6012 sec/batch
Epoch: 2/20...  Training Step: 358...  Training loss: 2.1795...  7.5701 sec/batch
Epoch: 2/20...  Training Step: 359...  Training loss: 2.1625...  7.5736 sec/batch
Epoch: 2/20...  Training Step: 360...  Training loss: 2.1329...  7.5932 sec/batch
Epoch: 2/20...  Training Step: 361...  Training loss: 2.1543...  7.5581 sec/batch
Epoch: 2/20...  Training Step: 362...  Training loss: 2.1425...  7.6037 sec/batch
Epoch: 2/20...  Training Step: 363...  Training loss: 2.1498...  7.5797 sec/batch
Epoch: 2/20...  Training Step: 364...  Training loss: 2.1360...  7.6283 sec/batch
Epoch: 2/20...  Training Step: 365...  Training loss: 2.1513...  7.6469 sec/batch
Epoch: 2/20...  Training Step: 366...  Training loss: 2.1646...  7.6022 sec/batch
Epoch: 2/20...  Training Step: 367...  Training loss: 2.1294...  7.6012 sec/batch
Epoch: 2/20...  Training Step: 368...  Training loss: 2.1324...  7.6022 sec/batch
Epoch: 2/20...  Training Step: 369...  Training loss: 2.1235...  7.4674 sec/batch
Epoch: 2/20...  Training Step: 370...  Training loss: 2.1343...  7.5596 sec/batch
Epoch: 2/20...  Training Step: 371...  Training loss: 2.1534...  7.5155 sec/batch
Epoch: 2/20...  Training Step: 372...  Training loss: 2.1479...  7.6679 sec/batch
Epoch: 2/20...  Training Step: 373...  Training loss: 2.1349...  7.6724 sec/batch
Epoch: 2/20...  Training Step: 374...  Training loss: 2.1496...  7.5591 sec/batch
Epoch: 2/20...  Training Step: 375...  Training loss: 2.1153...  7.5361 sec/batch
Epoch: 2/20...  Training Step: 376...  Training loss: 2.1370...  7.5706 sec/batch
Epoch: 2/20...  Training Step: 377...  Training loss: 2.1023...  7.6138 sec/batch
Epoch: 2/20...  Training Step: 378...  Training loss: 2.0794...  7.6569 sec/batch
Epoch: 2/20...  Training Step: 379...  Training loss: 2.1000...  7.6358 sec/batch
Epoch: 2/20...  Training Step: 380...  Training loss: 2.1222...  7.6323 sec/batch
Epoch: 2/20...  Training Step: 381...  Training loss: 2.1101...  7.6659 sec/batch
Epoch: 2/20...  Training Step: 382...  Training loss: 2.1424...  7.6338 sec/batch
Epoch: 2/20...  Training Step: 383...  Training loss: 2.1266...  7.6093 sec/batch
Epoch: 2/20...  Training Step: 384...  Training loss: 2.1068...  7.6433 sec/batch
Epoch: 2/20...  Training Step: 385...  Training loss: 2.1070...  7.5536 sec/batch
Epoch: 2/20...  Training Step: 386...  Training loss: 2.0830...  7.5878 sec/batch
Epoch: 2/20...  Training Step: 387...  Training loss: 2.0912...  7.6910 sec/batch
Epoch: 2/20...  Training Step: 388...  Training loss: 2.1027...  7.6834 sec/batch
Epoch: 2/20...  Training Step: 389...  Training loss: 2.1140...  7.6243 sec/batch
Epoch: 2/20...  Training Step: 390...  Training loss: 2.0750...  7.5787 sec/batch
Epoch: 2/20...  Training Step: 391...  Training loss: 2.1094...  7.6012 sec/batch
Epoch: 2/20...  Training Step: 392...  Training loss: 2.0975...  7.6092 sec/batch
Epoch: 2/20...  Training Step: 393...  Training loss: 2.0814...  7.6348 sec/batch
Epoch: 2/20...  Training Step: 394...  Training loss: 2.0944...  7.6498 sec/batch
Epoch: 2/20...  Training Step: 395...  Training loss: 2.0864...  7.6468 sec/batch
Epoch: 2/20...  Training Step: 396...  Training loss: 2.0767...  7.6418 sec/batch
Epoch: 3/20...  Training Step: 397...  Training loss: 2.1618...  7.5922 sec/batch
Epoch: 3/20...  Training Step: 398...  Training loss: 2.0668...  7.5621 sec/batch
Epoch: 3/20...  Training Step: 399...  Training loss: 2.0705...  7.6529 sec/batch
Epoch: 3/20...  Training Step: 400...  Training loss: 2.0687...  7.5927 sec/batch
Epoch: 3/20...  Training Step: 401...  Training loss: 2.0902...  7.5225 sec/batch
Epoch: 3/20...  Training Step: 402...  Training loss: 2.0465...  7.6258 sec/batch
Epoch: 3/20...  Training Step: 403...  Training loss: 2.0750...  7.6714 sec/batch
Epoch: 3/20...  Training Step: 404...  Training loss: 2.0767...  7.6388 sec/batch
Epoch: 3/20...  Training Step: 405...  Training loss: 2.1063...  7.6278 sec/batch
Epoch: 3/20...  Training Step: 406...  Training loss: 2.0798...  7.5305 sec/batch
Epoch: 3/20...  Training Step: 407...  Training loss: 2.0561...  7.6103 sec/batch
Epoch: 3/20...  Training Step: 408...  Training loss: 2.0393...  7.6524 sec/batch
Epoch: 3/20...  Training Step: 409...  Training loss: 2.0690...  7.6097 sec/batch
Epoch: 3/20...  Training Step: 410...  Training loss: 2.0960...  7.6784 sec/batch
Epoch: 3/20...  Training Step: 411...  Training loss: 2.0657...  7.6549 sec/batch
Epoch: 3/20...  Training Step: 412...  Training loss: 2.0512...  7.5972 sec/batch
Epoch: 3/20...  Training Step: 413...  Training loss: 2.0631...  7.6654 sec/batch
Epoch: 3/20...  Training Step: 414...  Training loss: 2.1106...  7.7471 sec/batch
Epoch: 3/20...  Training Step: 415...  Training loss: 2.0609...  7.7296 sec/batch
Epoch: 3/20...  Training Step: 416...  Training loss: 2.0649...  7.5511 sec/batch
Epoch: 3/20...  Training Step: 417...  Training loss: 2.0573...  7.6032 sec/batch
Epoch: 3/20...  Training Step: 418...  Training loss: 2.0886...  7.6569 sec/batch
Epoch: 3/20...  Training Step: 419...  Training loss: 2.0700...  7.8384 sec/batch
Epoch: 3/20...  Training Step: 420...  Training loss: 2.0462...  7.6509 sec/batch
Epoch: 3/20...  Training Step: 421...  Training loss: 2.0542...  7.6258 sec/batch
Epoch: 3/20...  Training Step: 422...  Training loss: 2.0318...  7.5556 sec/batch
Epoch: 3/20...  Training Step: 423...  Training loss: 2.0391...  7.6112 sec/batch
Epoch: 3/20...  Training Step: 424...  Training loss: 2.0601...  7.6007 sec/batch
Epoch: 3/20...  Training Step: 425...  Training loss: 2.0852...  7.6378 sec/batch
Epoch: 3/20...  Training Step: 426...  Training loss: 2.0636...  7.6193 sec/batch
Epoch: 3/20...  Training Step: 427...  Training loss: 2.0537...  7.6844 sec/batch
Epoch: 3/20...  Training Step: 428...  Training loss: 2.0211...  7.5992 sec/batch
Epoch: 3/20...  Training Step: 429...  Training loss: 2.0443...  7.6323 sec/batch
Epoch: 3/20...  Training Step: 430...  Training loss: 2.0843...  7.8373 sec/batch
Epoch: 3/20...  Training Step: 431...  Training loss: 2.0314...  7.6208 sec/batch
Epoch: 3/20...  Training Step: 432...  Training loss: 2.0329...  7.6057 sec/batch
Epoch: 3/20...  Training Step: 433...  Training loss: 2.0305...  7.7180 sec/batch
Epoch: 3/20...  Training Step: 434...  Training loss: 1.9971...  7.6930 sec/batch
Epoch: 3/20...  Training Step: 435...  Training loss: 1.9975...  7.7256 sec/batch
Epoch: 3/20...  Training Step: 436...  Training loss: 2.0002...  7.6273 sec/batch
Epoch: 3/20...  Training Step: 437...  Training loss: 2.0159...  7.6544 sec/batch
Epoch: 3/20...  Training Step: 438...  Training loss: 2.0172...  7.6463 sec/batch
Epoch: 3/20...  Training Step: 439...  Training loss: 2.0140...  7.5768 sec/batch
Epoch: 3/20...  Training Step: 440...  Training loss: 2.0014...  7.6388 sec/batch
Epoch: 3/20...  Training Step: 441...  Training loss: 2.0272...  7.6428 sec/batch
Epoch: 3/20...  Training Step: 442...  Training loss: 1.9684...  7.6864 sec/batch
Epoch: 3/20...  Training Step: 443...  Training loss: 2.0321...  7.6859 sec/batch
Epoch: 3/20...  Training Step: 444...  Training loss: 1.9989...  7.5406 sec/batch
Epoch: 3/20...  Training Step: 445...  Training loss: 2.0100...  7.6990 sec/batch
Epoch: 3/20...  Training Step: 446...  Training loss: 2.0609...  7.5977 sec/batch
Epoch: 3/20...  Training Step: 447...  Training loss: 1.9816...  7.6052 sec/batch
Epoch: 3/20...  Training Step: 448...  Training loss: 2.0594...  7.6138 sec/batch
Epoch: 3/20...  Training Step: 449...  Training loss: 2.0039...  7.6158 sec/batch
Epoch: 3/20...  Training Step: 450...  Training loss: 1.9987...  7.7055 sec/batch
Epoch: 3/20...  Training Step: 451...  Training loss: 2.0092...  7.5506 sec/batch
Epoch: 3/20...  Training Step: 452...  Training loss: 2.0163...  7.6458 sec/batch
Epoch: 3/20...  Training Step: 453...  Training loss: 2.0215...  7.5777 sec/batch
Epoch: 3/20...  Training Step: 454...  Training loss: 2.0051...  7.5275 sec/batch
Epoch: 3/20...  Training Step: 455...  Training loss: 1.9898...  7.5421 sec/batch
Epoch: 3/20...  Training Step: 456...  Training loss: 2.0414...  7.7200 sec/batch
Epoch: 3/20...  Training Step: 457...  Training loss: 2.0057...  7.6706 sec/batch
Epoch: 3/20...  Training Step: 458...  Training loss: 2.0420...  7.5691 sec/batch
Epoch: 3/20...  Training Step: 459...  Training loss: 2.0339...  7.5416 sec/batch
Epoch: 3/20...  Training Step: 460...  Training loss: 2.0131...  7.5230 sec/batch
Epoch: 3/20...  Training Step: 461...  Training loss: 1.9911...  7.6504 sec/batch
Epoch: 3/20...  Training Step: 462...  Training loss: 2.0307...  7.6243 sec/batch
Epoch: 3/20...  Training Step: 463...  Training loss: 2.0165...  7.6178 sec/batch
Epoch: 3/20...  Training Step: 464...  Training loss: 1.9784...  7.6062 sec/batch
Epoch: 3/20...  Training Step: 465...  Training loss: 1.9831...  7.7747 sec/batch
Epoch: 3/20...  Training Step: 466...  Training loss: 1.9945...  7.5030 sec/batch
Epoch: 3/20...  Training Step: 467...  Training loss: 2.0376...  7.4869 sec/batch
Epoch: 3/20...  Training Step: 468...  Training loss: 2.0079...  7.7261 sec/batch
Epoch: 3/20...  Training Step: 469...  Training loss: 2.0204...  7.6042 sec/batch
Epoch: 3/20...  Training Step: 470...  Training loss: 1.9781...  7.5696 sec/batch
Epoch: 3/20...  Training Step: 471...  Training loss: 1.9880...  7.5987 sec/batch
Epoch: 3/20...  Training Step: 472...  Training loss: 2.0239...  7.5757 sec/batch
Epoch: 3/20...  Training Step: 473...  Training loss: 1.9979...  7.5458 sec/batch
Epoch: 3/20...  Training Step: 474...  Training loss: 1.9941...  7.6288 sec/batch
Epoch: 3/20...  Training Step: 475...  Training loss: 1.9598...  7.5902 sec/batch
Epoch: 3/20...  Training Step: 476...  Training loss: 1.9827...  7.5241 sec/batch
Epoch: 3/20...  Training Step: 477...  Training loss: 1.9506...  7.6343 sec/batch
Epoch: 3/20...  Training Step: 478...  Training loss: 1.9971...  7.5711 sec/batch
Epoch: 3/20...  Training Step: 479...  Training loss: 1.9475...  7.6604 sec/batch
Epoch: 3/20...  Training Step: 480...  Training loss: 1.9742...  7.6729 sec/batch
Epoch: 3/20...  Training Step: 481...  Training loss: 1.9415...  7.5586 sec/batch
Epoch: 3/20...  Training Step: 482...  Training loss: 1.9559...  7.5877 sec/batch
Epoch: 3/20...  Training Step: 483...  Training loss: 1.9689...  7.5050 sec/batch
Epoch: 3/20...  Training Step: 484...  Training loss: 1.9653...  7.5020 sec/batch
Epoch: 3/20...  Training Step: 485...  Training loss: 1.9497...  7.6442 sec/batch
Epoch: 3/20...  Training Step: 486...  Training loss: 1.9809...  7.6303 sec/batch
Epoch: 3/20...  Training Step: 487...  Training loss: 1.9484...  7.7261 sec/batch
Epoch: 3/20...  Training Step: 488...  Training loss: 1.9641...  7.7050 sec/batch
Epoch: 3/20...  Training Step: 489...  Training loss: 1.9330...  7.5401 sec/batch
Epoch: 3/20...  Training Step: 490...  Training loss: 1.9471...  7.5365 sec/batch
Epoch: 3/20...  Training Step: 491...  Training loss: 1.9483...  7.5726 sec/batch
Epoch: 3/20...  Training Step: 492...  Training loss: 1.9692...  7.5897 sec/batch
Epoch: 3/20...  Training Step: 493...  Training loss: 1.9621...  7.5511 sec/batch
Epoch: 3/20...  Training Step: 494...  Training loss: 1.9373...  7.6388 sec/batch
Epoch: 3/20...  Training Step: 495...  Training loss: 1.9475...  7.6333 sec/batch
Epoch: 3/20...  Training Step: 496...  Training loss: 1.9229...  7.5817 sec/batch
Epoch: 3/20...  Training Step: 497...  Training loss: 1.9694...  7.5541 sec/batch
Epoch: 3/20...  Training Step: 498...  Training loss: 1.9704...  7.6609 sec/batch
Epoch: 3/20...  Training Step: 499...  Training loss: 1.9392...  7.6554 sec/batch
Epoch: 3/20...  Training Step: 500...  Training loss: 1.9489...  7.5837 sec/batch
Epoch: 3/20...  Training Step: 501...  Training loss: 1.9373...  7.5962 sec/batch
Epoch: 3/20...  Training Step: 502...  Training loss: 1.9554...  7.6072 sec/batch
Epoch: 3/20...  Training Step: 503...  Training loss: 1.9501...  7.6373 sec/batch
Epoch: 3/20...  Training Step: 504...  Training loss: 1.9654...  7.6774 sec/batch
Epoch: 3/20...  Training Step: 505...  Training loss: 1.9662...  7.9567 sec/batch
Epoch: 3/20...  Training Step: 506...  Training loss: 1.9513...  7.5596 sec/batch
Epoch: 3/20...  Training Step: 507...  Training loss: 1.9449...  7.5481 sec/batch
Epoch: 3/20...  Training Step: 508...  Training loss: 1.9396...  7.6258 sec/batch
Epoch: 3/20...  Training Step: 509...  Training loss: 1.9419...  7.6097 sec/batch
Epoch: 3/20...  Training Step: 510...  Training loss: 1.9341...  7.6342 sec/batch
Epoch: 3/20...  Training Step: 511...  Training loss: 1.9253...  7.6799 sec/batch
Epoch: 3/20...  Training Step: 512...  Training loss: 1.9006...  7.6823 sec/batch
Epoch: 3/20...  Training Step: 513...  Training loss: 1.9410...  7.5556 sec/batch
Epoch: 3/20...  Training Step: 514...  Training loss: 1.9331...  7.5982 sec/batch
Epoch: 3/20...  Training Step: 515...  Training loss: 1.9467...  7.5736 sec/batch
Epoch: 3/20...  Training Step: 516...  Training loss: 1.9378...  7.5616 sec/batch
Epoch: 3/20...  Training Step: 517...  Training loss: 1.9437...  7.8760 sec/batch
Epoch: 3/20...  Training Step: 518...  Training loss: 1.9132...  7.8083 sec/batch
Epoch: 3/20...  Training Step: 519...  Training loss: 1.9173...  7.6012 sec/batch
Epoch: 3/20...  Training Step: 520...  Training loss: 1.9629...  7.5611 sec/batch
Epoch: 3/20...  Training Step: 521...  Training loss: 1.9301...  7.5997 sec/batch
Epoch: 3/20...  Training Step: 522...  Training loss: 1.8911...  7.6022 sec/batch
Epoch: 3/20...  Training Step: 523...  Training loss: 1.9454...  7.5762 sec/batch
Epoch: 3/20...  Training Step: 524...  Training loss: 1.9447...  7.7075 sec/batch
Epoch: 3/20...  Training Step: 525...  Training loss: 1.9304...  7.6669 sec/batch
Epoch: 3/20...  Training Step: 526...  Training loss: 1.9302...  7.7556 sec/batch
Epoch: 3/20...  Training Step: 527...  Training loss: 1.9098...  7.5320 sec/batch
Epoch: 3/20...  Training Step: 528...  Training loss: 1.9007...  7.5195 sec/batch
Epoch: 3/20...  Training Step: 529...  Training loss: 1.9423...  7.5551 sec/batch
Epoch: 3/20...  Training Step: 530...  Training loss: 1.9418...  7.6383 sec/batch
Epoch: 3/20...  Training Step: 531...  Training loss: 1.9176...  7.6819 sec/batch
Epoch: 3/20...  Training Step: 532...  Training loss: 1.9316...  7.6504 sec/batch
Epoch: 3/20...  Training Step: 533...  Training loss: 1.9401...  7.6127 sec/batch
Epoch: 3/20...  Training Step: 534...  Training loss: 1.9324...  7.6238 sec/batch
Epoch: 3/20...  Training Step: 535...  Training loss: 1.9589...  7.5556 sec/batch
Epoch: 3/20...  Training Step: 536...  Training loss: 1.9171...  7.6991 sec/batch
Epoch: 3/20...  Training Step: 537...  Training loss: 1.9508...  7.5406 sec/batch
Epoch: 3/20...  Training Step: 538...  Training loss: 1.9109...  7.6183 sec/batch
Epoch: 3/20...  Training Step: 539...  Training loss: 1.9233...  7.4884 sec/batch
Epoch: 3/20...  Training Step: 540...  Training loss: 1.9236...  7.6283 sec/batch
Epoch: 3/20...  Training Step: 541...  Training loss: 1.9069...  7.6754 sec/batch
Epoch: 3/20...  Training Step: 542...  Training loss: 1.9265...  7.6238 sec/batch
Epoch: 3/20...  Training Step: 543...  Training loss: 1.9327...  7.6714 sec/batch
Epoch: 3/20...  Training Step: 544...  Training loss: 1.9427...  7.5647 sec/batch
Epoch: 3/20...  Training Step: 545...  Training loss: 1.9267...  7.5305 sec/batch
Epoch: 3/20...  Training Step: 546...  Training loss: 1.8971...  7.5927 sec/batch
Epoch: 3/20...  Training Step: 547...  Training loss: 1.9104...  7.4909 sec/batch
Epoch: 3/20...  Training Step: 548...  Training loss: 1.9546...  7.6574 sec/batch
Epoch: 3/20...  Training Step: 549...  Training loss: 1.9261...  7.6313 sec/batch
Epoch: 3/20...  Training Step: 550...  Training loss: 1.9253...  7.6017 sec/batch
Epoch: 3/20...  Training Step: 551...  Training loss: 1.9128...  7.5837 sec/batch
Epoch: 3/20...  Training Step: 552...  Training loss: 1.9200...  7.5305 sec/batch
Epoch: 3/20...  Training Step: 553...  Training loss: 1.9182...  7.5360 sec/batch
Epoch: 3/20...  Training Step: 554...  Training loss: 1.9117...  7.5110 sec/batch
Epoch: 3/20...  Training Step: 555...  Training loss: 1.8742...  7.5822 sec/batch
Epoch: 3/20...  Training Step: 556...  Training loss: 1.9425...  7.7160 sec/batch
Epoch: 3/20...  Training Step: 557...  Training loss: 1.9385...  7.5626 sec/batch
Epoch: 3/20...  Training Step: 558...  Training loss: 1.9085...  7.5471 sec/batch
Epoch: 3/20...  Training Step: 559...  Training loss: 1.9271...  7.5566 sec/batch
Epoch: 3/20...  Training Step: 560...  Training loss: 1.9126...  7.5536 sec/batch
Epoch: 3/20...  Training Step: 561...  Training loss: 1.9024...  7.5631 sec/batch
Epoch: 3/20...  Training Step: 562...  Training loss: 1.9026...  7.5982 sec/batch
Epoch: 3/20...  Training Step: 563...  Training loss: 1.9215...  7.6408 sec/batch
Epoch: 3/20...  Training Step: 564...  Training loss: 1.9513...  7.6283 sec/batch
Epoch: 3/20...  Training Step: 565...  Training loss: 1.8881...  7.5842 sec/batch
Epoch: 3/20...  Training Step: 566...  Training loss: 1.8930...  7.5071 sec/batch
Epoch: 3/20...  Training Step: 567...  Training loss: 1.8808...  7.5741 sec/batch
Epoch: 3/20...  Training Step: 568...  Training loss: 1.8765...  7.5882 sec/batch
Epoch: 3/20...  Training Step: 569...  Training loss: 1.9193...  7.6112 sec/batch
Epoch: 3/20...  Training Step: 570...  Training loss: 1.9110...  7.5656 sec/batch
Epoch: 3/20...  Training Step: 571...  Training loss: 1.8975...  7.6353 sec/batch
Epoch: 3/20...  Training Step: 572...  Training loss: 1.8872...  7.5767 sec/batch
Epoch: 3/20...  Training Step: 573...  Training loss: 1.8846...  7.5365 sec/batch
Epoch: 3/20...  Training Step: 574...  Training loss: 1.9037...  7.5987 sec/batch
Epoch: 3/20...  Training Step: 575...  Training loss: 1.8709...  7.6634 sec/batch
Epoch: 3/20...  Training Step: 576...  Training loss: 1.8534...  7.5601 sec/batch
Epoch: 3/20...  Training Step: 577...  Training loss: 1.8595...  7.5481 sec/batch
Epoch: 3/20...  Training Step: 578...  Training loss: 1.8892...  7.5145 sec/batch
Epoch: 3/20...  Training Step: 579...  Training loss: 1.8896...  7.5957 sec/batch
Epoch: 3/20...  Training Step: 580...  Training loss: 1.9185...  7.6579 sec/batch
Epoch: 3/20...  Training Step: 581...  Training loss: 1.8929...  7.6263 sec/batch
Epoch: 3/20...  Training Step: 582...  Training loss: 1.8848...  7.6153 sec/batch
Epoch: 3/20...  Training Step: 583...  Training loss: 1.8958...  7.5471 sec/batch
Epoch: 3/20...  Training Step: 584...  Training loss: 1.8665...  7.5631 sec/batch
Epoch: 3/20...  Training Step: 585...  Training loss: 1.8811...  7.5701 sec/batch
Epoch: 3/20...  Training Step: 586...  Training loss: 1.8912...  7.6293 sec/batch
Epoch: 3/20...  Training Step: 587...  Training loss: 1.9000...  8.0269 sec/batch
Epoch: 3/20...  Training Step: 588...  Training loss: 1.8568...  9.0225 sec/batch
Epoch: 3/20...  Training Step: 589...  Training loss: 1.8832...  10.1966 sec/batch
Epoch: 3/20...  Training Step: 590...  Training loss: 1.8663...  9.0758 sec/batch
Epoch: 3/20...  Training Step: 591...  Training loss: 1.8450...  9.0715 sec/batch
Epoch: 3/20...  Training Step: 592...  Training loss: 1.8836...  9.1584 sec/batch
Epoch: 3/20...  Training Step: 593...  Training loss: 1.8814...  8.6656 sec/batch
Epoch: 3/20...  Training Step: 594...  Training loss: 1.8663...  9.0952 sec/batch
Epoch: 4/20...  Training Step: 595...  Training loss: 1.9651...  8.3958 sec/batch
Epoch: 4/20...  Training Step: 596...  Training loss: 1.8567...  8.0685 sec/batch
Epoch: 4/20...  Training Step: 597...  Training loss: 1.8601...  8.1888 sec/batch
Epoch: 4/20...  Training Step: 598...  Training loss: 1.8631...  8.2429 sec/batch
Epoch: 4/20...  Training Step: 599...  Training loss: 1.8664...  7.6644 sec/batch
Epoch: 4/20...  Training Step: 600...  Training loss: 1.8234...  7.6854 sec/batch
Epoch: 4/20...  Training Step: 601...  Training loss: 1.8669...  7.7020 sec/batch
Epoch: 4/20...  Training Step: 602...  Training loss: 1.8548...  7.9481 sec/batch
Epoch: 4/20...  Training Step: 603...  Training loss: 1.9071...  9.7555 sec/batch
Epoch: 4/20...  Training Step: 604...  Training loss: 1.8596...  8.6295 sec/batch
Epoch: 4/20...  Training Step: 605...  Training loss: 1.8407...  8.7269 sec/batch
Epoch: 4/20...  Training Step: 606...  Training loss: 1.8383...  8.1271 sec/batch
Epoch: 4/20...  Training Step: 607...  Training loss: 1.8599...  8.4942 sec/batch
Epoch: 4/20...  Training Step: 608...  Training loss: 1.9003...  8.5442 sec/batch
Epoch: 4/20...  Training Step: 609...  Training loss: 1.8566...  8.4490 sec/batch
Epoch: 4/20...  Training Step: 610...  Training loss: 1.8314...  8.5523 sec/batch
Epoch: 4/20...  Training Step: 611...  Training loss: 1.8599...  8.4329 sec/batch
Epoch: 4/20...  Training Step: 612...  Training loss: 1.8968...  8.1046 sec/batch
Epoch: 4/20...  Training Step: 613...  Training loss: 1.8571...  8.1366 sec/batch
Epoch: 4/20...  Training Step: 614...  Training loss: 1.8599...  8.8300 sec/batch
Epoch: 4/20...  Training Step: 615...  Training loss: 1.8493...  8.2951 sec/batch
Epoch: 4/20...  Training Step: 616...  Training loss: 1.8871...  8.2791 sec/batch
Epoch: 4/20...  Training Step: 617...  Training loss: 1.8554...  8.4871 sec/batch
Epoch: 4/20...  Training Step: 618...  Training loss: 1.8526...  9.3248 sec/batch
Epoch: 4/20...  Training Step: 619...  Training loss: 1.8569...  8.3743 sec/batch
Epoch: 4/20...  Training Step: 620...  Training loss: 1.8287...  8.2906 sec/batch
Epoch: 4/20...  Training Step: 621...  Training loss: 1.8177...  8.3873 sec/batch
Epoch: 4/20...  Training Step: 622...  Training loss: 1.8645...  8.4715 sec/batch
Epoch: 4/20...  Training Step: 623...  Training loss: 1.8857...  8.3663 sec/batch
Epoch: 4/20...  Training Step: 624...  Training loss: 1.8714...  8.4264 sec/batch
Epoch: 4/20...  Training Step: 625...  Training loss: 1.8412...  8.0790 sec/batch
Epoch: 4/20...  Training Step: 626...  Training loss: 1.8268...  8.0504 sec/batch
Epoch: 4/20...  Training Step: 627...  Training loss: 1.8539...  8.5608 sec/batch
Epoch: 4/20...  Training Step: 628...  Training loss: 1.8705...  8.4184 sec/batch
Epoch: 4/20...  Training Step: 629...  Training loss: 1.8213...  8.3266 sec/batch
Epoch: 4/20...  Training Step: 630...  Training loss: 1.8456...  8.4224 sec/batch
Epoch: 4/20...  Training Step: 631...  Training loss: 1.8342...  8.5462 sec/batch
Epoch: 4/20...  Training Step: 632...  Training loss: 1.8075...  8.1727 sec/batch
Epoch: 4/20...  Training Step: 633...  Training loss: 1.8030...  9.4344 sec/batch
Epoch: 4/20...  Training Step: 634...  Training loss: 1.8092...  8.6401 sec/batch
Epoch: 4/20...  Training Step: 635...  Training loss: 1.8227...  9.0503 sec/batch
Epoch: 4/20...  Training Step: 636...  Training loss: 1.8477...  7.7146 sec/batch
Epoch: 4/20...  Training Step: 637...  Training loss: 1.8156...  7.7245 sec/batch
Epoch: 4/20...  Training Step: 638...  Training loss: 1.8024...  7.7586 sec/batch
Epoch: 4/20...  Training Step: 639...  Training loss: 1.8354...  7.9306 sec/batch
Epoch: 4/20...  Training Step: 640...  Training loss: 1.7868...  7.9351 sec/batch
Epoch: 4/20...  Training Step: 641...  Training loss: 1.8302...  7.7747 sec/batch
Epoch: 4/20...  Training Step: 642...  Training loss: 1.8129...  7.8148 sec/batch
Epoch: 4/20...  Training Step: 643...  Training loss: 1.8210...  7.7802 sec/batch
Epoch: 4/20...  Training Step: 644...  Training loss: 1.8708...  7.7070 sec/batch
Epoch: 4/20...  Training Step: 645...  Training loss: 1.8027...  7.7331 sec/batch
Epoch: 4/20...  Training Step: 646...  Training loss: 1.8854...  7.6493 sec/batch
Epoch: 4/20...  Training Step: 647...  Training loss: 1.8291...  7.6097 sec/batch
Epoch: 4/20...  Training Step: 648...  Training loss: 1.8324...  7.7105 sec/batch
Epoch: 4/20...  Training Step: 649...  Training loss: 1.8165...  7.8017 sec/batch
Epoch: 4/20...  Training Step: 650...  Training loss: 1.8349...  7.6874 sec/batch
Epoch: 4/20...  Training Step: 651...  Training loss: 1.8513...  7.6343 sec/batch
Epoch: 4/20...  Training Step: 652...  Training loss: 1.8108...  7.6509 sec/batch
Epoch: 4/20...  Training Step: 653...  Training loss: 1.8099...  7.6478 sec/batch
Epoch: 4/20...  Training Step: 654...  Training loss: 1.8691...  7.6498 sec/batch
Epoch: 4/20...  Training Step: 655...  Training loss: 1.8178...  7.7291 sec/batch
Epoch: 4/20...  Training Step: 656...  Training loss: 1.8660...  8.5573 sec/batch
Epoch: 4/20...  Training Step: 657...  Training loss: 1.8586...  7.9055 sec/batch
Epoch: 4/20...  Training Step: 658...  Training loss: 1.8473...  8.3387 sec/batch
Epoch: 4/20...  Training Step: 659...  Training loss: 1.8157...  8.0173 sec/batch
Epoch: 4/20...  Training Step: 660...  Training loss: 1.8630...  8.5152 sec/batch
Epoch: 4/20...  Training Step: 661...  Training loss: 1.8478...  8.1056 sec/batch
Epoch: 4/20...  Training Step: 662...  Training loss: 1.8109...  8.1858 sec/batch
Epoch: 4/20...  Training Step: 663...  Training loss: 1.8130...  8.3397 sec/batch
Epoch: 4/20...  Training Step: 664...  Training loss: 1.8167...  8.0314 sec/batch
Epoch: 4/20...  Training Step: 665...  Training loss: 1.8583...  8.1371 sec/batch
Epoch: 4/20...  Training Step: 666...  Training loss: 1.8330...  8.0609 sec/batch
Epoch: 4/20...  Training Step: 667...  Training loss: 1.8386...  7.9943 sec/batch
Epoch: 4/20...  Training Step: 668...  Training loss: 1.8077...  8.0760 sec/batch
Epoch: 4/20...  Training Step: 669...  Training loss: 1.8167...  8.0334 sec/batch
Epoch: 4/20...  Training Step: 670...  Training loss: 1.8460...  7.9687 sec/batch
Epoch: 4/20...  Training Step: 671...  Training loss: 1.8147...  8.2886 sec/batch
Epoch: 4/20...  Training Step: 672...  Training loss: 1.8075...  8.3091 sec/batch
Epoch: 4/20...  Training Step: 673...  Training loss: 1.7850...  8.0344 sec/batch
Epoch: 4/20...  Training Step: 674...  Training loss: 1.8037...  7.8469 sec/batch
Epoch: 4/20...  Training Step: 675...  Training loss: 1.7722...  7.8449 sec/batch
Epoch: 4/20...  Training Step: 676...  Training loss: 1.8257...  7.7266 sec/batch
Epoch: 4/20...  Training Step: 677...  Training loss: 1.7726...  7.8013 sec/batch
Epoch: 4/20...  Training Step: 678...  Training loss: 1.8109...  7.8704 sec/batch
Epoch: 4/20...  Training Step: 679...  Training loss: 1.7748...  7.9100 sec/batch
Epoch: 4/20...  Training Step: 680...  Training loss: 1.8006...  7.9888 sec/batch
Epoch: 4/20...  Training Step: 681...  Training loss: 1.7987...  7.9898 sec/batch
Epoch: 4/20...  Training Step: 682...  Training loss: 1.7905...  7.8038 sec/batch
Epoch: 4/20...  Training Step: 683...  Training loss: 1.7729...  7.8855 sec/batch
Epoch: 4/20...  Training Step: 684...  Training loss: 1.8179...  8.0294 sec/batch
Epoch: 4/20...  Training Step: 685...  Training loss: 1.7844...  7.9301 sec/batch
Epoch: 4/20...  Training Step: 686...  Training loss: 1.7982...  8.2770 sec/batch
Epoch: 4/20...  Training Step: 687...  Training loss: 1.7669...  8.2971 sec/batch
Epoch: 4/20...  Training Step: 688...  Training loss: 1.7702...  8.3803 sec/batch
Epoch: 4/20...  Training Step: 689...  Training loss: 1.7803...  8.2374 sec/batch
Epoch: 4/20...  Training Step: 690...  Training loss: 1.8074...  7.9161 sec/batch
Epoch: 4/20...  Training Step: 691...  Training loss: 1.7848...  7.7426 sec/batch
Epoch: 4/20...  Training Step: 692...  Training loss: 1.7576...  7.5767 sec/batch
Epoch: 4/20...  Training Step: 693...  Training loss: 1.7718...  7.6764 sec/batch
Epoch: 4/20...  Training Step: 694...  Training loss: 1.7493...  7.8068 sec/batch
Epoch: 4/20...  Training Step: 695...  Training loss: 1.7997...  8.1397 sec/batch
Epoch: 4/20...  Training Step: 696...  Training loss: 1.7913...  7.9491 sec/batch
Epoch: 4/20...  Training Step: 697...  Training loss: 1.7814...  7.8203 sec/batch
Epoch: 4/20...  Training Step: 698...  Training loss: 1.7809...  7.9928 sec/batch
Epoch: 4/20...  Training Step: 699...  Training loss: 1.7778...  7.7571 sec/batch
Epoch: 4/20...  Training Step: 700...  Training loss: 1.7868...  7.7737 sec/batch
Epoch: 4/20...  Training Step: 701...  Training loss: 1.7831...  7.9857 sec/batch
Epoch: 4/20...  Training Step: 702...  Training loss: 1.7910...  8.0188 sec/batch
Epoch: 4/20...  Training Step: 703...  Training loss: 1.7947...  8.3678 sec/batch
Epoch: 4/20...  Training Step: 704...  Training loss: 1.7993...  8.4735 sec/batch
Epoch: 4/20...  Training Step: 705...  Training loss: 1.7878...  8.2184 sec/batch
Epoch: 4/20...  Training Step: 706...  Training loss: 1.7866...  8.0309 sec/batch
Epoch: 4/20...  Training Step: 707...  Training loss: 1.7770...  8.0123 sec/batch
Epoch: 4/20...  Training Step: 708...  Training loss: 1.7763...  7.7982 sec/batch
Epoch: 4/20...  Training Step: 709...  Training loss: 1.7617...  8.6881 sec/batch
Epoch: 4/20...  Training Step: 710...  Training loss: 1.7540...  8.7498 sec/batch
Epoch: 4/20...  Training Step: 711...  Training loss: 1.7899...  9.1127 sec/batch
Epoch: 4/20...  Training Step: 712...  Training loss: 1.7811...  8.4415 sec/batch
Epoch: 4/20...  Training Step: 713...  Training loss: 1.7758...  8.0710 sec/batch
Epoch: 4/20...  Training Step: 714...  Training loss: 1.7851...  8.0198 sec/batch
Epoch: 4/20...  Training Step: 715...  Training loss: 1.7981...  8.4179 sec/batch
Epoch: 4/20...  Training Step: 716...  Training loss: 1.7495...  8.2279 sec/batch
Epoch: 4/20...  Training Step: 717...  Training loss: 1.7427...  8.3231 sec/batch
Epoch: 4/20...  Training Step: 718...  Training loss: 1.7981...  8.4560 sec/batch
Epoch: 4/20...  Training Step: 719...  Training loss: 1.7839...  8.2214 sec/batch
Epoch: 4/20...  Training Step: 720...  Training loss: 1.7259...  8.4570 sec/batch
Epoch: 4/20...  Training Step: 721...  Training loss: 1.7877...  8.3928 sec/batch
Epoch: 4/20...  Training Step: 722...  Training loss: 1.7947...  7.9326 sec/batch
Epoch: 4/20...  Training Step: 723...  Training loss: 1.7636...  8.4816 sec/batch
Epoch: 4/20...  Training Step: 724...  Training loss: 1.7626...  8.2735 sec/batch
Epoch: 4/20...  Training Step: 725...  Training loss: 1.7480...  7.7155 sec/batch
Epoch: 4/20...  Training Step: 726...  Training loss: 1.7387...  7.6458 sec/batch
Epoch: 4/20...  Training Step: 727...  Training loss: 1.7781...  7.8739 sec/batch
Epoch: 4/20...  Training Step: 728...  Training loss: 1.7782...  8.0063 sec/batch
Epoch: 4/20...  Training Step: 729...  Training loss: 1.7694...  7.8183 sec/batch
Epoch: 4/20...  Training Step: 730...  Training loss: 1.7665...  7.7170 sec/batch
Epoch: 4/20...  Training Step: 731...  Training loss: 1.7834...  8.1552 sec/batch
Epoch: 4/20...  Training Step: 732...  Training loss: 1.7757...  8.2640 sec/batch
Epoch: 4/20...  Training Step: 733...  Training loss: 1.8005...  8.3272 sec/batch
Epoch: 4/20...  Training Step: 734...  Training loss: 1.7596...  8.2106 sec/batch
Epoch: 4/20...  Training Step: 735...  Training loss: 1.8100...  8.0354 sec/batch
Epoch: 4/20...  Training Step: 736...  Training loss: 1.7561...  7.9817 sec/batch
Epoch: 4/20...  Training Step: 737...  Training loss: 1.7668...  7.5777 sec/batch
Epoch: 4/20...  Training Step: 738...  Training loss: 1.7737...  7.8213 sec/batch
Epoch: 4/20...  Training Step: 739...  Training loss: 1.7502...  7.8950 sec/batch
Epoch: 4/20...  Training Step: 740...  Training loss: 1.7670...  8.2535 sec/batch
Epoch: 4/20...  Training Step: 741...  Training loss: 1.7721...  8.5698 sec/batch
Epoch: 4/20...  Training Step: 742...  Training loss: 1.7877...  8.1492 sec/batch
Epoch: 4/20...  Training Step: 743...  Training loss: 1.7716...  7.6042 sec/batch
Epoch: 4/20...  Training Step: 744...  Training loss: 1.7530...  8.0359 sec/batch
Epoch: 4/20...  Training Step: 745...  Training loss: 1.7502...  8.6184 sec/batch
Epoch: 4/20...  Training Step: 746...  Training loss: 1.7820...  8.1702 sec/batch
Epoch: 4/20...  Training Step: 747...  Training loss: 1.7775...  8.4675 sec/batch
Epoch: 4/20...  Training Step: 748...  Training loss: 1.7745...  7.9787 sec/batch
Epoch: 4/20...  Training Step: 749...  Training loss: 1.7683...  8.1201 sec/batch
Epoch: 4/20...  Training Step: 750...  Training loss: 1.7643...  7.9847 sec/batch
Epoch: 4/20...  Training Step: 751...  Training loss: 1.7764...  7.9477 sec/batch
Epoch: 4/20...  Training Step: 752...  Training loss: 1.7575...  7.7807 sec/batch
Epoch: 4/20...  Training Step: 753...  Training loss: 1.7169...  7.7185 sec/batch
Epoch: 4/20...  Training Step: 754...  Training loss: 1.7910...  7.6869 sec/batch
Epoch: 4/20...  Training Step: 755...  Training loss: 1.7878...  7.9151 sec/batch
Epoch: 4/20...  Training Step: 756...  Training loss: 1.7545...  7.9322 sec/batch
Epoch: 4/20...  Training Step: 757...  Training loss: 1.7706...  7.7952 sec/batch
Epoch: 4/20...  Training Step: 758...  Training loss: 1.7591...  7.6388 sec/batch
Epoch: 4/20...  Training Step: 759...  Training loss: 1.7598...  8.2752 sec/batch
Epoch: 4/20...  Training Step: 760...  Training loss: 1.7531...  7.8915 sec/batch
Epoch: 4/20...  Training Step: 761...  Training loss: 1.7674...  7.8775 sec/batch
Epoch: 4/20...  Training Step: 762...  Training loss: 1.8168...  7.9226 sec/batch
Epoch: 4/20...  Training Step: 763...  Training loss: 1.7515...  7.9151 sec/batch
Epoch: 4/20...  Training Step: 764...  Training loss: 1.7461...  7.9095 sec/batch
Epoch: 4/20...  Training Step: 765...  Training loss: 1.7316...  7.8890 sec/batch
Epoch: 4/20...  Training Step: 766...  Training loss: 1.7403...  7.7927 sec/batch
Epoch: 4/20...  Training Step: 767...  Training loss: 1.7739...  7.8825 sec/batch
Epoch: 4/20...  Training Step: 768...  Training loss: 1.7554...  7.7872 sec/batch
Epoch: 4/20...  Training Step: 769...  Training loss: 1.7518...  7.5481 sec/batch
Epoch: 4/20...  Training Step: 770...  Training loss: 1.7483...  7.9827 sec/batch
Epoch: 4/20...  Training Step: 771...  Training loss: 1.7283...  8.6756 sec/batch
Epoch: 4/20...  Training Step: 772...  Training loss: 1.7578...  8.1477 sec/batch
Epoch: 4/20...  Training Step: 773...  Training loss: 1.7286...  8.0910 sec/batch
Epoch: 4/20...  Training Step: 774...  Training loss: 1.7170...  8.1863 sec/batch
Epoch: 4/20...  Training Step: 775...  Training loss: 1.7187...  7.8208 sec/batch
Epoch: 4/20...  Training Step: 776...  Training loss: 1.7412...  7.7631 sec/batch
Epoch: 4/20...  Training Step: 777...  Training loss: 1.7391...  7.6323 sec/batch
Epoch: 4/20...  Training Step: 778...  Training loss: 1.7557...  8.3116 sec/batch
Epoch: 4/20...  Training Step: 779...  Training loss: 1.7442...  7.8674 sec/batch
Epoch: 4/20...  Training Step: 780...  Training loss: 1.7246...  8.0208 sec/batch
Epoch: 4/20...  Training Step: 781...  Training loss: 1.7568...  8.0640 sec/batch
Epoch: 4/20...  Training Step: 782...  Training loss: 1.7194...  7.9527 sec/batch
Epoch: 4/20...  Training Step: 783...  Training loss: 1.7277...  8.1371 sec/batch
Epoch: 4/20...  Training Step: 784...  Training loss: 1.7390...  8.1467 sec/batch
Epoch: 4/20...  Training Step: 785...  Training loss: 1.7416...  8.2068 sec/batch
Epoch: 4/20...  Training Step: 786...  Training loss: 1.7227...  7.9852 sec/batch
Epoch: 4/20...  Training Step: 787...  Training loss: 1.7358...  7.8644 sec/batch
Epoch: 4/20...  Training Step: 788...  Training loss: 1.7129...  8.2479 sec/batch
Epoch: 4/20...  Training Step: 789...  Training loss: 1.7050...  8.0344 sec/batch
Epoch: 4/20...  Training Step: 790...  Training loss: 1.7445...  8.4264 sec/batch
Epoch: 4/20...  Training Step: 791...  Training loss: 1.7358...  9.2025 sec/batch
Epoch: 4/20...  Training Step: 792...  Training loss: 1.7101...  8.3708 sec/batch
Epoch: 5/20...  Training Step: 793...  Training loss: 1.8256...  8.4675 sec/batch
Epoch: 5/20...  Training Step: 794...  Training loss: 1.7210...  9.2195 sec/batch
Epoch: 5/20...  Training Step: 795...  Training loss: 1.7054...  10.6774 sec/batch
Epoch: 5/20...  Training Step: 796...  Training loss: 1.7171...  8.7347 sec/batch
Epoch: 5/20...  Training Step: 797...  Training loss: 1.7211...  8.2023 sec/batch
Epoch: 5/20...  Training Step: 798...  Training loss: 1.6745...  7.9251 sec/batch
Epoch: 5/20...  Training Step: 799...  Training loss: 1.7214...  7.8955 sec/batch
Epoch: 5/20...  Training Step: 800...  Training loss: 1.7118...  7.8905 sec/batch
Epoch: 5/20...  Training Step: 801...  Training loss: 1.7463...  7.9822 sec/batch
Epoch: 5/20...  Training Step: 802...  Training loss: 1.7092...  8.4425 sec/batch
Epoch: 5/20...  Training Step: 803...  Training loss: 1.6993...  8.2234 sec/batch
Epoch: 5/20...  Training Step: 804...  Training loss: 1.7054...  8.1839 sec/batch
Epoch: 5/20...  Training Step: 805...  Training loss: 1.7186...  7.9692 sec/batch
Epoch: 5/20...  Training Step: 806...  Training loss: 1.7682...  7.8785 sec/batch
Epoch: 5/20...  Training Step: 807...  Training loss: 1.7087...  7.8554 sec/batch
Epoch: 5/20...  Training Step: 808...  Training loss: 1.7064...  7.9973 sec/batch
Epoch: 5/20...  Training Step: 809...  Training loss: 1.7230...  7.8173 sec/batch
Epoch: 5/20...  Training Step: 810...  Training loss: 1.7464...  8.3171 sec/batch
Epoch: 5/20...  Training Step: 811...  Training loss: 1.7270...  7.7652 sec/batch
Epoch: 5/20...  Training Step: 812...  Training loss: 1.7233...  7.7762 sec/batch
Epoch: 5/20...  Training Step: 813...  Training loss: 1.7087...  7.9080 sec/batch
Epoch: 5/20...  Training Step: 814...  Training loss: 1.7485...  7.9722 sec/batch
Epoch: 5/20...  Training Step: 815...  Training loss: 1.7158...  7.8038 sec/batch
Epoch: 5/20...  Training Step: 816...  Training loss: 1.7215...  7.8198 sec/batch
Epoch: 5/20...  Training Step: 817...  Training loss: 1.7198...  7.9271 sec/batch
Epoch: 5/20...  Training Step: 818...  Training loss: 1.6857...  7.9471 sec/batch
Epoch: 5/20...  Training Step: 819...  Training loss: 1.6864...  7.8609 sec/batch
Epoch: 5/20...  Training Step: 820...  Training loss: 1.7165...  7.9031 sec/batch
Epoch: 5/20...  Training Step: 821...  Training loss: 1.7452...  8.0645 sec/batch
Epoch: 5/20...  Training Step: 822...  Training loss: 1.7311...  9.5248 sec/batch
Epoch: 5/20...  Training Step: 823...  Training loss: 1.7046...  8.1206 sec/batch
Epoch: 5/20...  Training Step: 824...  Training loss: 1.6919...  8.3833 sec/batch
Epoch: 5/20...  Training Step: 825...  Training loss: 1.7313...  8.2931 sec/batch
Epoch: 5/20...  Training Step: 826...  Training loss: 1.7238...  8.3392 sec/batch
Epoch: 5/20...  Training Step: 827...  Training loss: 1.7040...  8.7859 sec/batch
Epoch: 5/20...  Training Step: 828...  Training loss: 1.7148...  9.2306 sec/batch
Epoch: 5/20...  Training Step: 829...  Training loss: 1.6916...  10.0675 sec/batch
Epoch: 5/20...  Training Step: 830...  Training loss: 1.6659...  9.0661 sec/batch
Epoch: 5/20...  Training Step: 831...  Training loss: 1.6589...  8.4812 sec/batch
Epoch: 5/20...  Training Step: 832...  Training loss: 1.6824...  8.5563 sec/batch
Epoch: 5/20...  Training Step: 833...  Training loss: 1.6881...  8.1793 sec/batch
Epoch: 5/20...  Training Step: 834...  Training loss: 1.7331...  8.4946 sec/batch
Epoch: 5/20...  Training Step: 835...  Training loss: 1.6856...  9.2556 sec/batch
Epoch: 5/20...  Training Step: 836...  Training loss: 1.6713...  8.3903 sec/batch
Epoch: 5/20...  Training Step: 837...  Training loss: 1.7229...  8.1096 sec/batch
Epoch: 5/20...  Training Step: 838...  Training loss: 1.6655...  8.1687 sec/batch
Epoch: 5/20...  Training Step: 839...  Training loss: 1.6975...  8.3417 sec/batch
Epoch: 5/20...  Training Step: 840...  Training loss: 1.6888...  8.2936 sec/batch
Epoch: 5/20...  Training Step: 841...  Training loss: 1.6904...  8.6069 sec/batch
Epoch: 5/20...  Training Step: 842...  Training loss: 1.7347...  8.4019 sec/batch
Epoch: 5/20...  Training Step: 843...  Training loss: 1.6748...  8.1913 sec/batch
Epoch: 5/20...  Training Step: 844...  Training loss: 1.7632...  8.1763 sec/batch
Epoch: 5/20...  Training Step: 845...  Training loss: 1.7036...  8.0534 sec/batch
Epoch: 5/20...  Training Step: 846...  Training loss: 1.6993...  8.7187 sec/batch
Epoch: 5/20...  Training Step: 847...  Training loss: 1.6974...  9.4386 sec/batch
Epoch: 5/20...  Training Step: 848...  Training loss: 1.7147...  8.1467 sec/batch
Epoch: 5/20...  Training Step: 849...  Training loss: 1.7201...  7.9632 sec/batch
Epoch: 5/20...  Training Step: 850...  Training loss: 1.6824...  7.9547 sec/batch
Epoch: 5/20...  Training Step: 851...  Training loss: 1.6844...  7.7637 sec/batch
Epoch: 5/20...  Training Step: 852...  Training loss: 1.7303...  7.9792 sec/batch
Epoch: 5/20...  Training Step: 853...  Training loss: 1.7013...  8.4695 sec/batch
Epoch: 5/20...  Training Step: 854...  Training loss: 1.7458...  7.8303 sec/batch
Epoch: 5/20...  Training Step: 855...  Training loss: 1.7275...  8.1803 sec/batch
Epoch: 5/20...  Training Step: 856...  Training loss: 1.7149...  8.0304 sec/batch
Epoch: 5/20...  Training Step: 857...  Training loss: 1.6884...  7.9346 sec/batch
Epoch: 5/20...  Training Step: 858...  Training loss: 1.7165...  8.1883 sec/batch
Epoch: 5/20...  Training Step: 859...  Training loss: 1.7009...  8.0015 sec/batch
Epoch: 5/20...  Training Step: 860...  Training loss: 1.6841...  7.9752 sec/batch
Epoch: 5/20...  Training Step: 861...  Training loss: 1.6912...  7.8128 sec/batch
Epoch: 5/20...  Training Step: 862...  Training loss: 1.6791...  7.8363 sec/batch
Epoch: 5/20...  Training Step: 863...  Training loss: 1.7363...  8.1677 sec/batch
Epoch: 5/20...  Training Step: 864...  Training loss: 1.7123...  8.2164 sec/batch
Epoch: 5/20...  Training Step: 865...  Training loss: 1.7250...  8.2169 sec/batch
Epoch: 5/20...  Training Step: 866...  Training loss: 1.6732...  8.2058 sec/batch
Epoch: 5/20...  Training Step: 867...  Training loss: 1.6890...  7.5767 sec/batch
Epoch: 5/20...  Training Step: 868...  Training loss: 1.7148...  7.7606 sec/batch
Epoch: 5/20...  Training Step: 869...  Training loss: 1.6885...  8.2675 sec/batch
Epoch: 5/20...  Training Step: 870...  Training loss: 1.6935...  8.2309 sec/batch
Epoch: 5/20...  Training Step: 871...  Training loss: 1.6508...  8.2755 sec/batch
Epoch: 5/20...  Training Step: 872...  Training loss: 1.6848...  7.8338 sec/batch
Epoch: 5/20...  Training Step: 873...  Training loss: 1.6435...  8.0058 sec/batch
Epoch: 5/20...  Training Step: 874...  Training loss: 1.6947...  8.2033 sec/batch
Epoch: 5/20...  Training Step: 875...  Training loss: 1.6398...  8.0579 sec/batch
Epoch: 5/20...  Training Step: 876...  Training loss: 1.6800...  8.2795 sec/batch
Epoch: 5/20...  Training Step: 877...  Training loss: 1.6502...  8.0008 sec/batch
Epoch: 5/20...  Training Step: 878...  Training loss: 1.6627...  8.1487 sec/batch
Epoch: 5/20...  Training Step: 879...  Training loss: 1.6534...  8.2108 sec/batch
Epoch: 5/20...  Training Step: 880...  Training loss: 1.6543...  7.8885 sec/batch
Epoch: 5/20...  Training Step: 881...  Training loss: 1.6405...  8.2058 sec/batch
Epoch: 5/20...  Training Step: 882...  Training loss: 1.7024...  8.2880 sec/batch
Epoch: 5/20...  Training Step: 883...  Training loss: 1.6621...  8.2424 sec/batch
Epoch: 5/20...  Training Step: 884...  Training loss: 1.6746...  8.1768 sec/batch
Epoch: 5/20...  Training Step: 885...  Training loss: 1.6542...  8.0770 sec/batch
Epoch: 5/20...  Training Step: 886...  Training loss: 1.6605...  8.3056 sec/batch
Epoch: 5/20...  Training Step: 887...  Training loss: 1.6576...  8.0835 sec/batch
Epoch: 5/20...  Training Step: 888...  Training loss: 1.6849...  8.1301 sec/batch
Epoch: 5/20...  Training Step: 889...  Training loss: 1.6820...  8.1336 sec/batch
Epoch: 5/20...  Training Step: 890...  Training loss: 1.6377...  7.9817 sec/batch
Epoch: 5/20...  Training Step: 891...  Training loss: 1.6575...  8.2931 sec/batch
Epoch: 5/20...  Training Step: 892...  Training loss: 1.6317...  7.8599 sec/batch
Epoch: 5/20...  Training Step: 893...  Training loss: 1.6796...  7.6042 sec/batch
Epoch: 5/20...  Training Step: 894...  Training loss: 1.6693...  7.6153 sec/batch
Epoch: 5/20...  Training Step: 895...  Training loss: 1.6670...  7.6012 sec/batch
Epoch: 5/20...  Training Step: 896...  Training loss: 1.6703...  7.6684 sec/batch
Epoch: 5/20...  Training Step: 897...  Training loss: 1.6635...  7.9647 sec/batch
Epoch: 5/20...  Training Step: 898...  Training loss: 1.6646...  7.7667 sec/batch
Epoch: 5/20...  Training Step: 899...  Training loss: 1.6644...  7.7486 sec/batch
Epoch: 5/20...  Training Step: 900...  Training loss: 1.6720...  7.6498 sec/batch
Epoch: 5/20...  Training Step: 901...  Training loss: 1.6668...  7.7421 sec/batch
Epoch: 5/20...  Training Step: 902...  Training loss: 1.6831...  7.9361 sec/batch
Epoch: 5/20...  Training Step: 903...  Training loss: 1.6656...  7.9923 sec/batch
Epoch: 5/20...  Training Step: 904...  Training loss: 1.6686...  8.3001 sec/batch
Epoch: 5/20...  Training Step: 905...  Training loss: 1.6653...  8.1116 sec/batch
Epoch: 5/20...  Training Step: 906...  Training loss: 1.6520...  7.9306 sec/batch
Epoch: 5/20...  Training Step: 907...  Training loss: 1.6434...  8.2844 sec/batch
Epoch: 5/20...  Training Step: 908...  Training loss: 1.6275...  8.1176 sec/batch
Epoch: 5/20...  Training Step: 909...  Training loss: 1.6667...  8.3668 sec/batch
Epoch: 5/20...  Training Step: 910...  Training loss: 1.6533...  8.3753 sec/batch
Epoch: 5/20...  Training Step: 911...  Training loss: 1.6627...  7.8584 sec/batch
Epoch: 5/20...  Training Step: 912...  Training loss: 1.6606...  8.1146 sec/batch
Epoch: 5/20...  Training Step: 913...  Training loss: 1.6634...  8.2284 sec/batch
Epoch: 5/20...  Training Step: 914...  Training loss: 1.6215...  8.1998 sec/batch
Epoch: 5/20...  Training Step: 915...  Training loss: 1.6233...  8.6806 sec/batch
Epoch: 5/20...  Training Step: 916...  Training loss: 1.6748...  8.2741 sec/batch
Epoch: 5/20...  Training Step: 917...  Training loss: 1.6559...  8.1963 sec/batch
Epoch: 5/20...  Training Step: 918...  Training loss: 1.6133...  7.9847 sec/batch
Epoch: 5/20...  Training Step: 919...  Training loss: 1.6713...  8.2946 sec/batch
Epoch: 5/20...  Training Step: 920...  Training loss: 1.6718...  7.9060 sec/batch
Epoch: 5/20...  Training Step: 921...  Training loss: 1.6541...  8.0544 sec/batch
Epoch: 5/20...  Training Step: 922...  Training loss: 1.6297...  8.6696 sec/batch
Epoch: 5/20...  Training Step: 923...  Training loss: 1.6216...  8.4074 sec/batch
Epoch: 5/20...  Training Step: 924...  Training loss: 1.6338...  7.8028 sec/batch
Epoch: 5/20...  Training Step: 925...  Training loss: 1.6734...  7.9767 sec/batch
Epoch: 5/20...  Training Step: 926...  Training loss: 1.6651...  7.8749 sec/batch
Epoch: 5/20...  Training Step: 927...  Training loss: 1.6570...  7.9577 sec/batch
Epoch: 5/20...  Training Step: 928...  Training loss: 1.6573...  7.7306 sec/batch
Epoch: 5/20...  Training Step: 929...  Training loss: 1.6768...  7.9908 sec/batch
Epoch: 5/20...  Training Step: 930...  Training loss: 1.6657...  8.0579 sec/batch
Epoch: 5/20...  Training Step: 931...  Training loss: 1.6692...  8.2434 sec/batch
Epoch: 5/20...  Training Step: 932...  Training loss: 1.6491...  7.8328 sec/batch
Epoch: 5/20...  Training Step: 933...  Training loss: 1.6968...  7.7747 sec/batch
Epoch: 5/20...  Training Step: 934...  Training loss: 1.6491...  8.1527 sec/batch
Epoch: 5/20...  Training Step: 935...  Training loss: 1.6471...  7.8454 sec/batch
Epoch: 5/20...  Training Step: 936...  Training loss: 1.6709...  7.5401 sec/batch
Epoch: 5/20...  Training Step: 937...  Training loss: 1.6416...  7.9893 sec/batch
Epoch: 5/20...  Training Step: 938...  Training loss: 1.6561...  7.8469 sec/batch
Epoch: 5/20...  Training Step: 939...  Training loss: 1.6624...  7.6233 sec/batch
Epoch: 5/20...  Training Step: 940...  Training loss: 1.6960...  7.6298 sec/batch
Epoch: 5/20...  Training Step: 941...  Training loss: 1.6630...  7.7411 sec/batch
Epoch: 5/20...  Training Step: 942...  Training loss: 1.6385...  7.8810 sec/batch
Epoch: 5/20...  Training Step: 943...  Training loss: 1.6186...  7.9426 sec/batch
Epoch: 5/20...  Training Step: 944...  Training loss: 1.6555...  8.4400 sec/batch
Epoch: 5/20...  Training Step: 945...  Training loss: 1.6552...  8.4364 sec/batch
Epoch: 5/20...  Training Step: 946...  Training loss: 1.6561...  7.8830 sec/batch
Epoch: 5/20...  Training Step: 947...  Training loss: 1.6504...  8.7162 sec/batch
Epoch: 5/20...  Training Step: 948...  Training loss: 1.6526...  8.1286 sec/batch
Epoch: 5/20...  Training Step: 949...  Training loss: 1.6549...  7.8168 sec/batch
Epoch: 5/20...  Training Step: 950...  Training loss: 1.6353...  7.9667 sec/batch
Epoch: 5/20...  Training Step: 951...  Training loss: 1.6071...  7.8168 sec/batch
Epoch: 5/20...  Training Step: 952...  Training loss: 1.6803...  7.9632 sec/batch
Epoch: 5/20...  Training Step: 953...  Training loss: 1.6891...  7.7561 sec/batch
Epoch: 5/20...  Training Step: 954...  Training loss: 1.6496...  7.8454 sec/batch
Epoch: 5/20...  Training Step: 955...  Training loss: 1.6656...  7.9672 sec/batch
Epoch: 5/20...  Training Step: 956...  Training loss: 1.6498...  7.8760 sec/batch
Epoch: 5/20...  Training Step: 957...  Training loss: 1.6525...  7.7962 sec/batch
Epoch: 5/20...  Training Step: 958...  Training loss: 1.6453...  7.9401 sec/batch
Epoch: 5/20...  Training Step: 959...  Training loss: 1.6691...  7.9376 sec/batch
Epoch: 5/20...  Training Step: 960...  Training loss: 1.7242...  8.0168 sec/batch
Epoch: 5/20...  Training Step: 961...  Training loss: 1.6346...  8.0730 sec/batch
Epoch: 5/20...  Training Step: 962...  Training loss: 1.6503...  7.8749 sec/batch
Epoch: 5/20...  Training Step: 963...  Training loss: 1.6388...  8.0549 sec/batch
Epoch: 5/20...  Training Step: 964...  Training loss: 1.6282...  7.7977 sec/batch
Epoch: 5/20...  Training Step: 965...  Training loss: 1.6733...  7.7576 sec/batch
Epoch: 5/20...  Training Step: 966...  Training loss: 1.6515...  7.8584 sec/batch
Epoch: 5/20...  Training Step: 967...  Training loss: 1.6630...  7.9131 sec/batch
Epoch: 5/20...  Training Step: 968...  Training loss: 1.6263...  8.6229 sec/batch
Epoch: 5/20...  Training Step: 969...  Training loss: 1.6198...  8.1582 sec/batch
Epoch: 5/20...  Training Step: 970...  Training loss: 1.6630...  8.1231 sec/batch
Epoch: 5/20...  Training Step: 971...  Training loss: 1.6199...  8.1286 sec/batch
Epoch: 5/20...  Training Step: 972...  Training loss: 1.6113...  7.9532 sec/batch
Epoch: 5/20...  Training Step: 973...  Training loss: 1.6073...  7.7391 sec/batch
Epoch: 5/20...  Training Step: 974...  Training loss: 1.6420...  8.0780 sec/batch
Epoch: 5/20...  Training Step: 975...  Training loss: 1.6326...  7.6784 sec/batch
Epoch: 5/20...  Training Step: 976...  Training loss: 1.6430...  7.6353 sec/batch
Epoch: 5/20...  Training Step: 977...  Training loss: 1.6363...  7.7326 sec/batch
Epoch: 5/20...  Training Step: 978...  Training loss: 1.6112...  7.9161 sec/batch
Epoch: 5/20...  Training Step: 979...  Training loss: 1.6518...  7.8218 sec/batch
Epoch: 5/20...  Training Step: 980...  Training loss: 1.6265...  7.9276 sec/batch
Epoch: 5/20...  Training Step: 981...  Training loss: 1.6342...  7.6534 sec/batch
Epoch: 5/20...  Training Step: 982...  Training loss: 1.6434...  7.9903 sec/batch
Epoch: 5/20...  Training Step: 983...  Training loss: 1.6296...  8.2635 sec/batch
Epoch: 5/20...  Training Step: 984...  Training loss: 1.6110...  8.1853 sec/batch
Epoch: 5/20...  Training Step: 985...  Training loss: 1.6324...  8.1321 sec/batch
Epoch: 5/20...  Training Step: 986...  Training loss: 1.6117...  8.4862 sec/batch
Epoch: 5/20...  Training Step: 987...  Training loss: 1.5984...  8.1101 sec/batch
Epoch: 5/20...  Training Step: 988...  Training loss: 1.6367...  8.0790 sec/batch
Epoch: 5/20...  Training Step: 989...  Training loss: 1.6182...  8.1041 sec/batch
Epoch: 5/20...  Training Step: 990...  Training loss: 1.6172...  8.2108 sec/batch
Epoch: 6/20...  Training Step: 991...  Training loss: 1.7226...  7.6880 sec/batch
Epoch: 6/20...  Training Step: 992...  Training loss: 1.6224...  7.5877 sec/batch
Epoch: 6/20...  Training Step: 993...  Training loss: 1.6173...  8.0309 sec/batch
Epoch: 6/20...  Training Step: 994...  Training loss: 1.6354...  8.1136 sec/batch
Epoch: 6/20...  Training Step: 995...  Training loss: 1.6168...  7.8073 sec/batch
Epoch: 6/20...  Training Step: 996...  Training loss: 1.5800...  7.8093 sec/batch
Epoch: 6/20...  Training Step: 997...  Training loss: 1.6473...  7.8865 sec/batch
Epoch: 6/20...  Training Step: 998...  Training loss: 1.6146...  7.6995 sec/batch
Epoch: 6/20...  Training Step: 999...  Training loss: 1.6488...  7.6378 sec/batch
Epoch: 6/20...  Training Step: 1000...  Training loss: 1.6211...  7.7195 sec/batch
Epoch: 6/20...  Training Step: 1001...  Training loss: 1.6001...  7.7384 sec/batch
Epoch: 6/20...  Training Step: 1002...  Training loss: 1.5984...  7.8258 sec/batch
Epoch: 6/20...  Training Step: 1003...  Training loss: 1.6171...  7.9878 sec/batch
Epoch: 6/20...  Training Step: 1004...  Training loss: 1.6613...  7.9331 sec/batch
Epoch: 6/20...  Training Step: 1005...  Training loss: 1.6176...  7.8840 sec/batch
Epoch: 6/20...  Training Step: 1006...  Training loss: 1.6043...  7.8143 sec/batch
Epoch: 6/20...  Training Step: 1007...  Training loss: 1.6286...  7.8384 sec/batch
Epoch: 6/20...  Training Step: 1008...  Training loss: 1.6579...  8.0243 sec/batch
Epoch: 6/20...  Training Step: 1009...  Training loss: 1.6283...  8.0690 sec/batch
Epoch: 6/20...  Training Step: 1010...  Training loss: 1.6387...  8.2840 sec/batch
Epoch: 6/20...  Training Step: 1011...  Training loss: 1.6167...  7.8564 sec/batch
Epoch: 6/20...  Training Step: 1012...  Training loss: 1.6421...  7.7882 sec/batch
Epoch: 6/20...  Training Step: 1013...  Training loss: 1.6132...  7.8243 sec/batch
Epoch: 6/20...  Training Step: 1014...  Training loss: 1.6232...  7.7727 sec/batch
Epoch: 6/20...  Training Step: 1015...  Training loss: 1.6257...  7.8514 sec/batch
Epoch: 6/20...  Training Step: 1016...  Training loss: 1.5914...  8.3201 sec/batch
Epoch: 6/20...  Training Step: 1017...  Training loss: 1.5835...  8.1201 sec/batch
Epoch: 6/20...  Training Step: 1018...  Training loss: 1.6306...  7.6173 sec/batch
Epoch: 6/20...  Training Step: 1019...  Training loss: 1.6370...  7.5787 sec/batch
Epoch: 6/20...  Training Step: 1020...  Training loss: 1.6235...  7.7005 sec/batch
Epoch: 6/20...  Training Step: 1021...  Training loss: 1.6178...  7.7416 sec/batch
Epoch: 6/20...  Training Step: 1022...  Training loss: 1.5843...  7.5511 sec/batch
Epoch: 6/20...  Training Step: 1023...  Training loss: 1.6287...  8.2088 sec/batch
Epoch: 6/20...  Training Step: 1024...  Training loss: 1.6215...  8.3666 sec/batch
Epoch: 6/20...  Training Step: 1025...  Training loss: 1.5954...  8.3342 sec/batch
Epoch: 6/20...  Training Step: 1026...  Training loss: 1.6039...  7.6283 sec/batch
Epoch: 6/20...  Training Step: 1027...  Training loss: 1.5873...  8.1306 sec/batch
Epoch: 6/20...  Training Step: 1028...  Training loss: 1.5628...  8.1557 sec/batch
Epoch: 6/20...  Training Step: 1029...  Training loss: 1.5524...  7.7932 sec/batch
Epoch: 6/20...  Training Step: 1030...  Training loss: 1.5721...  7.8910 sec/batch
Epoch: 6/20...  Training Step: 1031...  Training loss: 1.5953...  7.9552 sec/batch
Epoch: 6/20...  Training Step: 1032...  Training loss: 1.6312...  7.6879 sec/batch
Epoch: 6/20...  Training Step: 1033...  Training loss: 1.5872...  8.1773 sec/batch
Epoch: 6/20...  Training Step: 1034...  Training loss: 1.5640...  7.6549 sec/batch
Epoch: 6/20...  Training Step: 1035...  Training loss: 1.6069...  7.8905 sec/batch
Epoch: 6/20...  Training Step: 1036...  Training loss: 1.5583...  8.0725 sec/batch
Epoch: 6/20...  Training Step: 1037...  Training loss: 1.5960...  7.9075 sec/batch
Epoch: 6/20...  Training Step: 1038...  Training loss: 1.5839...  8.1366 sec/batch
Epoch: 6/20...  Training Step: 1039...  Training loss: 1.5856...  8.0579 sec/batch
Epoch: 6/20...  Training Step: 1040...  Training loss: 1.6339...  8.0178 sec/batch
Epoch: 6/20...  Training Step: 1041...  Training loss: 1.5743...  8.0705 sec/batch
Epoch: 6/20...  Training Step: 1042...  Training loss: 1.6576...  8.0128 sec/batch
Epoch: 6/20...  Training Step: 1043...  Training loss: 1.5927...  8.0193 sec/batch
Epoch: 6/20...  Training Step: 1044...  Training loss: 1.6049...  7.6779 sec/batch
Epoch: 6/20...  Training Step: 1045...  Training loss: 1.5897...  7.8444 sec/batch
Epoch: 6/20...  Training Step: 1046...  Training loss: 1.6030...  7.8975 sec/batch
Epoch: 6/20...  Training Step: 1047...  Training loss: 1.6388...  8.1432 sec/batch
Epoch: 6/20...  Training Step: 1048...  Training loss: 1.5959...  8.0198 sec/batch
Epoch: 6/20...  Training Step: 1049...  Training loss: 1.5773...  7.9642 sec/batch
Epoch: 6/20...  Training Step: 1050...  Training loss: 1.6260...  8.2635 sec/batch
Epoch: 6/20...  Training Step: 1051...  Training loss: 1.5977...  8.2635 sec/batch
Epoch: 6/20...  Training Step: 1052...  Training loss: 1.6427...  8.1737 sec/batch
Epoch: 6/20...  Training Step: 1053...  Training loss: 1.6315...  8.1542 sec/batch
Epoch: 6/20...  Training Step: 1054...  Training loss: 1.6173...  8.2499 sec/batch
Epoch: 6/20...  Training Step: 1055...  Training loss: 1.5855...  8.6480 sec/batch
Epoch: 6/20...  Training Step: 1056...  Training loss: 1.6028...  9.1473 sec/batch
Epoch: 6/20...  Training Step: 1057...  Training loss: 1.6049...  9.3285 sec/batch
Epoch: 6/20...  Training Step: 1058...  Training loss: 1.5787...  8.5778 sec/batch
Epoch: 6/20...  Training Step: 1059...  Training loss: 1.5956...  9.2928 sec/batch
Epoch: 6/20...  Training Step: 1060...  Training loss: 1.5953...  9.1201 sec/batch
Epoch: 6/20...  Training Step: 1061...  Training loss: 1.6291...  9.1243 sec/batch
Epoch: 6/20...  Training Step: 1062...  Training loss: 1.6088...  9.6462 sec/batch
Epoch: 6/20...  Training Step: 1063...  Training loss: 1.6242...  8.5177 sec/batch
Epoch: 6/20...  Training Step: 1064...  Training loss: 1.5753...  8.4535 sec/batch
Epoch: 6/20...  Training Step: 1065...  Training loss: 1.5908...  8.4344 sec/batch
Epoch: 6/20...  Training Step: 1066...  Training loss: 1.6053...  8.1983 sec/batch
Epoch: 6/20...  Training Step: 1067...  Training loss: 1.5807...  8.2464 sec/batch
Epoch: 6/20...  Training Step: 1068...  Training loss: 1.5891...  7.9873 sec/batch
Epoch: 6/20...  Training Step: 1069...  Training loss: 1.5478...  7.8353 sec/batch
Epoch: 6/20...  Training Step: 1070...  Training loss: 1.5863...  8.1442 sec/batch
Epoch: 6/20...  Training Step: 1071...  Training loss: 1.5494...  8.1883 sec/batch
Epoch: 6/20...  Training Step: 1072...  Training loss: 1.5908...  8.2123 sec/batch
Epoch: 6/20...  Training Step: 1073...  Training loss: 1.5432...  7.9842 sec/batch
Epoch: 6/20...  Training Step: 1074...  Training loss: 1.5823...  7.9647 sec/batch
Epoch: 6/20...  Training Step: 1075...  Training loss: 1.5607...  7.8569 sec/batch
Epoch: 6/20...  Training Step: 1076...  Training loss: 1.5841...  7.8494 sec/batch
Epoch: 6/20...  Training Step: 1077...  Training loss: 1.5548...  7.9491 sec/batch
Epoch: 6/20...  Training Step: 1078...  Training loss: 1.5667...  7.6564 sec/batch
Epoch: 6/20...  Training Step: 1079...  Training loss: 1.5491...  7.7165 sec/batch
Epoch: 6/20...  Training Step: 1080...  Training loss: 1.5936...  7.8384 sec/batch
Epoch: 6/20...  Training Step: 1081...  Training loss: 1.5525...  7.7753 sec/batch
Epoch: 6/20...  Training Step: 1082...  Training loss: 1.5606...  7.7742 sec/batch
Epoch: 6/20...  Training Step: 1083...  Training loss: 1.5526...  7.7115 sec/batch
Epoch: 6/20...  Training Step: 1084...  Training loss: 1.5640...  8.1361 sec/batch
Epoch: 6/20...  Training Step: 1085...  Training loss: 1.5714...  8.3211 sec/batch
Epoch: 6/20...  Training Step: 1086...  Training loss: 1.6046...  8.4550 sec/batch
Epoch: 6/20...  Training Step: 1087...  Training loss: 1.5901...  8.2795 sec/batch
Epoch: 6/20...  Training Step: 1088...  Training loss: 1.5538...  7.7416 sec/batch
Epoch: 6/20...  Training Step: 1089...  Training loss: 1.5652...  7.8263 sec/batch
Epoch: 6/20...  Training Step: 1090...  Training loss: 1.5503...  7.8950 sec/batch
Epoch: 6/20...  Training Step: 1091...  Training loss: 1.5937...  8.6244 sec/batch
Epoch: 6/20...  Training Step: 1092...  Training loss: 1.5800...  7.9963 sec/batch
Epoch: 6/20...  Training Step: 1093...  Training loss: 1.5720...  8.1637 sec/batch
Epoch: 6/20...  Training Step: 1094...  Training loss: 1.5724...  8.3417 sec/batch
Epoch: 6/20...  Training Step: 1095...  Training loss: 1.5727...  7.7030 sec/batch
Epoch: 6/20...  Training Step: 1096...  Training loss: 1.5675...  8.2359 sec/batch
Epoch: 6/20...  Training Step: 1097...  Training loss: 1.5736...  8.4755 sec/batch
Epoch: 6/20...  Training Step: 1098...  Training loss: 1.5742...  8.3673 sec/batch
Epoch: 6/20...  Training Step: 1099...  Training loss: 1.5779...  8.0559 sec/batch
Epoch: 6/20...  Training Step: 1100...  Training loss: 1.5968...  7.6504 sec/batch
Epoch: 6/20...  Training Step: 1101...  Training loss: 1.5749...  8.4756 sec/batch
Epoch: 6/20...  Training Step: 1102...  Training loss: 1.5698...  8.5728 sec/batch
Epoch: 6/20...  Training Step: 1103...  Training loss: 1.5677...  9.3809 sec/batch
---------------------------------------------------------------------------
KeyboardInterrupt: training was stopped manually partway through epoch 6 of 20 (the interrupt was raised inside sess.run; full traceback omitted).

Saved checkpoints

Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables


In [ ]:
tf.train.get_checkpoint_state('checkpoints')
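
Before moving on, here's a minimal, self-contained sketch of the TF1 save/restore pattern used for checkpoints. This is illustrative only, not code from the training cell above: the tiny graph, the variable name demo_var, and the path checkpoints_demo/demo.ckpt are all made up, and a separate directory is used so nothing collides with the real checkpoints folder.


In [ ]:
import os
import tensorflow as tf

# Illustrative sketch only (not from the original notebook): save and restore a
# tiny one-variable graph with tf.train.Saver. The variable 'demo_var' and the
# path 'checkpoints_demo/demo.ckpt' are hypothetical names for this example.
os.makedirs('checkpoints_demo', exist_ok=True)

demo_graph = tf.Graph()
with demo_graph.as_default():
    v = tf.Variable(42, name='demo_var')
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver()

with tf.Session(graph=demo_graph) as sess:
    sess.run(init_op)
    # Write the variable values to disk; global_step is appended to the filename
    save_path = saver.save(sess, 'checkpoints_demo/demo.ckpt', global_step=1)

with tf.Session(graph=demo_graph) as sess:
    # Restore the values we just wrote
    saver.restore(sess, save_path)
    print(sess.run(v))  # 42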

Sampling

Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. We then feed that new character back in to predict the one after it, and keep repeating this to generate as much new text as we want. I also included some functionality to prime the network with some text: by passing in a string, we build up a hidden state from it before sampling.

The network gives us a probability distribution over all characters. To reduce noise and keep the output from getting too random, I'm only going to choose the next character from the top N most likely characters.


In [ ]:
def pick_top_n(preds, vocab_size, top_n=5):
    # Flatten the prediction array to a 1-D probability vector
    p = np.squeeze(preds)
    # Zero out everything except the top_n most likely characters
    p[np.argsort(p)[:-top_n]] = 0
    # Renormalize so the remaining probabilities sum to 1
    p = p / np.sum(p)
    # Sample one character index from the reduced distribution
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
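
To get a feel for what the top-N filtering does, here's a quick sanity check on a made-up distribution (the numbers below are hypothetical, purely for illustration): with top_n=2, only the two most likely indices can ever be sampled, with their probabilities renormalized.


In [ ]:
# Hypothetical toy distribution over a 5-character vocabulary
toy_preds = np.array([[0.05, 0.50, 0.10, 0.30, 0.05]])

# pick_top_n zeroes entries in place (np.squeeze can return a view), so pass a
# copy each time. With top_n=2 only indices 1 and 3 survive, renormalized to
# 0.625 and 0.375.
picks = [pick_top_n(toy_preds.copy(), vocab_size=5, top_n=2) for _ in range(10)]
print(picks)  # should contain only 1s and 3s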

In [ ]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    # Build the model in sampling mode: batch size and sequence length are both 1
    model = CharRNN(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Restore the trained weights from the checkpoint
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)

        # Prime the network: run each character of the prime string through the
        # model to build up the hidden state
        x = np.zeros((1, 1))
        for c in prime:
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                        feed_dict=feed)

        c = pick_top_n(preds, vocab_size)
        samples.append(int_to_vocab[c])

        # Generate n_samples characters, feeding each prediction back in as input
        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                        feed_dict=feed)

            c = pick_top_n(preds, vocab_size)
            samples.append(int_to_vocab[c])

    return ''.join(samples)

Here, pass in the path to a checkpoint and sample from the network.


In [ ]:
tf.train.latest_checkpoint('checkpoints')

In [ ]:
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)

In [ ]:
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

In [ ]:
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

In [ ]:
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)