Anna KaRNNa

In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch. I also drew on information from r2rt and from Sherjil Ozair's implementation on GitHub. Below is the general architecture of the character-wise RNN.


In [86]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple of dictionaries to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.


In [87]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

Let's check out the first 100 characters and make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.


In [88]:
text[:100]


Out[88]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'

And we can see the characters encoded as integers.


In [89]:
encoded[:100]


Out[89]:
array([38, 76, 59,  3, 17, 31, 67, 16, 66, 53, 53, 53, 77, 59,  3,  3, 23,
       16, 24, 59, 18, 26, 63, 26, 31, 64, 16, 59, 67, 31, 16, 59, 63, 63,
       16, 59, 63, 26, 57, 31, 10, 16, 31, 36, 31, 67, 23, 16, 65, 81, 76,
       59,  3,  3, 23, 16, 24, 59, 18, 26, 63, 23, 16, 26, 64, 16, 65, 81,
       76, 59,  3,  3, 23, 16, 26, 81, 16, 26, 17, 64, 16, 14, 62, 81, 53,
       62, 59, 23, 30, 53, 53, 39, 36, 31, 67, 23, 17, 76, 26, 81], dtype=int32)

Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.


In [90]:
len(vocab)


Out[90]:
83

Making training mini-batches

Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. As a simple example, our batches would look like this:
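
As a tiny worked example (with made-up numbers, purely for illustration): 12 encoded characters split into $N = 2$ sequences and sliced into windows of $M = 3$ steps.

toy = np.arange(12)          # pretend these are 12 encoded characters
toy = toy.reshape((2, -1))   # N = 2 sequences -> shape (2, 6)
first_batch  = toy[:, 0:3]   # inputs for batch 1: [[0 1 2], [6 7 8]]
second_batch = toy[:, 3:6]   # inputs for batch 2: [[3 4 5], [9 10 11]]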


We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch ($N \times M$). Once you know the number of batches and the batch size, you can get the total number of characters to keep.

After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.

Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:

y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

where x is the input batch and y is the target batch.

The way I like to do this windowing is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.


In [91]:
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.
       
       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the batch size and number of batches we can make
    batch_size = n_seqs * n_steps
    n_batches = len(arr)//batch_size
    
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * batch_size]
    
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))
    
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.


In [92]:
batches = get_batches(encoded, 10, 50)
x, y = next(batches)

In [93]:
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])


x
 [[38 76 59  3 17 31 67 16 66 53]
 [16 59 18 16 81 14 17 16 28 14]
 [36 26 81 30 53 53 43  4 31 64]
 [81 16 58 65 67 26 81 28 16 76]
 [16 26 17 16 26 64 35 16 64 26]
 [16  6 17 16 62 59 64 53 14 81]
 [76 31 81 16 13 14 18 31 16 24]
 [10 16 22 65 17 16 81 14 62 16]
 [17 16 26 64 81 19 17 30 16 29]
 [16 64 59 26 58 16 17 14 16 76]]

y
 [[76 59  3 17 31 67 16 66 53 53]
 [59 18 16 81 14 17 16 28 14 26]
 [26 81 30 53 53 43  4 31 64 35]
 [16 58 65 67 26 81 28 16 76 26]
 [26 17 16 26 64 35 16 64 26 67]
 [ 6 17 16 62 59 64 53 14 81 63]
 [31 81 16 13 14 18 31 16 24 14]
 [16 22 65 17 16 81 14 62 16 64]
 [16 26 64 81 19 17 30 16 29 76]
 [64 59 26 58 16 17 14 16 76 31]]

If you implemented get_batches correctly, the above output should look something like

x
 [[55 63 69 22  6 76 45  5 16 35]
 [ 5 69  1  5 12 52  6  5 56 52]
 [48 29 12 61 35 35  8 64 76 78]
 [12  5 24 39 45 29 12 56  5 63]
 [ 5 29  6  5 29 78 28  5 78 29]
 [ 5 13  6  5 36 69 78 35 52 12]
 [63 76 12  5 18 52  1 76  5 58]
 [34  5 73 39  6  5 12 52 36  5]
 [ 6  5 29 78 12 79  6 61  5 59]
 [ 5 78 69 29 24  5  6 52  5 63]]

y
 [[63 69 22  6 76 45  5 16 35 35]
 [69  1  5 12 52  6  5 56 52 29]
 [29 12 61 35 35  8 64 76 78 28]
 [ 5 24 39 45 29 12 56  5 63 29]
 [29  6  5 29 78 28  5 78 29 45]
 [13  6  5 36 69 78 35 52 12 43]
 [76 12  5 18 52  1 76  5 58 52]
 [ 5 73 39  6  5 12 52 36  5 78]
 [ 5 29 78 12 79  6 61  5 59 63]
 [78 69 29 24  5  6 52  5 63 76]]

although the exact numbers will be different. Check to make sure the data is shifted over one step for y.
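
A quick way to verify the shift is a couple of assertions on the x and y arrays produced above (just a sketch):

# Every target should be the next input character, except the last column,
# which wraps around to the first input character of each sequence
assert np.array_equal(y[:, :-1], x[:, 1:])
assert np.array_equal(y[:, -1], x[:, 0])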

Building the model

Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.

Inputs

First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.


In [94]:
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout 
    
        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
        
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    
    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    
    return inputs, targets, keep_prob

LSTM Cell

Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

lstm = tf.contrib.rnn.BasicLSTMCell(num_units)

where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with

tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,

tf.contrib.rnn.MultiRNNCell([lstm_cell(lstm_size, keep_prob) for _ in range(num_layers)])

In older TensorFlow code you'll often see [cell]*num_layers here, which builds a list containing the same cell object repeated. In more recent versions of TensorFlow that pattern no longer behaves as intended (the layers either end up sharing weights or an error is raised), so instead we create a separate cell for each layer with a small helper function, as in the code below. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so

initial_state = cell.zero_state(batch_size, tf.float32)

Below, we implement the build_lstm function to create these LSTM cells and the initial state.


In [95]:
def lstm_cell(lstm_size, keep_prob):
    # Note: this uses NASCell rather than the BasicLSTMCell described above;
    # any tf.contrib.rnn cell works as the building block here.
    cell = tf.contrib.rnn.NASCell(lstm_size, reuse=tf.get_variable_scope().reuse)
    # Add dropout to the cell's outputs
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)

In [96]:
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.
    
        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size

    '''
    ### Build the LSTM Cell
    # A BasicLSTMCell version of the cell described above would look like:
    #   lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    #   drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    
    # Stack up multiple LSTM layers for deep learning, creating a fresh
    # dropout-wrapped cell (see lstm_cell above) for each layer
    rnn_cells = tf.contrib.rnn.MultiRNNCell(
        [lstm_cell(lstm_size, keep_prob) for _ in range(num_layers)],
        state_is_tuple=True)

    initial_state = rnn_cells.zero_state(batch_size, tf.float32)
    
    return rnn_cells, initial_state

RNN Output

Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.

Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.


In [97]:
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.
    
        Arguments
        ---------
        
        lstm_output: Output tensor from the LSTM layer
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    
    '''

    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # That is, the shape should be batch_size*num_steps rows by lstm_size columns
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])
    
    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    
    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b
    
    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name='predictions')
    
    return out, logits

Training loss

Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded character indices. Then we reshape the one-hot targets into a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.

Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.


In [98]:
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
    
        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
        
    '''
    
    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    
    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    return loss

Optimizer

Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing gradient problem, but the gradients can still grow without bound. To fix this, we clip the gradients: here we use tf.clip_by_global_norm, which rescales all the gradients whenever their global norm exceeds a threshold, so they never grow overly large. Then we use an AdamOptimizer for the learning step.


In [99]:
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
    
        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold for clipping the global gradient norm
    
    '''
    
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    return optimizer

Build the network

Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before they go into the RNN.


In [100]:
class CharRNN:
    
    def __init__(self, num_classes, batch_size=64, num_steps=50, 
                       lstm_size=128, num_layers=2, learning_rate=0.001, 
                       grad_clip=5, sampling=False):
    
        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()
        
        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        
        # Run each sequence step through the RNN and collect the outputs
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state
        
        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        
        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)

Hyperparameters

Here I'm defining the hyperparameters for the network.

  • batch_size - Number of sequences running through the network in one pass.
  • num_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
  • lstm_size - The number of units in the hidden layers.
  • num_layers - Number of hidden LSTM layers to use
  • learning_rate - Learning rate for training
  • keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data, by default every 1000 iterations). In particular:

  • If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
  • If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)

Approximate number of parameters

The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

  • The number of parameters in your model. This is printed when you start training.
  • The size of your dataset. A 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples (a quick way to count the parameters yourself is sketched after them):

  • I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
  • I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
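
As a rough way to check the parameter count yourself, you can sum the sizes of the trainable variables. This is just a convenience sketch; it assumes a graph (such as the CharRNN model built further down) has already been constructed:

# Count the trainable parameters in the current default graph
n_params = sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print('Approximate number of parameters:', n_params)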

Best models strategy

The winning strategy to obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
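
If you want a validation set for this text, here is a minimal sketch of holding part of encoded out (the 90/10 split and the train_data/val_data names are arbitrary choices for illustration):

# Hold out the last 10% of the encoded text for validation
split_idx = int(len(encoded) * 0.9)
train_data, val_data = encoded[:split_idx], encoded[split_idx:]

# During training you could periodically run the network on
# get_batches(val_data, batch_size, num_steps) with keep_prob set to 1.0
# and compare that loss against the training loss.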


In [101]:
batch_size = 100        # Sequences per batch
num_steps = 100         # Number of sequence steps per batch
lstm_size = 512         # Size of hidden layers in LSTMs
num_layers = 2          # Number of LSTM layers
learning_rate = 0.001   # Learning rate
keep_prob = 0.5         # Dropout keep probability

Time for training

This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.

Here I'm saving checkpoints with the format

i{iteration number}_l{# hidden layer units}.ckpt


In [121]:
epochs = 4
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers, 
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss, 
                                                 model.final_state, 
                                                 model.optimizer], 
                                                 feed_dict=feed)
            
            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))
        
            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))


Epoch: 1/4...  Training Step: 1...  Training loss: 4.4189...  6.1058 sec/batch
Epoch: 1/4...  Training Step: 2...  Training loss: 4.4162...  6.1373 sec/batch
Epoch: 1/4...  Training Step: 3...  Training loss: 4.4133...  6.5628 sec/batch
Epoch: 1/4...  Training Step: 4...  Training loss: 4.4100...  8.3439 sec/batch
Epoch: 1/4...  Training Step: 5...  Training loss: 4.4059...  7.4316 sec/batch
Epoch: 1/4...  Training Step: 6...  Training loss: 4.4006...  9.3314 sec/batch
Epoch: 1/4...  Training Step: 7...  Training loss: 4.3936...  6.3849 sec/batch
Epoch: 1/4...  Training Step: 8...  Training loss: 4.3838...  6.4868 sec/batch
Epoch: 1/4...  Training Step: 9...  Training loss: 4.3693...  6.7828 sec/batch
Epoch: 1/4...  Training Step: 10...  Training loss: 4.3482...  6.5327 sec/batch
Epoch: 1/4...  Training Step: 11...  Training loss: 4.3117...  6.5971 sec/batch
Epoch: 1/4...  Training Step: 12...  Training loss: 4.2422...  6.7471 sec/batch
Epoch: 1/4...  Training Step: 13...  Training loss: 3.9860...  6.6364 sec/batch
Epoch: 1/4...  Training Step: 14...  Training loss: 4.0240...  6.6825 sec/batch
Epoch: 1/4...  Training Step: 15...  Training loss: 3.9150...  6.7730 sec/batch
Epoch: 1/4...  Training Step: 16...  Training loss: 3.5688...  7.2246 sec/batch
Epoch: 1/4...  Training Step: 17...  Training loss: 3.3786...  6.7066 sec/batch
Epoch: 1/4...  Training Step: 18...  Training loss: 3.3307...  6.6967 sec/batch
Epoch: 1/4...  Training Step: 19...  Training loss: 3.3284...  6.7379 sec/batch
Epoch: 1/4...  Training Step: 20...  Training loss: 3.3160...  6.7311 sec/batch
Epoch: 1/4...  Training Step: 21...  Training loss: 3.3281...  6.7567 sec/batch
Epoch: 1/4...  Training Step: 22...  Training loss: 3.3023...  6.7746 sec/batch
Epoch: 1/4...  Training Step: 23...  Training loss: 3.2952...  6.7158 sec/batch
Epoch: 1/4...  Training Step: 24...  Training loss: 3.2777...  6.7559 sec/batch
Epoch: 1/4...  Training Step: 25...  Training loss: 3.2768...  6.9414 sec/batch
Epoch: 1/4...  Training Step: 26...  Training loss: 3.2719...  6.7697 sec/batch
Epoch: 1/4...  Training Step: 27...  Training loss: 3.2609...  6.7725 sec/batch
Epoch: 1/4...  Training Step: 28...  Training loss: 3.2170...  6.8480 sec/batch
Epoch: 1/4...  Training Step: 29...  Training loss: 3.2137...  6.7658 sec/batch
Epoch: 1/4...  Training Step: 30...  Training loss: 3.2090...  6.8103 sec/batch
Epoch: 1/4...  Training Step: 31...  Training loss: 3.2294...  6.8103 sec/batch
Epoch: 1/4...  Training Step: 32...  Training loss: 3.1913...  6.8264 sec/batch
Epoch: 1/4...  Training Step: 33...  Training loss: 3.1817...  6.8153 sec/batch
Epoch: 1/4...  Training Step: 34...  Training loss: 3.2046...  6.8743 sec/batch
Epoch: 1/4...  Training Step: 35...  Training loss: 3.1813...  6.7889 sec/batch
Epoch: 1/4...  Training Step: 36...  Training loss: 3.2105...  6.9567 sec/batch
Epoch: 1/4...  Training Step: 37...  Training loss: 3.1723...  7.4165 sec/batch
Epoch: 1/4...  Training Step: 38...  Training loss: 3.1750...  6.9512 sec/batch
Epoch: 1/4...  Training Step: 39...  Training loss: 3.1708...  7.0455 sec/batch
Epoch: 1/4...  Training Step: 40...  Training loss: 3.1787...  6.9977 sec/batch
Epoch: 1/4...  Training Step: 41...  Training loss: 3.1638...  7.0049 sec/batch
Epoch: 1/4...  Training Step: 42...  Training loss: 3.1676...  6.9754 sec/batch
Epoch: 1/4...  Training Step: 43...  Training loss: 3.1616...  7.0638 sec/batch
Epoch: 1/4...  Training Step: 44...  Training loss: 3.1569...  7.0485 sec/batch
Epoch: 1/4...  Training Step: 45...  Training loss: 3.1465...  7.1446 sec/batch
Epoch: 1/4...  Training Step: 46...  Training loss: 3.1660...  7.0417 sec/batch
Epoch: 1/4...  Training Step: 47...  Training loss: 3.1692...  7.0824 sec/batch
Epoch: 1/4...  Training Step: 48...  Training loss: 3.1694...  7.1968 sec/batch
Epoch: 1/4...  Training Step: 49...  Training loss: 3.1675...  6.9816 sec/batch
Epoch: 1/4...  Training Step: 50...  Training loss: 3.1723...  7.0983 sec/batch
Epoch: 1/4...  Training Step: 51...  Training loss: 3.1597...  7.0839 sec/batch
Epoch: 1/4...  Training Step: 52...  Training loss: 3.1500...  7.0467 sec/batch
Epoch: 1/4...  Training Step: 53...  Training loss: 3.1661...  7.1028 sec/batch
Epoch: 1/4...  Training Step: 54...  Training loss: 3.1468...  7.1557 sec/batch
Epoch: 1/4...  Training Step: 55...  Training loss: 3.1541...  7.1363 sec/batch
Epoch: 1/4...  Training Step: 56...  Training loss: 3.1348...  7.1251 sec/batch
Epoch: 1/4...  Training Step: 57...  Training loss: 3.1506...  7.1048 sec/batch
Epoch: 1/4...  Training Step: 58...  Training loss: 3.1469...  7.5379 sec/batch
Epoch: 1/4...  Training Step: 59...  Training loss: 3.1462...  7.0555 sec/batch
Epoch: 1/4...  Training Step: 60...  Training loss: 3.1497...  7.1041 sec/batch
Epoch: 1/4...  Training Step: 61...  Training loss: 3.1574...  7.2455 sec/batch
Epoch: 1/4...  Training Step: 62...  Training loss: 3.1687...  7.4922 sec/batch
Epoch: 1/4...  Training Step: 63...  Training loss: 3.1713...  7.0855 sec/batch
Epoch: 1/4...  Training Step: 64...  Training loss: 3.1234...  7.1295 sec/batch
Epoch: 1/4...  Training Step: 65...  Training loss: 3.1385...  7.1453 sec/batch
Epoch: 1/4...  Training Step: 66...  Training loss: 3.1575...  7.1540 sec/batch
Epoch: 1/4...  Training Step: 67...  Training loss: 3.1596...  7.1479 sec/batch
Epoch: 1/4...  Training Step: 68...  Training loss: 3.1111...  7.1439 sec/batch
Epoch: 1/4...  Training Step: 69...  Training loss: 3.1345...  7.0752 sec/batch
Epoch: 1/4...  Training Step: 70...  Training loss: 3.1520...  7.1338 sec/batch
Epoch: 1/4...  Training Step: 71...  Training loss: 3.1458...  7.3553 sec/batch
Epoch: 1/4...  Training Step: 72...  Training loss: 3.1655...  7.1410 sec/batch
Epoch: 1/4...  Training Step: 73...  Training loss: 3.1427...  7.1798 sec/batch
Epoch: 1/4...  Training Step: 74...  Training loss: 3.1511...  7.1635 sec/batch
Epoch: 1/4...  Training Step: 75...  Training loss: 3.1488...  7.1231 sec/batch
Epoch: 1/4...  Training Step: 76...  Training loss: 3.1576...  7.1194 sec/batch
Epoch: 1/4...  Training Step: 77...  Training loss: 3.1524...  7.1825 sec/batch
Epoch: 1/4...  Training Step: 78...  Training loss: 3.1493...  7.6384 sec/batch
Epoch: 1/4...  Training Step: 79...  Training loss: 3.1438...  7.1043 sec/batch
Epoch: 1/4...  Training Step: 80...  Training loss: 3.1288...  7.1020 sec/batch
Epoch: 1/4...  Training Step: 81...  Training loss: 3.1347...  7.2009 sec/batch
Epoch: 1/4...  Training Step: 82...  Training loss: 3.1518...  7.1298 sec/batch
Epoch: 1/4...  Training Step: 83...  Training loss: 3.1504...  7.1788 sec/batch
Epoch: 1/4...  Training Step: 84...  Training loss: 3.1348...  7.0880 sec/batch
Epoch: 1/4...  Training Step: 85...  Training loss: 3.1252...  7.1687 sec/batch
Epoch: 1/4...  Training Step: 86...  Training loss: 3.1357...  7.3462 sec/batch
Epoch: 1/4...  Training Step: 87...  Training loss: 3.1332...  7.1144 sec/batch
Epoch: 1/4...  Training Step: 88...  Training loss: 3.1329...  7.2331 sec/batch
Epoch: 1/4...  Training Step: 89...  Training loss: 3.1464...  7.2128 sec/batch
Epoch: 1/4...  Training Step: 90...  Training loss: 3.1440...  7.3077 sec/batch
Epoch: 1/4...  Training Step: 91...  Training loss: 3.1455...  7.1301 sec/batch
Epoch: 1/4...  Training Step: 92...  Training loss: 3.1381...  7.1377 sec/batch
Epoch: 1/4...  Training Step: 93...  Training loss: 3.1446...  7.1415 sec/batch
Epoch: 1/4...  Training Step: 94...  Training loss: 3.1440...  7.1930 sec/batch
Epoch: 1/4...  Training Step: 95...  Training loss: 3.1344...  7.1439 sec/batch
Epoch: 1/4...  Training Step: 96...  Training loss: 3.1313...  7.2495 sec/batch
Epoch: 1/4...  Training Step: 97...  Training loss: 3.1436...  7.0919 sec/batch
Epoch: 1/4...  Training Step: 98...  Training loss: 3.1337...  7.6123 sec/batch
Epoch: 1/4...  Training Step: 99...  Training loss: 3.1443...  7.0789 sec/batch
Epoch: 1/4...  Training Step: 100...  Training loss: 3.1289...  7.1421 sec/batch
Epoch: 1/4...  Training Step: 101...  Training loss: 3.1418...  7.1660 sec/batch
Epoch: 1/4...  Training Step: 102...  Training loss: 3.1411...  7.1311 sec/batch
Epoch: 1/4...  Training Step: 103...  Training loss: 3.1433...  7.1326 sec/batch
Epoch: 1/4...  Training Step: 104...  Training loss: 3.1377...  7.2089 sec/batch
Epoch: 1/4...  Training Step: 105...  Training loss: 3.1366...  7.0977 sec/batch
Epoch: 1/4...  Training Step: 106...  Training loss: 3.1398...  7.1961 sec/batch
Epoch: 1/4...  Training Step: 107...  Training loss: 3.1165...  7.2120 sec/batch
Epoch: 1/4...  Training Step: 108...  Training loss: 3.1220...  7.1681 sec/batch
Epoch: 1/4...  Training Step: 109...  Training loss: 3.1348...  7.1007 sec/batch
Epoch: 1/4...  Training Step: 110...  Training loss: 3.1110...  7.1311 sec/batch
Epoch: 1/4...  Training Step: 111...  Training loss: 3.1337...  7.3768 sec/batch
Epoch: 1/4...  Training Step: 112...  Training loss: 3.1372...  7.2220 sec/batch
Epoch: 1/4...  Training Step: 113...  Training loss: 3.1287...  7.3630 sec/batch
Epoch: 1/4...  Training Step: 114...  Training loss: 3.1169...  7.1460 sec/batch
Epoch: 1/4...  Training Step: 115...  Training loss: 3.1154...  7.1898 sec/batch
Epoch: 1/4...  Training Step: 116...  Training loss: 3.1208...  7.1728 sec/batch
Epoch: 1/4...  Training Step: 117...  Training loss: 3.1254...  7.3016 sec/batch
Epoch: 1/4...  Training Step: 118...  Training loss: 3.1461...  7.6525 sec/batch
Epoch: 1/4...  Training Step: 119...  Training loss: 3.1424...  7.2602 sec/batch
Epoch: 1/4...  Training Step: 120...  Training loss: 3.1204...  7.0939 sec/batch
Epoch: 1/4...  Training Step: 121...  Training loss: 3.1586...  7.1509 sec/batch
Epoch: 1/4...  Training Step: 122...  Training loss: 3.1355...  7.3151 sec/batch
Epoch: 1/4...  Training Step: 123...  Training loss: 3.1393...  7.1322 sec/batch
Epoch: 1/4...  Training Step: 124...  Training loss: 3.1471...  7.1840 sec/batch
Epoch: 1/4...  Training Step: 125...  Training loss: 3.1241...  7.1892 sec/batch
Epoch: 1/4...  Training Step: 126...  Training loss: 3.1084...  7.1746 sec/batch
Epoch: 1/4...  Training Step: 127...  Training loss: 3.1295...  7.2101 sec/batch
Epoch: 1/4...  Training Step: 128...  Training loss: 3.1363...  7.1564 sec/batch
Epoch: 1/4...  Training Step: 129...  Training loss: 3.1217...  7.0991 sec/batch
Epoch: 1/4...  Training Step: 130...  Training loss: 3.1335...  7.1945 sec/batch
Epoch: 1/4...  Training Step: 131...  Training loss: 3.1444...  7.1749 sec/batch
Epoch: 1/4...  Training Step: 132...  Training loss: 3.1225...  7.2682 sec/batch
Epoch: 1/4...  Training Step: 133...  Training loss: 3.1324...  7.2221 sec/batch
Epoch: 1/4...  Training Step: 134...  Training loss: 3.1225...  7.1461 sec/batch
Epoch: 1/4...  Training Step: 135...  Training loss: 3.0939...  7.1165 sec/batch
Epoch: 1/4...  Training Step: 136...  Training loss: 3.1018...  7.2076 sec/batch
Epoch: 1/4...  Training Step: 137...  Training loss: 3.1242...  7.2074 sec/batch
Epoch: 1/4...  Training Step: 138...  Training loss: 3.1049...  7.6610 sec/batch
Epoch: 1/4...  Training Step: 139...  Training loss: 3.1281...  7.1446 sec/batch
Epoch: 1/4...  Training Step: 140...  Training loss: 3.1243...  7.1078 sec/batch
Epoch: 1/4...  Training Step: 141...  Training loss: 3.1164...  7.1903 sec/batch
Epoch: 1/4...  Training Step: 142...  Training loss: 3.0963...  7.1218 sec/batch
Epoch: 1/4...  Training Step: 143...  Training loss: 3.1180...  7.1540 sec/batch
Epoch: 1/4...  Training Step: 144...  Training loss: 3.1068...  7.9534 sec/batch
Epoch: 1/4...  Training Step: 145...  Training loss: 3.1165...  7.3282 sec/batch
Epoch: 1/4...  Training Step: 146...  Training loss: 3.1252...  7.1178 sec/batch
Epoch: 1/4...  Training Step: 147...  Training loss: 3.1374...  7.1726 sec/batch
Epoch: 1/4...  Training Step: 148...  Training loss: 3.1475...  7.2050 sec/batch
Epoch: 1/4...  Training Step: 149...  Training loss: 3.1061...  7.2070 sec/batch
Epoch: 1/4...  Training Step: 150...  Training loss: 3.1212...  7.2210 sec/batch
Epoch: 1/4...  Training Step: 151...  Training loss: 3.1336...  7.1431 sec/batch
Epoch: 1/4...  Training Step: 152...  Training loss: 3.1465...  7.2323 sec/batch
Epoch: 1/4...  Training Step: 153...  Training loss: 3.1198...  7.1050 sec/batch
Epoch: 1/4...  Training Step: 154...  Training loss: 3.1245...  7.2037 sec/batch
Epoch: 1/4...  Training Step: 155...  Training loss: 3.1071...  7.1190 sec/batch
Epoch: 1/4...  Training Step: 156...  Training loss: 3.1110...  7.1836 sec/batch
Epoch: 1/4...  Training Step: 157...  Training loss: 3.1047...  7.2226 sec/batch
Epoch: 1/4...  Training Step: 158...  Training loss: 3.1081...  7.7245 sec/batch
Epoch: 1/4...  Training Step: 159...  Training loss: 3.0929...  7.1949 sec/batch
Epoch: 1/4...  Training Step: 160...  Training loss: 3.0897...  7.2090 sec/batch
Epoch: 1/4...  Training Step: 161...  Training loss: 3.1190...  7.1403 sec/batch
Epoch: 1/4...  Training Step: 162...  Training loss: 3.0834...  7.2109 sec/batch
Epoch: 1/4...  Training Step: 163...  Training loss: 3.0841...  7.2464 sec/batch
Epoch: 1/4...  Training Step: 164...  Training loss: 3.1029...  7.3028 sec/batch
Epoch: 1/4...  Training Step: 165...  Training loss: 3.0967...  7.2083 sec/batch
Epoch: 1/4...  Training Step: 166...  Training loss: 3.0943...  7.2087 sec/batch
Epoch: 1/4...  Training Step: 167...  Training loss: 3.1028...  7.2060 sec/batch
Epoch: 1/4...  Training Step: 168...  Training loss: 3.1050...  7.1498 sec/batch
Epoch: 1/4...  Training Step: 169...  Training loss: 3.0975...  7.1394 sec/batch
Epoch: 1/4...  Training Step: 170...  Training loss: 3.0883...  7.1434 sec/batch
Epoch: 1/4...  Training Step: 171...  Training loss: 3.1004...  7.1470 sec/batch
Epoch: 1/4...  Training Step: 172...  Training loss: 3.1305...  7.6145 sec/batch
Epoch: 1/4...  Training Step: 173...  Training loss: 3.1379...  7.2163 sec/batch
Epoch: 1/4...  Training Step: 174...  Training loss: 3.1349...  7.1504 sec/batch
Epoch: 1/4...  Training Step: 175...  Training loss: 3.1103...  7.1525 sec/batch
Epoch: 1/4...  Training Step: 176...  Training loss: 3.1069...  7.1761 sec/batch
Epoch: 1/4...  Training Step: 177...  Training loss: 3.0938...  7.1888 sec/batch
Epoch: 1/4...  Training Step: 178...  Training loss: 3.0630...  7.5515 sec/batch
Epoch: 1/4...  Training Step: 179...  Training loss: 3.0778...  7.1725 sec/batch
Epoch: 1/4...  Training Step: 180...  Training loss: 3.0681...  7.1941 sec/batch
Epoch: 1/4...  Training Step: 181...  Training loss: 3.0917...  7.1728 sec/batch
Epoch: 1/4...  Training Step: 182...  Training loss: 3.0921...  7.1649 sec/batch
Epoch: 1/4...  Training Step: 183...  Training loss: 3.0681...  7.2229 sec/batch
Epoch: 1/4...  Training Step: 184...  Training loss: 3.0855...  7.1784 sec/batch
Epoch: 1/4...  Training Step: 185...  Training loss: 3.1099...  7.2155 sec/batch
Epoch: 1/4...  Training Step: 186...  Training loss: 3.0640...  7.2063 sec/batch
Epoch: 1/4...  Training Step: 187...  Training loss: 3.0708...  7.2069 sec/batch
Epoch: 1/4...  Training Step: 188...  Training loss: 3.0521...  7.1813 sec/batch
Epoch: 1/4...  Training Step: 189...  Training loss: 3.0589...  7.1710 sec/batch
Epoch: 1/4...  Training Step: 190...  Training loss: 3.0617...  7.1460 sec/batch
Epoch: 1/4...  Training Step: 191...  Training loss: 3.0625...  7.2080 sec/batch
Epoch: 1/4...  Training Step: 192...  Training loss: 3.0306...  7.1798 sec/batch
Epoch: 1/4...  Training Step: 193...  Training loss: 3.0496...  7.1669 sec/batch
Epoch: 1/4...  Training Step: 194...  Training loss: 3.0423...  7.2176 sec/batch
Epoch: 1/4...  Training Step: 195...  Training loss: 3.0251...  7.1568 sec/batch
Epoch: 1/4...  Training Step: 196...  Training loss: 3.0289...  7.1824 sec/batch
Epoch: 1/4...  Training Step: 197...  Training loss: 3.0258...  7.2669 sec/batch
Epoch: 1/4...  Training Step: 198...  Training loss: 3.0224...  6.8703 sec/batch
Epoch: 2/4...  Training Step: 199...  Training loss: 3.0797...  6.5296 sec/batch
Epoch: 2/4...  Training Step: 200...  Training loss: 3.0058...  6.9692 sec/batch
Epoch: 2/4...  Training Step: 201...  Training loss: 3.0016...  7.7964 sec/batch
Epoch: 2/4...  Training Step: 202...  Training loss: 3.0060...  8.4357 sec/batch
Epoch: 2/4...  Training Step: 203...  Training loss: 3.0093...  7.0508 sec/batch
Epoch: 2/4...  Training Step: 204...  Training loss: 3.0147...  6.8803 sec/batch
Epoch: 2/4...  Training Step: 205...  Training loss: 3.0054...  6.9507 sec/batch
Epoch: 2/4...  Training Step: 206...  Training loss: 3.0040...  6.8592 sec/batch
Epoch: 2/4...  Training Step: 207...  Training loss: 2.9898...  6.9895 sec/batch
Epoch: 2/4...  Training Step: 208...  Training loss: 2.9790...  6.9576 sec/batch
Epoch: 2/4...  Training Step: 209...  Training loss: 2.9613...  6.8784 sec/batch
Epoch: 2/4...  Training Step: 210...  Training loss: 2.9732...  6.9768 sec/batch
Epoch: 2/4...  Training Step: 211...  Training loss: 2.9580...  6.8864 sec/batch
Epoch: 2/4...  Training Step: 212...  Training loss: 2.9782...  7.1245 sec/batch
Epoch: 2/4...  Training Step: 213...  Training loss: 2.9640...  6.9064 sec/batch
Epoch: 2/4...  Training Step: 214...  Training loss: 2.9532...  6.9096 sec/batch
Epoch: 2/4...  Training Step: 215...  Training loss: 2.9359...  6.9348 sec/batch
Epoch: 2/4...  Training Step: 216...  Training loss: 2.9717...  6.9850 sec/batch
Epoch: 2/4...  Training Step: 217...  Training loss: 2.9377...  6.9262 sec/batch
Epoch: 2/4...  Training Step: 218...  Training loss: 2.8942...  6.8814 sec/batch
Epoch: 2/4...  Training Step: 219...  Training loss: 2.9160...  6.9227 sec/batch
Epoch: 2/4...  Training Step: 220...  Training loss: 2.9269...  6.8527 sec/batch
Epoch: 2/4...  Training Step: 221...  Training loss: 2.9036...  7.6811 sec/batch
Epoch: 2/4...  Training Step: 222...  Training loss: 2.8919...  8.4743 sec/batch
Epoch: 2/4...  Training Step: 223...  Training loss: 2.8786...  7.1932 sec/batch
Epoch: 2/4...  Training Step: 224...  Training loss: 2.8876...  8.5259 sec/batch
Epoch: 2/4...  Training Step: 225...  Training loss: 2.8787...  8.0271 sec/batch
Epoch: 2/4...  Training Step: 226...  Training loss: 2.8536...  7.6010 sec/batch
Epoch: 2/4...  Training Step: 227...  Training loss: 2.8555...  7.7808 sec/batch
Epoch: 2/4...  Training Step: 228...  Training loss: 2.8560...  7.8039 sec/batch
Epoch: 2/4...  Training Step: 229...  Training loss: 2.8676...  10.4320 sec/batch
Epoch: 2/4...  Training Step: 230...  Training loss: 2.8434...  9.0296 sec/batch
Epoch: 2/4...  Training Step: 231...  Training loss: 2.8003...  12.4539 sec/batch
Epoch: 2/4...  Training Step: 232...  Training loss: 2.8281...  8.5950 sec/batch
Epoch: 2/4...  Training Step: 233...  Training loss: 2.7883...  11.1525 sec/batch
Epoch: 2/4...  Training Step: 234...  Training loss: 2.8075...  9.1779 sec/batch
Epoch: 2/4...  Training Step: 235...  Training loss: 2.7728...  8.4244 sec/batch
Epoch: 2/4...  Training Step: 236...  Training loss: 2.7497...  8.2809 sec/batch
Epoch: 2/4...  Training Step: 237...  Training loss: 2.7483...  8.9131 sec/batch
Epoch: 2/4...  Training Step: 238...  Training loss: 2.7507...  9.9202 sec/batch
Epoch: 2/4...  Training Step: 239...  Training loss: 2.7246...  9.0993 sec/batch
Epoch: 2/4...  Training Step: 240...  Training loss: 2.7192...  8.4431 sec/batch
Epoch: 2/4...  Training Step: 241...  Training loss: 2.7104...  8.3523 sec/batch
Epoch: 2/4...  Training Step: 242...  Training loss: 2.7040...  8.2353 sec/batch
Epoch: 2/4...  Training Step: 243...  Training loss: 2.6828...  8.5373 sec/batch
Epoch: 2/4...  Training Step: 244...  Training loss: 2.6764...  10.2108 sec/batch
Epoch: 2/4...  Training Step: 245...  Training loss: 2.7037...  11.7447 sec/batch
Epoch: 2/4...  Training Step: 246...  Training loss: 2.6863...  10.6894 sec/batch
Epoch: 2/4...  Training Step: 247...  Training loss: 2.6700...  13.9069 sec/batch
Epoch: 2/4...  Training Step: 248...  Training loss: 2.6848...  14.0116 sec/batch
Epoch: 2/4...  Training Step: 249...  Training loss: 2.6438...  11.4416 sec/batch
Epoch: 2/4...  Training Step: 250...  Training loss: 2.6482...  10.6438 sec/batch
Epoch: 2/4...  Training Step: 251...  Training loss: 2.6312...  11.1302 sec/batch
Epoch: 2/4...  Training Step: 252...  Training loss: 2.6212...  10.1138 sec/batch
Epoch: 2/4...  Training Step: 253...  Training loss: 2.6178...  9.1201 sec/batch
Epoch: 2/4...  Training Step: 254...  Training loss: 2.6163...  8.0531 sec/batch
Epoch: 2/4...  Training Step: 255...  Training loss: 2.6134...  7.0909 sec/batch
Epoch: 2/4...  Training Step: 256...  Training loss: 2.5939...  6.9992 sec/batch
Epoch: 2/4...  Training Step: 257...  Training loss: 2.5990...  7.0262 sec/batch
Epoch: 2/4...  Training Step: 258...  Training loss: 2.6079...  7.0619 sec/batch
Epoch: 2/4...  Training Step: 259...  Training loss: 2.5970...  6.8951 sec/batch
Epoch: 2/4...  Training Step: 260...  Training loss: 2.6064...  6.9335 sec/batch
Epoch: 2/4...  Training Step: 261...  Training loss: 2.6012...  7.0016 sec/batch
Epoch: 2/4...  Training Step: 262...  Training loss: 2.5614...  6.9924 sec/batch
Epoch: 2/4...  Training Step: 263...  Training loss: 2.5589...  7.3310 sec/batch
Epoch: 2/4...  Training Step: 264...  Training loss: 2.5820...  7.8947 sec/batch
Epoch: 2/4...  Training Step: 265...  Training loss: 2.5616...  8.2139 sec/batch
Epoch: 2/4...  Training Step: 266...  Training loss: 2.5089...  8.1896 sec/batch
Epoch: 2/4...  Training Step: 267...  Training loss: 2.5146...  7.3893 sec/batch
Epoch: 2/4...  Training Step: 268...  Training loss: 2.5449...  7.5034 sec/batch
Epoch: 2/4...  Training Step: 269...  Training loss: 2.5352...  8.4558 sec/batch
Epoch: 2/4...  Training Step: 270...  Training loss: 2.5490...  7.1949 sec/batch
Epoch: 2/4...  Training Step: 271...  Training loss: 2.5218...  7.2398 sec/batch
Epoch: 2/4...  Training Step: 272...  Training loss: 2.5128...  7.3810 sec/batch
Epoch: 2/4...  Training Step: 273...  Training loss: 2.5165...  7.4211 sec/batch
Epoch: 2/4...  Training Step: 274...  Training loss: 2.5549...  8.1199 sec/batch
Epoch: 2/4...  Training Step: 275...  Training loss: 2.5050...  8.3752 sec/batch
Epoch: 2/4...  Training Step: 276...  Training loss: 2.5133...  9.1757 sec/batch
Epoch: 2/4...  Training Step: 277...  Training loss: 2.4834...  8.2977 sec/batch
Epoch: 2/4...  Training Step: 278...  Training loss: 2.4827...  7.5949 sec/batch
Epoch: 2/4...  Training Step: 279...  Training loss: 2.4778...  7.6696 sec/batch
Epoch: 2/4...  Training Step: 280...  Training loss: 2.5051...  7.3426 sec/batch
Epoch: 2/4...  Training Step: 281...  Training loss: 2.4843...  7.5766 sec/batch
Epoch: 2/4...  Training Step: 282...  Training loss: 2.4682...  7.2440 sec/batch
Epoch: 2/4...  Training Step: 283...  Training loss: 2.4448...  7.5653 sec/batch
Epoch: 2/4...  Training Step: 284...  Training loss: 2.4435...  7.3107 sec/batch
Epoch: 2/4...  Training Step: 285...  Training loss: 2.4608...  7.3695 sec/batch
Epoch: 2/4...  Training Step: 286...  Training loss: 2.4526...  8.3797 sec/batch
Epoch: 2/4...  Training Step: 287...  Training loss: 2.4323...  7.6819 sec/batch
Epoch: 2/4...  Training Step: 288...  Training loss: 2.4539...  7.2564 sec/batch
Epoch: 2/4...  Training Step: 289...  Training loss: 2.4393...  8.3485 sec/batch
Epoch: 2/4...  Training Step: 290...  Training loss: 2.4443...  9.0078 sec/batch
Epoch: 2/4...  Training Step: 291...  Training loss: 2.4333...  8.3745 sec/batch
Epoch: 2/4...  Training Step: 292...  Training loss: 2.4012...  7.2767 sec/batch
Epoch: 2/4...  Training Step: 293...  Training loss: 2.3947...  7.9295 sec/batch
Epoch: 2/4...  Training Step: 294...  Training loss: 2.4033...  8.6106 sec/batch
Epoch: 2/4...  Training Step: 295...  Training loss: 2.4064...  7.6316 sec/batch
Epoch: 2/4...  Training Step: 296...  Training loss: 2.4034...  7.2761 sec/batch
Epoch: 2/4...  Training Step: 297...  Training loss: 2.4044...  7.2823 sec/batch
Epoch: 2/4...  Training Step: 298...  Training loss: 2.3827...  7.8065 sec/batch
Epoch: 2/4...  Training Step: 299...  Training loss: 2.4100...  10.3755 sec/batch
Epoch: 2/4...  Training Step: 300...  Training loss: 2.3983...  7.9813 sec/batch
Epoch: 2/4...  Training Step: 301...  Training loss: 2.3688...  8.1234 sec/batch
Epoch: 2/4...  Training Step: 302...  Training loss: 2.3718...  7.9505 sec/batch
Epoch: 2/4...  Training Step: 303...  Training loss: 2.3738...  7.5754 sec/batch
Epoch: 2/4...  Training Step: 304...  Training loss: 2.3741...  8.4167 sec/batch
Epoch: 2/4...  Training Step: 305...  Training loss: 2.3479...  8.0103 sec/batch
Epoch: 2/4...  Training Step: 306...  Training loss: 2.3735...  7.5330 sec/batch
Epoch: 2/4...  Training Step: 307...  Training loss: 2.3756...  7.4236 sec/batch
Epoch: 2/4...  Training Step: 308...  Training loss: 2.3240...  7.5972 sec/batch
Epoch: 2/4...  Training Step: 309...  Training loss: 2.3539...  8.9797 sec/batch
Epoch: 2/4...  Training Step: 310...  Training loss: 2.3599...  9.8445 sec/batch
Epoch: 2/4...  Training Step: 311...  Training loss: 2.3442...  7.9021 sec/batch
Epoch: 2/4...  Training Step: 312...  Training loss: 2.3370...  7.2273 sec/batch
Epoch: 2/4...  Training Step: 313...  Training loss: 2.3239...  7.1091 sec/batch
Epoch: 2/4...  Training Step: 314...  Training loss: 2.3000...  7.2220 sec/batch
Epoch: 2/4...  Training Step: 315...  Training loss: 2.3244...  7.1129 sec/batch
Epoch: 2/4...  Training Step: 316...  Training loss: 2.3292...  7.2235 sec/batch
Epoch: 2/4...  Training Step: 317...  Training loss: 2.3473...  7.1202 sec/batch
Epoch: 2/4...  Training Step: 318...  Training loss: 2.3179...  7.1345 sec/batch
Epoch: 2/4...  Training Step: 319...  Training loss: 2.3440...  7.1574 sec/batch
Epoch: 2/4...  Training Step: 320...  Training loss: 2.3224...  7.4298 sec/batch
Epoch: 2/4...  Training Step: 321...  Training loss: 2.3084...  7.1142 sec/batch
Epoch: 2/4...  Training Step: 322...  Training loss: 2.3201...  7.1918 sec/batch
Epoch: 2/4...  Training Step: 323...  Training loss: 2.2945...  7.1826 sec/batch
Epoch: 2/4...  Training Step: 324...  Training loss: 2.2852...  7.0883 sec/batch
Epoch: 2/4...  Training Step: 325...  Training loss: 2.3030...  7.0592 sec/batch
Epoch: 2/4...  Training Step: 326...  Training loss: 2.3120...  7.1023 sec/batch
Epoch: 2/4...  Training Step: 327...  Training loss: 2.2757...  7.0565 sec/batch
Epoch: 2/4...  Training Step: 328...  Training loss: 2.2855...  7.0743 sec/batch
Epoch: 2/4...  Training Step: 329...  Training loss: 2.2877...  7.2174 sec/batch
Epoch: 2/4...  Training Step: 330...  Training loss: 2.2529...  7.1998 sec/batch
Epoch: 2/4...  Training Step: 331...  Training loss: 2.2811...  7.0856 sec/batch
Epoch: 2/4...  Training Step: 332...  Training loss: 2.2843...  7.1474 sec/batch
Epoch: 2/4...  Training Step: 333...  Training loss: 2.2566...  7.0625 sec/batch
Epoch: 2/4...  Training Step: 334...  Training loss: 2.2597...  7.0827 sec/batch
Epoch: 2/4...  Training Step: 335...  Training loss: 2.2499...  7.0605 sec/batch
Epoch: 2/4...  Training Step: 336...  Training loss: 2.2631...  7.0294 sec/batch
Epoch: 2/4...  Training Step: 337...  Training loss: 2.2792...  6.9815 sec/batch
Epoch: 2/4...  Training Step: 338...  Training loss: 2.2432...  6.9737 sec/batch
Epoch: 2/4...  Training Step: 339...  Training loss: 2.2674...  7.1428 sec/batch
Epoch: 2/4...  Training Step: 340...  Training loss: 2.2319...  7.0382 sec/batch
Epoch: 2/4...  Training Step: 341...  Training loss: 2.2475...  6.9980 sec/batch
Epoch: 2/4...  Training Step: 342...  Training loss: 2.2331...  7.2387 sec/batch
Epoch: 2/4...  Training Step: 343...  Training loss: 2.2351...  7.0425 sec/batch
Epoch: 2/4...  Training Step: 344...  Training loss: 2.2637...  7.0128 sec/batch
Epoch: 2/4...  Training Step: 345...  Training loss: 2.2431...  7.0756 sec/batch
Epoch: 2/4...  Training Step: 346...  Training loss: 2.2454...  6.9958 sec/batch
Epoch: 2/4...  Training Step: 347...  Training loss: 2.2215...  6.9994 sec/batch
Epoch: 2/4...  Training Step: 348...  Training loss: 2.2088...  7.0391 sec/batch
Epoch: 2/4...  Training Step: 349...  Training loss: 2.2526...  7.0444 sec/batch
Epoch: 2/4...  Training Step: 350...  Training loss: 2.2692...  7.1028 sec/batch
Epoch: 2/4...  Training Step: 351...  Training loss: 2.2343...  7.1812 sec/batch
Epoch: 2/4...  Training Step: 352...  Training loss: 2.2258...  7.1792 sec/batch
Epoch: 2/4...  Training Step: 353...  Training loss: 2.2016...  7.0495 sec/batch
Epoch: 2/4...  Training Step: 354...  Training loss: 2.2024...  7.0063 sec/batch
Epoch: 2/4...  Training Step: 355...  Training loss: 2.1930...  7.0259 sec/batch
Epoch: 2/4...  Training Step: 356...  Training loss: 2.1952...  7.0853 sec/batch
Epoch: 2/4...  Training Step: 357...  Training loss: 2.1628...  7.0174 sec/batch
Epoch: 2/4...  Training Step: 358...  Training loss: 2.2160...  7.0252 sec/batch
Epoch: 2/4...  Training Step: 359...  Training loss: 2.2011...  7.0931 sec/batch
Epoch: 2/4...  Training Step: 360...  Training loss: 2.1645...  6.9665 sec/batch
Epoch: 2/4...  Training Step: 361...  Training loss: 2.1869...  6.9731 sec/batch
Epoch: 2/4...  Training Step: 362...  Training loss: 2.1776...  7.6449 sec/batch
Epoch: 2/4...  Training Step: 363...  Training loss: 2.1936...  9.7158 sec/batch
Epoch: 2/4...  Training Step: 364...  Training loss: 2.1798...  9.6594 sec/batch
Epoch: 2/4...  Training Step: 365...  Training loss: 2.1870...  8.8781 sec/batch
Epoch: 2/4...  Training Step: 366...  Training loss: 2.2007...  13.5979 sec/batch
Epoch: 2/4...  Training Step: 367...  Training loss: 2.1811...  11.0865 sec/batch
Epoch: 2/4...  Training Step: 368...  Training loss: 2.1528...  15.9020 sec/batch
Epoch: 2/4...  Training Step: 369...  Training loss: 2.1830...  14.5264 sec/batch
Epoch: 2/4...  Training Step: 370...  Training loss: 2.1937...  14.8732 sec/batch
Epoch: 2/4...  Training Step: 371...  Training loss: 2.2197...  19.7043 sec/batch
Epoch: 2/4...  Training Step: 372...  Training loss: 2.2144...  22.3483 sec/batch
Epoch: 2/4...  Training Step: 373...  Training loss: 2.2044...  22.3474 sec/batch
Epoch: 2/4...  Training Step: 374...  Training loss: 2.1646...  22.3671 sec/batch
Epoch: 2/4...  Training Step: 375...  Training loss: 2.1409...  22.1918 sec/batch
Epoch: 2/4...  Training Step: 376...  Training loss: 2.1440...  25.5763 sec/batch
Epoch: 2/4...  Training Step: 377...  Training loss: 2.1281...  14.4282 sec/batch
Epoch: 2/4...  Training Step: 378...  Training loss: 2.1208...  6.4319 sec/batch
Epoch: 2/4...  Training Step: 379...  Training loss: 2.1266...  6.7718 sec/batch
Epoch: 2/4...  Training Step: 380...  Training loss: 2.1436...  7.6326 sec/batch
Epoch: 2/4...  Training Step: 381...  Training loss: 2.1399...  7.3200 sec/batch
Epoch: 2/4...  Training Step: 382...  Training loss: 2.1668...  8.7613 sec/batch
Epoch: 2/4...  Training Step: 383...  Training loss: 2.1829...  22.1764 sec/batch
Epoch: 2/4...  Training Step: 384...  Training loss: 2.1389...  22.2899 sec/batch
Epoch: 2/4...  Training Step: 385...  Training loss: 2.1234...  23.4162 sec/batch
Epoch: 2/4...  Training Step: 386...  Training loss: 2.1039...  22.4900 sec/batch
Epoch: 2/4...  Training Step: 387...  Training loss: 2.1280...  22.5412 sec/batch
Epoch: 2/4...  Training Step: 388...  Training loss: 2.1223...  22.2914 sec/batch
Epoch: 2/4...  Training Step: 389...  Training loss: 2.1379...  22.3587 sec/batch
Epoch: 2/4...  Training Step: 390...  Training loss: 2.0905...  21.9557 sec/batch
Epoch: 2/4...  Training Step: 391...  Training loss: 2.1191...  19.3543 sec/batch
Epoch: 2/4...  Training Step: 392...  Training loss: 2.1078...  34.0400 sec/batch
Epoch: 2/4...  Training Step: 393...  Training loss: 2.0852...  32.4714 sec/batch
Epoch: 2/4...  Training Step: 394...  Training loss: 2.1165...  32.6064 sec/batch
Epoch: 2/4...  Training Step: 395...  Training loss: 2.0950...  32.1565 sec/batch
Epoch: 2/4...  Training Step: 396...  Training loss: 2.0934...  30.3236 sec/batch
Epoch: 3/4...  Training Step: 397...  Training loss: 2.2118...  9.5258 sec/batch
Epoch: 3/4...  Training Step: 398...  Training loss: 2.0775...  13.6115 sec/batch
Epoch: 3/4...  Training Step: 399...  Training loss: 2.0741...  12.2252 sec/batch
Epoch: 3/4...  Training Step: 400...  Training loss: 2.0882...  12.2657 sec/batch
Epoch: 3/4...  Training Step: 401...  Training loss: 2.0949...  11.8989 sec/batch
Epoch: 3/4...  Training Step: 402...  Training loss: 2.0853...  11.7561 sec/batch
Epoch: 3/4...  Training Step: 403...  Training loss: 2.1000...  12.3665 sec/batch
Epoch: 3/4...  Training Step: 404...  Training loss: 2.0942...  12.9914 sec/batch
Epoch: 3/4...  Training Step: 405...  Training loss: 2.1118...  13.5707 sec/batch
Epoch: 3/4...  Training Step: 406...  Training loss: 2.0782...  10.5618 sec/batch
Epoch: 3/4...  Training Step: 407...  Training loss: 2.0749...  14.2405 sec/batch
Epoch: 3/4...  Training Step: 408...  Training loss: 2.0628...  12.0554 sec/batch
Epoch: 3/4...  Training Step: 409...  Training loss: 2.0896...  11.3285 sec/batch
Epoch: 3/4...  Training Step: 410...  Training loss: 2.1156...  12.4940 sec/batch
Epoch: 3/4...  Training Step: 411...  Training loss: 2.0760...  12.3983 sec/batch
Epoch: 3/4...  Training Step: 412...  Training loss: 2.0545...  12.4009 sec/batch
Epoch: 3/4...  Training Step: 413...  Training loss: 2.0717...  12.8306 sec/batch
Epoch: 3/4...  Training Step: 414...  Training loss: 2.1108...  11.0202 sec/batch
Epoch: 3/4...  Training Step: 415...  Training loss: 2.0656...  13.3982 sec/batch
Epoch: 3/4...  Training Step: 416...  Training loss: 2.0594...  11.9073 sec/batch
Epoch: 3/4...  Training Step: 417...  Training loss: 2.0591...  12.2876 sec/batch
Epoch: 3/4...  Training Step: 418...  Training loss: 2.1066...  12.7825 sec/batch
Epoch: 3/4...  Training Step: 419...  Training loss: 2.0560...  13.6131 sec/batch
Epoch: 3/4...  Training Step: 420...  Training loss: 2.0523...  12.6775 sec/batch
Epoch: 3/4...  Training Step: 421...  Training loss: 2.0474...  12.4100 sec/batch
Epoch: 3/4...  Training Step: 422...  Training loss: 2.0423...  12.0157 sec/batch
Epoch: 3/4...  Training Step: 423...  Training loss: 2.0375...  11.7282 sec/batch
Epoch: 3/4...  Training Step: 424...  Training loss: 2.0568...  13.3337 sec/batch
Epoch: 3/4...  Training Step: 425...  Training loss: 2.0888...  11.9710 sec/batch
Epoch: 3/4...  Training Step: 426...  Training loss: 2.0656...  11.2013 sec/batch
Epoch: 3/4...  Training Step: 427...  Training loss: 2.0591...  13.3360 sec/batch
Epoch: 3/4...  Training Step: 428...  Training loss: 2.0295...  14.0626 sec/batch
Epoch: 3/4...  Training Step: 429...  Training loss: 2.0448...  10.8927 sec/batch
Epoch: 3/4...  Training Step: 430...  Training loss: 2.0670...  12.5115 sec/batch
Epoch: 3/4...  Training Step: 431...  Training loss: 2.0195...  11.6180 sec/batch
Epoch: 3/4...  Training Step: 432...  Training loss: 2.0509...  11.7472 sec/batch
Epoch: 3/4...  Training Step: 433...  Training loss: 2.0293...  11.7778 sec/batch
Epoch: 3/4...  Training Step: 434...  Training loss: 2.0096...  12.5478 sec/batch
Epoch: 3/4...  Training Step: 435...  Training loss: 2.0086...  12.7342 sec/batch
Epoch: 3/4...  Training Step: 436...  Training loss: 2.0024...  11.6963 sec/batch
Epoch: 3/4...  Training Step: 437...  Training loss: 2.0072...  11.0309 sec/batch
Epoch: 3/4...  Training Step: 438...  Training loss: 2.0152...  13.9692 sec/batch
Epoch: 3/4...  Training Step: 439...  Training loss: 1.9996...  11.1016 sec/batch
Epoch: 3/4...  Training Step: 440...  Training loss: 2.0049...  12.5170 sec/batch
Epoch: 3/4...  Training Step: 441...  Training loss: 2.0133...  11.1235 sec/batch
Epoch: 3/4...  Training Step: 442...  Training loss: 1.9698...  12.5883 sec/batch
Epoch: 3/4...  Training Step: 443...  Training loss: 2.0319...  12.4975 sec/batch
Epoch: 3/4...  Training Step: 444...  Training loss: 1.9960...  11.7511 sec/batch
Epoch: 3/4...  Training Step: 445...  Training loss: 2.0030...  12.5868 sec/batch
Epoch: 3/4...  Training Step: 446...  Training loss: 2.0453...  11.7577 sec/batch
Epoch: 3/4...  Training Step: 447...  Training loss: 1.9807...  13.6931 sec/batch
Epoch: 3/4...  Training Step: 448...  Training loss: 2.0455...  10.9532 sec/batch
Epoch: 3/4...  Training Step: 449...  Training loss: 2.0009...  12.3972 sec/batch
Epoch: 3/4...  Training Step: 450...  Training loss: 2.0006...  12.6919 sec/batch
Epoch: 3/4...  Training Step: 451...  Training loss: 1.9920...  12.7593 sec/batch
Epoch: 3/4...  Training Step: 452...  Training loss: 2.0062...  12.6235 sec/batch
Epoch: 3/4...  Training Step: 453...  Training loss: 2.0045...  13.1149 sec/batch
Epoch: 3/4...  Training Step: 454...  Training loss: 1.9762...  11.8809 sec/batch
Epoch: 3/4...  Training Step: 455...  Training loss: 1.9814...  11.2552 sec/batch
Epoch: 3/4...  Training Step: 456...  Training loss: 2.0113...  13.2957 sec/batch
Epoch: 3/4...  Training Step: 457...  Training loss: 1.9893...  11.7284 sec/batch
Epoch: 3/4...  Training Step: 458...  Training loss: 2.0129...  12.6645 sec/batch
Epoch: 3/4...  Training Step: 459...  Training loss: 2.0149...  11.8162 sec/batch
Epoch: 3/4...  Training Step: 460...  Training loss: 1.9890...  11.2061 sec/batch
Epoch: 3/4...  Training Step: 461...  Training loss: 1.9651...  12.1857 sec/batch
Epoch: 3/4...  Training Step: 462...  Training loss: 2.0217...  12.3688 sec/batch
Epoch: 3/4...  Training Step: 463...  Training loss: 2.0057...  12.1380 sec/batch
Epoch: 3/4...  Training Step: 464...  Training loss: 1.9556...  12.5120 sec/batch
Epoch: 3/4...  Training Step: 465...  Training loss: 1.9770...  11.5988 sec/batch
Epoch: 3/4...  Training Step: 466...  Training loss: 1.9790...  12.1813 sec/batch
Epoch: 3/4...  Training Step: 467...  Training loss: 1.9946...  12.1813 sec/batch
Epoch: 3/4...  Training Step: 468...  Training loss: 2.0024...  13.0445 sec/batch
Epoch: 3/4...  Training Step: 469...  Training loss: 1.9949...  11.9798 sec/batch
Epoch: 3/4...  Training Step: 470...  Training loss: 1.9712...  12.7568 sec/batch
Epoch: 3/4...  Training Step: 471...  Training loss: 1.9742...  12.0374 sec/batch
Epoch: 3/4...  Training Step: 472...  Training loss: 2.0091...  12.7819 sec/batch
Epoch: 3/4...  Training Step: 473...  Training loss: 1.9656...  12.6404 sec/batch
Epoch: 3/4...  Training Step: 474...  Training loss: 1.9883...  11.9729 sec/batch
Epoch: 3/4...  Training Step: 475...  Training loss: 1.9443...  12.7466 sec/batch
Epoch: 3/4...  Training Step: 476...  Training loss: 1.9606...  12.8178 sec/batch
Epoch: 3/4...  Training Step: 477...  Training loss: 1.9192...  13.0112 sec/batch
Epoch: 3/4...  Training Step: 478...  Training loss: 1.9778...  13.6206 sec/batch
Epoch: 3/4...  Training Step: 479...  Training loss: 1.9403...  12.6992 sec/batch
Epoch: 3/4...  Training Step: 480...  Training loss: 1.9560...  11.8733 sec/batch
Epoch: 3/4...  Training Step: 481...  Training loss: 1.9216...  12.7887 sec/batch
Epoch: 3/4...  Training Step: 482...  Training loss: 1.9519...  12.2124 sec/batch
Epoch: 3/4...  Training Step: 483...  Training loss: 1.9469...  13.4699 sec/batch
Epoch: 3/4...  Training Step: 484...  Training loss: 1.9290...  12.6487 sec/batch
Epoch: 3/4...  Training Step: 485...  Training loss: 1.9243...  12.0579 sec/batch
Epoch: 3/4...  Training Step: 486...  Training loss: 1.9659...  13.0148 sec/batch
Epoch: 3/4...  Training Step: 487...  Training loss: 1.9238...  12.1583 sec/batch
Epoch: 3/4...  Training Step: 488...  Training loss: 1.9414...  12.8558 sec/batch
Epoch: 3/4...  Training Step: 489...  Training loss: 1.9290...  12.1005 sec/batch
Epoch: 3/4...  Training Step: 490...  Training loss: 1.9280...  12.7542 sec/batch
Epoch: 3/4...  Training Step: 491...  Training loss: 1.9260...  12.8013 sec/batch
Epoch: 3/4...  Training Step: 492...  Training loss: 1.9387...  13.9899 sec/batch
Epoch: 3/4...  Training Step: 493...  Training loss: 1.9480...  12.2582 sec/batch
Epoch: 3/4...  Training Step: 494...  Training loss: 1.9226...  12.7280 sec/batch
Epoch: 3/4...  Training Step: 495...  Training loss: 1.9327...  11.4841 sec/batch
Epoch: 3/4...  Training Step: 496...  Training loss: 1.9071...  13.1313 sec/batch
Epoch: 3/4...  Training Step: 497...  Training loss: 1.9511...  12.8031 sec/batch
Epoch: 3/4...  Training Step: 498...  Training loss: 1.9374...  12.8078 sec/batch
Epoch: 3/4...  Training Step: 499...  Training loss: 1.9175...  12.2935 sec/batch
Epoch: 3/4...  Training Step: 500...  Training loss: 1.9156...  14.4526 sec/batch
Epoch: 3/4...  Training Step: 501...  Training loss: 1.9359...  12.3001 sec/batch
Epoch: 3/4...  Training Step: 502...  Training loss: 1.9307...  12.4284 sec/batch
Epoch: 3/4...  Training Step: 503...  Training loss: 1.9267...  12.5457 sec/batch
Epoch: 3/4...  Training Step: 504...  Training loss: 1.9387...  11.8735 sec/batch
Epoch: 3/4...  Training Step: 505...  Training loss: 1.9400...  12.5053 sec/batch
Epoch: 3/4...  Training Step: 506...  Training loss: 1.9212...  13.0097 sec/batch
Epoch: 3/4...  Training Step: 507...  Training loss: 1.9224...  13.5733 sec/batch
Epoch: 3/4...  Training Step: 508...  Training loss: 1.9151...  11.7955 sec/batch
Epoch: 3/4...  Training Step: 509...  Training loss: 1.9099...  12.4146 sec/batch
Epoch: 3/4...  Training Step: 510...  Training loss: 1.9148...  12.7034 sec/batch
Epoch: 3/4...  Training Step: 511...  Training loss: 1.8989...  11.3591 sec/batch
Epoch: 3/4...  Training Step: 512...  Training loss: 1.8774...  12.5712 sec/batch
Epoch: 3/4...  Training Step: 513...  Training loss: 1.9049...  13.2176 sec/batch
Epoch: 3/4...  Training Step: 514...  Training loss: 1.9141...  12.4629 sec/batch
Epoch: 3/4...  Training Step: 515...  Training loss: 1.9143...  11.0788 sec/batch
Epoch: 3/4...  Training Step: 516...  Training loss: 1.9109...  14.0881 sec/batch
Epoch: 3/4...  Training Step: 517...  Training loss: 1.9296...  12.6191 sec/batch
Epoch: 3/4...  Training Step: 518...  Training loss: 1.8883...  12.2609 sec/batch
Epoch: 3/4...  Training Step: 519...  Training loss: 1.8933...  12.8592 sec/batch
Epoch: 3/4...  Training Step: 520...  Training loss: 1.9322...  12.7162 sec/batch
Epoch: 3/4...  Training Step: 521...  Training loss: 1.9012...  12.0564 sec/batch
Epoch: 3/4...  Training Step: 522...  Training loss: 1.8782...  12.5330 sec/batch
Epoch: 3/4...  Training Step: 523...  Training loss: 1.9167...  13.2053 sec/batch
Epoch: 3/4...  Training Step: 524...  Training loss: 1.9119...  10.2362 sec/batch
Epoch: 3/4...  Training Step: 525...  Training loss: 1.8937...  13.3700 sec/batch
Epoch: 3/4...  Training Step: 526...  Training loss: 1.9021...  14.4651 sec/batch
Epoch: 3/4...  Training Step: 527...  Training loss: 1.8783...  11.1011 sec/batch
Epoch: 3/4...  Training Step: 528...  Training loss: 1.8752...  13.0774 sec/batch
Epoch: 3/4...  Training Step: 529...  Training loss: 1.9052...  13.3821 sec/batch
Epoch: 3/4...  Training Step: 530...  Training loss: 1.8967...  10.4829 sec/batch
Epoch: 3/4...  Training Step: 531...  Training loss: 1.8835...  13.3640 sec/batch
Epoch: 3/4...  Training Step: 532...  Training loss: 1.8903...  13.4884 sec/batch
Epoch: 3/4...  Training Step: 533...  Training loss: 1.9092...  11.1590 sec/batch
Epoch: 3/4...  Training Step: 534...  Training loss: 1.8938...  12.8021 sec/batch
Epoch: 3/4...  Training Step: 535...  Training loss: 1.9164...  12.7787 sec/batch
Epoch: 3/4...  Training Step: 536...  Training loss: 1.8792...  12.8001 sec/batch
Epoch: 3/4...  Training Step: 537...  Training loss: 1.9134...  12.2750 sec/batch
Epoch: 3/4...  Training Step: 538...  Training loss: 1.8738...  12.4226 sec/batch
Epoch: 3/4...  Training Step: 539...  Training loss: 1.8903...  11.6439 sec/batch
Epoch: 3/4...  Training Step: 540...  Training loss: 1.8838...  10.2419 sec/batch
Epoch: 3/4...  Training Step: 541...  Training loss: 1.8638...  13.5988 sec/batch
Epoch: 3/4...  Training Step: 542...  Training loss: 1.9021...  13.6075 sec/batch
Epoch: 3/4...  Training Step: 543...  Training loss: 1.8860...  11.6231 sec/batch
Epoch: 3/4...  Training Step: 544...  Training loss: 1.9025...  13.3242 sec/batch
Epoch: 3/4...  Training Step: 545...  Training loss: 1.8838...  11.7858 sec/batch
Epoch: 3/4...  Training Step: 546...  Training loss: 1.8651...  10.9341 sec/batch
Epoch: 3/4...  Training Step: 547...  Training loss: 1.8617...  12.4975 sec/batch
Epoch: 3/4...  Training Step: 548...  Training loss: 1.9037...  13.4312 sec/batch
Epoch: 3/4...  Training Step: 549...  Training loss: 1.8856...  12.8701 sec/batch
Epoch: 3/4...  Training Step: 550...  Training loss: 1.8822...  12.5659 sec/batch
Epoch: 3/4...  Training Step: 551...  Training loss: 1.8710...  12.1556 sec/batch
Epoch: 3/4...  Training Step: 552...  Training loss: 1.8509...  12.4805 sec/batch
Epoch: 3/4...  Training Step: 553...  Training loss: 1.8788...  13.2300 sec/batch
Epoch: 3/4...  Training Step: 554...  Training loss: 1.8669...  11.5354 sec/batch
Epoch: 3/4...  Training Step: 555...  Training loss: 1.8335...  13.5772 sec/batch
Epoch: 3/4...  Training Step: 556...  Training loss: 1.8895...  11.2370 sec/batch
Epoch: 3/4...  Training Step: 557...  Training loss: 1.8812...  13.5539 sec/batch
Epoch: 3/4...  Training Step: 558...  Training loss: 1.8524...  12.8890 sec/batch
Epoch: 3/4...  Training Step: 559...  Training loss: 1.8629...  12.8701 sec/batch
Epoch: 3/4...  Training Step: 560...  Training loss: 1.8562...  12.1954 sec/batch
Epoch: 3/4...  Training Step: 561...  Training loss: 1.8690...  12.6507 sec/batch
Epoch: 3/4...  Training Step: 562...  Training loss: 1.8504...  12.8290 sec/batch
Epoch: 3/4...  Training Step: 563...  Training loss: 1.8755...  11.3249 sec/batch
Epoch: 3/4...  Training Step: 564...  Training loss: 1.9054...  14.2181 sec/batch
Epoch: 3/4...  Training Step: 565...  Training loss: 1.8516...  12.4687 sec/batch
Epoch: 3/4...  Training Step: 566...  Training loss: 1.8485...  12.8191 sec/batch
Epoch: 3/4...  Training Step: 567...  Training loss: 1.8652...  12.9061 sec/batch
Epoch: 3/4...  Training Step: 568...  Training loss: 1.8789...  12.8417 sec/batch
Epoch: 3/4...  Training Step: 569...  Training loss: 1.9133...  12.3645 sec/batch
Epoch: 3/4...  Training Step: 570...  Training loss: 1.9032...  12.2083 sec/batch
Epoch: 3/4...  Training Step: 571...  Training loss: 1.9087...  12.0753 sec/batch
Epoch: 3/4...  Training Step: 572...  Training loss: 1.8499...  12.8662 sec/batch
Epoch: 3/4...  Training Step: 573...  Training loss: 1.8418...  11.9661 sec/batch
Epoch: 3/4...  Training Step: 574...  Training loss: 1.8594...  12.7269 sec/batch
Epoch: 3/4...  Training Step: 575...  Training loss: 1.8240...  13.2432 sec/batch
Epoch: 3/4...  Training Step: 576...  Training loss: 1.8155...  11.5389 sec/batch
Epoch: 3/4...  Training Step: 577...  Training loss: 1.8177...  12.8362 sec/batch
Epoch: 3/4...  Training Step: 578...  Training loss: 1.8558...  12.7602 sec/batch
Epoch: 3/4...  Training Step: 579...  Training loss: 1.8449...  13.1719 sec/batch
Epoch: 3/4...  Training Step: 580...  Training loss: 1.8711...  12.2837 sec/batch
Epoch: 3/4...  Training Step: 581...  Training loss: 1.8785...  13.0765 sec/batch
Epoch: 3/4...  Training Step: 582...  Training loss: 1.8423...  12.4595 sec/batch
Epoch: 3/4...  Training Step: 583...  Training loss: 1.8468...  12.8185 sec/batch
Epoch: 3/4...  Training Step: 584...  Training loss: 1.8321...  12.8184 sec/batch
Epoch: 3/4...  Training Step: 585...  Training loss: 1.8330...  12.5469 sec/batch
Epoch: 3/4...  Training Step: 586...  Training loss: 1.8433...  11.9476 sec/batch
Epoch: 3/4...  Training Step: 587...  Training loss: 1.8410...  11.7199 sec/batch
Epoch: 3/4...  Training Step: 588...  Training loss: 1.8024...  12.5325 sec/batch
Epoch: 3/4...  Training Step: 589...  Training loss: 1.8395...  12.5592 sec/batch
Epoch: 3/4...  Training Step: 590...  Training loss: 1.8149...  13.3846 sec/batch
Epoch: 3/4...  Training Step: 591...  Training loss: 1.7950...  12.4425 sec/batch
Epoch: 3/4...  Training Step: 592...  Training loss: 1.8249...  12.4397 sec/batch
Epoch: 3/4...  Training Step: 593...  Training loss: 1.8216...  11.9344 sec/batch
Epoch: 3/4...  Training Step: 594...  Training loss: 1.8068...  12.9068 sec/batch
Epoch: 4/4...  Training Step: 595...  Training loss: 1.9367...  12.9324 sec/batch
Epoch: 4/4...  Training Step: 596...  Training loss: 1.8044...  12.2400 sec/batch
Epoch: 4/4...  Training Step: 597...  Training loss: 1.8111...  12.6009 sec/batch
Epoch: 4/4...  Training Step: 598...  Training loss: 1.8053...  13.0226 sec/batch
Epoch: 4/4...  Training Step: 599...  Training loss: 1.8091...  12.9335 sec/batch
Epoch: 4/4...  Training Step: 600...  Training loss: 1.7884...  12.8337 sec/batch
Epoch: 4/4...  Training Step: 601...  Training loss: 1.8204...  12.8082 sec/batch
Epoch: 4/4...  Training Step: 602...  Training loss: 1.8119...  11.4567 sec/batch
Epoch: 4/4...  Training Step: 603...  Training loss: 1.8385...  12.8982 sec/batch
Epoch: 4/4...  Training Step: 604...  Training loss: 1.8205...  12.7027 sec/batch
Epoch: 4/4...  Training Step: 605...  Training loss: 1.8006...  12.7803 sec/batch
Epoch: 4/4...  Training Step: 606...  Training loss: 1.8033...  12.8363 sec/batch
Epoch: 4/4...  Training Step: 607...  Training loss: 1.8094...  12.0548 sec/batch
Epoch: 4/4...  Training Step: 608...  Training loss: 1.8524...  12.6961 sec/batch
Epoch: 4/4...  Training Step: 609...  Training loss: 1.8102...  13.4540 sec/batch
Epoch: 4/4...  Training Step: 610...  Training loss: 1.7852...  11.0828 sec/batch
Epoch: 4/4...  Training Step: 611...  Training loss: 1.8051...  13.3713 sec/batch
Epoch: 4/4...  Training Step: 612...  Training loss: 1.8437...  11.9229 sec/batch
Epoch: 4/4...  Training Step: 613...  Training loss: 1.8076...  12.8634 sec/batch
Epoch: 4/4...  Training Step: 614...  Training loss: 1.8053...  12.7259 sec/batch
Epoch: 4/4...  Training Step: 615...  Training loss: 1.7882...  12.7162 sec/batch
Epoch: 4/4...  Training Step: 616...  Training loss: 1.8496...  12.9006 sec/batch
Epoch: 4/4...  Training Step: 617...  Training loss: 1.7998...  12.6468 sec/batch
Epoch: 4/4...  Training Step: 618...  Training loss: 1.8079...  11.9756 sec/batch
Epoch: 4/4...  Training Step: 619...  Training loss: 1.7962...  13.3679 sec/batch
Epoch: 4/4...  Training Step: 620...  Training loss: 1.7805...  12.9005 sec/batch
Epoch: 4/4...  Training Step: 621...  Training loss: 1.7824...  11.2142 sec/batch
Epoch: 4/4...  Training Step: 622...  Training loss: 1.8114...  13.5782 sec/batch
Epoch: 4/4...  Training Step: 623...  Training loss: 1.8306...  13.7394 sec/batch
Epoch: 4/4...  Training Step: 624...  Training loss: 1.8228...  11.7536 sec/batch
Epoch: 4/4...  Training Step: 625...  Training loss: 1.7962...  13.4030 sec/batch
Epoch: 4/4...  Training Step: 626...  Training loss: 1.7888...  11.2617 sec/batch
Epoch: 4/4...  Training Step: 627...  Training loss: 1.8060...  12.7182 sec/batch
Epoch: 4/4...  Training Step: 628...  Training loss: 1.8232...  13.7172 sec/batch
Epoch: 4/4...  Training Step: 629...  Training loss: 1.7806...  11.3549 sec/batch
Epoch: 4/4...  Training Step: 630...  Training loss: 1.7948...  13.4867 sec/batch
Epoch: 4/4...  Training Step: 631...  Training loss: 1.7704...  13.7012 sec/batch
Epoch: 4/4...  Training Step: 632...  Training loss: 1.7535...  11.2606 sec/batch
Epoch: 4/4...  Training Step: 633...  Training loss: 1.7528...  13.0334 sec/batch
Epoch: 4/4...  Training Step: 634...  Training loss: 1.7683...  14.1057 sec/batch
Epoch: 4/4...  Training Step: 635...  Training loss: 1.7587...  12.3087 sec/batch
Epoch: 4/4...  Training Step: 636...  Training loss: 1.7952...  13.3711 sec/batch
Epoch: 4/4...  Training Step: 637...  Training loss: 1.7570...  11.3839 sec/batch
Epoch: 4/4...  Training Step: 638...  Training loss: 1.7676...  12.9644 sec/batch
Epoch: 4/4...  Training Step: 639...  Training loss: 1.7860...  12.7588 sec/batch
Epoch: 4/4...  Training Step: 640...  Training loss: 1.7230...  12.8533 sec/batch
Epoch: 4/4...  Training Step: 641...  Training loss: 1.7896...  12.7035 sec/batch
Epoch: 4/4...  Training Step: 642...  Training loss: 1.7625...  11.9891 sec/batch
Epoch: 4/4...  Training Step: 643...  Training loss: 1.7708...  13.6121 sec/batch
Epoch: 4/4...  Training Step: 644...  Training loss: 1.8208...  11.9605 sec/batch
Epoch: 4/4...  Training Step: 645...  Training loss: 1.7636...  13.5550 sec/batch
Epoch: 4/4...  Training Step: 646...  Training loss: 1.8223...  10.6354 sec/batch
Epoch: 4/4...  Training Step: 647...  Training loss: 1.7781...  12.5660 sec/batch
Epoch: 4/4...  Training Step: 648...  Training loss: 1.7809...  13.2301 sec/batch
Epoch: 4/4...  Training Step: 649...  Training loss: 1.7667...  11.6728 sec/batch
Epoch: 4/4...  Training Step: 650...  Training loss: 1.7824...  12.6289 sec/batch
Epoch: 4/4...  Training Step: 651...  Training loss: 1.7850...  12.5386 sec/batch
Epoch: 4/4...  Training Step: 652...  Training loss: 1.7620...  13.5620 sec/batch
Epoch: 4/4...  Training Step: 653...  Training loss: 1.7551...  11.9098 sec/batch
Epoch: 4/4...  Training Step: 654...  Training loss: 1.8031...  12.5791 sec/batch
Epoch: 4/4...  Training Step: 655...  Training loss: 1.7717...  11.9156 sec/batch
Epoch: 4/4...  Training Step: 656...  Training loss: 1.8191...  12.4879 sec/batch
Epoch: 4/4...  Training Step: 657...  Training loss: 1.8047...  12.8039 sec/batch
Epoch: 4/4...  Training Step: 658...  Training loss: 1.7682...  12.5458 sec/batch
Epoch: 4/4...  Training Step: 659...  Training loss: 1.7682...  11.6933 sec/batch
Epoch: 4/4...  Training Step: 660...  Training loss: 1.8091...  13.6811 sec/batch
Epoch: 4/4...  Training Step: 661...  Training loss: 1.7878...  20.8065 sec/batch
Epoch: 4/4...  Training Step: 662...  Training loss: 1.7524...  16.2615 sec/batch
Epoch: 4/4...  Training Step: 663...  Training loss: 1.7734...  24.7906 sec/batch
Epoch: 4/4...  Training Step: 664...  Training loss: 1.7617...  12.8561 sec/batch
Epoch: 4/4...  Training Step: 665...  Training loss: 1.7956...  7.9881 sec/batch
Epoch: 4/4...  Training Step: 666...  Training loss: 1.7885...  9.0453 sec/batch
Epoch: 4/4...  Training Step: 667...  Training loss: 1.7989...  8.3402 sec/batch
Epoch: 4/4...  Training Step: 668...  Training loss: 1.7572...  8.8639 sec/batch
Epoch: 4/4...  Training Step: 669...  Training loss: 1.7604...  10.8517 sec/batch
Epoch: 4/4...  Training Step: 670...  Training loss: 1.7871...  9.5396 sec/batch
Epoch: 4/4...  Training Step: 671...  Training loss: 1.7626...  8.3035 sec/batch
Epoch: 4/4...  Training Step: 672...  Training loss: 1.7605...  8.4718 sec/batch
Epoch: 4/4...  Training Step: 673...  Training loss: 1.7227...  8.5208 sec/batch
Epoch: 4/4...  Training Step: 674...  Training loss: 1.7616...  8.5323 sec/batch
Epoch: 4/4...  Training Step: 675...  Training loss: 1.7060...  8.5105 sec/batch
Epoch: 4/4...  Training Step: 676...  Training loss: 1.7768...  8.4985 sec/batch
Epoch: 4/4...  Training Step: 677...  Training loss: 1.7255...  8.8929 sec/batch
Epoch: 4/4...  Training Step: 678...  Training loss: 1.7611...  8.6843 sec/batch
Epoch: 4/4...  Training Step: 679...  Training loss: 1.7244...  8.6648 sec/batch
Epoch: 4/4...  Training Step: 680...  Training loss: 1.7392...  8.5945 sec/batch
Epoch: 4/4...  Training Step: 681...  Training loss: 1.7365...  8.6213 sec/batch
Epoch: 4/4...  Training Step: 682...  Training loss: 1.7351...  8.6451 sec/batch
Epoch: 4/4...  Training Step: 683...  Training loss: 1.7201...  8.2962 sec/batch
Epoch: 4/4...  Training Step: 684...  Training loss: 1.7565...  7.7600 sec/batch
Epoch: 4/4...  Training Step: 685...  Training loss: 1.7250...  7.8236 sec/batch
Epoch: 4/4...  Training Step: 686...  Training loss: 1.7420...  8.0652 sec/batch
Epoch: 4/4...  Training Step: 687...  Training loss: 1.7272...  7.4093 sec/batch
Epoch: 4/4...  Training Step: 688...  Training loss: 1.7221...  7.0619 sec/batch
Epoch: 4/4...  Training Step: 689...  Training loss: 1.7232...  6.7720 sec/batch
Epoch: 4/4...  Training Step: 690...  Training loss: 1.7493...  6.7958 sec/batch
Epoch: 4/4...  Training Step: 691...  Training loss: 1.7444...  7.4466 sec/batch
Epoch: 4/4...  Training Step: 692...  Training loss: 1.7173...  6.8936 sec/batch
Epoch: 4/4...  Training Step: 693...  Training loss: 1.7255...  7.4208 sec/batch
Epoch: 4/4...  Training Step: 694...  Training loss: 1.7176...  6.7687 sec/batch
Epoch: 4/4...  Training Step: 695...  Training loss: 1.7458...  6.8953 sec/batch
Epoch: 4/4...  Training Step: 696...  Training loss: 1.7366...  6.9805 sec/batch
Epoch: 4/4...  Training Step: 697...  Training loss: 1.7255...  8.2459 sec/batch
Epoch: 4/4...  Training Step: 698...  Training loss: 1.7391...  6.9898 sec/batch
Epoch: 4/4...  Training Step: 699...  Training loss: 1.7278...  6.8164 sec/batch
Epoch: 4/4...  Training Step: 700...  Training loss: 1.7539...  6.7909 sec/batch
Epoch: 4/4...  Training Step: 701...  Training loss: 1.7363...  7.4871 sec/batch
Epoch: 4/4...  Training Step: 702...  Training loss: 1.7323...  8.5947 sec/batch
Epoch: 4/4...  Training Step: 703...  Training loss: 1.7427...  9.6700 sec/batch
Epoch: 4/4...  Training Step: 704...  Training loss: 1.7484...  9.4482 sec/batch
Epoch: 4/4...  Training Step: 705...  Training loss: 1.7266...  7.6708 sec/batch
Epoch: 4/4...  Training Step: 706...  Training loss: 1.7183...  7.3727 sec/batch
Epoch: 4/4...  Training Step: 707...  Training loss: 1.7224...  7.6793 sec/batch
Epoch: 4/4...  Training Step: 708...  Training loss: 1.7305...  7.0193 sec/batch
Epoch: 4/4...  Training Step: 709...  Training loss: 1.7165...  7.0099 sec/batch
Epoch: 4/4...  Training Step: 710...  Training loss: 1.6868...  6.7949 sec/batch
Epoch: 4/4...  Training Step: 711...  Training loss: 1.7192...  6.8698 sec/batch
Epoch: 4/4...  Training Step: 712...  Training loss: 1.7268...  6.8482 sec/batch
Epoch: 4/4...  Training Step: 713...  Training loss: 1.7313...  6.7932 sec/batch
Epoch: 4/4...  Training Step: 714...  Training loss: 1.7158...  6.9176 sec/batch
Epoch: 4/4...  Training Step: 715...  Training loss: 1.7385...  6.9276 sec/batch
Epoch: 4/4...  Training Step: 716...  Training loss: 1.7012...  6.7915 sec/batch
Epoch: 4/4...  Training Step: 717...  Training loss: 1.7013...  7.0843 sec/batch
Epoch: 4/4...  Training Step: 718...  Training loss: 1.7290...  6.9812 sec/batch
Epoch: 4/4...  Training Step: 719...  Training loss: 1.7237...  6.9049 sec/batch
Epoch: 4/4...  Training Step: 720...  Training loss: 1.6865...  6.9505 sec/batch
Epoch: 4/4...  Training Step: 721...  Training loss: 1.7407...  6.9163 sec/batch
Epoch: 4/4...  Training Step: 722...  Training loss: 1.7389...  6.8920 sec/batch
Epoch: 4/4...  Training Step: 723...  Training loss: 1.7178...  6.9340 sec/batch
Epoch: 4/4...  Training Step: 724...  Training loss: 1.7131...  6.9436 sec/batch
Epoch: 4/4...  Training Step: 725...  Training loss: 1.6964...  6.8131 sec/batch
Epoch: 4/4...  Training Step: 726...  Training loss: 1.7010...  6.8769 sec/batch
Epoch: 4/4...  Training Step: 727...  Training loss: 1.7374...  6.8719 sec/batch
Epoch: 4/4...  Training Step: 728...  Training loss: 1.7281...  6.9289 sec/batch
Epoch: 4/4...  Training Step: 729...  Training loss: 1.7242...  6.9262 sec/batch
Epoch: 4/4...  Training Step: 730...  Training loss: 1.7220...  6.9095 sec/batch
Epoch: 4/4...  Training Step: 731...  Training loss: 1.7314...  6.8798 sec/batch
Epoch: 4/4...  Training Step: 732...  Training loss: 1.7199...  6.9947 sec/batch
Epoch: 4/4...  Training Step: 733...  Training loss: 1.7374...  6.9132 sec/batch
Epoch: 4/4...  Training Step: 734...  Training loss: 1.7274...  6.9458 sec/batch
Epoch: 4/4...  Training Step: 735...  Training loss: 1.7530...  6.9162 sec/batch
Epoch: 4/4...  Training Step: 736...  Training loss: 1.7100...  6.9196 sec/batch
Epoch: 4/4...  Training Step: 737...  Training loss: 1.7193...  7.2359 sec/batch
Epoch: 4/4...  Training Step: 738...  Training loss: 1.7208...  6.8807 sec/batch
Epoch: 4/4...  Training Step: 739...  Training loss: 1.7022...  6.8726 sec/batch
Epoch: 4/4...  Training Step: 740...  Training loss: 1.7394...  6.9174 sec/batch
Epoch: 4/4...  Training Step: 741...  Training loss: 1.7242...  7.0309 sec/batch
Epoch: 4/4...  Training Step: 742...  Training loss: 1.7367...  6.8618 sec/batch
Epoch: 4/4...  Training Step: 743...  Training loss: 1.7217...  6.8789 sec/batch
Epoch: 4/4...  Training Step: 744...  Training loss: 1.7028...  6.8975 sec/batch
Epoch: 4/4...  Training Step: 745...  Training loss: 1.6808...  6.9057 sec/batch
Epoch: 4/4...  Training Step: 746...  Training loss: 1.7259...  7.1495 sec/batch
Epoch: 4/4...  Training Step: 747...  Training loss: 1.7259...  6.9032 sec/batch
Epoch: 4/4...  Training Step: 748...  Training loss: 1.7050...  6.9190 sec/batch
Epoch: 4/4...  Training Step: 749...  Training loss: 1.7137...  8.2311 sec/batch
Epoch: 4/4...  Training Step: 750...  Training loss: 1.6958...  7.6285 sec/batch
Epoch: 4/4...  Training Step: 751...  Training loss: 1.7205...  7.2160 sec/batch
Epoch: 4/4...  Training Step: 752...  Training loss: 1.7101...  8.1946 sec/batch
Epoch: 4/4...  Training Step: 753...  Training loss: 1.6731...  7.2009 sec/batch
Epoch: 4/4...  Training Step: 754...  Training loss: 1.7385...  6.9120 sec/batch
Epoch: 4/4...  Training Step: 755...  Training loss: 1.7281...  6.8942 sec/batch
Epoch: 4/4...  Training Step: 756...  Training loss: 1.7052...  6.9108 sec/batch
Epoch: 4/4...  Training Step: 757...  Training loss: 1.7063...  6.9666 sec/batch
Epoch: 4/4...  Training Step: 758...  Training loss: 1.6985...  6.9801 sec/batch
Epoch: 4/4...  Training Step: 759...  Training loss: 1.7002...  6.8511 sec/batch
Epoch: 4/4...  Training Step: 760...  Training loss: 1.6984...  7.7397 sec/batch
Epoch: 4/4...  Training Step: 761...  Training loss: 1.7224...  7.3908 sec/batch
Epoch: 4/4...  Training Step: 762...  Training loss: 1.7623...  7.6811 sec/batch
Epoch: 4/4...  Training Step: 763...  Training loss: 1.7023...  7.7702 sec/batch
Epoch: 4/4...  Training Step: 764...  Training loss: 1.7003...  7.2082 sec/batch
Epoch: 4/4...  Training Step: 765...  Training loss: 1.6964...  7.1819 sec/batch
Epoch: 4/4...  Training Step: 766...  Training loss: 1.7202...  7.8403 sec/batch
Epoch: 4/4...  Training Step: 767...  Training loss: 1.7591...  7.6788 sec/batch
Epoch: 4/4...  Training Step: 768...  Training loss: 1.7432...  7.1485 sec/batch
Epoch: 4/4...  Training Step: 769...  Training loss: 1.7424...  7.1242 sec/batch
Epoch: 4/4...  Training Step: 770...  Training loss: 1.6903...  7.1166 sec/batch
Epoch: 4/4...  Training Step: 771...  Training loss: 1.6805...  6.9481 sec/batch
Epoch: 4/4...  Training Step: 772...  Training loss: 1.7123...  7.0120 sec/batch
Epoch: 4/4...  Training Step: 773...  Training loss: 1.6639...  6.8916 sec/batch
Epoch: 4/4...  Training Step: 774...  Training loss: 1.6610...  7.0388 sec/batch
Epoch: 4/4...  Training Step: 775...  Training loss: 1.6700...  7.0355 sec/batch
Epoch: 4/4...  Training Step: 776...  Training loss: 1.7016...  7.0290 sec/batch
Epoch: 4/4...  Training Step: 777...  Training loss: 1.6893...  6.8701 sec/batch
Epoch: 4/4...  Training Step: 778...  Training loss: 1.7162...  6.8937 sec/batch
Epoch: 4/4...  Training Step: 779...  Training loss: 1.7301...  7.2554 sec/batch
Epoch: 4/4...  Training Step: 780...  Training loss: 1.6738...  7.0538 sec/batch
Epoch: 4/4...  Training Step: 781...  Training loss: 1.7035...  6.9741 sec/batch
Epoch: 4/4...  Training Step: 782...  Training loss: 1.6794...  6.9531 sec/batch
Epoch: 4/4...  Training Step: 783...  Training loss: 1.6966...  7.0081 sec/batch
Epoch: 4/4...  Training Step: 784...  Training loss: 1.6960...  7.0545 sec/batch
Epoch: 4/4...  Training Step: 785...  Training loss: 1.6956...  6.9571 sec/batch
Epoch: 4/4...  Training Step: 786...  Training loss: 1.6657...  6.9527 sec/batch
Epoch: 4/4...  Training Step: 787...  Training loss: 1.6883...  7.0348 sec/batch
Epoch: 4/4...  Training Step: 788...  Training loss: 1.6684...  6.9915 sec/batch
Epoch: 4/4...  Training Step: 789...  Training loss: 1.6549...  6.9759 sec/batch
Epoch: 4/4...  Training Step: 790...  Training loss: 1.6892...  6.9770 sec/batch
Epoch: 4/4...  Training Step: 791...  Training loss: 1.6720...  6.9991 sec/batch
Epoch: 4/4...  Training Step: 792...  Training loss: 1.6737...  6.9212 sec/batch

Saved checkpoints

Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
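
For reference, saving and restoring in TF 1.x boils down to two calls on a tf.train.Saver. Below is a minimal, self-contained sketch using a throwaway toy variable and path (illustrative only, not the CharRNN model or this notebook's training loop; note it also resets the default graph):

# Minimal TF 1.x checkpointing sketch with a toy variable; names and paths
# here are placeholders, not part of the model above.
tf.reset_default_graph()
toy_var = tf.get_variable('toy_var', shape=[2], initializer=tf.zeros_initializer())
toy_saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    save_path = toy_saver.save(sess, './toy_demo.ckpt')  # writes the checkpoint files

with tf.Session() as sess:
    toy_saver.restore(sess, save_path)  # loads the saved weights back into the graph
    print(sess.run(toy_var))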


In [122]:
tf.train.get_checkpoint_state('checkpoints')


Out[122]:
model_checkpoint_path: "checkpoints/i792_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i600_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i792_l512.ckpt"

Sampling

Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. Then we feed that new character back in to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.

The network gives us predictions for each character. To reduce noise and make the output a little less random, I'm only going to choose the new character from the top N most likely characters.


In [123]:
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    # Zero out everything except the top_n most likely characters
    p[np.argsort(p)[:-top_n]] = 0
    # Renormalize so the remaining probabilities sum to 1
    p = p / np.sum(p)
    # Sample a character index from the reduced distribution
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
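
As a quick sanity check with made-up numbers (a toy distribution, not real network output), top_n=2 keeps only the two largest probabilities, renormalizes them, and samples from what's left:

# Toy distribution over 5 "characters"; only 0.5 and 0.3 survive top_n=2,
# renormalized to 0.625 and 0.375, so the result is always index 0 or 1.
toy_preds = np.array([[0.5, 0.3, 0.1, 0.06, 0.04]])
print(pick_top_n(toy_preds, vocab_size=5, top_n=2))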

In [124]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    # Build the network in sampling mode: batch size and sequence length of 1
    model = CharRNN(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Load the trained weights from the checkpoint
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        # Prime the network: run each character of the prime string through it
        # to build up the hidden state
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

        c = pick_top_n(preds, vocab_size)
        samples.append(int_to_vocab[c])

        # Feed each predicted character back in to generate the next one
        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

            c = pick_top_n(preds, vocab_size)
            samples.append(int_to_vocab[c])

    return ''.join(samples)

Here, pass in the path to a checkpoint and sample from the network.


In [125]:
tf.train.latest_checkpoint('checkpoints')


Out[125]:
'checkpoints/i792_l512.ckpt'

In [126]:
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i792_l512.ckpt
Far had the comploon, and the send himself, and how have himself the same of his stalt in anster
sumpant the coulden hand, that she shate to be her told her as the momant."

"Well, and
sation that and said that the
pistrown....
 he despend to the chook and
her head.

"Yes, and he he had nather a dright with his was to at home anywing of the pisition."

And a colling it the
bord,
who calling a dropiting the man the stret was thought with," that he said a secour and word over the
recretty
was he say and as
that an ond of all whething to
the contalle and she could not have all the stranger," said he had not to some one was her, and whisken him in her has been. The said
and at the pertating his followed had bones, what was thing, and stelle that so me a mee to
do some to
said,
three was stain, with a
little but, and which a more husband was a man in
a so triev. The decient the beging had
have at his hand and all
she was to the sour the soun of the sards he suppants were saids, the seesity. "What always there's not tell, and alway have having or the saying and with the chartes and the comstangry of the counder," he said him and when the churdly and
tell her
for himself and tell."

"You had be the porting that the made it was so the pair, and teen and was
trild, and his siling, had see to him," see he
heart and had been so head, but he what he said. And he said, and some of something all the sumply thisked and sais to soo how. The caultry, and that with his heart and her
feal in the
beed of the stande of home in the sersing and
was
as take, but the princious and the peace and a means think as the feath into a came and the must began and with the proscalle was the met of taking of the same.

"When he would nothen she was that
in her she were beed throme,
but the changer always words all her,, as
the most and and see to his brother him. He, and they thought to be handed and the more the somether the pais of an offer he was shinged to the prince as he
seemed at," he was not her 

In [127]:
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i200_l512.ckpt
Far  tt e  haeess aaeetsesn tae h  htte t tth  hoe eeee  htei  tthe e tottatthett t aa eo eatttat htan aenne het aa hee  teon htaeten tatt  teane aat  otoasen h ae teoest aonn  heessa aaen ha  eeet  ae ha the t thet aaees et aenss  ttoeons teaee aet theen  aenne ae  eaese hota ha tae tooese a eat hten a ho  th toe  otet  eonsee h henset hati thaieet  teon  o e tateetaiest  h  oo tho to  eo eeset tee eo hei  atan  oh a ae  he hoansnt het a h e  oon  hetin h eos  eteee t oen h henesssene heseee  e a  ohest aetaet haetn  he eoe aan e  h thatt atettaitan he toe  ao eth  htee aae  he ha e aoe  a ho  o eate h eot aeeete ae aet hen eeta ao eooes  a toh henn taet t oeo hei  a  tooet t oha  ohott aee a a a tee e  oe  oee e  ota tee teo a et  e aaoen  hat he e eessis  hoa hoes heitthene  he haees  ae hte honee teo ha t he t oete heiss ho heietthe hei the ete  hteet henn taasse aoeesse te  ah  a oen tatanne th  haa t taetane  hee  heneen ae  at eoo taon eos htt t tth aae ht teta eo ta ae hteti tethae

In [128]:
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i600_l512.ckpt
Far tood to take, and the
ence sand one with all a corsors it when an alowat attered
the prilies of this had brow the seeted of she was a to she sour husbars in the
carest of him.." Alexatdy her hears.

"I'r her
sardited him, was soming, then same one
had thimed of this still him. The could to her tray, and that he same on
the sert was and wint hear of
have and with all to to the stanst, he had a pruncess who house thought to the supered it
hand that the
prepansed him, which his had but, he was tood to the parstion the presentity, with said. Theye
some, and the persicess. His want to
sto had stond, and the sompition, and shisted a miling, bugh, and ther, as it she wist she came her was as the
rear his hears, and was astording of the soor at inseered him but to him.. The dreat had be suppire it the face, and all the chard of the strich with him to a more had been how a ment of her
said hose her heast all, the
word of all the wards as how a surce he to her
to the poitions,
stays of won't
had

In [129]:
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="venkat is a good kid")
print(samp)


INFO:tensorflow:Restoring parameters from checkpoints/i1200_l512.ckpt
---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1038     try:
-> 1039       return fn(*args)
   1040     except errors.OpError as e:

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1020                                  feed_dict, fetch_list, target_list,
-> 1021                                  status, run_metadata)
   1022 

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
     65             try:
---> 66                 next(self.gen)
     67             except StopIteration:

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467   finally:

NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i1200_l512.ckpt
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

NotFoundError                             Traceback (most recent call last)
<ipython-input-129-d7e3abdf30ff> in <module>()
      1 checkpoint = 'checkpoints/i1200_l512.ckpt'
----> 2 samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="venkat is a good kid")
      3 print(samp)

<ipython-input-124-8eb787ae9642> in sample(checkpoint, n_samples, lstm_size, vocab_size, prime)
      4     saver = tf.train.Saver()
      5     with tf.Session() as sess:
----> 6         saver.restore(sess, checkpoint)
      7         new_state = sess.run(model.initial_state)
      8         for c in prime:

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/saver.py in restore(self, sess, save_path)
   1455     logging.info("Restoring parameters from %s", save_path)
   1456     sess.run(self.saver_def.restore_op_name,
-> 1457              {self.saver_def.filename_tensor_name: save_path})
   1458 
   1459   @staticmethod

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    776     try:
    777       result = self._run(None, fetches, feed_dict, options_ptr,
--> 778                          run_metadata_ptr)
    779       if run_metadata:
    780         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    980     if final_fetches or final_targets:
    981       results = self._do_run(handle, final_targets, final_fetches,
--> 982                              feed_dict_string, options, run_metadata)
    983     else:
    984       results = []

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1030     if handle is None:
   1031       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1032                            target_list, options, run_metadata)
   1033     else:
   1034       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1050         except KeyError:
   1051           pass
-> 1052       raise type(e)(node_def, op, message)
   1053 
   1054   def _extend_graph(self):

NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i1200_l512.ckpt
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 477, in start
    ioloop.IOLoop.instance().start()
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tornado/ioloop.py", line 888, in start
    handler_func(fd_obj, events)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell
    handler(stream, idents, msg)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 533, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2683, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2787, in run_ast_nodes
    if self.run_code(code, result):
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2847, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-129-d7e3abdf30ff>", line 2, in <module>
    samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="venkat is a good kid")
  File "<ipython-input-124-8eb787ae9642>", line 4, in sample
    saver = tf.train.Saver()
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1056, in __init__
    self.build()
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1086, in build
    restore_sequentially=self._restore_sequentially)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 691, in build
    restore_sequentially, reshape)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 407, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 247, in restore_op
    [spec.tensor.dtype])[0])
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 669, in restore_v2
    dtypes=dtypes, name=name)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/Users/venkata/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i1200_l512.ckpt
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
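
This NotFoundError is expected: training ran for 4 epochs of 198 batches (792 steps), so a step-1200 checkpoint was never written, and the only files on disk are the ones listed by get_checkpoint_state above. A small guard (a sketch, assuming the same 'checkpoints' directory) is to check what actually exists, or just fall back to the latest checkpoint, before sampling:

# List the checkpoints that were actually written; restoring any path not in
# this list raises the NotFoundError shown above.
ckpt_state = tf.train.get_checkpoint_state('checkpoints')
if ckpt_state is not None:
    print(ckpt_state.all_model_checkpoint_paths)

# Or simply use the newest checkpoint instead of a hard-coded step number.
checkpoint = tf.train.latest_checkpoint('checkpoints')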

In [ ]:


In [ ]: