Anna KaRNNa

In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch. I also drew on material from r2rt and from Sherjil Ozair's implementation on GitHub. Below is the general architecture of the character-wise RNN.


In [4]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple of dictionaries to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.


In [5]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
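
As a quick sanity check (a minimal sketch, not one of the original cells), the two dictionaries should round-trip: decoding the first 100 integers with int_to_vocab should reproduce text[:100].

decoded = ''.join(int_to_vocab[i] for i in encoded[:100])
assert decoded == text[:100]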

Let's check out the first 100 characters and make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.


In [6]:
text[:100]


Out[6]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'

And we can see the characters encoded as integers.


In [7]:
encoded[:100]


Out[7]:
array([31, 64, 57, 72, 76, 61, 74,  1, 16,  0,  0,  0, 36, 57, 72, 72, 81,
        1, 62, 57, 69, 65, 68, 65, 61, 75,  1, 57, 74, 61,  1, 57, 68, 68,
        1, 57, 68, 65, 67, 61, 26,  1, 61, 78, 61, 74, 81,  1, 77, 70, 64,
       57, 72, 72, 81,  1, 62, 57, 69, 65, 68, 81,  1, 65, 75,  1, 77, 70,
       64, 57, 72, 72, 81,  1, 65, 70,  1, 65, 76, 75,  1, 71, 79, 70,  0,
       79, 57, 81, 13,  0,  0, 33, 78, 61, 74, 81, 76, 64, 65, 70], dtype=int32)

Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.


In [8]:
len(vocab)


Out[8]:
83

Making training mini-batches

Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. As a simple example, our batches would look like this:


We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch. Once you know the number of batches and the batch size, you can get the total number of characters to keep.

After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.

Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:

y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

where x is the input batch and y is the target batch.

The way I like to do this windowing is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
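
To make this concrete, here's a minimal sketch with a tiny made-up array (my own toy example, not the real data) showing the trimming, reshaping, and windowing described above.

arr = np.arange(23)                              # pretend encoded text
n_seqs, n_steps = 2, 5                           # 2 sequences, 5 steps each
characters_per_batch = n_seqs * n_steps          # 10
n_batches = len(arr) // characters_per_batch     # 2
arr = arr[:n_batches * characters_per_batch]     # keep the first 20 values
arr = arr.reshape((n_seqs, -1))                  # shape (2, 10), i.e. N x (M * K)
window_starts = list(range(0, arr.shape[1], n_steps))   # [0, 5]
x = arr[:, 0:n_steps]                            # first 2 x 5 input window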


In [9]:
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.
       
       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr)//characters_per_batch
    
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]
    
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))
    
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.


In [10]:
batches = get_batches(encoded, 10, 50)
x, y = next(batches)

In [11]:
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])


x
 [[31 64 57 72 76 61 74  1 16  0]
 [ 1 57 69  1 70 71 76  1 63 71]
 [78 65 70 13  0  0  3 53 61 75]
 [70  1 60 77 74 65 70 63  1 64]
 [ 1 65 76  1 65 75 11  1 75 65]
 [ 1 37 76  1 79 57 75  0 71 70]
 [64 61 70  1 59 71 69 61  1 62]
 [26  1 58 77 76  1 70 71 79  1]
 [76  1 65 75 70  7 76 13  1 48]
 [ 1 75 57 65 60  1 76 71  1 64]]

y
 [[64 57 72 76 61 74  1 16  0  0]
 [57 69  1 70 71 76  1 63 71 65]
 [65 70 13  0  0  3 53 61 75 11]
 [ 1 60 77 74 65 70 63  1 64 65]
 [65 76  1 65 75 11  1 75 65 74]
 [37 76  1 79 57 75  0 71 70 68]
 [61 70  1 59 71 69 61  1 62 71]
 [ 1 58 77 76  1 70 71 79  1 75]
 [ 1 65 75 70  7 76 13  1 48 64]
 [75 57 65 60  1 76 71  1 64 61]]

If you implemented get_batches correctly, the above output should look something like

x
 [[55 63 69 22  6 76 45  5 16 35]
 [ 5 69  1  5 12 52  6  5 56 52]
 [48 29 12 61 35 35  8 64 76 78]
 [12  5 24 39 45 29 12 56  5 63]
 [ 5 29  6  5 29 78 28  5 78 29]
 [ 5 13  6  5 36 69 78 35 52 12]
 [63 76 12  5 18 52  1 76  5 58]
 [34  5 73 39  6  5 12 52 36  5]
 [ 6  5 29 78 12 79  6 61  5 59]
 [ 5 78 69 29 24  5  6 52  5 63]]

y
 [[63 69 22  6 76 45  5 16 35 35]
 [69  1  5 12 52  6  5 56 52 29]
 [29 12 61 35 35  8 64 76 78 28]
 [ 5 24 39 45 29 12 56  5 63 29]
 [29  6  5 29 78 28  5 78 29 45]
 [13  6  5 36 69 78 35 52 12 43]
 [76 12  5 18 52  1 76  5 58 52]
 [ 5 73 39  6  5 12 52 36  5 78]
 [ 5 29 78 12 79  6 61  5 59 63]
 [78 69 29 24  5  6 52  5 63 76]]

although the exact numbers will be different. Check to make sure the data is shifted over one step for y.
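
If you'd rather check the shift programmatically than by eye, a quick sketch (using the x and y arrays from the cell above) is:

# Every target except the last column should equal the inputs shifted left by one
assert np.array_equal(y[:, :-1], x[:, 1:])
# The last target column wraps around to the first input character in each sequence
assert np.array_equal(y[:, -1], x[:, 0])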

Building the model

Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.

Inputs

First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.


In [12]:
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout 
    
        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
        
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    
    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    
    return inputs, targets, keep_prob

LSTM Cell

Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

lstm = tf.contrib.rnn.BasicLSTMCell(num_units)

where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with

tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this

tf.contrib.rnn.MultiRNNCell([cell]*num_layers)

This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like

def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])

Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so

initial_state = cell.zero_state(batch_size, tf.float32)

Below, we implement the build_lstm function to create these LSTM cells and the initial state.


In [13]:
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.
    
        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size

    '''
    ### Build the LSTM Cell
    
    def build_cell(lstm_size, keep_prob):
        # Use a basic LSTM cell
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        
        # Add dropout to the cell
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop
    
    
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
    
    return cell, initial_state

RNN Output

Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$; we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.

Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are already weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.


In [14]:
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.
    
        Arguments
        ---------
        
        lstm_output: Input tensor from the LSTM cells
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    
    '''

    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # That is, the shape should be batch_size*num_steps rows by lstm_size columns
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])
    
    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    
    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b
    
    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name='predictions')
    
    return out, logits

Training loss

Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then we reshape the one-hot targets into a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.

Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.


In [15]:
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
    
        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
        
    '''
    
    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    
    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    return loss

Optimizer

Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing gradient problem, but the gradients can still grow without bound. To fix this, we clip the gradients: if the overall (global) norm of the gradients exceeds some threshold, we scale them down so the norm equals that threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.


In [16]:
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
    
        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold for clipping the gradients by global norm
    
    '''
    
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    return optimizer

Build the network

Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function passes the hidden and cell states from one sequence step to the next for us. It returns the LSTM output at each step for each sequence in the mini-batch, as well as the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before they go into the RNN.


In [17]:
class CharRNN:
    
    def __init__(self, num_classes, batch_size=64, num_steps=50, 
                       lstm_size=128, num_layers=2, learning_rate=0.001, 
                       grad_clip=5, sampling=False):
    
        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()
        
        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        
        # Run each sequence step through the RNN and collect the outputs
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state
        
        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        
        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)

Hyperparameters

Here I'm defining the hyperparameters for the network.

  • batch_size - Number of sequences running through the network in one pass.
  • num_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
  • lstm_size - The number of units in the hidden layers.
  • num_layers - Number of hidden LSTM layers to use
  • learning_rate - Learning rate for training
  • keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:

  • If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
  • If your training and validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer)

Approximate number of parameters

The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

  • The number of parameters in your model. This is printed when you start training.
  • The size of your dataset. 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

  • I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
  • I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
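
This notebook doesn't print a parameter count automatically, but here's a minimal sketch (my addition, using standard TensorFlow calls) for getting the rough model size Karpathy refers to, once the graph has been built:

def count_parameters():
    # Sum the sizes of all trainable variables in the default graph
    return sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())

Calling count_parameters() after constructing CharRNN below gives the approximate number of parameters to compare against your dataset size.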

Best models strategy

The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set, otherwise the validation performance will be noisy and not very informative.


In [18]:
batch_size = 200        # Sequences per batch
num_steps = 50         # Number of sequence steps per batch
lstm_size = 128         # Size of hidden layers in LSTMs
num_layers = 2          # Number of LSTM layers
learning_rate = 0.01   # Learning rate
keep_prob = 0.5         # Dropout keep probability
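
Karpathy's last tip above mentions a validation split. This notebook trains on the full text, but a minimal sketch of holding some of it out (my addition, with an arbitrary 90/10 split) would look like:

# Hold out the last 10% of the encoded text for validation (arbitrary choice)
split_idx = int(len(encoded) * 0.9)
train_data, val_data = encoded[:split_idx], encoded[split_idx:]

# Both splits can then be fed through the same batch generator
train_batches = get_batches(train_data, batch_size, num_steps)
val_batches = get_batches(val_data, batch_size, num_steps)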

Time for training

This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.

Here I'm saving checkpoints with the format

i{iteration number}_l{# hidden layer units}.ckpt
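
If you want to pick up from the newest of these files later, a small sketch (my addition, using the standard tf.train helper) is:

# Find the most recently saved checkpoint in the checkpoints/ directory
latest = tf.train.latest_checkpoint('checkpoints')
print(latest)   # e.g. 'checkpoints/i200_l128.ckpt', or None if nothing is saved yet

That path can then go into the saver.restore line that's commented out in the training cell below.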


In [20]:
epochs = 10
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers, 
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss, 
                                                 model.final_state, 
                                                 model.optimizer], 
                                                 feed_dict=feed)
            
            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))
        
            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))


Epoch: 1/10...  Training Step: 1...  Training loss: 4.4181...  4.5661 sec/batch
Epoch: 1/10...  Training Step: 2...  Training loss: 4.3345...  4.3059 sec/batch
Epoch: 1/10...  Training Step: 3...  Training loss: 3.8755...  4.4170 sec/batch
Epoch: 1/10...  Training Step: 4...  Training loss: 5.5256...  4.4646 sec/batch
Epoch: 1/10...  Training Step: 5...  Training loss: 4.2940...  4.7424 sec/batch
Epoch: 1/10...  Training Step: 6...  Training loss: 3.7873...  4.6983 sec/batch
Epoch: 1/10...  Training Step: 7...  Training loss: 3.6621...  4.4768 sec/batch
Epoch: 1/10...  Training Step: 8...  Training loss: 3.6074...  4.5873 sec/batch
Epoch: 1/10...  Training Step: 9...  Training loss: 3.5020...  4.4811 sec/batch
Epoch: 1/10...  Training Step: 10...  Training loss: 3.4590...  4.4242 sec/batch
Epoch: 1/10...  Training Step: 11...  Training loss: 3.4040...  4.4930 sec/batch
Epoch: 1/10...  Training Step: 12...  Training loss: 3.3932...  4.2266 sec/batch
Epoch: 1/10...  Training Step: 13...  Training loss: 3.3651...  4.3769 sec/batch
Epoch: 1/10...  Training Step: 14...  Training loss: 3.3805...  4.3811 sec/batch
Epoch: 1/10...  Training Step: 15...  Training loss: 3.3515...  4.2457 sec/batch
Epoch: 1/10...  Training Step: 16...  Training loss: 3.3203...  4.1158 sec/batch
Epoch: 1/10...  Training Step: 17...  Training loss: 3.2985...  4.3482 sec/batch
Epoch: 1/10...  Training Step: 18...  Training loss: 3.3082...  4.2074 sec/batch
Epoch: 1/10...  Training Step: 19...  Training loss: 3.2809...  4.6457 sec/batch
Epoch: 1/10...  Training Step: 20...  Training loss: 3.2520...  4.2445 sec/batch
Epoch: 1/10...  Training Step: 21...  Training loss: 3.2777...  4.3224 sec/batch
Epoch: 1/10...  Training Step: 22...  Training loss: 3.2617...  4.3016 sec/batch
Epoch: 1/10...  Training Step: 23...  Training loss: 3.2540...  4.2835 sec/batch
Epoch: 1/10...  Training Step: 24...  Training loss: 3.2460...  4.3940 sec/batch
Epoch: 1/10...  Training Step: 25...  Training loss: 3.2299...  4.2817 sec/batch
Epoch: 1/10...  Training Step: 26...  Training loss: 3.2341...  4.3060 sec/batch
Epoch: 1/10...  Training Step: 27...  Training loss: 3.2337...  4.4664 sec/batch
Epoch: 1/10...  Training Step: 28...  Training loss: 3.2092...  4.2176 sec/batch
Epoch: 1/10...  Training Step: 29...  Training loss: 3.2162...  4.4200 sec/batch
Epoch: 1/10...  Training Step: 30...  Training loss: 3.2226...  4.5022 sec/batch
Epoch: 1/10...  Training Step: 31...  Training loss: 3.2445...  4.3304 sec/batch
Epoch: 1/10...  Training Step: 32...  Training loss: 3.2012...  4.2257 sec/batch
Epoch: 1/10...  Training Step: 33...  Training loss: 3.1916...  4.2822 sec/batch
Epoch: 1/10...  Training Step: 34...  Training loss: 3.2139...  4.3032 sec/batch
Epoch: 1/10...  Training Step: 35...  Training loss: 3.1867...  4.3893 sec/batch
Epoch: 1/10...  Training Step: 36...  Training loss: 3.2150...  4.2720 sec/batch
Epoch: 1/10...  Training Step: 37...  Training loss: 3.1766...  4.4537 sec/batch
Epoch: 1/10...  Training Step: 38...  Training loss: 3.1825...  4.3758 sec/batch
Epoch: 1/10...  Training Step: 39...  Training loss: 3.1727...  4.4245 sec/batch
Epoch: 1/10...  Training Step: 40...  Training loss: 3.1758...  4.3621 sec/batch
Epoch: 1/10...  Training Step: 41...  Training loss: 3.1627...  4.3511 sec/batch
Epoch: 1/10...  Training Step: 42...  Training loss: 3.1705...  4.3671 sec/batch
Epoch: 1/10...  Training Step: 43...  Training loss: 3.1612...  4.2626 sec/batch
Epoch: 1/10...  Training Step: 44...  Training loss: 3.1501...  4.3422 sec/batch
Epoch: 1/10...  Training Step: 45...  Training loss: 3.1550...  4.3591 sec/batch
Epoch: 1/10...  Training Step: 46...  Training loss: 3.1682...  4.3800 sec/batch
Epoch: 1/10...  Training Step: 47...  Training loss: 3.1694...  4.3788 sec/batch
Epoch: 1/10...  Training Step: 48...  Training loss: 3.1767...  4.2501 sec/batch
Epoch: 1/10...  Training Step: 49...  Training loss: 3.1711...  4.2806 sec/batch
Epoch: 1/10...  Training Step: 50...  Training loss: 3.1731...  4.3387 sec/batch
Epoch: 1/10...  Training Step: 51...  Training loss: 3.1599...  4.5605 sec/batch
Epoch: 1/10...  Training Step: 52...  Training loss: 3.1548...  4.3810 sec/batch
Epoch: 1/10...  Training Step: 53...  Training loss: 3.1577...  4.4033 sec/batch
Epoch: 1/10...  Training Step: 54...  Training loss: 3.1441...  4.4165 sec/batch
Epoch: 1/10...  Training Step: 55...  Training loss: 3.1558...  4.3046 sec/batch
Epoch: 1/10...  Training Step: 56...  Training loss: 3.1351...  4.3710 sec/batch
Epoch: 1/10...  Training Step: 57...  Training loss: 3.1434...  4.3135 sec/batch
Epoch: 1/10...  Training Step: 58...  Training loss: 3.1489...  4.3066 sec/batch
Epoch: 1/10...  Training Step: 59...  Training loss: 3.1400...  4.4948 sec/batch
Epoch: 1/10...  Training Step: 60...  Training loss: 3.1368...  4.4246 sec/batch
Epoch: 1/10...  Training Step: 61...  Training loss: 3.1462...  4.3878 sec/batch
Epoch: 1/10...  Training Step: 62...  Training loss: 3.1598...  4.2764 sec/batch
Epoch: 1/10...  Training Step: 63...  Training loss: 3.1601...  4.3444 sec/batch
Epoch: 1/10...  Training Step: 64...  Training loss: 3.1183...  4.3382 sec/batch
Epoch: 1/10...  Training Step: 65...  Training loss: 3.1288...  4.5298 sec/batch
Epoch: 1/10...  Training Step: 66...  Training loss: 3.1521...  4.4366 sec/batch
Epoch: 1/10...  Training Step: 67...  Training loss: 3.1444...  4.3311 sec/batch
Epoch: 1/10...  Training Step: 68...  Training loss: 3.1001...  4.4150 sec/batch
Epoch: 1/10...  Training Step: 69...  Training loss: 3.1137...  4.4626 sec/batch
Epoch: 1/10...  Training Step: 70...  Training loss: 3.1383...  4.5011 sec/batch
Epoch: 1/10...  Training Step: 71...  Training loss: 3.1252...  4.4514 sec/batch
Epoch: 1/10...  Training Step: 72...  Training loss: 3.1482...  4.3983 sec/batch
Epoch: 1/10...  Training Step: 73...  Training loss: 3.1246...  4.7226 sec/batch
Epoch: 1/10...  Training Step: 74...  Training loss: 3.1274...  4.7684 sec/batch
Epoch: 1/10...  Training Step: 75...  Training loss: 3.1383...  5.0579 sec/batch
Epoch: 1/10...  Training Step: 76...  Training loss: 3.1463...  4.9776 sec/batch
Epoch: 1/10...  Training Step: 77...  Training loss: 3.1330...  4.8060 sec/batch
Epoch: 1/10...  Training Step: 78...  Training loss: 3.1198...  4.7533 sec/batch
Epoch: 1/10...  Training Step: 79...  Training loss: 3.1183...  4.9334 sec/batch
Epoch: 1/10...  Training Step: 80...  Training loss: 3.1065...  5.0121 sec/batch
Epoch: 1/10...  Training Step: 81...  Training loss: 3.1041...  4.7112 sec/batch
Epoch: 1/10...  Training Step: 82...  Training loss: 3.1243...  4.8725 sec/batch
Epoch: 1/10...  Training Step: 83...  Training loss: 3.1242...  4.9754 sec/batch
Epoch: 1/10...  Training Step: 84...  Training loss: 3.1093...  4.7352 sec/batch
Epoch: 1/10...  Training Step: 85...  Training loss: 3.0922...  4.3364 sec/batch
Epoch: 1/10...  Training Step: 86...  Training loss: 3.0988...  4.2891 sec/batch
Epoch: 1/10...  Training Step: 87...  Training loss: 3.0955...  4.4477 sec/batch
Epoch: 1/10...  Training Step: 88...  Training loss: 3.0917...  4.7620 sec/batch
Epoch: 1/10...  Training Step: 89...  Training loss: 3.1108...  4.8102 sec/batch
Epoch: 1/10...  Training Step: 90...  Training loss: 3.1049...  5.0304 sec/batch
Epoch: 1/10...  Training Step: 91...  Training loss: 3.1012...  4.9348 sec/batch
Epoch: 1/10...  Training Step: 92...  Training loss: 3.0837...  4.7975 sec/batch
Epoch: 1/10...  Training Step: 93...  Training loss: 3.0922...  4.8635 sec/batch
Epoch: 1/10...  Training Step: 94...  Training loss: 3.0875...  4.7667 sec/batch
Epoch: 1/10...  Training Step: 95...  Training loss: 3.0843...  4.8455 sec/batch
Epoch: 1/10...  Training Step: 96...  Training loss: 3.0707...  4.8663 sec/batch
Epoch: 1/10...  Training Step: 97...  Training loss: 3.0788...  4.8254 sec/batch
Epoch: 1/10...  Training Step: 98...  Training loss: 3.0781...  4.8824 sec/batch
Epoch: 1/10...  Training Step: 99...  Training loss: 3.0960...  4.6981 sec/batch
Epoch: 1/10...  Training Step: 100...  Training loss: 3.0715...  4.7365 sec/batch
Epoch: 1/10...  Training Step: 101...  Training loss: 3.0861...  5.3545 sec/batch
Epoch: 1/10...  Training Step: 102...  Training loss: 3.0815...  4.8407 sec/batch
Epoch: 1/10...  Training Step: 103...  Training loss: 3.0766...  4.9995 sec/batch
Epoch: 1/10...  Training Step: 104...  Training loss: 3.0715...  5.1289 sec/batch
Epoch: 1/10...  Training Step: 105...  Training loss: 3.0636...  5.3116 sec/batch
Epoch: 1/10...  Training Step: 106...  Training loss: 3.0653...  5.1120 sec/batch
Epoch: 1/10...  Training Step: 107...  Training loss: 3.0377...  4.9903 sec/batch
Epoch: 1/10...  Training Step: 108...  Training loss: 3.0527...  4.9944 sec/batch
Epoch: 1/10...  Training Step: 109...  Training loss: 3.0534...  4.8239 sec/batch
Epoch: 1/10...  Training Step: 110...  Training loss: 3.0210...  4.8684 sec/batch
Epoch: 1/10...  Training Step: 111...  Training loss: 3.0353...  4.8740 sec/batch
Epoch: 1/10...  Training Step: 112...  Training loss: 3.0318...  4.9977 sec/batch
Epoch: 1/10...  Training Step: 113...  Training loss: 3.0136...  4.8347 sec/batch
Epoch: 1/10...  Training Step: 114...  Training loss: 3.1920...  4.8159 sec/batch
Epoch: 1/10...  Training Step: 115...  Training loss: 3.0548...  4.9056 sec/batch
Epoch: 1/10...  Training Step: 116...  Training loss: 3.0387...  4.7645 sec/batch
Epoch: 1/10...  Training Step: 117...  Training loss: 3.0343...  4.7378 sec/batch
Epoch: 1/10...  Training Step: 118...  Training loss: 3.0358...  5.0917 sec/batch
Epoch: 1/10...  Training Step: 119...  Training loss: 3.0407...  4.9077 sec/batch
Epoch: 1/10...  Training Step: 120...  Training loss: 3.0182...  5.0274 sec/batch
Epoch: 1/10...  Training Step: 121...  Training loss: 3.0439...  4.9943 sec/batch
Epoch: 1/10...  Training Step: 122...  Training loss: 3.0174...  4.8721 sec/batch
Epoch: 1/10...  Training Step: 123...  Training loss: 3.0107...  4.8103 sec/batch
Epoch: 1/10...  Training Step: 124...  Training loss: 3.0213...  4.8100 sec/batch
Epoch: 1/10...  Training Step: 125...  Training loss: 2.9948...  5.1138 sec/batch
Epoch: 1/10...  Training Step: 126...  Training loss: 2.9739...  4.8449 sec/batch
Epoch: 1/10...  Training Step: 127...  Training loss: 2.9900...  4.8607 sec/batch
Epoch: 1/10...  Training Step: 128...  Training loss: 2.9939...  5.0006 sec/batch
Epoch: 1/10...  Training Step: 129...  Training loss: 2.9779...  4.9048 sec/batch
Epoch: 1/10...  Training Step: 130...  Training loss: 2.9806...  4.8944 sec/batch
Epoch: 1/10...  Training Step: 131...  Training loss: 2.9742...  5.0577 sec/batch
Epoch: 1/10...  Training Step: 132...  Training loss: 2.9382...  5.2231 sec/batch
Epoch: 1/10...  Training Step: 133...  Training loss: 2.9494...  5.2426 sec/batch
Epoch: 1/10...  Training Step: 134...  Training loss: 2.9382...  4.9442 sec/batch
Epoch: 1/10...  Training Step: 135...  Training loss: 2.9047...  5.1207 sec/batch
Epoch: 1/10...  Training Step: 136...  Training loss: 2.9041...  5.0464 sec/batch
Epoch: 1/10...  Training Step: 137...  Training loss: 2.9112...  4.7782 sec/batch
Epoch: 1/10...  Training Step: 138...  Training loss: 2.8865...  4.8318 sec/batch
Epoch: 1/10...  Training Step: 139...  Training loss: 2.9071...  4.7205 sec/batch
Epoch: 1/10...  Training Step: 140...  Training loss: 2.8833...  4.8680 sec/batch
Epoch: 1/10...  Training Step: 141...  Training loss: 2.8872...  5.1225 sec/batch
Epoch: 1/10...  Training Step: 142...  Training loss: 2.8425...  5.3683 sec/batch
Epoch: 1/10...  Training Step: 143...  Training loss: 2.8616...  5.0911 sec/batch
Epoch: 1/10...  Training Step: 144...  Training loss: 2.8499...  5.0973 sec/batch
Epoch: 1/10...  Training Step: 145...  Training loss: 2.8474...  4.7738 sec/batch
Epoch: 1/10...  Training Step: 146...  Training loss: 2.8498...  4.6566 sec/batch
Epoch: 1/10...  Training Step: 147...  Training loss: 2.8303...  5.1067 sec/batch
Epoch: 1/10...  Training Step: 148...  Training loss: 2.8512...  4.6288 sec/batch
Epoch: 1/10...  Training Step: 149...  Training loss: 2.7989...  5.3238 sec/batch
Epoch: 1/10...  Training Step: 150...  Training loss: 2.8070...  4.9458 sec/batch
Epoch: 1/10...  Training Step: 151...  Training loss: 2.8171...  4.4837 sec/batch
Epoch: 1/10...  Training Step: 152...  Training loss: 2.8368...  4.4856 sec/batch
Epoch: 1/10...  Training Step: 153...  Training loss: 2.7950...  5.3128 sec/batch
Epoch: 1/10...  Training Step: 154...  Training loss: 2.7863...  4.8709 sec/batch
Epoch: 1/10...  Training Step: 155...  Training loss: 2.7575...  2.9895 sec/batch
Epoch: 1/10...  Training Step: 156...  Training loss: 2.7538...  3.1530 sec/batch
Epoch: 1/10...  Training Step: 157...  Training loss: 2.7231...  2.7048 sec/batch
Epoch: 1/10...  Training Step: 158...  Training loss: 2.7268...  2.6407 sec/batch
Epoch: 1/10...  Training Step: 159...  Training loss: 2.7015...  2.8605 sec/batch
Epoch: 1/10...  Training Step: 160...  Training loss: 2.7300...  2.7239 sec/batch
Epoch: 1/10...  Training Step: 161...  Training loss: 2.7102...  2.7040 sec/batch
Epoch: 1/10...  Training Step: 162...  Training loss: 2.6847...  2.8105 sec/batch
Epoch: 1/10...  Training Step: 163...  Training loss: 2.6809...  2.7939 sec/batch
Epoch: 1/10...  Training Step: 164...  Training loss: 2.6774...  2.9119 sec/batch
Epoch: 1/10...  Training Step: 165...  Training loss: 2.6948...  2.7353 sec/batch
Epoch: 1/10...  Training Step: 166...  Training loss: 2.6668...  2.6950 sec/batch
Epoch: 1/10...  Training Step: 167...  Training loss: 2.6603...  2.7896 sec/batch
Epoch: 1/10...  Training Step: 168...  Training loss: 2.6608...  2.8820 sec/batch
Epoch: 1/10...  Training Step: 169...  Training loss: 2.6491...  2.7079 sec/batch
Epoch: 1/10...  Training Step: 170...  Training loss: 2.6267...  2.7930 sec/batch
Epoch: 1/10...  Training Step: 171...  Training loss: 2.6376...  2.6446 sec/batch
Epoch: 1/10...  Training Step: 172...  Training loss: 2.6636...  2.6514 sec/batch
Epoch: 1/10...  Training Step: 173...  Training loss: 2.6507...  2.5813 sec/batch
Epoch: 1/10...  Training Step: 174...  Training loss: 2.6463...  2.5675 sec/batch
Epoch: 1/10...  Training Step: 175...  Training loss: 2.6283...  2.7526 sec/batch
Epoch: 1/10...  Training Step: 176...  Training loss: 2.6063...  2.8558 sec/batch
Epoch: 1/10...  Training Step: 177...  Training loss: 2.5954...  2.6281 sec/batch
Epoch: 1/10...  Training Step: 178...  Training loss: 2.5671...  2.6270 sec/batch
Epoch: 1/10...  Training Step: 179...  Training loss: 2.5680...  2.8700 sec/batch
Epoch: 1/10...  Training Step: 180...  Training loss: 2.5456...  2.7806 sec/batch
Epoch: 1/10...  Training Step: 181...  Training loss: 2.5709...  2.7926 sec/batch
Epoch: 1/10...  Training Step: 182...  Training loss: 2.5635...  2.7340 sec/batch
Epoch: 1/10...  Training Step: 183...  Training loss: 2.5449...  2.7364 sec/batch
Epoch: 1/10...  Training Step: 184...  Training loss: 2.5678...  2.5898 sec/batch
Epoch: 1/10...  Training Step: 185...  Training loss: 2.6262...  2.6831 sec/batch
Epoch: 1/10...  Training Step: 186...  Training loss: 2.5963...  2.6971 sec/batch
Epoch: 1/10...  Training Step: 187...  Training loss: 2.5166...  2.7853 sec/batch
Epoch: 1/10...  Training Step: 188...  Training loss: 2.5097...  2.6618 sec/batch
Epoch: 1/10...  Training Step: 189...  Training loss: 2.5163...  2.6479 sec/batch
Epoch: 1/10...  Training Step: 190...  Training loss: 2.5245...  2.6712 sec/batch
Epoch: 1/10...  Training Step: 191...  Training loss: 2.5293...  2.7234 sec/batch
Epoch: 1/10...  Training Step: 192...  Training loss: 2.4962...  2.7164 sec/batch
Epoch: 1/10...  Training Step: 193...  Training loss: 2.5124...  2.7053 sec/batch
Epoch: 1/10...  Training Step: 194...  Training loss: 2.5030...  2.7072 sec/batch
Epoch: 1/10...  Training Step: 195...  Training loss: 2.4965...  2.7116 sec/batch
Epoch: 1/10...  Training Step: 196...  Training loss: 2.4920...  2.6974 sec/batch
Epoch: 1/10...  Training Step: 197...  Training loss: 2.4889...  2.9556 sec/batch
Epoch: 1/10...  Training Step: 198...  Training loss: 2.4876...  2.7066 sec/batch
Epoch: 2/10...  Training Step: 199...  Training loss: 2.5680...  2.8458 sec/batch
Epoch: 2/10...  Training Step: 200...  Training loss: 2.4599...  2.6528 sec/batch
Epoch: 2/10...  Training Step: 201...  Training loss: 2.4820...  2.6679 sec/batch
Epoch: 2/10...  Training Step: 202...  Training loss: 2.4795...  2.6059 sec/batch
Epoch: 2/10...  Training Step: 203...  Training loss: 2.4767...  2.6075 sec/batch
Epoch: 2/10...  Training Step: 204...  Training loss: 2.4793...  2.8744 sec/batch
Epoch: 2/10...  Training Step: 205...  Training loss: 2.4747...  2.6670 sec/batch
Epoch: 2/10...  Training Step: 206...  Training loss: 2.4815...  2.8020 sec/batch
Epoch: 2/10...  Training Step: 207...  Training loss: 2.4946...  2.7497 sec/batch
Epoch: 2/10...  Training Step: 208...  Training loss: 2.4607...  2.7827 sec/batch
Epoch: 2/10...  Training Step: 209...  Training loss: 2.4524...  2.7067 sec/batch
Epoch: 2/10...  Training Step: 210...  Training loss: 2.4634...  2.8358 sec/batch
Epoch: 2/10...  Training Step: 211...  Training loss: 2.4517...  2.6202 sec/batch
Epoch: 2/10...  Training Step: 212...  Training loss: 2.4912...  2.7576 sec/batch
Epoch: 2/10...  Training Step: 213...  Training loss: 2.4639...  2.8543 sec/batch
Epoch: 2/10...  Training Step: 214...  Training loss: 2.4600...  2.6887 sec/batch
Epoch: 2/10...  Training Step: 215...  Training loss: 2.4610...  2.7589 sec/batch
Epoch: 2/10...  Training Step: 216...  Training loss: 2.4878...  2.6806 sec/batch
Epoch: 2/10...  Training Step: 217...  Training loss: 2.4555...  2.6855 sec/batch
Epoch: 2/10...  Training Step: 218...  Training loss: 2.4319...  2.6337 sec/batch
Epoch: 2/10...  Training Step: 219...  Training loss: 2.4424...  2.6479 sec/batch
Epoch: 2/10...  Training Step: 220...  Training loss: 2.4726...  2.6597 sec/batch
Epoch: 2/10...  Training Step: 221...  Training loss: 2.4493...  2.6251 sec/batch
Epoch: 2/10...  Training Step: 222...  Training loss: 2.4293...  2.6003 sec/batch
Epoch: 2/10...  Training Step: 223...  Training loss: 2.4223...  2.7210 sec/batch
Epoch: 2/10...  Training Step: 224...  Training loss: 2.4342...  2.6229 sec/batch
Epoch: 2/10...  Training Step: 225...  Training loss: 2.4212...  2.5792 sec/batch
Epoch: 2/10...  Training Step: 226...  Training loss: 2.4233...  2.6893 sec/batch
Epoch: 2/10...  Training Step: 227...  Training loss: 2.4447...  2.6415 sec/batch
Epoch: 2/10...  Training Step: 228...  Training loss: 2.4283...  2.5986 sec/batch
Epoch: 2/10...  Training Step: 229...  Training loss: 2.4384...  2.5845 sec/batch
Epoch: 2/10...  Training Step: 230...  Training loss: 2.3988...  2.7625 sec/batch
Epoch: 2/10...  Training Step: 231...  Training loss: 2.3946...  2.6248 sec/batch
Epoch: 2/10...  Training Step: 232...  Training loss: 2.4251...  2.6929 sec/batch
Epoch: 2/10...  Training Step: 233...  Training loss: 2.3981...  2.7751 sec/batch
Epoch: 2/10...  Training Step: 234...  Training loss: 2.4131...  2.6742 sec/batch
Epoch: 2/10...  Training Step: 235...  Training loss: 2.3893...  2.7765 sec/batch
Epoch: 2/10...  Training Step: 236...  Training loss: 2.3850...  2.7062 sec/batch
Epoch: 2/10...  Training Step: 237...  Training loss: 2.3747...  2.6537 sec/batch
Epoch: 2/10...  Training Step: 238...  Training loss: 2.3785...  2.7767 sec/batch
Epoch: 2/10...  Training Step: 239...  Training loss: 2.3741...  2.5873 sec/batch
Epoch: 2/10...  Training Step: 240...  Training loss: 2.3812...  2.5729 sec/batch
Epoch: 2/10...  Training Step: 241...  Training loss: 2.3756...  2.6937 sec/batch
Epoch: 2/10...  Training Step: 242...  Training loss: 2.3673...  2.6218 sec/batch
Epoch: 2/10...  Training Step: 243...  Training loss: 2.3736...  2.6119 sec/batch
Epoch: 2/10...  Training Step: 244...  Training loss: 2.3245...  2.6178 sec/batch
Epoch: 2/10...  Training Step: 245...  Training loss: 2.3978...  2.8883 sec/batch
Epoch: 2/10...  Training Step: 246...  Training loss: 2.3714...  2.7355 sec/batch
Epoch: 2/10...  Training Step: 247...  Training loss: 2.3752...  2.5770 sec/batch
Epoch: 2/10...  Training Step: 248...  Training loss: 2.3882...  2.5794 sec/batch
Epoch: 2/10...  Training Step: 249...  Training loss: 2.3476...  2.6794 sec/batch
Epoch: 2/10...  Training Step: 250...  Training loss: 2.3915...  2.5775 sec/batch
Epoch: 2/10...  Training Step: 251...  Training loss: 2.3592...  2.5962 sec/batch
Epoch: 2/10...  Training Step: 252...  Training loss: 2.3522...  2.5787 sec/batch
Epoch: 2/10...  Training Step: 253...  Training loss: 2.3517...  2.5674 sec/batch
Epoch: 2/10...  Training Step: 254...  Training loss: 2.3619...  2.6576 sec/batch
Epoch: 2/10...  Training Step: 255...  Training loss: 2.3468...  2.6926 sec/batch
Epoch: 2/10...  Training Step: 256...  Training loss: 2.3484...  2.6270 sec/batch
Epoch: 2/10...  Training Step: 257...  Training loss: 2.3399...  2.7932 sec/batch
Epoch: 2/10...  Training Step: 258...  Training loss: 2.3683...  2.7809 sec/batch
Epoch: 2/10...  Training Step: 259...  Training loss: 2.3479...  2.7896 sec/batch
Epoch: 2/10...  Training Step: 260...  Training loss: 2.3550...  2.5953 sec/batch
Epoch: 2/10...  Training Step: 261...  Training loss: 2.3668...  2.6309 sec/batch
Epoch: 2/10...  Training Step: 262...  Training loss: 2.3355...  2.5826 sec/batch
Epoch: 2/10...  Training Step: 263...  Training loss: 2.3237...  2.5872 sec/batch
Epoch: 2/10...  Training Step: 264...  Training loss: 2.3634...  2.7754 sec/batch
Epoch: 2/10...  Training Step: 265...  Training loss: 2.3423...  2.5934 sec/batch
Epoch: 2/10...  Training Step: 266...  Training loss: 2.3122...  2.5941 sec/batch
Epoch: 2/10...  Training Step: 267...  Training loss: 2.3084...  2.5899 sec/batch
Epoch: 2/10...  Training Step: 268...  Training loss: 2.3305...  2.5858 sec/batch
Epoch: 2/10...  Training Step: 269...  Training loss: 2.3405...  2.6724 sec/batch
Epoch: 2/10...  Training Step: 270...  Training loss: 2.3326...  2.6182 sec/batch
Epoch: 2/10...  Training Step: 271...  Training loss: 2.3274...  2.8444 sec/batch
Epoch: 2/10...  Training Step: 272...  Training loss: 2.3098...  2.9020 sec/batch
Epoch: 2/10...  Training Step: 273...  Training loss: 2.3070...  2.6893 sec/batch
Epoch: 2/10...  Training Step: 274...  Training loss: 2.3582...  2.8121 sec/batch
Epoch: 2/10...  Training Step: 275...  Training loss: 2.3129...  2.7233 sec/batch
Epoch: 2/10...  Training Step: 276...  Training loss: 2.3268...  2.6070 sec/batch
Epoch: 2/10...  Training Step: 277...  Training loss: 2.2930...  2.5951 sec/batch
Epoch: 2/10...  Training Step: 278...  Training loss: 2.3052...  2.6197 sec/batch
Epoch: 2/10...  Training Step: 279...  Training loss: 2.2828...  2.5924 sec/batch
Epoch: 2/10...  Training Step: 280...  Training loss: 2.3188...  2.5835 sec/batch
Epoch: 2/10...  Training Step: 281...  Training loss: 2.2802...  2.5898 sec/batch
Epoch: 2/10...  Training Step: 282...  Training loss: 2.2797...  2.6342 sec/batch
Epoch: 2/10...  Training Step: 283...  Training loss: 2.2500...  2.6695 sec/batch
Epoch: 2/10...  Training Step: 284...  Training loss: 2.2837...  2.6082 sec/batch
Epoch: 2/10...  Training Step: 285...  Training loss: 2.2912...  2.6388 sec/batch
Epoch: 2/10...  Training Step: 286...  Training loss: 2.2843...  2.6176 sec/batch
Epoch: 2/10...  Training Step: 287...  Training loss: 2.2729...  2.6348 sec/batch
Epoch: 2/10...  Training Step: 288...  Training loss: 2.2963...  2.6120 sec/batch
Epoch: 2/10...  Training Step: 289...  Training loss: 2.2709...  2.5915 sec/batch
Epoch: 2/10...  Training Step: 290...  Training loss: 2.2779...  2.7299 sec/batch
Epoch: 2/10...  Training Step: 291...  Training loss: 2.2638...  2.5880 sec/batch
Epoch: 2/10...  Training Step: 292...  Training loss: 2.2630...  2.8060 sec/batch
Epoch: 2/10...  Training Step: 293...  Training loss: 2.2600...  2.6759 sec/batch
Epoch: 2/10...  Training Step: 294...  Training loss: 2.2530...  2.8195 sec/batch
Epoch: 2/10...  Training Step: 295...  Training loss: 2.2624...  2.7485 sec/batch
Epoch: 2/10...  Training Step: 296...  Training loss: 2.2678...  2.6226 sec/batch
Epoch: 2/10...  Training Step: 297...  Training loss: 2.2494...  2.6914 sec/batch
Epoch: 2/10...  Training Step: 298...  Training loss: 2.2429...  2.6837 sec/batch
Epoch: 2/10...  Training Step: 299...  Training loss: 2.2746...  2.6605 sec/batch
Epoch: 2/10...  Training Step: 300...  Training loss: 2.2617...  2.6143 sec/batch
Epoch: 2/10...  Training Step: 301...  Training loss: 2.2397...  2.7070 sec/batch
Epoch: 2/10...  Training Step: 302...  Training loss: 2.2426...  2.6374 sec/batch
Epoch: 2/10...  Training Step: 303...  Training loss: 2.2385...  2.6882 sec/batch
Epoch: 2/10...  Training Step: 304...  Training loss: 2.2499...  2.7012 sec/batch
Epoch: 2/10...  Training Step: 305...  Training loss: 2.2428...  2.6464 sec/batch
Epoch: 2/10...  Training Step: 306...  Training loss: 2.2773...  2.6887 sec/batch
Epoch: 2/10...  Training Step: 307...  Training loss: 2.2586...  2.5775 sec/batch
Epoch: 2/10...  Training Step: 308...  Training loss: 2.2344...  2.6314 sec/batch
Epoch: 2/10...  Training Step: 309...  Training loss: 2.2457...  2.6724 sec/batch
Epoch: 2/10...  Training Step: 310...  Training loss: 2.2526...  2.7136 sec/batch
Epoch: 2/10...  Training Step: 311...  Training loss: 2.2380...  2.6812 sec/batch
Epoch: 2/10...  Training Step: 312...  Training loss: 2.2204...  2.7222 sec/batch
Epoch: 2/10...  Training Step: 313...  Training loss: 2.2250...  2.7328 sec/batch
Epoch: 2/10...  Training Step: 314...  Training loss: 2.2040...  2.8852 sec/batch
Epoch: 2/10...  Training Step: 315...  Training loss: 2.2248...  2.8791 sec/batch
Epoch: 2/10...  Training Step: 316...  Training loss: 2.2329...  2.6921 sec/batch
Epoch: 2/10...  Training Step: 317...  Training loss: 2.2439...  2.7145 sec/batch
Epoch: 2/10...  Training Step: 318...  Training loss: 2.2349...  2.6960 sec/batch
Epoch: 2/10...  Training Step: 319...  Training loss: 2.2538...  2.7299 sec/batch
Epoch: 2/10...  Training Step: 320...  Training loss: 2.2151...  2.6843 sec/batch
Epoch: 2/10...  Training Step: 321...  Training loss: 2.2047...  2.6840 sec/batch
Epoch: 2/10...  Training Step: 322...  Training loss: 2.2504...  2.7458 sec/batch
Epoch: 2/10...  Training Step: 323...  Training loss: 2.2212...  2.7844 sec/batch
Epoch: 2/10...  Training Step: 324...  Training loss: 2.1994...  2.8039 sec/batch
Epoch: 2/10...  Training Step: 325...  Training loss: 2.2286...  2.6334 sec/batch
Epoch: 2/10...  Training Step: 326...  Training loss: 2.2252...  2.5802 sec/batch
Epoch: 2/10...  Training Step: 327...  Training loss: 2.2276...  2.6866 sec/batch
Epoch: 2/10...  Training Step: 328...  Training loss: 2.2237...  2.7142 sec/batch
Epoch: 2/10...  Training Step: 329...  Training loss: 2.1981...  2.6014 sec/batch
Epoch: 2/10...  Training Step: 330...  Training loss: 2.1910...  2.6128 sec/batch
Epoch: 2/10...  Training Step: 331...  Training loss: 2.2184...  2.5949 sec/batch
Epoch: 2/10...  Training Step: 332...  Training loss: 2.2228...  2.6692 sec/batch
Epoch: 2/10...  Training Step: 333...  Training loss: 2.2047...  2.6842 sec/batch
Epoch: 2/10...  Training Step: 334...  Training loss: 2.2189...  2.8386 sec/batch
Epoch: 2/10...  Training Step: 335...  Training loss: 2.2109...  2.7304 sec/batch
Epoch: 2/10...  Training Step: 336...  Training loss: 2.2101...  2.7276 sec/batch
Epoch: 2/10...  Training Step: 337...  Training loss: 2.2336...  2.7895 sec/batch
Epoch: 2/10...  Training Step: 338...  Training loss: 2.1983...  2.6502 sec/batch
Epoch: 2/10...  Training Step: 339...  Training loss: 2.2186...  2.7187 sec/batch
Epoch: 2/10...  Training Step: 340...  Training loss: 2.1850...  2.6667 sec/batch
Epoch: 2/10...  Training Step: 341...  Training loss: 2.1790...  2.8153 sec/batch
Epoch: 2/10...  Training Step: 342...  Training loss: 2.1903...  2.7590 sec/batch
Epoch: 2/10...  Training Step: 343...  Training loss: 2.1840...  2.7590 sec/batch
Epoch: 2/10...  Training Step: 344...  Training loss: 2.2119...  2.7293 sec/batch
Epoch: 2/10...  Training Step: 345...  Training loss: 2.1981...  2.6845 sec/batch
Epoch: 2/10...  Training Step: 346...  Training loss: 2.2046...  2.6342 sec/batch
Epoch: 2/10...  Training Step: 347...  Training loss: 2.1788...  2.6182 sec/batch
Epoch: 2/10...  Training Step: 348...  Training loss: 2.1647...  2.6835 sec/batch
Epoch: 2/10...  Training Step: 349...  Training loss: 2.1880...  2.6307 sec/batch
Epoch: 2/10...  Training Step: 350...  Training loss: 2.2146...  2.6138 sec/batch
Epoch: 2/10...  Training Step: 351...  Training loss: 2.1905...  2.7965 sec/batch
Epoch: 2/10...  Training Step: 352...  Training loss: 2.1919...  2.8462 sec/batch
Epoch: 2/10...  Training Step: 353...  Training loss: 2.1638...  2.7499 sec/batch
Epoch: 2/10...  Training Step: 354...  Training loss: 2.1729...  2.7561 sec/batch
Epoch: 2/10...  Training Step: 355...  Training loss: 2.1696...  2.5804 sec/batch
Epoch: 2/10...  Training Step: 356...  Training loss: 2.1627...  2.5858 sec/batch
Epoch: 2/10...  Training Step: 357...  Training loss: 2.1369...  2.5854 sec/batch
Epoch: 2/10...  Training Step: 358...  Training loss: 2.2026...  2.5964 sec/batch
Epoch: 2/10...  Training Step: 359...  Training loss: 2.1785...  2.6833 sec/batch
Epoch: 2/10...  Training Step: 360...  Training loss: 2.1600...  2.6166 sec/batch
Epoch: 2/10...  Training Step: 361...  Training loss: 2.1629...  2.6124 sec/batch
Epoch: 2/10...  Training Step: 362...  Training loss: 2.1681...  2.5906 sec/batch
Epoch: 2/10...  Training Step: 363...  Training loss: 2.1651...  2.5792 sec/batch
Epoch: 2/10...  Training Step: 364...  Training loss: 2.1517...  2.6106 sec/batch
Epoch: 2/10...  Training Step: 365...  Training loss: 2.1710...  2.6255 sec/batch
Epoch: 2/10...  Training Step: 366...  Training loss: 2.1797...  2.6616 sec/batch
Epoch: 2/10...  Training Step: 367...  Training loss: 2.1466...  2.8429 sec/batch
Epoch: 2/10...  Training Step: 368...  Training loss: 2.1491...  2.6759 sec/batch
Epoch: 2/10...  Training Step: 369...  Training loss: 2.1392...  2.7889 sec/batch
Epoch: 2/10...  Training Step: 370...  Training loss: 2.1489...  2.9531 sec/batch
Epoch: 2/10...  Training Step: 371...  Training loss: 2.1817...  2.7191 sec/batch
Epoch: 2/10...  Training Step: 372...  Training loss: 2.1673...  2.5887 sec/batch
Epoch: 2/10...  Training Step: 373...  Training loss: 2.1683...  2.5839 sec/batch
Epoch: 2/10...  Training Step: 374...  Training loss: 2.1469...  2.8507 sec/batch
Epoch: 2/10...  Training Step: 375...  Training loss: 2.1342...  2.5820 sec/batch
Epoch: 2/10...  Training Step: 376...  Training loss: 2.1420...  2.8344 sec/batch
Epoch: 2/10...  Training Step: 377...  Training loss: 2.1156...  2.6457 sec/batch
Epoch: 2/10...  Training Step: 378...  Training loss: 2.1057...  2.5833 sec/batch
Epoch: 2/10...  Training Step: 379...  Training loss: 2.1267...  2.6648 sec/batch
Epoch: 2/10...  Training Step: 380...  Training loss: 2.1378...  2.7010 sec/batch
Epoch: 2/10...  Training Step: 381...  Training loss: 2.1450...  2.6111 sec/batch
Epoch: 2/10...  Training Step: 382...  Training loss: 2.1546...  2.8348 sec/batch
Epoch: 2/10...  Training Step: 383...  Training loss: 2.1415...  2.6309 sec/batch
Epoch: 2/10...  Training Step: 384...  Training loss: 2.1236...  2.6747 sec/batch
Epoch: 2/10...  Training Step: 385...  Training loss: 2.1204...  2.6177 sec/batch
Epoch: 2/10...  Training Step: 386...  Training loss: 2.1071...  2.5765 sec/batch
Epoch: 2/10...  Training Step: 387...  Training loss: 2.1151...  2.6413 sec/batch
Epoch: 2/10...  Training Step: 388...  Training loss: 2.1276...  2.6416 sec/batch
Epoch: 2/10...  Training Step: 389...  Training loss: 2.1283...  2.7111 sec/batch
Epoch: 2/10...  Training Step: 390...  Training loss: 2.0934...  2.7046 sec/batch
Epoch: 2/10...  Training Step: 391...  Training loss: 2.1190...  2.6486 sec/batch
Epoch: 2/10...  Training Step: 392...  Training loss: 2.1054...  2.7646 sec/batch
Epoch: 2/10...  Training Step: 393...  Training loss: 2.0911...  2.6148 sec/batch
Epoch: 2/10...  Training Step: 394...  Training loss: 2.1184...  2.6492 sec/batch
Epoch: 2/10...  Training Step: 395...  Training loss: 2.1139...  2.9793 sec/batch
Epoch: 2/10...  Training Step: 396...  Training loss: 2.1036...  2.6131 sec/batch
Epoch: 3/10...  Training Step: 397...  Training loss: 2.1797...  2.7506 sec/batch
Epoch: 3/10...  Training Step: 398...  Training loss: 2.0840...  2.7081 sec/batch
Epoch: 3/10...  Training Step: 399...  Training loss: 2.0942...  2.5991 sec/batch
Epoch: 3/10...  Training Step: 400...  Training loss: 2.0970...  2.6050 sec/batch
Epoch: 3/10...  Training Step: 401...  Training loss: 2.1041...  2.6838 sec/batch
Epoch: 3/10...  Training Step: 402...  Training loss: 2.0704...  2.8044 sec/batch
Epoch: 3/10...  Training Step: 403...  Training loss: 2.0983...  2.6092 sec/batch
Epoch: 3/10...  Training Step: 404...  Training loss: 2.1006...  2.8721 sec/batch
Epoch: 3/10...  Training Step: 405...  Training loss: 2.1265...  2.5799 sec/batch
Epoch: 3/10...  Training Step: 406...  Training loss: 2.0856...  2.7059 sec/batch
Epoch: 3/10...  Training Step: 407...  Training loss: 2.0761...  2.7321 sec/batch
Epoch: 3/10...  Training Step: 408...  Training loss: 2.0799...  2.7190 sec/batch
Epoch: 3/10...  Training Step: 409...  Training loss: 2.0908...  2.6902 sec/batch
Epoch: 3/10...  Training Step: 410...  Training loss: 2.1203...  2.6664 sec/batch
Epoch: 3/10...  Training Step: 411...  Training loss: 2.0810...  2.9169 sec/batch
Epoch: 3/10...  Training Step: 412...  Training loss: 2.0669...  2.6810 sec/batch
Epoch: 3/10...  Training Step: 413...  Training loss: 2.0843...  2.8970 sec/batch
Epoch: 3/10...  Training Step: 414...  Training loss: 2.1207...  2.8878 sec/batch
Epoch: 3/10...  Training Step: 415...  Training loss: 2.0873...  2.9217 sec/batch
Epoch: 3/10...  Training Step: 416...  Training loss: 2.0857...  2.8468 sec/batch
Epoch: 3/10...  Training Step: 417...  Training loss: 2.0707...  2.9187 sec/batch
Epoch: 3/10...  Training Step: 418...  Training loss: 2.1240...  3.0456 sec/batch
Epoch: 3/10...  Training Step: 419...  Training loss: 2.0911...  3.2994 sec/batch
Epoch: 3/10...  Training Step: 420...  Training loss: 2.0745...  3.1106 sec/batch
Epoch: 3/10...  Training Step: 421...  Training loss: 2.0842...  3.0447 sec/batch
Epoch: 3/10...  Training Step: 422...  Training loss: 2.0601...  3.5732 sec/batch
Epoch: 3/10...  Training Step: 423...  Training loss: 2.0594...  3.6705 sec/batch
Epoch: 3/10...  Training Step: 424...  Training loss: 2.0834...  3.6820 sec/batch
Epoch: 3/10...  Training Step: 425...  Training loss: 2.1118...  3.8220 sec/batch
Epoch: 3/10...  Training Step: 426...  Training loss: 2.0776...  3.7866 sec/batch
Epoch: 3/10...  Training Step: 427...  Training loss: 2.0589...  3.7534 sec/batch
Epoch: 3/10...  Training Step: 428...  Training loss: 2.0478...  3.5060 sec/batch
Epoch: 3/10...  Training Step: 429...  Training loss: 2.0564...  2.9606 sec/batch
Epoch: 3/10...  Training Step: 430...  Training loss: 2.1017...  2.9794 sec/batch
Epoch: 3/10...  Training Step: 431...  Training loss: 2.0474...  2.9282 sec/batch
Epoch: 3/10...  Training Step: 432...  Training loss: 2.0526...  3.0287 sec/batch
Epoch: 3/10...  Training Step: 433...  Training loss: 2.0560...  3.0121 sec/batch
Epoch: 3/10...  Training Step: 434...  Training loss: 2.0234...  2.9727 sec/batch
Epoch: 3/10...  Training Step: 435...  Training loss: 2.0249...  2.9871 sec/batch
Epoch: 3/10...  Training Step: 436...  Training loss: 2.0212...  2.9592 sec/batch
Epoch: 3/10...  Training Step: 437...  Training loss: 2.0322...  2.9945 sec/batch
Epoch: 3/10...  Training Step: 438...  Training loss: 2.0484...  3.1395 sec/batch
Epoch: 3/10...  Training Step: 439...  Training loss: 2.0266...  3.0349 sec/batch
Epoch: 3/10...  Training Step: 440...  Training loss: 2.0254...  2.9938 sec/batch
Epoch: 3/10...  Training Step: 441...  Training loss: 2.0523...  3.0141 sec/batch
Epoch: 3/10...  Training Step: 442...  Training loss: 1.9826...  2.9594 sec/batch
Epoch: 3/10...  Training Step: 443...  Training loss: 2.0535...  2.9407 sec/batch
Epoch: 3/10...  Training Step: 444...  Training loss: 2.0264...  3.1024 sec/batch
Epoch: 3/10...  Training Step: 445...  Training loss: 2.0292...  2.9681 sec/batch
Epoch: 3/10...  Training Step: 446...  Training loss: 2.0667...  3.2088 sec/batch
Epoch: 3/10...  Training Step: 447...  Training loss: 2.0001...  2.9926 sec/batch
Epoch: 3/10...  Training Step: 448...  Training loss: 2.0832...  2.9535 sec/batch
Epoch: 3/10...  Training Step: 449...  Training loss: 2.0283...  3.0858 sec/batch
Epoch: 3/10...  Training Step: 450...  Training loss: 2.0262...  3.0316 sec/batch
Epoch: 3/10...  Training Step: 451...  Training loss: 2.0200...  2.9744 sec/batch
Epoch: 3/10...  Training Step: 452...  Training loss: 2.0444...  2.9387 sec/batch
Epoch: 3/10...  Training Step: 453...  Training loss: 2.0459...  2.9607 sec/batch
Epoch: 3/10...  Training Step: 454...  Training loss: 2.0355...  2.9817 sec/batch
Epoch: 3/10...  Training Step: 455...  Training loss: 2.0140...  2.9818 sec/batch
Epoch: 3/10...  Training Step: 456...  Training loss: 2.0649...  2.9661 sec/batch
Epoch: 3/10...  Training Step: 457...  Training loss: 2.0291...  2.9465 sec/batch
Epoch: 3/10...  Training Step: 458...  Training loss: 2.0662...  2.9558 sec/batch
Epoch: 3/10...  Training Step: 459...  Training loss: 2.0546...  2.9807 sec/batch
Epoch: 3/10...  Training Step: 460...  Training loss: 2.0285...  3.0014 sec/batch
Epoch: 3/10...  Training Step: 461...  Training loss: 2.0146...  2.9621 sec/batch
Epoch: 3/10...  Training Step: 462...  Training loss: 2.0643...  2.9424 sec/batch
Epoch: 3/10...  Training Step: 463...  Training loss: 2.0263...  2.9407 sec/batch
Epoch: 3/10...  Training Step: 464...  Training loss: 2.0016...  3.0718 sec/batch
Epoch: 3/10...  Training Step: 465...  Training loss: 2.0124...  2.9706 sec/batch
Epoch: 3/10...  Training Step: 466...  Training loss: 2.0249...  2.9208 sec/batch
Epoch: 3/10...  Training Step: 467...  Training loss: 2.0553...  2.9367 sec/batch
Epoch: 3/10...  Training Step: 468...  Training loss: 2.0267...  2.9555 sec/batch
Epoch: 3/10...  Training Step: 469...  Training loss: 2.0339...  3.1240 sec/batch
Epoch: 3/10...  Training Step: 470...  Training loss: 2.0042...  2.9482 sec/batch
Epoch: 3/10...  Training Step: 471...  Training loss: 2.0092...  3.0510 sec/batch
Epoch: 3/10...  Training Step: 472...  Training loss: 2.0533...  3.0887 sec/batch
Epoch: 3/10...  Training Step: 473...  Training loss: 2.0137...  3.0865 sec/batch
Epoch: 3/10...  Training Step: 474...  Training loss: 2.0184...  3.1016 sec/batch
Epoch: 3/10...  Training Step: 475...  Training loss: 1.9743...  2.9539 sec/batch
Epoch: 3/10...  Training Step: 476...  Training loss: 1.9985...  2.9742 sec/batch
Epoch: 3/10...  Training Step: 477...  Training loss: 1.9650...  3.0182 sec/batch
Epoch: 3/10...  Training Step: 478...  Training loss: 2.0276...  2.9436 sec/batch
Epoch: 3/10...  Training Step: 479...  Training loss: 1.9699...  2.6317 sec/batch
Epoch: 3/10...  Training Step: 480...  Training loss: 1.9976...  2.6061 sec/batch
Epoch: 3/10...  Training Step: 481...  Training loss: 1.9657...  2.6909 sec/batch
Epoch: 3/10...  Training Step: 482...  Training loss: 1.9808...  2.6249 sec/batch
Epoch: 3/10...  Training Step: 483...  Training loss: 1.9946...  2.5792 sec/batch
Epoch: 3/10...  Training Step: 484...  Training loss: 1.9836...  2.6406 sec/batch
Epoch: 3/10...  Training Step: 485...  Training loss: 1.9620...  2.6744 sec/batch
Epoch: 3/10...  Training Step: 486...  Training loss: 1.9979...  2.6630 sec/batch
Epoch: 3/10...  Training Step: 487...  Training loss: 1.9685...  2.9221 sec/batch
Epoch: 3/10...  Training Step: 488...  Training loss: 1.9800...  2.6868 sec/batch
Epoch: 3/10...  Training Step: 489...  Training loss: 1.9459...  2.7122 sec/batch
Epoch: 3/10...  Training Step: 490...  Training loss: 1.9619...  2.8523 sec/batch
Epoch: 3/10...  Training Step: 491...  Training loss: 1.9601...  2.6537 sec/batch
Epoch: 3/10...  Training Step: 492...  Training loss: 1.9774...  2.6386 sec/batch
Epoch: 3/10...  Training Step: 493...  Training loss: 1.9819...  2.6728 sec/batch
Epoch: 3/10...  Training Step: 494...  Training loss: 1.9621...  2.7399 sec/batch
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-20-c03e74341564> in <module>()
     28                                                  model.final_state,
     29                                                  model.optimizer], 
---> 30                                                  feed_dict=feed)
     31 
     32             end = time.time()

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    963     if final_fetches or final_targets:
    964       results = self._do_run(handle, final_targets, final_fetches,
--> 965                              feed_dict_string, options, run_metadata)
    966     else:
    967       results = []

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1013     if handle is None:
   1014       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015                            target_list, options, run_metadata)
   1016     else:
   1017       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1020   def _do_call(self, fn, *args):
   1021     try:
-> 1022       return fn(*args)
   1023     except errors.OpError as e:
   1024       message = compat.as_text(e.message)

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1002         return tf_session.TF_Run(session, options,
   1003                                  feed_dict, fetch_list, target_list,
-> 1004                                  status, run_metadata)
   1005 
   1006     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

Saved checkpoints

Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
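For reference, checkpoints like the ones listed in the next cell are written with a tf.train.Saver. Below is a minimal sketch of that pattern, not the exact training-loop code: it assumes the CharRNN graph from the training cells above has already been built, and counter and lstm_size are illustrative stand-ins chosen to match the i400_l512 name shown below.

counter, lstm_size = 400, 512            # stand-ins matching the checkpoint names below
saver = tf.train.Saver(max_to_keep=100)  # keep many checkpoints around
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps would run here, with counter tracking the step count ...
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))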


In [21]:
tf.train.get_checkpoint_state('checkpoints')


Out[21]:
model_checkpoint_path: "checkpoints/i400_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i400_l512.ckpt"

Sampling

Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. We then feed that new character back in to predict the one after it, and keep repeating this to generate arbitrarily long text. I also included some functionality to prime the network by passing in a string and building up a hidden state from it first.
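In outline, a single step of that generation loop looks like this (a schematic sketch only; model, sess, new_state, and samples come from the sample function defined further down, and pick_top_n is the helper defined in the next code cell):

x = np.zeros((1, 1))               # a batch of one character, one step
x[0, 0] = c                        # c is the integer id of the most recent character
feed = {model.inputs: x,
        model.keep_prob: 1.,
        model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed)
c = pick_top_n(preds, len(vocab))  # sample the next character id from the top-N predictions
samples.append(int_to_vocab[c])    # decode it and append it to the generated text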

The network gives us a probability for every character in the vocabulary. To reduce noise and make the output a little less random, I'm only going to choose the next character from the top N most likely characters.


In [22]:
def pick_top_n(preds, vocab_size, top_n=5):
    # Flatten the prediction array to a 1D vector of character probabilities
    p = np.squeeze(preds)
    # Zero out everything except the top_n most likely characters
    p[np.argsort(p)[:-top_n]] = 0
    # Renormalize so the remaining probabilities sum to 1
    p = p / np.sum(p)
    # Sample one character index from the reduced distribution
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
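As a quick illustration with a made-up distribution over five "characters": with top_n=2, only the two most likely indices (1 and 3 here) can ever come back.

preds = np.array([[0.05, 0.5, 0.1, 0.3, 0.05]])  # pretend network output, shape (1, 5)
pick_top_n(preds, vocab_size=5, top_n=2)         # returns 1 or 3, never 0, 2, or 4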

In [23]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    # Start the output with the priming characters
    samples = [c for c in prime]
    # Build the network in sampling mode: batch size and sequence length of 1
    model = CharRNN(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Load the trained weights from the checkpoint
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        # Run the prime text through the network to build up the hidden state
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

        # First new character, sampled from the top-N predictions
        c = pick_top_n(preds, vocab_size)
        samples.append(int_to_vocab[c])

        # Feed each new character back in to generate the next one
        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

            c = pick_top_n(preds, vocab_size)
            samples.append(int_to_vocab[c])

    return ''.join(samples)

Here, we pass in the path to a checkpoint and sample from the network.


In [24]:
tf.train.latest_checkpoint('checkpoints')


Out[24]:
'checkpoints/i400_l512.ckpt'

In [25]:
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)


Farsins he wand as the pestented and thas anderes whith hered he wald, anding thand and sord her sast ond sealing at in the with sher and on wer thoures shas wath sall sail tere to the promsente sead
thom ham sat the wast hase the his
aspored,
to the hes to seat oun to his andion and the was of the peasion tore antorstint and hemass the past tit his andite allowhing as that his and of and his anden ther and an werteden the head of thor astion the
hiss and a sor torenss at his werit on to him he wor the saides of the histertess athe seiled to hum the wing of
the sance see time an ontering wour he measide, anl on ham, ho houd sele, that har he said of to
hor shim ther thare and wost her
aded in at and the perasting har that seald, the pase out of her wan of at hes soul of aly hors offing it tha sele and ofte the said, has shins aflo sourdion. Thithit, and aspencing. She was torke whe sale and the call and, and hard sean an hras wish to hor whes anded hes has him. He wall thom hished sains of te comlare and south with this wome sittith ond the corssion, and her whot shin he has he tered as her tho serit of the porsice., haressid, and soud he sald wark of she
hised on te and the wass, shad
whas hid wand tell sisse of
shemising with hor he some thas and so time to hus had to thime thout of the sersens of the posetter of hor anstelited the has him sat the woused, anlinge sispestath hes
and her sheser to ther
elofe himssats, were hed alde athersing wor the howher, we her ald and ta them ther thoughe the peress titer at hit at hish sorstill, as the homes the persite and ant at incorssice, and alloun te here the coulser one hish sithe whot wall his hid wath on tile a pacte hid hor ane his he seas she to sat of
torenther. Ho wasdes an what some that had ant thom, tat on the hits the sissting wast the mishert of of at he master them aly,,
thin she
must on had the hand seaning a tome so cansion to som the ware hiss te ald the sere saited to he soud herer the ham and as his the song

In [26]:
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Fardd
" ho so weta he his thet on he whas sho hise thas arersas, sor wh sas the who te this asd and te antes her
ast ons he and that on tot are he toth sot or te hos ho wat se the the so an he an sorist wot tan his sos thin tho the the to tho she hat ofrade tore ho the hed ans ander or antin ot wat oh

other al tase shes on tet hir an ta win thas teth se the thim as ansan he hom
and he tho sist hos wis she thons.
The wos the hhim thit has asethe hir tatite hit tan sote thers on he hom ant te wethe hor whath tarend an of ho hed an his shit an ant tom whas thar tho her he as tored
os ot he thas
se and hise so ant or that he the worint hh war the se sons tout th ant ot he woshe sos wand an he ased timer and hor thath

hos this sosed the te hhe as and ant on thas wangindes
asdid her ared ant ha shessind ta and withe he war he te ales an se she th sh mome shathe hits har wot wher se want he tised wo seled, tat teres te aled, an the and toun her he thin se had wethes aded ther sat hang on hes
ot

In [27]:
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1021     try:
-> 1022       return fn(*args)
   1023     except errors.OpError as e:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1003                                  feed_dict, fetch_list, target_list,
-> 1004                                  status, run_metadata)
   1005 

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
     65             try:
---> 66                 next(self.gen)
     67             except StopIteration:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467   finally:

NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i600_l512.ckpt
	 [[Node: save/RestoreV2_18 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_18/tensor_names, save/RestoreV2_18/shape_and_slices)]]

During handling of the above exception, another exception occurred:

NotFoundError                             Traceback (most recent call last)
<ipython-input-27-7c4e18bbddc1> in <module>()
      1 checkpoint = 'checkpoints/i600_l512.ckpt'
----> 2 samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
      3 print(samp)

<ipython-input-23-8eb787ae9642> in sample(checkpoint, n_samples, lstm_size, vocab_size, prime)
      4     saver = tf.train.Saver()
      5     with tf.Session() as sess:
----> 6         saver.restore(sess, checkpoint)
      7         new_state = sess.run(model.initial_state)
      8         for c in prime:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py in restore(self, sess, save_path)
   1426       return
   1427     sess.run(self.saver_def.restore_op_name,
-> 1428              {self.saver_def.filename_tensor_name: save_path})
   1429 
   1430   @staticmethod

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    963     if final_fetches or final_targets:
    964       results = self._do_run(handle, final_targets, final_fetches,
--> 965                              feed_dict_string, options, run_metadata)
    966     else:
    967       results = []

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1013     if handle is None:
   1014       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015                            target_list, options, run_metadata)
   1016     else:
   1017       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1033         except KeyError:
   1034           pass
-> 1035       raise type(e)(node_def, op, message)
   1036 
   1037   def _extend_graph(self):

NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i600_l512.ckpt
	 [[Node: save/RestoreV2_18 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_18/tensor_names, save/RestoreV2_18/shape_and_slices)]]

Caused by op 'save/RestoreV2_18', defined at:
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/__main__.py", line 3, in <module>
    app.launch_new_instance()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 474, in start
    ioloop.IOLoop.instance().start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/ioloop.py", line 887, in start
    handler_func(fd_obj, events)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
    handler(stream, idents, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 390, in execute_request
    user_expressions, allow_stdin)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 501, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
    if self.run_code(code, result):
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-27-7c4e18bbddc1>", line 2, in <module>
    samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
  File "<ipython-input-23-8eb787ae9642>", line 4, in sample
    saver = tf.train.Saver()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
    self.build()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1070, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 675, in build
    restore_sequentially, reshape)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 242, in restore_op
    [spec.tensor.dtype])[0])
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
    dtypes=dtypes, name=name)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i600_l512.ckpt
	 [[Node: save/RestoreV2_18 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_18/tensor_names, save/RestoreV2_18/shape_and_slices)]]

In [29]:
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


Farcial the
confiring to the mone of the correm and thinds. She
she saw the
streads of herself hand only astended of the carres to her his some of the princess of which he came him of
all that his white the dreasing of
thisking the princess and with she was she had
bettee a still and he was happined, with the pood on the mush to the peaters and seet it.

"The possess a streatich, the may were notine at his mate a misted
and the
man of the mother at the same of the seem her
felt. He had not here.

"I conest only be alw you thinking that the partion
of their said."

"A much then you make all her
somether. Hower their centing
about
this, and I won't give it in
himself.
I had not come at any see it will that there she chile no one that him.

"The distiction with you all.... It was
a mone of the mind were starding to the simple to a mone. It to be to ser in the place," said Vronsky.
"And a plais in
his face, has alled in the consess on at they to gan in the sint
at as that
he would not be and t