Anna KaRNNa

In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.


In [7]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.


In [8]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.


In [9]:
text[:198]


Out[9]:
"Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverything was in confusion in the Oblonskys' house. The wife had\ndiscovered that the husband was carrying "

And we can see the characters encoded as integers.


In [10]:
encoded[:198]


Out[10]:
array([31, 64, 57, 72, 76, 61, 74,  1, 16,  0,  0,  0, 36, 57, 72, 72, 81,
        1, 62, 57, 69, 65, 68, 65, 61, 75,  1, 57, 74, 61,  1, 57, 68, 68,
        1, 57, 68, 65, 67, 61, 26,  1, 61, 78, 61, 74, 81,  1, 77, 70, 64,
       57, 72, 72, 81,  1, 62, 57, 69, 65, 68, 81,  1, 65, 75,  1, 77, 70,
       64, 57, 72, 72, 81,  1, 65, 70,  1, 65, 76, 75,  1, 71, 79, 70,  0,
       79, 57, 81, 13,  0,  0, 33, 78, 61, 74, 81, 76, 64, 65, 70, 63,  1,
       79, 57, 75,  1, 65, 70,  1, 59, 71, 70, 62, 77, 75, 65, 71, 70,  1,
       65, 70,  1, 76, 64, 61,  1, 43, 58, 68, 71, 70, 75, 67, 81, 75,  7,
        1, 64, 71, 77, 75, 61, 13,  1, 48, 64, 61,  1, 79, 65, 62, 61,  1,
       64, 57, 60,  0, 60, 65, 75, 59, 71, 78, 61, 74, 61, 60,  1, 76, 64,
       57, 76,  1, 76, 64, 61,  1, 64, 77, 75, 58, 57, 70, 60,  1, 79, 57,
       75,  1, 59, 57, 74, 74, 81, 65, 70, 63,  1], dtype=int32)

Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.


In [11]:
len(vocab)


Out[11]:
83

In [12]:
vocab_to_int


Out[12]:
{'\n': 0,
 ' ': 1,
 '!': 2,
 '"': 3,
 '$': 4,
 '%': 5,
 '&': 6,
 "'": 7,
 '(': 8,
 ')': 9,
 '*': 10,
 ',': 11,
 '-': 12,
 '.': 13,
 '/': 14,
 '0': 15,
 '1': 16,
 '2': 17,
 '3': 18,
 '4': 19,
 '5': 20,
 '6': 21,
 '7': 22,
 '8': 23,
 '9': 24,
 ':': 25,
 ';': 26,
 '?': 27,
 '@': 28,
 'A': 29,
 'B': 30,
 'C': 31,
 'D': 32,
 'E': 33,
 'F': 34,
 'G': 35,
 'H': 36,
 'I': 37,
 'J': 38,
 'K': 39,
 'L': 40,
 'M': 41,
 'N': 42,
 'O': 43,
 'P': 44,
 'Q': 45,
 'R': 46,
 'S': 47,
 'T': 48,
 'U': 49,
 'V': 50,
 'W': 51,
 'X': 52,
 'Y': 53,
 'Z': 54,
 '_': 55,
 '`': 56,
 'a': 57,
 'b': 58,
 'c': 59,
 'd': 60,
 'e': 61,
 'f': 62,
 'g': 63,
 'h': 64,
 'i': 65,
 'j': 66,
 'k': 67,
 'l': 68,
 'm': 69,
 'n': 70,
 'o': 71,
 'p': 72,
 'q': 73,
 'r': 74,
 's': 75,
 't': 76,
 'u': 77,
 'v': 78,
 'w': 79,
 'x': 80,
 'y': 81,
 'z': 82}

Making training mini-batches

Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:


We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.

After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.

Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:

y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

where x is the input batch and y is the target batch.

The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.

Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.


In [46]:
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.
       
       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = n_seqs*n_steps
    n_batches = int(len(arr)/(n_seqs*n_steps))
    print(characters_per_batch, n_batches)
    
    # Keep only enough characters to make full batches
    arr = arr[:characters_per_batch*n_batches]
    
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs,-1))
    print(arr)
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:,n:n+n_steps]
        # The targets, shifted by one
        #y = arr[:,n+1:n+1+n_steps]
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.


In [47]:
batches = get_batches(encoded, 10, 50)
x, y = next(batches)


500 3970
[[31 64 57 ..., 11  1 37]
 [ 1 57 69 ...,  1 40 61]
 [78 65 70 ..., 61 78 65]
 ..., 
 [26  1 58 ..., 81  1 65]
 [76  1 65 ..., 75 64 61]
 [ 1 75 57 ..., 65 71 70]]

In [48]:
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])


x
 [[31 64 57 72 76 61 74  1 16  0]
 [ 1 57 69  1 70 71 76  1 63 71]
 [78 65 70 13  0  0  3 53 61 75]
 [70  1 60 77 74 65 70 63  1 64]
 [ 1 65 76  1 65 75 11  1 75 65]
 [ 1 37 76  1 79 57 75  0 71 70]
 [64 61 70  1 59 71 69 61  1 62]
 [26  1 58 77 76  1 70 71 79  1]
 [76  1 65 75 70  7 76 13  1 48]
 [ 1 75 57 65 60  1 76 71  1 64]]

y
 [[64 57 72 76 61 74  1 16  0  0]
 [57 69  1 70 71 76  1 63 71 65]
 [65 70 13  0  0  3 53 61 75 11]
 [ 1 60 77 74 65 70 63  1 64 65]
 [65 76  1 65 75 11  1 75 65 74]
 [37 76  1 79 57 75  0 71 70 68]
 [61 70  1 59 71 69 61  1 62 71]
 [ 1 58 77 76  1 70 71 79  1 75]
 [ 1 65 75 70  7 76 13  1 48 64]
 [75 57 65 60  1 76 71  1 64 61]]

If you implemented get_batches correctly, the above output should look something like

x
 [[55 63 69 22  6 76 45  5 16 35]
 [ 5 69  1  5 12 52  6  5 56 52]
 [48 29 12 61 35 35  8 64 76 78]
 [12  5 24 39 45 29 12 56  5 63]
 [ 5 29  6  5 29 78 28  5 78 29]
 [ 5 13  6  5 36 69 78 35 52 12]
 [63 76 12  5 18 52  1 76  5 58]
 [34  5 73 39  6  5 12 52 36  5]
 [ 6  5 29 78 12 79  6 61  5 59]
 [ 5 78 69 29 24  5  6 52  5 63]]

y
 [[63 69 22  6 76 45  5 16 35 35]
 [69  1  5 12 52  6  5 56 52 29]
 [29 12 61 35 35  8 64 76 78 28]
 [ 5 24 39 45 29 12 56  5 63 29]
 [29  6  5 29 78 28  5 78 29 45]
 [13  6  5 36 69 78 35 52 12 43]
 [76 12  5 18 52  1 76  5 58 52]
 [ 5 73 39  6  5 12 52 36  5 78]
 [ 5 29 78 12 79  6 61  5 59 63]
 [78 69 29 24  5  6 52  5 63 76]]

although the exact numbers will be different. Check to make sure the data is shifted over one step for y.

Building the model

Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.

Inputs

First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.

Exercise: Create the input placeholders in the function below.


In [16]:
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout 
    
        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
        
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(
                tf.int32,
                shape=[batch_size, num_steps])
    targets = tf.placeholder(
                tf.int32,
                shape=[batch_size, num_steps])
    
    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32)
    
    return inputs, targets, keep_prob

LSTM Cell

Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

lstm = tf.contrib.rnn.BasicLSTMCell(num_units)

where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with

tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this

tf.contrib.rnn.MultiRNNCell([cell]*num_layers)

This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like

def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])

Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so

initial_state = cell.zero_state(batch_size, tf.float32)

Below, we implement the build_lstm function to create these LSTM cells and the initial state.


In [17]:
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.
    
        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size

    '''
    ### Build the LSTM Cell
    # Use a basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    
    # Add dropout to the cell outputs
    drop = tf.contrib.rnn.DropoutWrapper(
                        lstm,
                        output_keep_prob = keep_prob)
    
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers)
    initial_state = cell.zero_state(batch_size,tf.float32)
    
    return cell, initial_state

RNN Output

Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.

One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.

Exercise: Implement the output layer in the function below.


In [27]:
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.
    
        Arguments
        ---------
        
        lstm_output: List of output tensors from the LSTM layer
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    
    '''

    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    
    # Concatenate lstm_output over axis 1 (the columns)
    seq_output = tf.concat(lstm_output,axis=1)
    # Reshape seq_output to a 2D tensor with lstm_size columns
    x = tf.reshape(seq_output,(-1,in_size))
    
    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        # Create the weight and bias variables here
        softmax_w = tf.Variable(
                            tf.random_normal([in_size,out_size], stddev=0.05))
        softmax_b = tf.Variable(tf.zeros([out_size]))
                            
                            
    
    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.add(tf.matmul(x,softmax_w),softmax_b)
    
    
    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits)
    
    return out, logits

Training loss

Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.

Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.

Exercise: Implement the loss calculation in the function below.


In [35]:
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
    
        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
        
    '''
    
    # One-hot encode targets and reshape to match logits, one row per sequence per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot,(-1,num_classes))
    
    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits = logits,labels = y_reshaped)
    
    
    return tf.reduce_mean(loss)

Optimizer

Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.


In [20]:
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optmizer for training, using gradient clipping.
    
        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
    
    '''
    
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    return optimizer

Build the network

Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.

Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.


In [37]:
class CharRNN:
    
    def __init__(self, num_classes, batch_size=64, num_steps=50, 
                       lstm_size=128, num_layers=2, learning_rate=0.001, 
                       grad_clip=5, sampling=False):
    
        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling == True:
            batch_size, num_steps = 1, 1
        else:
            batch_size, num_steps = batch_size, num_steps

        tf.reset_default_graph()
        
        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
        
        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        
        # Run each sequence step through the RNN with tf.nn.dynamic_rnn 
        outputs, state = tf.nn.dynamic_rnn(cell,x_one_hot, initial_state = self.initial_state)
        self.final_state = state
        
        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        
        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)

Hyperparameters

Here are the hyperparameters for the network.

  • batch_size - Number of sequences running through the network in one pass.
  • num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
  • lstm_size - The number of units in the hidden layers.
  • num_layers - Number of hidden LSTM layers to use
  • learning_rate - Learning rate for training
  • keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:

  • If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
  • If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)

Approximate number of parameters

The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

  • The number of parameters in your model. This is printed when you start training.
  • The size of your dataset. 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

  • I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
  • I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.

Best models strategy

The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.


In [44]:
batch_size = 200         # Sequences per batch
num_steps = 50          # Number of sequence steps per batch
lstm_size = 128         # Size of hidden layers in LSTMs
num_layers = 1          # Number of LSTM layers
learning_rate = 0.01    # Learning rate
keep_prob = 0.5         # Dropout keep probability

Time for training

This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.

Here I'm saving checkpoints with the format

i{iteration number}_l{# hidden layer units}.ckpt

Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.


In [50]:
epochs = 10
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers, 
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}

           # print(x.shape)
           # print(y.shape)
            batch_loss, new_state, _ = sess.run([model.loss, 
                                                 model.final_state, 
                                                 model.optimizer], 
                                                 feed_dict=feed)
            
            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))
        
            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))


10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 1/10...  Training Step: 1...  Training loss: 4.4239...  0.1983 sec/batch
Epoch: 1/10...  Training Step: 2...  Training loss: 4.3640...  0.1983 sec/batch
Epoch: 1/10...  Training Step: 3...  Training loss: 3.7821...  0.1652 sec/batch
Epoch: 1/10...  Training Step: 4...  Training loss: 3.2696...  0.1766 sec/batch
Epoch: 1/10...  Training Step: 5...  Training loss: 3.1713...  0.1716 sec/batch
Epoch: 1/10...  Training Step: 6...  Training loss: 3.1876...  0.1790 sec/batch
Epoch: 1/10...  Training Step: 7...  Training loss: 3.1615...  0.1759 sec/batch
Epoch: 1/10...  Training Step: 8...  Training loss: 3.1534...  0.2025 sec/batch
Epoch: 1/10...  Training Step: 9...  Training loss: 3.1364...  0.1604 sec/batch
Epoch: 1/10...  Training Step: 10...  Training loss: 3.1499...  0.1757 sec/batch
Epoch: 1/10...  Training Step: 11...  Training loss: 3.1503...  0.1653 sec/batch
Epoch: 1/10...  Training Step: 12...  Training loss: 3.1215...  0.2176 sec/batch
Epoch: 1/10...  Training Step: 13...  Training loss: 3.1318...  0.2048 sec/batch
Epoch: 1/10...  Training Step: 14...  Training loss: 3.1141...  0.3762 sec/batch
Epoch: 1/10...  Training Step: 15...  Training loss: 3.0957...  0.3351 sec/batch
Epoch: 1/10...  Training Step: 16...  Training loss: 3.1218...  0.3509 sec/batch
Epoch: 1/10...  Training Step: 17...  Training loss: 3.1116...  0.3381 sec/batch
Epoch: 1/10...  Training Step: 18...  Training loss: 3.0725...  0.3316 sec/batch
Epoch: 1/10...  Training Step: 19...  Training loss: 3.0989...  0.3794 sec/batch
Epoch: 1/10...  Training Step: 20...  Training loss: 3.0950...  0.3658 sec/batch
Epoch: 1/10...  Training Step: 21...  Training loss: 3.0689...  0.3679 sec/batch
Epoch: 1/10...  Training Step: 22...  Training loss: 3.0624...  0.3398 sec/batch
Epoch: 1/10...  Training Step: 23...  Training loss: 3.0824...  0.4768 sec/batch
Epoch: 1/10...  Training Step: 24...  Training loss: 3.0731...  0.4168 sec/batch
Epoch: 1/10...  Training Step: 25...  Training loss: 3.0776...  0.3239 sec/batch
Epoch: 1/10...  Training Step: 26...  Training loss: 3.0618...  0.3060 sec/batch
Epoch: 1/10...  Training Step: 27...  Training loss: 3.0725...  0.3239 sec/batch
Epoch: 1/10...  Training Step: 28...  Training loss: 3.0453...  0.3574 sec/batch
Epoch: 1/10...  Training Step: 29...  Training loss: 3.0388...  0.3615 sec/batch
Epoch: 1/10...  Training Step: 30...  Training loss: 3.0325...  0.3293 sec/batch
Epoch: 1/10...  Training Step: 31...  Training loss: 3.0178...  0.3738 sec/batch
Epoch: 1/10...  Training Step: 32...  Training loss: 3.0139...  0.4527 sec/batch
Epoch: 1/10...  Training Step: 33...  Training loss: 2.9950...  0.4078 sec/batch
Epoch: 1/10...  Training Step: 34...  Training loss: 2.9863...  0.3399 sec/batch
Epoch: 1/10...  Training Step: 35...  Training loss: 2.9895...  0.4395 sec/batch
Epoch: 1/10...  Training Step: 36...  Training loss: 2.9779...  0.3832 sec/batch
Epoch: 1/10...  Training Step: 37...  Training loss: 2.9695...  0.4035 sec/batch
Epoch: 1/10...  Training Step: 38...  Training loss: 2.9554...  0.3512 sec/batch
Epoch: 1/10...  Training Step: 39...  Training loss: 2.9193...  0.3807 sec/batch
Epoch: 1/10...  Training Step: 40...  Training loss: 2.8980...  0.3421 sec/batch
Epoch: 1/10...  Training Step: 41...  Training loss: 2.8833...  0.3111 sec/batch
Epoch: 1/10...  Training Step: 42...  Training loss: 2.8686...  0.2874 sec/batch
Epoch: 1/10...  Training Step: 43...  Training loss: 2.8647...  0.3356 sec/batch
Epoch: 1/10...  Training Step: 44...  Training loss: 2.8468...  0.4401 sec/batch
Epoch: 1/10...  Training Step: 45...  Training loss: 2.7980...  0.3781 sec/batch
Epoch: 1/10...  Training Step: 46...  Training loss: 2.8001...  0.3419 sec/batch
Epoch: 1/10...  Training Step: 47...  Training loss: 2.7798...  0.3105 sec/batch
Epoch: 1/10...  Training Step: 48...  Training loss: 2.7363...  0.4339 sec/batch
Epoch: 1/10...  Training Step: 49...  Training loss: 2.7113...  0.4112 sec/batch
Epoch: 1/10...  Training Step: 50...  Training loss: 2.7165...  0.4727 sec/batch
Epoch: 1/10...  Training Step: 51...  Training loss: 2.6899...  0.3766 sec/batch
Epoch: 1/10...  Training Step: 52...  Training loss: 2.6828...  0.3682 sec/batch
Epoch: 1/10...  Training Step: 53...  Training loss: 2.6541...  0.3082 sec/batch
Epoch: 1/10...  Training Step: 54...  Training loss: 2.6087...  0.3513 sec/batch
Epoch: 1/10...  Training Step: 55...  Training loss: 2.6299...  0.3234 sec/batch
Epoch: 1/10...  Training Step: 56...  Training loss: 2.6005...  0.4135 sec/batch
Epoch: 1/10...  Training Step: 57...  Training loss: 2.5994...  0.3429 sec/batch
Epoch: 1/10...  Training Step: 58...  Training loss: 2.5809...  0.3901 sec/batch
Epoch: 1/10...  Training Step: 59...  Training loss: 2.5523...  0.3509 sec/batch
Epoch: 1/10...  Training Step: 60...  Training loss: 2.5716...  0.3766 sec/batch
Epoch: 1/10...  Training Step: 61...  Training loss: 2.6159...  0.3873 sec/batch
Epoch: 1/10...  Training Step: 62...  Training loss: 2.5491...  0.3036 sec/batch
Epoch: 1/10...  Training Step: 63...  Training loss: 2.5652...  0.3573 sec/batch
Epoch: 1/10...  Training Step: 64...  Training loss: 2.5241...  0.3080 sec/batch
Epoch: 1/10...  Training Step: 65...  Training loss: 2.4927...  0.4156 sec/batch
Epoch: 1/10...  Training Step: 66...  Training loss: 2.5373...  0.3292 sec/batch
Epoch: 1/10...  Training Step: 67...  Training loss: 2.5156...  0.3134 sec/batch
Epoch: 1/10...  Training Step: 68...  Training loss: 2.5016...  0.3680 sec/batch
Epoch: 1/10...  Training Step: 69...  Training loss: 2.5061...  0.3720 sec/batch
Epoch: 1/10...  Training Step: 70...  Training loss: 2.4590...  0.3610 sec/batch
Epoch: 1/10...  Training Step: 71...  Training loss: 2.4722...  0.2823 sec/batch
Epoch: 1/10...  Training Step: 72...  Training loss: 2.4681...  0.3729 sec/batch
Epoch: 1/10...  Training Step: 73...  Training loss: 2.4511...  0.3040 sec/batch
Epoch: 1/10...  Training Step: 74...  Training loss: 2.4453...  0.3142 sec/batch
Epoch: 1/10...  Training Step: 75...  Training loss: 2.4182...  0.3166 sec/batch
Epoch: 1/10...  Training Step: 76...  Training loss: 2.4292...  0.3835 sec/batch
Epoch: 1/10...  Training Step: 77...  Training loss: 2.4087...  0.3624 sec/batch
Epoch: 1/10...  Training Step: 78...  Training loss: 2.4352...  0.3879 sec/batch
Epoch: 1/10...  Training Step: 79...  Training loss: 2.4071...  0.3622 sec/batch
Epoch: 1/10...  Training Step: 80...  Training loss: 2.4282...  0.2658 sec/batch
Epoch: 1/10...  Training Step: 81...  Training loss: 2.3968...  0.3374 sec/batch
Epoch: 1/10...  Training Step: 82...  Training loss: 2.3743...  0.3345 sec/batch
Epoch: 1/10...  Training Step: 83...  Training loss: 2.3935...  0.2841 sec/batch
Epoch: 1/10...  Training Step: 84...  Training loss: 2.3924...  0.4190 sec/batch
Epoch: 1/10...  Training Step: 85...  Training loss: 2.3659...  0.4004 sec/batch
Epoch: 1/10...  Training Step: 86...  Training loss: 2.3535...  0.3007 sec/batch
Epoch: 1/10...  Training Step: 87...  Training loss: 2.3550...  0.3785 sec/batch
Epoch: 1/10...  Training Step: 88...  Training loss: 2.3921...  0.3194 sec/batch
Epoch: 1/10...  Training Step: 89...  Training loss: 2.3867...  0.3898 sec/batch
Epoch: 1/10...  Training Step: 90...  Training loss: 2.3912...  0.3515 sec/batch
Epoch: 1/10...  Training Step: 91...  Training loss: 2.3252...  0.4255 sec/batch
Epoch: 1/10...  Training Step: 92...  Training loss: 2.3719...  0.3504 sec/batch
Epoch: 1/10...  Training Step: 93...  Training loss: 2.3841...  0.2882 sec/batch
Epoch: 1/10...  Training Step: 94...  Training loss: 2.3720...  0.3671 sec/batch
Epoch: 1/10...  Training Step: 95...  Training loss: 2.3534...  0.3006 sec/batch
Epoch: 1/10...  Training Step: 96...  Training loss: 2.3456...  0.3630 sec/batch
Epoch: 1/10...  Training Step: 97...  Training loss: 2.3404...  0.3859 sec/batch
Epoch: 1/10...  Training Step: 98...  Training loss: 2.3568...  0.3578 sec/batch
Epoch: 1/10...  Training Step: 99...  Training loss: 2.3348...  0.3678 sec/batch
Epoch: 1/10...  Training Step: 100...  Training loss: 2.3348...  0.2965 sec/batch
Epoch: 1/10...  Training Step: 101...  Training loss: 2.2961...  0.3492 sec/batch
Epoch: 1/10...  Training Step: 102...  Training loss: 2.2960...  0.3688 sec/batch
Epoch: 1/10...  Training Step: 103...  Training loss: 2.3025...  0.3820 sec/batch
Epoch: 1/10...  Training Step: 104...  Training loss: 2.3498...  0.4153 sec/batch
Epoch: 1/10...  Training Step: 105...  Training loss: 2.3299...  0.3267 sec/batch
Epoch: 1/10...  Training Step: 106...  Training loss: 2.2993...  0.2865 sec/batch
Epoch: 1/10...  Training Step: 107...  Training loss: 2.2788...  0.3817 sec/batch
Epoch: 1/10...  Training Step: 108...  Training loss: 2.2929...  0.3532 sec/batch
Epoch: 1/10...  Training Step: 109...  Training loss: 2.2930...  0.3310 sec/batch
Epoch: 1/10...  Training Step: 110...  Training loss: 2.2723...  0.4130 sec/batch
Epoch: 1/10...  Training Step: 111...  Training loss: 2.2893...  0.3114 sec/batch
Epoch: 1/10...  Training Step: 112...  Training loss: 2.2553...  0.3330 sec/batch
Epoch: 1/10...  Training Step: 113...  Training loss: 2.2545...  0.4142 sec/batch
Epoch: 1/10...  Training Step: 114...  Training loss: 2.2782...  0.3667 sec/batch
Epoch: 1/10...  Training Step: 115...  Training loss: 2.2530...  0.3292 sec/batch
Epoch: 1/10...  Training Step: 116...  Training loss: 2.2332...  0.3628 sec/batch
Epoch: 1/10...  Training Step: 117...  Training loss: 2.2717...  0.3480 sec/batch
Epoch: 1/10...  Training Step: 118...  Training loss: 2.2060...  0.2975 sec/batch
Epoch: 1/10...  Training Step: 119...  Training loss: 2.2072...  0.2977 sec/batch
Epoch: 1/10...  Training Step: 120...  Training loss: 2.2623...  0.3493 sec/batch
Epoch: 1/10...  Training Step: 121...  Training loss: 2.2716...  0.4644 sec/batch
Epoch: 1/10...  Training Step: 122...  Training loss: 2.2283...  0.3938 sec/batch
Epoch: 1/10...  Training Step: 123...  Training loss: 2.2407...  0.3511 sec/batch
Epoch: 1/10...  Training Step: 124...  Training loss: 2.2294...  0.4526 sec/batch
Epoch: 1/10...  Training Step: 125...  Training loss: 2.2186...  0.3521 sec/batch
Epoch: 1/10...  Training Step: 126...  Training loss: 2.2151...  0.3479 sec/batch
Epoch: 1/10...  Training Step: 127...  Training loss: 2.2141...  0.4357 sec/batch
Epoch: 1/10...  Training Step: 128...  Training loss: 2.1904...  0.4182 sec/batch
Epoch: 1/10...  Training Step: 129...  Training loss: 2.1909...  0.3108 sec/batch
Epoch: 1/10...  Training Step: 130...  Training loss: 2.2037...  0.3626 sec/batch
Epoch: 1/10...  Training Step: 131...  Training loss: 2.2157...  0.2891 sec/batch
Epoch: 1/10...  Training Step: 132...  Training loss: 2.2241...  0.3230 sec/batch
Epoch: 1/10...  Training Step: 133...  Training loss: 2.1852...  0.3586 sec/batch
Epoch: 1/10...  Training Step: 134...  Training loss: 2.2043...  0.3338 sec/batch
Epoch: 1/10...  Training Step: 135...  Training loss: 2.1621...  0.3150 sec/batch
Epoch: 1/10...  Training Step: 136...  Training loss: 2.2035...  0.3260 sec/batch
Epoch: 1/10...  Training Step: 137...  Training loss: 2.1657...  0.3727 sec/batch
Epoch: 1/10...  Training Step: 138...  Training loss: 2.1895...  0.3793 sec/batch
Epoch: 1/10...  Training Step: 139...  Training loss: 2.2050...  0.3986 sec/batch
Epoch: 1/10...  Training Step: 140...  Training loss: 2.1497...  0.2918 sec/batch
Epoch: 1/10...  Training Step: 141...  Training loss: 2.1512...  0.3498 sec/batch
Epoch: 1/10...  Training Step: 142...  Training loss: 2.1688...  0.3389 sec/batch
Epoch: 1/10...  Training Step: 143...  Training loss: 2.1626...  0.3190 sec/batch
Epoch: 1/10...  Training Step: 144...  Training loss: 2.1833...  0.2713 sec/batch
Epoch: 1/10...  Training Step: 145...  Training loss: 2.1832...  0.2547 sec/batch
Epoch: 1/10...  Training Step: 146...  Training loss: 2.1766...  0.3812 sec/batch
Epoch: 1/10...  Training Step: 147...  Training loss: 2.1749...  0.2884 sec/batch
Epoch: 1/10...  Training Step: 148...  Training loss: 2.1573...  0.3006 sec/batch
Epoch: 1/10...  Training Step: 149...  Training loss: 2.1656...  0.3300 sec/batch
Epoch: 1/10...  Training Step: 150...  Training loss: 2.1600...  0.3162 sec/batch
Epoch: 1/10...  Training Step: 151...  Training loss: 2.1838...  0.3228 sec/batch
Epoch: 1/10...  Training Step: 152...  Training loss: 2.1730...  0.3112 sec/batch
Epoch: 1/10...  Training Step: 153...  Training loss: 2.1545...  0.3283 sec/batch
Epoch: 1/10...  Training Step: 154...  Training loss: 2.1161...  0.3522 sec/batch
Epoch: 1/10...  Training Step: 155...  Training loss: 2.1315...  0.3554 sec/batch
Epoch: 1/10...  Training Step: 156...  Training loss: 2.1200...  0.2972 sec/batch
Epoch: 1/10...  Training Step: 157...  Training loss: 2.1134...  0.3026 sec/batch
Epoch: 1/10...  Training Step: 158...  Training loss: 2.0889...  0.2889 sec/batch
Epoch: 1/10...  Training Step: 159...  Training loss: 2.1099...  0.3278 sec/batch
Epoch: 1/10...  Training Step: 160...  Training loss: 2.0731...  0.3382 sec/batch
Epoch: 1/10...  Training Step: 161...  Training loss: 2.0859...  0.3927 sec/batch
Epoch: 1/10...  Training Step: 162...  Training loss: 2.0549...  0.3231 sec/batch
Epoch: 1/10...  Training Step: 163...  Training loss: 2.1001...  0.4613 sec/batch
Epoch: 1/10...  Training Step: 164...  Training loss: 2.0882...  0.3488 sec/batch
Epoch: 1/10...  Training Step: 165...  Training loss: 2.0992...  0.3760 sec/batch
Epoch: 1/10...  Training Step: 166...  Training loss: 2.0658...  0.3219 sec/batch
Epoch: 1/10...  Training Step: 167...  Training loss: 2.0840...  0.3126 sec/batch
Epoch: 1/10...  Training Step: 168...  Training loss: 2.0666...  0.3055 sec/batch
Epoch: 1/10...  Training Step: 169...  Training loss: 2.0754...  0.3883 sec/batch
Epoch: 1/10...  Training Step: 170...  Training loss: 2.0568...  0.2950 sec/batch
Epoch: 1/10...  Training Step: 171...  Training loss: 2.0962...  0.3670 sec/batch
Epoch: 1/10...  Training Step: 172...  Training loss: 2.0755...  0.3820 sec/batch
Epoch: 1/10...  Training Step: 173...  Training loss: 2.0795...  0.3706 sec/batch
Epoch: 1/10...  Training Step: 174...  Training loss: 2.0670...  0.4099 sec/batch
Epoch: 1/10...  Training Step: 175...  Training loss: 2.0375...  0.3038 sec/batch
Epoch: 1/10...  Training Step: 176...  Training loss: 2.0718...  0.3341 sec/batch
Epoch: 1/10...  Training Step: 177...  Training loss: 2.0248...  0.3674 sec/batch
Epoch: 1/10...  Training Step: 178...  Training loss: 2.0446...  0.3530 sec/batch
Epoch: 1/10...  Training Step: 179...  Training loss: 2.0522...  0.3852 sec/batch
Epoch: 1/10...  Training Step: 180...  Training loss: 2.0600...  0.3024 sec/batch
Epoch: 1/10...  Training Step: 181...  Training loss: 2.0633...  0.3047 sec/batch
Epoch: 1/10...  Training Step: 182...  Training loss: 2.0302...  0.2855 sec/batch
Epoch: 1/10...  Training Step: 183...  Training loss: 2.0561...  0.3114 sec/batch
Epoch: 1/10...  Training Step: 184...  Training loss: 2.0660...  0.2842 sec/batch
Epoch: 1/10...  Training Step: 185...  Training loss: 2.0215...  0.3306 sec/batch
Epoch: 1/10...  Training Step: 186...  Training loss: 2.0050...  0.4197 sec/batch
Epoch: 1/10...  Training Step: 187...  Training loss: 2.0196...  0.2952 sec/batch
Epoch: 1/10...  Training Step: 188...  Training loss: 2.0281...  0.3950 sec/batch
Epoch: 1/10...  Training Step: 189...  Training loss: 2.0104...  0.3083 sec/batch
Epoch: 1/10...  Training Step: 190...  Training loss: 2.0371...  0.2926 sec/batch
Epoch: 1/10...  Training Step: 191...  Training loss: 2.0133...  0.3826 sec/batch
Epoch: 1/10...  Training Step: 192...  Training loss: 2.0026...  0.3444 sec/batch
Epoch: 1/10...  Training Step: 193...  Training loss: 2.0033...  0.3809 sec/batch
Epoch: 1/10...  Training Step: 194...  Training loss: 2.0069...  0.3424 sec/batch
Epoch: 1/10...  Training Step: 195...  Training loss: 1.9900...  0.3158 sec/batch
Epoch: 1/10...  Training Step: 196...  Training loss: 1.9999...  0.2759 sec/batch
Epoch: 1/10...  Training Step: 197...  Training loss: 1.9976...  0.2933 sec/batch
Epoch: 1/10...  Training Step: 198...  Training loss: 1.9886...  0.3836 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 2/10...  Training Step: 199...  Training loss: 2.0458...  0.3690 sec/batch
Epoch: 2/10...  Training Step: 200...  Training loss: 2.0010...  0.3596 sec/batch
Epoch: 2/10...  Training Step: 201...  Training loss: 1.9946...  0.4154 sec/batch
Epoch: 2/10...  Training Step: 202...  Training loss: 1.9948...  0.3426 sec/batch
Epoch: 2/10...  Training Step: 203...  Training loss: 1.9808...  0.3357 sec/batch
Epoch: 2/10...  Training Step: 204...  Training loss: 1.9944...  0.3819 sec/batch
Epoch: 2/10...  Training Step: 205...  Training loss: 1.9640...  0.3003 sec/batch
Epoch: 2/10...  Training Step: 206...  Training loss: 1.9876...  0.3209 sec/batch
Epoch: 2/10...  Training Step: 207...  Training loss: 1.9719...  0.3229 sec/batch
Epoch: 2/10...  Training Step: 208...  Training loss: 1.9914...  0.3139 sec/batch
Epoch: 2/10...  Training Step: 209...  Training loss: 1.9679...  0.3969 sec/batch
Epoch: 2/10...  Training Step: 210...  Training loss: 1.9693...  0.3972 sec/batch
Epoch: 2/10...  Training Step: 211...  Training loss: 1.9738...  0.3764 sec/batch
Epoch: 2/10...  Training Step: 212...  Training loss: 1.9826...  0.3530 sec/batch
Epoch: 2/10...  Training Step: 213...  Training loss: 1.9520...  0.3097 sec/batch
Epoch: 2/10...  Training Step: 214...  Training loss: 1.9981...  0.3808 sec/batch
Epoch: 2/10...  Training Step: 215...  Training loss: 1.9708...  0.3920 sec/batch
Epoch: 2/10...  Training Step: 216...  Training loss: 1.9974...  0.3348 sec/batch
Epoch: 2/10...  Training Step: 217...  Training loss: 1.9753...  0.3202 sec/batch
Epoch: 2/10...  Training Step: 218...  Training loss: 1.9666...  0.3693 sec/batch
Epoch: 2/10...  Training Step: 219...  Training loss: 1.9706...  0.2876 sec/batch
Epoch: 2/10...  Training Step: 220...  Training loss: 1.9432...  0.2956 sec/batch
Epoch: 2/10...  Training Step: 221...  Training loss: 1.9299...  0.3758 sec/batch
Epoch: 2/10...  Training Step: 222...  Training loss: 1.9585...  0.3138 sec/batch
Epoch: 2/10...  Training Step: 223...  Training loss: 1.9430...  0.3426 sec/batch
Epoch: 2/10...  Training Step: 224...  Training loss: 1.9638...  0.4020 sec/batch
Epoch: 2/10...  Training Step: 225...  Training loss: 1.9806...  0.3419 sec/batch
Epoch: 2/10...  Training Step: 226...  Training loss: 1.9565...  0.3191 sec/batch
Epoch: 2/10...  Training Step: 227...  Training loss: 1.9291...  0.3484 sec/batch
Epoch: 2/10...  Training Step: 228...  Training loss: 1.9497...  0.4270 sec/batch
Epoch: 2/10...  Training Step: 229...  Training loss: 1.9259...  0.3647 sec/batch
Epoch: 2/10...  Training Step: 230...  Training loss: 1.9455...  0.3541 sec/batch
Epoch: 2/10...  Training Step: 231...  Training loss: 1.8909...  0.3012 sec/batch
Epoch: 2/10...  Training Step: 232...  Training loss: 1.9440...  0.3020 sec/batch
Epoch: 2/10...  Training Step: 233...  Training loss: 1.9480...  0.3549 sec/batch
Epoch: 2/10...  Training Step: 234...  Training loss: 1.9573...  0.3147 sec/batch
Epoch: 2/10...  Training Step: 235...  Training loss: 1.9475...  0.3721 sec/batch
Epoch: 2/10...  Training Step: 236...  Training loss: 1.9236...  0.4173 sec/batch
Epoch: 2/10...  Training Step: 237...  Training loss: 1.9235...  0.2955 sec/batch
Epoch: 2/10...  Training Step: 238...  Training loss: 1.9440...  0.4008 sec/batch
Epoch: 2/10...  Training Step: 239...  Training loss: 1.9209...  0.4147 sec/batch
Epoch: 2/10...  Training Step: 240...  Training loss: 1.9308...  0.2979 sec/batch
Epoch: 2/10...  Training Step: 241...  Training loss: 1.9769...  0.3092 sec/batch
Epoch: 2/10...  Training Step: 242...  Training loss: 1.9321...  0.3296 sec/batch
Epoch: 2/10...  Training Step: 243...  Training loss: 1.8988...  0.3744 sec/batch
Epoch: 2/10...  Training Step: 244...  Training loss: 1.9128...  0.3120 sec/batch
Epoch: 2/10...  Training Step: 245...  Training loss: 1.9184...  0.3463 sec/batch
Epoch: 2/10...  Training Step: 246...  Training loss: 1.8874...  0.2958 sec/batch
Epoch: 2/10...  Training Step: 247...  Training loss: 1.9319...  0.3363 sec/batch
Epoch: 2/10...  Training Step: 248...  Training loss: 1.9166...  0.3827 sec/batch
Epoch: 2/10...  Training Step: 249...  Training loss: 1.8969...  0.3217 sec/batch
Epoch: 2/10...  Training Step: 250...  Training loss: 1.9048...  0.3286 sec/batch
Epoch: 2/10...  Training Step: 251...  Training loss: 1.8940...  0.3136 sec/batch
Epoch: 2/10...  Training Step: 252...  Training loss: 1.8893...  0.4383 sec/batch
Epoch: 2/10...  Training Step: 253...  Training loss: 1.9464...  0.3517 sec/batch
Epoch: 2/10...  Training Step: 254...  Training loss: 1.9043...  0.3376 sec/batch
Epoch: 2/10...  Training Step: 255...  Training loss: 1.9334...  0.3513 sec/batch
Epoch: 2/10...  Training Step: 256...  Training loss: 1.9113...  0.4030 sec/batch
Epoch: 2/10...  Training Step: 257...  Training loss: 1.9067...  0.3257 sec/batch
Epoch: 2/10...  Training Step: 258...  Training loss: 1.9125...  0.3115 sec/batch
Epoch: 2/10...  Training Step: 259...  Training loss: 1.9083...  0.3950 sec/batch
Epoch: 2/10...  Training Step: 260...  Training loss: 1.9129...  0.4291 sec/batch
Epoch: 2/10...  Training Step: 261...  Training loss: 1.8751...  0.3444 sec/batch
Epoch: 2/10...  Training Step: 262...  Training loss: 1.8827...  0.4102 sec/batch
Epoch: 2/10...  Training Step: 263...  Training loss: 1.8687...  0.3355 sec/batch
Epoch: 2/10...  Training Step: 264...  Training loss: 1.8996...  0.3989 sec/batch
Epoch: 2/10...  Training Step: 265...  Training loss: 1.9280...  0.3537 sec/batch
Epoch: 2/10...  Training Step: 266...  Training loss: 1.8858...  0.3417 sec/batch
Epoch: 2/10...  Training Step: 267...  Training loss: 1.8840...  0.3411 sec/batch
Epoch: 2/10...  Training Step: 268...  Training loss: 1.8811...  0.3277 sec/batch
Epoch: 2/10...  Training Step: 269...  Training loss: 1.8889...  0.3120 sec/batch
Epoch: 2/10...  Training Step: 270...  Training loss: 1.8926...  0.2786 sec/batch
Epoch: 2/10...  Training Step: 271...  Training loss: 1.8859...  0.2729 sec/batch
Epoch: 2/10...  Training Step: 272...  Training loss: 1.8921...  0.3077 sec/batch
Epoch: 2/10...  Training Step: 273...  Training loss: 1.8498...  0.4030 sec/batch
Epoch: 2/10...  Training Step: 274...  Training loss: 1.8903...  0.3809 sec/batch
Epoch: 2/10...  Training Step: 275...  Training loss: 1.8724...  0.4360 sec/batch
Epoch: 2/10...  Training Step: 276...  Training loss: 1.8481...  0.3992 sec/batch
Epoch: 2/10...  Training Step: 277...  Training loss: 1.8678...  0.3629 sec/batch
Epoch: 2/10...  Training Step: 278...  Training loss: 1.8750...  0.3201 sec/batch
Epoch: 2/10...  Training Step: 279...  Training loss: 1.8535...  0.3086 sec/batch
Epoch: 2/10...  Training Step: 280...  Training loss: 1.8442...  0.4438 sec/batch
Epoch: 2/10...  Training Step: 281...  Training loss: 1.8811...  0.2720 sec/batch
Epoch: 2/10...  Training Step: 282...  Training loss: 1.8940...  0.3716 sec/batch
Epoch: 2/10...  Training Step: 283...  Training loss: 1.8544...  0.3490 sec/batch
Epoch: 2/10...  Training Step: 284...  Training loss: 1.8447...  0.4272 sec/batch
Epoch: 2/10...  Training Step: 285...  Training loss: 1.8394...  0.3776 sec/batch
Epoch: 2/10...  Training Step: 286...  Training loss: 1.8639...  0.3073 sec/batch
Epoch: 2/10...  Training Step: 287...  Training loss: 1.8646...  0.3409 sec/batch
Epoch: 2/10...  Training Step: 288...  Training loss: 1.8646...  0.3079 sec/batch
Epoch: 2/10...  Training Step: 289...  Training loss: 1.8076...  0.2957 sec/batch
Epoch: 2/10...  Training Step: 290...  Training loss: 1.8221...  0.3035 sec/batch
Epoch: 2/10...  Training Step: 291...  Training loss: 1.8426...  0.3551 sec/batch
Epoch: 2/10...  Training Step: 292...  Training loss: 1.8692...  0.3107 sec/batch
Epoch: 2/10...  Training Step: 293...  Training loss: 1.8454...  0.3951 sec/batch
Epoch: 2/10...  Training Step: 294...  Training loss: 1.8447...  0.2837 sec/batch
Epoch: 2/10...  Training Step: 295...  Training loss: 1.8421...  0.2978 sec/batch
Epoch: 2/10...  Training Step: 296...  Training loss: 1.8718...  0.3336 sec/batch
Epoch: 2/10...  Training Step: 297...  Training loss: 1.8631...  0.4043 sec/batch
Epoch: 2/10...  Training Step: 298...  Training loss: 1.8686...  0.3666 sec/batch
Epoch: 2/10...  Training Step: 299...  Training loss: 1.8079...  0.2946 sec/batch
Epoch: 2/10...  Training Step: 300...  Training loss: 1.8295...  0.3465 sec/batch
Epoch: 2/10...  Training Step: 301...  Training loss: 1.8440...  0.3366 sec/batch
Epoch: 2/10...  Training Step: 302...  Training loss: 1.8699...  0.3819 sec/batch
Epoch: 2/10...  Training Step: 303...  Training loss: 1.8763...  0.3269 sec/batch
Epoch: 2/10...  Training Step: 304...  Training loss: 1.8391...  0.3308 sec/batch
Epoch: 2/10...  Training Step: 305...  Training loss: 1.8461...  0.3429 sec/batch
Epoch: 2/10...  Training Step: 306...  Training loss: 1.8409...  0.3212 sec/batch
Epoch: 2/10...  Training Step: 307...  Training loss: 1.8581...  0.2914 sec/batch
Epoch: 2/10...  Training Step: 308...  Training loss: 1.8166...  0.2987 sec/batch
Epoch: 2/10...  Training Step: 309...  Training loss: 1.8461...  0.3098 sec/batch
Epoch: 2/10...  Training Step: 310...  Training loss: 1.8182...  0.3026 sec/batch
Epoch: 2/10...  Training Step: 311...  Training loss: 1.8305...  0.3739 sec/batch
Epoch: 2/10...  Training Step: 312...  Training loss: 1.8528...  0.4549 sec/batch
Epoch: 2/10...  Training Step: 313...  Training loss: 1.8229...  0.3886 sec/batch
Epoch: 2/10...  Training Step: 314...  Training loss: 1.8283...  0.3560 sec/batch
Epoch: 2/10...  Training Step: 315...  Training loss: 1.8291...  0.3250 sec/batch
Epoch: 2/10...  Training Step: 316...  Training loss: 1.7975...  0.3493 sec/batch
Epoch: 2/10...  Training Step: 317...  Training loss: 1.8125...  0.3927 sec/batch
Epoch: 2/10...  Training Step: 318...  Training loss: 1.8594...  0.3221 sec/batch
Epoch: 2/10...  Training Step: 319...  Training loss: 1.8542...  0.3048 sec/batch
Epoch: 2/10...  Training Step: 320...  Training loss: 1.8261...  0.2565 sec/batch
Epoch: 2/10...  Training Step: 321...  Training loss: 1.8475...  0.3094 sec/batch
Epoch: 2/10...  Training Step: 322...  Training loss: 1.8526...  0.2969 sec/batch
Epoch: 2/10...  Training Step: 323...  Training loss: 1.8240...  0.3249 sec/batch
Epoch: 2/10...  Training Step: 324...  Training loss: 1.8328...  0.2828 sec/batch
Epoch: 2/10...  Training Step: 325...  Training loss: 1.8355...  0.3333 sec/batch
Epoch: 2/10...  Training Step: 326...  Training loss: 1.8112...  0.4687 sec/batch
Epoch: 2/10...  Training Step: 327...  Training loss: 1.8036...  0.4215 sec/batch
Epoch: 2/10...  Training Step: 328...  Training loss: 1.8190...  0.4173 sec/batch
Epoch: 2/10...  Training Step: 329...  Training loss: 1.8119...  0.3124 sec/batch
Epoch: 2/10...  Training Step: 330...  Training loss: 1.8362...  0.3260 sec/batch
Epoch: 2/10...  Training Step: 331...  Training loss: 1.8143...  0.3193 sec/batch
Epoch: 2/10...  Training Step: 332...  Training loss: 1.8211...  0.2731 sec/batch
Epoch: 2/10...  Training Step: 333...  Training loss: 1.7972...  0.2935 sec/batch
Epoch: 2/10...  Training Step: 334...  Training loss: 1.8395...  0.4528 sec/batch
Epoch: 2/10...  Training Step: 335...  Training loss: 1.8521...  0.3954 sec/batch
Epoch: 2/10...  Training Step: 336...  Training loss: 1.8256...  0.3404 sec/batch
Epoch: 2/10...  Training Step: 337...  Training loss: 1.8185...  0.3883 sec/batch
Epoch: 2/10...  Training Step: 338...  Training loss: 1.7911...  0.2888 sec/batch
Epoch: 2/10...  Training Step: 339...  Training loss: 1.8136...  0.3616 sec/batch
Epoch: 2/10...  Training Step: 340...  Training loss: 1.8169...  0.2954 sec/batch
Epoch: 2/10...  Training Step: 341...  Training loss: 1.8020...  0.3256 sec/batch
Epoch: 2/10...  Training Step: 342...  Training loss: 1.8272...  0.3504 sec/batch
Epoch: 2/10...  Training Step: 343...  Training loss: 1.8199...  0.3916 sec/batch
Epoch: 2/10...  Training Step: 344...  Training loss: 1.8409...  0.2910 sec/batch
Epoch: 2/10...  Training Step: 345...  Training loss: 1.8252...  0.2953 sec/batch
Epoch: 2/10...  Training Step: 346...  Training loss: 1.8181...  0.3273 sec/batch
Epoch: 2/10...  Training Step: 347...  Training loss: 1.8174...  0.3116 sec/batch
Epoch: 2/10...  Training Step: 348...  Training loss: 1.8175...  0.3380 sec/batch
Epoch: 2/10...  Training Step: 349...  Training loss: 1.8421...  0.3478 sec/batch
Epoch: 2/10...  Training Step: 350...  Training loss: 1.8178...  0.3780 sec/batch
Epoch: 2/10...  Training Step: 351...  Training loss: 1.8116...  0.3708 sec/batch
Epoch: 2/10...  Training Step: 352...  Training loss: 1.7768...  0.3637 sec/batch
Epoch: 2/10...  Training Step: 353...  Training loss: 1.7997...  0.3386 sec/batch
Epoch: 2/10...  Training Step: 354...  Training loss: 1.7934...  0.3052 sec/batch
Epoch: 2/10...  Training Step: 355...  Training loss: 1.7944...  0.4172 sec/batch
Epoch: 2/10...  Training Step: 356...  Training loss: 1.7782...  0.3108 sec/batch
Epoch: 2/10...  Training Step: 357...  Training loss: 1.7874...  0.2826 sec/batch
Epoch: 2/10...  Training Step: 358...  Training loss: 1.7611...  0.3049 sec/batch
Epoch: 2/10...  Training Step: 359...  Training loss: 1.7694...  0.3403 sec/batch
Epoch: 2/10...  Training Step: 360...  Training loss: 1.7390...  0.3558 sec/batch
Epoch: 2/10...  Training Step: 361...  Training loss: 1.7827...  0.3251 sec/batch
Epoch: 2/10...  Training Step: 362...  Training loss: 1.7848...  0.4595 sec/batch
Epoch: 2/10...  Training Step: 363...  Training loss: 1.7916...  0.3272 sec/batch
Epoch: 2/10...  Training Step: 364...  Training loss: 1.7627...  0.2741 sec/batch
Epoch: 2/10...  Training Step: 365...  Training loss: 1.7821...  0.3734 sec/batch
Epoch: 2/10...  Training Step: 366...  Training loss: 1.7821...  0.3688 sec/batch
Epoch: 2/10...  Training Step: 367...  Training loss: 1.7812...  0.2806 sec/batch
Epoch: 2/10...  Training Step: 368...  Training loss: 1.7652...  0.3639 sec/batch
Epoch: 2/10...  Training Step: 369...  Training loss: 1.7998...  0.2900 sec/batch
Epoch: 2/10...  Training Step: 370...  Training loss: 1.7785...  0.2841 sec/batch
Epoch: 2/10...  Training Step: 371...  Training loss: 1.7679...  0.3012 sec/batch
Epoch: 2/10...  Training Step: 372...  Training loss: 1.7603...  0.3071 sec/batch
Epoch: 2/10...  Training Step: 373...  Training loss: 1.7632...  0.3572 sec/batch
Epoch: 2/10...  Training Step: 374...  Training loss: 1.7643...  0.3321 sec/batch
Epoch: 2/10...  Training Step: 375...  Training loss: 1.7315...  0.3845 sec/batch
Epoch: 2/10...  Training Step: 376...  Training loss: 1.7594...  0.4008 sec/batch
Epoch: 2/10...  Training Step: 377...  Training loss: 1.7735...  0.2896 sec/batch
Epoch: 2/10...  Training Step: 378...  Training loss: 1.7950...  0.3123 sec/batch
Epoch: 2/10...  Training Step: 379...  Training loss: 1.7775...  0.3149 sec/batch
Epoch: 2/10...  Training Step: 380...  Training loss: 1.7657...  0.3109 sec/batch
Epoch: 2/10...  Training Step: 381...  Training loss: 1.7708...  0.3082 sec/batch
Epoch: 2/10...  Training Step: 382...  Training loss: 1.7692...  0.3227 sec/batch
Epoch: 2/10...  Training Step: 383...  Training loss: 1.7599...  0.3569 sec/batch
Epoch: 2/10...  Training Step: 384...  Training loss: 1.7127...  0.2869 sec/batch
Epoch: 2/10...  Training Step: 385...  Training loss: 1.7587...  0.3659 sec/batch
Epoch: 2/10...  Training Step: 386...  Training loss: 1.7505...  0.2763 sec/batch
Epoch: 2/10...  Training Step: 387...  Training loss: 1.7426...  0.3067 sec/batch
Epoch: 2/10...  Training Step: 388...  Training loss: 1.7514...  0.3329 sec/batch
Epoch: 2/10...  Training Step: 389...  Training loss: 1.7561...  0.3358 sec/batch
Epoch: 2/10...  Training Step: 390...  Training loss: 1.7442...  0.3224 sec/batch
Epoch: 2/10...  Training Step: 391...  Training loss: 1.7601...  0.3411 sec/batch
Epoch: 2/10...  Training Step: 392...  Training loss: 1.7493...  0.3283 sec/batch
Epoch: 2/10...  Training Step: 393...  Training loss: 1.7462...  0.3585 sec/batch
Epoch: 2/10...  Training Step: 394...  Training loss: 1.7354...  0.3346 sec/batch
Epoch: 2/10...  Training Step: 395...  Training loss: 1.7456...  0.2997 sec/batch
Epoch: 2/10...  Training Step: 396...  Training loss: 1.7376...  0.3407 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 3/10...  Training Step: 397...  Training loss: 1.8109...  0.3186 sec/batch
Epoch: 3/10...  Training Step: 398...  Training loss: 1.7588...  0.2836 sec/batch
Epoch: 3/10...  Training Step: 399...  Training loss: 1.7559...  0.3453 sec/batch
Epoch: 3/10...  Training Step: 400...  Training loss: 1.7680...  0.3907 sec/batch
Epoch: 3/10...  Training Step: 401...  Training loss: 1.7438...  0.3412 sec/batch
Epoch: 3/10...  Training Step: 402...  Training loss: 1.7483...  0.3260 sec/batch
Epoch: 3/10...  Training Step: 403...  Training loss: 1.7262...  0.3802 sec/batch
Epoch: 3/10...  Training Step: 404...  Training loss: 1.7610...  0.3941 sec/batch
Epoch: 3/10...  Training Step: 405...  Training loss: 1.7350...  0.2929 sec/batch
Epoch: 3/10...  Training Step: 406...  Training loss: 1.7458...  0.3380 sec/batch
Epoch: 3/10...  Training Step: 407...  Training loss: 1.7208...  0.3723 sec/batch
Epoch: 3/10...  Training Step: 408...  Training loss: 1.7253...  0.3459 sec/batch
Epoch: 3/10...  Training Step: 409...  Training loss: 1.7327...  0.2940 sec/batch
Epoch: 3/10...  Training Step: 410...  Training loss: 1.7569...  0.2994 sec/batch
Epoch: 3/10...  Training Step: 411...  Training loss: 1.7206...  0.3266 sec/batch
Epoch: 3/10...  Training Step: 412...  Training loss: 1.7701...  0.3780 sec/batch
Epoch: 3/10...  Training Step: 413...  Training loss: 1.7387...  0.2941 sec/batch
Epoch: 3/10...  Training Step: 414...  Training loss: 1.7701...  0.4128 sec/batch
Epoch: 3/10...  Training Step: 415...  Training loss: 1.7397...  0.3939 sec/batch
Epoch: 3/10...  Training Step: 416...  Training loss: 1.7363...  0.3597 sec/batch
Epoch: 3/10...  Training Step: 417...  Training loss: 1.7583...  0.3389 sec/batch
Epoch: 3/10...  Training Step: 418...  Training loss: 1.7258...  0.3719 sec/batch
Epoch: 3/10...  Training Step: 419...  Training loss: 1.7078...  0.3012 sec/batch
Epoch: 3/10...  Training Step: 420...  Training loss: 1.7410...  0.3538 sec/batch
Epoch: 3/10...  Training Step: 421...  Training loss: 1.7209...  0.2911 sec/batch
Epoch: 3/10...  Training Step: 422...  Training loss: 1.7378...  0.3266 sec/batch
Epoch: 3/10...  Training Step: 423...  Training loss: 1.7561...  0.3493 sec/batch
Epoch: 3/10...  Training Step: 424...  Training loss: 1.7494...  0.3547 sec/batch
Epoch: 3/10...  Training Step: 425...  Training loss: 1.7032...  0.3494 sec/batch
Epoch: 3/10...  Training Step: 426...  Training loss: 1.7442...  0.3556 sec/batch
Epoch: 3/10...  Training Step: 427...  Training loss: 1.7131...  0.3720 sec/batch
Epoch: 3/10...  Training Step: 428...  Training loss: 1.7260...  0.3126 sec/batch
Epoch: 3/10...  Training Step: 429...  Training loss: 1.6912...  0.3597 sec/batch
Epoch: 3/10...  Training Step: 430...  Training loss: 1.7469...  0.4069 sec/batch
Epoch: 3/10...  Training Step: 431...  Training loss: 1.7441...  0.3044 sec/batch
Epoch: 3/10...  Training Step: 432...  Training loss: 1.7493...  0.3067 sec/batch
Epoch: 3/10...  Training Step: 433...  Training loss: 1.7539...  0.3280 sec/batch
Epoch: 3/10...  Training Step: 434...  Training loss: 1.7235...  0.3584 sec/batch
Epoch: 3/10...  Training Step: 435...  Training loss: 1.7291...  0.3169 sec/batch
Epoch: 3/10...  Training Step: 436...  Training loss: 1.7504...  0.3181 sec/batch
Epoch: 3/10...  Training Step: 437...  Training loss: 1.7174...  0.4031 sec/batch
Epoch: 3/10...  Training Step: 438...  Training loss: 1.7227...  0.4086 sec/batch
Epoch: 3/10...  Training Step: 439...  Training loss: 1.7575...  0.2607 sec/batch
Epoch: 3/10...  Training Step: 440...  Training loss: 1.7270...  0.3803 sec/batch
Epoch: 3/10...  Training Step: 441...  Training loss: 1.6931...  0.3740 sec/batch
Epoch: 3/10...  Training Step: 442...  Training loss: 1.7105...  0.2991 sec/batch
Epoch: 3/10...  Training Step: 443...  Training loss: 1.7124...  0.4144 sec/batch
Epoch: 3/10...  Training Step: 444...  Training loss: 1.6981...  0.4179 sec/batch
Epoch: 3/10...  Training Step: 445...  Training loss: 1.7405...  0.4349 sec/batch
Epoch: 3/10...  Training Step: 446...  Training loss: 1.7202...  0.3535 sec/batch
Epoch: 3/10...  Training Step: 447...  Training loss: 1.7018...  0.2921 sec/batch
Epoch: 3/10...  Training Step: 448...  Training loss: 1.7099...  0.3383 sec/batch
Epoch: 3/10...  Training Step: 449...  Training loss: 1.6891...  0.3676 sec/batch
Epoch: 3/10...  Training Step: 450...  Training loss: 1.7010...  0.3569 sec/batch
Epoch: 3/10...  Training Step: 451...  Training loss: 1.7649...  0.3565 sec/batch
Epoch: 3/10...  Training Step: 452...  Training loss: 1.7130...  0.2972 sec/batch
Epoch: 3/10...  Training Step: 453...  Training loss: 1.7411...  0.3554 sec/batch
Epoch: 3/10...  Training Step: 454...  Training loss: 1.7319...  0.3646 sec/batch
Epoch: 3/10...  Training Step: 455...  Training loss: 1.7147...  0.3390 sec/batch
Epoch: 3/10...  Training Step: 456...  Training loss: 1.7387...  0.4152 sec/batch
Epoch: 3/10...  Training Step: 457...  Training loss: 1.7065...  0.3724 sec/batch
Epoch: 3/10...  Training Step: 458...  Training loss: 1.7229...  0.3192 sec/batch
Epoch: 3/10...  Training Step: 459...  Training loss: 1.6935...  0.3114 sec/batch
Epoch: 3/10...  Training Step: 460...  Training loss: 1.6799...  0.3062 sec/batch
Epoch: 3/10...  Training Step: 461...  Training loss: 1.6894...  0.3353 sec/batch
Epoch: 3/10...  Training Step: 462...  Training loss: 1.7234...  0.3545 sec/batch
Epoch: 3/10...  Training Step: 463...  Training loss: 1.7550...  0.3730 sec/batch
Epoch: 3/10...  Training Step: 464...  Training loss: 1.7046...  0.3815 sec/batch
Epoch: 3/10...  Training Step: 465...  Training loss: 1.7103...  0.3384 sec/batch
Epoch: 3/10...  Training Step: 466...  Training loss: 1.7044...  0.3417 sec/batch
Epoch: 3/10...  Training Step: 467...  Training loss: 1.7146...  0.3187 sec/batch
Epoch: 3/10...  Training Step: 468...  Training loss: 1.7193...  0.2858 sec/batch
Epoch: 3/10...  Training Step: 469...  Training loss: 1.7020...  0.3716 sec/batch
Epoch: 3/10...  Training Step: 470...  Training loss: 1.7144...  0.3522 sec/batch
Epoch: 3/10...  Training Step: 471...  Training loss: 1.6813...  0.3845 sec/batch
Epoch: 3/10...  Training Step: 472...  Training loss: 1.7283...  0.3417 sec/batch
Epoch: 3/10...  Training Step: 473...  Training loss: 1.6941...  0.3098 sec/batch
Epoch: 3/10...  Training Step: 474...  Training loss: 1.6780...  0.3528 sec/batch
Epoch: 3/10...  Training Step: 475...  Training loss: 1.6862...  0.3076 sec/batch
Epoch: 3/10...  Training Step: 476...  Training loss: 1.7133...  0.3265 sec/batch
Epoch: 3/10...  Training Step: 477...  Training loss: 1.6792...  0.3559 sec/batch
Epoch: 3/10...  Training Step: 478...  Training loss: 1.6842...  0.3600 sec/batch
Epoch: 3/10...  Training Step: 479...  Training loss: 1.7226...  0.3714 sec/batch
Epoch: 3/10...  Training Step: 480...  Training loss: 1.7317...  0.3660 sec/batch
Epoch: 3/10...  Training Step: 481...  Training loss: 1.6854...  0.3421 sec/batch
Epoch: 3/10...  Training Step: 482...  Training loss: 1.6853...  0.3004 sec/batch
Epoch: 3/10...  Training Step: 483...  Training loss: 1.6765...  0.3572 sec/batch
Epoch: 3/10...  Training Step: 484...  Training loss: 1.7052...  0.3135 sec/batch
Epoch: 3/10...  Training Step: 485...  Training loss: 1.7110...  0.2972 sec/batch
Epoch: 3/10...  Training Step: 486...  Training loss: 1.7208...  0.3429 sec/batch
Epoch: 3/10...  Training Step: 487...  Training loss: 1.6564...  0.3880 sec/batch
Epoch: 3/10...  Training Step: 488...  Training loss: 1.6700...  0.3594 sec/batch
Epoch: 3/10...  Training Step: 489...  Training loss: 1.6867...  0.3607 sec/batch
Epoch: 3/10...  Training Step: 490...  Training loss: 1.7118...  0.3236 sec/batch
Epoch: 3/10...  Training Step: 491...  Training loss: 1.6904...  0.3382 sec/batch
Epoch: 3/10...  Training Step: 492...  Training loss: 1.7003...  0.3099 sec/batch
Epoch: 3/10...  Training Step: 493...  Training loss: 1.6970...  0.3594 sec/batch
Epoch: 3/10...  Training Step: 494...  Training loss: 1.7241...  0.3732 sec/batch
Epoch: 3/10...  Training Step: 495...  Training loss: 1.7169...  0.3290 sec/batch
Epoch: 3/10...  Training Step: 496...  Training loss: 1.7187...  0.3074 sec/batch
Epoch: 3/10...  Training Step: 497...  Training loss: 1.6639...  0.3540 sec/batch
Epoch: 3/10...  Training Step: 498...  Training loss: 1.6769...  0.2917 sec/batch
Epoch: 3/10...  Training Step: 499...  Training loss: 1.6922...  0.3213 sec/batch
Epoch: 3/10...  Training Step: 500...  Training loss: 1.7096...  0.3954 sec/batch
Epoch: 3/10...  Training Step: 501...  Training loss: 1.7289...  0.3273 sec/batch
Epoch: 3/10...  Training Step: 502...  Training loss: 1.6862...  0.2950 sec/batch
Epoch: 3/10...  Training Step: 503...  Training loss: 1.7062...  0.3080 sec/batch
Epoch: 3/10...  Training Step: 504...  Training loss: 1.7020...  0.3171 sec/batch
Epoch: 3/10...  Training Step: 505...  Training loss: 1.7098...  0.4063 sec/batch
Epoch: 3/10...  Training Step: 506...  Training loss: 1.6723...  0.3417 sec/batch
Epoch: 3/10...  Training Step: 507...  Training loss: 1.7003...  0.2433 sec/batch
Epoch: 3/10...  Training Step: 508...  Training loss: 1.6844...  0.3380 sec/batch
Epoch: 3/10...  Training Step: 509...  Training loss: 1.6913...  0.3938 sec/batch
Epoch: 3/10...  Training Step: 510...  Training loss: 1.7264...  0.3488 sec/batch
Epoch: 3/10...  Training Step: 511...  Training loss: 1.6866...  0.2880 sec/batch
Epoch: 3/10...  Training Step: 512...  Training loss: 1.6929...  0.2869 sec/batch
Epoch: 3/10...  Training Step: 513...  Training loss: 1.6815...  0.3191 sec/batch
Epoch: 3/10...  Training Step: 514...  Training loss: 1.6667...  0.3579 sec/batch
Epoch: 3/10...  Training Step: 515...  Training loss: 1.6758...  0.3443 sec/batch
Epoch: 3/10...  Training Step: 516...  Training loss: 1.7293...  0.3638 sec/batch
Epoch: 3/10...  Training Step: 517...  Training loss: 1.7028...  0.4025 sec/batch
Epoch: 3/10...  Training Step: 518...  Training loss: 1.7030...  0.3449 sec/batch
Epoch: 3/10...  Training Step: 519...  Training loss: 1.7184...  0.3123 sec/batch
Epoch: 3/10...  Training Step: 520...  Training loss: 1.7290...  0.2870 sec/batch
Epoch: 3/10...  Training Step: 521...  Training loss: 1.6974...  0.3568 sec/batch
Epoch: 3/10...  Training Step: 522...  Training loss: 1.7014...  0.3501 sec/batch
Epoch: 3/10...  Training Step: 523...  Training loss: 1.7003...  0.3270 sec/batch
Epoch: 3/10...  Training Step: 524...  Training loss: 1.6817...  0.3160 sec/batch
Epoch: 3/10...  Training Step: 525...  Training loss: 1.6842...  0.3235 sec/batch
Epoch: 3/10...  Training Step: 526...  Training loss: 1.6843...  0.3479 sec/batch
Epoch: 3/10...  Training Step: 527...  Training loss: 1.6754...  0.3738 sec/batch
Epoch: 3/10...  Training Step: 528...  Training loss: 1.7004...  0.4710 sec/batch
Epoch: 3/10...  Training Step: 529...  Training loss: 1.6949...  0.3168 sec/batch
Epoch: 3/10...  Training Step: 530...  Training loss: 1.6883...  0.3801 sec/batch
Epoch: 3/10...  Training Step: 531...  Training loss: 1.6706...  0.3382 sec/batch
Epoch: 3/10...  Training Step: 532...  Training loss: 1.7074...  0.3797 sec/batch
Epoch: 3/10...  Training Step: 533...  Training loss: 1.7306...  0.3209 sec/batch
Epoch: 3/10...  Training Step: 534...  Training loss: 1.6994...  0.3365 sec/batch
Epoch: 3/10...  Training Step: 535...  Training loss: 1.6825...  0.3491 sec/batch
Epoch: 3/10...  Training Step: 536...  Training loss: 1.6645...  0.2892 sec/batch
Epoch: 3/10...  Training Step: 537...  Training loss: 1.6921...  0.2836 sec/batch
Epoch: 3/10...  Training Step: 538...  Training loss: 1.6939...  0.3130 sec/batch
Epoch: 3/10...  Training Step: 539...  Training loss: 1.6719...  0.2894 sec/batch
Epoch: 3/10...  Training Step: 540...  Training loss: 1.6938...  0.3792 sec/batch
Epoch: 3/10...  Training Step: 541...  Training loss: 1.6924...  0.2862 sec/batch
Epoch: 3/10...  Training Step: 542...  Training loss: 1.7096...  0.3226 sec/batch
Epoch: 3/10...  Training Step: 543...  Training loss: 1.6957...  0.3421 sec/batch
Epoch: 3/10...  Training Step: 544...  Training loss: 1.6873...  0.3678 sec/batch
Epoch: 3/10...  Training Step: 545...  Training loss: 1.6913...  0.4081 sec/batch
Epoch: 3/10...  Training Step: 546...  Training loss: 1.6917...  0.3027 sec/batch
Epoch: 3/10...  Training Step: 547...  Training loss: 1.7176...  0.3000 sec/batch
Epoch: 3/10...  Training Step: 548...  Training loss: 1.6935...  0.3821 sec/batch
Epoch: 3/10...  Training Step: 549...  Training loss: 1.6855...  0.3226 sec/batch
Epoch: 3/10...  Training Step: 550...  Training loss: 1.6429...  0.3064 sec/batch
Epoch: 3/10...  Training Step: 551...  Training loss: 1.6779...  0.3936 sec/batch
Epoch: 3/10...  Training Step: 552...  Training loss: 1.6734...  0.4222 sec/batch
Epoch: 3/10...  Training Step: 553...  Training loss: 1.6721...  0.3196 sec/batch
Epoch: 3/10...  Training Step: 554...  Training loss: 1.6623...  0.3409 sec/batch
Epoch: 3/10...  Training Step: 555...  Training loss: 1.6709...  0.4476 sec/batch
Epoch: 3/10...  Training Step: 556...  Training loss: 1.6392...  0.3550 sec/batch
Epoch: 3/10...  Training Step: 557...  Training loss: 1.6500...  0.2947 sec/batch
Epoch: 3/10...  Training Step: 558...  Training loss: 1.6267...  0.3371 sec/batch
Epoch: 3/10...  Training Step: 559...  Training loss: 1.6557...  0.3684 sec/batch
Epoch: 3/10...  Training Step: 560...  Training loss: 1.6711...  0.3023 sec/batch
Epoch: 3/10...  Training Step: 561...  Training loss: 1.6717...  0.3352 sec/batch
Epoch: 3/10...  Training Step: 562...  Training loss: 1.6515...  0.2613 sec/batch
Epoch: 3/10...  Training Step: 563...  Training loss: 1.6653...  0.3054 sec/batch
Epoch: 3/10...  Training Step: 564...  Training loss: 1.6732...  0.3092 sec/batch
Epoch: 3/10...  Training Step: 565...  Training loss: 1.6642...  0.3822 sec/batch
Epoch: 3/10...  Training Step: 566...  Training loss: 1.6592...  0.2879 sec/batch
Epoch: 3/10...  Training Step: 567...  Training loss: 1.6842...  0.3523 sec/batch
Epoch: 3/10...  Training Step: 568...  Training loss: 1.6590...  0.3406 sec/batch
Epoch: 3/10...  Training Step: 569...  Training loss: 1.6511...  0.3976 sec/batch
Epoch: 3/10...  Training Step: 570...  Training loss: 1.6359...  0.3537 sec/batch
Epoch: 3/10...  Training Step: 571...  Training loss: 1.6551...  0.3489 sec/batch
Epoch: 3/10...  Training Step: 572...  Training loss: 1.6534...  0.3215 sec/batch
Epoch: 3/10...  Training Step: 573...  Training loss: 1.6137...  0.3471 sec/batch
Epoch: 3/10...  Training Step: 574...  Training loss: 1.6577...  0.3384 sec/batch
Epoch: 3/10...  Training Step: 575...  Training loss: 1.6616...  0.3215 sec/batch
Epoch: 3/10...  Training Step: 576...  Training loss: 1.6798...  0.3973 sec/batch
Epoch: 3/10...  Training Step: 577...  Training loss: 1.6676...  0.3069 sec/batch
Epoch: 3/10...  Training Step: 578...  Training loss: 1.6500...  0.3626 sec/batch
Epoch: 3/10...  Training Step: 579...  Training loss: 1.6580...  0.2990 sec/batch
Epoch: 3/10...  Training Step: 580...  Training loss: 1.6536...  0.3494 sec/batch
Epoch: 3/10...  Training Step: 581...  Training loss: 1.6576...  0.2927 sec/batch
Epoch: 3/10...  Training Step: 582...  Training loss: 1.5975...  0.3640 sec/batch
Epoch: 3/10...  Training Step: 583...  Training loss: 1.6518...  0.2897 sec/batch
Epoch: 3/10...  Training Step: 584...  Training loss: 1.6419...  0.4168 sec/batch
Epoch: 3/10...  Training Step: 585...  Training loss: 1.6318...  0.3921 sec/batch
Epoch: 3/10...  Training Step: 586...  Training loss: 1.6435...  0.3166 sec/batch
Epoch: 3/10...  Training Step: 587...  Training loss: 1.6428...  0.2952 sec/batch
Epoch: 3/10...  Training Step: 588...  Training loss: 1.6472...  0.3265 sec/batch
Epoch: 3/10...  Training Step: 589...  Training loss: 1.6626...  0.3187 sec/batch
Epoch: 3/10...  Training Step: 590...  Training loss: 1.6371...  0.3890 sec/batch
Epoch: 3/10...  Training Step: 591...  Training loss: 1.6499...  0.3144 sec/batch
Epoch: 3/10...  Training Step: 592...  Training loss: 1.6264...  0.2863 sec/batch
Epoch: 3/10...  Training Step: 593...  Training loss: 1.6456...  0.3680 sec/batch
Epoch: 3/10...  Training Step: 594...  Training loss: 1.6351...  0.3023 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 4/10...  Training Step: 595...  Training loss: 1.7120...  0.3932 sec/batch
Epoch: 4/10...  Training Step: 596...  Training loss: 1.6609...  0.2867 sec/batch
Epoch: 4/10...  Training Step: 597...  Training loss: 1.6547...  0.3753 sec/batch
Epoch: 4/10...  Training Step: 598...  Training loss: 1.6636...  0.3916 sec/batch
Epoch: 4/10...  Training Step: 599...  Training loss: 1.6418...  0.3202 sec/batch
Epoch: 4/10...  Training Step: 600...  Training loss: 1.6486...  0.2959 sec/batch
Epoch: 4/10...  Training Step: 601...  Training loss: 1.6258...  0.2981 sec/batch
Epoch: 4/10...  Training Step: 602...  Training loss: 1.6693...  0.2764 sec/batch
Epoch: 4/10...  Training Step: 603...  Training loss: 1.6293...  0.3379 sec/batch
Epoch: 4/10...  Training Step: 604...  Training loss: 1.6455...  0.3177 sec/batch
Epoch: 4/10...  Training Step: 605...  Training loss: 1.6190...  0.2922 sec/batch
Epoch: 4/10...  Training Step: 606...  Training loss: 1.6235...  0.3609 sec/batch
Epoch: 4/10...  Training Step: 607...  Training loss: 1.6383...  0.3534 sec/batch
Epoch: 4/10...  Training Step: 608...  Training loss: 1.6558...  0.3707 sec/batch
Epoch: 4/10...  Training Step: 609...  Training loss: 1.6266...  0.3891 sec/batch
Epoch: 4/10...  Training Step: 610...  Training loss: 1.6738...  0.3410 sec/batch
Epoch: 4/10...  Training Step: 611...  Training loss: 1.6375...  0.3142 sec/batch
Epoch: 4/10...  Training Step: 612...  Training loss: 1.6657...  0.3487 sec/batch
Epoch: 4/10...  Training Step: 613...  Training loss: 1.6375...  0.3769 sec/batch
Epoch: 4/10...  Training Step: 614...  Training loss: 1.6402...  0.2891 sec/batch
Epoch: 4/10...  Training Step: 615...  Training loss: 1.6607...  0.3228 sec/batch
Epoch: 4/10...  Training Step: 616...  Training loss: 1.6275...  0.3224 sec/batch
Epoch: 4/10...  Training Step: 617...  Training loss: 1.6090...  0.3370 sec/batch
Epoch: 4/10...  Training Step: 618...  Training loss: 1.6442...  0.2979 sec/batch
Epoch: 4/10...  Training Step: 619...  Training loss: 1.6353...  0.4358 sec/batch
Epoch: 4/10...  Training Step: 620...  Training loss: 1.6454...  0.4045 sec/batch
Epoch: 4/10...  Training Step: 621...  Training loss: 1.6590...  0.4023 sec/batch
Epoch: 4/10...  Training Step: 622...  Training loss: 1.6511...  0.3245 sec/batch
Epoch: 4/10...  Training Step: 623...  Training loss: 1.6116...  0.3240 sec/batch
Epoch: 4/10...  Training Step: 624...  Training loss: 1.6476...  0.3397 sec/batch
Epoch: 4/10...  Training Step: 625...  Training loss: 1.6168...  0.3654 sec/batch
Epoch: 4/10...  Training Step: 626...  Training loss: 1.6316...  0.2517 sec/batch
Epoch: 4/10...  Training Step: 627...  Training loss: 1.5967...  0.2842 sec/batch
Epoch: 4/10...  Training Step: 628...  Training loss: 1.6498...  0.3563 sec/batch
Epoch: 4/10...  Training Step: 629...  Training loss: 1.6483...  0.3705 sec/batch
Epoch: 4/10...  Training Step: 630...  Training loss: 1.6596...  0.3404 sec/batch
Epoch: 4/10...  Training Step: 631...  Training loss: 1.6544...  0.3674 sec/batch
Epoch: 4/10...  Training Step: 632...  Training loss: 1.6354...  0.3342 sec/batch
Epoch: 4/10...  Training Step: 633...  Training loss: 1.6335...  0.4171 sec/batch
Epoch: 4/10...  Training Step: 634...  Training loss: 1.6540...  0.3420 sec/batch
Epoch: 4/10...  Training Step: 635...  Training loss: 1.6273...  0.3356 sec/batch
Epoch: 4/10...  Training Step: 636...  Training loss: 1.6285...  0.3268 sec/batch
Epoch: 4/10...  Training Step: 637...  Training loss: 1.6571...  0.3225 sec/batch
Epoch: 4/10...  Training Step: 638...  Training loss: 1.6281...  0.2778 sec/batch
Epoch: 4/10...  Training Step: 639...  Training loss: 1.6018...  0.3158 sec/batch
Epoch: 4/10...  Training Step: 640...  Training loss: 1.6174...  0.3605 sec/batch
Epoch: 4/10...  Training Step: 641...  Training loss: 1.6153...  0.3178 sec/batch
Epoch: 4/10...  Training Step: 642...  Training loss: 1.6106...  0.3970 sec/batch
Epoch: 4/10...  Training Step: 643...  Training loss: 1.6512...  0.3501 sec/batch
Epoch: 4/10...  Training Step: 644...  Training loss: 1.6355...  0.3295 sec/batch
Epoch: 4/10...  Training Step: 645...  Training loss: 1.6109...  0.3703 sec/batch
Epoch: 4/10...  Training Step: 646...  Training loss: 1.6138...  0.3228 sec/batch
Epoch: 4/10...  Training Step: 647...  Training loss: 1.5981...  0.3333 sec/batch
Epoch: 4/10...  Training Step: 648...  Training loss: 1.6133...  0.3177 sec/batch
Epoch: 4/10...  Training Step: 649...  Training loss: 1.6704...  0.3508 sec/batch
Epoch: 4/10...  Training Step: 650...  Training loss: 1.6245...  0.4441 sec/batch
Epoch: 4/10...  Training Step: 651...  Training loss: 1.6491...  0.3471 sec/batch
Epoch: 4/10...  Training Step: 652...  Training loss: 1.6447...  0.3112 sec/batch
Epoch: 4/10...  Training Step: 653...  Training loss: 1.6294...  0.3091 sec/batch
Epoch: 4/10...  Training Step: 654...  Training loss: 1.6526...  0.3754 sec/batch
Epoch: 4/10...  Training Step: 655...  Training loss: 1.6120...  0.3346 sec/batch
Epoch: 4/10...  Training Step: 656...  Training loss: 1.6322...  0.3067 sec/batch
Epoch: 4/10...  Training Step: 657...  Training loss: 1.6071...  0.3558 sec/batch
Epoch: 4/10...  Training Step: 658...  Training loss: 1.5927...  0.3931 sec/batch
Epoch: 4/10...  Training Step: 659...  Training loss: 1.6049...  0.3475 sec/batch
Epoch: 4/10...  Training Step: 660...  Training loss: 1.6365...  0.2751 sec/batch
Epoch: 4/10...  Training Step: 661...  Training loss: 1.6694...  0.3917 sec/batch
Epoch: 4/10...  Training Step: 662...  Training loss: 1.6228...  0.3977 sec/batch
Epoch: 4/10...  Training Step: 663...  Training loss: 1.6313...  0.3335 sec/batch
Epoch: 4/10...  Training Step: 664...  Training loss: 1.6191...  0.2606 sec/batch
Epoch: 4/10...  Training Step: 665...  Training loss: 1.6251...  0.3233 sec/batch
Epoch: 4/10...  Training Step: 666...  Training loss: 1.6331...  0.2909 sec/batch
Epoch: 4/10...  Training Step: 667...  Training loss: 1.6085...  0.4106 sec/batch
Epoch: 4/10...  Training Step: 668...  Training loss: 1.6256...  0.3221 sec/batch
Epoch: 4/10...  Training Step: 669...  Training loss: 1.6037...  0.3352 sec/batch
Epoch: 4/10...  Training Step: 670...  Training loss: 1.6410...  0.3239 sec/batch
Epoch: 4/10...  Training Step: 671...  Training loss: 1.6081...  0.3268 sec/batch
Epoch: 4/10...  Training Step: 672...  Training loss: 1.5943...  0.3102 sec/batch
Epoch: 4/10...  Training Step: 673...  Training loss: 1.5971...  0.2912 sec/batch
Epoch: 4/10...  Training Step: 674...  Training loss: 1.6323...  0.3159 sec/batch
Epoch: 4/10...  Training Step: 675...  Training loss: 1.5960...  0.3358 sec/batch
Epoch: 4/10...  Training Step: 676...  Training loss: 1.6076...  0.2988 sec/batch
Epoch: 4/10...  Training Step: 677...  Training loss: 1.6490...  0.3136 sec/batch
Epoch: 4/10...  Training Step: 678...  Training loss: 1.6558...  0.2700 sec/batch
Epoch: 4/10...  Training Step: 679...  Training loss: 1.6026...  0.3058 sec/batch
Epoch: 4/10...  Training Step: 680...  Training loss: 1.6039...  0.3441 sec/batch
Epoch: 4/10...  Training Step: 681...  Training loss: 1.6003...  0.3731 sec/batch
Epoch: 4/10...  Training Step: 682...  Training loss: 1.6253...  0.3353 sec/batch
Epoch: 4/10...  Training Step: 683...  Training loss: 1.6363...  0.2828 sec/batch
Epoch: 4/10...  Training Step: 684...  Training loss: 1.6458...  0.4494 sec/batch
Epoch: 4/10...  Training Step: 685...  Training loss: 1.5847...  0.3058 sec/batch
Epoch: 4/10...  Training Step: 686...  Training loss: 1.5905...  0.3108 sec/batch
Epoch: 4/10...  Training Step: 687...  Training loss: 1.6178...  0.3449 sec/batch
Epoch: 4/10...  Training Step: 688...  Training loss: 1.6291...  0.3188 sec/batch
Epoch: 4/10...  Training Step: 689...  Training loss: 1.6084...  0.2997 sec/batch
Epoch: 4/10...  Training Step: 690...  Training loss: 1.6220...  0.3246 sec/batch
Epoch: 4/10...  Training Step: 691...  Training loss: 1.6222...  0.3238 sec/batch
Epoch: 4/10...  Training Step: 692...  Training loss: 1.6528...  0.3861 sec/batch
Epoch: 4/10...  Training Step: 693...  Training loss: 1.6396...  0.4301 sec/batch
Epoch: 4/10...  Training Step: 694...  Training loss: 1.6434...  0.2892 sec/batch
Epoch: 4/10...  Training Step: 695...  Training loss: 1.5937...  0.2905 sec/batch
Epoch: 4/10...  Training Step: 696...  Training loss: 1.5982...  0.3095 sec/batch
Epoch: 4/10...  Training Step: 697...  Training loss: 1.6083...  0.3185 sec/batch
Epoch: 4/10...  Training Step: 698...  Training loss: 1.6304...  0.4037 sec/batch
Epoch: 4/10...  Training Step: 699...  Training loss: 1.6448...  0.3238 sec/batch
Epoch: 4/10...  Training Step: 700...  Training loss: 1.6067...  0.2708 sec/batch
Epoch: 4/10...  Training Step: 701...  Training loss: 1.6278...  0.3243 sec/batch
Epoch: 4/10...  Training Step: 702...  Training loss: 1.6279...  0.3207 sec/batch
Epoch: 4/10...  Training Step: 703...  Training loss: 1.6306...  0.3391 sec/batch
Epoch: 4/10...  Training Step: 704...  Training loss: 1.5994...  0.3497 sec/batch
Epoch: 4/10...  Training Step: 705...  Training loss: 1.6255...  0.3066 sec/batch
Epoch: 4/10...  Training Step: 706...  Training loss: 1.6111...  0.4393 sec/batch
Epoch: 4/10...  Training Step: 707...  Training loss: 1.6162...  0.3729 sec/batch
Epoch: 4/10...  Training Step: 708...  Training loss: 1.6534...  0.4032 sec/batch
Epoch: 4/10...  Training Step: 709...  Training loss: 1.6128...  0.4058 sec/batch
Epoch: 4/10...  Training Step: 710...  Training loss: 1.6146...  0.3763 sec/batch
Epoch: 4/10...  Training Step: 711...  Training loss: 1.6023...  0.3641 sec/batch
Epoch: 4/10...  Training Step: 712...  Training loss: 1.5965...  0.3422 sec/batch
Epoch: 4/10...  Training Step: 713...  Training loss: 1.5957...  0.3046 sec/batch
Epoch: 4/10...  Training Step: 714...  Training loss: 1.6540...  0.3079 sec/batch
Epoch: 4/10...  Training Step: 715...  Training loss: 1.6218...  0.3665 sec/batch
Epoch: 4/10...  Training Step: 716...  Training loss: 1.6334...  0.3059 sec/batch
Epoch: 4/10...  Training Step: 717...  Training loss: 1.6466...  0.2853 sec/batch
Epoch: 4/10...  Training Step: 718...  Training loss: 1.6553...  0.3416 sec/batch
Epoch: 4/10...  Training Step: 719...  Training loss: 1.6307...  0.3053 sec/batch
Epoch: 4/10...  Training Step: 720...  Training loss: 1.6256...  0.3535 sec/batch
Epoch: 4/10...  Training Step: 721...  Training loss: 1.6223...  0.3558 sec/batch
Epoch: 4/10...  Training Step: 722...  Training loss: 1.6080...  0.2936 sec/batch
Epoch: 4/10...  Training Step: 723...  Training loss: 1.6123...  0.3342 sec/batch
Epoch: 4/10...  Training Step: 724...  Training loss: 1.6128...  0.3646 sec/batch
Epoch: 4/10...  Training Step: 725...  Training loss: 1.6004...  0.3957 sec/batch
Epoch: 4/10...  Training Step: 726...  Training loss: 1.6267...  0.3132 sec/batch
Epoch: 4/10...  Training Step: 727...  Training loss: 1.6267...  0.3528 sec/batch
Epoch: 4/10...  Training Step: 728...  Training loss: 1.6121...  0.3072 sec/batch
Epoch: 4/10...  Training Step: 729...  Training loss: 1.6025...  0.3387 sec/batch
Epoch: 4/10...  Training Step: 730...  Training loss: 1.6393...  0.3241 sec/batch
Epoch: 4/10...  Training Step: 731...  Training loss: 1.6685...  0.3119 sec/batch
Epoch: 4/10...  Training Step: 732...  Training loss: 1.6247...  0.3022 sec/batch
Epoch: 4/10...  Training Step: 733...  Training loss: 1.6062...  0.3825 sec/batch
Epoch: 4/10...  Training Step: 734...  Training loss: 1.5986...  0.3258 sec/batch
Epoch: 4/10...  Training Step: 735...  Training loss: 1.6228...  0.3246 sec/batch
Epoch: 4/10...  Training Step: 736...  Training loss: 1.6269...  0.4075 sec/batch
Epoch: 4/10...  Training Step: 737...  Training loss: 1.5946...  0.3356 sec/batch
Epoch: 4/10...  Training Step: 738...  Training loss: 1.6184...  0.3791 sec/batch
Epoch: 4/10...  Training Step: 739...  Training loss: 1.6190...  0.3937 sec/batch
Epoch: 4/10...  Training Step: 740...  Training loss: 1.6321...  0.3280 sec/batch
Epoch: 4/10...  Training Step: 741...  Training loss: 1.6215...  0.2830 sec/batch
Epoch: 4/10...  Training Step: 742...  Training loss: 1.6174...  0.3006 sec/batch
Epoch: 4/10...  Training Step: 743...  Training loss: 1.6199...  0.3151 sec/batch
Epoch: 4/10...  Training Step: 744...  Training loss: 1.6177...  0.3791 sec/batch
Epoch: 4/10...  Training Step: 745...  Training loss: 1.6450...  0.3806 sec/batch
Epoch: 4/10...  Training Step: 746...  Training loss: 1.6174...  0.3179 sec/batch
Epoch: 4/10...  Training Step: 747...  Training loss: 1.6139...  0.3376 sec/batch
Epoch: 4/10...  Training Step: 748...  Training loss: 1.5779...  0.3517 sec/batch
Epoch: 4/10...  Training Step: 749...  Training loss: 1.6070...  0.3189 sec/batch
Epoch: 4/10...  Training Step: 750...  Training loss: 1.6037...  0.3473 sec/batch
Epoch: 4/10...  Training Step: 751...  Training loss: 1.6004...  0.3303 sec/batch
Epoch: 4/10...  Training Step: 752...  Training loss: 1.5933...  0.2944 sec/batch
Epoch: 4/10...  Training Step: 753...  Training loss: 1.6059...  0.3353 sec/batch
Epoch: 4/10...  Training Step: 754...  Training loss: 1.5705...  0.2901 sec/batch
Epoch: 4/10...  Training Step: 755...  Training loss: 1.5829...  0.3425 sec/batch
Epoch: 4/10...  Training Step: 756...  Training loss: 1.5647...  0.2884 sec/batch
Epoch: 4/10...  Training Step: 757...  Training loss: 1.5863...  0.3575 sec/batch
Epoch: 4/10...  Training Step: 758...  Training loss: 1.6031...  0.3271 sec/batch
Epoch: 4/10...  Training Step: 759...  Training loss: 1.6033...  0.3305 sec/batch
Epoch: 4/10...  Training Step: 760...  Training loss: 1.5837...  0.3328 sec/batch
Epoch: 4/10...  Training Step: 761...  Training loss: 1.5940...  0.3140 sec/batch
Epoch: 4/10...  Training Step: 762...  Training loss: 1.6097...  0.3031 sec/batch
Epoch: 4/10...  Training Step: 763...  Training loss: 1.5920...  0.3171 sec/batch
Epoch: 4/10...  Training Step: 764...  Training loss: 1.5925...  0.3689 sec/batch
Epoch: 4/10...  Training Step: 765...  Training loss: 1.6189...  0.3008 sec/batch
Epoch: 4/10...  Training Step: 766...  Training loss: 1.5841...  0.3902 sec/batch
Epoch: 4/10...  Training Step: 767...  Training loss: 1.5834...  0.3138 sec/batch
Epoch: 4/10...  Training Step: 768...  Training loss: 1.5637...  0.3269 sec/batch
Epoch: 4/10...  Training Step: 769...  Training loss: 1.5890...  0.2651 sec/batch
Epoch: 4/10...  Training Step: 770...  Training loss: 1.5863...  0.2859 sec/batch
Epoch: 4/10...  Training Step: 771...  Training loss: 1.5486...  0.3852 sec/batch
Epoch: 4/10...  Training Step: 772...  Training loss: 1.5941...  0.3620 sec/batch
Epoch: 4/10...  Training Step: 773...  Training loss: 1.5960...  0.3246 sec/batch
Epoch: 4/10...  Training Step: 774...  Training loss: 1.6172...  0.3536 sec/batch
Epoch: 4/10...  Training Step: 775...  Training loss: 1.5994...  0.3165 sec/batch
Epoch: 4/10...  Training Step: 776...  Training loss: 1.5852...  0.3087 sec/batch
Epoch: 4/10...  Training Step: 777...  Training loss: 1.5907...  0.3555 sec/batch
Epoch: 4/10...  Training Step: 778...  Training loss: 1.5877...  0.3189 sec/batch
Epoch: 4/10...  Training Step: 779...  Training loss: 1.5929...  0.3709 sec/batch
Epoch: 4/10...  Training Step: 780...  Training loss: 1.5311...  0.3843 sec/batch
Epoch: 4/10...  Training Step: 781...  Training loss: 1.5868...  0.3149 sec/batch
Epoch: 4/10...  Training Step: 782...  Training loss: 1.5784...  0.2889 sec/batch
Epoch: 4/10...  Training Step: 783...  Training loss: 1.5706...  0.3162 sec/batch
Epoch: 4/10...  Training Step: 784...  Training loss: 1.5784...  0.2600 sec/batch
Epoch: 4/10...  Training Step: 785...  Training loss: 1.5782...  0.4084 sec/batch
Epoch: 4/10...  Training Step: 786...  Training loss: 1.5867...  0.3388 sec/batch
Epoch: 4/10...  Training Step: 787...  Training loss: 1.6046...  0.3408 sec/batch
Epoch: 4/10...  Training Step: 788...  Training loss: 1.5726...  0.3235 sec/batch
Epoch: 4/10...  Training Step: 789...  Training loss: 1.5845...  0.3247 sec/batch
Epoch: 4/10...  Training Step: 790...  Training loss: 1.5645...  0.2483 sec/batch
Epoch: 4/10...  Training Step: 791...  Training loss: 1.5863...  0.3760 sec/batch
Epoch: 4/10...  Training Step: 792...  Training loss: 1.5770...  0.3005 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 5/10...  Training Step: 793...  Training loss: 1.6502...  0.3516 sec/batch
Epoch: 5/10...  Training Step: 794...  Training loss: 1.6017...  0.3735 sec/batch
Epoch: 5/10...  Training Step: 795...  Training loss: 1.5923...  0.2760 sec/batch
Epoch: 5/10...  Training Step: 796...  Training loss: 1.6021...  0.2916 sec/batch
Epoch: 5/10...  Training Step: 797...  Training loss: 1.5815...  0.3436 sec/batch
Epoch: 5/10...  Training Step: 798...  Training loss: 1.5859...  0.4745 sec/batch
Epoch: 5/10...  Training Step: 799...  Training loss: 1.5674...  0.3324 sec/batch
Epoch: 5/10...  Training Step: 800...  Training loss: 1.6088...  0.4525 sec/batch
Epoch: 5/10...  Training Step: 801...  Training loss: 1.5659...  0.4241 sec/batch
Epoch: 5/10...  Training Step: 802...  Training loss: 1.5837...  0.3178 sec/batch
Epoch: 5/10...  Training Step: 803...  Training loss: 1.5601...  0.3747 sec/batch
Epoch: 5/10...  Training Step: 804...  Training loss: 1.5620...  0.3493 sec/batch
Epoch: 5/10...  Training Step: 805...  Training loss: 1.5818...  0.2786 sec/batch
Epoch: 5/10...  Training Step: 806...  Training loss: 1.5934...  0.3251 sec/batch
Epoch: 5/10...  Training Step: 807...  Training loss: 1.5656...  0.3241 sec/batch
Epoch: 5/10...  Training Step: 808...  Training loss: 1.6142...  0.2773 sec/batch
Epoch: 5/10...  Training Step: 809...  Training loss: 1.5736...  0.3405 sec/batch
Epoch: 5/10...  Training Step: 810...  Training loss: 1.6038...  0.2998 sec/batch
Epoch: 5/10...  Training Step: 811...  Training loss: 1.5748...  0.3248 sec/batch
Epoch: 5/10...  Training Step: 812...  Training loss: 1.5798...  0.3178 sec/batch
Epoch: 5/10...  Training Step: 813...  Training loss: 1.6042...  0.3713 sec/batch
Epoch: 5/10...  Training Step: 814...  Training loss: 1.5697...  0.2956 sec/batch
Epoch: 5/10...  Training Step: 815...  Training loss: 1.5475...  0.3958 sec/batch
Epoch: 5/10...  Training Step: 816...  Training loss: 1.5856...  0.4240 sec/batch
Epoch: 5/10...  Training Step: 817...  Training loss: 1.5828...  0.2887 sec/batch
Epoch: 5/10...  Training Step: 818...  Training loss: 1.5899...  0.3813 sec/batch
Epoch: 5/10...  Training Step: 819...  Training loss: 1.5986...  0.3152 sec/batch
Epoch: 5/10...  Training Step: 820...  Training loss: 1.5886...  0.3152 sec/batch
Epoch: 5/10...  Training Step: 821...  Training loss: 1.5554...  0.3132 sec/batch
Epoch: 5/10...  Training Step: 822...  Training loss: 1.5856...  0.3274 sec/batch
Epoch: 5/10...  Training Step: 823...  Training loss: 1.5537...  0.3662 sec/batch
Epoch: 5/10...  Training Step: 824...  Training loss: 1.5763...  0.3522 sec/batch
Epoch: 5/10...  Training Step: 825...  Training loss: 1.5400...  0.4146 sec/batch
Epoch: 5/10...  Training Step: 826...  Training loss: 1.5874...  0.3604 sec/batch
Epoch: 5/10...  Training Step: 827...  Training loss: 1.5904...  0.3384 sec/batch
Epoch: 5/10...  Training Step: 828...  Training loss: 1.5981...  0.3501 sec/batch
Epoch: 5/10...  Training Step: 829...  Training loss: 1.5977...  0.3480 sec/batch
Epoch: 5/10...  Training Step: 830...  Training loss: 1.5786...  0.3106 sec/batch
Epoch: 5/10...  Training Step: 831...  Training loss: 1.5766...  0.3226 sec/batch
Epoch: 5/10...  Training Step: 832...  Training loss: 1.5938...  0.2800 sec/batch
Epoch: 5/10...  Training Step: 833...  Training loss: 1.5729...  0.2946 sec/batch
Epoch: 5/10...  Training Step: 834...  Training loss: 1.5719...  0.2861 sec/batch
Epoch: 5/10...  Training Step: 835...  Training loss: 1.6021...  0.3577 sec/batch
Epoch: 5/10...  Training Step: 836...  Training loss: 1.5663...  0.3949 sec/batch
Epoch: 5/10...  Training Step: 837...  Training loss: 1.5465...  0.3055 sec/batch
Epoch: 5/10...  Training Step: 838...  Training loss: 1.5608...  0.3027 sec/batch
Epoch: 5/10...  Training Step: 839...  Training loss: 1.5576...  0.4041 sec/batch
Epoch: 5/10...  Training Step: 840...  Training loss: 1.5548...  0.3680 sec/batch
Epoch: 5/10...  Training Step: 841...  Training loss: 1.5974...  0.2882 sec/batch
Epoch: 5/10...  Training Step: 842...  Training loss: 1.5853...  0.3305 sec/batch
Epoch: 5/10...  Training Step: 843...  Training loss: 1.5603...  0.3811 sec/batch
Epoch: 5/10...  Training Step: 844...  Training loss: 1.5555...  0.3299 sec/batch
Epoch: 5/10...  Training Step: 845...  Training loss: 1.5402...  0.3298 sec/batch
Epoch: 5/10...  Training Step: 846...  Training loss: 1.5610...  0.3477 sec/batch
Epoch: 5/10...  Training Step: 847...  Training loss: 1.6131...  0.2830 sec/batch
Epoch: 5/10...  Training Step: 848...  Training loss: 1.5732...  0.3382 sec/batch
Epoch: 5/10...  Training Step: 849...  Training loss: 1.5931...  0.3637 sec/batch
Epoch: 5/10...  Training Step: 850...  Training loss: 1.5890...  0.3295 sec/batch
Epoch: 5/10...  Training Step: 851...  Training loss: 1.5708...  0.3870 sec/batch
Epoch: 5/10...  Training Step: 852...  Training loss: 1.6024...  0.3218 sec/batch
Epoch: 5/10...  Training Step: 853...  Training loss: 1.5537...  0.3163 sec/batch
Epoch: 5/10...  Training Step: 854...  Training loss: 1.5763...  0.3253 sec/batch
Epoch: 5/10...  Training Step: 855...  Training loss: 1.5539...  0.3124 sec/batch
Epoch: 5/10...  Training Step: 856...  Training loss: 1.5399...  0.2907 sec/batch
Epoch: 5/10...  Training Step: 857...  Training loss: 1.5551...  0.3304 sec/batch
Epoch: 5/10...  Training Step: 858...  Training loss: 1.5857...  0.2957 sec/batch
Epoch: 5/10...  Training Step: 859...  Training loss: 1.6132...  0.3609 sec/batch
Epoch: 5/10...  Training Step: 860...  Training loss: 1.5723...  0.3530 sec/batch
Epoch: 5/10...  Training Step: 861...  Training loss: 1.5817...  0.2781 sec/batch
Epoch: 5/10...  Training Step: 862...  Training loss: 1.5662...  0.3576 sec/batch
Epoch: 5/10...  Training Step: 863...  Training loss: 1.5724...  0.3177 sec/batch
Epoch: 5/10...  Training Step: 864...  Training loss: 1.5788...  0.3921 sec/batch
Epoch: 5/10...  Training Step: 865...  Training loss: 1.5502...  0.3493 sec/batch
Epoch: 5/10...  Training Step: 866...  Training loss: 1.5667...  0.3345 sec/batch
Epoch: 5/10...  Training Step: 867...  Training loss: 1.5507...  0.3576 sec/batch
Epoch: 5/10...  Training Step: 868...  Training loss: 1.5875...  0.3361 sec/batch
Epoch: 5/10...  Training Step: 869...  Training loss: 1.5566...  0.2801 sec/batch
Epoch: 5/10...  Training Step: 870...  Training loss: 1.5432...  0.4007 sec/batch
Epoch: 5/10...  Training Step: 871...  Training loss: 1.5453...  0.2969 sec/batch
Epoch: 5/10...  Training Step: 872...  Training loss: 1.5819...  0.3684 sec/batch
Epoch: 5/10...  Training Step: 873...  Training loss: 1.5415...  0.2530 sec/batch
Epoch: 5/10...  Training Step: 874...  Training loss: 1.5557...  0.3056 sec/batch
Epoch: 5/10...  Training Step: 875...  Training loss: 1.6019...  0.2823 sec/batch
Epoch: 5/10...  Training Step: 876...  Training loss: 1.6053...  0.3264 sec/batch
Epoch: 5/10...  Training Step: 877...  Training loss: 1.5540...  0.2892 sec/batch
Epoch: 5/10...  Training Step: 878...  Training loss: 1.5517...  0.3356 sec/batch
Epoch: 5/10...  Training Step: 879...  Training loss: 1.5502...  0.4650 sec/batch
Epoch: 5/10...  Training Step: 880...  Training loss: 1.5756...  0.3284 sec/batch
Epoch: 5/10...  Training Step: 881...  Training loss: 1.5860...  0.3494 sec/batch
Epoch: 5/10...  Training Step: 882...  Training loss: 1.5917...  0.3220 sec/batch
Epoch: 5/10...  Training Step: 883...  Training loss: 1.5371...  0.3280 sec/batch
Epoch: 5/10...  Training Step: 884...  Training loss: 1.5428...  0.4155 sec/batch
Epoch: 5/10...  Training Step: 885...  Training loss: 1.5712...  0.4033 sec/batch
Epoch: 5/10...  Training Step: 886...  Training loss: 1.5762...  0.2995 sec/batch
Epoch: 5/10...  Training Step: 887...  Training loss: 1.5573...  0.3358 sec/batch
Epoch: 5/10...  Training Step: 888...  Training loss: 1.5710...  0.3004 sec/batch
Epoch: 5/10...  Training Step: 889...  Training loss: 1.5702...  0.2855 sec/batch
Epoch: 5/10...  Training Step: 890...  Training loss: 1.6041...  0.5204 sec/batch
Epoch: 5/10...  Training Step: 891...  Training loss: 1.5892...  0.4414 sec/batch
Epoch: 5/10...  Training Step: 892...  Training loss: 1.5928...  0.2878 sec/batch
Epoch: 5/10...  Training Step: 893...  Training loss: 1.5464...  0.4567 sec/batch
Epoch: 5/10...  Training Step: 894...  Training loss: 1.5463...  0.3514 sec/batch
Epoch: 5/10...  Training Step: 895...  Training loss: 1.5556...  0.4236 sec/batch
Epoch: 5/10...  Training Step: 896...  Training loss: 1.5790...  0.3285 sec/batch
Epoch: 5/10...  Training Step: 897...  Training loss: 1.5904...  0.3375 sec/batch
Epoch: 5/10...  Training Step: 898...  Training loss: 1.5535...  0.3199 sec/batch
Epoch: 5/10...  Training Step: 899...  Training loss: 1.5793...  0.3464 sec/batch
Epoch: 5/10...  Training Step: 900...  Training loss: 1.5746...  0.4249 sec/batch
Epoch: 5/10...  Training Step: 901...  Training loss: 1.5759...  0.3628 sec/batch
Epoch: 5/10...  Training Step: 902...  Training loss: 1.5499...  0.4164 sec/batch
Epoch: 5/10...  Training Step: 903...  Training loss: 1.5754...  0.3949 sec/batch
Epoch: 5/10...  Training Step: 904...  Training loss: 1.5625...  0.3546 sec/batch
Epoch: 5/10...  Training Step: 905...  Training loss: 1.5705...  0.4165 sec/batch
Epoch: 5/10...  Training Step: 906...  Training loss: 1.6031...  0.3781 sec/batch
Epoch: 5/10...  Training Step: 907...  Training loss: 1.5650...  0.4431 sec/batch
Epoch: 5/10...  Training Step: 908...  Training loss: 1.5649...  0.3291 sec/batch
Epoch: 5/10...  Training Step: 909...  Training loss: 1.5520...  0.3310 sec/batch
Epoch: 5/10...  Training Step: 910...  Training loss: 1.5508...  0.3178 sec/batch
Epoch: 5/10...  Training Step: 911...  Training loss: 1.5470...  0.3279 sec/batch
Epoch: 5/10...  Training Step: 912...  Training loss: 1.6002...  0.3757 sec/batch
Epoch: 5/10...  Training Step: 913...  Training loss: 1.5732...  0.3848 sec/batch
Epoch: 5/10...  Training Step: 914...  Training loss: 1.5868...  0.3744 sec/batch
Epoch: 5/10...  Training Step: 915...  Training loss: 1.6018...  0.3572 sec/batch
Epoch: 5/10...  Training Step: 916...  Training loss: 1.6078...  0.3245 sec/batch
Epoch: 5/10...  Training Step: 917...  Training loss: 1.5860...  0.4005 sec/batch
Epoch: 5/10...  Training Step: 918...  Training loss: 1.5778...  0.4044 sec/batch
Epoch: 5/10...  Training Step: 919...  Training loss: 1.5753...  0.3907 sec/batch
Epoch: 5/10...  Training Step: 920...  Training loss: 1.5616...  0.3437 sec/batch
Epoch: 5/10...  Training Step: 921...  Training loss: 1.5639...  0.3511 sec/batch
Epoch: 5/10...  Training Step: 922...  Training loss: 1.5689...  0.3944 sec/batch
Epoch: 5/10...  Training Step: 923...  Training loss: 1.5508...  0.3012 sec/batch
Epoch: 5/10...  Training Step: 924...  Training loss: 1.5782...  0.2806 sec/batch
Epoch: 5/10...  Training Step: 925...  Training loss: 1.5802...  0.4615 sec/batch
Epoch: 5/10...  Training Step: 926...  Training loss: 1.5658...  0.2811 sec/batch
Epoch: 5/10...  Training Step: 927...  Training loss: 1.5576...  0.3454 sec/batch
Epoch: 5/10...  Training Step: 928...  Training loss: 1.5928...  0.3439 sec/batch
Epoch: 5/10...  Training Step: 929...  Training loss: 1.6224...  0.3485 sec/batch
Epoch: 5/10...  Training Step: 930...  Training loss: 1.5770...  0.3292 sec/batch
Epoch: 5/10...  Training Step: 931...  Training loss: 1.5576...  0.3226 sec/batch
Epoch: 5/10...  Training Step: 932...  Training loss: 1.5558...  0.3624 sec/batch
Epoch: 5/10...  Training Step: 933...  Training loss: 1.5800...  0.3679 sec/batch
Epoch: 5/10...  Training Step: 934...  Training loss: 1.5826...  0.3308 sec/batch
Epoch: 5/10...  Training Step: 935...  Training loss: 1.5493...  0.3181 sec/batch
Epoch: 5/10...  Training Step: 936...  Training loss: 1.5638...  0.3154 sec/batch
Epoch: 5/10...  Training Step: 937...  Training loss: 1.5687...  0.3349 sec/batch
Epoch: 5/10...  Training Step: 938...  Training loss: 1.5815...  0.4523 sec/batch
Epoch: 5/10...  Training Step: 939...  Training loss: 1.5707...  0.3352 sec/batch
Epoch: 5/10...  Training Step: 940...  Training loss: 1.5674...  0.4381 sec/batch
Epoch: 5/10...  Training Step: 941...  Training loss: 1.5678...  0.3384 sec/batch
Epoch: 5/10...  Training Step: 942...  Training loss: 1.5662...  0.4368 sec/batch
Epoch: 5/10...  Training Step: 943...  Training loss: 1.5924...  0.3569 sec/batch
Epoch: 5/10...  Training Step: 944...  Training loss: 1.5691...  0.3916 sec/batch
Epoch: 5/10...  Training Step: 945...  Training loss: 1.5640...  0.3872 sec/batch
Epoch: 5/10...  Training Step: 946...  Training loss: 1.5326...  0.3219 sec/batch
Epoch: 5/10...  Training Step: 947...  Training loss: 1.5601...  0.4041 sec/batch
Epoch: 5/10...  Training Step: 948...  Training loss: 1.5539...  0.3411 sec/batch
Epoch: 5/10...  Training Step: 949...  Training loss: 1.5532...  0.2809 sec/batch
Epoch: 5/10...  Training Step: 950...  Training loss: 1.5518...  0.3322 sec/batch
Epoch: 5/10...  Training Step: 951...  Training loss: 1.5578...  0.3904 sec/batch
Epoch: 5/10...  Training Step: 952...  Training loss: 1.5278...  0.4382 sec/batch
Epoch: 5/10...  Training Step: 953...  Training loss: 1.5380...  0.4015 sec/batch
Epoch: 5/10...  Training Step: 954...  Training loss: 1.5205...  0.4528 sec/batch
Epoch: 5/10...  Training Step: 955...  Training loss: 1.5419...  0.3854 sec/batch
Epoch: 5/10...  Training Step: 956...  Training loss: 1.5578...  0.4275 sec/batch
Epoch: 5/10...  Training Step: 957...  Training loss: 1.5570...  0.3377 sec/batch
Epoch: 5/10...  Training Step: 958...  Training loss: 1.5407...  0.4255 sec/batch
Epoch: 5/10...  Training Step: 959...  Training loss: 1.5462...  0.3674 sec/batch
Epoch: 5/10...  Training Step: 960...  Training loss: 1.5672...  0.3738 sec/batch
Epoch: 5/10...  Training Step: 961...  Training loss: 1.5417...  0.3281 sec/batch
Epoch: 5/10...  Training Step: 962...  Training loss: 1.5455...  0.4269 sec/batch
Epoch: 5/10...  Training Step: 963...  Training loss: 1.5730...  0.4013 sec/batch
Epoch: 5/10...  Training Step: 964...  Training loss: 1.5349...  0.4215 sec/batch
Epoch: 5/10...  Training Step: 965...  Training loss: 1.5402...  0.4634 sec/batch
Epoch: 5/10...  Training Step: 966...  Training loss: 1.5185...  0.4015 sec/batch
Epoch: 5/10...  Training Step: 967...  Training loss: 1.5490...  0.3668 sec/batch
Epoch: 5/10...  Training Step: 968...  Training loss: 1.5423...  0.4145 sec/batch
Epoch: 5/10...  Training Step: 969...  Training loss: 1.5068...  0.4233 sec/batch
Epoch: 5/10...  Training Step: 970...  Training loss: 1.5517...  0.3624 sec/batch
Epoch: 5/10...  Training Step: 971...  Training loss: 1.5523...  0.3828 sec/batch
Epoch: 5/10...  Training Step: 972...  Training loss: 1.5745...  0.3883 sec/batch
Epoch: 5/10...  Training Step: 973...  Training loss: 1.5562...  0.3578 sec/batch
Epoch: 5/10...  Training Step: 974...  Training loss: 1.5433...  0.3463 sec/batch
Epoch: 5/10...  Training Step: 975...  Training loss: 1.5454...  0.3231 sec/batch
Epoch: 5/10...  Training Step: 976...  Training loss: 1.5439...  0.4171 sec/batch
Epoch: 5/10...  Training Step: 977...  Training loss: 1.5480...  0.4113 sec/batch
Epoch: 5/10...  Training Step: 978...  Training loss: 1.4913...  0.5088 sec/batch
Epoch: 5/10...  Training Step: 979...  Training loss: 1.5444...  0.3278 sec/batch
Epoch: 5/10...  Training Step: 980...  Training loss: 1.5351...  0.3960 sec/batch
Epoch: 5/10...  Training Step: 981...  Training loss: 1.5327...  0.3886 sec/batch
Epoch: 5/10...  Training Step: 982...  Training loss: 1.5358...  0.3397 sec/batch
Epoch: 5/10...  Training Step: 983...  Training loss: 1.5371...  0.3684 sec/batch
Epoch: 5/10...  Training Step: 984...  Training loss: 1.5459...  0.3829 sec/batch
Epoch: 5/10...  Training Step: 985...  Training loss: 1.5632...  0.3773 sec/batch
Epoch: 5/10...  Training Step: 986...  Training loss: 1.5302...  0.3483 sec/batch
Epoch: 5/10...  Training Step: 987...  Training loss: 1.5399...  0.4108 sec/batch
Epoch: 5/10...  Training Step: 988...  Training loss: 1.5219...  0.4017 sec/batch
Epoch: 5/10...  Training Step: 989...  Training loss: 1.5428...  0.3932 sec/batch
Epoch: 5/10...  Training Step: 990...  Training loss: 1.5358...  0.3663 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 6/10...  Training Step: 991...  Training loss: 1.6097...  0.3444 sec/batch
Epoch: 6/10...  Training Step: 992...  Training loss: 1.5606...  0.4679 sec/batch
Epoch: 6/10...  Training Step: 993...  Training loss: 1.5534...  0.4354 sec/batch
Epoch: 6/10...  Training Step: 994...  Training loss: 1.5615...  0.3275 sec/batch
Epoch: 6/10...  Training Step: 995...  Training loss: 1.5386...  0.3411 sec/batch
Epoch: 6/10...  Training Step: 996...  Training loss: 1.5431...  0.3185 sec/batch
Epoch: 6/10...  Training Step: 997...  Training loss: 1.5273...  0.3595 sec/batch
Epoch: 6/10...  Training Step: 998...  Training loss: 1.5658...  0.4018 sec/batch
Epoch: 6/10...  Training Step: 999...  Training loss: 1.5262...  0.3070 sec/batch
Epoch: 6/10...  Training Step: 1000...  Training loss: 1.5445...  0.3372 sec/batch
Epoch: 6/10...  Training Step: 1001...  Training loss: 1.5197...  0.4368 sec/batch
Epoch: 6/10...  Training Step: 1002...  Training loss: 1.5234...  0.3326 sec/batch
Epoch: 6/10...  Training Step: 1003...  Training loss: 1.5423...  0.3642 sec/batch
Epoch: 6/10...  Training Step: 1004...  Training loss: 1.5520...  0.4734 sec/batch
Epoch: 6/10...  Training Step: 1005...  Training loss: 1.5261...  0.3400 sec/batch
Epoch: 6/10...  Training Step: 1006...  Training loss: 1.5738...  0.4113 sec/batch
Epoch: 6/10...  Training Step: 1007...  Training loss: 1.5312...  0.3041 sec/batch
Epoch: 6/10...  Training Step: 1008...  Training loss: 1.5614...  0.3300 sec/batch
Epoch: 6/10...  Training Step: 1009...  Training loss: 1.5329...  0.3261 sec/batch
Epoch: 6/10...  Training Step: 1010...  Training loss: 1.5396...  0.3576 sec/batch
Epoch: 6/10...  Training Step: 1011...  Training loss: 1.5666...  0.3653 sec/batch
Epoch: 6/10...  Training Step: 1012...  Training loss: 1.5324...  0.3131 sec/batch
Epoch: 6/10...  Training Step: 1013...  Training loss: 1.5091...  0.3057 sec/batch
Epoch: 6/10...  Training Step: 1014...  Training loss: 1.5441...  0.3811 sec/batch
Epoch: 6/10...  Training Step: 1015...  Training loss: 1.5433...  0.3447 sec/batch
Epoch: 6/10...  Training Step: 1016...  Training loss: 1.5529...  0.3517 sec/batch
Epoch: 6/10...  Training Step: 1017...  Training loss: 1.5560...  0.3960 sec/batch
Epoch: 6/10...  Training Step: 1018...  Training loss: 1.5451...  0.4396 sec/batch
Epoch: 6/10...  Training Step: 1019...  Training loss: 1.5158...  0.3672 sec/batch
Epoch: 6/10...  Training Step: 1020...  Training loss: 1.5415...  0.3390 sec/batch
Epoch: 6/10...  Training Step: 1021...  Training loss: 1.5129...  0.3731 sec/batch
Epoch: 6/10...  Training Step: 1022...  Training loss: 1.5368...  0.3311 sec/batch
Epoch: 6/10...  Training Step: 1023...  Training loss: 1.5030...  0.3601 sec/batch
Epoch: 6/10...  Training Step: 1024...  Training loss: 1.5416...  0.3277 sec/batch
Epoch: 6/10...  Training Step: 1025...  Training loss: 1.5528...  0.3536 sec/batch
Epoch: 6/10...  Training Step: 1026...  Training loss: 1.5537...  0.3628 sec/batch
Epoch: 6/10...  Training Step: 1027...  Training loss: 1.5573...  0.2781 sec/batch
Epoch: 6/10...  Training Step: 1028...  Training loss: 1.5412...  0.4044 sec/batch
Epoch: 6/10...  Training Step: 1029...  Training loss: 1.5360...  0.3381 sec/batch
Epoch: 6/10...  Training Step: 1030...  Training loss: 1.5529...  0.4101 sec/batch
Epoch: 6/10...  Training Step: 1031...  Training loss: 1.5343...  0.3979 sec/batch
Epoch: 6/10...  Training Step: 1032...  Training loss: 1.5327...  0.3336 sec/batch
Epoch: 6/10...  Training Step: 1033...  Training loss: 1.5646...  0.3137 sec/batch
Epoch: 6/10...  Training Step: 1034...  Training loss: 1.5180...  0.4260 sec/batch
Epoch: 6/10...  Training Step: 1035...  Training loss: 1.5082...  0.4522 sec/batch
Epoch: 6/10...  Training Step: 1036...  Training loss: 1.5225...  0.3729 sec/batch
Epoch: 6/10...  Training Step: 1037...  Training loss: 1.5190...  0.3945 sec/batch
Epoch: 6/10...  Training Step: 1038...  Training loss: 1.5176...  0.3830 sec/batch
Epoch: 6/10...  Training Step: 1039...  Training loss: 1.5569...  0.4142 sec/batch
Epoch: 6/10...  Training Step: 1040...  Training loss: 1.5487...  0.3375 sec/batch
Epoch: 6/10...  Training Step: 1041...  Training loss: 1.5205...  0.3284 sec/batch
Epoch: 6/10...  Training Step: 1042...  Training loss: 1.5161...  0.4375 sec/batch
Epoch: 6/10...  Training Step: 1043...  Training loss: 1.4973...  0.3828 sec/batch
Epoch: 6/10...  Training Step: 1044...  Training loss: 1.5225...  0.3471 sec/batch
Epoch: 6/10...  Training Step: 1045...  Training loss: 1.5740...  0.2964 sec/batch
Epoch: 6/10...  Training Step: 1046...  Training loss: 1.5366...  0.4348 sec/batch
Epoch: 6/10...  Training Step: 1047...  Training loss: 1.5533...  0.4306 sec/batch
Epoch: 6/10...  Training Step: 1048...  Training loss: 1.5500...  0.4525 sec/batch
Epoch: 6/10...  Training Step: 1049...  Training loss: 1.5285...  0.4074 sec/batch
Epoch: 6/10...  Training Step: 1050...  Training loss: 1.5660...  0.3633 sec/batch
Epoch: 6/10...  Training Step: 1051...  Training loss: 1.5126...  0.3322 sec/batch
Epoch: 6/10...  Training Step: 1052...  Training loss: 1.5358...  0.2945 sec/batch
Epoch: 6/10...  Training Step: 1053...  Training loss: 1.5128...  0.3773 sec/batch
Epoch: 6/10...  Training Step: 1054...  Training loss: 1.5012...  0.4383 sec/batch
Epoch: 6/10...  Training Step: 1055...  Training loss: 1.5213...  0.4354 sec/batch
Epoch: 6/10...  Training Step: 1056...  Training loss: 1.5491...  0.3873 sec/batch
Epoch: 6/10...  Training Step: 1057...  Training loss: 1.5735...  0.3612 sec/batch
Epoch: 6/10...  Training Step: 1058...  Training loss: 1.5348...  0.4533 sec/batch
Epoch: 6/10...  Training Step: 1059...  Training loss: 1.5453...  0.4314 sec/batch
Epoch: 6/10...  Training Step: 1060...  Training loss: 1.5301...  0.3587 sec/batch
Epoch: 6/10...  Training Step: 1061...  Training loss: 1.5384...  0.3424 sec/batch
Epoch: 6/10...  Training Step: 1062...  Training loss: 1.5385...  0.3755 sec/batch
Epoch: 6/10...  Training Step: 1063...  Training loss: 1.5085...  0.4241 sec/batch
Epoch: 6/10...  Training Step: 1064...  Training loss: 1.5267...  0.3308 sec/batch
Epoch: 6/10...  Training Step: 1065...  Training loss: 1.5128...  0.3154 sec/batch
Epoch: 6/10...  Training Step: 1066...  Training loss: 1.5514...  0.3684 sec/batch
Epoch: 6/10...  Training Step: 1067...  Training loss: 1.5209...  0.3576 sec/batch
Epoch: 6/10...  Training Step: 1068...  Training loss: 1.5076...  0.3663 sec/batch
Epoch: 6/10...  Training Step: 1069...  Training loss: 1.5114...  0.4021 sec/batch
Epoch: 6/10...  Training Step: 1070...  Training loss: 1.5466...  0.3848 sec/batch
Epoch: 6/10...  Training Step: 1071...  Training loss: 1.5043...  0.3481 sec/batch
Epoch: 6/10...  Training Step: 1072...  Training loss: 1.5226...  0.3750 sec/batch
Epoch: 6/10...  Training Step: 1073...  Training loss: 1.5690...  0.3924 sec/batch
Epoch: 6/10...  Training Step: 1074...  Training loss: 1.5701...  0.4355 sec/batch
Epoch: 6/10...  Training Step: 1075...  Training loss: 1.5194...  0.3774 sec/batch
Epoch: 6/10...  Training Step: 1076...  Training loss: 1.5167...  0.3910 sec/batch
Epoch: 6/10...  Training Step: 1077...  Training loss: 1.5171...  0.4117 sec/batch
Epoch: 6/10...  Training Step: 1078...  Training loss: 1.5406...  0.2977 sec/batch
Epoch: 6/10...  Training Step: 1079...  Training loss: 1.5526...  0.3797 sec/batch
Epoch: 6/10...  Training Step: 1080...  Training loss: 1.5569...  0.3675 sec/batch
Epoch: 6/10...  Training Step: 1081...  Training loss: 1.5023...  0.2882 sec/batch
Epoch: 6/10...  Training Step: 1082...  Training loss: 1.5090...  0.3182 sec/batch
Epoch: 6/10...  Training Step: 1083...  Training loss: 1.5388...  0.3138 sec/batch
Epoch: 6/10...  Training Step: 1084...  Training loss: 1.5403...  0.3389 sec/batch
Epoch: 6/10...  Training Step: 1085...  Training loss: 1.5216...  0.3391 sec/batch
Epoch: 6/10...  Training Step: 1086...  Training loss: 1.5348...  0.4399 sec/batch
Epoch: 6/10...  Training Step: 1087...  Training loss: 1.5339...  0.3594 sec/batch
Epoch: 6/10...  Training Step: 1088...  Training loss: 1.5676...  0.4097 sec/batch
Epoch: 6/10...  Training Step: 1089...  Training loss: 1.5529...  0.3587 sec/batch
Epoch: 6/10...  Training Step: 1090...  Training loss: 1.5557...  0.3059 sec/batch
Epoch: 6/10...  Training Step: 1091...  Training loss: 1.5138...  0.3367 sec/batch
Epoch: 6/10...  Training Step: 1092...  Training loss: 1.5107...  0.3862 sec/batch
Epoch: 6/10...  Training Step: 1093...  Training loss: 1.5194...  0.3876 sec/batch
Epoch: 6/10...  Training Step: 1094...  Training loss: 1.5452...  0.3208 sec/batch
Epoch: 6/10...  Training Step: 1095...  Training loss: 1.5544...  0.3957 sec/batch
Epoch: 6/10...  Training Step: 1096...  Training loss: 1.5176...  0.3871 sec/batch
Epoch: 6/10...  Training Step: 1097...  Training loss: 1.5433...  0.3988 sec/batch
Epoch: 6/10...  Training Step: 1098...  Training loss: 1.5394...  0.2954 sec/batch
Epoch: 6/10...  Training Step: 1099...  Training loss: 1.5385...  0.3063 sec/batch
Epoch: 6/10...  Training Step: 1100...  Training loss: 1.5144...  0.4400 sec/batch
Epoch: 6/10...  Training Step: 1101...  Training loss: 1.5393...  0.3259 sec/batch
Epoch: 6/10...  Training Step: 1102...  Training loss: 1.5269...  0.3076 sec/batch
Epoch: 6/10...  Training Step: 1103...  Training loss: 1.5353...  0.2785 sec/batch
Epoch: 6/10...  Training Step: 1104...  Training loss: 1.5688...  0.3519 sec/batch
Epoch: 6/10...  Training Step: 1105...  Training loss: 1.5321...  0.3351 sec/batch
Epoch: 6/10...  Training Step: 1106...  Training loss: 1.5295...  0.3360 sec/batch
Epoch: 6/10...  Training Step: 1107...  Training loss: 1.5179...  0.3497 sec/batch
Epoch: 6/10...  Training Step: 1108...  Training loss: 1.5223...  0.4783 sec/batch
Epoch: 6/10...  Training Step: 1109...  Training loss: 1.5116...  0.3532 sec/batch
Epoch: 6/10...  Training Step: 1110...  Training loss: 1.5655...  0.4536 sec/batch
Epoch: 6/10...  Training Step: 1111...  Training loss: 1.5435...  0.3416 sec/batch
Epoch: 6/10...  Training Step: 1112...  Training loss: 1.5525...  0.3873 sec/batch
Epoch: 6/10...  Training Step: 1113...  Training loss: 1.5688...  0.3986 sec/batch
Epoch: 6/10...  Training Step: 1114...  Training loss: 1.5723...  0.3511 sec/batch
Epoch: 6/10...  Training Step: 1115...  Training loss: 1.5521...  0.3081 sec/batch
Epoch: 6/10...  Training Step: 1116...  Training loss: 1.5437...  0.3616 sec/batch
Epoch: 6/10...  Training Step: 1117...  Training loss: 1.5401...  0.4065 sec/batch
Epoch: 6/10...  Training Step: 1118...  Training loss: 1.5287...  0.2999 sec/batch
Epoch: 6/10...  Training Step: 1119...  Training loss: 1.5265...  0.3580 sec/batch
Epoch: 6/10...  Training Step: 1120...  Training loss: 1.5390...  0.3158 sec/batch
Epoch: 6/10...  Training Step: 1121...  Training loss: 1.5160...  0.4577 sec/batch
Epoch: 6/10...  Training Step: 1122...  Training loss: 1.5424...  0.3317 sec/batch
Epoch: 6/10...  Training Step: 1123...  Training loss: 1.5455...  0.3505 sec/batch
Epoch: 6/10...  Training Step: 1124...  Training loss: 1.5305...  0.3340 sec/batch
Epoch: 6/10...  Training Step: 1125...  Training loss: 1.5271...  0.3316 sec/batch
Epoch: 6/10...  Training Step: 1126...  Training loss: 1.5569...  0.3646 sec/batch
Epoch: 6/10...  Training Step: 1127...  Training loss: 1.5929...  0.3169 sec/batch
Epoch: 6/10...  Training Step: 1128...  Training loss: 1.5434...  0.3212 sec/batch
Epoch: 6/10...  Training Step: 1129...  Training loss: 1.5249...  0.3625 sec/batch
Epoch: 6/10...  Training Step: 1130...  Training loss: 1.5277...  0.4109 sec/batch
Epoch: 6/10...  Training Step: 1131...  Training loss: 1.5484...  0.3683 sec/batch
Epoch: 6/10...  Training Step: 1132...  Training loss: 1.5504...  0.3872 sec/batch
Epoch: 6/10...  Training Step: 1133...  Training loss: 1.5182...  0.4083 sec/batch
Epoch: 6/10...  Training Step: 1134...  Training loss: 1.5253...  0.2999 sec/batch
Epoch: 6/10...  Training Step: 1135...  Training loss: 1.5343...  0.2911 sec/batch
Epoch: 6/10...  Training Step: 1136...  Training loss: 1.5517...  0.3386 sec/batch
Epoch: 6/10...  Training Step: 1137...  Training loss: 1.5361...  0.3007 sec/batch
Epoch: 6/10...  Training Step: 1138...  Training loss: 1.5333...  0.3922 sec/batch
Epoch: 6/10...  Training Step: 1139...  Training loss: 1.5298...  0.3903 sec/batch
Epoch: 6/10...  Training Step: 1140...  Training loss: 1.5301...  0.3574 sec/batch
Epoch: 6/10...  Training Step: 1141...  Training loss: 1.5542...  0.3271 sec/batch
Epoch: 6/10...  Training Step: 1142...  Training loss: 1.5333...  0.3088 sec/batch
Epoch: 6/10...  Training Step: 1143...  Training loss: 1.5340...  0.4280 sec/batch
Epoch: 6/10...  Training Step: 1144...  Training loss: 1.4983...  0.4070 sec/batch
Epoch: 6/10...  Training Step: 1145...  Training loss: 1.5221...  0.4267 sec/batch
Epoch: 6/10...  Training Step: 1146...  Training loss: 1.5216...  0.3398 sec/batch
Epoch: 6/10...  Training Step: 1147...  Training loss: 1.5184...  0.3849 sec/batch
Epoch: 6/10...  Training Step: 1148...  Training loss: 1.5214...  0.3556 sec/batch
Epoch: 6/10...  Training Step: 1149...  Training loss: 1.5237...  0.3770 sec/batch
Epoch: 6/10...  Training Step: 1150...  Training loss: 1.4928...  0.3483 sec/batch
Epoch: 6/10...  Training Step: 1151...  Training loss: 1.5023...  0.3233 sec/batch
Epoch: 6/10...  Training Step: 1152...  Training loss: 1.4885...  0.3937 sec/batch
Epoch: 6/10...  Training Step: 1153...  Training loss: 1.5035...  0.3747 sec/batch
Epoch: 6/10...  Training Step: 1154...  Training loss: 1.5228...  0.3151 sec/batch
Epoch: 6/10...  Training Step: 1155...  Training loss: 1.5264...  0.3284 sec/batch
Epoch: 6/10...  Training Step: 1156...  Training loss: 1.5089...  0.3833 sec/batch
Epoch: 6/10...  Training Step: 1157...  Training loss: 1.5131...  0.3810 sec/batch
Epoch: 6/10...  Training Step: 1158...  Training loss: 1.5372...  0.4395 sec/batch
Epoch: 6/10...  Training Step: 1159...  Training loss: 1.5065...  0.4286 sec/batch
Epoch: 6/10...  Training Step: 1160...  Training loss: 1.5137...  0.4516 sec/batch
Epoch: 6/10...  Training Step: 1161...  Training loss: 1.5427...  0.4326 sec/batch
Epoch: 6/10...  Training Step: 1162...  Training loss: 1.4998...  0.3957 sec/batch
Epoch: 6/10...  Training Step: 1163...  Training loss: 1.5107...  0.4812 sec/batch
Epoch: 6/10...  Training Step: 1164...  Training loss: 1.4866...  0.3842 sec/batch
Epoch: 6/10...  Training Step: 1165...  Training loss: 1.5225...  0.3705 sec/batch
Epoch: 6/10...  Training Step: 1166...  Training loss: 1.5099...  0.3628 sec/batch
Epoch: 6/10...  Training Step: 1167...  Training loss: 1.4735...  0.3695 sec/batch
Epoch: 6/10...  Training Step: 1168...  Training loss: 1.5208...  0.3727 sec/batch
Epoch: 6/10...  Training Step: 1169...  Training loss: 1.5224...  0.4238 sec/batch
Epoch: 6/10...  Training Step: 1170...  Training loss: 1.5402...  0.4304 sec/batch
Epoch: 6/10...  Training Step: 1171...  Training loss: 1.5261...  0.3214 sec/batch
Epoch: 6/10...  Training Step: 1172...  Training loss: 1.5126...  0.3807 sec/batch
Epoch: 6/10...  Training Step: 1173...  Training loss: 1.5136...  0.3733 sec/batch
Epoch: 6/10...  Training Step: 1174...  Training loss: 1.5143...  0.4571 sec/batch
Epoch: 6/10...  Training Step: 1175...  Training loss: 1.5165...  0.4146 sec/batch
Epoch: 6/10...  Training Step: 1176...  Training loss: 1.4616...  0.3739 sec/batch
Epoch: 6/10...  Training Step: 1177...  Training loss: 1.5146...  0.4047 sec/batch
Epoch: 6/10...  Training Step: 1178...  Training loss: 1.5022...  0.3324 sec/batch
Epoch: 6/10...  Training Step: 1179...  Training loss: 1.5055...  0.3433 sec/batch
Epoch: 6/10...  Training Step: 1180...  Training loss: 1.5042...  0.2905 sec/batch
Epoch: 6/10...  Training Step: 1181...  Training loss: 1.5069...  0.3926 sec/batch
Epoch: 6/10...  Training Step: 1182...  Training loss: 1.5180...  0.3849 sec/batch
Epoch: 6/10...  Training Step: 1183...  Training loss: 1.5339...  0.3988 sec/batch
Epoch: 6/10...  Training Step: 1184...  Training loss: 1.5005...  0.4310 sec/batch
Epoch: 6/10...  Training Step: 1185...  Training loss: 1.5115...  0.3740 sec/batch
Epoch: 6/10...  Training Step: 1186...  Training loss: 1.4916...  0.3307 sec/batch
Epoch: 6/10...  Training Step: 1187...  Training loss: 1.5146...  0.4969 sec/batch
Epoch: 6/10...  Training Step: 1188...  Training loss: 1.5042...  0.4013 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 7/10...  Training Step: 1189...  Training loss: 1.5797...  0.3615 sec/batch
Epoch: 7/10...  Training Step: 1190...  Training loss: 1.5324...  0.4059 sec/batch
Epoch: 7/10...  Training Step: 1191...  Training loss: 1.5243...  0.3579 sec/batch
Epoch: 7/10...  Training Step: 1192...  Training loss: 1.5324...  0.3029 sec/batch
Epoch: 7/10...  Training Step: 1193...  Training loss: 1.5075...  0.2754 sec/batch
Epoch: 7/10...  Training Step: 1194...  Training loss: 1.5151...  0.4000 sec/batch
Epoch: 7/10...  Training Step: 1195...  Training loss: 1.5006...  0.3976 sec/batch
Epoch: 7/10...  Training Step: 1196...  Training loss: 1.5343...  0.3878 sec/batch
Epoch: 7/10...  Training Step: 1197...  Training loss: 1.4978...  0.3940 sec/batch
Epoch: 7/10...  Training Step: 1198...  Training loss: 1.5159...  0.4088 sec/batch
Epoch: 7/10...  Training Step: 1199...  Training loss: 1.4876...  0.4043 sec/batch
Epoch: 7/10...  Training Step: 1200...  Training loss: 1.4935...  0.4104 sec/batch
Epoch: 7/10...  Training Step: 1201...  Training loss: 1.5164...  0.3707 sec/batch
Epoch: 7/10...  Training Step: 1202...  Training loss: 1.5201...  0.3980 sec/batch
Epoch: 7/10...  Training Step: 1203...  Training loss: 1.4971...  0.3526 sec/batch
Epoch: 7/10...  Training Step: 1204...  Training loss: 1.5457...  0.4124 sec/batch
Epoch: 7/10...  Training Step: 1205...  Training loss: 1.5008...  0.3269 sec/batch
Epoch: 7/10...  Training Step: 1206...  Training loss: 1.5302...  0.3458 sec/batch
Epoch: 7/10...  Training Step: 1207...  Training loss: 1.5024...  0.3999 sec/batch
Epoch: 7/10...  Training Step: 1208...  Training loss: 1.5108...  0.3524 sec/batch
Epoch: 7/10...  Training Step: 1209...  Training loss: 1.5379...  0.4795 sec/batch
Epoch: 7/10...  Training Step: 1210...  Training loss: 1.5044...  0.3573 sec/batch
Epoch: 7/10...  Training Step: 1211...  Training loss: 1.4799...  0.3969 sec/batch
Epoch: 7/10...  Training Step: 1212...  Training loss: 1.5125...  0.3859 sec/batch
Epoch: 7/10...  Training Step: 1213...  Training loss: 1.5150...  0.4279 sec/batch
Epoch: 7/10...  Training Step: 1214...  Training loss: 1.5232...  0.3665 sec/batch
Epoch: 7/10...  Training Step: 1215...  Training loss: 1.5236...  0.3848 sec/batch
Epoch: 7/10...  Training Step: 1216...  Training loss: 1.5152...  0.3580 sec/batch
Epoch: 7/10...  Training Step: 1217...  Training loss: 1.4887...  0.2957 sec/batch
Epoch: 7/10...  Training Step: 1218...  Training loss: 1.5100...  0.3020 sec/batch
Epoch: 7/10...  Training Step: 1219...  Training loss: 1.4834...  0.3408 sec/batch
Epoch: 7/10...  Training Step: 1220...  Training loss: 1.5089...  0.4274 sec/batch
Epoch: 7/10...  Training Step: 1221...  Training loss: 1.4766...  0.3528 sec/batch
Epoch: 7/10...  Training Step: 1222...  Training loss: 1.5131...  0.3479 sec/batch
Epoch: 7/10...  Training Step: 1223...  Training loss: 1.5249...  0.3289 sec/batch
Epoch: 7/10...  Training Step: 1224...  Training loss: 1.5245...  0.3431 sec/batch
Epoch: 7/10...  Training Step: 1225...  Training loss: 1.5291...  0.3812 sec/batch
Epoch: 7/10...  Training Step: 1226...  Training loss: 1.5121...  0.4137 sec/batch
Epoch: 7/10...  Training Step: 1227...  Training loss: 1.5079...  0.3923 sec/batch
Epoch: 7/10...  Training Step: 1228...  Training loss: 1.5242...  0.3869 sec/batch
Epoch: 7/10...  Training Step: 1229...  Training loss: 1.5058...  0.3719 sec/batch
Epoch: 7/10...  Training Step: 1230...  Training loss: 1.5032...  0.3277 sec/batch
Epoch: 7/10...  Training Step: 1231...  Training loss: 1.5359...  0.3341 sec/batch
Epoch: 7/10...  Training Step: 1232...  Training loss: 1.4841...  0.4212 sec/batch
Epoch: 7/10...  Training Step: 1233...  Training loss: 1.4786...  0.3777 sec/batch
Epoch: 7/10...  Training Step: 1234...  Training loss: 1.4920...  0.3761 sec/batch
Epoch: 7/10...  Training Step: 1235...  Training loss: 1.4859...  0.3378 sec/batch
Epoch: 7/10...  Training Step: 1236...  Training loss: 1.4913...  0.3717 sec/batch
Epoch: 7/10...  Training Step: 1237...  Training loss: 1.5255...  0.3719 sec/batch
Epoch: 7/10...  Training Step: 1238...  Training loss: 1.5209...  0.4263 sec/batch
Epoch: 7/10...  Training Step: 1239...  Training loss: 1.4873...  0.3848 sec/batch
Epoch: 7/10...  Training Step: 1240...  Training loss: 1.4863...  0.3580 sec/batch
Epoch: 7/10...  Training Step: 1241...  Training loss: 1.4684...  0.3851 sec/batch
Epoch: 7/10...  Training Step: 1242...  Training loss: 1.4937...  0.3679 sec/batch
Epoch: 7/10...  Training Step: 1243...  Training loss: 1.5444...  0.3617 sec/batch
Epoch: 7/10...  Training Step: 1244...  Training loss: 1.5088...  0.3316 sec/batch
Epoch: 7/10...  Training Step: 1245...  Training loss: 1.5221...  0.3357 sec/batch
Epoch: 7/10...  Training Step: 1246...  Training loss: 1.5223...  0.4583 sec/batch
Epoch: 7/10...  Training Step: 1247...  Training loss: 1.4989...  0.3391 sec/batch
Epoch: 7/10...  Training Step: 1248...  Training loss: 1.5371...  0.4328 sec/batch
Epoch: 7/10...  Training Step: 1249...  Training loss: 1.4818...  0.4583 sec/batch
Epoch: 7/10...  Training Step: 1250...  Training loss: 1.5045...  0.4172 sec/batch
Epoch: 7/10...  Training Step: 1251...  Training loss: 1.4823...  0.3776 sec/batch
Epoch: 7/10...  Training Step: 1252...  Training loss: 1.4704...  0.4388 sec/batch
Epoch: 7/10...  Training Step: 1253...  Training loss: 1.4967...  0.3933 sec/batch
Epoch: 7/10...  Training Step: 1254...  Training loss: 1.5223...  0.3842 sec/batch
Epoch: 7/10...  Training Step: 1255...  Training loss: 1.5458...  0.3589 sec/batch
Epoch: 7/10...  Training Step: 1256...  Training loss: 1.5064...  0.3197 sec/batch
Epoch: 7/10...  Training Step: 1257...  Training loss: 1.5130...  0.3109 sec/batch
Epoch: 7/10...  Training Step: 1258...  Training loss: 1.5024...  0.3381 sec/batch
Epoch: 7/10...  Training Step: 1259...  Training loss: 1.5140...  0.3597 sec/batch
Epoch: 7/10...  Training Step: 1260...  Training loss: 1.5085...  0.4159 sec/batch
Epoch: 7/10...  Training Step: 1261...  Training loss: 1.4793...  0.4046 sec/batch
Epoch: 7/10...  Training Step: 1262...  Training loss: 1.4975...  0.4254 sec/batch
Epoch: 7/10...  Training Step: 1263...  Training loss: 1.4855...  0.3527 sec/batch
Epoch: 7/10...  Training Step: 1264...  Training loss: 1.5233...  0.3716 sec/batch
Epoch: 7/10...  Training Step: 1265...  Training loss: 1.4968...  0.3293 sec/batch
Epoch: 7/10...  Training Step: 1266...  Training loss: 1.4808...  0.3520 sec/batch
Epoch: 7/10...  Training Step: 1267...  Training loss: 1.4846...  0.3605 sec/batch
Epoch: 7/10...  Training Step: 1268...  Training loss: 1.5202...  0.3678 sec/batch
Epoch: 7/10...  Training Step: 1269...  Training loss: 1.4764...  0.3301 sec/batch
Epoch: 7/10...  Training Step: 1270...  Training loss: 1.4977...  0.3314 sec/batch
Epoch: 7/10...  Training Step: 1271...  Training loss: 1.5443...  0.4346 sec/batch
Epoch: 7/10...  Training Step: 1272...  Training loss: 1.5437...  0.3984 sec/batch
Epoch: 7/10...  Training Step: 1273...  Training loss: 1.4953...  0.3552 sec/batch
Epoch: 7/10...  Training Step: 1274...  Training loss: 1.4923...  0.3397 sec/batch
Epoch: 7/10...  Training Step: 1275...  Training loss: 1.4907...  0.4365 sec/batch
Epoch: 7/10...  Training Step: 1276...  Training loss: 1.5132...  0.4304 sec/batch
Epoch: 7/10...  Training Step: 1277...  Training loss: 1.5272...  0.4189 sec/batch
Epoch: 7/10...  Training Step: 1278...  Training loss: 1.5321...  0.3955 sec/batch
Epoch: 7/10...  Training Step: 1279...  Training loss: 1.4752...  0.3440 sec/batch
Epoch: 7/10...  Training Step: 1280...  Training loss: 1.4827...  0.3698 sec/batch
Epoch: 7/10...  Training Step: 1281...  Training loss: 1.5133...  0.3208 sec/batch
Epoch: 7/10...  Training Step: 1282...  Training loss: 1.5128...  0.3397 sec/batch
Epoch: 7/10...  Training Step: 1283...  Training loss: 1.4951...  0.3755 sec/batch
Epoch: 7/10...  Training Step: 1284...  Training loss: 1.5070...  0.3839 sec/batch
Epoch: 7/10...  Training Step: 1285...  Training loss: 1.5095...  0.4453 sec/batch
Epoch: 7/10...  Training Step: 1286...  Training loss: 1.5428...  0.4135 sec/batch
Epoch: 7/10...  Training Step: 1287...  Training loss: 1.5253...  0.3676 sec/batch
Epoch: 7/10...  Training Step: 1288...  Training loss: 1.5273...  0.4150 sec/batch
Epoch: 7/10...  Training Step: 1289...  Training loss: 1.4919...  0.4033 sec/batch
Epoch: 7/10...  Training Step: 1290...  Training loss: 1.4837...  0.4568 sec/batch
Epoch: 7/10...  Training Step: 1291...  Training loss: 1.4916...  0.4219 sec/batch
Epoch: 7/10...  Training Step: 1292...  Training loss: 1.5200...  0.3394 sec/batch
Epoch: 7/10...  Training Step: 1293...  Training loss: 1.5272...  0.3757 sec/batch
Epoch: 7/10...  Training Step: 1294...  Training loss: 1.4897...  0.3296 sec/batch
Epoch: 7/10...  Training Step: 1295...  Training loss: 1.5171...  0.3061 sec/batch
Epoch: 7/10...  Training Step: 1296...  Training loss: 1.5127...  0.3710 sec/batch
Epoch: 7/10...  Training Step: 1297...  Training loss: 1.5117...  0.3442 sec/batch
Epoch: 7/10...  Training Step: 1298...  Training loss: 1.4875...  0.3459 sec/batch
Epoch: 7/10...  Training Step: 1299...  Training loss: 1.5113...  0.3637 sec/batch
Epoch: 7/10...  Training Step: 1300...  Training loss: 1.5011...  0.4016 sec/batch
Epoch: 7/10...  Training Step: 1301...  Training loss: 1.5040...  0.3668 sec/batch
Epoch: 7/10...  Training Step: 1302...  Training loss: 1.5426...  0.4702 sec/batch
Epoch: 7/10...  Training Step: 1303...  Training loss: 1.5031...  0.4488 sec/batch
Epoch: 7/10...  Training Step: 1304...  Training loss: 1.5043...  0.3879 sec/batch
Epoch: 7/10...  Training Step: 1305...  Training loss: 1.4921...  0.3429 sec/batch
Epoch: 7/10...  Training Step: 1306...  Training loss: 1.4953...  0.3875 sec/batch
Epoch: 7/10...  Training Step: 1307...  Training loss: 1.4866...  0.3582 sec/batch
Epoch: 7/10...  Training Step: 1308...  Training loss: 1.5335...  0.3367 sec/batch
Epoch: 7/10...  Training Step: 1309...  Training loss: 1.5203...  0.3447 sec/batch
Epoch: 7/10...  Training Step: 1310...  Training loss: 1.5288...  0.4509 sec/batch
Epoch: 7/10...  Training Step: 1311...  Training loss: 1.5408...  0.4629 sec/batch
Epoch: 7/10...  Training Step: 1312...  Training loss: 1.5446...  0.4279 sec/batch
Epoch: 7/10...  Training Step: 1313...  Training loss: 1.5266...  0.3831 sec/batch
Epoch: 7/10...  Training Step: 1314...  Training loss: 1.5181...  0.4115 sec/batch
Epoch: 7/10...  Training Step: 1315...  Training loss: 1.5116...  0.4205 sec/batch
Epoch: 7/10...  Training Step: 1316...  Training loss: 1.5042...  0.4717 sec/batch
Epoch: 7/10...  Training Step: 1317...  Training loss: 1.4994...  0.3730 sec/batch
Epoch: 7/10...  Training Step: 1318...  Training loss: 1.5124...  0.4103 sec/batch
Epoch: 7/10...  Training Step: 1319...  Training loss: 1.4911...  0.3781 sec/batch
Epoch: 7/10...  Training Step: 1320...  Training loss: 1.5136...  0.3232 sec/batch
Epoch: 7/10...  Training Step: 1321...  Training loss: 1.5186...  0.3307 sec/batch
Epoch: 7/10...  Training Step: 1322...  Training loss: 1.5007...  0.4082 sec/batch
Epoch: 7/10...  Training Step: 1323...  Training loss: 1.5028...  0.3951 sec/batch
Epoch: 7/10...  Training Step: 1324...  Training loss: 1.5271...  0.3558 sec/batch
Epoch: 7/10...  Training Step: 1325...  Training loss: 1.5682...  0.3960 sec/batch
Epoch: 7/10...  Training Step: 1326...  Training loss: 1.5195...  0.3508 sec/batch
Epoch: 7/10...  Training Step: 1327...  Training loss: 1.4983...  0.4174 sec/batch
Epoch: 7/10...  Training Step: 1328...  Training loss: 1.5012...  0.4392 sec/batch
Epoch: 7/10...  Training Step: 1329...  Training loss: 1.5241...  0.4209 sec/batch
Epoch: 7/10...  Training Step: 1330...  Training loss: 1.5234...  0.3085 sec/batch
Epoch: 7/10...  Training Step: 1331...  Training loss: 1.4917...  0.3716 sec/batch
Epoch: 7/10...  Training Step: 1332...  Training loss: 1.4961...  0.3605 sec/batch
Epoch: 7/10...  Training Step: 1333...  Training loss: 1.5046...  0.3151 sec/batch
Epoch: 7/10...  Training Step: 1334...  Training loss: 1.5277...  0.3216 sec/batch
Epoch: 7/10...  Training Step: 1335...  Training loss: 1.5109...  0.3931 sec/batch
Epoch: 7/10...  Training Step: 1336...  Training loss: 1.5044...  0.5150 sec/batch
Epoch: 7/10...  Training Step: 1337...  Training loss: 1.5007...  0.4124 sec/batch
Epoch: 7/10...  Training Step: 1338...  Training loss: 1.5027...  0.3742 sec/batch
Epoch: 7/10...  Training Step: 1339...  Training loss: 1.5257...  0.4048 sec/batch
Epoch: 7/10...  Training Step: 1340...  Training loss: 1.5071...  0.4331 sec/batch
Epoch: 7/10...  Training Step: 1341...  Training loss: 1.5120...  0.3611 sec/batch
Epoch: 7/10...  Training Step: 1342...  Training loss: 1.4748...  0.3617 sec/batch
Epoch: 7/10...  Training Step: 1343...  Training loss: 1.4938...  0.3607 sec/batch
Epoch: 7/10...  Training Step: 1344...  Training loss: 1.4969...  0.3730 sec/batch
Epoch: 7/10...  Training Step: 1345...  Training loss: 1.4919...  0.3904 sec/batch
Epoch: 7/10...  Training Step: 1346...  Training loss: 1.4983...  0.3425 sec/batch
Epoch: 7/10...  Training Step: 1347...  Training loss: 1.4992...  0.3140 sec/batch
Epoch: 7/10...  Training Step: 1348...  Training loss: 1.4683...  0.4004 sec/batch
Epoch: 7/10...  Training Step: 1349...  Training loss: 1.4753...  0.4107 sec/batch
Epoch: 7/10...  Training Step: 1350...  Training loss: 1.4640...  0.4376 sec/batch
Epoch: 7/10...  Training Step: 1351...  Training loss: 1.4776...  0.4717 sec/batch
Epoch: 7/10...  Training Step: 1352...  Training loss: 1.4963...  0.4133 sec/batch
Epoch: 7/10...  Training Step: 1353...  Training loss: 1.5024...  0.3522 sec/batch
Epoch: 7/10...  Training Step: 1354...  Training loss: 1.4833...  0.3649 sec/batch
Epoch: 7/10...  Training Step: 1355...  Training loss: 1.4868...  0.3700 sec/batch
Epoch: 7/10...  Training Step: 1356...  Training loss: 1.5155...  0.3875 sec/batch
Epoch: 7/10...  Training Step: 1357...  Training loss: 1.4812...  0.4644 sec/batch
Epoch: 7/10...  Training Step: 1358...  Training loss: 1.4882...  0.3566 sec/batch
Epoch: 7/10...  Training Step: 1359...  Training loss: 1.5166...  0.3858 sec/batch
Epoch: 7/10...  Training Step: 1360...  Training loss: 1.4728...  0.3418 sec/batch
Epoch: 7/10...  Training Step: 1361...  Training loss: 1.4864...  0.3029 sec/batch
Epoch: 7/10...  Training Step: 1362...  Training loss: 1.4615...  0.3517 sec/batch
Epoch: 7/10...  Training Step: 1363...  Training loss: 1.5006...  0.4413 sec/batch
Epoch: 7/10...  Training Step: 1364...  Training loss: 1.4861...  0.3730 sec/batch
Epoch: 7/10...  Training Step: 1365...  Training loss: 1.4500...  0.3803 sec/batch
Epoch: 7/10...  Training Step: 1366...  Training loss: 1.4956...  0.3827 sec/batch
Epoch: 7/10...  Training Step: 1367...  Training loss: 1.4977...  0.4582 sec/batch
Epoch: 7/10...  Training Step: 1368...  Training loss: 1.5131...  0.4215 sec/batch
Epoch: 7/10...  Training Step: 1369...  Training loss: 1.5022...  0.3723 sec/batch
Epoch: 7/10...  Training Step: 1370...  Training loss: 1.4907...  0.4482 sec/batch
Epoch: 7/10...  Training Step: 1371...  Training loss: 1.4907...  0.4259 sec/batch
Epoch: 7/10...  Training Step: 1372...  Training loss: 1.4881...  0.3670 sec/batch
Epoch: 7/10...  Training Step: 1373...  Training loss: 1.4926...  0.3357 sec/batch
Epoch: 7/10...  Training Step: 1374...  Training loss: 1.4396...  0.3169 sec/batch
Epoch: 7/10...  Training Step: 1375...  Training loss: 1.4909...  0.2905 sec/batch
Epoch: 7/10...  Training Step: 1376...  Training loss: 1.4771...  0.3513 sec/batch
Epoch: 7/10...  Training Step: 1377...  Training loss: 1.4842...  0.3535 sec/batch
Epoch: 7/10...  Training Step: 1378...  Training loss: 1.4809...  0.4205 sec/batch
Epoch: 7/10...  Training Step: 1379...  Training loss: 1.4845...  0.3243 sec/batch
Epoch: 7/10...  Training Step: 1380...  Training loss: 1.4959...  0.4291 sec/batch
Epoch: 7/10...  Training Step: 1381...  Training loss: 1.5128...  0.3235 sec/batch
Epoch: 7/10...  Training Step: 1382...  Training loss: 1.4795...  0.3861 sec/batch
Epoch: 7/10...  Training Step: 1383...  Training loss: 1.4886...  0.4183 sec/batch
Epoch: 7/10...  Training Step: 1384...  Training loss: 1.4682...  0.3673 sec/batch
Epoch: 7/10...  Training Step: 1385...  Training loss: 1.4937...  0.3790 sec/batch
Epoch: 7/10...  Training Step: 1386...  Training loss: 1.4793...  0.3267 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 8/10...  Training Step: 1387...  Training loss: 1.5512...  0.3343 sec/batch
Epoch: 8/10...  Training Step: 1388...  Training loss: 1.5113...  0.3218 sec/batch
Epoch: 8/10...  Training Step: 1389...  Training loss: 1.5020...  0.3998 sec/batch
Epoch: 8/10...  Training Step: 1390...  Training loss: 1.5091...  0.3909 sec/batch
Epoch: 8/10...  Training Step: 1391...  Training loss: 1.4848...  0.3256 sec/batch
Epoch: 8/10...  Training Step: 1392...  Training loss: 1.4933...  0.3220 sec/batch
Epoch: 8/10...  Training Step: 1393...  Training loss: 1.4810...  0.4430 sec/batch
Epoch: 8/10...  Training Step: 1394...  Training loss: 1.5108...  0.3429 sec/batch
Epoch: 8/10...  Training Step: 1395...  Training loss: 1.4758...  0.3153 sec/batch
Epoch: 8/10...  Training Step: 1396...  Training loss: 1.4922...  0.3610 sec/batch
Epoch: 8/10...  Training Step: 1397...  Training loss: 1.4636...  0.3679 sec/batch
Epoch: 8/10...  Training Step: 1398...  Training loss: 1.4703...  0.4030 sec/batch
Epoch: 8/10...  Training Step: 1399...  Training loss: 1.4944...  0.4113 sec/batch
Epoch: 8/10...  Training Step: 1400...  Training loss: 1.4976...  0.3448 sec/batch
Epoch: 8/10...  Training Step: 1401...  Training loss: 1.4743...  0.3515 sec/batch
Epoch: 8/10...  Training Step: 1402...  Training loss: 1.5220...  0.4197 sec/batch
Epoch: 8/10...  Training Step: 1403...  Training loss: 1.4778...  0.3473 sec/batch
Epoch: 8/10...  Training Step: 1404...  Training loss: 1.5061...  0.3767 sec/batch
Epoch: 8/10...  Training Step: 1405...  Training loss: 1.4787...  0.3778 sec/batch
Epoch: 8/10...  Training Step: 1406...  Training loss: 1.4881...  0.3241 sec/batch
Epoch: 8/10...  Training Step: 1407...  Training loss: 1.5163...  0.3490 sec/batch
Epoch: 8/10...  Training Step: 1408...  Training loss: 1.4837...  0.3898 sec/batch
Epoch: 8/10...  Training Step: 1409...  Training loss: 1.4612...  0.4481 sec/batch
Epoch: 8/10...  Training Step: 1410...  Training loss: 1.4904...  0.3538 sec/batch
Epoch: 8/10...  Training Step: 1411...  Training loss: 1.4945...  0.3624 sec/batch
Epoch: 8/10...  Training Step: 1412...  Training loss: 1.5007...  0.3248 sec/batch
Epoch: 8/10...  Training Step: 1413...  Training loss: 1.4977...  0.3791 sec/batch
Epoch: 8/10...  Training Step: 1414...  Training loss: 1.4930...  0.3048 sec/batch
Epoch: 8/10...  Training Step: 1415...  Training loss: 1.4677...  0.4523 sec/batch
Epoch: 8/10...  Training Step: 1416...  Training loss: 1.4853...  0.3918 sec/batch
Epoch: 8/10...  Training Step: 1417...  Training loss: 1.4607...  0.3422 sec/batch
Epoch: 8/10...  Training Step: 1418...  Training loss: 1.4894...  0.3989 sec/batch
Epoch: 8/10...  Training Step: 1419...  Training loss: 1.4559...  0.3446 sec/batch
Epoch: 8/10...  Training Step: 1420...  Training loss: 1.4923...  0.3857 sec/batch
Epoch: 8/10...  Training Step: 1421...  Training loss: 1.5036...  0.3812 sec/batch
Epoch: 8/10...  Training Step: 1422...  Training loss: 1.5044...  0.4097 sec/batch
Epoch: 8/10...  Training Step: 1423...  Training loss: 1.5086...  0.3887 sec/batch
Epoch: 8/10...  Training Step: 1424...  Training loss: 1.4885...  0.3944 sec/batch
Epoch: 8/10...  Training Step: 1425...  Training loss: 1.4877...  0.3711 sec/batch
Epoch: 8/10...  Training Step: 1426...  Training loss: 1.5022...  0.3596 sec/batch
Epoch: 8/10...  Training Step: 1427...  Training loss: 1.4844...  0.3276 sec/batch
Epoch: 8/10...  Training Step: 1428...  Training loss: 1.4819...  0.4093 sec/batch
Epoch: 8/10...  Training Step: 1429...  Training loss: 1.5154...  0.3611 sec/batch
Epoch: 8/10...  Training Step: 1430...  Training loss: 1.4600...  0.4067 sec/batch
Epoch: 8/10...  Training Step: 1431...  Training loss: 1.4583...  0.4672 sec/batch
Epoch: 8/10...  Training Step: 1432...  Training loss: 1.4673...  0.4439 sec/batch
Epoch: 8/10...  Training Step: 1433...  Training loss: 1.4632...  0.3760 sec/batch
Epoch: 8/10...  Training Step: 1434...  Training loss: 1.4721...  0.4203 sec/batch
Epoch: 8/10...  Training Step: 1435...  Training loss: 1.5026...  0.3521 sec/batch
Epoch: 8/10...  Training Step: 1436...  Training loss: 1.4979...  0.3837 sec/batch
Epoch: 8/10...  Training Step: 1437...  Training loss: 1.4639...  0.3955 sec/batch
Epoch: 8/10...  Training Step: 1438...  Training loss: 1.4637...  0.3699 sec/batch
Epoch: 8/10...  Training Step: 1439...  Training loss: 1.4474...  0.3441 sec/batch
Epoch: 8/10...  Training Step: 1440...  Training loss: 1.4734...  0.3330 sec/batch
Epoch: 8/10...  Training Step: 1441...  Training loss: 1.5210...  0.3843 sec/batch
Epoch: 8/10...  Training Step: 1442...  Training loss: 1.4888...  0.4291 sec/batch
Epoch: 8/10...  Training Step: 1443...  Training loss: 1.4988...  0.3556 sec/batch
Epoch: 8/10...  Training Step: 1444...  Training loss: 1.5012...  0.3361 sec/batch
Epoch: 8/10...  Training Step: 1445...  Training loss: 1.4758...  0.4744 sec/batch
Epoch: 8/10...  Training Step: 1446...  Training loss: 1.5164...  0.4175 sec/batch
Epoch: 8/10...  Training Step: 1447...  Training loss: 1.4582...  0.3923 sec/batch
Epoch: 8/10...  Training Step: 1448...  Training loss: 1.4843...  0.3685 sec/batch
Epoch: 8/10...  Training Step: 1449...  Training loss: 1.4611...  0.4563 sec/batch
Epoch: 8/10...  Training Step: 1450...  Training loss: 1.4472...  0.3581 sec/batch
Epoch: 8/10...  Training Step: 1451...  Training loss: 1.4776...  0.3748 sec/batch
Epoch: 8/10...  Training Step: 1452...  Training loss: 1.5011...  0.3817 sec/batch
Epoch: 8/10...  Training Step: 1453...  Training loss: 1.5236...  0.3141 sec/batch
Epoch: 8/10...  Training Step: 1454...  Training loss: 1.4840...  0.4216 sec/batch
Epoch: 8/10...  Training Step: 1455...  Training loss: 1.4883...  0.4198 sec/batch
Epoch: 8/10...  Training Step: 1456...  Training loss: 1.4822...  0.3625 sec/batch
Epoch: 8/10...  Training Step: 1457...  Training loss: 1.4960...  0.4504 sec/batch
Epoch: 8/10...  Training Step: 1458...  Training loss: 1.4868...  0.3173 sec/batch
Epoch: 8/10...  Training Step: 1459...  Training loss: 1.4576...  0.4261 sec/batch
Epoch: 8/10...  Training Step: 1460...  Training loss: 1.4771...  0.3875 sec/batch
Epoch: 8/10...  Training Step: 1461...  Training loss: 1.4667...  0.3874 sec/batch
Epoch: 8/10...  Training Step: 1462...  Training loss: 1.5005...  0.3858 sec/batch
Epoch: 8/10...  Training Step: 1463...  Training loss: 1.4776...  0.3898 sec/batch
Epoch: 8/10...  Training Step: 1464...  Training loss: 1.4608...  0.3719 sec/batch
Epoch: 8/10...  Training Step: 1465...  Training loss: 1.4644...  0.3099 sec/batch
Epoch: 8/10...  Training Step: 1466...  Training loss: 1.5005...  0.3346 sec/batch
Epoch: 8/10...  Training Step: 1467...  Training loss: 1.4531...  0.4019 sec/batch
Epoch: 8/10...  Training Step: 1468...  Training loss: 1.4796...  0.4181 sec/batch
Epoch: 8/10...  Training Step: 1469...  Training loss: 1.5251...  0.3632 sec/batch
Epoch: 8/10...  Training Step: 1470...  Training loss: 1.5239...  0.4768 sec/batch
Epoch: 8/10...  Training Step: 1471...  Training loss: 1.4778...  0.3289 sec/batch
Epoch: 8/10...  Training Step: 1472...  Training loss: 1.4726...  0.3981 sec/batch
Epoch: 8/10...  Training Step: 1473...  Training loss: 1.4700...  0.3367 sec/batch
Epoch: 8/10...  Training Step: 1474...  Training loss: 1.4928...  0.4054 sec/batch
Epoch: 8/10...  Training Step: 1475...  Training loss: 1.5090...  0.4322 sec/batch
Epoch: 8/10...  Training Step: 1476...  Training loss: 1.5114...  0.3494 sec/batch
Epoch: 8/10...  Training Step: 1477...  Training loss: 1.4534...  0.3142 sec/batch
Epoch: 8/10...  Training Step: 1478...  Training loss: 1.4628...  0.3208 sec/batch
Epoch: 8/10...  Training Step: 1479...  Training loss: 1.4930...  0.4026 sec/batch
Epoch: 8/10...  Training Step: 1480...  Training loss: 1.4909...  0.4589 sec/batch
Epoch: 8/10...  Training Step: 1481...  Training loss: 1.4752...  0.4392 sec/batch
Epoch: 8/10...  Training Step: 1482...  Training loss: 1.4861...  0.3242 sec/batch
Epoch: 8/10...  Training Step: 1483...  Training loss: 1.4919...  0.3541 sec/batch
Epoch: 8/10...  Training Step: 1484...  Training loss: 1.5230...  0.3481 sec/batch
Epoch: 8/10...  Training Step: 1485...  Training loss: 1.5046...  0.4082 sec/batch
Epoch: 8/10...  Training Step: 1486...  Training loss: 1.5042...  0.4180 sec/batch
Epoch: 8/10...  Training Step: 1487...  Training loss: 1.4736...  0.3979 sec/batch
Epoch: 8/10...  Training Step: 1488...  Training loss: 1.4634...  0.3691 sec/batch
Epoch: 8/10...  Training Step: 1489...  Training loss: 1.4697...  0.3540 sec/batch
Epoch: 8/10...  Training Step: 1490...  Training loss: 1.4988...  0.3039 sec/batch
Epoch: 8/10...  Training Step: 1491...  Training loss: 1.5052...  0.3326 sec/batch
Epoch: 8/10...  Training Step: 1492...  Training loss: 1.4657...  0.3888 sec/batch
Epoch: 8/10...  Training Step: 1493...  Training loss: 1.4986...  0.4282 sec/batch
Epoch: 8/10...  Training Step: 1494...  Training loss: 1.4928...  0.3922 sec/batch
Epoch: 8/10...  Training Step: 1495...  Training loss: 1.4899...  0.3454 sec/batch
Epoch: 8/10...  Training Step: 1496...  Training loss: 1.4687...  0.3713 sec/batch
Epoch: 8/10...  Training Step: 1497...  Training loss: 1.4905...  0.3181 sec/batch
Epoch: 8/10...  Training Step: 1498...  Training loss: 1.4815...  0.4235 sec/batch
Epoch: 8/10...  Training Step: 1499...  Training loss: 1.4800...  0.4114 sec/batch
Epoch: 8/10...  Training Step: 1500...  Training loss: 1.5214...  0.3888 sec/batch
Epoch: 8/10...  Training Step: 1501...  Training loss: 1.4826...  0.3520 sec/batch
Epoch: 8/10...  Training Step: 1502...  Training loss: 1.4835...  0.3567 sec/batch
Epoch: 8/10...  Training Step: 1503...  Training loss: 1.4731...  0.3408 sec/batch
Epoch: 8/10...  Training Step: 1504...  Training loss: 1.4731...  0.3329 sec/batch
Epoch: 8/10...  Training Step: 1505...  Training loss: 1.4674...  0.3771 sec/batch
Epoch: 8/10...  Training Step: 1506...  Training loss: 1.5131...  0.3950 sec/batch
Epoch: 8/10...  Training Step: 1507...  Training loss: 1.5010...  0.4141 sec/batch
Epoch: 8/10...  Training Step: 1508...  Training loss: 1.5108...  0.3654 sec/batch
Epoch: 8/10...  Training Step: 1509...  Training loss: 1.5186...  0.3873 sec/batch
Epoch: 8/10...  Training Step: 1510...  Training loss: 1.5223...  0.4564 sec/batch
Epoch: 8/10...  Training Step: 1511...  Training loss: 1.5057...  0.3929 sec/batch
Epoch: 8/10...  Training Step: 1512...  Training loss: 1.4994...  0.4232 sec/batch
Epoch: 8/10...  Training Step: 1513...  Training loss: 1.4918...  0.4216 sec/batch
Epoch: 8/10...  Training Step: 1514...  Training loss: 1.4858...  0.4051 sec/batch
Epoch: 8/10...  Training Step: 1515...  Training loss: 1.4812...  0.4142 sec/batch
Epoch: 8/10...  Training Step: 1516...  Training loss: 1.4921...  0.4186 sec/batch
Epoch: 8/10...  Training Step: 1517...  Training loss: 1.4720...  0.2976 sec/batch
Epoch: 8/10...  Training Step: 1518...  Training loss: 1.4934...  0.3163 sec/batch
Epoch: 8/10...  Training Step: 1519...  Training loss: 1.4982...  0.3569 sec/batch
Epoch: 8/10...  Training Step: 1520...  Training loss: 1.4793...  0.3802 sec/batch
Epoch: 8/10...  Training Step: 1521...  Training loss: 1.4841...  0.4257 sec/batch
Epoch: 8/10...  Training Step: 1522...  Training loss: 1.5076...  0.4012 sec/batch
Epoch: 8/10...  Training Step: 1523...  Training loss: 1.5496...  0.3213 sec/batch
Epoch: 8/10...  Training Step: 1524...  Training loss: 1.5011...  0.4732 sec/batch
Epoch: 8/10...  Training Step: 1525...  Training loss: 1.4769...  0.3713 sec/batch
Epoch: 8/10...  Training Step: 1526...  Training loss: 1.4806...  0.3847 sec/batch
Epoch: 8/10...  Training Step: 1527...  Training loss: 1.5036...  0.3719 sec/batch
Epoch: 8/10...  Training Step: 1528...  Training loss: 1.5045...  0.4041 sec/batch
Epoch: 8/10...  Training Step: 1529...  Training loss: 1.4724...  0.4059 sec/batch
Epoch: 8/10...  Training Step: 1530...  Training loss: 1.4750...  0.4606 sec/batch
Epoch: 8/10...  Training Step: 1531...  Training loss: 1.4828...  0.3793 sec/batch
Epoch: 8/10...  Training Step: 1532...  Training loss: 1.5080...  0.3857 sec/batch
Epoch: 8/10...  Training Step: 1533...  Training loss: 1.4912...  0.4214 sec/batch
Epoch: 8/10...  Training Step: 1534...  Training loss: 1.4805...  0.4459 sec/batch
Epoch: 8/10...  Training Step: 1535...  Training loss: 1.4791...  0.3868 sec/batch
Epoch: 8/10...  Training Step: 1536...  Training loss: 1.4821...  0.3910 sec/batch
Epoch: 8/10...  Training Step: 1537...  Training loss: 1.5044...  0.3462 sec/batch
Epoch: 8/10...  Training Step: 1538...  Training loss: 1.4879...  0.4089 sec/batch
Epoch: 8/10...  Training Step: 1539...  Training loss: 1.4940...  0.3448 sec/batch
Epoch: 8/10...  Training Step: 1540...  Training loss: 1.4562...  0.3801 sec/batch
Epoch: 8/10...  Training Step: 1541...  Training loss: 1.4733...  0.3172 sec/batch
Epoch: 8/10...  Training Step: 1542...  Training loss: 1.4781...  0.3177 sec/batch
Epoch: 8/10...  Training Step: 1543...  Training loss: 1.4715...  0.3578 sec/batch
Epoch: 8/10...  Training Step: 1544...  Training loss: 1.4832...  0.4346 sec/batch
Epoch: 8/10...  Training Step: 1545...  Training loss: 1.4822...  0.3896 sec/batch
Epoch: 8/10...  Training Step: 1546...  Training loss: 1.4496...  0.3861 sec/batch
Epoch: 8/10...  Training Step: 1547...  Training loss: 1.4550...  0.3271 sec/batch
Epoch: 8/10...  Training Step: 1548...  Training loss: 1.4442...  0.3399 sec/batch
Epoch: 8/10...  Training Step: 1549...  Training loss: 1.4609...  0.4278 sec/batch
Epoch: 8/10...  Training Step: 1550...  Training loss: 1.4779...  0.3491 sec/batch
Epoch: 8/10...  Training Step: 1551...  Training loss: 1.4836...  0.3832 sec/batch
Epoch: 8/10...  Training Step: 1552...  Training loss: 1.4648...  0.4940 sec/batch
Epoch: 8/10...  Training Step: 1553...  Training loss: 1.4673...  0.3766 sec/batch
Epoch: 8/10...  Training Step: 1554...  Training loss: 1.4986...  0.3207 sec/batch
Epoch: 8/10...  Training Step: 1555...  Training loss: 1.4636...  0.3577 sec/batch
Epoch: 8/10...  Training Step: 1556...  Training loss: 1.4697...  0.4225 sec/batch
Epoch: 8/10...  Training Step: 1557...  Training loss: 1.4977...  0.3987 sec/batch
Epoch: 8/10...  Training Step: 1558...  Training loss: 1.4534...  0.3977 sec/batch
Epoch: 8/10...  Training Step: 1559...  Training loss: 1.4679...  0.3312 sec/batch
Epoch: 8/10...  Training Step: 1560...  Training loss: 1.4440...  0.4430 sec/batch
Epoch: 8/10...  Training Step: 1561...  Training loss: 1.4838...  0.3360 sec/batch
Epoch: 8/10...  Training Step: 1562...  Training loss: 1.4696...  0.3513 sec/batch
Epoch: 8/10...  Training Step: 1563...  Training loss: 1.4329...  0.4454 sec/batch
Epoch: 8/10...  Training Step: 1564...  Training loss: 1.4754...  0.3965 sec/batch
Epoch: 8/10...  Training Step: 1565...  Training loss: 1.4786...  0.3579 sec/batch
Epoch: 8/10...  Training Step: 1566...  Training loss: 1.4922...  0.3596 sec/batch
Epoch: 8/10...  Training Step: 1567...  Training loss: 1.4819...  0.3259 sec/batch
Epoch: 8/10...  Training Step: 1568...  Training loss: 1.4732...  0.3347 sec/batch
Epoch: 8/10...  Training Step: 1569...  Training loss: 1.4739...  0.4017 sec/batch
Epoch: 8/10...  Training Step: 1570...  Training loss: 1.4683...  0.3612 sec/batch
Epoch: 8/10...  Training Step: 1571...  Training loss: 1.4746...  0.3732 sec/batch
Epoch: 8/10...  Training Step: 1572...  Training loss: 1.4226...  0.3738 sec/batch
Epoch: 8/10...  Training Step: 1573...  Training loss: 1.4736...  0.3360 sec/batch
Epoch: 8/10...  Training Step: 1574...  Training loss: 1.4571...  0.3861 sec/batch
Epoch: 8/10...  Training Step: 1575...  Training loss: 1.4664...  0.4109 sec/batch
Epoch: 8/10...  Training Step: 1576...  Training loss: 1.4646...  0.3342 sec/batch
Epoch: 8/10...  Training Step: 1577...  Training loss: 1.4671...  0.3660 sec/batch
Epoch: 8/10...  Training Step: 1578...  Training loss: 1.4773...  0.3501 sec/batch
Epoch: 8/10...  Training Step: 1579...  Training loss: 1.4981...  0.3701 sec/batch
Epoch: 8/10...  Training Step: 1580...  Training loss: 1.4646...  0.3338 sec/batch
Epoch: 8/10...  Training Step: 1581...  Training loss: 1.4707...  0.3143 sec/batch
Epoch: 8/10...  Training Step: 1582...  Training loss: 1.4524...  0.3100 sec/batch
Epoch: 8/10...  Training Step: 1583...  Training loss: 1.4757...  0.3731 sec/batch
Epoch: 8/10...  Training Step: 1584...  Training loss: 1.4612...  0.4286 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 9/10...  Training Step: 1585...  Training loss: 1.5265...  0.4682 sec/batch
Epoch: 9/10...  Training Step: 1586...  Training loss: 1.4955...  0.4284 sec/batch
Epoch: 9/10...  Training Step: 1587...  Training loss: 1.4851...  0.4309 sec/batch
Epoch: 9/10...  Training Step: 1588...  Training loss: 1.4887...  0.3544 sec/batch
Epoch: 9/10...  Training Step: 1589...  Training loss: 1.4666...  0.3643 sec/batch
Epoch: 9/10...  Training Step: 1590...  Training loss: 1.4756...  0.3793 sec/batch
Epoch: 9/10...  Training Step: 1591...  Training loss: 1.4640...  0.3447 sec/batch
Epoch: 9/10...  Training Step: 1592...  Training loss: 1.4935...  0.3615 sec/batch
Epoch: 9/10...  Training Step: 1593...  Training loss: 1.4584...  0.3629 sec/batch
Epoch: 9/10...  Training Step: 1594...  Training loss: 1.4720...  0.3766 sec/batch
Epoch: 9/10...  Training Step: 1595...  Training loss: 1.4460...  0.3848 sec/batch
Epoch: 9/10...  Training Step: 1596...  Training loss: 1.4538...  0.3618 sec/batch
Epoch: 9/10...  Training Step: 1597...  Training loss: 1.4762...  0.3774 sec/batch
Epoch: 9/10...  Training Step: 1598...  Training loss: 1.4797...  0.3839 sec/batch
Epoch: 9/10...  Training Step: 1599...  Training loss: 1.4585...  0.3774 sec/batch
Epoch: 9/10...  Training Step: 1600...  Training loss: 1.5022...  0.3862 sec/batch
Epoch: 9/10...  Training Step: 1601...  Training loss: 1.4589...  0.3603 sec/batch
Epoch: 9/10...  Training Step: 1602...  Training loss: 1.4877...  0.3738 sec/batch
Epoch: 9/10...  Training Step: 1603...  Training loss: 1.4609...  0.3455 sec/batch
Epoch: 9/10...  Training Step: 1604...  Training loss: 1.4703...  0.3676 sec/batch
Epoch: 9/10...  Training Step: 1605...  Training loss: 1.4981...  0.3602 sec/batch
Epoch: 9/10...  Training Step: 1606...  Training loss: 1.4685...  0.3544 sec/batch
Epoch: 9/10...  Training Step: 1607...  Training loss: 1.4459...  0.3203 sec/batch
Epoch: 9/10...  Training Step: 1608...  Training loss: 1.4727...  0.3228 sec/batch
Epoch: 9/10...  Training Step: 1609...  Training loss: 1.4791...  0.3451 sec/batch
Epoch: 9/10...  Training Step: 1610...  Training loss: 1.4840...  0.4033 sec/batch
Epoch: 9/10...  Training Step: 1611...  Training loss: 1.4797...  0.3999 sec/batch
Epoch: 9/10...  Training Step: 1612...  Training loss: 1.4758...  0.3952 sec/batch
Epoch: 9/10...  Training Step: 1613...  Training loss: 1.4509...  0.3378 sec/batch
Epoch: 9/10...  Training Step: 1614...  Training loss: 1.4659...  0.4307 sec/batch
Epoch: 9/10...  Training Step: 1615...  Training loss: 1.4431...  0.3358 sec/batch
Epoch: 9/10...  Training Step: 1616...  Training loss: 1.4731...  0.3090 sec/batch
Epoch: 9/10...  Training Step: 1617...  Training loss: 1.4388...  0.3486 sec/batch
Epoch: 9/10...  Training Step: 1618...  Training loss: 1.4756...  0.4234 sec/batch
Epoch: 9/10...  Training Step: 1619...  Training loss: 1.4869...  0.3557 sec/batch
Epoch: 9/10...  Training Step: 1620...  Training loss: 1.4881...  0.3423 sec/batch
Epoch: 9/10...  Training Step: 1621...  Training loss: 1.4930...  0.3481 sec/batch
Epoch: 9/10...  Training Step: 1622...  Training loss: 1.4708...  0.4357 sec/batch
Epoch: 9/10...  Training Step: 1623...  Training loss: 1.4711...  0.3804 sec/batch
Epoch: 9/10...  Training Step: 1624...  Training loss: 1.4848...  0.4117 sec/batch
Epoch: 9/10...  Training Step: 1625...  Training loss: 1.4676...  0.3911 sec/batch
Epoch: 9/10...  Training Step: 1626...  Training loss: 1.4657...  0.4515 sec/batch
Epoch: 9/10...  Training Step: 1627...  Training loss: 1.4994...  0.4691 sec/batch
Epoch: 9/10...  Training Step: 1628...  Training loss: 1.4424...  0.3817 sec/batch
Epoch: 9/10...  Training Step: 1629...  Training loss: 1.4431...  0.4100 sec/batch
Epoch: 9/10...  Training Step: 1630...  Training loss: 1.4491...  0.4184 sec/batch
Epoch: 9/10...  Training Step: 1631...  Training loss: 1.4466...  0.4253 sec/batch
Epoch: 9/10...  Training Step: 1632...  Training loss: 1.4571...  0.3394 sec/batch
Epoch: 9/10...  Training Step: 1633...  Training loss: 1.4854...  0.3538 sec/batch
Epoch: 9/10...  Training Step: 1634...  Training loss: 1.4790...  0.4468 sec/batch
Epoch: 9/10...  Training Step: 1635...  Training loss: 1.4450...  0.3551 sec/batch
Epoch: 9/10...  Training Step: 1636...  Training loss: 1.4446...  0.3924 sec/batch
Epoch: 9/10...  Training Step: 1637...  Training loss: 1.4304...  0.3746 sec/batch
Epoch: 9/10...  Training Step: 1638...  Training loss: 1.4581...  0.4775 sec/batch
Epoch: 9/10...  Training Step: 1639...  Training loss: 1.5033...  0.3948 sec/batch
Epoch: 9/10...  Training Step: 1640...  Training loss: 1.4719...  0.3444 sec/batch
Epoch: 9/10...  Training Step: 1641...  Training loss: 1.4805...  0.3989 sec/batch
Epoch: 9/10...  Training Step: 1642...  Training loss: 1.4849...  0.3244 sec/batch
Epoch: 9/10...  Training Step: 1643...  Training loss: 1.4584...  0.4179 sec/batch
Epoch: 9/10...  Training Step: 1644...  Training loss: 1.5009...  0.3748 sec/batch
Epoch: 9/10...  Training Step: 1645...  Training loss: 1.4389...  0.2792 sec/batch
Epoch: 9/10...  Training Step: 1646...  Training loss: 1.4689...  0.3570 sec/batch
Epoch: 9/10...  Training Step: 1647...  Training loss: 1.4439...  0.3714 sec/batch
Epoch: 9/10...  Training Step: 1648...  Training loss: 1.4299...  0.4052 sec/batch
Epoch: 9/10...  Training Step: 1649...  Training loss: 1.4620...  0.3946 sec/batch
Epoch: 9/10...  Training Step: 1650...  Training loss: 1.4820...  0.3993 sec/batch
Epoch: 9/10...  Training Step: 1651...  Training loss: 1.5056...  0.4119 sec/batch
Epoch: 9/10...  Training Step: 1652...  Training loss: 1.4660...  0.3884 sec/batch
Epoch: 9/10...  Training Step: 1653...  Training loss: 1.4686...  0.3591 sec/batch
Epoch: 9/10...  Training Step: 1654...  Training loss: 1.4669...  0.3399 sec/batch
Epoch: 9/10...  Training Step: 1655...  Training loss: 1.4812...  0.3910 sec/batch
Epoch: 9/10...  Training Step: 1656...  Training loss: 1.4706...  0.3522 sec/batch
Epoch: 9/10...  Training Step: 1657...  Training loss: 1.4409...  0.3564 sec/batch
Epoch: 9/10...  Training Step: 1658...  Training loss: 1.4594...  0.3572 sec/batch
Epoch: 9/10...  Training Step: 1659...  Training loss: 1.4522...  0.3020 sec/batch
Epoch: 9/10...  Training Step: 1660...  Training loss: 1.4838...  0.3840 sec/batch
Epoch: 9/10...  Training Step: 1661...  Training loss: 1.4636...  0.3516 sec/batch
Epoch: 9/10...  Training Step: 1662...  Training loss: 1.4455...  0.3993 sec/batch
Epoch: 9/10...  Training Step: 1663...  Training loss: 1.4490...  0.4031 sec/batch
Epoch: 9/10...  Training Step: 1664...  Training loss: 1.4845...  0.4260 sec/batch
Epoch: 9/10...  Training Step: 1665...  Training loss: 1.4357...  0.3874 sec/batch
Epoch: 9/10...  Training Step: 1666...  Training loss: 1.4649...  0.3656 sec/batch
Epoch: 9/10...  Training Step: 1667...  Training loss: 1.5099...  0.3104 sec/batch
Epoch: 9/10...  Training Step: 1668...  Training loss: 1.5082...  0.3479 sec/batch
Epoch: 9/10...  Training Step: 1669...  Training loss: 1.4648...  0.3942 sec/batch
Epoch: 9/10...  Training Step: 1670...  Training loss: 1.4558...  0.3555 sec/batch
Epoch: 9/10...  Training Step: 1671...  Training loss: 1.4541...  0.3478 sec/batch
Epoch: 9/10...  Training Step: 1672...  Training loss: 1.4772...  0.3045 sec/batch
Epoch: 9/10...  Training Step: 1673...  Training loss: 1.4928...  0.3192 sec/batch
Epoch: 9/10...  Training Step: 1674...  Training loss: 1.4936...  0.3845 sec/batch
Epoch: 9/10...  Training Step: 1675...  Training loss: 1.4356...  0.4325 sec/batch
Epoch: 9/10...  Training Step: 1676...  Training loss: 1.4470...  0.3625 sec/batch
Epoch: 9/10...  Training Step: 1677...  Training loss: 1.4764...  0.3416 sec/batch
Epoch: 9/10...  Training Step: 1678...  Training loss: 1.4719...  0.4737 sec/batch
Epoch: 9/10...  Training Step: 1679...  Training loss: 1.4594...  0.3337 sec/batch
Epoch: 9/10...  Training Step: 1680...  Training loss: 1.4687...  0.3669 sec/batch
Epoch: 9/10...  Training Step: 1681...  Training loss: 1.4767...  0.3313 sec/batch
Epoch: 9/10...  Training Step: 1682...  Training loss: 1.5060...  0.4246 sec/batch
Epoch: 9/10...  Training Step: 1683...  Training loss: 1.4869...  0.3490 sec/batch
Epoch: 9/10...  Training Step: 1684...  Training loss: 1.4873...  0.3681 sec/batch
Epoch: 9/10...  Training Step: 1685...  Training loss: 1.4591...  0.3042 sec/batch
Epoch: 9/10...  Training Step: 1686...  Training loss: 1.4470...  0.3966 sec/batch
Epoch: 9/10...  Training Step: 1687...  Training loss: 1.4538...  0.4170 sec/batch
Epoch: 9/10...  Training Step: 1688...  Training loss: 1.4814...  0.3816 sec/batch
Epoch: 9/10...  Training Step: 1689...  Training loss: 1.4881...  0.4887 sec/batch
Epoch: 9/10...  Training Step: 1690...  Training loss: 1.4467...  0.4637 sec/batch
Epoch: 9/10...  Training Step: 1691...  Training loss: 1.4844...  0.4501 sec/batch
Epoch: 9/10...  Training Step: 1692...  Training loss: 1.4800...  0.4143 sec/batch
Epoch: 9/10...  Training Step: 1693...  Training loss: 1.4717...  0.4098 sec/batch
Epoch: 9/10...  Training Step: 1694...  Training loss: 1.4544...  0.4854 sec/batch
Epoch: 9/10...  Training Step: 1695...  Training loss: 1.4742...  0.4240 sec/batch
Epoch: 9/10...  Training Step: 1696...  Training loss: 1.4653...  0.3886 sec/batch
Epoch: 9/10...  Training Step: 1697...  Training loss: 1.4627...  0.3406 sec/batch
Epoch: 9/10...  Training Step: 1698...  Training loss: 1.5024...  0.3482 sec/batch
Epoch: 9/10...  Training Step: 1699...  Training loss: 1.4690...  0.4194 sec/batch
Epoch: 9/10...  Training Step: 1700...  Training loss: 1.4646...  0.3893 sec/batch
Epoch: 9/10...  Training Step: 1701...  Training loss: 1.4568...  0.3775 sec/batch
Epoch: 9/10...  Training Step: 1702...  Training loss: 1.4560...  0.3970 sec/batch
Epoch: 9/10...  Training Step: 1703...  Training loss: 1.4498...  0.4132 sec/batch
Epoch: 9/10...  Training Step: 1704...  Training loss: 1.4987...  0.3500 sec/batch
Epoch: 9/10...  Training Step: 1705...  Training loss: 1.4825...  0.4014 sec/batch
Epoch: 9/10...  Training Step: 1706...  Training loss: 1.4953...  0.4423 sec/batch
Epoch: 9/10...  Training Step: 1707...  Training loss: 1.5001...  0.3566 sec/batch
Epoch: 9/10...  Training Step: 1708...  Training loss: 1.5051...  0.3615 sec/batch
Epoch: 9/10...  Training Step: 1709...  Training loss: 1.4890...  0.3535 sec/batch
Epoch: 9/10...  Training Step: 1710...  Training loss: 1.4851...  0.3761 sec/batch
Epoch: 9/10...  Training Step: 1711...  Training loss: 1.4749...  0.3687 sec/batch
Epoch: 9/10...  Training Step: 1712...  Training loss: 1.4710...  0.4757 sec/batch
Epoch: 9/10...  Training Step: 1713...  Training loss: 1.4679...  0.4182 sec/batch
Epoch: 9/10...  Training Step: 1714...  Training loss: 1.4741...  0.3966 sec/batch
Epoch: 9/10...  Training Step: 1715...  Training loss: 1.4575...  0.4439 sec/batch
Epoch: 9/10...  Training Step: 1716...  Training loss: 1.4784...  0.4590 sec/batch
Epoch: 9/10...  Training Step: 1717...  Training loss: 1.4831...  0.4614 sec/batch
Epoch: 9/10...  Training Step: 1718...  Training loss: 1.4639...  0.4305 sec/batch
Epoch: 9/10...  Training Step: 1719...  Training loss: 1.4687...  0.4264 sec/batch
Epoch: 9/10...  Training Step: 1720...  Training loss: 1.4923...  0.3899 sec/batch
Epoch: 9/10...  Training Step: 1721...  Training loss: 1.5339...  0.3807 sec/batch
Epoch: 9/10...  Training Step: 1722...  Training loss: 1.4856...  0.4045 sec/batch
Epoch: 9/10...  Training Step: 1723...  Training loss: 1.4604...  0.3410 sec/batch
Epoch: 9/10...  Training Step: 1724...  Training loss: 1.4641...  0.4062 sec/batch
Epoch: 9/10...  Training Step: 1725...  Training loss: 1.4895...  0.4082 sec/batch
Epoch: 9/10...  Training Step: 1726...  Training loss: 1.4908...  0.3540 sec/batch
Epoch: 9/10...  Training Step: 1727...  Training loss: 1.4581...  0.3561 sec/batch
Epoch: 9/10...  Training Step: 1728...  Training loss: 1.4591...  0.4495 sec/batch
Epoch: 9/10...  Training Step: 1729...  Training loss: 1.4662...  0.4041 sec/batch
Epoch: 9/10...  Training Step: 1730...  Training loss: 1.4925...  0.3739 sec/batch
Epoch: 9/10...  Training Step: 1731...  Training loss: 1.4747...  0.4237 sec/batch
Epoch: 9/10...  Training Step: 1732...  Training loss: 1.4614...  0.3813 sec/batch
Epoch: 9/10...  Training Step: 1733...  Training loss: 1.4639...  0.3672 sec/batch
Epoch: 9/10...  Training Step: 1734...  Training loss: 1.4661...  0.3782 sec/batch
Epoch: 9/10...  Training Step: 1735...  Training loss: 1.4884...  0.3894 sec/batch
Epoch: 9/10...  Training Step: 1736...  Training loss: 1.4732...  0.3764 sec/batch
Epoch: 9/10...  Training Step: 1737...  Training loss: 1.4786...  0.3380 sec/batch
Epoch: 9/10...  Training Step: 1738...  Training loss: 1.4422...  0.4182 sec/batch
Epoch: 9/10...  Training Step: 1739...  Training loss: 1.4587...  0.4019 sec/batch
Epoch: 9/10...  Training Step: 1740...  Training loss: 1.4634...  0.3686 sec/batch
Epoch: 9/10...  Training Step: 1741...  Training loss: 1.4552...  0.4282 sec/batch
Epoch: 9/10...  Training Step: 1742...  Training loss: 1.4703...  0.4451 sec/batch
Epoch: 9/10...  Training Step: 1743...  Training loss: 1.4663...  0.3717 sec/batch
Epoch: 9/10...  Training Step: 1744...  Training loss: 1.4361...  0.4079 sec/batch
Epoch: 9/10...  Training Step: 1745...  Training loss: 1.4395...  0.3832 sec/batch
Epoch: 9/10...  Training Step: 1746...  Training loss: 1.4300...  0.3514 sec/batch
Epoch: 9/10...  Training Step: 1747...  Training loss: 1.4457...  0.4224 sec/batch
Epoch: 9/10...  Training Step: 1748...  Training loss: 1.4656...  0.3233 sec/batch
Epoch: 9/10...  Training Step: 1749...  Training loss: 1.4692...  0.3869 sec/batch
Epoch: 9/10...  Training Step: 1750...  Training loss: 1.4501...  0.3192 sec/batch
Epoch: 9/10...  Training Step: 1751...  Training loss: 1.4531...  0.3628 sec/batch
Epoch: 9/10...  Training Step: 1752...  Training loss: 1.4845...  0.4095 sec/batch
Epoch: 9/10...  Training Step: 1753...  Training loss: 1.4497...  0.3707 sec/batch
Epoch: 9/10...  Training Step: 1754...  Training loss: 1.4556...  0.3491 sec/batch
Epoch: 9/10...  Training Step: 1755...  Training loss: 1.4821...  0.3453 sec/batch
Epoch: 9/10...  Training Step: 1756...  Training loss: 1.4381...  0.4133 sec/batch
Epoch: 9/10...  Training Step: 1757...  Training loss: 1.4532...  0.4471 sec/batch
Epoch: 9/10...  Training Step: 1758...  Training loss: 1.4301...  0.4486 sec/batch
Epoch: 9/10...  Training Step: 1759...  Training loss: 1.4700...  0.4120 sec/batch
Epoch: 9/10...  Training Step: 1760...  Training loss: 1.4565...  0.4211 sec/batch
Epoch: 9/10...  Training Step: 1761...  Training loss: 1.4192...  0.3728 sec/batch
Epoch: 9/10...  Training Step: 1762...  Training loss: 1.4595...  0.3159 sec/batch
Epoch: 9/10...  Training Step: 1763...  Training loss: 1.4638...  0.3466 sec/batch
Epoch: 9/10...  Training Step: 1764...  Training loss: 1.4764...  0.3929 sec/batch
Epoch: 9/10...  Training Step: 1765...  Training loss: 1.4649...  0.4032 sec/batch
Epoch: 9/10...  Training Step: 1766...  Training loss: 1.4586...  0.4172 sec/batch
Epoch: 9/10...  Training Step: 1767...  Training loss: 1.4615...  0.4044 sec/batch
Epoch: 9/10...  Training Step: 1768...  Training loss: 1.4495...  0.3942 sec/batch
Epoch: 9/10...  Training Step: 1769...  Training loss: 1.4585...  0.4097 sec/batch
Epoch: 9/10...  Training Step: 1770...  Training loss: 1.4084...  0.3591 sec/batch
Epoch: 9/10...  Training Step: 1771...  Training loss: 1.4585...  0.3598 sec/batch
Epoch: 9/10...  Training Step: 1772...  Training loss: 1.4390...  0.3743 sec/batch
Epoch: 9/10...  Training Step: 1773...  Training loss: 1.4507...  0.4232 sec/batch
Epoch: 9/10...  Training Step: 1774...  Training loss: 1.4500...  0.3553 sec/batch
Epoch: 9/10...  Training Step: 1775...  Training loss: 1.4511...  0.2894 sec/batch
Epoch: 9/10...  Training Step: 1776...  Training loss: 1.4601...  0.3877 sec/batch
Epoch: 9/10...  Training Step: 1777...  Training loss: 1.4847...  0.3368 sec/batch
Epoch: 9/10...  Training Step: 1778...  Training loss: 1.4516...  0.3606 sec/batch
Epoch: 9/10...  Training Step: 1779...  Training loss: 1.4555...  0.3455 sec/batch
Epoch: 9/10...  Training Step: 1780...  Training loss: 1.4402...  0.3216 sec/batch
Epoch: 9/10...  Training Step: 1781...  Training loss: 1.4606...  0.3993 sec/batch
Epoch: 9/10...  Training Step: 1782...  Training loss: 1.4456...  0.4809 sec/batch
10000 198
[[31 64 57 ...,  1 79 65]
 [76 64  1 ..., 75 11  1]
 [57 70 60 ..., 67 61  1]
 ..., 
 [70 60 74 ..., 76 57 70]
 [60  1 64 ..., 72 68 81]
 [ 1 79 65 ..., 70 58 61]]
Epoch: 10/10...  Training Step: 1783...  Training loss: 1.5080...  0.4030 sec/batch
Epoch: 10/10...  Training Step: 1784...  Training loss: 1.4805...  0.3970 sec/batch
Epoch: 10/10...  Training Step: 1785...  Training loss: 1.4719...  0.3551 sec/batch
Epoch: 10/10...  Training Step: 1786...  Training loss: 1.4731...  0.4020 sec/batch
Epoch: 10/10...  Training Step: 1787...  Training loss: 1.4499...  0.3529 sec/batch
Epoch: 10/10...  Training Step: 1788...  Training loss: 1.4598...  0.3226 sec/batch
Epoch: 10/10...  Training Step: 1789...  Training loss: 1.4487...  0.4668 sec/batch
Epoch: 10/10...  Training Step: 1790...  Training loss: 1.4795...  0.3566 sec/batch
Epoch: 10/10...  Training Step: 1791...  Training loss: 1.4471...  0.3507 sec/batch
Epoch: 10/10...  Training Step: 1792...  Training loss: 1.4548...  0.3769 sec/batch
Epoch: 10/10...  Training Step: 1793...  Training loss: 1.4313...  0.4728 sec/batch
Epoch: 10/10...  Training Step: 1794...  Training loss: 1.4412...  0.4411 sec/batch
Epoch: 10/10...  Training Step: 1795...  Training loss: 1.4601...  0.4107 sec/batch
Epoch: 10/10...  Training Step: 1796...  Training loss: 1.4640...  0.3464 sec/batch
Epoch: 10/10...  Training Step: 1797...  Training loss: 1.4462...  0.3727 sec/batch
Epoch: 10/10...  Training Step: 1798...  Training loss: 1.4869...  0.3874 sec/batch
Epoch: 10/10...  Training Step: 1799...  Training loss: 1.4446...  0.3458 sec/batch
Epoch: 10/10...  Training Step: 1800...  Training loss: 1.4718...  0.3481 sec/batch
Epoch: 10/10...  Training Step: 1801...  Training loss: 1.4464...  0.3269 sec/batch
Epoch: 10/10...  Training Step: 1802...  Training loss: 1.4565...  0.3786 sec/batch
Epoch: 10/10...  Training Step: 1803...  Training loss: 1.4842...  0.3869 sec/batch
Epoch: 10/10...  Training Step: 1804...  Training loss: 1.4566...  0.4433 sec/batch
Epoch: 10/10...  Training Step: 1805...  Training loss: 1.4353...  0.4018 sec/batch
Epoch: 10/10...  Training Step: 1806...  Training loss: 1.4587...  0.3816 sec/batch
Epoch: 10/10...  Training Step: 1807...  Training loss: 1.4669...  0.4023 sec/batch
Epoch: 10/10...  Training Step: 1808...  Training loss: 1.4707...  0.4489 sec/batch
Epoch: 10/10...  Training Step: 1809...  Training loss: 1.4650...  0.4081 sec/batch
Epoch: 10/10...  Training Step: 1810...  Training loss: 1.4644...  0.4698 sec/batch
Epoch: 10/10...  Training Step: 1811...  Training loss: 1.4373...  0.3548 sec/batch
Epoch: 10/10...  Training Step: 1812...  Training loss: 1.4523...  0.3655 sec/batch
Epoch: 10/10...  Training Step: 1813...  Training loss: 1.4283...  0.2928 sec/batch
Epoch: 10/10...  Training Step: 1814...  Training loss: 1.4578...  0.3142 sec/batch
Epoch: 10/10...  Training Step: 1815...  Training loss: 1.4262...  0.3742 sec/batch
Epoch: 10/10...  Training Step: 1816...  Training loss: 1.4644...  0.3216 sec/batch
Epoch: 10/10...  Training Step: 1817...  Training loss: 1.4741...  0.3944 sec/batch
Epoch: 10/10...  Training Step: 1818...  Training loss: 1.4765...  0.4842 sec/batch
Epoch: 10/10...  Training Step: 1819...  Training loss: 1.4810...  0.3779 sec/batch
Epoch: 10/10...  Training Step: 1820...  Training loss: 1.4587...  0.4873 sec/batch
Epoch: 10/10...  Training Step: 1821...  Training loss: 1.4587...  0.4704 sec/batch
Epoch: 10/10...  Training Step: 1822...  Training loss: 1.4698...  0.4990 sec/batch
Epoch: 10/10...  Training Step: 1823...  Training loss: 1.4531...  0.5039 sec/batch
Epoch: 10/10...  Training Step: 1824...  Training loss: 1.4545...  0.3129 sec/batch
Epoch: 10/10...  Training Step: 1825...  Training loss: 1.4850...  0.3556 sec/batch
Epoch: 10/10...  Training Step: 1826...  Training loss: 1.4280...  0.4785 sec/batch
Epoch: 10/10...  Training Step: 1827...  Training loss: 1.4312...  0.4247 sec/batch
Epoch: 10/10...  Training Step: 1828...  Training loss: 1.4357...  0.4439 sec/batch
Epoch: 10/10...  Training Step: 1829...  Training loss: 1.4310...  0.4126 sec/batch
Epoch: 10/10...  Training Step: 1830...  Training loss: 1.4439...  0.4122 sec/batch
Epoch: 10/10...  Training Step: 1831...  Training loss: 1.4727...  0.3698 sec/batch
Epoch: 10/10...  Training Step: 1832...  Training loss: 1.4642...  0.3798 sec/batch
Epoch: 10/10...  Training Step: 1833...  Training loss: 1.4285...  0.4782 sec/batch
Epoch: 10/10...  Training Step: 1834...  Training loss: 1.4303...  0.4285 sec/batch
Epoch: 10/10...  Training Step: 1835...  Training loss: 1.4150...  0.3697 sec/batch
Epoch: 10/10...  Training Step: 1836...  Training loss: 1.4460...  0.3937 sec/batch
Epoch: 10/10...  Training Step: 1837...  Training loss: 1.4903...  0.3602 sec/batch
Epoch: 10/10...  Training Step: 1838...  Training loss: 1.4572...  0.3471 sec/batch
Epoch: 10/10...  Training Step: 1839...  Training loss: 1.4653...  0.3392 sec/batch
Epoch: 10/10...  Training Step: 1840...  Training loss: 1.4720...  0.4876 sec/batch
Epoch: 10/10...  Training Step: 1841...  Training loss: 1.4455...  0.3622 sec/batch
Epoch: 10/10...  Training Step: 1842...  Training loss: 1.4886...  0.4648 sec/batch
Epoch: 10/10...  Training Step: 1843...  Training loss: 1.4241...  0.4311 sec/batch
Epoch: 10/10...  Training Step: 1844...  Training loss: 1.4543...  0.5057 sec/batch
Epoch: 10/10...  Training Step: 1845...  Training loss: 1.4299...  0.5404 sec/batch
Epoch: 10/10...  Training Step: 1846...  Training loss: 1.4158...  0.4638 sec/batch
Epoch: 10/10...  Training Step: 1847...  Training loss: 1.4498...  0.4340 sec/batch
Epoch: 10/10...  Training Step: 1848...  Training loss: 1.4660...  0.3939 sec/batch
Epoch: 10/10...  Training Step: 1849...  Training loss: 1.4904...  0.3979 sec/batch
Epoch: 10/10...  Training Step: 1850...  Training loss: 1.4520...  0.2884 sec/batch
Epoch: 10/10...  Training Step: 1851...  Training loss: 1.4524...  0.3111 sec/batch
Epoch: 10/10...  Training Step: 1852...  Training loss: 1.4538...  0.4388 sec/batch
Epoch: 10/10...  Training Step: 1853...  Training loss: 1.4697...  0.4382 sec/batch
Epoch: 10/10...  Training Step: 1854...  Training loss: 1.4572...  0.4354 sec/batch
Epoch: 10/10...  Training Step: 1855...  Training loss: 1.4278...  0.4061 sec/batch
Epoch: 10/10...  Training Step: 1856...  Training loss: 1.4452...  0.3627 sec/batch
Epoch: 10/10...  Training Step: 1857...  Training loss: 1.4405...  0.5123 sec/batch
Epoch: 10/10...  Training Step: 1858...  Training loss: 1.4705...  0.3866 sec/batch
Epoch: 10/10...  Training Step: 1859...  Training loss: 1.4501...  0.4694 sec/batch
Epoch: 10/10...  Training Step: 1860...  Training loss: 1.4325...  0.4127 sec/batch
Epoch: 10/10...  Training Step: 1861...  Training loss: 1.4366...  0.3279 sec/batch
Epoch: 10/10...  Training Step: 1862...  Training loss: 1.4703...  0.3906 sec/batch
Epoch: 10/10...  Training Step: 1863...  Training loss: 1.4231...  0.2898 sec/batch
Epoch: 10/10...  Training Step: 1864...  Training loss: 1.4505...  0.4557 sec/batch
Epoch: 10/10...  Training Step: 1865...  Training loss: 1.4967...  0.4094 sec/batch
Epoch: 10/10...  Training Step: 1866...  Training loss: 1.4949...  0.4136 sec/batch
Epoch: 10/10...  Training Step: 1867...  Training loss: 1.4518...  0.3700 sec/batch
Epoch: 10/10...  Training Step: 1868...  Training loss: 1.4430...  0.4760 sec/batch
Epoch: 10/10...  Training Step: 1869...  Training loss: 1.4399...  0.4086 sec/batch
Epoch: 10/10...  Training Step: 1870...  Training loss: 1.4635...  0.4024 sec/batch
Epoch: 10/10...  Training Step: 1871...  Training loss: 1.4781...  0.3472 sec/batch
Epoch: 10/10...  Training Step: 1872...  Training loss: 1.4793...  0.3045 sec/batch
Epoch: 10/10...  Training Step: 1873...  Training loss: 1.4223...  0.3110 sec/batch
Epoch: 10/10...  Training Step: 1874...  Training loss: 1.4339...  0.3974 sec/batch
Epoch: 10/10...  Training Step: 1875...  Training loss: 1.4617...  0.3131 sec/batch
Epoch: 10/10...  Training Step: 1876...  Training loss: 1.4576...  0.3346 sec/batch
Epoch: 10/10...  Training Step: 1877...  Training loss: 1.4462...  0.3796 sec/batch
Epoch: 10/10...  Training Step: 1878...  Training loss: 1.4543...  0.3601 sec/batch
Epoch: 10/10...  Training Step: 1879...  Training loss: 1.4631...  0.3103 sec/batch
Epoch: 10/10...  Training Step: 1880...  Training loss: 1.4920...  0.3391 sec/batch
Epoch: 10/10...  Training Step: 1881...  Training loss: 1.4724...  0.3771 sec/batch
Epoch: 10/10...  Training Step: 1882...  Training loss: 1.4753...  0.3805 sec/batch
Epoch: 10/10...  Training Step: 1883...  Training loss: 1.4471...  0.3216 sec/batch
Epoch: 10/10...  Training Step: 1884...  Training loss: 1.4328...  0.4510 sec/batch
Epoch: 10/10...  Training Step: 1885...  Training loss: 1.4415...  0.3747 sec/batch
Epoch: 10/10...  Training Step: 1886...  Training loss: 1.4673...  0.4680 sec/batch
Epoch: 10/10...  Training Step: 1887...  Training loss: 1.4729...  0.4165 sec/batch
Epoch: 10/10...  Training Step: 1888...  Training loss: 1.4323...  0.3856 sec/batch
Epoch: 10/10...  Training Step: 1889...  Training loss: 1.4725...  0.3982 sec/batch
Epoch: 10/10...  Training Step: 1890...  Training loss: 1.4679...  0.4087 sec/batch
Epoch: 10/10...  Training Step: 1891...  Training loss: 1.4565...  0.4618 sec/batch
Epoch: 10/10...  Training Step: 1892...  Training loss: 1.4412...  0.3425 sec/batch
Epoch: 10/10...  Training Step: 1893...  Training loss: 1.4613...  0.3939 sec/batch
Epoch: 10/10...  Training Step: 1894...  Training loss: 1.4501...  0.4222 sec/batch
Epoch: 10/10...  Training Step: 1895...  Training loss: 1.4502...  0.4335 sec/batch
Epoch: 10/10...  Training Step: 1896...  Training loss: 1.4874...  0.4697 sec/batch
Epoch: 10/10...  Training Step: 1897...  Training loss: 1.4572...  0.4032 sec/batch
Epoch: 10/10...  Training Step: 1898...  Training loss: 1.4518...  0.3803 sec/batch
Epoch: 10/10...  Training Step: 1899...  Training loss: 1.4423...  0.3299 sec/batch
Epoch: 10/10...  Training Step: 1900...  Training loss: 1.4444...  0.2996 sec/batch
Epoch: 10/10...  Training Step: 1901...  Training loss: 1.4357...  0.3977 sec/batch
Epoch: 10/10...  Training Step: 1902...  Training loss: 1.4860...  0.3593 sec/batch
Epoch: 10/10...  Training Step: 1903...  Training loss: 1.4700...  0.3797 sec/batch
Epoch: 10/10...  Training Step: 1904...  Training loss: 1.4830...  0.3456 sec/batch
Epoch: 10/10...  Training Step: 1905...  Training loss: 1.4853...  0.3436 sec/batch
Epoch: 10/10...  Training Step: 1906...  Training loss: 1.4912...  0.4381 sec/batch
Epoch: 10/10...  Training Step: 1907...  Training loss: 1.4767...  0.4965 sec/batch
Epoch: 10/10...  Training Step: 1908...  Training loss: 1.4727...  0.3504 sec/batch
Epoch: 10/10...  Training Step: 1909...  Training loss: 1.4611...  0.3668 sec/batch
Epoch: 10/10...  Training Step: 1910...  Training loss: 1.4584...  0.4111 sec/batch
Epoch: 10/10...  Training Step: 1911...  Training loss: 1.4586...  0.4015 sec/batch
Epoch: 10/10...  Training Step: 1912...  Training loss: 1.4607...  0.3416 sec/batch
Epoch: 10/10...  Training Step: 1913...  Training loss: 1.4449...  0.4305 sec/batch
Epoch: 10/10...  Training Step: 1914...  Training loss: 1.4669...  0.4922 sec/batch
Epoch: 10/10...  Training Step: 1915...  Training loss: 1.4722...  0.4133 sec/batch
Epoch: 10/10...  Training Step: 1916...  Training loss: 1.4510...  0.4554 sec/batch
Epoch: 10/10...  Training Step: 1917...  Training loss: 1.4569...  0.5512 sec/batch
Epoch: 10/10...  Training Step: 1918...  Training loss: 1.4790...  0.4907 sec/batch
Epoch: 10/10...  Training Step: 1919...  Training loss: 1.5210...  0.4360 sec/batch
Epoch: 10/10...  Training Step: 1920...  Training loss: 1.4736...  0.4757 sec/batch
Epoch: 10/10...  Training Step: 1921...  Training loss: 1.4480...  0.4497 sec/batch
Epoch: 10/10...  Training Step: 1922...  Training loss: 1.4520...  0.4066 sec/batch
Epoch: 10/10...  Training Step: 1923...  Training loss: 1.4783...  0.3819 sec/batch
Epoch: 10/10...  Training Step: 1924...  Training loss: 1.4785...  0.4296 sec/batch
Epoch: 10/10...  Training Step: 1925...  Training loss: 1.4485...  0.4502 sec/batch
Epoch: 10/10...  Training Step: 1926...  Training loss: 1.4466...  0.4344 sec/batch
Epoch: 10/10...  Training Step: 1927...  Training loss: 1.4519...  0.3495 sec/batch
Epoch: 10/10...  Training Step: 1928...  Training loss: 1.4808...  0.4216 sec/batch
Epoch: 10/10...  Training Step: 1929...  Training loss: 1.4611...  0.3978 sec/batch
Epoch: 10/10...  Training Step: 1930...  Training loss: 1.4455...  0.4318 sec/batch
Epoch: 10/10...  Training Step: 1931...  Training loss: 1.4512...  0.3412 sec/batch
Epoch: 10/10...  Training Step: 1932...  Training loss: 1.4539...  0.3956 sec/batch
Epoch: 10/10...  Training Step: 1933...  Training loss: 1.4759...  0.3505 sec/batch
Epoch: 10/10...  Training Step: 1934...  Training loss: 1.4614...  0.3394 sec/batch
Epoch: 10/10...  Training Step: 1935...  Training loss: 1.4669...  0.3371 sec/batch
Epoch: 10/10...  Training Step: 1936...  Training loss: 1.4306...  0.3640 sec/batch
Epoch: 10/10...  Training Step: 1937...  Training loss: 1.4470...  0.3066 sec/batch
Epoch: 10/10...  Training Step: 1938...  Training loss: 1.4534...  0.3029 sec/batch
Epoch: 10/10...  Training Step: 1939...  Training loss: 1.4438...  0.2912 sec/batch
Epoch: 10/10...  Training Step: 1940...  Training loss: 1.4582...  0.3851 sec/batch
Epoch: 10/10...  Training Step: 1941...  Training loss: 1.4545...  0.3786 sec/batch
Epoch: 10/10...  Training Step: 1942...  Training loss: 1.4257...  0.3528 sec/batch
Epoch: 10/10...  Training Step: 1943...  Training loss: 1.4274...  0.4672 sec/batch
Epoch: 10/10...  Training Step: 1944...  Training loss: 1.4205...  0.4438 sec/batch
Epoch: 10/10...  Training Step: 1945...  Training loss: 1.4329...  0.3612 sec/batch
Epoch: 10/10...  Training Step: 1946...  Training loss: 1.4561...  0.3691 sec/batch
Epoch: 10/10...  Training Step: 1947...  Training loss: 1.4583...  0.3963 sec/batch
Epoch: 10/10...  Training Step: 1948...  Training loss: 1.4368...  0.3249 sec/batch
Epoch: 10/10...  Training Step: 1949...  Training loss: 1.4422...  0.3002 sec/batch
Epoch: 10/10...  Training Step: 1950...  Training loss: 1.4728...  0.2788 sec/batch
Epoch: 10/10...  Training Step: 1951...  Training loss: 1.4370...  0.3793 sec/batch
Epoch: 10/10...  Training Step: 1952...  Training loss: 1.4455...  0.4232 sec/batch
Epoch: 10/10...  Training Step: 1953...  Training loss: 1.4688...  0.3215 sec/batch
Epoch: 10/10...  Training Step: 1954...  Training loss: 1.4239...  0.3526 sec/batch
Epoch: 10/10...  Training Step: 1955...  Training loss: 1.4428...  0.3174 sec/batch
Epoch: 10/10...  Training Step: 1956...  Training loss: 1.4167...  0.3813 sec/batch
Epoch: 10/10...  Training Step: 1957...  Training loss: 1.4587...  0.3199 sec/batch
Epoch: 10/10...  Training Step: 1958...  Training loss: 1.4468...  0.3959 sec/batch
Epoch: 10/10...  Training Step: 1959...  Training loss: 1.4068...  0.3357 sec/batch
Epoch: 10/10...  Training Step: 1960...  Training loss: 1.4468...  0.4024 sec/batch
Epoch: 10/10...  Training Step: 1961...  Training loss: 1.4527...  0.3575 sec/batch
Epoch: 10/10...  Training Step: 1962...  Training loss: 1.4639...  0.3641 sec/batch
Epoch: 10/10...  Training Step: 1963...  Training loss: 1.4521...  0.4892 sec/batch
Epoch: 10/10...  Training Step: 1964...  Training loss: 1.4451...  0.4761 sec/batch
Epoch: 10/10...  Training Step: 1965...  Training loss: 1.4507...  0.4529 sec/batch
Epoch: 10/10...  Training Step: 1966...  Training loss: 1.4352...  0.4483 sec/batch
Epoch: 10/10...  Training Step: 1967...  Training loss: 1.4439...  0.4846 sec/batch
Epoch: 10/10...  Training Step: 1968...  Training loss: 1.3976...  0.4673 sec/batch
Epoch: 10/10...  Training Step: 1969...  Training loss: 1.4459...  0.4385 sec/batch
Epoch: 10/10...  Training Step: 1970...  Training loss: 1.4240...  0.4738 sec/batch
Epoch: 10/10...  Training Step: 1971...  Training loss: 1.4385...  0.4792 sec/batch
Epoch: 10/10...  Training Step: 1972...  Training loss: 1.4348...  0.4344 sec/batch
Epoch: 10/10...  Training Step: 1973...  Training loss: 1.4386...  0.4342 sec/batch
Epoch: 10/10...  Training Step: 1974...  Training loss: 1.4470...  0.4448 sec/batch
Epoch: 10/10...  Training Step: 1975...  Training loss: 1.4716...  0.4195 sec/batch
Epoch: 10/10...  Training Step: 1976...  Training loss: 1.4406...  0.5593 sec/batch
Epoch: 10/10...  Training Step: 1977...  Training loss: 1.4431...  0.4479 sec/batch
Epoch: 10/10...  Training Step: 1978...  Training loss: 1.4288...  0.4972 sec/batch
Epoch: 10/10...  Training Step: 1979...  Training loss: 1.4497...  0.4061 sec/batch
Epoch: 10/10...  Training Step: 1980...  Training loss: 1.4317...  0.3882 sec/batch

Saved checkpoints

Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables


In [73]:
tf.train.get_checkpoint_state('checkpoints')


Out[73]:
model_checkpoint_path: "checkpoints/i200_l512.ckpt"
all_model_checkpoint_paths: "checkpoints/i200_l512.ckpt"

Sampling

Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.

The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.


In [74]:
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

In [75]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    
    model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
    print("model check")
    saver = tf.train.Saver()
    print("save check")
    with tf.Session() as sess:
        print("session check")
        saver.restore(sess, checkpoint)
        print("restore check")
        new_state = sess.run(model.initial_state)
        print("run check")
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])

        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state], 
                                         feed_dict=feed)

            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])
        
    return ''.join(samples)

Here, pass in the path to a checkpoint and sample from the network.


In [76]:
tf.train.latest_checkpoint('checkpoints')


Out[76]:
'checkpoints/i400_l512.ckpt'

In [77]:
checkpoint = tf.train.latest_checkpoint('checkpoints')
print(checkpoint)


checkpoints/i400_l512.ckpt

In [78]:
samp = sample(checkpoint, 200, lstm_size, len(vocab), prime="Far")
print(samp)


model check
save check
session check
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1021     try:
-> 1022       return fn(*args)
   1023     except errors.OpError as e:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1003                                  feed_dict, fetch_list, target_list,
-> 1004                                  status, run_metadata)
   1005 

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
     65             try:
---> 66                 next(self.gen)
     67             except StopIteration:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467   finally:

InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [128,83] rhs shape= [512,83]
	 [[Node: save/Assign_16 = Assign[T=DT_FLOAT, _class=["loc:@softmax/Variable"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](softmax/Variable/Adam_1, save/RestoreV2_16)]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-78-f6e2a780f6a9> in <module>()
----> 1 samp = sample(checkpoint, 200, lstm_size, len(vocab), prime="Far")
      2 print(samp)

<ipython-input-75-89bc99c7e5f6> in sample(checkpoint, n_samples, lstm_size, vocab_size, prime)
      8     with tf.Session() as sess:
      9         print("session check")
---> 10         saver.restore(sess, checkpoint)
     11         print("restore check")
     12         new_state = sess.run(model.initial_state)

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py in restore(self, sess, save_path)
   1426       return
   1427     sess.run(self.saver_def.restore_op_name,
-> 1428              {self.saver_def.filename_tensor_name: save_path})
   1429 
   1430   @staticmethod

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    963     if final_fetches or final_targets:
    964       results = self._do_run(handle, final_targets, final_fetches,
--> 965                              feed_dict_string, options, run_metadata)
    966     else:
    967       results = []

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1013     if handle is None:
   1014       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015                            target_list, options, run_metadata)
   1016     else:
   1017       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1033         except KeyError:
   1034           pass
-> 1035       raise type(e)(node_def, op, message)
   1036 
   1037   def _extend_graph(self):

InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [128,83] rhs shape= [512,83]
	 [[Node: save/Assign_16 = Assign[T=DT_FLOAT, _class=["loc:@softmax/Variable"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](softmax/Variable/Adam_1, save/RestoreV2_16)]]

Caused by op 'save/Assign_16', defined at:
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/__main__.py", line 3, in <module>
    app.launch_new_instance()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 474, in start
    ioloop.IOLoop.instance().start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/ioloop.py", line 887, in start
    handler_func(fd_obj, events)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
    handler(stream, idents, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 390, in execute_request
    user_expressions, allow_stdin)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 501, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
    if self.run_code(code, result):
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-78-f6e2a780f6a9>", line 1, in <module>
    samp = sample(checkpoint, 200, lstm_size, len(vocab), prime="Far")
  File "<ipython-input-75-89bc99c7e5f6>", line 6, in sample
    saver = tf.train.Saver()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
    self.build()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1070, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 675, in build
    restore_sequentially, reshape)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 414, in _AddRestoreOps
    assign_ops.append(saveable.restore(tensors, shapes))
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 155, in restore
    self.op.get_shape().is_fully_defined())
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/ops/gen_state_ops.py", line 47, in assign
    use_locking=use_locking, name=name)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [128,83] rhs shape= [512,83]
	 [[Node: save/Assign_16 = Assign[T=DT_FLOAT, _class=["loc:@softmax/Variable"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](softmax/Variable/Adam_1, save/RestoreV2_16)]]

In [56]:
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1021     try:
-> 1022       return fn(*args)
   1023     except errors.OpError as e:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1003                                  feed_dict, fetch_list, target_list,
-> 1004                                  status, run_metadata)
   1005 

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
     65             try:
---> 66                 next(self.gen)
     67             except StopIteration:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467   finally:

InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [512] rhs shape= [2048]
	 [[Node: save/Assign_3 = Assign[T=DT_FLOAT, _class=["loc:@rnn/multi_rnn_cell/cell_0/basic_lstm_cell/biases"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/multi_rnn_cell/cell_0/basic_lstm_cell/biases/Adam, save/RestoreV2_3)]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-56-d1a22da3ba8d> in <module>()
      1 checkpoint = 'checkpoints/i200_l512.ckpt'
----> 2 samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
      3 print(samp)

<ipython-input-53-8eb787ae9642> in sample(checkpoint, n_samples, lstm_size, vocab_size, prime)
      4     saver = tf.train.Saver()
      5     with tf.Session() as sess:
----> 6         saver.restore(sess, checkpoint)
      7         new_state = sess.run(model.initial_state)
      8         for c in prime:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py in restore(self, sess, save_path)
   1426       return
   1427     sess.run(self.saver_def.restore_op_name,
-> 1428              {self.saver_def.filename_tensor_name: save_path})
   1429 
   1430   @staticmethod

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    963     if final_fetches or final_targets:
    964       results = self._do_run(handle, final_targets, final_fetches,
--> 965                              feed_dict_string, options, run_metadata)
    966     else:
    967       results = []

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1013     if handle is None:
   1014       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015                            target_list, options, run_metadata)
   1016     else:
   1017       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1033         except KeyError:
   1034           pass
-> 1035       raise type(e)(node_def, op, message)
   1036 
   1037   def _extend_graph(self):

InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [512] rhs shape= [2048]
	 [[Node: save/Assign_3 = Assign[T=DT_FLOAT, _class=["loc:@rnn/multi_rnn_cell/cell_0/basic_lstm_cell/biases"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/multi_rnn_cell/cell_0/basic_lstm_cell/biases/Adam, save/RestoreV2_3)]]

Caused by op 'save/Assign_3', defined at:
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/__main__.py", line 3, in <module>
    app.launch_new_instance()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 474, in start
    ioloop.IOLoop.instance().start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/ioloop.py", line 887, in start
    handler_func(fd_obj, events)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
    handler(stream, idents, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 390, in execute_request
    user_expressions, allow_stdin)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 501, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
    if self.run_code(code, result):
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-56-d1a22da3ba8d>", line 2, in <module>
    samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
  File "<ipython-input-53-8eb787ae9642>", line 4, in sample
    saver = tf.train.Saver()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
    self.build()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1070, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 675, in build
    restore_sequentially, reshape)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 414, in _AddRestoreOps
    assign_ops.append(saveable.restore(tensors, shapes))
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 155, in restore
    self.op.get_shape().is_fully_defined())
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/ops/gen_state_ops.py", line 47, in assign
    use_locking=use_locking, name=name)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [512] rhs shape= [2048]
	 [[Node: save/Assign_3 = Assign[T=DT_FLOAT, _class=["loc:@rnn/multi_rnn_cell/cell_0/basic_lstm_cell/biases"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/multi_rnn_cell/cell_0/basic_lstm_cell/biases/Adam, save/RestoreV2_3)]]

In [57]:
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)


---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1021     try:
-> 1022       return fn(*args)
   1023     except errors.OpError as e:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1003                                  feed_dict, fetch_list, target_list,
-> 1004                                  status, run_metadata)
   1005 

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
     65             try:
---> 66                 next(self.gen)
     67             except StopIteration:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467   finally:

NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i600_l512.ckpt
	 [[Node: save/RestoreV2_18 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_18/tensor_names, save/RestoreV2_18/shape_and_slices)]]

During handling of the above exception, another exception occurred:

NotFoundError                             Traceback (most recent call last)
<ipython-input-57-7c4e18bbddc1> in <module>()
      1 checkpoint = 'checkpoints/i600_l512.ckpt'
----> 2 samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
      3 print(samp)

<ipython-input-53-8eb787ae9642> in sample(checkpoint, n_samples, lstm_size, vocab_size, prime)
      4     saver = tf.train.Saver()
      5     with tf.Session() as sess:
----> 6         saver.restore(sess, checkpoint)
      7         new_state = sess.run(model.initial_state)
      8         for c in prime:

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py in restore(self, sess, save_path)
   1426       return
   1427     sess.run(self.saver_def.restore_op_name,
-> 1428              {self.saver_def.filename_tensor_name: save_path})
   1429 
   1430   @staticmethod

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    963     if final_fetches or final_targets:
    964       results = self._do_run(handle, final_targets, final_fetches,
--> 965                              feed_dict_string, options, run_metadata)
    966     else:
    967       results = []

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1013     if handle is None:
   1014       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015                            target_list, options, run_metadata)
   1016     else:
   1017       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1033         except KeyError:
   1034           pass
-> 1035       raise type(e)(node_def, op, message)
   1036 
   1037   def _extend_graph(self):

NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i600_l512.ckpt
	 [[Node: save/RestoreV2_18 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_18/tensor_names, save/RestoreV2_18/shape_and_slices)]]

Caused by op 'save/RestoreV2_18', defined at:
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/__main__.py", line 3, in <module>
    app.launch_new_instance()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 474, in start
    ioloop.IOLoop.instance().start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/ioloop.py", line 887, in start
    handler_func(fd_obj, events)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
    handler(stream, idents, msg)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 390, in execute_request
    user_expressions, allow_stdin)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 501, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
    if self.run_code(code, result):
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-57-7c4e18bbddc1>", line 2, in <module>
    samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
  File "<ipython-input-53-8eb787ae9642>", line 4, in sample
    saver = tf.train.Saver()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
    self.build()
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1070, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 675, in build
    restore_sequentially, reshape)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 242, in restore_op
    [spec.tensor.dtype])[0])
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
    dtypes=dtypes, name=name)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/tr/anaconda3/envs/tensorflow35/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/i600_l512.ckpt
	 [[Node: save/RestoreV2_18 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_18/tensor_names, save/RestoreV2_18/shape_and_slices)]]

In [ ]:
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)