TV Script Generation

In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.

Get the Data

The data is already provided for you. You'll be using a subset of the original dataset, consisting only of the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.


In [14]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore the copyright notice, since we don't use it for analyzing the data
text = text[81:]

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [15]:
view_sentence_range = (100, 110)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.248091603053435
Number of lines: 4257
Average number of words in each line: 11.50434578341555

The sentences 100 to 110:
Barney_Gumble: Wow, it really works.
HARV: (CHUCKLING) I'll be back.
Homer_Simpson: Moe, I haven't seen the place this crowded since the government cracked down on you for accepting food stamps. Do you think my drink had something to do with it?
Moe_Szyslak: Who can say? It's probably a combination of things.
Patron_#1: (TO MOE) Another pitcher of those amazing "Flaming Moe's".
Patron_#2: Boy, I hate this joint, but I love that drink.
Collette: Barkeep, I couldn't help noticing your sign.
Moe_Szyslak: The one that says, "Bartenders Do It 'Til You Barf"?
Collette: No, above that store-bought drollery.
Moe_Szyslak: Oh great! Why don't we fill out an application? (READING) I'll need your name, measurements and turn ons..

Implement Preprocessing Functions

The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:

  • Lookup Table
  • Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:

  • Dictionary to go from the words to an id, which we'll call vocab_to_int
  • Dictionary to go from the id to the word, which we'll call int_to_vocab

Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)


In [16]:
import numpy as np
import problem_unittests as tests
from collections import Counter

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # Order the vocabulary by frequency, most common word first
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)
    # Assign ids starting at 1; the two dicts are inverses of each other
    vocab_to_int = {word: i for i, word in enumerate(vocab, 1)}
    int_to_vocab = dict(enumerate(vocab, 1))
    return vocab_to_int, int_to_vocab


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)


Tests Passed
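
As a quick sanity check (a hypothetical snippet, not one of the graded cells), the two dictionaries should be exact inverses of each other:

# Toy corpus to illustrate the round trip through both lookup tables
words = 'moe gives homer a beer and homer thanks moe'.split()
vocab_to_int, int_to_vocab = create_lookup_tables(words)
assert all(int_to_vocab[vocab_to_int[word]] == word for word in words)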

Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points make it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:

  • Period ( . )
  • Comma ( , )
  • Quotation Mark ( " )
  • Semicolon ( ; )
  • Exclamation mark ( ! )
  • Question mark ( ? )
  • Left Parentheses ( ( )
  • Right Parentheses ( ) )
  • Dash ( -- )
  • Return ( \n )

This dictionary will be used to tokenize the symbols and add a delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".


In [17]:
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # Map each symbol to a distinctive token that can't be confused with a word
    tokenizer = {
        '.' : '||Period||',
        ',' : '||Comma||',
        '"' : '||Quotation_Mark||',
        ';' : '||Semicolon||',
        '!' : '||Exclamation_Mark||',
        '?' : '||Question_Mark||',
        '(' : '||Left_Parentheses||',
        ')' : '||Right_Parentheses||',
        '--' : '||Dash||',
        '\n': '||Return||'
    }
    return tokenizer

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)


Tests Passed
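
To see the effect, here is a hypothetical snippet applying the dictionary to one line of script the way the preprocessing helper does, padding each symbol with spaces:

line = 'Moe_Szyslak: (SIGHS) Wow, really?'
for symbol, token in token_lookup().items():
    line = line.replace(symbol, ' {} '.format(token))
print(line.split())
# ['Moe_Szyslak:', '||Left_Parentheses||', 'SIGHS', '||Right_Parentheses||',
#  'Wow', '||Comma||', 'really', '||Question_Mark||']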

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [18]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
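
For reference, helper.preprocess_and_save_data roughly chains the two functions you just wrote. A minimal sketch of a typical implementation (the actual helper.py may differ in details, and preprocess.p is the assumed output filename):

import pickle

def preprocess_sketch(data_dir, token_lookup, create_lookup_tables):
    text = helper.load_data(data_dir)[81:]  # skip the copyright notice
    # Pad each punctuation symbol with spaces so it splits into its own word
    for symbol, token in token_lookup().items():
        text = text.replace(symbol, ' {} '.format(token))
    words = text.lower().split()
    vocab_to_int, int_to_vocab = create_lookup_tables(words)
    int_text = [vocab_to_int[word] for word in words]
    pickle.dump((int_text, vocab_to_int, int_to_vocab, token_lookup()),
                open('preprocess.p', 'wb'))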

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()

Build the Neural Network

You'll build the components necessary for an RNN by implementing the following functions below:

  • get_inputs
  • get_init_cell
  • get_embed
  • build_rnn
  • build_nn
  • get_batches

Check the Version of TensorFlow and Access to GPU


In [2]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
print(tf.__version__)
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


1.1.0
TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Input

Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter.
  • Targets placeholder
  • Learning Rate placeholder

Return the placeholders in the following tuple (Input, Targets, LearningRate)


In [3]:
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # Both input and targets are [batch size, sequence length]; None lets the
    # shapes vary between training and generation
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='target')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)


Tests Passed

Build RNN Cell and Initialize

Stack one or more BasicLSTMCells in a MultiRNNCell.

  • The RNN size should be set using rnn_size
  • Initialize the cell state using the MultiRNNCell's zero_state() function
    • Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the following tuple (Cell, InitialState)


In [4]:
def build_rnn_cell(size):
    return tf.contrib.rnn.BasicLSTMCell(size, state_is_tuple=True)

In [5]:
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initial state)
    """
    # Stack two LSTM cells into a single MultiRNNCell
    num_layers = 2
    cell = tf.contrib.rnn.MultiRNNCell([build_rnn_cell(rnn_size) for _ in range(num_layers)])
    # Name the initial state so it can be looked up in the loaded graph later
    initial_state = cell.zero_state(batch_size, tf.float32)
    initial_state = tf.identity(initial_state, name='initial_state')

    return cell, initial_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)


Tests Passed
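
The tf.identity() naming matters because Python variable references don't survive reloading a saved graph, but tensor names do; after training, the generation code can recover the tensors by name. A minimal TF 1.x sketch (assuming a checkpoint saved under save_dir, as done at the end of this notebook):

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Restore the saved graph definition and trained weights
    loader = tf.train.import_meta_graph(save_dir + '.meta')
    loader.restore(sess, save_dir)
    # Named tensors can now be fetched directly
    input_text = loaded_graph.get_tensor_by_name('input:0')
    initial_state = loaded_graph.get_tensor_by_name('initial_state:0')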

Word Embedding

Apply embedding to input_data using TensorFlow. Return the embedded sequence.


In [6]:
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # Initialize the embedding matrix uniformly in [-1, 1) and look the ids up
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, input_data)
    return embed


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)


Tests Passed
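
A quick way to check the result (a hypothetical snippet): a [batch_size, seq_length] tensor of word ids should come back with a trailing embedding dimension.

with tf.Graph().as_default():
    ids = tf.placeholder(tf.int32, [None, None])
    embedded = get_embed(ids, vocab_size=100, embed_dim=16)
    print(embedded.get_shape().as_list())  # [None, None, 16]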

Build RNN

You created an RNN Cell in the get_init_cell() function. Time to use that cell to create an RNN.

Return the outputs and final state in the following tuple (Outputs, FinalState)


In [7]:
def build_rnn(cell, inputs):
    """
    Create an RNN using an RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # dynamic_rnn unrolls the cell over the full sequence length
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    # Name the final state so it can be looked up in the loaded graph later
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
  • Build RNN using cell and your build_rnn(cell, inputs) function.
  • Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)


In [29]:
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # Embed the input words, run them through the RNN, then project to vocabulary logits
    embedded = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embedded)
    logits = tf.contrib.layers.fully_connected(
        outputs, vocab_size, activation_fn=None,
        weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
        biases_initializer=tf.zeros_initializer())
    return logits, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)


Tests Passed
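
Because the fully connected layer is applied at every time step, the logits come out as [batch size, sequence length, vocab size], which is exactly what seq2seq.sequence_loss expects later. A hypothetical shape check:

with tf.Graph().as_default():
    ids = tf.placeholder(tf.int32, [None, None])
    cell, _ = get_init_cell(tf.shape(ids)[0], 128)
    logits, _ = build_nn(cell, 128, ids, vocab_size=100, embed_dim=16)
    print(logits.get_shape().as_list())  # [None, None, 100]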

Batches

Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:

  • The first element is a single batch of input with the shape [batch size, sequence length]
  • The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:

[
  # First Batch
  [
    # Batch of Input
    [[ 1  2  3], [ 7  8  9]],
    # Batch of targets
    [[ 2  3  4], [ 8  9 10]]
  ],

  # Second Batch
  [
    # Batch of Input
    [[ 4  5  6], [10 11 12]],
    # Batch of targets
    [[ 5  6  7], [11 12 13]]
  ]
]

In [30]:
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # Each batch consumes batch_size * seq_length words; leftover words are dropped
    slice_size = batch_size * seq_length
    n_batches = len(int_text) // slice_size
    # Targets are the inputs shifted one word to the right
    inputs = np.array(int_text[:n_batches * slice_size])
    targets = np.array(int_text[1:n_batches * slice_size + 1])
    # Reshape into batch_size rows, then slice seq_length columns per batch
    inputs = np.stack(np.split(inputs, batch_size))
    targets = np.stack(np.split(targets, batch_size))
    batches = []
    for b in range(n_batches):
        x = inputs[:, b * seq_length:(b + 1) * seq_length]
        y = targets[:, b * seq_length:(b + 1) * seq_length]
        batches.append([x, y])
    return np.array(batches)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)


Tests Passed
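
Feeding the worked example from above through the implementation confirms the expected (number of batches, 2, batch size, sequence length) shape:

test_batches = get_batches(list(range(1, 16)), 2, 3)
print(test_batches.shape)  # (2, 2, 2, 3)
print(test_batches[0][0])  # [[1 2 3] [7 8 9]]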

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set num_epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set embed_dim to the size of the embedding.
  • Set seq_length to the length of sequence.
  • Set learning_rate to the learning rate.
  • Set show_every_n_batches to how often, in batches, the neural network should print its training progress.

In [37]:
# Number of Epochs
num_epochs = 60
# Batch Size
batch_size = 20
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 10

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
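
With these settings, one epoch contains len(int_text) // (batch_size * seq_length) batches. The training log below shows 172 batches per epoch, so the tokenized Moe's Tavern subset holds roughly 172 * 20 * 20 ≈ 69,000 word tokens.

# Roughly how many batches each epoch will contain (172 here)
print(len(int_text) // (batch_size * seq_length))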

Build the Graph

Build the graph using the neural network you implemented.


In [38]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
    train_op = optimizer.apply_gradients(capped_gradients)

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [39]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{}   train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')


Epoch   0 Batch    0/172   train_loss = 8.826
Epoch   0 Batch   10/172   train_loss = 6.780
Epoch   0 Batch   20/172   train_loss = 6.451
Epoch   0 Batch   30/172   train_loss = 6.073
Epoch   0 Batch   40/172   train_loss = 5.579
Epoch   0 Batch   50/172   train_loss = 5.733
Epoch   0 Batch   60/172   train_loss = 5.429
Epoch   0 Batch   70/172   train_loss = 5.585
Epoch   0 Batch   80/172   train_loss = 5.554
Epoch   0 Batch   90/172   train_loss = 5.373
Epoch   0 Batch  100/172   train_loss = 5.309
Epoch   0 Batch  110/172   train_loss = 5.449
Epoch   0 Batch  120/172   train_loss = 5.297
Epoch   0 Batch  130/172   train_loss = 5.469
Epoch   0 Batch  140/172   train_loss = 5.389
Epoch   0 Batch  150/172   train_loss = 5.485
Epoch   0 Batch  160/172   train_loss = 5.638
Epoch   0 Batch  170/172   train_loss = 5.390
Epoch   1 Batch    8/172   train_loss = 4.636
Epoch   1 Batch   18/172   train_loss = 4.777
Epoch   1 Batch   28/172   train_loss = 5.355
Epoch   1 Batch   38/172   train_loss = 5.393
Epoch   1 Batch   48/172   train_loss = 4.968
Epoch   1 Batch   58/172   train_loss = 5.041
Epoch   1 Batch   68/172   train_loss = 5.073
Epoch   1 Batch   78/172   train_loss = 5.004
Epoch   1 Batch   88/172   train_loss = 4.978
Epoch   1 Batch   98/172   train_loss = 4.745
Epoch   1 Batch  108/172   train_loss = 5.061
Epoch   1 Batch  118/172   train_loss = 4.773
Epoch   1 Batch  128/172   train_loss = 4.954
Epoch   1 Batch  138/172   train_loss = 4.711
Epoch   1 Batch  148/172   train_loss = 4.729
Epoch   1 Batch  158/172   train_loss = 5.072
Epoch   1 Batch  168/172   train_loss = 4.690
Epoch   2 Batch    6/172   train_loss = 4.579
Epoch   2 Batch   16/172   train_loss = 4.862
Epoch   2 Batch   26/172   train_loss = 4.626
Epoch   2 Batch   36/172   train_loss = 4.912
Epoch   2 Batch   46/172   train_loss = 4.679
Epoch   2 Batch   56/172   train_loss = 4.549
Epoch   2 Batch   66/172   train_loss = 4.778
Epoch   2 Batch   76/172   train_loss = 4.446
Epoch   2 Batch   86/172   train_loss = 4.469
Epoch   2 Batch   96/172   train_loss = 4.479
Epoch   2 Batch  106/172   train_loss = 4.395
Epoch   2 Batch  116/172   train_loss = 4.668
Epoch   2 Batch  126/172   train_loss = 4.430
Epoch   2 Batch  136/172   train_loss = 4.526
Epoch   2 Batch  146/172   train_loss = 4.416
Epoch   2 Batch  156/172   train_loss = 4.477
Epoch   2 Batch  166/172   train_loss = 4.288
Epoch   3 Batch    4/172   train_loss = 4.495
Epoch   3 Batch   14/172   train_loss = 4.595
Epoch   3 Batch   24/172   train_loss = 4.456
Epoch   3 Batch   34/172   train_loss = 4.203
Epoch   3 Batch   44/172   train_loss = 4.281
Epoch   3 Batch   54/172   train_loss = 4.109
Epoch   3 Batch   64/172   train_loss = 4.187
Epoch   3 Batch   74/172   train_loss = 4.221
Epoch   3 Batch   84/172   train_loss = 4.474
Epoch   3 Batch   94/172   train_loss = 4.259
Epoch   3 Batch  104/172   train_loss = 3.942
Epoch   3 Batch  114/172   train_loss = 4.377
Epoch   3 Batch  124/172   train_loss = 4.020
Epoch   3 Batch  134/172   train_loss = 4.243
Epoch   3 Batch  144/172   train_loss = 3.948
Epoch   3 Batch  154/172   train_loss = 4.098
Epoch   3 Batch  164/172   train_loss = 3.991
Epoch   4 Batch    2/172   train_loss = 4.048
Epoch   4 Batch   12/172   train_loss = 4.038
Epoch   4 Batch   22/172   train_loss = 4.215
Epoch   4 Batch   32/172   train_loss = 3.701
Epoch   4 Batch   42/172   train_loss = 4.113
Epoch   4 Batch   52/172   train_loss = 3.962
Epoch   4 Batch   62/172   train_loss = 3.811
Epoch   4 Batch   72/172   train_loss = 3.898
Epoch   4 Batch   82/172   train_loss = 3.996
Epoch   4 Batch   92/172   train_loss = 4.018
Epoch   4 Batch  102/172   train_loss = 3.829
Epoch   4 Batch  112/172   train_loss = 3.939
Epoch   4 Batch  122/172   train_loss = 4.152
Epoch   4 Batch  132/172   train_loss = 3.995
Epoch   4 Batch  142/172   train_loss = 3.639
Epoch   4 Batch  152/172   train_loss = 3.946
Epoch   4 Batch  162/172   train_loss = 4.002
Epoch   5 Batch    0/172   train_loss = 3.940
Epoch   5 Batch   10/172   train_loss = 3.782
Epoch   5 Batch   20/172   train_loss = 3.799
Epoch   5 Batch   30/172   train_loss = 3.714
Epoch   5 Batch   40/172   train_loss = 3.456
Epoch   5 Batch   50/172   train_loss = 3.683
Epoch   5 Batch   60/172   train_loss = 3.590
Epoch   5 Batch   70/172   train_loss = 3.790
Epoch   5 Batch   80/172   train_loss = 3.728
Epoch   5 Batch   90/172   train_loss = 3.635
Epoch   5 Batch  100/172   train_loss = 3.594
Epoch   5 Batch  110/172   train_loss = 3.530
Epoch   5 Batch  120/172   train_loss = 3.573
Epoch   5 Batch  130/172   train_loss = 3.711
Epoch   5 Batch  140/172   train_loss = 3.557
Epoch   5 Batch  150/172   train_loss = 3.596
Epoch   5 Batch  160/172   train_loss = 3.658
Epoch   5 Batch  170/172   train_loss = 3.650
Epoch   6 Batch    8/172   train_loss = 3.351
Epoch   6 Batch   18/172   train_loss = 3.319
Epoch   6 Batch   28/172   train_loss = 3.644
Epoch   6 Batch   38/172   train_loss = 3.662
Epoch   6 Batch   48/172   train_loss = 3.245
Epoch   6 Batch   58/172   train_loss = 3.547
Epoch   6 Batch   68/172   train_loss = 3.399
Epoch   6 Batch   78/172   train_loss = 3.513
Epoch   6 Batch   88/172   train_loss = 3.382
Epoch   6 Batch   98/172   train_loss = 3.311
Epoch   6 Batch  108/172   train_loss = 3.505
Epoch   6 Batch  118/172   train_loss = 3.300
Epoch   6 Batch  128/172   train_loss = 3.370
Epoch   6 Batch  138/172   train_loss = 3.265
Epoch   6 Batch  148/172   train_loss = 3.066
Epoch   6 Batch  158/172   train_loss = 3.324
Epoch   6 Batch  168/172   train_loss = 3.320
Epoch   7 Batch    6/172   train_loss = 3.179
Epoch   7 Batch   16/172   train_loss = 3.313
Epoch   7 Batch   26/172   train_loss = 3.223
Epoch   7 Batch   36/172   train_loss = 3.407
Epoch   7 Batch   46/172   train_loss = 3.270
Epoch   7 Batch   56/172   train_loss = 3.015
Epoch   7 Batch   66/172   train_loss = 3.209
Epoch   7 Batch   76/172   train_loss = 3.025
Epoch   7 Batch   86/172   train_loss = 3.173
Epoch   7 Batch   96/172   train_loss = 3.067
Epoch   7 Batch  106/172   train_loss = 2.967
Epoch   7 Batch  116/172   train_loss = 3.120
Epoch   7 Batch  126/172   train_loss = 2.968
Epoch   7 Batch  136/172   train_loss = 2.941
Epoch   7 Batch  146/172   train_loss = 2.904
Epoch   7 Batch  156/172   train_loss = 2.916
Epoch   7 Batch  166/172   train_loss = 2.781
Epoch   8 Batch    4/172   train_loss = 2.871
Epoch   8 Batch   14/172   train_loss = 3.032
Epoch   8 Batch   24/172   train_loss = 2.982
Epoch   8 Batch   34/172   train_loss = 2.775
Epoch   8 Batch   44/172   train_loss = 2.909
Epoch   8 Batch   54/172   train_loss = 2.796
Epoch   8 Batch   64/172   train_loss = 2.780
Epoch   8 Batch   74/172   train_loss = 2.690
Epoch   8 Batch   84/172   train_loss = 2.905
Epoch   8 Batch   94/172   train_loss = 2.801
Epoch   8 Batch  104/172   train_loss = 2.689
Epoch   8 Batch  114/172   train_loss = 2.987
Epoch   8 Batch  124/172   train_loss = 2.672
Epoch   8 Batch  134/172   train_loss = 2.741
Epoch   8 Batch  144/172   train_loss = 2.535
Epoch   8 Batch  154/172   train_loss = 2.523
Epoch   8 Batch  164/172   train_loss = 2.681
Epoch   9 Batch    2/172   train_loss = 2.605
Epoch   9 Batch   12/172   train_loss = 2.650
Epoch   9 Batch   22/172   train_loss = 2.488
Epoch   9 Batch   32/172   train_loss = 2.547
Epoch   9 Batch   42/172   train_loss = 2.621
Epoch   9 Batch   52/172   train_loss = 2.570
Epoch   9 Batch   62/172   train_loss = 2.482
Epoch   9 Batch   72/172   train_loss = 2.432
Epoch   9 Batch   82/172   train_loss = 2.534
Epoch   9 Batch   92/172   train_loss = 2.651
Epoch   9 Batch  102/172   train_loss = 2.397
Epoch   9 Batch  112/172   train_loss = 2.432
Epoch   9 Batch  122/172   train_loss = 2.536
Epoch   9 Batch  132/172   train_loss = 2.350
Epoch   9 Batch  142/172   train_loss = 2.402
Epoch   9 Batch  152/172   train_loss = 2.326
Epoch   9 Batch  162/172   train_loss = 2.384
Epoch  10 Batch    0/172   train_loss = 2.369
Epoch  10 Batch   10/172   train_loss = 2.357
Epoch  10 Batch   20/172   train_loss = 2.217
Epoch  10 Batch   30/172   train_loss = 2.138
Epoch  10 Batch   40/172   train_loss = 2.254
Epoch  10 Batch   50/172   train_loss = 2.267
Epoch  10 Batch   60/172   train_loss = 2.249
Epoch  10 Batch   70/172   train_loss = 2.302
Epoch  10 Batch   80/172   train_loss = 2.188
Epoch  10 Batch   90/172   train_loss = 2.212
Epoch  10 Batch  100/172   train_loss = 2.293
Epoch  10 Batch  110/172   train_loss = 1.991
Epoch  10 Batch  120/172   train_loss = 2.141
Epoch  10 Batch  130/172   train_loss = 2.190
Epoch  10 Batch  140/172   train_loss = 2.077
Epoch  10 Batch  150/172   train_loss = 2.021
Epoch  10 Batch  160/172   train_loss = 2.043
Epoch  10 Batch  170/172   train_loss = 2.138
Epoch  11 Batch    8/172   train_loss = 2.236
Epoch  11 Batch   18/172   train_loss = 2.168
Epoch  11 Batch   28/172   train_loss = 2.015
Epoch  11 Batch   38/172   train_loss = 2.062
Epoch  11 Batch   48/172   train_loss = 1.844
Epoch  11 Batch   58/172   train_loss = 2.090
Epoch  11 Batch   68/172   train_loss = 1.821
Epoch  11 Batch   78/172   train_loss = 1.952
Epoch  11 Batch   88/172   train_loss = 1.859
Epoch  11 Batch   98/172   train_loss = 1.976
Epoch  11 Batch  108/172   train_loss = 2.023
Epoch  11 Batch  118/172   train_loss = 1.882
Epoch  11 Batch  128/172   train_loss = 1.993
Epoch  11 Batch  138/172   train_loss = 1.908
Epoch  11 Batch  148/172   train_loss = 1.661
Epoch  11 Batch  158/172   train_loss = 1.672
Epoch  11 Batch  168/172   train_loss = 1.752
Epoch  12 Batch    6/172   train_loss = 1.833
Epoch  12 Batch   16/172   train_loss = 1.762
Epoch  12 Batch   26/172   train_loss = 1.810
Epoch  12 Batch   36/172   train_loss = 1.721
Epoch  12 Batch   46/172   train_loss = 1.863
Epoch  12 Batch   56/172   train_loss = 1.507
Epoch  12 Batch   66/172   train_loss = 1.618
Epoch  12 Batch   76/172   train_loss = 1.596
Epoch  12 Batch   86/172   train_loss = 1.713
Epoch  12 Batch   96/172   train_loss = 1.717
Epoch  12 Batch  106/172   train_loss = 1.629
Epoch  12 Batch  116/172   train_loss = 1.570
Epoch  12 Batch  126/172   train_loss = 1.680
Epoch  12 Batch  136/172   train_loss = 1.611
Epoch  12 Batch  146/172   train_loss = 1.658
Epoch  12 Batch  156/172   train_loss = 1.589
Epoch  12 Batch  166/172   train_loss = 1.443
Epoch  13 Batch    4/172   train_loss = 1.448
Epoch  13 Batch   14/172   train_loss = 1.632
Epoch  13 Batch   24/172   train_loss = 1.529
Epoch  13 Batch   34/172   train_loss = 1.412
Epoch  13 Batch   44/172   train_loss = 1.513
Epoch  13 Batch   54/172   train_loss = 1.548
Epoch  13 Batch   64/172   train_loss = 1.414
Epoch  13 Batch   74/172   train_loss = 1.329
Epoch  13 Batch   84/172   train_loss = 1.428
Epoch  13 Batch   94/172   train_loss = 1.418
Epoch  13 Batch  104/172   train_loss = 1.397
Epoch  13 Batch  114/172   train_loss = 1.545
Epoch  13 Batch  124/172   train_loss = 1.407
Epoch  13 Batch  134/172   train_loss = 1.331
Epoch  13 Batch  144/172   train_loss = 1.380
Epoch  13 Batch  154/172   train_loss = 1.326
Epoch  13 Batch  164/172   train_loss = 1.298
Epoch  14 Batch    2/172   train_loss = 1.275
Epoch  14 Batch   12/172   train_loss = 1.296
Epoch  14 Batch   22/172   train_loss = 1.335
Epoch  14 Batch   32/172   train_loss = 1.379
Epoch  14 Batch   42/172   train_loss = 1.296
Epoch  14 Batch   52/172   train_loss = 1.282
Epoch  14 Batch   62/172   train_loss = 1.350
Epoch  14 Batch   72/172   train_loss = 1.183
Epoch  14 Batch   82/172   train_loss = 1.271
Epoch  14 Batch   92/172   train_loss = 1.190
Epoch  14 Batch  102/172   train_loss = 1.198
Epoch  14 Batch  112/172   train_loss = 1.104
Epoch  14 Batch  122/172   train_loss = 1.088
Epoch  14 Batch  132/172   train_loss = 1.110
Epoch  14 Batch  142/172   train_loss = 1.180
Epoch  14 Batch  152/172   train_loss = 0.957
Epoch  14 Batch  162/172   train_loss = 1.112
Epoch  15 Batch    0/172   train_loss = 1.121
Epoch  15 Batch   10/172   train_loss = 1.115
Epoch  15 Batch   20/172   train_loss = 1.104
Epoch  15 Batch   30/172   train_loss = 1.044
Epoch  15 Batch   40/172   train_loss = 1.087
Epoch  15 Batch   50/172   train_loss = 1.083
Epoch  15 Batch   60/172   train_loss = 1.039
Epoch  15 Batch   70/172   train_loss = 1.153
Epoch  15 Batch   80/172   train_loss = 0.998
Epoch  15 Batch   90/172   train_loss = 1.008
Epoch  15 Batch  100/172   train_loss = 1.226
Epoch  15 Batch  110/172   train_loss = 0.878
Epoch  15 Batch  120/172   train_loss = 1.005
Epoch  15 Batch  130/172   train_loss = 0.973
Epoch  15 Batch  140/172   train_loss = 0.999
Epoch  15 Batch  150/172   train_loss = 0.937
Epoch  15 Batch  160/172   train_loss = 0.914
Epoch  15 Batch  170/172   train_loss = 0.996
Epoch  16 Batch    8/172   train_loss = 1.027
Epoch  16 Batch   18/172   train_loss = 1.076
Epoch  16 Batch   28/172   train_loss = 0.921
Epoch  16 Batch   38/172   train_loss = 1.035
Epoch  16 Batch   48/172   train_loss = 0.902
Epoch  16 Batch   58/172   train_loss = 0.854
Epoch  16 Batch   68/172   train_loss = 0.829
Epoch  16 Batch   78/172   train_loss = 0.775
Epoch  16 Batch   88/172   train_loss = 0.791
Epoch  16 Batch   98/172   train_loss = 0.951
Epoch  16 Batch  108/172   train_loss = 0.883
Epoch  16 Batch  118/172   train_loss = 0.805
Epoch  16 Batch  128/172   train_loss = 0.858
Epoch  16 Batch  138/172   train_loss = 0.836
Epoch  16 Batch  148/172   train_loss = 0.718
Epoch  16 Batch  158/172   train_loss = 0.734
Epoch  16 Batch  168/172   train_loss = 0.842
Epoch  17 Batch    6/172   train_loss = 0.845
Epoch  17 Batch   16/172   train_loss = 0.737
Epoch  17 Batch   26/172   train_loss = 0.822
Epoch  17 Batch   36/172   train_loss = 0.772
Epoch  17 Batch   46/172   train_loss = 0.844
Epoch  17 Batch   56/172   train_loss = 0.625
Epoch  17 Batch   66/172   train_loss = 0.664
Epoch  17 Batch   76/172   train_loss = 0.687
Epoch  17 Batch   86/172   train_loss = 0.643
Epoch  17 Batch   96/172   train_loss = 0.730
Epoch  17 Batch  106/172   train_loss = 0.667
Epoch  17 Batch  116/172   train_loss = 0.614
Epoch  17 Batch  126/172   train_loss = 0.751
Epoch  17 Batch  136/172   train_loss = 0.666
Epoch  17 Batch  146/172   train_loss = 0.724
Epoch  17 Batch  156/172   train_loss = 0.661
Epoch  17 Batch  166/172   train_loss = 0.623
Epoch  18 Batch    4/172   train_loss = 0.567
Epoch  18 Batch   14/172   train_loss = 0.605
Epoch  18 Batch   24/172   train_loss = 0.606
Epoch  18 Batch   34/172   train_loss = 0.524
Epoch  18 Batch   44/172   train_loss = 0.570
Epoch  18 Batch   54/172   train_loss = 0.635
Epoch  18 Batch   64/172   train_loss = 0.566
Epoch  18 Batch   74/172   train_loss = 0.574
Epoch  18 Batch   84/172   train_loss = 0.596
Epoch  18 Batch   94/172   train_loss = 0.520
Epoch  18 Batch  104/172   train_loss = 0.554
Epoch  18 Batch  114/172   train_loss = 0.551
Epoch  18 Batch  124/172   train_loss = 0.576
Epoch  18 Batch  134/172   train_loss = 0.475
Epoch  18 Batch  144/172   train_loss = 0.563
Epoch  18 Batch  154/172   train_loss = 0.521
Epoch  18 Batch  164/172   train_loss = 0.550
Epoch  19 Batch    2/172   train_loss = 0.455
Epoch  19 Batch   12/172   train_loss = 0.452
Epoch  19 Batch   22/172   train_loss = 0.468
Epoch  19 Batch   32/172   train_loss = 0.523
Epoch  19 Batch   42/172   train_loss = 0.554
Epoch  19 Batch   52/172   train_loss = 0.460
Epoch  19 Batch   62/172   train_loss = 0.526
Epoch  19 Batch   72/172   train_loss = 0.400
Epoch  19 Batch   82/172   train_loss = 0.449
Epoch  19 Batch   92/172   train_loss = 0.498
Epoch  19 Batch  102/172   train_loss = 0.407
Epoch  19 Batch  112/172   train_loss = 0.433
Epoch  19 Batch  122/172   train_loss = 0.445
Epoch  19 Batch  132/172   train_loss = 0.409
Epoch  19 Batch  142/172   train_loss = 0.407
Epoch  19 Batch  152/172   train_loss = 0.375
Epoch  19 Batch  162/172   train_loss = 0.416
Epoch  20 Batch    0/172   train_loss = 0.456
Epoch  20 Batch   10/172   train_loss = 0.398
Epoch  20 Batch   20/172   train_loss = 0.399
Epoch  20 Batch   30/172   train_loss = 0.308
Epoch  20 Batch   40/172   train_loss = 0.416
Epoch  20 Batch   50/172   train_loss = 0.409
Epoch  20 Batch   60/172   train_loss = 0.417
Epoch  20 Batch   70/172   train_loss = 0.432
Epoch  20 Batch   80/172   train_loss = 0.359
Epoch  20 Batch   90/172   train_loss = 0.355
Epoch  20 Batch  100/172   train_loss = 0.457
Epoch  20 Batch  110/172   train_loss = 0.309
Epoch  20 Batch  120/172   train_loss = 0.391
Epoch  20 Batch  130/172   train_loss = 0.395
Epoch  20 Batch  140/172   train_loss = 0.397
Epoch  20 Batch  150/172   train_loss = 0.362
Epoch  20 Batch  160/172   train_loss = 0.366
Epoch  20 Batch  170/172   train_loss = 0.333
Epoch  21 Batch    8/172   train_loss = 0.393
Epoch  21 Batch   18/172   train_loss = 0.400
Epoch  21 Batch   28/172   train_loss = 0.331
Epoch  21 Batch   38/172   train_loss = 0.372
Epoch  21 Batch   48/172   train_loss = 0.322
Epoch  21 Batch   58/172   train_loss = 0.303
Epoch  21 Batch   68/172   train_loss = 0.337
Epoch  21 Batch   78/172   train_loss = 0.309
Epoch  21 Batch   88/172   train_loss = 0.303
Epoch  21 Batch   98/172   train_loss = 0.372
Epoch  21 Batch  108/172   train_loss = 0.302
Epoch  21 Batch  118/172   train_loss = 0.337
Epoch  21 Batch  128/172   train_loss = 0.334
Epoch  21 Batch  138/172   train_loss = 0.300
Epoch  21 Batch  148/172   train_loss = 0.304
Epoch  21 Batch  158/172   train_loss = 0.303
Epoch  21 Batch  168/172   train_loss = 0.374
Epoch  22 Batch    6/172   train_loss = 0.382
Epoch  22 Batch   16/172   train_loss = 0.297
Epoch  22 Batch   26/172   train_loss = 0.369
Epoch  22 Batch   36/172   train_loss = 0.303
Epoch  22 Batch   46/172   train_loss = 0.326
Epoch  22 Batch   56/172   train_loss = 0.315
Epoch  22 Batch   66/172   train_loss = 0.322
Epoch  22 Batch   76/172   train_loss = 0.302
Epoch  22 Batch   86/172   train_loss = 0.320
Epoch  22 Batch   96/172   train_loss = 0.330
Epoch  22 Batch  106/172   train_loss = 0.277
Epoch  22 Batch  116/172   train_loss = 0.286
Epoch  22 Batch  126/172   train_loss = 0.335
Epoch  22 Batch  136/172   train_loss = 0.260
Epoch  22 Batch  146/172   train_loss = 0.389
Epoch  22 Batch  156/172   train_loss = 0.280
Epoch  22 Batch  166/172   train_loss = 0.294
Epoch  23 Batch    4/172   train_loss = 0.269
Epoch  23 Batch   14/172   train_loss = 0.260
Epoch  23 Batch   24/172   train_loss = 0.284
Epoch  23 Batch   34/172   train_loss = 0.217
Epoch  23 Batch   44/172   train_loss = 0.230
Epoch  23 Batch   54/172   train_loss = 0.258
Epoch  23 Batch   64/172   train_loss = 0.268
Epoch  23 Batch   74/172   train_loss = 0.296
Epoch  23 Batch   84/172   train_loss = 0.302
Epoch  23 Batch   94/172   train_loss = 0.263
Epoch  23 Batch  104/172   train_loss = 0.250
Epoch  23 Batch  114/172   train_loss = 0.240
Epoch  23 Batch  124/172   train_loss = 0.244
Epoch  23 Batch  134/172   train_loss = 0.234
Epoch  23 Batch  144/172   train_loss = 0.299
Epoch  23 Batch  154/172   train_loss = 0.287
Epoch  23 Batch  164/172   train_loss = 0.283
Epoch  24 Batch    2/172   train_loss = 0.258
Epoch  24 Batch   12/172   train_loss = 0.221
Epoch  24 Batch   22/172   train_loss = 0.237
Epoch  24 Batch   32/172   train_loss = 0.261
Epoch  24 Batch   42/172   train_loss = 0.270
Epoch  24 Batch   52/172   train_loss = 0.207
Epoch  24 Batch   62/172   train_loss = 0.286
Epoch  24 Batch   72/172   train_loss = 0.202
Epoch  24 Batch   82/172   train_loss = 0.254
Epoch  24 Batch   92/172   train_loss = 0.276
Epoch  24 Batch  102/172   train_loss = 0.221
Epoch  24 Batch  112/172   train_loss = 0.237
Epoch  24 Batch  122/172   train_loss = 0.253
Epoch  24 Batch  132/172   train_loss = 0.227
Epoch  24 Batch  142/172   train_loss = 0.215
Epoch  24 Batch  152/172   train_loss = 0.207
Epoch  24 Batch  162/172   train_loss = 0.226
Epoch  25 Batch    0/172   train_loss = 0.248
Epoch  25 Batch   10/172   train_loss = 0.220
Epoch  25 Batch   20/172   train_loss = 0.254
Epoch  25 Batch   30/172   train_loss = 0.174
Epoch  25 Batch   40/172   train_loss = 0.216
Epoch  25 Batch   50/172   train_loss = 0.247
Epoch  25 Batch   60/172   train_loss = 0.208
Epoch  25 Batch   70/172   train_loss = 0.289
Epoch  25 Batch   80/172   train_loss = 0.223
Epoch  25 Batch   90/172   train_loss = 0.219
Epoch  25 Batch  100/172   train_loss = 0.278
Epoch  25 Batch  110/172   train_loss = 0.167
Epoch  25 Batch  120/172   train_loss = 0.245
Epoch  25 Batch  130/172   train_loss = 0.226
Epoch  25 Batch  140/172   train_loss = 0.245
Epoch  25 Batch  150/172   train_loss = 0.238
Epoch  25 Batch  160/172   train_loss = 0.204
Epoch  25 Batch  170/172   train_loss = 0.179
Epoch  26 Batch    8/172   train_loss = 0.237
Epoch  26 Batch   18/172   train_loss = 0.277
Epoch  26 Batch   28/172   train_loss = 0.215
Epoch  26 Batch   38/172   train_loss = 0.262
Epoch  26 Batch   48/172   train_loss = 0.203
Epoch  26 Batch   58/172   train_loss = 0.171
Epoch  26 Batch   68/172   train_loss = 0.219
Epoch  26 Batch   78/172   train_loss = 0.188
Epoch  26 Batch   88/172   train_loss = 0.194
Epoch  26 Batch   98/172   train_loss = 0.268
Epoch  26 Batch  108/172   train_loss = 0.199
Epoch  26 Batch  118/172   train_loss = 0.226
Epoch  26 Batch  128/172   train_loss = 0.218
Epoch  26 Batch  138/172   train_loss = 0.214
Epoch  26 Batch  148/172   train_loss = 0.205
Epoch  26 Batch  158/172   train_loss = 0.212
Epoch  26 Batch  168/172   train_loss = 0.260
Epoch  27 Batch    6/172   train_loss = 0.285
Epoch  27 Batch   16/172   train_loss = 0.203
Epoch  27 Batch   26/172   train_loss = 0.291
Epoch  27 Batch   36/172   train_loss = 0.212
Epoch  27 Batch   46/172   train_loss = 0.234
Epoch  27 Batch   56/172   train_loss = 0.257
Epoch  27 Batch   66/172   train_loss = 0.226
Epoch  27 Batch   76/172   train_loss = 0.274
Epoch  27 Batch   86/172   train_loss = 0.203
Epoch  27 Batch   96/172   train_loss = 0.238
Epoch  27 Batch  106/172   train_loss = 0.190
Epoch  27 Batch  116/172   train_loss = 0.221
Epoch  27 Batch  126/172   train_loss = 0.245
Epoch  27 Batch  136/172   train_loss = 0.196
Epoch  27 Batch  146/172   train_loss = 0.283
Epoch  27 Batch  156/172   train_loss = 0.233
Epoch  27 Batch  166/172   train_loss = 0.218
Epoch  28 Batch    4/172   train_loss = 0.200
Epoch  28 Batch   14/172   train_loss = 0.223
Epoch  28 Batch   24/172   train_loss = 0.217
Epoch  28 Batch   34/172   train_loss = 0.149
Epoch  28 Batch   44/172   train_loss = 0.179
Epoch  28 Batch   54/172   train_loss = 0.203
Epoch  28 Batch   64/172   train_loss = 0.203
Epoch  28 Batch   74/172   train_loss = 0.223
Epoch  28 Batch   84/172   train_loss = 0.248
Epoch  28 Batch   94/172   train_loss = 0.192
Epoch  28 Batch  104/172   train_loss = 0.186
Epoch  28 Batch  114/172   train_loss = 0.194
Epoch  28 Batch  124/172   train_loss = 0.194
Epoch  28 Batch  134/172   train_loss = 0.178
Epoch  28 Batch  144/172   train_loss = 0.231
Epoch  28 Batch  154/172   train_loss = 0.231
Epoch  28 Batch  164/172   train_loss = 0.241
Epoch  29 Batch    2/172   train_loss = 0.203
Epoch  29 Batch   12/172   train_loss = 0.176
Epoch  29 Batch   22/172   train_loss = 0.196
Epoch  29 Batch   32/172   train_loss = 0.219
Epoch  29 Batch   42/172   train_loss = 0.232
Epoch  29 Batch   52/172   train_loss = 0.178
Epoch  29 Batch   62/172   train_loss = 0.241
Epoch  29 Batch   72/172   train_loss = 0.164
Epoch  29 Batch   82/172   train_loss = 0.197
Epoch  29 Batch   92/172   train_loss = 0.237
Epoch  29 Batch  102/172   train_loss = 0.180
Epoch  29 Batch  112/172   train_loss = 0.199
Epoch  29 Batch  122/172   train_loss = 0.220
Epoch  29 Batch  132/172   train_loss = 0.164
Epoch  29 Batch  142/172   train_loss = 0.154
Epoch  29 Batch  152/172   train_loss = 0.181
Epoch  29 Batch  162/172   train_loss = 0.217
Epoch  30 Batch    0/172   train_loss = 0.241
Epoch  30 Batch   10/172   train_loss = 0.199
Epoch  30 Batch   20/172   train_loss = 0.213
Epoch  30 Batch   30/172   train_loss = 0.146
Epoch  30 Batch   40/172   train_loss = 0.182
Epoch  30 Batch   50/172   train_loss = 0.234
Epoch  30 Batch   60/172   train_loss = 0.192
Epoch  30 Batch   70/172   train_loss = 0.254
Epoch  30 Batch   80/172   train_loss = 0.206
Epoch  30 Batch   90/172   train_loss = 0.186
Epoch  30 Batch  100/172   train_loss = 0.256
Epoch  30 Batch  110/172   train_loss = 0.139
Epoch  30 Batch  120/172   train_loss = 0.206
Epoch  30 Batch  130/172   train_loss = 0.206
Epoch  30 Batch  140/172   train_loss = 0.201
Epoch  30 Batch  150/172   train_loss = 0.223
Epoch  30 Batch  160/172   train_loss = 0.181
Epoch  30 Batch  170/172   train_loss = 0.198
Epoch  31 Batch    8/172   train_loss = 0.215
Epoch  31 Batch   18/172   train_loss = 0.230
Epoch  31 Batch   28/172   train_loss = 0.202
Epoch  31 Batch   38/172   train_loss = 0.246
Epoch  31 Batch   48/172   train_loss = 0.179
Epoch  31 Batch   58/172   train_loss = 0.161
Epoch  31 Batch   68/172   train_loss = 0.211
Epoch  31 Batch   78/172   train_loss = 0.181
Epoch  31 Batch   88/172   train_loss = 0.182
Epoch  31 Batch   98/172   train_loss = 0.257
Epoch  31 Batch  108/172   train_loss = 0.175
Epoch  31 Batch  118/172   train_loss = 0.214
Epoch  31 Batch  128/172   train_loss = 0.195
Epoch  31 Batch  138/172   train_loss = 0.183
Epoch  31 Batch  148/172   train_loss = 0.197
Epoch  31 Batch  158/172   train_loss = 0.200
Epoch  31 Batch  168/172   train_loss = 0.234
Epoch  32 Batch    6/172   train_loss = 0.255
Epoch  32 Batch   16/172   train_loss = 0.198
Epoch  32 Batch   26/172   train_loss = 0.259
Epoch  32 Batch   36/172   train_loss = 0.190
Epoch  32 Batch   46/172   train_loss = 0.220
Epoch  32 Batch   56/172   train_loss = 0.246
Epoch  32 Batch   66/172   train_loss = 0.236
Epoch  32 Batch   76/172   train_loss = 0.232
Epoch  32 Batch   86/172   train_loss = 0.190
Epoch  32 Batch   96/172   train_loss = 0.216
Epoch  32 Batch  106/172   train_loss = 0.179
Epoch  32 Batch  116/172   train_loss = 0.220
Epoch  32 Batch  126/172   train_loss = 0.258
Epoch  32 Batch  136/172   train_loss = 0.171
Epoch  32 Batch  146/172   train_loss = 0.280
Epoch  32 Batch  156/172   train_loss = 0.192
Epoch  32 Batch  166/172   train_loss = 0.207
Epoch  33 Batch    4/172   train_loss = 0.176
Epoch  33 Batch   14/172   train_loss = 0.200
Epoch  33 Batch   24/172   train_loss = 0.206
Epoch  33 Batch   34/172   train_loss = 0.176
Epoch  33 Batch   44/172   train_loss = 0.172
Epoch  33 Batch   54/172   train_loss = 0.183
Epoch  33 Batch   64/172   train_loss = 0.188
Epoch  33 Batch   74/172   train_loss = 0.223
Epoch  33 Batch   84/172   train_loss = 0.243
Epoch  33 Batch   94/172   train_loss = 0.208
Epoch  33 Batch  104/172   train_loss = 0.175
Epoch  33 Batch  114/172   train_loss = 0.186
Epoch  33 Batch  124/172   train_loss = 0.184
Epoch  33 Batch  134/172   train_loss = 0.182
Epoch  33 Batch  144/172   train_loss = 0.215
Epoch  33 Batch  154/172   train_loss = 0.204
Epoch  33 Batch  164/172   train_loss = 0.245
Epoch  34 Batch    2/172   train_loss = 0.200
Epoch  34 Batch   12/172   train_loss = 0.141
Epoch  34 Batch   22/172   train_loss = 0.180
Epoch  34 Batch   32/172   train_loss = 0.190
Epoch  34 Batch   42/172   train_loss = 0.222
Epoch  34 Batch   52/172   train_loss = 0.189
Epoch  34 Batch   62/172   train_loss = 0.236
Epoch  34 Batch   72/172   train_loss = 0.161
Epoch  34 Batch   82/172   train_loss = 0.190
Epoch  34 Batch   92/172   train_loss = 0.228
Epoch  34 Batch  102/172   train_loss = 0.168
Epoch  34 Batch  112/172   train_loss = 0.182
Epoch  34 Batch  122/172   train_loss = 0.187
Epoch  34 Batch  132/172   train_loss = 0.173
Epoch  34 Batch  142/172   train_loss = 0.147
Epoch  34 Batch  152/172   train_loss = 0.161
Epoch  34 Batch  162/172   train_loss = 0.199
Epoch  35 Batch    0/172   train_loss = 0.228
Epoch  35 Batch   10/172   train_loss = 0.181
Epoch  35 Batch   20/172   train_loss = 0.215
Epoch  35 Batch   30/172   train_loss = 0.138
Epoch  35 Batch   40/172   train_loss = 0.170
Epoch  35 Batch   50/172   train_loss = 0.224
Epoch  35 Batch   60/172   train_loss = 0.192
Epoch  35 Batch   70/172   train_loss = 0.244
Epoch  35 Batch   80/172   train_loss = 0.189
Epoch  35 Batch   90/172   train_loss = 0.189
Epoch  35 Batch  100/172   train_loss = 0.246
Epoch  35 Batch  110/172   train_loss = 0.137
Epoch  35 Batch  120/172   train_loss = 0.198
Epoch  35 Batch  130/172   train_loss = 0.207
Epoch  35 Batch  140/172   train_loss = 0.208
Epoch  35 Batch  150/172   train_loss = 0.195
Epoch  35 Batch  160/172   train_loss = 0.192
Epoch  35 Batch  170/172   train_loss = 0.183
Epoch  36 Batch    8/172   train_loss = 0.208
Epoch  36 Batch   18/172   train_loss = 0.232
Epoch  36 Batch   28/172   train_loss = 0.184
Epoch  36 Batch   38/172   train_loss = 0.222
Epoch  36 Batch   48/172   train_loss = 0.163
Epoch  36 Batch   58/172   train_loss = 0.154
Epoch  36 Batch   68/172   train_loss = 0.197
Epoch  36 Batch   78/172   train_loss = 0.175
Epoch  36 Batch   88/172   train_loss = 0.182
Epoch  36 Batch   98/172   train_loss = 0.271
Epoch  36 Batch  108/172   train_loss = 0.166
Epoch  36 Batch  118/172   train_loss = 0.217
Epoch  36 Batch  128/172   train_loss = 0.201
Epoch  36 Batch  138/172   train_loss = 0.173
Epoch  36 Batch  148/172   train_loss = 0.215
Epoch  36 Batch  158/172   train_loss = 0.199
Epoch  36 Batch  168/172   train_loss = 0.249
Epoch  37 Batch    6/172   train_loss = 0.246
Epoch  37 Batch   16/172   train_loss = 0.174
Epoch  37 Batch   26/172   train_loss = 0.252
Epoch  37 Batch   36/172   train_loss = 0.188
Epoch  37 Batch   46/172   train_loss = 0.227
Epoch  37 Batch   56/172   train_loss = 0.237
Epoch  37 Batch   66/172   train_loss = 0.215
Epoch  37 Batch   76/172   train_loss = 0.229
Epoch  37 Batch   86/172   train_loss = 0.198
Epoch  37 Batch   96/172   train_loss = 0.213
Epoch  37 Batch  106/172   train_loss = 0.182
Epoch  37 Batch  116/172   train_loss = 0.227
Epoch  37 Batch  126/172   train_loss = 0.232
Epoch  37 Batch  136/172   train_loss = 0.176
Epoch  37 Batch  146/172   train_loss = 0.269
Epoch  37 Batch  156/172   train_loss = 0.205
Epoch  37 Batch  166/172   train_loss = 0.209
Epoch  38 Batch    4/172   train_loss = 0.215
Epoch  38 Batch   14/172   train_loss = 0.193
Epoch  38 Batch   24/172   train_loss = 0.216
Epoch  38 Batch   34/172   train_loss = 0.145
Epoch  38 Batch   44/172   train_loss = 0.171
Epoch  38 Batch   54/172   train_loss = 0.187
Epoch  38 Batch   64/172   train_loss = 0.203
Epoch  38 Batch   74/172   train_loss = 0.210
Epoch  38 Batch   84/172   train_loss = 0.249
Epoch  38 Batch   94/172   train_loss = 0.184
Epoch  38 Batch  104/172   train_loss = 0.182
Epoch  38 Batch  114/172   train_loss = 0.211
Epoch  38 Batch  124/172   train_loss = 0.206
Epoch  38 Batch  134/172   train_loss = 0.177
Epoch  38 Batch  144/172   train_loss = 0.212
Epoch  38 Batch  154/172   train_loss = 0.236
Epoch  38 Batch  164/172   train_loss = 0.271
Epoch  39 Batch    2/172   train_loss = 0.219
Epoch  39 Batch   12/172   train_loss = 0.187
Epoch  39 Batch   22/172   train_loss = 0.210
Epoch  39 Batch   32/172   train_loss = 0.204
Epoch  39 Batch   42/172   train_loss = 0.253
Epoch  39 Batch   52/172   train_loss = 0.196
Epoch  39 Batch   62/172   train_loss = 0.257
Epoch  39 Batch   72/172   train_loss = 0.185
Epoch  39 Batch   82/172   train_loss = 0.223
Epoch  39 Batch   92/172   train_loss = 0.264
Epoch  39 Batch  102/172   train_loss = 0.195
Epoch  39 Batch  112/172   train_loss = 0.217
Epoch  39 Batch  122/172   train_loss = 0.231
Epoch  39 Batch  132/172   train_loss = 0.184
Epoch  39 Batch  142/172   train_loss = 0.189
Epoch  39 Batch  152/172   train_loss = 0.236
Epoch  39 Batch  162/172   train_loss = 0.248
Epoch  40 Batch    0/172   train_loss = 0.263
Epoch  40 Batch   10/172   train_loss = 0.244
Epoch  40 Batch   20/172   train_loss = 0.246
Epoch  40 Batch   30/172   train_loss = 0.176
Epoch  40 Batch   40/172   train_loss = 0.238
Epoch  40 Batch   50/172   train_loss = 0.302
Epoch  40 Batch   60/172   train_loss = 0.211
Epoch  40 Batch   70/172   train_loss = 0.289
Epoch  40 Batch   80/172   train_loss = 0.270
Epoch  40 Batch   90/172   train_loss = 0.220
Epoch  40 Batch  100/172   train_loss = 0.281
Epoch  40 Batch  110/172   train_loss = 0.214
Epoch  40 Batch  120/172   train_loss = 0.220
Epoch  40 Batch  130/172   train_loss = 0.306
Epoch  40 Batch  140/172   train_loss = 0.227
Epoch  40 Batch  150/172   train_loss = 0.253
Epoch  40 Batch  160/172   train_loss = 0.263
Epoch  40 Batch  170/172   train_loss = 0.231
Epoch  41 Batch    8/172   train_loss = 0.276
Epoch  41 Batch   18/172   train_loss = 0.259
Epoch  41 Batch   28/172   train_loss = 0.239
Epoch  41 Batch   38/172   train_loss = 0.280
Epoch  41 Batch   48/172   train_loss = 0.210
Epoch  41 Batch   58/172   train_loss = 0.202
Epoch  41 Batch   68/172   train_loss = 0.232
Epoch  41 Batch   78/172   train_loss = 0.212
Epoch  41 Batch   88/172   train_loss = 0.214
Epoch  41 Batch   98/172   train_loss = 0.348
Epoch  41 Batch  108/172   train_loss = 0.191
Epoch  41 Batch  118/172   train_loss = 0.248
Epoch  41 Batch  128/172   train_loss = 0.209
Epoch  41 Batch  138/172   train_loss = 0.212
Epoch  41 Batch  148/172   train_loss = 0.202
Epoch  41 Batch  158/172   train_loss = 0.218
Epoch  41 Batch  168/172   train_loss = 0.276
Epoch  42 Batch    6/172   train_loss = 0.269
Epoch  42 Batch   16/172   train_loss = 0.192
Epoch  42 Batch   26/172   train_loss = 0.277
Epoch  42 Batch   36/172   train_loss = 0.220
Epoch  42 Batch   46/172   train_loss = 0.257
Epoch  42 Batch   56/172   train_loss = 0.280
Epoch  42 Batch   66/172   train_loss = 0.219
Epoch  42 Batch   76/172   train_loss = 0.219
Epoch  42 Batch   86/172   train_loss = 0.198
Epoch  42 Batch   96/172   train_loss = 0.220
Epoch  42 Batch  106/172   train_loss = 0.192
Epoch  42 Batch  116/172   train_loss = 0.201
Epoch  42 Batch  126/172   train_loss = 0.247
Epoch  42 Batch  136/172   train_loss = 0.174
Epoch  42 Batch  146/172   train_loss = 0.298
Epoch  42 Batch  156/172   train_loss = 0.163
Epoch  42 Batch  166/172   train_loss = 0.236
Epoch  43 Batch    4/172   train_loss = 0.180
Epoch  43 Batch   14/172   train_loss = 0.205
Epoch  43 Batch   24/172   train_loss = 0.197
Epoch  43 Batch   34/172   train_loss = 0.155
Epoch  43 Batch   44/172   train_loss = 0.180
Epoch  43 Batch   54/172   train_loss = 0.187
Epoch  43 Batch   64/172   train_loss = 0.188
Epoch  43 Batch   74/172   train_loss = 0.200
Epoch  43 Batch   84/172   train_loss = 0.240
Epoch  43 Batch   94/172   train_loss = 0.186
Epoch  43 Batch  104/172   train_loss = 0.187
Epoch  43 Batch  114/172   train_loss = 0.174
Epoch  43 Batch  124/172   train_loss = 0.171
Epoch  43 Batch  134/172   train_loss = 0.189
Epoch  43 Batch  144/172   train_loss = 0.198
Epoch  43 Batch  154/172   train_loss = 0.206
Epoch  43 Batch  164/172   train_loss = 0.241
Epoch  44 Batch    2/172   train_loss = 0.198
Epoch  44 Batch   12/172   train_loss = 0.137
Epoch  44 Batch   22/172   train_loss = 0.180
Epoch  44 Batch   32/172   train_loss = 0.189
Epoch  44 Batch   42/172   train_loss = 0.213
Epoch  44 Batch   52/172   train_loss = 0.172
Epoch  44 Batch   62/172   train_loss = 0.239
Epoch  44 Batch   72/172   train_loss = 0.158
Epoch  44 Batch   82/172   train_loss = 0.175
Epoch  44 Batch   92/172   train_loss = 0.199
Epoch  44 Batch  102/172   train_loss = 0.164
Epoch  44 Batch  112/172   train_loss = 0.186
Epoch  44 Batch  122/172   train_loss = 0.184
Epoch  44 Batch  132/172   train_loss = 0.160
Epoch  44 Batch  142/172   train_loss = 0.132
Epoch  44 Batch  152/172   train_loss = 0.162
Epoch  44 Batch  162/172   train_loss = 0.190
Epoch  45 Batch    0/172   train_loss = 0.208
Epoch  45 Batch   10/172   train_loss = 0.165
Epoch  45 Batch   20/172   train_loss = 0.183
Epoch  45 Batch   30/172   train_loss = 0.121
Epoch  45 Batch   40/172   train_loss = 0.160
Epoch  45 Batch   50/172   train_loss = 0.211
Epoch  45 Batch   60/172   train_loss = 0.154
Epoch  45 Batch   70/172   train_loss = 0.243
Epoch  45 Batch   80/172   train_loss = 0.204
Epoch  45 Batch   90/172   train_loss = 0.171
Epoch  45 Batch  100/172   train_loss = 0.242
Epoch  45 Batch  110/172   train_loss = 0.117
Epoch  45 Batch  120/172   train_loss = 0.175
Epoch  45 Batch  130/172   train_loss = 0.182
Epoch  45 Batch  140/172   train_loss = 0.217
Epoch  45 Batch  150/172   train_loss = 0.192
Epoch  45 Batch  160/172   train_loss = 0.181
Epoch  45 Batch  170/172   train_loss = 0.165
Epoch  46 Batch    8/172   train_loss = 0.217
Epoch  46 Batch   18/172   train_loss = 0.231
Epoch  46 Batch   28/172   train_loss = 0.187
Epoch  46 Batch   38/172   train_loss = 0.225
Epoch  46 Batch   48/172   train_loss = 0.152
Epoch  46 Batch   58/172   train_loss = 0.143
Epoch  46 Batch   68/172   train_loss = 0.185
Epoch  46 Batch   78/172   train_loss = 0.171
Epoch  46 Batch   88/172   train_loss = 0.172
Epoch  46 Batch   98/172   train_loss = 0.238
Epoch  46 Batch  108/172   train_loss = 0.153
Epoch  46 Batch  118/172   train_loss = 0.202
Epoch  46 Batch  128/172   train_loss = 0.174
Epoch  46 Batch  138/172   train_loss = 0.163
Epoch  46 Batch  148/172   train_loss = 0.177
Epoch  46 Batch  158/172   train_loss = 0.177
Epoch  46 Batch  168/172   train_loss = 0.244
Epoch  47 Batch    6/172   train_loss = 0.237
Epoch  47 Batch   16/172   train_loss = 0.159
Epoch  47 Batch   26/172   train_loss = 0.247
Epoch  47 Batch   36/172   train_loss = 0.175
Epoch  47 Batch   46/172   train_loss = 0.201
Epoch  47 Batch   56/172   train_loss = 0.230
Epoch  47 Batch   66/172   train_loss = 0.188
Epoch  47 Batch   76/172   train_loss = 0.212
Epoch  47 Batch   86/172   train_loss = 0.198
Epoch  47 Batch   96/172   train_loss = 0.197
Epoch  47 Batch  106/172   train_loss = 0.163
Epoch  47 Batch  116/172   train_loss = 0.188
Epoch  47 Batch  126/172   train_loss = 0.218
Epoch  47 Batch  136/172   train_loss = 0.159
Epoch  47 Batch  146/172   train_loss = 0.266
Epoch  47 Batch  156/172   train_loss = 0.142
Epoch  47 Batch  166/172   train_loss = 0.207
Epoch  48 Batch    4/172   train_loss = 0.172
Epoch  48 Batch   14/172   train_loss = 0.199
Epoch  48 Batch   24/172   train_loss = 0.205
Epoch  48 Batch   34/172   train_loss = 0.149
Epoch  48 Batch   44/172   train_loss = 0.162
Epoch  48 Batch   54/172   train_loss = 0.181
Epoch  48 Batch   64/172   train_loss = 0.185
Epoch  48 Batch   74/172   train_loss = 0.201
Epoch  48 Batch   84/172   train_loss = 0.225
Epoch  48 Batch   94/172   train_loss = 0.197
Epoch  48 Batch  104/172   train_loss = 0.158
Epoch  48 Batch  114/172   train_loss = 0.160
Epoch  48 Batch  124/172   train_loss = 0.168
Epoch  48 Batch  134/172   train_loss = 0.161
Epoch  48 Batch  144/172   train_loss = 0.204
Epoch  48 Batch  154/172   train_loss = 0.210
Epoch  48 Batch  164/172   train_loss = 0.233
Epoch  49 Batch    2/172   train_loss = 0.187
Epoch  49 Batch   12/172   train_loss = 0.136
Epoch  49 Batch   22/172   train_loss = 0.175
Epoch  49 Batch   32/172   train_loss = 0.216
Epoch  49 Batch   42/172   train_loss = 0.210
Epoch  49 Batch   52/172   train_loss = 0.166
Epoch  49 Batch   62/172   train_loss = 0.224
Epoch  49 Batch   72/172   train_loss = 0.136
Epoch  49 Batch   82/172   train_loss = 0.191
Epoch  49 Batch   92/172   train_loss = 0.209
Epoch  49 Batch  102/172   train_loss = 0.147
Epoch  49 Batch  112/172   train_loss = 0.178
Epoch  49 Batch  122/172   train_loss = 0.217
Epoch  49 Batch  132/172   train_loss = 0.165
Epoch  49 Batch  142/172   train_loss = 0.123
Epoch  49 Batch  152/172   train_loss = 0.144
Epoch  49 Batch  162/172   train_loss = 0.188
Epoch  50 Batch    0/172   train_loss = 0.216
Epoch  50 Batch   10/172   train_loss = 0.161
Epoch  50 Batch   20/172   train_loss = 0.192
Epoch  50 Batch   30/172   train_loss = 0.118
Epoch  50 Batch   40/172   train_loss = 0.216
Epoch  50 Batch   50/172   train_loss = 0.233
Epoch  50 Batch   60/172   train_loss = 0.164
Epoch  50 Batch   70/172   train_loss = 0.230
Epoch  50 Batch   80/172   train_loss = 0.186
Epoch  50 Batch   90/172   train_loss = 0.165
Epoch  50 Batch  100/172   train_loss = 0.251
Epoch  50 Batch  110/172   train_loss = 0.113
Epoch  50 Batch  120/172   train_loss = 0.177
Epoch  50 Batch  130/172   train_loss = 0.177
Epoch  50 Batch  140/172   train_loss = 0.239
Epoch  50 Batch  150/172   train_loss = 0.213
Epoch  50 Batch  160/172   train_loss = 0.170
Epoch  50 Batch  170/172   train_loss = 0.153
Epoch  51 Batch    8/172   train_loss = 0.220
Epoch  51 Batch   18/172   train_loss = 0.225
Epoch  51 Batch   28/172   train_loss = 0.187
Epoch  51 Batch   38/172   train_loss = 0.220
Epoch  51 Batch   48/172   train_loss = 0.153
Epoch  51 Batch   58/172   train_loss = 0.140
Epoch  51 Batch   68/172   train_loss = 0.185
Epoch  51 Batch   78/172   train_loss = 0.164
Epoch  51 Batch   88/172   train_loss = 0.164
Epoch  51 Batch   98/172   train_loss = 0.227
Epoch  51 Batch  108/172   train_loss = 0.158
Epoch  51 Batch  118/172   train_loss = 0.187
Epoch  51 Batch  128/172   train_loss = 0.177
Epoch  51 Batch  138/172   train_loss = 0.192
Epoch  51 Batch  148/172   train_loss = 0.172
Epoch  51 Batch  158/172   train_loss = 0.171
Epoch  51 Batch  168/172   train_loss = 0.222
Epoch  52 Batch    6/172   train_loss = 0.242
Epoch  52 Batch   16/172   train_loss = 0.167
Epoch  52 Batch   26/172   train_loss = 0.238
Epoch  52 Batch   36/172   train_loss = 0.182
Epoch  52 Batch   46/172   train_loss = 0.197
Epoch  52 Batch   56/172   train_loss = 0.228
Epoch  52 Batch   66/172   train_loss = 0.191
Epoch  52 Batch   76/172   train_loss = 0.191
Epoch  52 Batch   86/172   train_loss = 0.182
Epoch  52 Batch   96/172   train_loss = 0.195
Epoch  52 Batch  106/172   train_loss = 0.160
Epoch  52 Batch  116/172   train_loss = 0.189
Epoch  52 Batch  126/172   train_loss = 0.213
Epoch  52 Batch  136/172   train_loss = 0.158
Epoch  52 Batch  146/172   train_loss = 0.279
Epoch  52 Batch  156/172   train_loss = 0.141
Epoch  52 Batch  166/172   train_loss = 0.208
Epoch  53 Batch    4/172   train_loss = 0.162
Epoch  53 Batch   14/172   train_loss = 0.193
Epoch  53 Batch   24/172   train_loss = 0.185
Epoch  53 Batch   34/172   train_loss = 0.133
Epoch  53 Batch   44/172   train_loss = 0.157
Epoch  53 Batch   54/172   train_loss = 0.208
Epoch  53 Batch   64/172   train_loss = 0.187
Epoch  53 Batch   74/172   train_loss = 0.202
Epoch  53 Batch   84/172   train_loss = 0.252
Epoch  53 Batch   94/172   train_loss = 0.177
Epoch  53 Batch  104/172   train_loss = 0.158
Epoch  53 Batch  114/172   train_loss = 0.151
Epoch  53 Batch  124/172   train_loss = 0.166
Epoch  53 Batch  134/172   train_loss = 0.153
Epoch  53 Batch  144/172   train_loss = 0.195
Epoch  53 Batch  154/172   train_loss = 0.220
Epoch  53 Batch  164/172   train_loss = 0.223
Epoch  54 Batch    2/172   train_loss = 0.192
Epoch  54 Batch   12/172   train_loss = 0.141
Epoch  54 Batch   22/172   train_loss = 0.172
Epoch  54 Batch   32/172   train_loss = 0.202
Epoch  54 Batch   42/172   train_loss = 0.201
Epoch  54 Batch   52/172   train_loss = 0.157
Epoch  54 Batch   62/172   train_loss = 0.216
Epoch  54 Batch   72/172   train_loss = 0.137
Epoch  54 Batch   82/172   train_loss = 0.173
Epoch  54 Batch   92/172   train_loss = 0.203
Epoch  54 Batch  102/172   train_loss = 0.148
Epoch  54 Batch  112/172   train_loss = 0.175
Epoch  54 Batch  122/172   train_loss = 0.192
Epoch  54 Batch  132/172   train_loss = 0.143
Epoch  54 Batch  142/172   train_loss = 0.122
Epoch  54 Batch  152/172   train_loss = 0.138
Epoch  54 Batch  162/172   train_loss = 0.183
Epoch  55 Batch    0/172   train_loss = 0.203
Epoch  55 Batch   10/172   train_loss = 0.162
Epoch  55 Batch   20/172   train_loss = 0.184
Epoch  55 Batch   30/172   train_loss = 0.114
Epoch  55 Batch   40/172   train_loss = 0.176
Epoch  55 Batch   50/172   train_loss = 0.208
Epoch  55 Batch   60/172   train_loss = 0.147
Epoch  55 Batch   70/172   train_loss = 0.215
Epoch  55 Batch   80/172   train_loss = 0.178
Epoch  55 Batch   90/172   train_loss = 0.159
Epoch  55 Batch  100/172   train_loss = 0.236
Epoch  55 Batch  110/172   train_loss = 0.108
Epoch  55 Batch  120/172   train_loss = 0.169
Epoch  55 Batch  130/172   train_loss = 0.172
Epoch  55 Batch  140/172   train_loss = 0.209
Epoch  55 Batch  150/172   train_loss = 0.186
Epoch  55 Batch  160/172   train_loss = 0.162
Epoch  55 Batch  170/172   train_loss = 0.146
Epoch  56 Batch    8/172   train_loss = 0.204
Epoch  56 Batch   18/172   train_loss = 0.211
Epoch  56 Batch   28/172   train_loss = 0.173
Epoch  56 Batch   38/172   train_loss = 0.217
Epoch  56 Batch   48/172   train_loss = 0.146
Epoch  56 Batch   58/172   train_loss = 0.135
Epoch  56 Batch   68/172   train_loss = 0.175
Epoch  56 Batch   78/172   train_loss = 0.152
Epoch  56 Batch   88/172   train_loss = 0.156
Epoch  56 Batch   98/172   train_loss = 0.223
Epoch  56 Batch  108/172   train_loss = 0.140
Epoch  56 Batch  118/172   train_loss = 0.186
Epoch  56 Batch  128/172   train_loss = 0.164
Epoch  56 Batch  138/172   train_loss = 0.167
Epoch  56 Batch  148/172   train_loss = 0.166
Epoch  56 Batch  158/172   train_loss = 0.169
Epoch  56 Batch  168/172   train_loss = 0.222
Epoch  57 Batch    6/172   train_loss = 0.232
Epoch  57 Batch   16/172   train_loss = 0.152
Epoch  57 Batch   26/172   train_loss = 0.228
Epoch  57 Batch   36/172   train_loss = 0.173
Epoch  57 Batch   46/172   train_loss = 0.185
Epoch  57 Batch   56/172   train_loss = 0.218
Epoch  57 Batch   66/172   train_loss = 0.185
Epoch  57 Batch   76/172   train_loss = 0.199
Epoch  57 Batch   86/172   train_loss = 0.189
Epoch  57 Batch   96/172   train_loss = 0.183
Epoch  57 Batch  106/172   train_loss = 0.154
Epoch  57 Batch  116/172   train_loss = 0.183
Epoch  57 Batch  126/172   train_loss = 0.207
Epoch  57 Batch  136/172   train_loss = 0.154
Epoch  57 Batch  146/172   train_loss = 0.260
Epoch  57 Batch  156/172   train_loss = 0.137
Epoch  57 Batch  166/172   train_loss = 0.196
Epoch  58 Batch    4/172   train_loss = 0.158
Epoch  58 Batch   14/172   train_loss = 0.177
Epoch  58 Batch   24/172   train_loss = 0.183
Epoch  58 Batch   34/172   train_loss = 0.121
Epoch  58 Batch   44/172   train_loss = 0.150
Epoch  58 Batch   54/172   train_loss = 0.185
Epoch  58 Batch   64/172   train_loss = 0.174
Epoch  58 Batch   74/172   train_loss = 0.190
Epoch  58 Batch   84/172   train_loss = 0.234
Epoch  58 Batch   94/172   train_loss = 0.173
Epoch  58 Batch  104/172   train_loss = 0.155
Epoch  58 Batch  114/172   train_loss = 0.151
Epoch  58 Batch  124/172   train_loss = 0.161
Epoch  58 Batch  134/172   train_loss = 0.156
Epoch  58 Batch  144/172   train_loss = 0.185
Epoch  58 Batch  154/172   train_loss = 0.201
Epoch  58 Batch  164/172   train_loss = 0.215
Epoch  59 Batch    2/172   train_loss = 0.178
Epoch  59 Batch   12/172   train_loss = 0.130
Epoch  59 Batch   22/172   train_loss = 0.168
Epoch  59 Batch   32/172   train_loss = 0.180
Epoch  59 Batch   42/172   train_loss = 0.197
Epoch  59 Batch   52/172   train_loss = 0.153
Epoch  59 Batch   62/172   train_loss = 0.216
Epoch  59 Batch   72/172   train_loss = 0.131
Epoch  59 Batch   82/172   train_loss = 0.170
Epoch  59 Batch   92/172   train_loss = 0.198
Epoch  59 Batch  102/172   train_loss = 0.146
Epoch  59 Batch  112/172   train_loss = 0.170
Epoch  59 Batch  122/172   train_loss = 0.185
Epoch  59 Batch  132/172   train_loss = 0.143
Epoch  59 Batch  142/172   train_loss = 0.119
Epoch  59 Batch  152/172   train_loss = 0.136
Epoch  59 Batch  162/172   train_loss = 0.174
Model Trained and Saved

Save Parameters

Save seq_length and save_dir for generating a new TV script.


In [40]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
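
For context, helper.save_params and helper.load_params are thin pickle wrappers. A minimal sketch of what they might look like, assuming the params.p filename convention used by the course helper (if your helper.py differs, treat the names here as illustrative):

import pickle

def save_params(params):
    """Persist (seq_length, save_dir) so generation can run in a fresh session."""
    with open('params.p', 'wb') as f:
        pickle.dump(params, f)

def load_params():
    """Restore the (seq_length, save_dir) tuple saved after training."""
    with open('params.p', 'rb') as f:
        return pickle.load(f)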

Checkpoint


In [41]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()

Implement Generate Functions

Get Tensors

Get tensors from loaded_graph using the function get_tensor_by_name(), looking them up by the following names:

  • "input:0"
  • "initial_state:0"
  • "final_state:0"
  • "probs:0"

Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)


In [42]:
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    inp = loaded_graph.get_tensor_by_name('input:0')
    init_state = loaded_graph.get_tensor_by_name('initial_state:0')
    final_state = loaded_graph.get_tensor_by_name('final_state:0')
    probs = loaded_graph.get_tensor_by_name('probs:0')
    return inp, init_state, final_state, probs


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)


Tests Passed

Choose Word

Implement the pick_word() function to select the next word using probabilities.


In [52]:
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Sample a word id from the predicted distribution, then map it back to a word.
    # Indexing by position avoids relying on the dictionary's iteration order.
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)


Tests Passed
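
Sampling straight from the network's output distribution, as pick_word does above, keeps the script varied but noisy. A common optional tweak, not required by this project, is temperature scaling: temperatures below 1 sharpen the distribution toward the most likely words, temperatures above 1 flatten it. A minimal sketch, with the function name and default temperature being illustrative choices:

import numpy as np

def pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.8):
    """Sample the next word after sharpening (t < 1) or flattening (t > 1) the distribution."""
    probs = np.asarray(probabilities, dtype=np.float64)
    scaled = np.exp(np.log(probs + 1e-10) / temperature)  # rescale in log space
    scaled /= scaled.sum()                                # renormalize to a valid distribution
    word_id = np.random.choice(len(scaled), p=scaled)
    return int_to_vocab[word_id]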

Generate TV Script

This will generate the TV script for you. Set gen_length to the length of the TV script you want to generate.


In [53]:
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})
        
        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)
    
    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')
        
    print(tv_script)


INFO:tensorflow:Restoring parameters from ./save
moe_szyslak: moe(raises ask go the senator the least the please him" moe_szyslak: of broken: want i and crazy lise: brakes
exactly with in i a bad
what's i points and back" how and! one room" but never(tongue the mic for moe's nervous the bag my other going and quiet
who feel is homer_simpson: you'd wonderful but homer this this feel too a this okay
spanish
okay
so) you gets more good collette: down yeah(never(friends the use in stop the came the(and suicide the means the lisa_simpson: pal it's in i got
cleaned your bar-boy married
strong lenny_leonard: phone have and you know in i'm just hide i down i
, that's just hundred of kent_brockman:
and you worry have homer_simpson: and is hold yeah yeah(never but take i(go the her the sure him the sure the found sure sexual of!, ya are i'm homer
moe-heads
would lenny_leonard: of anyway moe_szyslak: look(and i'm moron
book(and friends the suppose the work get be nah

The TV Script is Nonsensical

It's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily, there's more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.