TV Script Generation

In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.

Get the Data

The data is already provided for you. You'll be using a subset of the original dataset, consisting only of the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Skip the notice at the start of the file, since we don't use it when analyzing the data
text = text[81:]

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [2]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.248091603053435
Number of lines: 4257
Average number of words in each line: 11.50434578341555

The sentences 0 to 10:
Moe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.
Bart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.
Moe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?
Moe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.
Moe_Szyslak: What's the matter Homer? You're not your normal effervescent self.
Homer_Simpson: I got my problems, Moe. Give me another one.
Moe_Szyslak: Homer, hey, you should not drink to forget your problems.
Barney_Gumble: Yeah, you should only drink to enhance your social skills.


Implement Preprocessing Functions

The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:

  • Lookup Table
  • Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:

  • Dictionary to go from the words to an id, which we'll call vocab_to_int
  • Dictionary to go from the id to the word, which we'll call int_to_vocab

Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)


In [3]:
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # Build the vocabulary from the unique words in the text
    words = set(text)
    # Map each word to a unique id, and each id back to its word
    vocab_to_int = {w: i for i, w in enumerate(words)}
    int_to_vocab = {i: w for i, w in enumerate(words)}
    
    return vocab_to_int, int_to_vocab


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)


Tests Passed
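As a quick, optional sanity check, the two dictionaries should invert each other; the specific ids are arbitrary, since they come from a set:

v2i, i2v = create_lookup_tables('moe gives homer a beer'.split())
assert all(i2v[v2i[word]] == word for word in v2i)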

Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols, where the symbol is the key and the value is the token:

  • Period ( . )
  • Comma ( , )
  • Quotation Mark ( " )
  • Semicolon ( ; )
  • Exclamation mark ( ! )
  • Question mark ( ? )
  • Left Parenthesis ( ( )
  • Right Parenthesis ( ) )
  • Dash ( -- )
  • Return ( \n )

This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".


In [4]:
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {
        '.':'||PERIOD||',
        ',':'||COMMA||',
        '"':'||QUOTATION_MARK||',
        ';':'||SEMICOLON||',
        '!':'||EXCLAMATION_MARK||',
        '?':'||QUESTION_MARK||',
        '(':'||LEFT_PARENTHESES||',
        ')':'||RIGHT_PARENTHESES||',
        '--':'||DASH||',
        '\n':'||NEWLINE||'    
    }

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)


Tests Passed
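To see the effect, you can apply the returned dictionary to a sample line yourself. Here is a minimal sketch; the helper module performs this step over the whole script during preprocessing, and its implementation may differ:

sample = "Moe_Szyslak: Where the elite meet to drink!"
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split())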

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()

Build the Neural Network

You'll build the components necessary for an RNN by implementing the following functions:

  • get_inputs
  • get_init_cell
  • get_embed
  • build_rnn
  • build_nn
  • get_batches

Check the Version of TensorFlow and Access to GPU


In [7]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.0.1
Default GPU Device: /gpu:0

Input

Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter.
  • Targets placeholder
  • Learning Rate placeholder

Return the placeholders in the following tuple (Input, Targets, LearningRate)


In [8]:
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # Use None for both dimensions so the graph accepts any batch size and sequence length
    inputs = tf.placeholder(tf.int32, shape=(None, None), name="input")
    targets = tf.placeholder(tf.int32, shape=(None, None), name="targets")
    learning_rate = tf.placeholder(tf.float32, name="learning_rate")
    
    return inputs, targets, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)


Tests Passed

Build RNN Cell and Initialize

Stack one or more BasicLSTMCells in a MultiRNNCell.

  • The RNN size should be set using rnn_size
  • Initialize the cell state using the MultiRNNCell's zero_state() function
    • Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the following tuple (Cell, InitialState)


In [9]:
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    lstm_layer_count = 2
    # Create a separate BasicLSTMCell per layer so the layers don't share weights
    # (reusing one cell object in MultiRNNCell breaks on TensorFlow 1.1+)
    cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layer_count)]
    cell = tf.contrib.rnn.MultiRNNCell(cells)
    
    init_state = cell.zero_state(batch_size, tf.float32)
    # Name the initial state so it can be retrieved from the loaded graph later
    init_state = tf.identity(init_state, name='initial_state')
    
    return cell, init_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)


Tests Passed

Word Embedding

Apply embedding to input_data using TensorFlow. Return the embedded sequence.


In [10]:
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # Embedding matrix, initialized randomly and learned during training
    embedding = tf.Variable(tf.random_normal((vocab_size, embed_dim)))
    
    # Look up the embedding vector for each word id in input_data
    return tf.nn.embedding_lookup(embedding, input_data)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)


Tests Passed
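A common alternative, not required here, is to initialize the embedding matrix uniformly in [-1, 1], which keeps the starting embeddings in a smaller range. A hypothetical variant as a sketch:

def get_embed_uniform(input_data, vocab_size, embed_dim):
    # Hypothetical variant of get_embed with uniform initialization in [-1, 1]
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embedding, input_data)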

Build RNN

You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.

Return the outputs and the final state in the following tuple (Outputs, FinalState)


In [11]:
def build_rnn(cell, inputs):
    """
    Create an RNN using an RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # dynamic_rnn unrolls the cell over the full length of the input sequences
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    
    # Name the final state so it can be retrieved from the loaded graph later
    return outputs, tf.identity(final_state, name='final_state')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
  • Build RNN using cell and your build_rnn(cell, inputs) function.
  • Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)


In [12]:
def build_nn(cell, rnn_size, input_data, vocab_size):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :return: Tuple (Logits, FinalState)
    """
    # The signature provides no separate embed_dim, so rnn_size doubles as the embedding size
    embed = get_embed(input_data, vocab_size, rnn_size)
    output, final_state = build_rnn(cell, embed)
    # Fully connected layer with a linear activation, one logit per vocabulary word
    logits = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn=None)
    
    return logits, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)


Tests Passed

Batches

Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:

  • The first element is a single batch of input with the shape [batch size, sequence length]
  • The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:

[
  # First Batch
  [
    # Batch of Input
    [[ 1  2  3], [ 7  8  9]],
    # Batch of targets
    [[ 2  3  4], [ 8  9 10]]
  ],

  # Second Batch
  [
    # Batch of Input
    [[ 4  5  6], [10 11 12]],
    # Batch of targets
    [[ 5  6  7], [11 12 13]]
  ]
]

In [13]:
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # Number of full batches we can make; any leftover words are dropped
    batch_count = len(int_text) // (batch_size * seq_length)
    
    # Inputs: reshape to (batch_size, -1), then split along the time axis into batches
    x_data = np.array(int_text[: batch_count * batch_size * seq_length])
    x_batches = np.split(x_data.reshape(batch_size, -1), batch_count, 1)
    
    # Targets: the same sequence shifted one word ahead
    y_data = np.array(int_text[1: batch_count * batch_size * seq_length + 1])
    y_batches = np.split(y_data.reshape(batch_size, -1), batch_count, 1)
    
    return np.array(list(zip(x_batches, y_batches)))


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)


Tests Passed
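As an optional sanity check, running the example from above confirms the expected shape of (number of batches, 2, batch size, sequence length):

example = get_batches(list(range(1, 16)), 2, 3)
print(example.shape)  # (2, 2, 2, 3)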

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set num_epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set seq_length to the length of sequence.
  • Set learning_rate to the learning rate.
  • Set show_every_n_batches to the number of batches between progress printouts.

In [17]:
# Number of Epochs
num_epochs = 180
# Batch Size
batch_size = 100
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 10

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'

Build the Graph

Build the graph using the neural network you implemented.


In [18]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
    train_op = optimizer.apply_gradients(capped_gradients)

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [21]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{}   train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')


Epoch   0 Batch    0/43   train_loss = 8.822
Epoch   0 Batch   10/43   train_loss = 7.255
Epoch   0 Batch   20/43   train_loss = 6.364
Epoch   0 Batch   30/43   train_loss = 6.298
Epoch   0 Batch   40/43   train_loss = 6.239
Epoch   1 Batch    7/43   train_loss = 6.078
Epoch   1 Batch   17/43   train_loss = 5.979
Epoch   1 Batch   27/43   train_loss = 6.084
Epoch   1 Batch   37/43   train_loss = 6.120
Epoch   2 Batch    4/43   train_loss = 6.025
Epoch   2 Batch   14/43   train_loss = 6.095
Epoch   2 Batch   24/43   train_loss = 6.019
Epoch   2 Batch   34/43   train_loss = 6.069
Epoch   3 Batch    1/43   train_loss = 6.031
Epoch   3 Batch   11/43   train_loss = 6.030
Epoch   3 Batch   21/43   train_loss = 5.970
Epoch   3 Batch   31/43   train_loss = 6.054
Epoch   3 Batch   41/43   train_loss = 6.031
Epoch   4 Batch    8/43   train_loss = 5.959
Epoch   4 Batch   18/43   train_loss = 6.112
Epoch   4 Batch   28/43   train_loss = 6.010
Epoch   4 Batch   38/43   train_loss = 6.080
Epoch   5 Batch    5/43   train_loss = 6.092
Epoch   5 Batch   15/43   train_loss = 6.045
Epoch   5 Batch   25/43   train_loss = 6.004
Epoch   5 Batch   35/43   train_loss = 6.049
Epoch   6 Batch    2/43   train_loss = 6.057
Epoch   6 Batch   12/43   train_loss = 5.956
Epoch   6 Batch   22/43   train_loss = 6.095
Epoch   6 Batch   32/43   train_loss = 6.077
Epoch   6 Batch   42/43   train_loss = 6.041
Epoch   7 Batch    9/43   train_loss = 5.935
Epoch   7 Batch   19/43   train_loss = 5.894
Epoch   7 Batch   29/43   train_loss = 5.977
Epoch   7 Batch   39/43   train_loss = 5.886
Epoch   8 Batch    6/43   train_loss = 5.936
Epoch   8 Batch   16/43   train_loss = 6.027
Epoch   8 Batch   26/43   train_loss = 5.910
Epoch   8 Batch   36/43   train_loss = 5.870
Epoch   9 Batch    3/43   train_loss = 5.806
Epoch   9 Batch   13/43   train_loss = 5.837
Epoch   9 Batch   23/43   train_loss = 5.859
Epoch   9 Batch   33/43   train_loss = 5.813
Epoch  10 Batch    0/43   train_loss = 5.813
Epoch  10 Batch   10/43   train_loss = 5.737
Epoch  10 Batch   20/43   train_loss = 5.876
Epoch  10 Batch   30/43   train_loss = 5.725
Epoch  10 Batch   40/43   train_loss = 5.706
Epoch  11 Batch    7/43   train_loss = 5.655
Epoch  11 Batch   17/43   train_loss = 5.679
Epoch  11 Batch   27/43   train_loss = 5.717
Epoch  11 Batch   37/43   train_loss = 5.695
Epoch  12 Batch    4/43   train_loss = 5.613
Epoch  12 Batch   14/43   train_loss = 5.728
Epoch  12 Batch   24/43   train_loss = 5.643
Epoch  12 Batch   34/43   train_loss = 5.669
Epoch  13 Batch    1/43   train_loss = 5.616
Epoch  13 Batch   11/43   train_loss = 5.602
Epoch  13 Batch   21/43   train_loss = 5.545
Epoch  13 Batch   31/43   train_loss = 5.633
Epoch  13 Batch   41/43   train_loss = 5.529
Epoch  14 Batch    8/43   train_loss = 5.441
Epoch  14 Batch   18/43   train_loss = 5.589
Epoch  14 Batch   28/43   train_loss = 5.483
Epoch  14 Batch   38/43   train_loss = 5.495
Epoch  15 Batch    5/43   train_loss = 5.470
Epoch  15 Batch   15/43   train_loss = 5.443
Epoch  15 Batch   25/43   train_loss = 5.307
Epoch  15 Batch   35/43   train_loss = 5.361
Epoch  16 Batch    2/43   train_loss = 5.344
Epoch  16 Batch   12/43   train_loss = 5.242
Epoch  16 Batch   22/43   train_loss = 5.357
Epoch  16 Batch   32/43   train_loss = 5.318
Epoch  16 Batch   42/43   train_loss = 5.256
Epoch  17 Batch    9/43   train_loss = 5.140
Epoch  17 Batch   19/43   train_loss = 5.076
Epoch  17 Batch   29/43   train_loss = 5.182
Epoch  17 Batch   39/43   train_loss = 5.067
Epoch  18 Batch    6/43   train_loss = 5.137
Epoch  18 Batch   16/43   train_loss = 5.177
Epoch  18 Batch   26/43   train_loss = 5.117
Epoch  18 Batch   36/43   train_loss = 4.989
Epoch  19 Batch    3/43   train_loss = 4.926
Epoch  19 Batch   13/43   train_loss = 4.981
Epoch  19 Batch   23/43   train_loss = 5.028
Epoch  19 Batch   33/43   train_loss = 4.978
Epoch  20 Batch    0/43   train_loss = 4.958
Epoch  20 Batch   10/43   train_loss = 4.879
Epoch  20 Batch   20/43   train_loss = 4.969
Epoch  20 Batch   30/43   train_loss = 4.880
Epoch  20 Batch   40/43   train_loss = 4.791
Epoch  21 Batch    7/43   train_loss = 4.736
Epoch  21 Batch   17/43   train_loss = 4.778
Epoch  21 Batch   27/43   train_loss = 4.815
Epoch  21 Batch   37/43   train_loss = 4.782
Epoch  22 Batch    4/43   train_loss = 4.698
Epoch  22 Batch   14/43   train_loss = 4.815
Epoch  22 Batch   24/43   train_loss = 4.714
Epoch  22 Batch   34/43   train_loss = 4.689
Epoch  23 Batch    1/43   train_loss = 4.688
Epoch  23 Batch   11/43   train_loss = 4.629
Epoch  23 Batch   21/43   train_loss = 4.592
Epoch  23 Batch   31/43   train_loss = 4.735
Epoch  23 Batch   41/43   train_loss = 4.617
Epoch  24 Batch    8/43   train_loss = 4.510
Epoch  24 Batch   18/43   train_loss = 4.663
Epoch  24 Batch   28/43   train_loss = 4.641
Epoch  24 Batch   38/43   train_loss = 4.602
Epoch  25 Batch    5/43   train_loss = 4.636
Epoch  25 Batch   15/43   train_loss = 4.651
Epoch  25 Batch   25/43   train_loss = 4.459
Epoch  25 Batch   35/43   train_loss = 4.518
Epoch  26 Batch    2/43   train_loss = 4.564
Epoch  26 Batch   12/43   train_loss = 4.461
Epoch  26 Batch   22/43   train_loss = 4.587
Epoch  26 Batch   32/43   train_loss = 4.528
Epoch  26 Batch   42/43   train_loss = 4.510
Epoch  27 Batch    9/43   train_loss = 4.402
Epoch  27 Batch   19/43   train_loss = 4.386
Epoch  27 Batch   29/43   train_loss = 4.466
Epoch  27 Batch   39/43   train_loss = 4.385
Epoch  28 Batch    6/43   train_loss = 4.450
Epoch  28 Batch   16/43   train_loss = 4.490
Epoch  28 Batch   26/43   train_loss = 4.451
Epoch  28 Batch   36/43   train_loss = 4.345
Epoch  29 Batch    3/43   train_loss = 4.316
Epoch  29 Batch   13/43   train_loss = 4.328
Epoch  29 Batch   23/43   train_loss = 4.424
Epoch  29 Batch   33/43   train_loss = 4.337
Epoch  30 Batch    0/43   train_loss = 4.360
Epoch  30 Batch   10/43   train_loss = 4.303
Epoch  30 Batch   20/43   train_loss = 4.355
Epoch  30 Batch   30/43   train_loss = 4.289
Epoch  30 Batch   40/43   train_loss = 4.198
Epoch  31 Batch    7/43   train_loss = 4.182
Epoch  31 Batch   17/43   train_loss = 4.215
Epoch  31 Batch   27/43   train_loss = 4.259
Epoch  31 Batch   37/43   train_loss = 4.212
Epoch  32 Batch    4/43   train_loss = 4.166
Epoch  32 Batch   14/43   train_loss = 4.285
Epoch  32 Batch   24/43   train_loss = 4.161
Epoch  32 Batch   34/43   train_loss = 4.131
Epoch  33 Batch    1/43   train_loss = 4.154
Epoch  33 Batch   11/43   train_loss = 4.135
Epoch  33 Batch   21/43   train_loss = 4.095
Epoch  33 Batch   31/43   train_loss = 4.210
Epoch  33 Batch   41/43   train_loss = 4.127
Epoch  34 Batch    8/43   train_loss = 4.029
Epoch  34 Batch   18/43   train_loss = 4.150
Epoch  34 Batch   28/43   train_loss = 4.133
Epoch  34 Batch   38/43   train_loss = 4.086
Epoch  35 Batch    5/43   train_loss = 4.138
Epoch  35 Batch   15/43   train_loss = 4.149
Epoch  35 Batch   25/43   train_loss = 3.983
Epoch  35 Batch   35/43   train_loss = 4.094
Epoch  36 Batch    2/43   train_loss = 4.080
Epoch  36 Batch   12/43   train_loss = 3.952
Epoch  36 Batch   22/43   train_loss = 4.153
Epoch  36 Batch   32/43   train_loss = 4.057
Epoch  36 Batch   42/43   train_loss = 4.004
Epoch  37 Batch    9/43   train_loss = 3.921
Epoch  37 Batch   19/43   train_loss = 3.922
Epoch  37 Batch   29/43   train_loss = 3.996
Epoch  37 Batch   39/43   train_loss = 3.897
Epoch  38 Batch    6/43   train_loss = 3.976
Epoch  38 Batch   16/43   train_loss = 3.969
Epoch  38 Batch   26/43   train_loss = 3.964
Epoch  38 Batch   36/43   train_loss = 3.847
Epoch  39 Batch    3/43   train_loss = 3.803
Epoch  39 Batch   13/43   train_loss = 3.811
Epoch  39 Batch   23/43   train_loss = 3.927
Epoch  39 Batch   33/43   train_loss = 3.834
Epoch  40 Batch    0/43   train_loss = 3.876
Epoch  40 Batch   10/43   train_loss = 3.824
Epoch  40 Batch   20/43   train_loss = 3.853
Epoch  40 Batch   30/43   train_loss = 3.808
Epoch  40 Batch   40/43   train_loss = 3.736
Epoch  41 Batch    7/43   train_loss = 3.716
Epoch  41 Batch   17/43   train_loss = 3.756
Epoch  41 Batch   27/43   train_loss = 3.779
Epoch  41 Batch   37/43   train_loss = 3.755
Epoch  42 Batch    4/43   train_loss = 3.742
Epoch  42 Batch   14/43   train_loss = 3.837
Epoch  42 Batch   24/43   train_loss = 3.689
Epoch  42 Batch   34/43   train_loss = 3.672
Epoch  43 Batch    1/43   train_loss = 3.713
Epoch  43 Batch   11/43   train_loss = 3.716
Epoch  43 Batch   21/43   train_loss = 3.697
Epoch  43 Batch   31/43   train_loss = 3.759
Epoch  43 Batch   41/43   train_loss = 3.698
Epoch  44 Batch    8/43   train_loss = 3.612
Epoch  44 Batch   18/43   train_loss = 3.697
Epoch  44 Batch   28/43   train_loss = 3.713
Epoch  44 Batch   38/43   train_loss = 3.623
Epoch  45 Batch    5/43   train_loss = 3.695
Epoch  45 Batch   15/43   train_loss = 3.730
Epoch  45 Batch   25/43   train_loss = 3.530
Epoch  45 Batch   35/43   train_loss = 3.624
Epoch  46 Batch    2/43   train_loss = 3.621
Epoch  46 Batch   12/43   train_loss = 3.508
Epoch  46 Batch   22/43   train_loss = 3.676
Epoch  46 Batch   32/43   train_loss = 3.547
Epoch  46 Batch   42/43   train_loss = 3.563
Epoch  47 Batch    9/43   train_loss = 3.448
Epoch  47 Batch   19/43   train_loss = 3.464
Epoch  47 Batch   29/43   train_loss = 3.541
Epoch  47 Batch   39/43   train_loss = 3.466
Epoch  48 Batch    6/43   train_loss = 3.526
Epoch  48 Batch   16/43   train_loss = 3.549
Epoch  48 Batch   26/43   train_loss = 3.546
Epoch  48 Batch   36/43   train_loss = 3.427
Epoch  49 Batch    3/43   train_loss = 3.370
Epoch  49 Batch   13/43   train_loss = 3.403
Epoch  49 Batch   23/43   train_loss = 3.521
Epoch  49 Batch   33/43   train_loss = 3.429
Epoch  50 Batch    0/43   train_loss = 3.445
Epoch  50 Batch   10/43   train_loss = 3.452
Epoch  50 Batch   20/43   train_loss = 3.429
Epoch  50 Batch   30/43   train_loss = 3.415
Epoch  50 Batch   40/43   train_loss = 3.349
Epoch  51 Batch    7/43   train_loss = 3.329
Epoch  51 Batch   17/43   train_loss = 3.363
Epoch  51 Batch   27/43   train_loss = 3.379
Epoch  51 Batch   37/43   train_loss = 3.302
Epoch  52 Batch    4/43   train_loss = 3.321
Epoch  52 Batch   14/43   train_loss = 3.418
Epoch  52 Batch   24/43   train_loss = 3.257
Epoch  52 Batch   34/43   train_loss = 3.284
Epoch  53 Batch    1/43   train_loss = 3.296
Epoch  53 Batch   11/43   train_loss = 3.318
Epoch  53 Batch   21/43   train_loss = 3.250
Epoch  53 Batch   31/43   train_loss = 3.344
Epoch  53 Batch   41/43   train_loss = 3.261
Epoch  54 Batch    8/43   train_loss = 3.171
Epoch  54 Batch   18/43   train_loss = 3.218
Epoch  54 Batch   28/43   train_loss = 3.221
Epoch  54 Batch   38/43   train_loss = 3.196
Epoch  55 Batch    5/43   train_loss = 3.287
Epoch  55 Batch   15/43   train_loss = 3.342
Epoch  55 Batch   25/43   train_loss = 3.114
Epoch  55 Batch   35/43   train_loss = 3.198
Epoch  56 Batch    2/43   train_loss = 3.188
Epoch  56 Batch   12/43   train_loss = 3.104
Epoch  56 Batch   22/43   train_loss = 3.269
Epoch  56 Batch   32/43   train_loss = 3.126
Epoch  56 Batch   42/43   train_loss = 3.137
Epoch  57 Batch    9/43   train_loss = 3.086
Epoch  57 Batch   19/43   train_loss = 3.127
Epoch  57 Batch   29/43   train_loss = 3.207
Epoch  57 Batch   39/43   train_loss = 3.092
Epoch  58 Batch    6/43   train_loss = 3.123
Epoch  58 Batch   16/43   train_loss = 3.095
Epoch  58 Batch   26/43   train_loss = 3.126
Epoch  58 Batch   36/43   train_loss = 3.031
Epoch  59 Batch    3/43   train_loss = 2.987
Epoch  59 Batch   13/43   train_loss = 2.981
Epoch  59 Batch   23/43   train_loss = 3.108
Epoch  59 Batch   33/43   train_loss = 3.062
Epoch  60 Batch    0/43   train_loss = 3.112
Epoch  60 Batch   10/43   train_loss = 3.081
Epoch  60 Batch   20/43   train_loss = 3.027
Epoch  60 Batch   30/43   train_loss = 2.998
Epoch  60 Batch   40/43   train_loss = 2.914
Epoch  61 Batch    7/43   train_loss = 2.913
Epoch  61 Batch   17/43   train_loss = 2.989
Epoch  61 Batch   27/43   train_loss = 2.966
Epoch  61 Batch   37/43   train_loss = 2.949
Epoch  62 Batch    4/43   train_loss = 2.997
Epoch  62 Batch   14/43   train_loss = 3.085
Epoch  62 Batch   24/43   train_loss = 2.965
Epoch  62 Batch   34/43   train_loss = 2.950
Epoch  63 Batch    1/43   train_loss = 2.853
Epoch  63 Batch   11/43   train_loss = 2.958
Epoch  63 Batch   21/43   train_loss = 2.974
Epoch  63 Batch   31/43   train_loss = 3.047
Epoch  63 Batch   41/43   train_loss = 3.043
Epoch  64 Batch    8/43   train_loss = 2.990
Epoch  64 Batch   18/43   train_loss = 2.924
Epoch  64 Batch   28/43   train_loss = 2.906
Epoch  64 Batch   38/43   train_loss = 2.832
Epoch  65 Batch    5/43   train_loss = 2.921
Epoch  65 Batch   15/43   train_loss = 2.996
Epoch  65 Batch   25/43   train_loss = 2.820
Epoch  65 Batch   35/43   train_loss = 2.862
Epoch  66 Batch    2/43   train_loss = 2.787
Epoch  66 Batch   12/43   train_loss = 2.730
Epoch  66 Batch   22/43   train_loss = 2.869
Epoch  66 Batch   32/43   train_loss = 2.707
Epoch  66 Batch   42/43   train_loss = 2.716
Epoch  67 Batch    9/43   train_loss = 2.678
Epoch  67 Batch   19/43   train_loss = 2.692
Epoch  67 Batch   29/43   train_loss = 2.726
Epoch  67 Batch   39/43   train_loss = 2.661
Epoch  68 Batch    6/43   train_loss = 2.726
Epoch  68 Batch   16/43   train_loss = 2.685
Epoch  68 Batch   26/43   train_loss = 2.717
Epoch  68 Batch   36/43   train_loss = 2.624
Epoch  69 Batch    3/43   train_loss = 2.574
Epoch  69 Batch   13/43   train_loss = 2.600
Epoch  69 Batch   23/43   train_loss = 2.690
Epoch  69 Batch   33/43   train_loss = 2.634
Epoch  70 Batch    0/43   train_loss = 2.624
Epoch  70 Batch   10/43   train_loss = 2.647
Epoch  70 Batch   20/43   train_loss = 2.614
Epoch  70 Batch   30/43   train_loss = 2.632
Epoch  70 Batch   40/43   train_loss = 2.552
Epoch  71 Batch    7/43   train_loss = 2.553
Epoch  71 Batch   17/43   train_loss = 2.607
Epoch  71 Batch   27/43   train_loss = 2.566
Epoch  71 Batch   37/43   train_loss = 2.493
Epoch  72 Batch    4/43   train_loss = 2.558
Epoch  72 Batch   14/43   train_loss = 2.616
Epoch  72 Batch   24/43   train_loss = 2.516
Epoch  72 Batch   34/43   train_loss = 2.548
Epoch  73 Batch    1/43   train_loss = 2.477
Epoch  73 Batch   11/43   train_loss = 2.582
Epoch  73 Batch   21/43   train_loss = 2.580
Epoch  73 Batch   31/43   train_loss = 2.616
Epoch  73 Batch   41/43   train_loss = 2.516
Epoch  74 Batch    8/43   train_loss = 2.494
Epoch  74 Batch   18/43   train_loss = 2.504
Epoch  74 Batch   28/43   train_loss = 2.517
Epoch  74 Batch   38/43   train_loss = 2.430
Epoch  75 Batch    5/43   train_loss = 2.550
Epoch  75 Batch   15/43   train_loss = 2.666
Epoch  75 Batch   25/43   train_loss = 2.496
Epoch  75 Batch   35/43   train_loss = 2.565
Epoch  76 Batch    2/43   train_loss = 2.506
Epoch  76 Batch   12/43   train_loss = 2.451
Epoch  76 Batch   22/43   train_loss = 2.551
Epoch  76 Batch   32/43   train_loss = 2.386
Epoch  76 Batch   42/43   train_loss = 2.410
Epoch  77 Batch    9/43   train_loss = 2.429
Epoch  77 Batch   19/43   train_loss = 2.476
Epoch  77 Batch   29/43   train_loss = 2.542
Epoch  77 Batch   39/43   train_loss = 2.460
Epoch  78 Batch    6/43   train_loss = 2.559
Epoch  78 Batch   16/43   train_loss = 2.443
Epoch  78 Batch   26/43   train_loss = 2.435
Epoch  78 Batch   36/43   train_loss = 2.361
Epoch  79 Batch    3/43   train_loss = 2.295
Epoch  79 Batch   13/43   train_loss = 2.366
Epoch  79 Batch   23/43   train_loss = 2.438
Epoch  79 Batch   33/43   train_loss = 2.404
Epoch  80 Batch    0/43   train_loss = 2.344
Epoch  80 Batch   10/43   train_loss = 2.345
Epoch  80 Batch   20/43   train_loss = 2.290
Epoch  80 Batch   30/43   train_loss = 2.309
Epoch  80 Batch   40/43   train_loss = 2.247
Epoch  81 Batch    7/43   train_loss = 2.218
Epoch  81 Batch   17/43   train_loss = 2.298
Epoch  81 Batch   27/43   train_loss = 2.194
Epoch  81 Batch   37/43   train_loss = 2.118
Epoch  82 Batch    4/43   train_loss = 2.184
Epoch  82 Batch   14/43   train_loss = 2.242
Epoch  82 Batch   24/43   train_loss = 2.151
Epoch  82 Batch   34/43   train_loss = 2.130
Epoch  83 Batch    1/43   train_loss = 2.103
Epoch  83 Batch   11/43   train_loss = 2.198
Epoch  83 Batch   21/43   train_loss = 2.193
Epoch  83 Batch   31/43   train_loss = 2.172
Epoch  83 Batch   41/43   train_loss = 2.096
Epoch  84 Batch    8/43   train_loss = 2.072
Epoch  84 Batch   18/43   train_loss = 2.041
Epoch  84 Batch   28/43   train_loss = 2.090
Epoch  84 Batch   38/43   train_loss = 2.002
Epoch  85 Batch    5/43   train_loss = 2.059
Epoch  85 Batch   15/43   train_loss = 2.147
Epoch  85 Batch   25/43   train_loss = 2.024
Epoch  85 Batch   35/43   train_loss = 2.110
Epoch  86 Batch    2/43   train_loss = 2.024
Epoch  86 Batch   12/43   train_loss = 1.991
Epoch  86 Batch   22/43   train_loss = 2.068
Epoch  86 Batch   32/43   train_loss = 1.964
Epoch  86 Batch   42/43   train_loss = 1.981
Epoch  87 Batch    9/43   train_loss = 1.925
Epoch  87 Batch   19/43   train_loss = 1.963
Epoch  87 Batch   29/43   train_loss = 1.995
Epoch  87 Batch   39/43   train_loss = 1.931
Epoch  88 Batch    6/43   train_loss = 2.031
Epoch  88 Batch   16/43   train_loss = 1.971
Epoch  88 Batch   26/43   train_loss = 1.977
Epoch  88 Batch   36/43   train_loss = 1.936
Epoch  89 Batch    3/43   train_loss = 1.884
Epoch  89 Batch   13/43   train_loss = 1.930
Epoch  89 Batch   23/43   train_loss = 1.963
Epoch  89 Batch   33/43   train_loss = 1.979
Epoch  90 Batch    0/43   train_loss = 1.964
Epoch  90 Batch   10/43   train_loss = 1.947
Epoch  90 Batch   20/43   train_loss = 1.913
Epoch  90 Batch   30/43   train_loss = 1.997
Epoch  90 Batch   40/43   train_loss = 1.942
Epoch  91 Batch    7/43   train_loss = 1.847
Epoch  91 Batch   17/43   train_loss = 1.924
Epoch  91 Batch   27/43   train_loss = 1.848
Epoch  91 Batch   37/43   train_loss = 1.803
Epoch  92 Batch    4/43   train_loss = 1.838
Epoch  92 Batch   14/43   train_loss = 1.858
Epoch  92 Batch   24/43   train_loss = 1.805
Epoch  92 Batch   34/43   train_loss = 1.797
Epoch  93 Batch    1/43   train_loss = 1.765
Epoch  93 Batch   11/43   train_loss = 1.865
Epoch  93 Batch   21/43   train_loss = 1.887
Epoch  93 Batch   31/43   train_loss = 1.858
Epoch  93 Batch   41/43   train_loss = 1.766
Epoch  94 Batch    8/43   train_loss = 1.757
Epoch  94 Batch   18/43   train_loss = 1.720
Epoch  94 Batch   28/43   train_loss = 1.750
Epoch  94 Batch   38/43   train_loss = 1.675
Epoch  95 Batch    5/43   train_loss = 1.726
Epoch  95 Batch   15/43   train_loss = 1.800
Epoch  95 Batch   25/43   train_loss = 1.712
Epoch  95 Batch   35/43   train_loss = 1.773
Epoch  96 Batch    2/43   train_loss = 1.733
Epoch  96 Batch   12/43   train_loss = 1.675
Epoch  96 Batch   22/43   train_loss = 1.783
Epoch  96 Batch   32/43   train_loss = 1.667
Epoch  96 Batch   42/43   train_loss = 1.709
Epoch  97 Batch    9/43   train_loss = 1.643
Epoch  97 Batch   19/43   train_loss = 1.654
Epoch  97 Batch   29/43   train_loss = 1.696
Epoch  97 Batch   39/43   train_loss = 1.656
Epoch  98 Batch    6/43   train_loss = 1.720
Epoch  98 Batch   16/43   train_loss = 1.613
Epoch  98 Batch   26/43   train_loss = 1.689
Epoch  98 Batch   36/43   train_loss = 1.674
Epoch  99 Batch    3/43   train_loss = 1.640
Epoch  99 Batch   13/43   train_loss = 1.666
Epoch  99 Batch   23/43   train_loss = 1.692
Epoch  99 Batch   33/43   train_loss = 1.651
Epoch 100 Batch    0/43   train_loss = 1.642
Epoch 100 Batch   10/43   train_loss = 1.650
Epoch 100 Batch   20/43   train_loss = 1.604
Epoch 100 Batch   30/43   train_loss = 1.650
Epoch 100 Batch   40/43   train_loss = 1.576
Epoch 101 Batch    7/43   train_loss = 1.561
Epoch 101 Batch   17/43   train_loss = 1.684
Epoch 101 Batch   27/43   train_loss = 1.596
Epoch 101 Batch   37/43   train_loss = 1.525
Epoch 102 Batch    4/43   train_loss = 1.551
Epoch 102 Batch   14/43   train_loss = 1.581
Epoch 102 Batch   24/43   train_loss = 1.537
Epoch 102 Batch   34/43   train_loss = 1.532
Epoch 103 Batch    1/43   train_loss = 1.495
Epoch 103 Batch   11/43   train_loss = 1.603
Epoch 103 Batch   21/43   train_loss = 1.630
Epoch 103 Batch   31/43   train_loss = 1.617
Epoch 103 Batch   41/43   train_loss = 1.498
Epoch 104 Batch    8/43   train_loss = 1.476
Epoch 104 Batch   18/43   train_loss = 1.459
Epoch 104 Batch   28/43   train_loss = 1.545
Epoch 104 Batch   38/43   train_loss = 1.428
Epoch 105 Batch    5/43   train_loss = 1.472
Epoch 105 Batch   15/43   train_loss = 1.559
Epoch 105 Batch   25/43   train_loss = 1.521
Epoch 105 Batch   35/43   train_loss = 1.573
Epoch 106 Batch    2/43   train_loss = 1.539
Epoch 106 Batch   12/43   train_loss = 1.451
Epoch 106 Batch   22/43   train_loss = 1.495
Epoch 106 Batch   32/43   train_loss = 1.412
Epoch 106 Batch   42/43   train_loss = 1.513
Epoch 107 Batch    9/43   train_loss = 1.503
Epoch 107 Batch   19/43   train_loss = 1.520
Epoch 107 Batch   29/43   train_loss = 1.565
Epoch 107 Batch   39/43   train_loss = 1.489
Epoch 108 Batch    6/43   train_loss = 1.475
Epoch 108 Batch   16/43   train_loss = 1.371
Epoch 108 Batch   26/43   train_loss = 1.435
Epoch 108 Batch   36/43   train_loss = 1.439
Epoch 109 Batch    3/43   train_loss = 1.409
Epoch 109 Batch   13/43   train_loss = 1.469
Epoch 109 Batch   23/43   train_loss = 1.502
Epoch 109 Batch   33/43   train_loss = 1.443
Epoch 110 Batch    0/43   train_loss = 1.426
Epoch 110 Batch   10/43   train_loss = 1.432
Epoch 110 Batch   20/43   train_loss = 1.383
Epoch 110 Batch   30/43   train_loss = 1.410
Epoch 110 Batch   40/43   train_loss = 1.374
Epoch 111 Batch    7/43   train_loss = 1.313
Epoch 111 Batch   17/43   train_loss = 1.398
Epoch 111 Batch   27/43   train_loss = 1.340
Epoch 111 Batch   37/43   train_loss = 1.278
Epoch 112 Batch    4/43   train_loss = 1.298
Epoch 112 Batch   14/43   train_loss = 1.291
Epoch 112 Batch   24/43   train_loss = 1.306
Epoch 112 Batch   34/43   train_loss = 1.315
Epoch 113 Batch    1/43   train_loss = 1.259
Epoch 113 Batch   11/43   train_loss = 1.312
Epoch 113 Batch   21/43   train_loss = 1.302
Epoch 113 Batch   31/43   train_loss = 1.274
Epoch 113 Batch   41/43   train_loss = 1.228
Epoch 114 Batch    8/43   train_loss = 1.229
Epoch 114 Batch   18/43   train_loss = 1.141
Epoch 114 Batch   28/43   train_loss = 1.226
Epoch 114 Batch   38/43   train_loss = 1.177
Epoch 115 Batch    5/43   train_loss = 1.177
Epoch 115 Batch   15/43   train_loss = 1.270
Epoch 115 Batch   25/43   train_loss = 1.206
Epoch 115 Batch   35/43   train_loss = 1.257
Epoch 116 Batch    2/43   train_loss = 1.156
Epoch 116 Batch   12/43   train_loss = 1.122
Epoch 116 Batch   22/43   train_loss = 1.181
Epoch 116 Batch   32/43   train_loss = 1.134
Epoch 116 Batch   42/43   train_loss = 1.136
Epoch 117 Batch    9/43   train_loss = 1.111
Epoch 117 Batch   19/43   train_loss = 1.150
Epoch 117 Batch   29/43   train_loss = 1.166
Epoch 117 Batch   39/43   train_loss = 1.128
Epoch 118 Batch    6/43   train_loss = 1.176
Epoch 118 Batch   16/43   train_loss = 1.064
Epoch 118 Batch   26/43   train_loss = 1.105
Epoch 118 Batch   36/43   train_loss = 1.112
Epoch 119 Batch    3/43   train_loss = 1.083
Epoch 119 Batch   13/43   train_loss = 1.113
Epoch 119 Batch   23/43   train_loss = 1.155
Epoch 119 Batch   33/43   train_loss = 1.096
Epoch 120 Batch    0/43   train_loss = 1.117
Epoch 120 Batch   10/43   train_loss = 1.121
Epoch 120 Batch   20/43   train_loss = 1.046
Epoch 120 Batch   30/43   train_loss = 1.099
Epoch 120 Batch   40/43   train_loss = 1.094
Epoch 121 Batch    7/43   train_loss = 1.049
Epoch 121 Batch   17/43   train_loss = 1.149
Epoch 121 Batch   27/43   train_loss = 1.095
Epoch 121 Batch   37/43   train_loss = 1.027
Epoch 122 Batch    4/43   train_loss = 1.061
Epoch 122 Batch   14/43   train_loss = 1.047
Epoch 122 Batch   24/43   train_loss = 1.047
Epoch 122 Batch   34/43   train_loss = 1.027
Epoch 123 Batch    1/43   train_loss = 1.033
Epoch 123 Batch   11/43   train_loss = 1.123
Epoch 123 Batch   21/43   train_loss = 1.110
Epoch 123 Batch   31/43   train_loss = 1.117
Epoch 123 Batch   41/43   train_loss = 1.077
Epoch 124 Batch    8/43   train_loss = 1.018
Epoch 124 Batch   18/43   train_loss = 0.941
Epoch 124 Batch   28/43   train_loss = 1.007
Epoch 124 Batch   38/43   train_loss = 1.010
Epoch 125 Batch    5/43   train_loss = 0.971
Epoch 125 Batch   15/43   train_loss = 1.078
Epoch 125 Batch   25/43   train_loss = 1.056
Epoch 125 Batch   35/43   train_loss = 1.135
Epoch 126 Batch    2/43   train_loss = 1.065
Epoch 126 Batch   12/43   train_loss = 1.011
Epoch 126 Batch   22/43   train_loss = 1.073
Epoch 126 Batch   32/43   train_loss = 0.986
Epoch 126 Batch   42/43   train_loss = 0.992
Epoch 127 Batch    9/43   train_loss = 0.958
Epoch 127 Batch   19/43   train_loss = 0.994
Epoch 127 Batch   29/43   train_loss = 1.020
Epoch 127 Batch   39/43   train_loss = 1.022
Epoch 128 Batch    6/43   train_loss = 1.091
Epoch 128 Batch   16/43   train_loss = 1.031
Epoch 128 Batch   26/43   train_loss = 1.058
Epoch 128 Batch   36/43   train_loss = 1.003
Epoch 129 Batch    3/43   train_loss = 0.980
Epoch 129 Batch   13/43   train_loss = 0.995
Epoch 129 Batch   23/43   train_loss = 1.070
Epoch 129 Batch   33/43   train_loss = 1.017
Epoch 130 Batch    0/43   train_loss = 1.017
Epoch 130 Batch   10/43   train_loss = 1.040
Epoch 130 Batch   20/43   train_loss = 0.964
Epoch 130 Batch   30/43   train_loss = 0.956
Epoch 130 Batch   40/43   train_loss = 0.953
Epoch 131 Batch    7/43   train_loss = 0.959
Epoch 131 Batch   17/43   train_loss = 1.008
Epoch 131 Batch   27/43   train_loss = 0.977
Epoch 131 Batch   37/43   train_loss = 0.920
Epoch 132 Batch    4/43   train_loss = 0.929
Epoch 132 Batch   14/43   train_loss = 0.901
Epoch 132 Batch   24/43   train_loss = 0.927
Epoch 132 Batch   34/43   train_loss = 0.878
Epoch 133 Batch    1/43   train_loss = 0.853
Epoch 133 Batch   11/43   train_loss = 0.931
Epoch 133 Batch   21/43   train_loss = 0.902
Epoch 133 Batch   31/43   train_loss = 0.878
Epoch 133 Batch   41/43   train_loss = 0.844
Epoch 134 Batch    8/43   train_loss = 0.831
Epoch 134 Batch   18/43   train_loss = 0.775
Epoch 134 Batch   28/43   train_loss = 0.823
Epoch 134 Batch   38/43   train_loss = 0.833
Epoch 135 Batch    5/43   train_loss = 0.791
Epoch 135 Batch   15/43   train_loss = 0.861
Epoch 135 Batch   25/43   train_loss = 0.848
Epoch 135 Batch   35/43   train_loss = 0.902
Epoch 136 Batch    2/43   train_loss = 0.805
Epoch 136 Batch   12/43   train_loss = 0.759
Epoch 136 Batch   22/43   train_loss = 0.791
Epoch 136 Batch   32/43   train_loss = 0.779
Epoch 136 Batch   42/43   train_loss = 0.784
Epoch 137 Batch    9/43   train_loss = 0.766
Epoch 137 Batch   19/43   train_loss = 0.809
Epoch 137 Batch   29/43   train_loss = 0.822
Epoch 137 Batch   39/43   train_loss = 0.795
Epoch 138 Batch    6/43   train_loss = 0.842
Epoch 138 Batch   16/43   train_loss = 0.770
Epoch 138 Batch   26/43   train_loss = 0.783
Epoch 138 Batch   36/43   train_loss = 0.770
Epoch 139 Batch    3/43   train_loss = 0.739
Epoch 139 Batch   13/43   train_loss = 0.756
Epoch 139 Batch   23/43   train_loss = 0.793
Epoch 139 Batch   33/43   train_loss = 0.740
Epoch 140 Batch    0/43   train_loss = 0.769
Epoch 140 Batch   10/43   train_loss = 0.804
Epoch 140 Batch   20/43   train_loss = 0.763
Epoch 140 Batch   30/43   train_loss = 0.805
Epoch 140 Batch   40/43   train_loss = 0.787
Epoch 141 Batch    7/43   train_loss = 0.729
Epoch 141 Batch   17/43   train_loss = 0.777
Epoch 141 Batch   27/43   train_loss = 0.723
Epoch 141 Batch   37/43   train_loss = 0.711
Epoch 142 Batch    4/43   train_loss = 0.738
Epoch 142 Batch   14/43   train_loss = 0.754
Epoch 142 Batch   24/43   train_loss = 0.783
Epoch 142 Batch   34/43   train_loss = 0.759
Epoch 143 Batch    1/43   train_loss = 0.735
Epoch 143 Batch   11/43   train_loss = 0.788
Epoch 143 Batch   21/43   train_loss = 0.754
Epoch 143 Batch   31/43   train_loss = 0.752
Epoch 143 Batch   41/43   train_loss = 0.746
Epoch 144 Batch    8/43   train_loss = 0.736
Epoch 144 Batch   18/43   train_loss = 0.698
Epoch 144 Batch   28/43   train_loss = 0.758
Epoch 144 Batch   38/43   train_loss = 0.772
Epoch 145 Batch    5/43   train_loss = 0.719
Epoch 145 Batch   15/43   train_loss = 0.771
Epoch 145 Batch   25/43   train_loss = 0.770
Epoch 145 Batch   35/43   train_loss = 0.774
Epoch 146 Batch    2/43   train_loss = 0.723
Epoch 146 Batch   12/43   train_loss = 0.709
Epoch 146 Batch   22/43   train_loss = 0.773
Epoch 146 Batch   32/43   train_loss = 0.777
Epoch 146 Batch   42/43   train_loss = 0.781
Epoch 147 Batch    9/43   train_loss = 0.717
Epoch 147 Batch   19/43   train_loss = 0.728
Epoch 147 Batch   29/43   train_loss = 0.746
Epoch 147 Batch   39/43   train_loss = 0.731
Epoch 148 Batch    6/43   train_loss = 0.765
Epoch 148 Batch   16/43   train_loss = 0.707
Epoch 148 Batch   26/43   train_loss = 0.722
Epoch 148 Batch   36/43   train_loss = 0.700
Epoch 149 Batch    3/43   train_loss = 0.659
Epoch 149 Batch   13/43   train_loss = 0.726
Epoch 149 Batch   23/43   train_loss = 0.780
Epoch 149 Batch   33/43   train_loss = 0.702
Epoch 150 Batch    0/43   train_loss = 0.700
Epoch 150 Batch   10/43   train_loss = 0.725
Epoch 150 Batch   20/43   train_loss = 0.651
Epoch 150 Batch   30/43   train_loss = 0.664
Epoch 150 Batch   40/43   train_loss = 0.674
Epoch 151 Batch    7/43   train_loss = 0.627
Epoch 151 Batch   17/43   train_loss = 0.674
Epoch 151 Batch   27/43   train_loss = 0.639
Epoch 151 Batch   37/43   train_loss = 0.626
Epoch 152 Batch    4/43   train_loss = 0.640
Epoch 152 Batch   14/43   train_loss = 0.615
Epoch 152 Batch   24/43   train_loss = 0.641
Epoch 152 Batch   34/43   train_loss = 0.626
Epoch 153 Batch    1/43   train_loss = 0.577
Epoch 153 Batch   11/43   train_loss = 0.628
Epoch 153 Batch   21/43   train_loss = 0.622
Epoch 153 Batch   31/43   train_loss = 0.624
Epoch 153 Batch   41/43   train_loss = 0.609
Epoch 154 Batch    8/43   train_loss = 0.612
Epoch 154 Batch   18/43   train_loss = 0.563
Epoch 154 Batch   28/43   train_loss = 0.589
Epoch 154 Batch   38/43   train_loss = 0.605
Epoch 155 Batch    5/43   train_loss = 0.550
Epoch 155 Batch   15/43   train_loss = 0.575
Epoch 155 Batch   25/43   train_loss = 0.605
Epoch 155 Batch   35/43   train_loss = 0.607
Epoch 156 Batch    2/43   train_loss = 0.586
Epoch 156 Batch   12/43   train_loss = 0.556
Epoch 156 Batch   22/43   train_loss = 0.606
Epoch 156 Batch   32/43   train_loss = 0.581
Epoch 156 Batch   42/43   train_loss = 0.590
Epoch 157 Batch    9/43   train_loss = 0.542
Epoch 157 Batch   19/43   train_loss = 0.581
Epoch 157 Batch   29/43   train_loss = 0.593
Epoch 157 Batch   39/43   train_loss = 0.564
Epoch 158 Batch    6/43   train_loss = 0.591
Epoch 158 Batch   16/43   train_loss = 0.534
Epoch 158 Batch   26/43   train_loss = 0.552
Epoch 158 Batch   36/43   train_loss = 0.538
Epoch 159 Batch    3/43   train_loss = 0.549
Epoch 159 Batch   13/43   train_loss = 0.567
Epoch 159 Batch   23/43   train_loss = 0.568
Epoch 159 Batch   33/43   train_loss = 0.508
Epoch 160 Batch    0/43   train_loss = 0.557
Epoch 160 Batch   10/43   train_loss = 0.578
Epoch 160 Batch   20/43   train_loss = 0.520
Epoch 160 Batch   30/43   train_loss = 0.532
Epoch 160 Batch   40/43   train_loss = 0.512
Epoch 161 Batch    7/43   train_loss = 0.504
Epoch 161 Batch   17/43   train_loss = 0.566
Epoch 161 Batch   27/43   train_loss = 0.542
Epoch 161 Batch   37/43   train_loss = 0.520
Epoch 162 Batch    4/43   train_loss = 0.527
Epoch 162 Batch   14/43   train_loss = 0.497
Epoch 162 Batch   24/43   train_loss = 0.515
Epoch 162 Batch   34/43   train_loss = 0.504
Epoch 163 Batch    1/43   train_loss = 0.470
Epoch 163 Batch   11/43   train_loss = 0.531
Epoch 163 Batch   21/43   train_loss = 0.512
Epoch 163 Batch   31/43   train_loss = 0.528
Epoch 163 Batch   41/43   train_loss = 0.502
Epoch 164 Batch    8/43   train_loss = 0.533
Epoch 164 Batch   18/43   train_loss = 0.480
Epoch 164 Batch   28/43   train_loss = 0.491
Epoch 164 Batch   38/43   train_loss = 0.499
Epoch 165 Batch    5/43   train_loss = 0.464
Epoch 165 Batch   15/43   train_loss = 0.496
Epoch 165 Batch   25/43   train_loss = 0.514
Epoch 165 Batch   35/43   train_loss = 0.514
Epoch 166 Batch    2/43   train_loss = 0.463
Epoch 166 Batch   12/43   train_loss = 0.436
Epoch 166 Batch   22/43   train_loss = 0.490
Epoch 166 Batch   32/43   train_loss = 0.509
Epoch 166 Batch   42/43   train_loss = 0.522
Epoch 167 Batch    9/43   train_loss = 0.492
Epoch 167 Batch   19/43   train_loss = 0.510
Epoch 167 Batch   29/43   train_loss = 0.502
Epoch 167 Batch   39/43   train_loss = 0.467
Epoch 168 Batch    6/43   train_loss = 0.495
Epoch 168 Batch   16/43   train_loss = 0.456
Epoch 168 Batch   26/43   train_loss = 0.472
Epoch 168 Batch   36/43   train_loss = 0.474
Epoch 169 Batch    3/43   train_loss = 0.473
Epoch 169 Batch   13/43   train_loss = 0.517
Epoch 169 Batch   23/43   train_loss = 0.535
Epoch 169 Batch   33/43   train_loss = 0.494
Epoch 170 Batch    0/43   train_loss = 0.513
Epoch 170 Batch   10/43   train_loss = 0.519
Epoch 170 Batch   20/43   train_loss = 0.460
Epoch 170 Batch   30/43   train_loss = 0.481
Epoch 170 Batch   40/43   train_loss = 0.471
Epoch 171 Batch    7/43   train_loss = 0.466
Epoch 171 Batch   17/43   train_loss = 0.513
Epoch 171 Batch   27/43   train_loss = 0.492
Epoch 171 Batch   37/43   train_loss = 0.502
Epoch 172 Batch    4/43   train_loss = 0.504
Epoch 172 Batch   14/43   train_loss = 0.495
Epoch 172 Batch   24/43   train_loss = 0.481
Epoch 172 Batch   34/43   train_loss = 0.467
Epoch 173 Batch    1/43   train_loss = 0.434
Epoch 173 Batch   11/43   train_loss = 0.510
Epoch 173 Batch   21/43   train_loss = 0.512
Epoch 173 Batch   31/43   train_loss = 0.529
Epoch 173 Batch   41/43   train_loss = 0.506
Epoch 174 Batch    8/43   train_loss = 0.509
Epoch 174 Batch   18/43   train_loss = 0.453
Epoch 174 Batch   28/43   train_loss = 0.454
Epoch 174 Batch   38/43   train_loss = 0.483
Epoch 175 Batch    5/43   train_loss = 0.442
Epoch 175 Batch   15/43   train_loss = 0.499
Epoch 175 Batch   25/43   train_loss = 0.501
Epoch 175 Batch   35/43   train_loss = 0.544
Epoch 176 Batch    2/43   train_loss = 0.535
Epoch 176 Batch   12/43   train_loss = 0.528
Epoch 176 Batch   22/43   train_loss = 0.553
Epoch 176 Batch   32/43   train_loss = 0.509
Epoch 176 Batch   42/43   train_loss = 0.491
Epoch 177 Batch    9/43   train_loss = 0.468
Epoch 177 Batch   19/43   train_loss = 0.497
Epoch 177 Batch   29/43   train_loss = 0.519
Epoch 177 Batch   39/43   train_loss = 0.494
Epoch 178 Batch    6/43   train_loss = 0.515
Epoch 178 Batch   16/43   train_loss = 0.475
Epoch 178 Batch   26/43   train_loss = 0.482
Epoch 178 Batch   36/43   train_loss = 0.463
Epoch 179 Batch    3/43   train_loss = 0.453
Epoch 179 Batch   13/43   train_loss = 0.469
Epoch 179 Batch   23/43   train_loss = 0.457
Epoch 179 Batch   33/43   train_loss = 0.433
Model Trained and Saved

Save Parameters

Save seq_length and save_dir for generating a new TV script.


In [23]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))

Checkpoint


In [24]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()

Implement Generate Functions

Get Tensors

Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:

  • "input:0"
  • "initial_state:0"
  • "final_state:0"
  • "probs:0"

Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)


In [25]:
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    # Retrieve each tensor by the name it was given when the graph was built
    input_tensor = loaded_graph.get_tensor_by_name('input:0')
    initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
    final_state = loaded_graph.get_tensor_by_name('final_state:0')
    probabilities = loaded_graph.get_tensor_by_name('probs:0')
    
    return input_tensor, initial_state, final_state, probabilities


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)


Tests Passed

Choose Word

Implement the pick_word() function to select the next word using probabilities.


In [97]:
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilites of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Rank word ids by probability, then pick uniformly from the ten most likely words.
    # This adds variety to the output while staying close to the model's predictions.
    ranked = sorted(enumerate(probabilities), key=lambda x: x[1], reverse=True)
    choice = np.random.choice([word_id for word_id, _ in ranked[:10]])
    
    return int_to_vocab[choice]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)


Tests Passed
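An alternative worth experimenting with, sketched below rather than used above, is to sample from the full distribution so each word is chosen in proportion to its predicted probability:

def pick_word_weighted(probabilities, int_to_vocab):
    # Sample a word id weighted by the model's predicted probabilities
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]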

Generate TV Script

This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.


In [98]:
gen_length = 200
# homer_simpson, moe_szyslak, or barney_gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})
        
        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)
    
    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')
        
    print(tv_script)


moe_szyslak: okay. but don't check that(homer's laugh, unless and 'em-- do all blue carl? duffman did / changing any feet-- / four girl left. then it's only started with no cough into that lady with her.
kemi: all right boys before we gotta pick" too then" later"
jacques:" it ain't it's duff too thing about" the deer straight in a" forget-me-shot! then" / i gave up barney back out!
harv: we leave goin'?
doreen: he just see a guy in my name and there's 'cause there's too hero like to? same one was down?
artie_ziff: well get for our store by twenty million eating" could water with what dead kermit. cigarette as once by some apron? it's already everything. can a poem! from lenny. m(then maggie noise again / canyonero) oh right!(then loud off moan, please watch the camera / book maggie-- comic_book_guy: hello and spend what anyway! super could do bartenders say and save moe's people bad-mouth no one

The TV Script is Nonsensical

It's okay if the TV script doesn't make any sense. We trained on less than a megabyte of text. To get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data because that would take too long. However, you are free to train your neural network on all the data, after you complete the project, of course.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.