TV Script Generation

In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.

Get the Data

The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.


In [196]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [197]:
view_sentence_range = (100, 110)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.248091603053435
Number of lines: 4257
Average number of words in each line: 11.50434578341555

The sentences 100 to 110:
Barney_Gumble: Wow, it really works.
HARV: (CHUCKLING) I'll be back.
Homer_Simpson: Moe, I haven't seen the place this crowded since the government cracked down on you for accepting food stamps. Do you think my drink had something to do with it?
Moe_Szyslak: Who can say? It's probably a combination of things.
Patron_#1: (TO MOE) Another pitcher of those amazing "Flaming Moe's".
Patron_#2: Boy, I hate this joint, but I love that drink.
Collette: Barkeep, I couldn't help noticing your sign.
Moe_Szyslak: The one that says, "Bartenders Do It 'Til You Barf"?
Collette: No, above that store-bought drollery.
Moe_Szyslak: Oh great! Why don't we fill out an application? (READING) I'll need your name, measurements and turn ons..

Implement Preprocessing Functions

The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:

  • Lookup Table
  • Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:

  • Dictionary to go from the words to an id, which we'll call vocab_to_int
  • Dictionary to go from the id to the word, which we'll call int_to_vocab

Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)


In [198]:
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    from collections import Counter

    # Sort words by frequency so the most common word gets the lowest id
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {word: i for i, word in enumerate(vocab)}
    int_to_vocab = {i: word for i, word in enumerate(vocab)}

    return vocab_to_int, int_to_vocab


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)


Tests Passed
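
As a quick sanity check, the tables can be exercised on a toy word list (hypothetical data, not the project's dataset). With the frequency-sorted implementation above, the most common word gets the lowest id and the two dictionaries invert each other:

sample_words = ['moe', 'homer', 'moe', 'bart', 'moe', 'homer']
v2i, i2v = create_lookup_tables(sample_words)
print(v2i['moe'])          # 0 -- 'moe' is the most frequent word
print(i2v[v2i['homer']])   # 'homer' -- round trip through both tables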

Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:

  • Period ( . )
  • Comma ( , )
  • Quotation Mark ( " )
  • Semicolon ( ; )
  • Exclamation mark ( ! )
  • Question mark ( ? )
  • Left Parentheses ( ( )
  • Right Parentheses ( ) )
  • Dash ( -- )
  • Return ( \n )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".


In [199]:
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    tokens = {
        ".": "||Period||",
        ",": "||Comma||",
        '"': "||Quotation_Mark||",
        ";": "||Semicolon||",
        "!": "||Exclamation_Mark||",
        "?": "||Question_Mark||",
        "(": "||Left_Parentheses||",
        ")": "||Right_Parentheses||",
        "--": "||Dash||",
        "\n": "||Return||"
    }
    return tokens

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)


Tests Passed
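
For illustration, here is a minimal sketch of how such a dictionary might be applied; the real replacement happens inside helper.preprocess_and_save_data, whose implementation may differ:

sample = 'Moe! Get me a beer.'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split())
# ['moe', '||exclamation_mark||', 'get', 'me', 'a', 'beer', '||period||']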

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [200]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [201]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()

Build the Neural Network

You'll build the components necessary for an RNN by implementing the following functions:

  • get_inputs
  • get_init_cell
  • get_embed
  • build_rnn
  • build_nn
  • get_batches

Check the Version of TensorFlow and Access to GPU


In [202]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.0.0
/Users/mira/anaconda/envs/py36/lib/python3.6/site-packages/ipykernel_launcher.py:14: UserWarning: No GPU found. Please use a GPU to train your neural network.
  

Input

Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter.
  • Targets placeholder
  • Learning Rate placeholder

Return the placeholders in the following tuple (Input, Targets, LearningRate)


In [203]:
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # Named 'inputs' locally to avoid shadowing Python's built-in input()
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)


Tests Passed

Build RNN Cell and Initialize

Stack one or more BasicLSTMCells in a MultiRNNCell.

  • The RNN size should be set using rnn_size
  • Initialize the cell state using the MultiRNNCell's zero_state() function
    • Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the following tuple (Cell, InitialState)


In [204]:
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    lstm_layers = 1
    keep_prob = 0.7

    def build_cell():
        # Basic LSTM cell with dropout applied to its outputs
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    # Stack the LSTM layers, building a fresh cell per layer; repeating one
    # cell object via [cell] * lstm_layers would share weights between layers
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(lstm_layers)])

    # Initial state of all zeros, named so it can be retrieved from the graph later
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")

    return cell, initial_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)


Tests Passed

Word Embedding

Apply embedding to input_data using TensorFlow. Return the embedded sequence.


In [205]:
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # Embedding matrix of shape [vocab_size, embed_dim], initialized uniformly in [-1, 1)
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    # Look up the embedding vector for each word id in input_data
    embed = tf.nn.embedding_lookup(embedding, input_data)
    return embed


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)


Tests Passed
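
A quick shape check on a throwaway graph (a sketch; the sizes are arbitrary): looking up a [batch size, sequence length] tensor of word ids in a [vocab_size, embed_dim] matrix yields a [batch size, sequence length, embed_dim] tensor.

with tf.Graph().as_default():
    word_ids = tf.placeholder(tf.int32, [None, None])
    embedded = get_embed(word_ids, vocab_size=80, embed_dim=10)
    print(embedded.get_shape().as_list())  # [None, None, 10]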

Build RNN

You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.

Return the outputs and final state in the following tuple (Outputs, FinalState)


In [206]:
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    # Name the final state so it can be retrieved from the loaded graph later
    return outputs, tf.identity(final_state, name="final_state")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
  • Build RNN using cell and your build_rnn(cell, inputs) function.
  • Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)


In [207]:
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    embedding = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embedding)
    # Fully connected layer with linear activation: one logit per vocabulary word
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)

    return logits, final_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)


Tests Passed
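
Putting the pieces together on a throwaway graph (a sketch; the sizes are arbitrary): for a [batch size, sequence length] tensor of word ids, build_nn produces one logit per vocabulary word at every time step.

with tf.Graph().as_default():
    word_ids = tf.placeholder(tf.int32, [None, None])
    cell, _ = get_init_cell(tf.shape(word_ids)[0], rnn_size=64)
    logits, final_state = build_nn(cell, 64, word_ids, vocab_size=80, embed_dim=10)
    print(logits.get_shape().as_list())  # [None, None, 80]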

Batches

Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:

  • The first element is a single batch of input with the shape [batch size, sequence length]
  • The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:

[
  # First Batch
  [
    # Batch of Input
    [[ 1  2], [ 7  8], [13 14]]
    # Batch of targets
    [[ 2  3], [ 8  9], [14 15]]
  ]

  # Second Batch
  [
    # Batch of Input
    [[ 3  4], [ 9 10], [15 16]]
    # Batch of targets
    [[ 4  5], [10 11], [16 17]]
  ]

  # Third Batch
  [
    # Batch of Input
    [[ 5  6], [11 12], [17 18]]
    # Batch of targets
    [[ 6  7], [12 13], [18  1]]
  ]
]

Notice that the last target value in the last batch is the first input value of the first batch (in this case, 1). Wrapping around like this gives the final input word a target as well; it's a common technique when creating sequence batches, even though it looks unintuitive at first.


In [208]:
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    batch_length = batch_size * seq_length
    n_batches = len(int_text) // batch_length
    # Keep only as many words as fill complete batches; the rest are dropped
    inputs = np.array(int_text[:n_batches * batch_length])
    # Targets are the inputs shifted one word to the left; np.roll wraps the
    # final target around to the first input value
    targets = np.roll(inputs, -1)
    # Reshape to (batch_size, -1), then split along the time axis into batches
    input_batches = np.split(inputs.reshape(batch_size, -1), n_batches, 1)
    target_batches = np.split(targets.reshape(batch_size, -1), n_batches, 1)

    batches = np.array(list(zip(input_batches, target_batches)))

    return batches

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)


Tests Passed
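
Feeding the documented example through the implementation reproduces the array above, including the wrap-around in the final target:

example = get_batches(list(range(1, 21)), 3, 2)
print(example.shape)       # (3, 2, 3, 2)
print(example[-1][1][-1])  # [18  1] -- the last target wraps to the first input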

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set num_epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set embed_dim to the size of the embedding.
  • Set seq_length to the length of sequence.
  • Set learning_rate to the learning rate.
  • Set show_every_n_batches to the number of batches between progress printouts.

In [217]:
# Number of Epochs
num_epochs = 40
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 48
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 1

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'

Build the Graph

Build the graph using the neural network you implemented.


In [218]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [219]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{}   train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')


Epoch   0 Batch    0/22   train_loss = 8.824
Epoch   0 Batch    1/22   train_loss = 8.103
Epoch   0 Batch    2/22   train_loss = 6.473
Epoch   0 Batch    3/22   train_loss = 6.803
Epoch   0 Batch    4/22   train_loss = 6.657
Epoch   0 Batch    5/22   train_loss = 6.437
Epoch   0 Batch    6/22   train_loss = 6.328
Epoch   0 Batch    7/22   train_loss = 6.389
Epoch   0 Batch    8/22   train_loss = 6.085
Epoch   0 Batch    9/22   train_loss = 6.143
Epoch   0 Batch   10/22   train_loss = 6.080
Epoch   0 Batch   11/22   train_loss = 5.880
Epoch   0 Batch   12/22   train_loss = 5.956
Epoch   0 Batch   13/22   train_loss = 6.002
Epoch   0 Batch   14/22   train_loss = 5.783
Epoch   0 Batch   15/22   train_loss = 5.980
Epoch   0 Batch   16/22   train_loss = 5.754
Epoch   0 Batch   17/22   train_loss = 5.801
Epoch   0 Batch   18/22   train_loss = 5.768
Epoch   0 Batch   19/22   train_loss = 5.775
Epoch   0 Batch   20/22   train_loss = 5.718
Epoch   0 Batch   21/22   train_loss = 5.723
Epoch   1 Batch    0/22   train_loss = 5.375
Epoch   1 Batch    1/22   train_loss = 5.470
Epoch   1 Batch    2/22   train_loss = 5.421
Epoch   1 Batch    3/22   train_loss = 5.493
Epoch   1 Batch    4/22   train_loss = 5.395
Epoch   1 Batch    5/22   train_loss = 5.387
Epoch   1 Batch    6/22   train_loss = 5.297
Epoch   1 Batch    7/22   train_loss = 5.442
Epoch   1 Batch    8/22   train_loss = 5.195
Epoch   1 Batch    9/22   train_loss = 5.307
Epoch   1 Batch   10/22   train_loss = 5.251
Epoch   1 Batch   11/22   train_loss = 5.101
Epoch   1 Batch   12/22   train_loss = 5.153
Epoch   1 Batch   13/22   train_loss = 5.245
Epoch   1 Batch   14/22   train_loss = 4.986
Epoch   1 Batch   15/22   train_loss = 5.206
Epoch   1 Batch   16/22   train_loss = 5.056
Epoch   1 Batch   17/22   train_loss = 5.075
Epoch   1 Batch   18/22   train_loss = 5.066
Epoch   1 Batch   19/22   train_loss = 5.077
Epoch   1 Batch   20/22   train_loss = 4.997
Epoch   1 Batch   21/22   train_loss = 5.062
Epoch   2 Batch    0/22   train_loss = 4.715
Epoch   2 Batch    1/22   train_loss = 4.854
Epoch   2 Batch    2/22   train_loss = 4.845
Epoch   2 Batch    3/22   train_loss = 4.917
Epoch   2 Batch    4/22   train_loss = 4.844
Epoch   2 Batch    5/22   train_loss = 4.828
Epoch   2 Batch    6/22   train_loss = 4.739
Epoch   2 Batch    7/22   train_loss = 4.905
Epoch   2 Batch    8/22   train_loss = 4.688
Epoch   2 Batch    9/22   train_loss = 4.792
Epoch   2 Batch   10/22   train_loss = 4.736
Epoch   2 Batch   11/22   train_loss = 4.638
Epoch   2 Batch   12/22   train_loss = 4.648
Epoch   2 Batch   13/22   train_loss = 4.761
Epoch   2 Batch   14/22   train_loss = 4.547
Epoch   2 Batch   15/22   train_loss = 4.749
Epoch   2 Batch   16/22   train_loss = 4.592
Epoch   2 Batch   17/22   train_loss = 4.601
Epoch   2 Batch   18/22   train_loss = 4.584
Epoch   2 Batch   19/22   train_loss = 4.592
Epoch   2 Batch   20/22   train_loss = 4.553
Epoch   2 Batch   21/22   train_loss = 4.590
Epoch   3 Batch    0/22   train_loss = 4.294
Epoch   3 Batch    1/22   train_loss = 4.434
Epoch   3 Batch    2/22   train_loss = 4.396
Epoch   3 Batch    3/22   train_loss = 4.479
Epoch   3 Batch    4/22   train_loss = 4.417
Epoch   3 Batch    5/22   train_loss = 4.420
Epoch   3 Batch    6/22   train_loss = 4.324
Epoch   3 Batch    7/22   train_loss = 4.474
Epoch   3 Batch    8/22   train_loss = 4.315
Epoch   3 Batch    9/22   train_loss = 4.393
Epoch   3 Batch   10/22   train_loss = 4.334
Epoch   3 Batch   11/22   train_loss = 4.215
Epoch   3 Batch   12/22   train_loss = 4.226
Epoch   3 Batch   13/22   train_loss = 4.347
Epoch   3 Batch   14/22   train_loss = 4.162
Epoch   3 Batch   15/22   train_loss = 4.352
Epoch   3 Batch   16/22   train_loss = 4.205
Epoch   3 Batch   17/22   train_loss = 4.231
Epoch   3 Batch   18/22   train_loss = 4.182
Epoch   3 Batch   19/22   train_loss = 4.192
Epoch   3 Batch   20/22   train_loss = 4.157
Epoch   3 Batch   21/22   train_loss = 4.141
Epoch   4 Batch    0/22   train_loss = 3.926
Epoch   4 Batch    1/22   train_loss = 4.076
Epoch   4 Batch    2/22   train_loss = 4.023
Epoch   4 Batch    3/22   train_loss = 4.065
Epoch   4 Batch    4/22   train_loss = 4.005
Epoch   4 Batch    5/22   train_loss = 4.029
Epoch   4 Batch    6/22   train_loss = 3.938
Epoch   4 Batch    7/22   train_loss = 4.089
Epoch   4 Batch    8/22   train_loss = 3.955
Epoch   4 Batch    9/22   train_loss = 4.018
Epoch   4 Batch   10/22   train_loss = 3.941
Epoch   4 Batch   11/22   train_loss = 3.872
Epoch   4 Batch   12/22   train_loss = 3.838
Epoch   4 Batch   13/22   train_loss = 3.940
Epoch   4 Batch   14/22   train_loss = 3.802
Epoch   4 Batch   15/22   train_loss = 3.953
Epoch   4 Batch   16/22   train_loss = 3.827
Epoch   4 Batch   17/22   train_loss = 3.865
Epoch   4 Batch   18/22   train_loss = 3.797
Epoch   4 Batch   19/22   train_loss = 3.832
Epoch   4 Batch   20/22   train_loss = 3.792
Epoch   4 Batch   21/22   train_loss = 3.807
Epoch   5 Batch    0/22   train_loss = 3.628
Epoch   5 Batch    1/22   train_loss = 3.749
Epoch   5 Batch    2/22   train_loss = 3.689
Epoch   5 Batch    3/22   train_loss = 3.696
Epoch   5 Batch    4/22   train_loss = 3.694
Epoch   5 Batch    5/22   train_loss = 3.669
Epoch   5 Batch    6/22   train_loss = 3.596
Epoch   5 Batch    7/22   train_loss = 3.707
Epoch   5 Batch    8/22   train_loss = 3.639
Epoch   5 Batch    9/22   train_loss = 3.681
Epoch   5 Batch   10/22   train_loss = 3.616
Epoch   5 Batch   11/22   train_loss = 3.580
Epoch   5 Batch   12/22   train_loss = 3.555
Epoch   5 Batch   13/22   train_loss = 3.614
Epoch   5 Batch   14/22   train_loss = 3.529
Epoch   5 Batch   15/22   train_loss = 3.639
Epoch   5 Batch   16/22   train_loss = 3.504
Epoch   5 Batch   17/22   train_loss = 3.547
Epoch   5 Batch   18/22   train_loss = 3.479
Epoch   5 Batch   19/22   train_loss = 3.517
Epoch   5 Batch   20/22   train_loss = 3.469
Epoch   5 Batch   21/22   train_loss = 3.474
Epoch   6 Batch    0/22   train_loss = 3.343
Epoch   6 Batch    1/22   train_loss = 3.448
Epoch   6 Batch    2/22   train_loss = 3.409
Epoch   6 Batch    3/22   train_loss = 3.412
Epoch   6 Batch    4/22   train_loss = 3.377
Epoch   6 Batch    5/22   train_loss = 3.374
Epoch   6 Batch    6/22   train_loss = 3.342
Epoch   6 Batch    7/22   train_loss = 3.417
Epoch   6 Batch    8/22   train_loss = 3.344
Epoch   6 Batch    9/22   train_loss = 3.382
Epoch   6 Batch   10/22   train_loss = 3.320
Epoch   6 Batch   11/22   train_loss = 3.287
Epoch   6 Batch   12/22   train_loss = 3.272
Epoch   6 Batch   13/22   train_loss = 3.287
Epoch   6 Batch   14/22   train_loss = 3.238
Epoch   6 Batch   15/22   train_loss = 3.325
Epoch   6 Batch   16/22   train_loss = 3.244
Epoch   6 Batch   17/22   train_loss = 3.282
Epoch   6 Batch   18/22   train_loss = 3.251
Epoch   6 Batch   19/22   train_loss = 3.256
Epoch   6 Batch   20/22   train_loss = 3.222
Epoch   6 Batch   21/22   train_loss = 3.184
Epoch   7 Batch    0/22   train_loss = 3.099
Epoch   7 Batch    1/22   train_loss = 3.170
Epoch   7 Batch    2/22   train_loss = 3.151
Epoch   7 Batch    3/22   train_loss = 3.135
Epoch   7 Batch    4/22   train_loss = 3.091
Epoch   7 Batch    5/22   train_loss = 3.070
Epoch   7 Batch    6/22   train_loss = 3.034
Epoch   7 Batch    7/22   train_loss = 3.114
Epoch   7 Batch    8/22   train_loss = 3.103
Epoch   7 Batch    9/22   train_loss = 3.109
Epoch   7 Batch   10/22   train_loss = 3.079
Epoch   7 Batch   11/22   train_loss = 3.052
Epoch   7 Batch   12/22   train_loss = 3.053
Epoch   7 Batch   13/22   train_loss = 3.061
Epoch   7 Batch   14/22   train_loss = 3.033
Epoch   7 Batch   15/22   train_loss = 3.111
Epoch   7 Batch   16/22   train_loss = 3.014
Epoch   7 Batch   17/22   train_loss = 3.071
Epoch   7 Batch   18/22   train_loss = 3.066
Epoch   7 Batch   19/22   train_loss = 3.031
Epoch   7 Batch   20/22   train_loss = 3.004
Epoch   7 Batch   21/22   train_loss = 2.958
Epoch   8 Batch    0/22   train_loss = 2.888
Epoch   8 Batch    1/22   train_loss = 2.953
Epoch   8 Batch    2/22   train_loss = 2.914
Epoch   8 Batch    3/22   train_loss = 2.895
Epoch   8 Batch    4/22   train_loss = 2.874
Epoch   8 Batch    5/22   train_loss = 2.842
Epoch   8 Batch    6/22   train_loss = 2.780
Epoch   8 Batch    7/22   train_loss = 2.836
Epoch   8 Batch    8/22   train_loss = 2.853
Epoch   8 Batch    9/22   train_loss = 2.843
Epoch   8 Batch   10/22   train_loss = 2.846
Epoch   8 Batch   11/22   train_loss = 2.872
Epoch   8 Batch   12/22   train_loss = 2.840
Epoch   8 Batch   13/22   train_loss = 2.816
Epoch   8 Batch   14/22   train_loss = 2.851
Epoch   8 Batch   15/22   train_loss = 2.885
Epoch   8 Batch   16/22   train_loss = 2.816
Epoch   8 Batch   17/22   train_loss = 2.846
Epoch   8 Batch   18/22   train_loss = 2.808
Epoch   8 Batch   19/22   train_loss = 2.816
Epoch   8 Batch   20/22   train_loss = 2.789
Epoch   8 Batch   21/22   train_loss = 2.771
Epoch   9 Batch    0/22   train_loss = 2.712
Epoch   9 Batch    1/22   train_loss = 2.761
Epoch   9 Batch    2/22   train_loss = 2.719
Epoch   9 Batch    3/22   train_loss = 2.679
Epoch   9 Batch    4/22   train_loss = 2.661
Epoch   9 Batch    5/22   train_loss = 2.635
Epoch   9 Batch    6/22   train_loss = 2.567
Epoch   9 Batch    7/22   train_loss = 2.593
Epoch   9 Batch    8/22   train_loss = 2.597
Epoch   9 Batch    9/22   train_loss = 2.565
Epoch   9 Batch   10/22   train_loss = 2.622
Epoch   9 Batch   11/22   train_loss = 2.706
Epoch   9 Batch   12/22   train_loss = 2.684
Epoch   9 Batch   13/22   train_loss = 2.653
Epoch   9 Batch   14/22   train_loss = 2.648
Epoch   9 Batch   15/22   train_loss = 2.677
Epoch   9 Batch   16/22   train_loss = 2.662
Epoch   9 Batch   17/22   train_loss = 2.687
Epoch   9 Batch   18/22   train_loss = 2.653
Epoch   9 Batch   19/22   train_loss = 2.649
Epoch   9 Batch   20/22   train_loss = 2.613
Epoch   9 Batch   21/22   train_loss = 2.584
Epoch  10 Batch    0/22   train_loss = 2.556
Epoch  10 Batch    1/22   train_loss = 2.577
Epoch  10 Batch    2/22   train_loss = 2.565
Epoch  10 Batch    3/22   train_loss = 2.499
Epoch  10 Batch    4/22   train_loss = 2.505
Epoch  10 Batch    5/22   train_loss = 2.428
Epoch  10 Batch    6/22   train_loss = 2.379
Epoch  10 Batch    7/22   train_loss = 2.386
Epoch  10 Batch    8/22   train_loss = 2.403
Epoch  10 Batch    9/22   train_loss = 2.385
Epoch  10 Batch   10/22   train_loss = 2.365
Epoch  10 Batch   11/22   train_loss = 2.414
Epoch  10 Batch   12/22   train_loss = 2.431
Epoch  10 Batch   13/22   train_loss = 2.513
Epoch  10 Batch   14/22   train_loss = 2.575
Epoch  10 Batch   15/22   train_loss = 2.589
Epoch  10 Batch   16/22   train_loss = 2.497
Epoch  10 Batch   17/22   train_loss = 2.508
Epoch  10 Batch   18/22   train_loss = 2.509
Epoch  10 Batch   19/22   train_loss = 2.537
Epoch  10 Batch   20/22   train_loss = 2.552
Epoch  10 Batch   21/22   train_loss = 2.523
Epoch  11 Batch    0/22   train_loss = 2.480
Epoch  11 Batch    1/22   train_loss = 2.495
Epoch  11 Batch    2/22   train_loss = 2.452
Epoch  11 Batch    3/22   train_loss = 2.384
Epoch  11 Batch    4/22   train_loss = 2.376
Epoch  11 Batch    5/22   train_loss = 2.341
Epoch  11 Batch    6/22   train_loss = 2.281
Epoch  11 Batch    7/22   train_loss = 2.265
Epoch  11 Batch    8/22   train_loss = 2.274
Epoch  11 Batch    9/22   train_loss = 2.214
Epoch  11 Batch   10/22   train_loss = 2.205
Epoch  11 Batch   11/22   train_loss = 2.250
Epoch  11 Batch   12/22   train_loss = 2.265
Epoch  11 Batch   13/22   train_loss = 2.236
Epoch  11 Batch   14/22   train_loss = 2.293
Epoch  11 Batch   15/22   train_loss = 2.317
Epoch  11 Batch   16/22   train_loss = 2.348
Epoch  11 Batch   17/22   train_loss = 2.415
Epoch  11 Batch   18/22   train_loss = 2.394
Epoch  11 Batch   19/22   train_loss = 2.412
Epoch  11 Batch   20/22   train_loss = 2.331
Epoch  11 Batch   21/22   train_loss = 2.310
Epoch  12 Batch    0/22   train_loss = 2.296
Epoch  12 Batch    1/22   train_loss = 2.360
Epoch  12 Batch    2/22   train_loss = 2.345
Epoch  12 Batch    3/22   train_loss = 2.307
Epoch  12 Batch    4/22   train_loss = 2.309
Epoch  12 Batch    5/22   train_loss = 2.256
Epoch  12 Batch    6/22   train_loss = 2.160
Epoch  12 Batch    7/22   train_loss = 2.131
Epoch  12 Batch    8/22   train_loss = 2.156
Epoch  12 Batch    9/22   train_loss = 2.075
Epoch  12 Batch   10/22   train_loss = 2.077
Epoch  12 Batch   11/22   train_loss = 2.124
Epoch  12 Batch   12/22   train_loss = 2.122
Epoch  12 Batch   13/22   train_loss = 2.126
Epoch  12 Batch   14/22   train_loss = 2.180
Epoch  12 Batch   15/22   train_loss = 2.189
Epoch  12 Batch   16/22   train_loss = 2.191
Epoch  12 Batch   17/22   train_loss = 2.256
Epoch  12 Batch   18/22   train_loss = 2.207
Epoch  12 Batch   19/22   train_loss = 2.247
Epoch  12 Batch   20/22   train_loss = 2.219
Epoch  12 Batch   21/22   train_loss = 2.225
Epoch  13 Batch    0/22   train_loss = 2.205
Epoch  13 Batch    1/22   train_loss = 2.192
Epoch  13 Batch    2/22   train_loss = 2.187
Epoch  13 Batch    3/22   train_loss = 2.135
Epoch  13 Batch    4/22   train_loss = 2.194
Epoch  13 Batch    5/22   train_loss = 2.150
Epoch  13 Batch    6/22   train_loss = 2.152
Epoch  13 Batch    7/22   train_loss = 2.126
Epoch  13 Batch    8/22   train_loss = 2.107
Epoch  13 Batch    9/22   train_loss = 2.068
Epoch  13 Batch   10/22   train_loss = 2.021
Epoch  13 Batch   11/22   train_loss = 1.998
Epoch  13 Batch   12/22   train_loss = 1.937
Epoch  13 Batch   13/22   train_loss = 1.912
Epoch  13 Batch   14/22   train_loss = 1.995
Epoch  13 Batch   15/22   train_loss = 1.969
Epoch  13 Batch   16/22   train_loss = 2.073
Epoch  13 Batch   17/22   train_loss = 2.125
Epoch  13 Batch   18/22   train_loss = 2.153
Epoch  13 Batch   19/22   train_loss = 2.211
Epoch  13 Batch   20/22   train_loss = 2.165
Epoch  13 Batch   21/22   train_loss = 2.138
Epoch  14 Batch    0/22   train_loss = 2.109
Epoch  14 Batch    1/22   train_loss = 2.140
Epoch  14 Batch    2/22   train_loss = 2.057
Epoch  14 Batch    3/22   train_loss = 2.000
Epoch  14 Batch    4/22   train_loss = 2.005
Epoch  14 Batch    5/22   train_loss = 1.992
Epoch  14 Batch    6/22   train_loss = 1.979
Epoch  14 Batch    7/22   train_loss = 1.999
Epoch  14 Batch    8/22   train_loss = 2.042
Epoch  14 Batch    9/22   train_loss = 2.036
Epoch  14 Batch   10/22   train_loss = 2.025
Epoch  14 Batch   11/22   train_loss = 2.039
Epoch  14 Batch   12/22   train_loss = 1.989
Epoch  14 Batch   13/22   train_loss = 1.951
Epoch  14 Batch   14/22   train_loss = 1.975
Epoch  14 Batch   15/22   train_loss = 1.894
Epoch  14 Batch   16/22   train_loss = 1.844
Epoch  14 Batch   17/22   train_loss = 1.880
Epoch  14 Batch   18/22   train_loss = 1.874
Epoch  14 Batch   19/22   train_loss = 1.945
Epoch  14 Batch   20/22   train_loss = 1.970
Epoch  14 Batch   21/22   train_loss = 2.033
Epoch  15 Batch    0/22   train_loss = 2.035
Epoch  15 Batch    1/22   train_loss = 2.093
Epoch  15 Batch    2/22   train_loss = 2.085
Epoch  15 Batch    3/22   train_loss = 2.002
Epoch  15 Batch    4/22   train_loss = 1.964
Epoch  15 Batch    5/22   train_loss = 1.907
Epoch  15 Batch    6/22   train_loss = 1.854
Epoch  15 Batch    7/22   train_loss = 1.790
Epoch  15 Batch    8/22   train_loss = 1.869
Epoch  15 Batch    9/22   train_loss = 1.857
Epoch  15 Batch   10/22   train_loss = 1.823
Epoch  15 Batch   11/22   train_loss = 1.895
Epoch  15 Batch   12/22   train_loss = 1.863
Epoch  15 Batch   13/22   train_loss = 1.858
Epoch  15 Batch   14/22   train_loss = 1.914
Epoch  15 Batch   15/22   train_loss = 1.832
Epoch  15 Batch   16/22   train_loss = 1.853
Epoch  15 Batch   17/22   train_loss = 1.826
Epoch  15 Batch   18/22   train_loss = 1.767
Epoch  15 Batch   19/22   train_loss = 1.804
Epoch  15 Batch   20/22   train_loss = 1.774
Epoch  15 Batch   21/22   train_loss = 1.747
Epoch  16 Batch    0/22   train_loss = 1.751
Epoch  16 Batch    1/22   train_loss = 1.824
Epoch  16 Batch    2/22   train_loss = 1.762
Epoch  16 Batch    3/22   train_loss = 1.771
Epoch  16 Batch    4/22   train_loss = 1.823
Epoch  16 Batch    5/22   train_loss = 1.785
Epoch  16 Batch    6/22   train_loss = 1.727
Epoch  16 Batch    7/22   train_loss = 1.684
Epoch  16 Batch    8/22   train_loss = 1.748
Epoch  16 Batch    9/22   train_loss = 1.692
Epoch  16 Batch   10/22   train_loss = 1.659
Epoch  16 Batch   11/22   train_loss = 1.714
Epoch  16 Batch   12/22   train_loss = 1.655
Epoch  16 Batch   13/22   train_loss = 1.645
Epoch  16 Batch   14/22   train_loss = 1.711
Epoch  16 Batch   15/22   train_loss = 1.679
Epoch  16 Batch   16/22   train_loss = 1.667
Epoch  16 Batch   17/22   train_loss = 1.694
Epoch  16 Batch   18/22   train_loss = 1.602
Epoch  16 Batch   19/22   train_loss = 1.667
Epoch  16 Batch   20/22   train_loss = 1.635
Epoch  16 Batch   21/22   train_loss = 1.598
Epoch  17 Batch    0/22   train_loss = 1.604
Epoch  17 Batch    1/22   train_loss = 1.652
Epoch  17 Batch    2/22   train_loss = 1.605
Epoch  17 Batch    3/22   train_loss = 1.563
Epoch  17 Batch    4/22   train_loss = 1.590
Epoch  17 Batch    5/22   train_loss = 1.569
Epoch  17 Batch    6/22   train_loss = 1.567
Epoch  17 Batch    7/22   train_loss = 1.492
Epoch  17 Batch    8/22   train_loss = 1.591
Epoch  17 Batch    9/22   train_loss = 1.554
Epoch  17 Batch   10/22   train_loss = 1.527
Epoch  17 Batch   11/22   train_loss = 1.532
Epoch  17 Batch   12/22   train_loss = 1.540
Epoch  17 Batch   13/22   train_loss = 1.464
Epoch  17 Batch   14/22   train_loss = 1.540
Epoch  17 Batch   15/22   train_loss = 1.503
Epoch  17 Batch   16/22   train_loss = 1.508
Epoch  17 Batch   17/22   train_loss = 1.537
Epoch  17 Batch   18/22   train_loss = 1.463
Epoch  17 Batch   19/22   train_loss = 1.496
Epoch  17 Batch   20/22   train_loss = 1.483
Epoch  17 Batch   21/22   train_loss = 1.478
Epoch  18 Batch    0/22   train_loss = 1.465
Epoch  18 Batch    1/22   train_loss = 1.470
Epoch  18 Batch    2/22   train_loss = 1.442
Epoch  18 Batch    3/22   train_loss = 1.420
Epoch  18 Batch    4/22   train_loss = 1.427
Epoch  18 Batch    5/22   train_loss = 1.390
Epoch  18 Batch    6/22   train_loss = 1.360
Epoch  18 Batch    7/22   train_loss = 1.344
Epoch  18 Batch    8/22   train_loss = 1.423
Epoch  18 Batch    9/22   train_loss = 1.385
Epoch  18 Batch   10/22   train_loss = 1.391
Epoch  18 Batch   11/22   train_loss = 1.404
Epoch  18 Batch   12/22   train_loss = 1.410
Epoch  18 Batch   13/22   train_loss = 1.360
Epoch  18 Batch   14/22   train_loss = 1.439
Epoch  18 Batch   15/22   train_loss = 1.375
Epoch  18 Batch   16/22   train_loss = 1.369
Epoch  18 Batch   17/22   train_loss = 1.380
Epoch  18 Batch   18/22   train_loss = 1.319
Epoch  18 Batch   19/22   train_loss = 1.381
Epoch  18 Batch   20/22   train_loss = 1.361
Epoch  18 Batch   21/22   train_loss = 1.362
Epoch  19 Batch    0/22   train_loss = 1.376
Epoch  19 Batch    1/22   train_loss = 1.364
Epoch  19 Batch    2/22   train_loss = 1.325
Epoch  19 Batch    3/22   train_loss = 1.307
Epoch  19 Batch    4/22   train_loss = 1.344
Epoch  19 Batch    5/22   train_loss = 1.281
Epoch  19 Batch    6/22   train_loss = 1.296
Epoch  19 Batch    7/22   train_loss = 1.188
Epoch  19 Batch    8/22   train_loss = 1.269
Epoch  19 Batch    9/22   train_loss = 1.286
Epoch  19 Batch   10/22   train_loss = 1.248
Epoch  19 Batch   11/22   train_loss = 1.281
Epoch  19 Batch   12/22   train_loss = 1.273
Epoch  19 Batch   13/22   train_loss = 1.217
Epoch  19 Batch   14/22   train_loss = 1.352
Epoch  19 Batch   15/22   train_loss = 1.256
Epoch  19 Batch   16/22   train_loss = 1.271
Epoch  19 Batch   17/22   train_loss = 1.265
Epoch  19 Batch   18/22   train_loss = 1.201
Epoch  19 Batch   19/22   train_loss = 1.297
Epoch  19 Batch   20/22   train_loss = 1.259
Epoch  19 Batch   21/22   train_loss = 1.229
Epoch  20 Batch    0/22   train_loss = 1.252
Epoch  20 Batch    1/22   train_loss = 1.267
Epoch  20 Batch    2/22   train_loss = 1.246
Epoch  20 Batch    3/22   train_loss = 1.190
Epoch  20 Batch    4/22   train_loss = 1.211
Epoch  20 Batch    5/22   train_loss = 1.167
Epoch  20 Batch    6/22   train_loss = 1.155
Epoch  20 Batch    7/22   train_loss = 1.101
Epoch  20 Batch    8/22   train_loss = 1.164
Epoch  20 Batch    9/22   train_loss = 1.140
Epoch  20 Batch   10/22   train_loss = 1.151
Epoch  20 Batch   11/22   train_loss = 1.163
Epoch  20 Batch   12/22   train_loss = 1.174
Epoch  20 Batch   13/22   train_loss = 1.108
Epoch  20 Batch   14/22   train_loss = 1.207
Epoch  20 Batch   15/22   train_loss = 1.194
Epoch  20 Batch   16/22   train_loss = 1.168
Epoch  20 Batch   17/22   train_loss = 1.172
Epoch  20 Batch   18/22   train_loss = 1.138
Epoch  20 Batch   19/22   train_loss = 1.169
Epoch  20 Batch   20/22   train_loss = 1.191
Epoch  20 Batch   21/22   train_loss = 1.157
Epoch  21 Batch    0/22   train_loss = 1.181
Epoch  21 Batch    1/22   train_loss = 1.207
Epoch  21 Batch    2/22   train_loss = 1.162
Epoch  21 Batch    3/22   train_loss = 1.129
Epoch  21 Batch    4/22   train_loss = 1.143
Epoch  21 Batch    5/22   train_loss = 1.121
Epoch  21 Batch    6/22   train_loss = 1.093
Epoch  21 Batch    7/22   train_loss = 1.058
Epoch  21 Batch    8/22   train_loss = 1.110
Epoch  21 Batch    9/22   train_loss = 1.084
Epoch  21 Batch   10/22   train_loss = 1.062
Epoch  21 Batch   11/22   train_loss = 1.072
Epoch  21 Batch   12/22   train_loss = 1.071
Epoch  21 Batch   13/22   train_loss = 1.042
Epoch  21 Batch   14/22   train_loss = 1.126
Epoch  21 Batch   15/22   train_loss = 1.081
Epoch  21 Batch   16/22   train_loss = 1.084
Epoch  21 Batch   17/22   train_loss = 1.066
Epoch  21 Batch   18/22   train_loss = 1.060
Epoch  21 Batch   19/22   train_loss = 1.099
Epoch  21 Batch   20/22   train_loss = 1.101
Epoch  21 Batch   21/22   train_loss = 1.099
Epoch  22 Batch    0/22   train_loss = 1.093
Epoch  22 Batch    1/22   train_loss = 1.123
Epoch  22 Batch    2/22   train_loss = 1.074
Epoch  22 Batch    3/22   train_loss = 1.044
Epoch  22 Batch    4/22   train_loss = 1.075
Epoch  22 Batch    5/22   train_loss = 1.024
Epoch  22 Batch    6/22   train_loss = 1.035
Epoch  22 Batch    7/22   train_loss = 0.976
Epoch  22 Batch    8/22   train_loss = 1.048
Epoch  22 Batch    9/22   train_loss = 1.001
Epoch  22 Batch   10/22   train_loss = 1.009
Epoch  22 Batch   11/22   train_loss = 1.002
Epoch  22 Batch   12/22   train_loss = 1.037
Epoch  22 Batch   13/22   train_loss = 0.960
Epoch  22 Batch   14/22   train_loss = 1.060
Epoch  22 Batch   15/22   train_loss = 1.022
Epoch  22 Batch   16/22   train_loss = 1.025
Epoch  22 Batch   17/22   train_loss = 1.022
Epoch  22 Batch   18/22   train_loss = 1.001
Epoch  22 Batch   19/22   train_loss = 1.025
Epoch  22 Batch   20/22   train_loss = 1.022
Epoch  22 Batch   21/22   train_loss = 1.003
Epoch  23 Batch    0/22   train_loss = 1.052
Epoch  23 Batch    1/22   train_loss = 1.057
Epoch  23 Batch    2/22   train_loss = 0.997
Epoch  23 Batch    3/22   train_loss = 1.006
Epoch  23 Batch    4/22   train_loss = 1.029
Epoch  23 Batch    5/22   train_loss = 0.974
Epoch  23 Batch    6/22   train_loss = 0.947
Epoch  23 Batch    7/22   train_loss = 0.900
Epoch  23 Batch    8/22   train_loss = 0.973
Epoch  23 Batch    9/22   train_loss = 0.965
Epoch  23 Batch   10/22   train_loss = 0.947
Epoch  23 Batch   11/22   train_loss = 0.943
Epoch  23 Batch   12/22   train_loss = 0.959
Epoch  23 Batch   13/22   train_loss = 0.893
Epoch  23 Batch   14/22   train_loss = 1.011
Epoch  23 Batch   15/22   train_loss = 0.921
Epoch  23 Batch   16/22   train_loss = 0.933
Epoch  23 Batch   17/22   train_loss = 0.940
Epoch  23 Batch   18/22   train_loss = 0.920
Epoch  23 Batch   19/22   train_loss = 0.973
Epoch  23 Batch   20/22   train_loss = 0.963
Epoch  23 Batch   21/22   train_loss = 0.947
Epoch  24 Batch    0/22   train_loss = 0.968
Epoch  24 Batch    1/22   train_loss = 0.975
Epoch  24 Batch    2/22   train_loss = 0.945
Epoch  24 Batch    3/22   train_loss = 0.927
Epoch  24 Batch    4/22   train_loss = 0.939
Epoch  24 Batch    5/22   train_loss = 0.903
Epoch  24 Batch    6/22   train_loss = 0.887
Epoch  24 Batch    7/22   train_loss = 0.853
Epoch  24 Batch    8/22   train_loss = 0.927
Epoch  24 Batch    9/22   train_loss = 0.921
Epoch  24 Batch   10/22   train_loss = 0.871
Epoch  24 Batch   11/22   train_loss = 0.889
Epoch  24 Batch   12/22   train_loss = 0.894
Epoch  24 Batch   13/22   train_loss = 0.829
Epoch  24 Batch   14/22   train_loss = 0.919
Epoch  24 Batch   15/22   train_loss = 0.889
Epoch  24 Batch   16/22   train_loss = 0.896
Epoch  24 Batch   17/22   train_loss = 0.871
Epoch  24 Batch   18/22   train_loss = 0.867
Epoch  24 Batch   19/22   train_loss = 0.913
Epoch  24 Batch   20/22   train_loss = 0.896
Epoch  24 Batch   21/22   train_loss = 0.886
Epoch  25 Batch    0/22   train_loss = 0.924
Epoch  25 Batch    1/22   train_loss = 0.909
Epoch  25 Batch    2/22   train_loss = 0.869
Epoch  25 Batch    3/22   train_loss = 0.827
Epoch  25 Batch    4/22   train_loss = 0.917
Epoch  25 Batch    5/22   train_loss = 0.858
Epoch  25 Batch    6/22   train_loss = 0.825
Epoch  25 Batch    7/22   train_loss = 0.795
Epoch  25 Batch    8/22   train_loss = 0.854
Epoch  25 Batch    9/22   train_loss = 0.869
Epoch  25 Batch   10/22   train_loss = 0.840
Epoch  25 Batch   11/22   train_loss = 0.853
Epoch  25 Batch   12/22   train_loss = 0.835
Epoch  25 Batch   13/22   train_loss = 0.801
Epoch  25 Batch   14/22   train_loss = 0.870
Epoch  25 Batch   15/22   train_loss = 0.828
Epoch  25 Batch   16/22   train_loss = 0.861
Epoch  25 Batch   17/22   train_loss = 0.824
Epoch  25 Batch   18/22   train_loss = 0.828
Epoch  25 Batch   19/22   train_loss = 0.847
Epoch  25 Batch   20/22   train_loss = 0.845
Epoch  25 Batch   21/22   train_loss = 0.817
Epoch  26 Batch    0/22   train_loss = 0.857
Epoch  26 Batch    1/22   train_loss = 0.878
Epoch  26 Batch    2/22   train_loss = 0.835
Epoch  26 Batch    3/22   train_loss = 0.798
Epoch  26 Batch    4/22   train_loss = 0.858
Epoch  26 Batch    5/22   train_loss = 0.812
Epoch  26 Batch    6/22   train_loss = 0.796
Epoch  26 Batch    7/22   train_loss = 0.758
Epoch  26 Batch    8/22   train_loss = 0.858
Epoch  26 Batch    9/22   train_loss = 0.820
Epoch  26 Batch   10/22   train_loss = 0.786
Epoch  26 Batch   11/22   train_loss = 0.783
Epoch  26 Batch   12/22   train_loss = 0.792
Epoch  26 Batch   13/22   train_loss = 0.769
Epoch  26 Batch   14/22   train_loss = 0.823
Epoch  26 Batch   15/22   train_loss = 0.816
Epoch  26 Batch   16/22   train_loss = 0.814
Epoch  26 Batch   17/22   train_loss = 0.792
Epoch  26 Batch   18/22   train_loss = 0.758
Epoch  26 Batch   19/22   train_loss = 0.817
Epoch  26 Batch   20/22   train_loss = 0.811
Epoch  26 Batch   21/22   train_loss = 0.765
Epoch  27 Batch    0/22   train_loss = 0.832
Epoch  27 Batch    1/22   train_loss = 0.796
Epoch  27 Batch    2/22   train_loss = 0.785
Epoch  27 Batch    3/22   train_loss = 0.758
Epoch  27 Batch    4/22   train_loss = 0.813
Epoch  27 Batch    5/22   train_loss = 0.769
Epoch  27 Batch    6/22   train_loss = 0.748
Epoch  27 Batch    7/22   train_loss = 0.763
Epoch  27 Batch    8/22   train_loss = 0.787
Epoch  27 Batch    9/22   train_loss = 0.790
Epoch  27 Batch   10/22   train_loss = 0.735
Epoch  27 Batch   11/22   train_loss = 0.754
Epoch  27 Batch   12/22   train_loss = 0.762
Epoch  27 Batch   13/22   train_loss = 0.685
Epoch  27 Batch   14/22   train_loss = 0.786
Epoch  27 Batch   15/22   train_loss = 0.740
Epoch  27 Batch   16/22   train_loss = 0.765
Epoch  27 Batch   17/22   train_loss = 0.742
Epoch  27 Batch   18/22   train_loss = 0.711
Epoch  27 Batch   19/22   train_loss = 0.744
Epoch  27 Batch   20/22   train_loss = 0.765
Epoch  27 Batch   21/22   train_loss = 0.731
Epoch  28 Batch    0/22   train_loss = 0.760
Epoch  28 Batch    1/22   train_loss = 0.755
Epoch  28 Batch    2/22   train_loss = 0.747
Epoch  28 Batch    3/22   train_loss = 0.693
Epoch  28 Batch    4/22   train_loss = 0.764
Epoch  28 Batch    5/22   train_loss = 0.716
Epoch  28 Batch    6/22   train_loss = 0.708
Epoch  28 Batch    7/22   train_loss = 0.656
Epoch  28 Batch    8/22   train_loss = 0.737
Epoch  28 Batch    9/22   train_loss = 0.737
Epoch  28 Batch   10/22   train_loss = 0.717
Epoch  28 Batch   11/22   train_loss = 0.725
Epoch  28 Batch   12/22   train_loss = 0.712
Epoch  28 Batch   13/22   train_loss = 0.677
Epoch  28 Batch   14/22   train_loss = 0.764
Epoch  28 Batch   15/22   train_loss = 0.702
Epoch  28 Batch   16/22   train_loss = 0.708
Epoch  28 Batch   17/22   train_loss = 0.689
Epoch  28 Batch   18/22   train_loss = 0.686
Epoch  28 Batch   19/22   train_loss = 0.716
Epoch  28 Batch   20/22   train_loss = 0.717
Epoch  28 Batch   21/22   train_loss = 0.699
Epoch  29 Batch    0/22   train_loss = 0.703
Epoch  29 Batch    1/22   train_loss = 0.735
Epoch  29 Batch    2/22   train_loss = 0.685
Epoch  29 Batch    3/22   train_loss = 0.695
Epoch  29 Batch    4/22   train_loss = 0.716
Epoch  29 Batch    5/22   train_loss = 0.663
Epoch  29 Batch    6/22   train_loss = 0.667
Epoch  29 Batch    7/22   train_loss = 0.635
Epoch  29 Batch    8/22   train_loss = 0.692
Epoch  29 Batch    9/22   train_loss = 0.716
Epoch  29 Batch   10/22   train_loss = 0.674
Epoch  29 Batch   11/22   train_loss = 0.680
Epoch  29 Batch   12/22   train_loss = 0.671
Epoch  29 Batch   13/22   train_loss = 0.620
Epoch  29 Batch   14/22   train_loss = 0.698
Epoch  29 Batch   15/22   train_loss = 0.668
Epoch  29 Batch   16/22   train_loss = 0.687
Epoch  29 Batch   17/22   train_loss = 0.659
Epoch  29 Batch   18/22   train_loss = 0.651
Epoch  29 Batch   19/22   train_loss = 0.668
Epoch  29 Batch   20/22   train_loss = 0.692
Epoch  29 Batch   21/22   train_loss = 0.658
Epoch  30 Batch    0/22   train_loss = 0.671
Epoch  30 Batch    1/22   train_loss = 0.701
Epoch  30 Batch    2/22   train_loss = 0.654
Epoch  30 Batch    3/22   train_loss = 0.626
Epoch  30 Batch    4/22   train_loss = 0.660
Epoch  30 Batch    5/22   train_loss = 0.652
Epoch  30 Batch    6/22   train_loss = 0.667
Epoch  30 Batch    7/22   train_loss = 0.616
Epoch  30 Batch    8/22   train_loss = 0.662
Epoch  30 Batch    9/22   train_loss = 0.655
Epoch  30 Batch   10/22   train_loss = 0.619
Epoch  30 Batch   11/22   train_loss = 0.670
Epoch  30 Batch   12/22   train_loss = 0.648
Epoch  30 Batch   13/22   train_loss = 0.597
Epoch  30 Batch   14/22   train_loss = 0.663
Epoch  30 Batch   15/22   train_loss = 0.647
Epoch  30 Batch   16/22   train_loss = 0.648
Epoch  30 Batch   17/22   train_loss = 0.644
Epoch  30 Batch   18/22   train_loss = 0.622
Epoch  30 Batch   19/22   train_loss = 0.630
Epoch  30 Batch   20/22   train_loss = 0.678
Epoch  30 Batch   21/22   train_loss = 0.628
Epoch  31 Batch    0/22   train_loss = 0.663
Epoch  31 Batch    1/22   train_loss = 0.665
Epoch  31 Batch    2/22   train_loss = 0.625
Epoch  31 Batch    3/22   train_loss = 0.615
Epoch  31 Batch    4/22   train_loss = 0.644
Epoch  31 Batch    5/22   train_loss = 0.617
Epoch  31 Batch    6/22   train_loss = 0.610
Epoch  31 Batch    7/22   train_loss = 0.583
Epoch  31 Batch    8/22   train_loss = 0.635
Epoch  31 Batch    9/22   train_loss = 0.652
Epoch  31 Batch   10/22   train_loss = 0.616
Epoch  31 Batch   11/22   train_loss = 0.620
Epoch  31 Batch   12/22   train_loss = 0.621
Epoch  31 Batch   13/22   train_loss = 0.582
Epoch  31 Batch   14/22   train_loss = 0.668
Epoch  31 Batch   15/22   train_loss = 0.634
Epoch  31 Batch   16/22   train_loss = 0.621
Epoch  31 Batch   17/22   train_loss = 0.588
Epoch  31 Batch   18/22   train_loss = 0.595
Epoch  31 Batch   19/22   train_loss = 0.616
Epoch  31 Batch   20/22   train_loss = 0.629
Epoch  31 Batch   21/22   train_loss = 0.592
Epoch  32 Batch    0/22   train_loss = 0.618
Epoch  32 Batch    1/22   train_loss = 0.626
Epoch  32 Batch    2/22   train_loss = 0.602
Epoch  32 Batch    3/22   train_loss = 0.570
Epoch  32 Batch    4/22   train_loss = 0.630
Epoch  32 Batch    5/22   train_loss = 0.573
Epoch  32 Batch    6/22   train_loss = 0.573
Epoch  32 Batch    7/22   train_loss = 0.561
Epoch  32 Batch    8/22   train_loss = 0.597
Epoch  32 Batch    9/22   train_loss = 0.648
Epoch  32 Batch   10/22   train_loss = 0.589
Epoch  32 Batch   11/22   train_loss = 0.593
Epoch  32 Batch   12/22   train_loss = 0.612
Epoch  32 Batch   13/22   train_loss = 0.533
Epoch  32 Batch   14/22   train_loss = 0.625
Epoch  32 Batch   15/22   train_loss = 0.568
Epoch  32 Batch   16/22   train_loss = 0.601
Epoch  32 Batch   17/22   train_loss = 0.595
Epoch  32 Batch   18/22   train_loss = 0.578
Epoch  32 Batch   19/22   train_loss = 0.608
Epoch  32 Batch   20/22   train_loss = 0.622
Epoch  32 Batch   21/22   train_loss = 0.570
Epoch  33 Batch    0/22   train_loss = 0.602
Epoch  33 Batch    1/22   train_loss = 0.614
Epoch  33 Batch    2/22   train_loss = 0.566
Epoch  33 Batch    3/22   train_loss = 0.580
Epoch  33 Batch    4/22   train_loss = 0.599
Epoch  33 Batch    5/22   train_loss = 0.567
Epoch  33 Batch    6/22   train_loss = 0.555
Epoch  33 Batch    7/22   train_loss = 0.552
Epoch  33 Batch    8/22   train_loss = 0.599
Epoch  33 Batch    9/22   train_loss = 0.595
Epoch  33 Batch   10/22   train_loss = 0.563
Epoch  33 Batch   11/22   train_loss = 0.564
Epoch  33 Batch   12/22   train_loss = 0.572
Epoch  33 Batch   13/22   train_loss = 0.522
Epoch  33 Batch   14/22   train_loss = 0.610
Epoch  33 Batch   15/22   train_loss = 0.585
Epoch  33 Batch   16/22   train_loss = 0.587
Epoch  33 Batch   17/22   train_loss = 0.564
Epoch  33 Batch   18/22   train_loss = 0.542
Epoch  33 Batch   19/22   train_loss = 0.548
Epoch  33 Batch   20/22   train_loss = 0.604
Epoch  33 Batch   21/22   train_loss = 0.547
Epoch  34 Batch    0/22   train_loss = 0.591
Epoch  34 Batch    1/22   train_loss = 0.578
Epoch  34 Batch    2/22   train_loss = 0.565
Epoch  34 Batch    3/22   train_loss = 0.539
Epoch  34 Batch    4/22   train_loss = 0.592
Epoch  34 Batch    5/22   train_loss = 0.528
Epoch  34 Batch    6/22   train_loss = 0.550
Epoch  34 Batch    7/22   train_loss = 0.533
Epoch  34 Batch    8/22   train_loss = 0.589
Epoch  34 Batch    9/22   train_loss = 0.559
Epoch  34 Batch   10/22   train_loss = 0.560
Epoch  34 Batch   11/22   train_loss = 0.550
Epoch  34 Batch   12/22   train_loss = 0.560
Epoch  34 Batch   13/22   train_loss = 0.532
Epoch  34 Batch   14/22   train_loss = 0.592
Epoch  34 Batch   15/22   train_loss = 0.555
Epoch  34 Batch   16/22   train_loss = 0.550
Epoch  34 Batch   17/22   train_loss = 0.529
Epoch  34 Batch   18/22   train_loss = 0.535
Epoch  34 Batch   19/22   train_loss = 0.540
Epoch  34 Batch   20/22   train_loss = 0.577
Epoch  34 Batch   21/22   train_loss = 0.562
Epoch  35 Batch    0/22   train_loss = 0.553
Epoch  35 Batch    1/22   train_loss = 0.542
Epoch  35 Batch    2/22   train_loss = 0.548
Epoch  35 Batch    3/22   train_loss = 0.542
Epoch  35 Batch    4/22   train_loss = 0.546
Epoch  35 Batch    5/22   train_loss = 0.513
Epoch  35 Batch    6/22   train_loss = 0.498
Epoch  35 Batch    7/22   train_loss = 0.519
Epoch  35 Batch    8/22   train_loss = 0.533
Epoch  35 Batch    9/22   train_loss = 0.559
Epoch  35 Batch   10/22   train_loss = 0.528
Epoch  35 Batch   11/22   train_loss = 0.538
Epoch  35 Batch   12/22   train_loss = 0.531
Epoch  35 Batch   13/22   train_loss = 0.505
Epoch  35 Batch   14/22   train_loss = 0.566
Epoch  35 Batch   15/22   train_loss = 0.516
Epoch  35 Batch   16/22   train_loss = 0.522
Epoch  35 Batch   17/22   train_loss = 0.509
Epoch  35 Batch   18/22   train_loss = 0.518
Epoch  35 Batch   19/22   train_loss = 0.522
Epoch  35 Batch   20/22   train_loss = 0.583
Epoch  35 Batch   21/22   train_loss = 0.535
Epoch  36 Batch    0/22   train_loss = 0.528
Epoch  36 Batch    1/22   train_loss = 0.527
Epoch  36 Batch    2/22   train_loss = 0.537
Epoch  36 Batch    3/22   train_loss = 0.486
Epoch  36 Batch    4/22   train_loss = 0.507
Epoch  36 Batch    5/22   train_loss = 0.511
Epoch  36 Batch    6/22   train_loss = 0.494
Epoch  36 Batch    7/22   train_loss = 0.511
Epoch  36 Batch    8/22   train_loss = 0.521
Epoch  36 Batch    9/22   train_loss = 0.544
Epoch  36 Batch   10/22   train_loss = 0.502
Epoch  36 Batch   11/22   train_loss = 0.523
Epoch  36 Batch   12/22   train_loss = 0.525
Epoch  36 Batch   13/22   train_loss = 0.468
Epoch  36 Batch   14/22   train_loss = 0.509
Epoch  36 Batch   15/22   train_loss = 0.515
Epoch  36 Batch   16/22   train_loss = 0.529
Epoch  36 Batch   17/22   train_loss = 0.496
Epoch  36 Batch   18/22   train_loss = 0.490
Epoch  36 Batch   19/22   train_loss = 0.504
Epoch  36 Batch   20/22   train_loss = 0.527
Epoch  36 Batch   21/22   train_loss = 0.502
Epoch  37 Batch    0/22   train_loss = 0.508
Epoch  37 Batch    1/22   train_loss = 0.522
Epoch  37 Batch    2/22   train_loss = 0.525
Epoch  37 Batch    3/22   train_loss = 0.506
Epoch  37 Batch    4/22   train_loss = 0.522
Epoch  37 Batch    5/22   train_loss = 0.495
Epoch  37 Batch    6/22   train_loss = 0.479
Epoch  37 Batch    7/22   train_loss = 0.464
Epoch  37 Batch    8/22   train_loss = 0.539
Epoch  37 Batch    9/22   train_loss = 0.532
Epoch  37 Batch   10/22   train_loss = 0.492
Epoch  37 Batch   11/22   train_loss = 0.489
Epoch  37 Batch   12/22   train_loss = 0.489
Epoch  37 Batch   13/22   train_loss = 0.465
Epoch  37 Batch   14/22   train_loss = 0.533
Epoch  37 Batch   15/22   train_loss = 0.506
Epoch  37 Batch   16/22   train_loss = 0.506
Epoch  37 Batch   17/22   train_loss = 0.510
Epoch  37 Batch   18/22   train_loss = 0.488
Epoch  37 Batch   19/22   train_loss = 0.506
Epoch  37 Batch   20/22   train_loss = 0.505
Epoch  37 Batch   21/22   train_loss = 0.503
Epoch  38 Batch    0/22   train_loss = 0.513
Epoch  38 Batch    1/22   train_loss = 0.490
Epoch  38 Batch    2/22   train_loss = 0.503
Epoch  38 Batch    3/22   train_loss = 0.482
Epoch  38 Batch    4/22   train_loss = 0.501
Epoch  38 Batch    5/22   train_loss = 0.495
Epoch  38 Batch    6/22   train_loss = 0.494
Epoch  38 Batch    7/22   train_loss = 0.447
Epoch  38 Batch    8/22   train_loss = 0.491
Epoch  38 Batch    9/22   train_loss = 0.543
Epoch  38 Batch   10/22   train_loss = 0.474
Epoch  38 Batch   11/22   train_loss = 0.487
Epoch  38 Batch   12/22   train_loss = 0.484
Epoch  38 Batch   13/22   train_loss = 0.444
Epoch  38 Batch   14/22   train_loss = 0.515
Epoch  38 Batch   15/22   train_loss = 0.486
Epoch  38 Batch   16/22   train_loss = 0.514
Epoch  38 Batch   17/22   train_loss = 0.484
Epoch  38 Batch   18/22   train_loss = 0.483
Epoch  38 Batch   19/22   train_loss = 0.497
Epoch  38 Batch   20/22   train_loss = 0.493
Epoch  38 Batch   21/22   train_loss = 0.497
Epoch  39 Batch    0/22   train_loss = 0.485
Epoch  39 Batch    1/22   train_loss = 0.493
Epoch  39 Batch    2/22   train_loss = 0.490
Epoch  39 Batch    3/22   train_loss = 0.444
Epoch  39 Batch    4/22   train_loss = 0.492
Epoch  39 Batch    5/22   train_loss = 0.472
Epoch  39 Batch    6/22   train_loss = 0.456
Epoch  39 Batch    7/22   train_loss = 0.448
Epoch  39 Batch    8/22   train_loss = 0.493
Epoch  39 Batch    9/22   train_loss = 0.488
Epoch  39 Batch   10/22   train_loss = 0.444
Epoch  39 Batch   11/22   train_loss = 0.478
Epoch  39 Batch   12/22   train_loss = 0.489
Epoch  39 Batch   13/22   train_loss = 0.443
Epoch  39 Batch   14/22   train_loss = 0.483
Epoch  39 Batch   15/22   train_loss = 0.471
Epoch  39 Batch   16/22   train_loss = 0.485
Epoch  39 Batch   17/22   train_loss = 0.488
Epoch  39 Batch   18/22   train_loss = 0.455
Epoch  39 Batch   19/22   train_loss = 0.472
Epoch  39 Batch   20/22   train_loss = 0.486
Epoch  39 Batch   21/22   train_loss = 0.468
Model Trained and Saved

Save Parameters

Save seq_length and save_dir for generating a new TV script.


In [220]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))

Checkpoint


In [221]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()

Implement Generate Functions

Get Tensors

Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:

  • "input:0"
  • "initial_state:0"
  • "final_state:0"
  • "probs:0"

Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)


In [222]:
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    
    InputTensor = loaded_graph.get_tensor_by_name("input:0")
    InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
    FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
    ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
    
    return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)


Tests Passed

Choose Word

Implement the pick_word() function to select the next word using probabilities.


In [223]:
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilites of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Greedy decoding: pick the id with the highest probability
    word_id = np.argmax(probabilities)
    return int_to_vocab[word_id]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)


Tests Passed
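
Note that pick_word decodes greedily, always returning the single most likely word. A common alternative (a sketch, not required by the unit test) is to sample from the predicted distribution, which adds variety to the generated script; this assumes word ids are the consecutive integers 0 to len(int_to_vocab) - 1, as produced by create_lookup_tables above:

def pick_word_sampled(probabilities, int_to_vocab):
    # Draw a word id at random, weighted by the predicted probabilities
    word_id = np.random.choice(len(int_to_vocab), p=probabilities)
    return int_to_vocab[word_id]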

Generate TV Script

This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.


In [225]:
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})
        
        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)
    
    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')
        
    print(tv_script)


moe_szyslak: minimum wage and tips.(meaningfully) of course there are fringe benefits.
collette: such as?
moe_szyslak: an unforgettable weekend at club moe.
collette: i prefer to take my vacations someplace hot.
moe_szyslak: i like your moxie, kid.
homer_simpson:(loud sotto) and the entire steel mill was gay!
moe_szyslak:(not surprised) where ya been, homer? entire steel industry's gay.
moe_szyslak: yeah, aerospace, too. and then what happened? omit no detail, no one's ever been happy in this place pick,(while after") can i have some peanuts?
moe_szyslak: all right, my beautiful, beautiful midge--(sneaky chuckle) soon you'll be mine.


reporter:(panicky) oh, oh, it's so hard. please help me, your daughter has been share of the back of this place we can get ready!
moe_szyslak: yeah.(belches)
roz: so, what you boys drinkin'? i'm buyin'. terrible, disturbing secrets.
voice:(

The TV Script is Nonsensical

It's OK if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.