dlnd_tv_script_generation


TV Script Generation

In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.

Get the Data

The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [2]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.251908396946565
Number of lines: 4258
Average number of words in each line: 11.50164396430249

The sentences 0 to 10:

Moe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.
Bart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.
Moe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?
Moe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.
Moe_Szyslak: What's the matter Homer? You're not your normal effervescent self.
Homer_Simpson: I got my problems, Moe. Give me another one.
Moe_Szyslak: Homer, hey, you should not drink to forget your problems.
Barney_Gumble: Yeah, you should only drink to enhance your social skills.

Implement Preprocessing Functions

The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:

  • Lookup Table
  • Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:

  • Dictionary to go from the words to an id, we'll call vocab_to_int
  • Dictionary to go from the id to word, we'll call int_to_vocab

Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)


In [3]:
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    int_to_vocab = dict(enumerate(set(text)))
    vocab_to_int = {word: enum for (enum, word) in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)


Tests Passed
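As a quick sanity check (a hypothetical snippet, not one of the project cells), the two dictionaries should be exact inverses of each other, and repeated words should share a single id:

```python
# Hypothetical demo of the lookup-table idea on a tiny word list.
sample_words = ['moe', 'homer', 'bart', 'moe']

int_to_vocab_demo = dict(enumerate(set(sample_words)))
vocab_to_int_demo = {word: idx for idx, word in int_to_vocab_demo.items()}

# Every word maps to an id that maps back to the same word.
assert all(int_to_vocab_demo[vocab_to_int_demo[w]] == w for w in sample_words)
# Duplicates share one id, so the vocabulary has 3 entries, not 4.
assert len(vocab_to_int_demo) == 3
```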

Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:

  • Period ( . )
  • Comma ( , )
  • Quotation Mark ( " )
  • Semicolon ( ; )
  • Exclamation mark ( ! )
  • Question mark ( ? )
  • Left Parenthesis ( ( )
  • Right Parenthesis ( ) )
  • Dash ( -- )
  • Return ( \n )

This dictionary will be used to tokenize the symbols and add a delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".


In [4]:
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    symbols = ['.', ',', '"', ';', '!', '?', '(', ')', '--', '\n']
    values = ['||Period||', '||Comma||', '||Quotation_Mark||', '||Semicolon||', '||Exclamation_Mark||', '||Question_Mark||',
              '||Left_Parentheses||', '||Right_Parentheses||', '||Dash||', '||Return||']
    token_dict = dict(zip(symbols, values))
    return token_dict

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)


Tests Passed
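To see what the token dictionary buys us, here is a hypothetical usage snippet (with a trimmed-down dictionary) showing the replacement that the preprocessing step applies before splitting on whitespace:

```python
# Hypothetical demo: replace each symbol with its token, padded with spaces.
token_dict = {'!': '||Exclamation_Mark||', ',': '||Comma||'}

line = 'Eh, yeah, hello!'
for symbol, token in token_dict.items():
    line = line.replace(symbol, ' {} '.format(token))
words = line.split()

# 'hello' and '!' are now separate tokens instead of one word 'hello!'.
assert words == ['Eh', '||Comma||', 'yeah', '||Comma||',
                 'hello', '||Exclamation_Mark||']
```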

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()

Build the Neural Network

You'll build the components necessary to build an RNN by implementing the following functions below:

  • get_inputs
  • get_init_cell
  • get_embed
  • build_rnn
  • build_nn
  • get_batches

Check the Version of TensorFlow and Access to GPU


In [7]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Input

Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter.
  • Targets placeholder
  • Learning Rate placeholder

Return the placeholders in the following tuple (Input, Targets, LearningRate)


In [8]:
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    input_data = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return input_data, targets, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)


Tests Passed

Build RNN Cell and Initialize

Stack one or more BasicLSTMCells in a MultiRNNCell.

  • The RNN size should be set using rnn_size
  • Initialize Cell State using the MultiRNNCell's zero_state() function
    • Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the following tuple (Cell, InitialState)


In [9]:
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initial state)
    """
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm])
    init_state = cell.zero_state(batch_size, tf.float32)
    init_state = tf.identity(init_state, name='initial_state')
    return cell, init_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)


Tests Passed

Word Embedding

Apply embedding to input_data using TensorFlow. Return the embedded sequence.


In [10]:
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    embedding = tf.nn.embedding_lookup(embedding, input_data)
    return embedding


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)


Tests Passed
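Under the hood, an embedding lookup is just row indexing into the embedding matrix. A minimal numpy sketch of the same operation (illustrative shapes only, not the TensorFlow graph):

```python
import numpy as np

vocab_size, embed_dim = 5, 3
# Stand-in for the trainable embedding matrix: one row per word id.
embedding = np.random.uniform(-1, 1, (vocab_size, embed_dim))

# A batch of 2 sequences, each 4 word ids long.
input_data = np.array([[0, 1, 2, 3],
                       [4, 3, 2, 1]])

# Embedding lookup == fancy indexing: pick the row for each id.
embedded = embedding[input_data]

# Shape goes from [batch, seq_length] to [batch, seq_length, embed_dim].
assert embedded.shape == (2, 4, embed_dim)
assert np.array_equal(embedded[0, 1], embedding[1])
```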

Build RNN

You created an RNN cell in the get_init_cell() function. Time to use the cell to create an RNN.

Return the outputs and final state in the following tuple (Outputs, FinalState)


In [11]:
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
  • Build RNN using cell and your build_rnn(cell, inputs) function.
  • Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)


In [12]:
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    input_data = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, input_data)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    return logits, final_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)


Tests Passed

Batches

Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:

  • The first element is a single batch of input with the shape [batch size, sequence length]
  • The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:

[
  # First Batch
  [
    # Batch of Input
    [[ 1  2], [ 7  8], [13 14]]
    # Batch of targets
    [[ 2  3], [ 8  9], [14 15]]
  ]

  # Second Batch
  [
    # Batch of Input
    [[ 3  4], [ 9 10], [15 16]]
    # Batch of targets
    [[ 4  5], [10 11], [16 17]]
  ]

  # Third Batch
  [
    # Batch of Input
    [[ 5  6], [11 12], [17 18]]
    # Batch of targets
    [[ 6  7], [12 13], [18  1]]
  ]
]

Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.


In [31]:
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    n_per_batch = batch_size * seq_length
    n_batches = len(int_text) // n_per_batch
    int_text = int_text[:n_batches * n_per_batch]
    batches = np.ndarray((n_batches, 2, batch_size, seq_length))
    for i in range(n_batches):
        inputs, targets = [], []
        for ii in range(i * seq_length, len(int_text), n_batches * seq_length):
            inputs.append(int_text[ii:ii + seq_length])
            targets.append(int_text[ii + 1:ii + 1 + seq_length])
        # Wrap the very last target word around to the first input word
        if len(targets[-1]) != seq_length:
            targets[-1].append(int_text[0])
        batches[i] = [inputs, targets]
    return batches

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)


Tests Passed
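A vectorized way to sanity-check the batching logic is to reshape the truncated sequence into batch_size rows and slice consecutive seq_length windows column-wise. This numpy sketch (a hypothetical make_batches helper, not the graded implementation) reproduces the worked example above, including the wrap-around of the final target:

```python
import numpy as np

def make_batches(int_text, batch_size, seq_length):
    # Drop words that don't fill a complete batch.
    n_batches = len(int_text) // (batch_size * seq_length)
    xdata = np.array(int_text[:n_batches * batch_size * seq_length])
    # Targets are inputs shifted by one; np.roll wraps the last
    # target around to the first input word, as described above.
    ydata = np.roll(xdata, -1)
    x = xdata.reshape(batch_size, -1)
    y = ydata.reshape(batch_size, -1)
    return np.array([[x[:, i:i + seq_length], y[:, i:i + seq_length]]
                     for i in range(0, x.shape[1], seq_length)])

demo = make_batches(list(range(1, 21)), 3, 2)
assert demo.shape == (3, 2, 3, 2)
assert demo[0][0].tolist() == [[1, 2], [7, 8], [13, 14]]   # first batch inputs
assert demo[2][1].tolist() == [[6, 7], [12, 13], [18, 1]]  # last batch targets
```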

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set num_epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set embed_dim to the size of the embedding.
  • Set seq_length to the length of sequence.
  • Set learning_rate to the learning rate.
  • Set show_every_n_batches to the number of batches after which the neural network should print progress.

In [32]:
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 1024
# Embedding Dimension Size
embed_dim = 8
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'

Build the Graph

Build the graph using the neural network you implemented.


In [33]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)
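The clipping step above caps every gradient element to [-1, 1] before the update; element-wise, tf.clip_by_value behaves like numpy's clip. A tiny illustration (made-up gradient values, not from the graph):

```python
import numpy as np

# Stand-in gradient with a couple of exploding entries.
grad = np.array([-3.5, -0.2, 0.0, 0.9, 7.1])

# Same element-wise clamp that tf.clip_by_value(grad, -1., 1.) applies.
capped = np.clip(grad, -1.0, 1.0)
assert capped.tolist() == [-1.0, -0.2, 0.0, 0.9, 1.0]
```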

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [34]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{}   train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')


Epoch   0 Batch    0/44   train_loss = 8.821
Epoch   0 Batch    5/44   train_loss = 7.281
Epoch   0 Batch   10/44   train_loss = 6.672
Epoch   0 Batch   15/44   train_loss = 6.358
Epoch   0 Batch   20/44   train_loss = 6.337
Epoch   0 Batch   25/44   train_loss = 6.327
Epoch   0 Batch   30/44   train_loss = 6.305
Epoch   0 Batch   35/44   train_loss = 5.980
Epoch   0 Batch   40/44   train_loss = 6.003
Epoch   1 Batch    1/44   train_loss = 5.783
Epoch   1 Batch    6/44   train_loss = 5.725
Epoch   1 Batch   11/44   train_loss = 5.677
Epoch   1 Batch   16/44   train_loss = 5.615
Epoch   1 Batch   21/44   train_loss = 5.538
Epoch   1 Batch   26/44   train_loss = 5.312
Epoch   1 Batch   31/44   train_loss = 5.366
Epoch   1 Batch   36/44   train_loss = 5.280
Epoch   1 Batch   41/44   train_loss = 5.135
Epoch   2 Batch    2/44   train_loss = 4.857
Epoch   2 Batch    7/44   train_loss = 5.217
Epoch   2 Batch   12/44   train_loss = 5.000
Epoch   2 Batch   17/44   train_loss = 5.014
Epoch   2 Batch   22/44   train_loss = 4.864
Epoch   2 Batch   27/44   train_loss = 4.894
Epoch   2 Batch   32/44   train_loss = 4.810
Epoch   2 Batch   37/44   train_loss = 4.884
Epoch   2 Batch   42/44   train_loss = 4.741
Epoch   3 Batch    3/44   train_loss = 4.645
Epoch   3 Batch    8/44   train_loss = 4.709
Epoch   3 Batch   13/44   train_loss = 4.623
Epoch   3 Batch   18/44   train_loss = 4.637
Epoch   3 Batch   23/44   train_loss = 4.637
Epoch   3 Batch   28/44   train_loss = 4.565
Epoch   3 Batch   33/44   train_loss = 4.594
Epoch   3 Batch   38/44   train_loss = 4.444
Epoch   3 Batch   43/44   train_loss = 4.487
Epoch   4 Batch    4/44   train_loss = 4.304
Epoch   4 Batch    9/44   train_loss = 4.432
Epoch   4 Batch   14/44   train_loss = 4.515
Epoch   4 Batch   19/44   train_loss = 4.381
Epoch   4 Batch   24/44   train_loss = 4.229
Epoch   4 Batch   29/44   train_loss = 4.348
Epoch   4 Batch   34/44   train_loss = 4.129
Epoch   4 Batch   39/44   train_loss = 4.158
Epoch   5 Batch    0/44   train_loss = 4.001
Epoch   5 Batch    5/44   train_loss = 4.050
Epoch   5 Batch   10/44   train_loss = 4.119
Epoch   5 Batch   15/44   train_loss = 4.029
Epoch   5 Batch   20/44   train_loss = 4.080
Epoch   5 Batch   25/44   train_loss = 4.076
Epoch   5 Batch   30/44   train_loss = 3.963
Epoch   5 Batch   35/44   train_loss = 3.833
Epoch   5 Batch   40/44   train_loss = 3.984
Epoch   6 Batch    1/44   train_loss = 3.851
Epoch   6 Batch    6/44   train_loss = 3.924
Epoch   6 Batch   11/44   train_loss = 3.871
Epoch   6 Batch   16/44   train_loss = 3.916
Epoch   6 Batch   21/44   train_loss = 3.797
Epoch   6 Batch   26/44   train_loss = 3.688
Epoch   6 Batch   31/44   train_loss = 3.783
Epoch   6 Batch   36/44   train_loss = 3.843
Epoch   6 Batch   41/44   train_loss = 3.696
Epoch   7 Batch    2/44   train_loss = 3.654
Epoch   7 Batch    7/44   train_loss = 3.687
Epoch   7 Batch   12/44   train_loss = 3.582
Epoch   7 Batch   17/44   train_loss = 3.678
Epoch   7 Batch   22/44   train_loss = 3.722
Epoch   7 Batch   27/44   train_loss = 3.641
Epoch   7 Batch   32/44   train_loss = 3.563
Epoch   7 Batch   37/44   train_loss = 3.563
Epoch   7 Batch   42/44   train_loss = 3.431
Epoch   8 Batch    3/44   train_loss = 3.462
Epoch   8 Batch    8/44   train_loss = 3.480
Epoch   8 Batch   13/44   train_loss = 3.383
Epoch   8 Batch   18/44   train_loss = 3.559
Epoch   8 Batch   23/44   train_loss = 3.532
Epoch   8 Batch   28/44   train_loss = 3.421
Epoch   8 Batch   33/44   train_loss = 3.484
Epoch   8 Batch   38/44   train_loss = 3.393
Epoch   8 Batch   43/44   train_loss = 3.379
Epoch   9 Batch    4/44   train_loss = 3.239
Epoch   9 Batch    9/44   train_loss = 3.279
Epoch   9 Batch   14/44   train_loss = 3.271
Epoch   9 Batch   19/44   train_loss = 3.304
Epoch   9 Batch   24/44   train_loss = 3.242
Epoch   9 Batch   29/44   train_loss = 3.243
Epoch   9 Batch   34/44   train_loss = 3.225
Epoch   9 Batch   39/44   train_loss = 3.061
Epoch  10 Batch    0/44   train_loss = 3.020
Epoch  10 Batch    5/44   train_loss = 3.010
Epoch  10 Batch   10/44   train_loss = 3.088
Epoch  10 Batch   15/44   train_loss = 3.012
Epoch  10 Batch   20/44   train_loss = 3.014
Epoch  10 Batch   25/44   train_loss = 3.056
Epoch  10 Batch   30/44   train_loss = 3.018
Epoch  10 Batch   35/44   train_loss = 2.997
Epoch  10 Batch   40/44   train_loss = 3.023
Epoch  11 Batch    1/44   train_loss = 2.979
Epoch  11 Batch    6/44   train_loss = 2.981
Epoch  11 Batch   11/44   train_loss = 3.008
Epoch  11 Batch   16/44   train_loss = 2.930
Epoch  11 Batch   21/44   train_loss = 2.864
Epoch  11 Batch   26/44   train_loss = 2.847
Epoch  11 Batch   31/44   train_loss = 2.982
Epoch  11 Batch   36/44   train_loss = 2.994
Epoch  11 Batch   41/44   train_loss = 2.809
Epoch  12 Batch    2/44   train_loss = 2.766
Epoch  12 Batch    7/44   train_loss = 2.730
Epoch  12 Batch   12/44   train_loss = 2.786
Epoch  12 Batch   17/44   train_loss = 2.885
Epoch  12 Batch   22/44   train_loss = 2.840
Epoch  12 Batch   27/44   train_loss = 2.851
Epoch  12 Batch   32/44   train_loss = 2.898
Epoch  12 Batch   37/44   train_loss = 2.743
Epoch  12 Batch   42/44   train_loss = 2.623
Epoch  13 Batch    3/44   train_loss = 2.709
Epoch  13 Batch    8/44   train_loss = 2.637
Epoch  13 Batch   13/44   train_loss = 2.601
Epoch  13 Batch   18/44   train_loss = 2.843
Epoch  13 Batch   23/44   train_loss = 2.670
Epoch  13 Batch   28/44   train_loss = 2.735
Epoch  13 Batch   33/44   train_loss = 2.758
Epoch  13 Batch   38/44   train_loss = 2.703
Epoch  13 Batch   43/44   train_loss = 2.674
Epoch  14 Batch    4/44   train_loss = 2.553
Epoch  14 Batch    9/44   train_loss = 2.646
Epoch  14 Batch   14/44   train_loss = 2.560
Epoch  14 Batch   19/44   train_loss = 2.550
Epoch  14 Batch   24/44   train_loss = 2.591
Epoch  14 Batch   29/44   train_loss = 2.527
Epoch  14 Batch   34/44   train_loss = 2.530
Epoch  14 Batch   39/44   train_loss = 2.516
Epoch  15 Batch    0/44   train_loss = 2.377
Epoch  15 Batch    5/44   train_loss = 2.363
Epoch  15 Batch   10/44   train_loss = 2.363
Epoch  15 Batch   15/44   train_loss = 2.418
Epoch  15 Batch   20/44   train_loss = 2.473
Epoch  15 Batch   25/44   train_loss = 2.529
Epoch  15 Batch   30/44   train_loss = 2.389
Epoch  15 Batch   35/44   train_loss = 2.409
Epoch  15 Batch   40/44   train_loss = 2.466
Epoch  16 Batch    1/44   train_loss = 2.369
Epoch  16 Batch    6/44   train_loss = 2.308
Epoch  16 Batch   11/44   train_loss = 2.317
Epoch  16 Batch   16/44   train_loss = 2.310
Epoch  16 Batch   21/44   train_loss = 2.265
Epoch  16 Batch   26/44   train_loss = 2.335
Epoch  16 Batch   31/44   train_loss = 2.407
Epoch  16 Batch   36/44   train_loss = 2.414
Epoch  16 Batch   41/44   train_loss = 2.194
Epoch  17 Batch    2/44   train_loss = 2.146
Epoch  17 Batch    7/44   train_loss = 2.122
Epoch  17 Batch   12/44   train_loss = 2.137
Epoch  17 Batch   17/44   train_loss = 2.148
Epoch  17 Batch   22/44   train_loss = 2.149
Epoch  17 Batch   27/44   train_loss = 2.197
Epoch  17 Batch   32/44   train_loss = 2.169
Epoch  17 Batch   37/44   train_loss = 2.138
Epoch  17 Batch   42/44   train_loss = 1.955
Epoch  18 Batch    3/44   train_loss = 2.027
Epoch  18 Batch    8/44   train_loss = 1.994
Epoch  18 Batch   13/44   train_loss = 1.934
Epoch  18 Batch   18/44   train_loss = 2.032
Epoch  18 Batch   23/44   train_loss = 1.906
Epoch  18 Batch   28/44   train_loss = 1.863
Epoch  18 Batch   33/44   train_loss = 1.967
Epoch  18 Batch   38/44   train_loss = 1.931
Epoch  18 Batch   43/44   train_loss = 1.872
Epoch  19 Batch    4/44   train_loss = 1.852
Epoch  19 Batch    9/44   train_loss = 1.847
Epoch  19 Batch   14/44   train_loss = 1.731
Epoch  19 Batch   19/44   train_loss = 1.769
Epoch  19 Batch   24/44   train_loss = 1.755
Epoch  19 Batch   29/44   train_loss = 1.725
Epoch  19 Batch   34/44   train_loss = 1.752
Epoch  19 Batch   39/44   train_loss = 1.654
Epoch  20 Batch    0/44   train_loss = 1.561
Epoch  20 Batch    5/44   train_loss = 1.613
Epoch  20 Batch   10/44   train_loss = 1.566
Epoch  20 Batch   15/44   train_loss = 1.616
Epoch  20 Batch   20/44   train_loss = 1.646
Epoch  20 Batch   25/44   train_loss = 1.595
Epoch  20 Batch   30/44   train_loss = 1.566
Epoch  20 Batch   35/44   train_loss = 1.624
Epoch  20 Batch   40/44   train_loss = 1.589
Epoch  21 Batch    1/44   train_loss = 1.530
Epoch  21 Batch    6/44   train_loss = 1.458
Epoch  21 Batch   11/44   train_loss = 1.471
Epoch  21 Batch   16/44   train_loss = 1.444
Epoch  21 Batch   21/44   train_loss = 1.387
Epoch  21 Batch   26/44   train_loss = 1.398
Epoch  21 Batch   31/44   train_loss = 1.403
Epoch  21 Batch   36/44   train_loss = 1.552
Epoch  21 Batch   41/44   train_loss = 1.351
Epoch  22 Batch    2/44   train_loss = 1.403
Epoch  22 Batch    7/44   train_loss = 1.312
Epoch  22 Batch   12/44   train_loss = 1.267
Epoch  22 Batch   17/44   train_loss = 1.311
Epoch  22 Batch   22/44   train_loss = 1.264
Epoch  22 Batch   27/44   train_loss = 1.317
Epoch  22 Batch   32/44   train_loss = 1.259
Epoch  22 Batch   37/44   train_loss = 1.273
Epoch  22 Batch   42/44   train_loss = 1.231
Epoch  23 Batch    3/44   train_loss = 1.262
Epoch  23 Batch    8/44   train_loss = 1.182
Epoch  23 Batch   13/44   train_loss = 1.134
Epoch  23 Batch   18/44   train_loss = 1.138
Epoch  23 Batch   23/44   train_loss = 1.090
Epoch  23 Batch   28/44   train_loss = 1.082
Epoch  23 Batch   33/44   train_loss = 1.132
Epoch  23 Batch   38/44   train_loss = 1.130
Epoch  23 Batch   43/44   train_loss = 1.072
Epoch  24 Batch    4/44   train_loss = 1.132
Epoch  24 Batch    9/44   train_loss = 1.091
Epoch  24 Batch   14/44   train_loss = 0.986
Epoch  24 Batch   19/44   train_loss = 0.957
Epoch  24 Batch   24/44   train_loss = 0.974
Epoch  24 Batch   29/44   train_loss = 0.896
Epoch  24 Batch   34/44   train_loss = 1.004
Epoch  24 Batch   39/44   train_loss = 0.955
Epoch  25 Batch    0/44   train_loss = 0.938
Epoch  25 Batch    5/44   train_loss = 0.970
Epoch  25 Batch   10/44   train_loss = 0.940
Epoch  25 Batch   15/44   train_loss = 0.934
Epoch  25 Batch   20/44   train_loss = 0.901
Epoch  25 Batch   25/44   train_loss = 0.899
Epoch  25 Batch   30/44   train_loss = 0.833
Epoch  25 Batch   35/44   train_loss = 0.882
Epoch  25 Batch   40/44   train_loss = 0.842
Epoch  26 Batch    1/44   train_loss = 0.833
Epoch  26 Batch    6/44   train_loss = 0.879
Epoch  26 Batch   11/44   train_loss = 0.928
Epoch  26 Batch   16/44   train_loss = 0.818
Epoch  26 Batch   21/44   train_loss = 0.774
Epoch  26 Batch   26/44   train_loss = 0.783
Epoch  26 Batch   31/44   train_loss = 0.817
Epoch  26 Batch   36/44   train_loss = 0.866
Epoch  26 Batch   41/44   train_loss = 0.769
Epoch  27 Batch    2/44   train_loss = 0.802
Epoch  27 Batch    7/44   train_loss = 0.732
Epoch  27 Batch   12/44   train_loss = 0.759
Epoch  27 Batch   17/44   train_loss = 0.785
Epoch  27 Batch   22/44   train_loss = 0.750
Epoch  27 Batch   27/44   train_loss = 0.760
Epoch  27 Batch   32/44   train_loss = 0.744
Epoch  27 Batch   37/44   train_loss = 0.716
Epoch  27 Batch   42/44   train_loss = 0.708
Epoch  28 Batch    3/44   train_loss = 0.742
Epoch  28 Batch    8/44   train_loss = 0.724
Epoch  28 Batch   13/44   train_loss = 0.684
Epoch  28 Batch   18/44   train_loss = 0.702
Epoch  28 Batch   23/44   train_loss = 0.665
Epoch  28 Batch   28/44   train_loss = 0.656
Epoch  28 Batch   33/44   train_loss = 0.716
Epoch  28 Batch   38/44   train_loss = 0.681
Epoch  28 Batch   43/44   train_loss = 0.650
Epoch  29 Batch    4/44   train_loss = 0.678
Epoch  29 Batch    9/44   train_loss = 0.719
Epoch  29 Batch   14/44   train_loss = 0.653
Epoch  29 Batch   19/44   train_loss = 0.640
Epoch  29 Batch   24/44   train_loss = 0.609
Epoch  29 Batch   29/44   train_loss = 0.624
Epoch  29 Batch   34/44   train_loss = 0.704
Epoch  29 Batch   39/44   train_loss = 0.586
Epoch  30 Batch    0/44   train_loss = 0.595
Epoch  30 Batch    5/44   train_loss = 0.635
Epoch  30 Batch   10/44   train_loss = 0.669
Epoch  30 Batch   15/44   train_loss = 0.641
Epoch  30 Batch   20/44   train_loss = 0.599
Epoch  30 Batch   25/44   train_loss = 0.627
Epoch  30 Batch   30/44   train_loss = 0.577
Epoch  30 Batch   35/44   train_loss = 0.602
Epoch  30 Batch   40/44   train_loss = 0.594
Epoch  31 Batch    1/44   train_loss = 0.579
Epoch  31 Batch    6/44   train_loss = 0.599
Epoch  31 Batch   11/44   train_loss = 0.624
Epoch  31 Batch   16/44   train_loss = 0.600
Epoch  31 Batch   21/44   train_loss = 0.536
Epoch  31 Batch   26/44   train_loss = 0.552
Epoch  31 Batch   31/44   train_loss = 0.591
Epoch  31 Batch   36/44   train_loss = 0.623
Epoch  31 Batch   41/44   train_loss = 0.580
Epoch  32 Batch    2/44   train_loss = 0.611
Epoch  32 Batch    7/44   train_loss = 0.543
Epoch  32 Batch   12/44   train_loss = 0.546
Epoch  32 Batch   17/44   train_loss = 0.625
Epoch  32 Batch   22/44   train_loss = 0.569
Epoch  32 Batch   27/44   train_loss = 0.601
Epoch  32 Batch   32/44   train_loss = 0.558
Epoch  32 Batch   37/44   train_loss = 0.540
Epoch  32 Batch   42/44   train_loss = 0.543
Epoch  33 Batch    3/44   train_loss = 0.619
Epoch  33 Batch    8/44   train_loss = 0.581
Epoch  33 Batch   13/44   train_loss = 0.555
Epoch  33 Batch   18/44   train_loss = 0.587
Epoch  33 Batch   23/44   train_loss = 0.589
Epoch  33 Batch   28/44   train_loss = 0.530
Epoch  33 Batch   33/44   train_loss = 0.566
Epoch  33 Batch   38/44   train_loss = 0.542
Epoch  33 Batch   43/44   train_loss = 0.535
Epoch  34 Batch    4/44   train_loss = 0.562
Epoch  34 Batch    9/44   train_loss = 0.601
Epoch  34 Batch   14/44   train_loss = 0.540
Epoch  34 Batch   19/44   train_loss = 0.522
Epoch  34 Batch   24/44   train_loss = 0.540
Epoch  34 Batch   29/44   train_loss = 0.520
Epoch  34 Batch   34/44   train_loss = 0.535
Epoch  34 Batch   39/44   train_loss = 0.523
Epoch  35 Batch    0/44   train_loss = 0.520
Epoch  35 Batch    5/44   train_loss = 0.527
Epoch  35 Batch   10/44   train_loss = 0.515
Epoch  35 Batch   15/44   train_loss = 0.537
Epoch  35 Batch   20/44   train_loss = 0.508
Epoch  35 Batch   25/44   train_loss = 0.535
Epoch  35 Batch   30/44   train_loss = 0.510
Epoch  35 Batch   35/44   train_loss = 0.477
Epoch  35 Batch   40/44   train_loss = 0.459
Epoch  36 Batch    1/44   train_loss = 0.507
Epoch  36 Batch    6/44   train_loss = 0.514
Epoch  36 Batch   11/44   train_loss = 0.534
Epoch  36 Batch   16/44   train_loss = 0.471
Epoch  36 Batch   21/44   train_loss = 0.456
Epoch  36 Batch   26/44   train_loss = 0.499
Epoch  36 Batch   31/44   train_loss = 0.538
Epoch  36 Batch   36/44   train_loss = 0.518
Epoch  36 Batch   41/44   train_loss = 0.461
Epoch  37 Batch    2/44   train_loss = 0.505
Epoch  37 Batch    7/44   train_loss = 0.494
Epoch  37 Batch   12/44   train_loss = 0.496
Epoch  37 Batch   17/44   train_loss = 0.502
Epoch  37 Batch   22/44   train_loss = 0.473
Epoch  37 Batch   27/44   train_loss = 0.502
Epoch  37 Batch   32/44   train_loss = 0.489
Epoch  37 Batch   37/44   train_loss = 0.472
Epoch  37 Batch   42/44   train_loss = 0.445
Epoch  38 Batch    3/44   train_loss = 0.481
Epoch  38 Batch    8/44   train_loss = 0.476
Epoch  38 Batch   13/44   train_loss = 0.488
Epoch  38 Batch   18/44   train_loss = 0.457
Epoch  38 Batch   23/44   train_loss = 0.444
Epoch  38 Batch   28/44   train_loss = 0.422
Epoch  38 Batch   33/44   train_loss = 0.497
Epoch  38 Batch   38/44   train_loss = 0.467
Epoch  38 Batch   43/44   train_loss = 0.420
Epoch  39 Batch    4/44   train_loss = 0.435
Epoch  39 Batch    9/44   train_loss = 0.481
Epoch  39 Batch   14/44   train_loss = 0.441
Epoch  39 Batch   19/44   train_loss = 0.437
Epoch  39 Batch   24/44   train_loss = 0.391
Epoch  39 Batch   29/44   train_loss = 0.407
Epoch  39 Batch   34/44   train_loss = 0.458
Epoch  39 Batch   39/44   train_loss = 0.433
Epoch  40 Batch    0/44   train_loss = 0.392
Epoch  40 Batch    5/44   train_loss = 0.424
Epoch  40 Batch   10/44   train_loss = 0.418
Epoch  40 Batch   15/44   train_loss = 0.438
Epoch  40 Batch   20/44   train_loss = 0.417
Epoch  40 Batch   25/44   train_loss = 0.439
Epoch  40 Batch   30/44   train_loss = 0.402
Epoch  40 Batch   35/44   train_loss = 0.377
Epoch  40 Batch   40/44   train_loss = 0.371
Epoch  41 Batch    1/44   train_loss = 0.387
Epoch  41 Batch    6/44   train_loss = 0.399
Epoch  41 Batch   11/44   train_loss = 0.420
Epoch  41 Batch   16/44   train_loss = 0.369
Epoch  41 Batch   21/44   train_loss = 0.360
Epoch  41 Batch   26/44   train_loss = 0.399
Epoch  41 Batch   31/44   train_loss = 0.414
Epoch  41 Batch   36/44   train_loss = 0.411
Epoch  41 Batch   41/44   train_loss = 0.363
Epoch  42 Batch    2/44   train_loss = 0.380
Epoch  42 Batch    7/44   train_loss = 0.378
Epoch  42 Batch   12/44   train_loss = 0.355
Epoch  42 Batch   17/44   train_loss = 0.393
Epoch  42 Batch   22/44   train_loss = 0.383
Epoch  42 Batch   27/44   train_loss = 0.386
Epoch  42 Batch   32/44   train_loss = 0.343
Epoch  42 Batch   37/44   train_loss = 0.349
Epoch  42 Batch   42/44   train_loss = 0.328
Epoch  43 Batch    3/44   train_loss = 0.391
Epoch  43 Batch    8/44   train_loss = 0.368
Epoch  43 Batch   13/44   train_loss = 0.375
Epoch  43 Batch   18/44   train_loss = 0.363
Epoch  43 Batch   23/44   train_loss = 0.358
Epoch  43 Batch   28/44   train_loss = 0.339
Epoch  43 Batch   33/44   train_loss = 0.394
Epoch  43 Batch   38/44   train_loss = 0.363
Epoch  43 Batch   43/44   train_loss = 0.344
Epoch  44 Batch    4/44   train_loss = 0.354
Epoch  44 Batch    9/44   train_loss = 0.392
Epoch  44 Batch   14/44   train_loss = 0.359
Epoch  44 Batch   19/44   train_loss = 0.361
Epoch  44 Batch   24/44   train_loss = 0.325
Epoch  44 Batch   29/44   train_loss = 0.342
Epoch  44 Batch   34/44   train_loss = 0.366
Epoch  44 Batch   39/44   train_loss = 0.343
Epoch  45 Batch    0/44   train_loss = 0.327
Epoch  45 Batch    5/44   train_loss = 0.358
Epoch  45 Batch   10/44   train_loss = 0.356
Epoch  45 Batch   15/44   train_loss = 0.370
Epoch  45 Batch   20/44   train_loss = 0.342
Epoch  45 Batch   25/44   train_loss = 0.365
Epoch  45 Batch   30/44   train_loss = 0.347
Epoch  45 Batch   35/44   train_loss = 0.328
Epoch  45 Batch   40/44   train_loss = 0.309
Epoch  46 Batch    1/44   train_loss = 0.336
Epoch  46 Batch    6/44   train_loss = 0.342
Epoch  46 Batch   11/44   train_loss = 0.378
Epoch  46 Batch   16/44   train_loss = 0.336
Epoch  46 Batch   21/44   train_loss = 0.313
Epoch  46 Batch   26/44   train_loss = 0.353
Epoch  46 Batch   31/44   train_loss = 0.358
Epoch  46 Batch   36/44   train_loss = 0.382
Epoch  46 Batch   41/44   train_loss = 0.331
Epoch  47 Batch    2/44   train_loss = 0.344
Epoch  47 Batch    7/44   train_loss = 0.347
Epoch  47 Batch   12/44   train_loss = 0.326
Epoch  47 Batch   17/44   train_loss = 0.368
Epoch  47 Batch   22/44   train_loss = 0.352
Epoch  47 Batch   27/44   train_loss = 0.361
Epoch  47 Batch   32/44   train_loss = 0.320
Epoch  47 Batch   37/44   train_loss = 0.329
Epoch  47 Batch   42/44   train_loss = 0.320
Epoch  48 Batch    3/44   train_loss = 0.375
Epoch  48 Batch    8/44   train_loss = 0.357
Epoch  48 Batch   13/44   train_loss = 0.355
Epoch  48 Batch   18/44   train_loss = 0.337
Epoch  48 Batch   23/44   train_loss = 0.340
Epoch  48 Batch   28/44   train_loss = 0.320
Epoch  48 Batch   33/44   train_loss = 0.374
Epoch  48 Batch   38/44   train_loss = 0.350
Epoch  48 Batch   43/44   train_loss = 0.325
Epoch  49 Batch    4/44   train_loss = 0.334
Epoch  49 Batch    9/44   train_loss = 0.377
Epoch  49 Batch   14/44   train_loss = 0.344
Epoch  49 Batch   19/44   train_loss = 0.342
Epoch  49 Batch   24/44   train_loss = 0.318
Epoch  49 Batch   29/44   train_loss = 0.331
Epoch  49 Batch   34/44   train_loss = 0.363
Epoch  49 Batch   39/44   train_loss = 0.327
Epoch  50 Batch    0/44   train_loss = 0.316
Epoch  50 Batch    5/44   train_loss = 0.343
Epoch  50 Batch   10/44   train_loss = 0.348
Epoch  50 Batch   15/44   train_loss = 0.359
Epoch  50 Batch   20/44   train_loss = 0.336
Epoch  50 Batch   25/44   train_loss = 0.362
Epoch  50 Batch   30/44   train_loss = 0.346
Epoch  50 Batch   35/44   train_loss = 0.318
Epoch  50 Batch   40/44   train_loss = 0.304
Epoch  51 Batch    1/44   train_loss = 0.328
Epoch  51 Batch    6/44   train_loss = 0.341
Epoch  51 Batch   11/44   train_loss = 0.368
Epoch  51 Batch   16/44   train_loss = 0.327
Epoch  51 Batch   21/44   train_loss = 0.310
Epoch  51 Batch   26/44   train_loss = 0.349
Epoch  51 Batch   31/44   train_loss = 0.352
Epoch  51 Batch   36/44   train_loss = 0.368
Epoch  51 Batch   41/44   train_loss = 0.324
Epoch  52 Batch    2/44   train_loss = 0.341
Epoch  52 Batch    7/44   train_loss = 0.339
Epoch  52 Batch   12/44   train_loss = 0.326
Epoch  52 Batch   17/44   train_loss = 0.358
Epoch  52 Batch   22/44   train_loss = 0.346
Epoch  52 Batch   27/44   train_loss = 0.352
Epoch  52 Batch   32/44   train_loss = 0.312
Epoch  52 Batch   37/44   train_loss = 0.324
Epoch  52 Batch   42/44   train_loss = 0.303
Epoch  53 Batch    3/44   train_loss = 0.365
Epoch  53 Batch    8/44   train_loss = 0.345
Epoch  53 Batch   13/44   train_loss = 0.349
Epoch  53 Batch   18/44   train_loss = 0.340
Epoch  53 Batch   23/44   train_loss = 0.334
Epoch  53 Batch   28/44   train_loss = 0.317
Epoch  53 Batch   33/44   train_loss = 0.370
Epoch  53 Batch   38/44   train_loss = 0.342
Epoch  53 Batch   43/44   train_loss = 0.321
Epoch  54 Batch    4/44   train_loss = 0.334
Epoch  54 Batch    9/44   train_loss = 0.369
Epoch  54 Batch   14/44   train_loss = 0.337
Epoch  54 Batch   19/44   train_loss = 0.345
Epoch  54 Batch   24/44   train_loss = 0.308
Epoch  54 Batch   29/44   train_loss = 0.329
Epoch  54 Batch   34/44   train_loss = 0.346
Epoch  54 Batch   39/44   train_loss = 0.329
Epoch  55 Batch    0/44   train_loss = 0.314
Epoch  55 Batch    5/44   train_loss = 0.340
Epoch  55 Batch   10/44   train_loss = 0.341
Epoch  55 Batch   15/44   train_loss = 0.355
Epoch  55 Batch   20/44   train_loss = 0.329
Epoch  55 Batch   25/44   train_loss = 0.351
Epoch  55 Batch   30/44   train_loss = 0.333
Epoch  55 Batch   35/44   train_loss = 0.313
Epoch  55 Batch   40/44   train_loss = 0.297
Epoch  56 Batch    1/44   train_loss = 0.323
Epoch  56 Batch    6/44   train_loss = 0.329
Epoch  56 Batch   11/44   train_loss = 0.365
Epoch  56 Batch   16/44   train_loss = 0.322
Epoch  56 Batch   21/44   train_loss = 0.299
Epoch  56 Batch   26/44   train_loss = 0.343
Epoch  56 Batch   31/44   train_loss = 0.343
Epoch  56 Batch   36/44   train_loss = 0.371
Epoch  56 Batch   41/44   train_loss = 0.318
Epoch  57 Batch    2/44   train_loss = 0.332
Epoch  57 Batch    7/44   train_loss = 0.336
Epoch  57 Batch   12/44   train_loss = 0.318
Epoch  57 Batch   17/44   train_loss = 0.357
Epoch  57 Batch   22/44   train_loss = 0.340
Epoch  57 Batch   27/44   train_loss = 0.350
Epoch  57 Batch   32/44   train_loss = 0.307
Epoch  57 Batch   37/44   train_loss = 0.318
Epoch  57 Batch   42/44   train_loss = 0.309
Epoch  58 Batch    3/44   train_loss = 0.363
Epoch  58 Batch    8/44   train_loss = 0.344
Epoch  58 Batch   13/44   train_loss = 0.342
Epoch  58 Batch   18/44   train_loss = 0.328
Epoch  58 Batch   23/44   train_loss = 0.327
Epoch  58 Batch   28/44   train_loss = 0.312
Epoch  58 Batch   33/44   train_loss = 0.363
Epoch  58 Batch   38/44   train_loss = 0.339
Epoch  58 Batch   43/44   train_loss = 0.315
Epoch  59 Batch    4/44   train_loss = 0.324
Epoch  59 Batch    9/44   train_loss = 0.367
Epoch  59 Batch   14/44   train_loss = 0.333
Epoch  59 Batch   19/44   train_loss = 0.334
Epoch  59 Batch   24/44   train_loss = 0.309
Epoch  59 Batch   29/44   train_loss = 0.322
Epoch  59 Batch   34/44   train_loss = 0.352
Epoch  59 Batch   39/44   train_loss = 0.320
Epoch  60 Batch    0/44   train_loss = 0.309
Epoch  60 Batch    5/44   train_loss = 0.335
Epoch  60 Batch   10/44   train_loss = 0.340
Epoch  60 Batch   15/44   train_loss = 0.350
Epoch  60 Batch   20/44   train_loss = 0.327
Epoch  60 Batch   25/44   train_loss = 0.353
Epoch  60 Batch   30/44   train_loss = 0.337
Epoch  60 Batch   35/44   train_loss = 0.309
Epoch  60 Batch   40/44   train_loss = 0.297
Epoch  61 Batch    1/44   train_loss = 0.320
Epoch  61 Batch    6/44   train_loss = 0.332
Epoch  61 Batch   11/44   train_loss = 0.358
Epoch  61 Batch   16/44   train_loss = 0.321
Epoch  61 Batch   21/44   train_loss = 0.300
Epoch  61 Batch   26/44   train_loss = 0.343
Epoch  61 Batch   31/44   train_loss = 0.343
Epoch  61 Batch   36/44   train_loss = 0.363
Epoch  61 Batch   41/44   train_loss = 0.317
Epoch  62 Batch    2/44   train_loss = 0.333
Epoch  62 Batch    7/44   train_loss = 0.332
Epoch  62 Batch   12/44   train_loss = 0.320
Epoch  62 Batch   17/44   train_loss = 0.350
Epoch  62 Batch   22/44   train_loss = 0.337
Epoch  62 Batch   27/44   train_loss = 0.346
Epoch  62 Batch   32/44   train_loss = 0.305
Epoch  62 Batch   37/44   train_loss = 0.316
Epoch  62 Batch   42/44   train_loss = 0.295
Epoch  63 Batch    3/44   train_loss = 0.359
Epoch  63 Batch    8/44   train_loss = 0.338
Epoch  63 Batch   13/44   train_loss = 0.340
Epoch  63 Batch   18/44   train_loss = 0.333
Epoch  63 Batch   23/44   train_loss = 0.327
Epoch  63 Batch   28/44   train_loss = 0.312
Epoch  63 Batch   33/44   train_loss = 0.361
Epoch  63 Batch   38/44   train_loss = 0.334
Epoch  63 Batch   43/44   train_loss = 0.312
Epoch  64 Batch    4/44   train_loss = 0.325
Epoch  64 Batch    9/44   train_loss = 0.362
Epoch  64 Batch   14/44   train_loss = 0.330
Epoch  64 Batch   19/44   train_loss = 0.337
Epoch  64 Batch   24/44   train_loss = 0.300
Epoch  64 Batch   29/44   train_loss = 0.323
Epoch  64 Batch   34/44   train_loss = 0.338
Epoch  64 Batch   39/44   train_loss = 0.323
Epoch  65 Batch    0/44   train_loss = 0.309
Epoch  65 Batch    5/44   train_loss = 0.331
Epoch  65 Batch   10/44   train_loss = 0.333
Epoch  65 Batch   15/44   train_loss = 0.351
Epoch  65 Batch   20/44   train_loss = 0.323
Epoch  65 Batch   25/44   train_loss = 0.345
Epoch  65 Batch   30/44   train_loss = 0.327
Epoch  65 Batch   35/44   train_loss = 0.306
Epoch  65 Batch   40/44   train_loss = 0.291
Epoch  66 Batch    1/44   train_loss = 0.318
Epoch  66 Batch    6/44   train_loss = 0.324
Epoch  66 Batch   11/44   train_loss = 0.359
Epoch  66 Batch   16/44   train_loss = 0.317
Epoch  66 Batch   21/44   train_loss = 0.292
Epoch  66 Batch   26/44   train_loss = 0.339
Epoch  66 Batch   31/44   train_loss = 0.336
Epoch  66 Batch   36/44   train_loss = 0.365
Epoch  66 Batch   41/44   train_loss = 0.313
Epoch  67 Batch    2/44   train_loss = 0.327
Epoch  67 Batch    7/44   train_loss = 0.331
Epoch  67 Batch   12/44   train_loss = 0.314
Epoch  67 Batch   17/44   train_loss = 0.351
Epoch  67 Batch   22/44   train_loss = 0.334
Epoch  67 Batch   27/44   train_loss = 0.343
Epoch  67 Batch   32/44   train_loss = 0.302
Epoch  67 Batch   37/44   train_loss = 0.313
Epoch  67 Batch   42/44   train_loss = 0.302
Epoch  68 Batch    3/44   train_loss = 0.358
Epoch  68 Batch    8/44   train_loss = 0.338
Epoch  68 Batch   13/44   train_loss = 0.336
Epoch  68 Batch   18/44   train_loss = 0.323
Epoch  68 Batch   23/44   train_loss = 0.322
Epoch  68 Batch   28/44   train_loss = 0.308
Epoch  68 Batch   33/44   train_loss = 0.358
Epoch  68 Batch   38/44   train_loss = 0.333
Epoch  68 Batch   43/44   train_loss = 0.309
Epoch  69 Batch    4/44   train_loss = 0.319
Epoch  69 Batch    9/44   train_loss = 0.361
Epoch  69 Batch   14/44   train_loss = 0.328
Epoch  69 Batch   19/44   train_loss = 0.329
Epoch  69 Batch   24/44   train_loss = 0.302
Epoch  69 Batch   29/44   train_loss = 0.316
Epoch  69 Batch   34/44   train_loss = 0.345
Epoch  69 Batch   39/44   train_loss = 0.316
Epoch  70 Batch    0/44   train_loss = 0.306
Epoch  70 Batch    5/44   train_loss = 0.332
Epoch  70 Batch   10/44   train_loss = 0.335
Epoch  70 Batch   15/44   train_loss = 0.346
Epoch  70 Batch   20/44   train_loss = 0.321
Epoch  70 Batch   25/44   train_loss = 0.347
Epoch  70 Batch   30/44   train_loss = 0.333
Epoch  70 Batch   35/44   train_loss = 0.304
Epoch  70 Batch   40/44   train_loss = 0.292
Epoch  71 Batch    1/44   train_loss = 0.316
Epoch  71 Batch    6/44   train_loss = 0.328
Epoch  71 Batch   11/44   train_loss = 0.353
Epoch  71 Batch   16/44   train_loss = 0.317
Epoch  71 Batch   21/44   train_loss = 0.294
Epoch  71 Batch   26/44   train_loss = 0.339
Epoch  71 Batch   31/44   train_loss = 0.338
Epoch  71 Batch   36/44   train_loss = 0.359
Epoch  71 Batch   41/44   train_loss = 0.312
Epoch  72 Batch    2/44   train_loss = 0.328
Epoch  72 Batch    7/44   train_loss = 0.327
Epoch  72 Batch   12/44   train_loss = 0.317
Epoch  72 Batch   17/44   train_loss = 0.346
Epoch  72 Batch   22/44   train_loss = 0.332
Epoch  72 Batch   27/44   train_loss = 0.341
Epoch  72 Batch   32/44   train_loss = 0.302
Epoch  72 Batch   37/44   train_loss = 0.312
Epoch  72 Batch   42/44   train_loss = 0.291
Epoch  73 Batch    3/44   train_loss = 0.353
Epoch  73 Batch    8/44   train_loss = 0.333
Epoch  73 Batch   13/44   train_loss = 0.334
Epoch  73 Batch   18/44   train_loss = 0.329
Epoch  73 Batch   23/44   train_loss = 0.323
Epoch  73 Batch   28/44   train_loss = 0.309
Epoch  73 Batch   33/44   train_loss = 0.357
Epoch  73 Batch   38/44   train_loss = 0.330
Epoch  73 Batch   43/44   train_loss = 0.307
Epoch  74 Batch    4/44   train_loss = 0.321
Epoch  74 Batch    9/44   train_loss = 0.358
Epoch  74 Batch   14/44   train_loss = 0.327
Epoch  74 Batch   19/44   train_loss = 0.333
Epoch  74 Batch   24/44   train_loss = 0.296
Epoch  74 Batch   29/44   train_loss = 0.319
Epoch  74 Batch   34/44   train_loss = 0.334
Epoch  74 Batch   39/44   train_loss = 0.319
Epoch  75 Batch    0/44   train_loss = 0.306
Epoch  75 Batch    5/44   train_loss = 0.330
Epoch  75 Batch   10/44   train_loss = 0.330
Epoch  75 Batch   15/44   train_loss = 0.347
Epoch  75 Batch   20/44   train_loss = 0.318
Epoch  75 Batch   25/44   train_loss = 0.341
Epoch  75 Batch   30/44   train_loss = 0.324
Epoch  75 Batch   35/44   train_loss = 0.301
Epoch  75 Batch   40/44   train_loss = 0.288
Epoch  76 Batch    1/44   train_loss = 0.314
Epoch  76 Batch    6/44   train_loss = 0.320
Epoch  76 Batch   11/44   train_loss = 0.355
Epoch  76 Batch   16/44   train_loss = 0.313
Epoch  76 Batch   21/44   train_loss = 0.288
Epoch  76 Batch   26/44   train_loss = 0.335
Epoch  76 Batch   31/44   train_loss = 0.333
Epoch  76 Batch   36/44   train_loss = 0.361
Epoch  76 Batch   41/44   train_loss = 0.310
Epoch  77 Batch    2/44   train_loss = 0.324
Epoch  77 Batch    7/44   train_loss = 0.329
Epoch  77 Batch   12/44   train_loss = 0.312
Epoch  77 Batch   17/44   train_loss = 0.346
Epoch  77 Batch   22/44   train_loss = 0.331
Epoch  77 Batch   27/44   train_loss = 0.340
Epoch  77 Batch   32/44   train_loss = 0.299
Epoch  77 Batch   37/44   train_loss = 0.309
Epoch  77 Batch   42/44   train_loss = 0.297
Epoch  78 Batch    3/44   train_loss = 0.355
Epoch  78 Batch    8/44   train_loss = 0.334
Epoch  78 Batch   13/44   train_loss = 0.331
Epoch  78 Batch   18/44   train_loss = 0.320
Epoch  78 Batch   23/44   train_loss = 0.318
Epoch  78 Batch   28/44   train_loss = 0.306
Epoch  78 Batch   33/44   train_loss = 0.354
Epoch  78 Batch   38/44   train_loss = 0.329
Epoch  78 Batch   43/44   train_loss = 0.304
Epoch  79 Batch    4/44   train_loss = 0.315
Epoch  79 Batch    9/44   train_loss = 0.356
Epoch  79 Batch   14/44   train_loss = 0.325
Epoch  79 Batch   19/44   train_loss = 0.326
Epoch  79 Batch   24/44   train_loss = 0.297
Epoch  79 Batch   29/44   train_loss = 0.313
Epoch  79 Batch   34/44   train_loss = 0.341
Epoch  79 Batch   39/44   train_loss = 0.314
Epoch  80 Batch    0/44   train_loss = 0.304
Epoch  80 Batch    5/44   train_loss = 0.329
Epoch  80 Batch   10/44   train_loss = 0.332
Epoch  80 Batch   15/44   train_loss = 0.344
Epoch  80 Batch   20/44   train_loss = 0.318
Epoch  80 Batch   25/44   train_loss = 0.344
Epoch  80 Batch   30/44   train_loss = 0.330
Epoch  80 Batch   35/44   train_loss = 0.303
Epoch  80 Batch   40/44   train_loss = 0.290
Epoch  81 Batch    1/44   train_loss = 0.316
Epoch  81 Batch    6/44   train_loss = 0.336
Epoch  81 Batch   11/44   train_loss = 0.358
Epoch  81 Batch   16/44   train_loss = 0.317
Epoch  81 Batch   21/44   train_loss = 0.295
Epoch  81 Batch   26/44   train_loss = 0.337
Epoch  81 Batch   31/44   train_loss = 0.340
Epoch  81 Batch   36/44   train_loss = 0.363
Epoch  81 Batch   41/44   train_loss = 0.317
Epoch  82 Batch    2/44   train_loss = 0.333
Epoch  82 Batch    7/44   train_loss = 0.335
Epoch  82 Batch   12/44   train_loss = 0.334
Epoch  82 Batch   17/44   train_loss = 0.364
Epoch  82 Batch   22/44   train_loss = 0.364
Epoch  82 Batch   27/44   train_loss = 0.382
Epoch  82 Batch   32/44   train_loss = 0.408
Epoch  82 Batch   37/44   train_loss = 0.503
Epoch  82 Batch   42/44   train_loss = 0.697
Epoch  83 Batch    3/44   train_loss = 0.995
Epoch  83 Batch    8/44   train_loss = 1.334
Epoch  83 Batch   13/44   train_loss = 1.657
Epoch  83 Batch   18/44   train_loss = 2.148
Epoch  83 Batch   23/44   train_loss = 2.443
Epoch  83 Batch   28/44   train_loss = 2.681
Epoch  83 Batch   33/44   train_loss = 2.921
Epoch  83 Batch   38/44   train_loss = 3.036
Epoch  83 Batch   43/44   train_loss = 3.006
Epoch  84 Batch    4/44   train_loss = 3.057
Epoch  84 Batch    9/44   train_loss = 3.019
Epoch  84 Batch   14/44   train_loss = 2.895
Epoch  84 Batch   19/44   train_loss = 2.797
Epoch  84 Batch   24/44   train_loss = 2.842
Epoch  84 Batch   29/44   train_loss = 2.659
Epoch  84 Batch   34/44   train_loss = 2.710
Epoch  84 Batch   39/44   train_loss = 2.594
Epoch  85 Batch    0/44   train_loss = 2.513
Epoch  85 Batch    5/44   train_loss = 2.314
Epoch  85 Batch   10/44   train_loss = 2.278
Epoch  85 Batch   15/44   train_loss = 2.288
Epoch  85 Batch   20/44   train_loss = 2.201
Epoch  85 Batch   25/44   train_loss = 2.129
Epoch  85 Batch   30/44   train_loss = 2.063
Epoch  85 Batch   35/44   train_loss = 2.162
Epoch  85 Batch   40/44   train_loss = 2.111
Epoch  86 Batch    1/44   train_loss = 1.875
Epoch  86 Batch    6/44   train_loss = 1.753
Epoch  86 Batch   11/44   train_loss = 1.827
Epoch  86 Batch   16/44   train_loss = 1.758
Epoch  86 Batch   21/44   train_loss = 1.708
Epoch  86 Batch   26/44   train_loss = 1.686
Epoch  86 Batch   31/44   train_loss = 1.688
Epoch  86 Batch   36/44   train_loss = 1.712
Epoch  86 Batch   41/44   train_loss = 1.503
Epoch  87 Batch    2/44   train_loss = 1.514
Epoch  87 Batch    7/44   train_loss = 1.415
Epoch  87 Batch   12/44   train_loss = 1.513
Epoch  87 Batch   17/44   train_loss = 1.430
Epoch  87 Batch   22/44   train_loss = 1.432
Epoch  87 Batch   27/44   train_loss = 1.393
Epoch  87 Batch   32/44   train_loss = 1.418
Epoch  87 Batch   37/44   train_loss = 1.344
Epoch  87 Batch   42/44   train_loss = 1.305
Epoch  88 Batch    3/44   train_loss = 1.296
Epoch  88 Batch    8/44   train_loss = 1.197
Epoch  88 Batch   13/44   train_loss = 1.165
Epoch  88 Batch   18/44   train_loss = 1.272
Epoch  88 Batch   23/44   train_loss = 1.127
Epoch  88 Batch   28/44   train_loss = 1.169
Epoch  88 Batch   33/44   train_loss = 1.188
Epoch  88 Batch   38/44   train_loss = 1.113
Epoch  88 Batch   43/44   train_loss = 1.079
Epoch  89 Batch    4/44   train_loss = 1.078
Epoch  89 Batch    9/44   train_loss = 1.128
Epoch  89 Batch   14/44   train_loss = 1.017
Epoch  89 Batch   19/44   train_loss = 0.990
Epoch  89 Batch   24/44   train_loss = 1.034
Epoch  89 Batch   29/44   train_loss = 0.925
Epoch  89 Batch   34/44   train_loss = 1.019
Epoch  89 Batch   39/44   train_loss = 0.908
Epoch  90 Batch    0/44   train_loss = 0.866
Epoch  90 Batch    5/44   train_loss = 0.891
Epoch  90 Batch   10/44   train_loss = 0.893
Epoch  90 Batch   15/44   train_loss = 0.897
Epoch  90 Batch   20/44   train_loss = 0.862
Epoch  90 Batch   25/44   train_loss = 0.897
Epoch  90 Batch   30/44   train_loss = 0.810
Epoch  90 Batch   35/44   train_loss = 0.796
Epoch  90 Batch   40/44   train_loss = 0.747
Epoch  91 Batch    1/44   train_loss = 0.770
Epoch  91 Batch    6/44   train_loss = 0.762
Epoch  91 Batch   11/44   train_loss = 0.769
Epoch  91 Batch   16/44   train_loss = 0.738
Epoch  91 Batch   21/44   train_loss = 0.678
Epoch  91 Batch   26/44   train_loss = 0.707
Epoch  91 Batch   31/44   train_loss = 0.711
Epoch  91 Batch   36/44   train_loss = 0.734
Epoch  91 Batch   41/44   train_loss = 0.653
Epoch  92 Batch    2/44   train_loss = 0.658
Epoch  92 Batch    7/44   train_loss = 0.633
Epoch  92 Batch   12/44   train_loss = 0.648
Epoch  92 Batch   17/44   train_loss = 0.658
Epoch  92 Batch   22/44   train_loss = 0.633
Epoch  92 Batch   27/44   train_loss = 0.617
Epoch  92 Batch   32/44   train_loss = 0.594
Epoch  92 Batch   37/44   train_loss = 0.589
Epoch  92 Batch   42/44   train_loss = 0.573
Epoch  93 Batch    3/44   train_loss = 0.614
Epoch  93 Batch    8/44   train_loss = 0.600
Epoch  93 Batch   13/44   train_loss = 0.548
Epoch  93 Batch   18/44   train_loss = 0.573
Epoch  93 Batch   23/44   train_loss = 0.539
Epoch  93 Batch   28/44   train_loss = 0.548
Epoch  93 Batch   33/44   train_loss = 0.570
Epoch  93 Batch   38/44   train_loss = 0.526
Epoch  93 Batch   43/44   train_loss = 0.512
Epoch  94 Batch    4/44   train_loss = 0.489
Epoch  94 Batch    9/44   train_loss = 0.542
Epoch  94 Batch   14/44   train_loss = 0.485
Epoch  94 Batch   19/44   train_loss = 0.488
Epoch  94 Batch   24/44   train_loss = 0.446
Epoch  94 Batch   29/44   train_loss = 0.461
Epoch  94 Batch   34/44   train_loss = 0.481
Epoch  94 Batch   39/44   train_loss = 0.443
Epoch  95 Batch    0/44   train_loss = 0.429
Epoch  95 Batch    5/44   train_loss = 0.461
Epoch  95 Batch   10/44   train_loss = 0.454
Epoch  95 Batch   15/44   train_loss = 0.453
Epoch  95 Batch   20/44   train_loss = 0.425
Epoch  95 Batch   25/44   train_loss = 0.439
Epoch  95 Batch   30/44   train_loss = 0.421
Epoch  95 Batch   35/44   train_loss = 0.403
Epoch  95 Batch   40/44   train_loss = 0.393
Epoch  96 Batch    1/44   train_loss = 0.390
Epoch  96 Batch    6/44   train_loss = 0.416
Epoch  96 Batch   11/44   train_loss = 0.426
Epoch  96 Batch   16/44   train_loss = 0.391
Epoch  96 Batch   21/44   train_loss = 0.358
Epoch  96 Batch   26/44   train_loss = 0.397
Epoch  96 Batch   31/44   train_loss = 0.402
Epoch  96 Batch   36/44   train_loss = 0.412
Epoch  96 Batch   41/44   train_loss = 0.367
Epoch  97 Batch    2/44   train_loss = 0.387
Epoch  97 Batch    7/44   train_loss = 0.379
Epoch  97 Batch   12/44   train_loss = 0.365
Epoch  97 Batch   17/44   train_loss = 0.394
Epoch  97 Batch   22/44   train_loss = 0.384
Epoch  97 Batch   27/44   train_loss = 0.391
Epoch  97 Batch   32/44   train_loss = 0.350
Epoch  97 Batch   37/44   train_loss = 0.347
Epoch  97 Batch   42/44   train_loss = 0.331
Epoch  98 Batch    3/44   train_loss = 0.395
Epoch  98 Batch    8/44   train_loss = 0.367
Epoch  98 Batch   13/44   train_loss = 0.365
Epoch  98 Batch   18/44   train_loss = 0.357
Epoch  98 Batch   23/44   train_loss = 0.348
Epoch  98 Batch   28/44   train_loss = 0.340
Epoch  98 Batch   33/44   train_loss = 0.383
Epoch  98 Batch   38/44   train_loss = 0.359
Epoch  98 Batch   43/44   train_loss = 0.341
Epoch  99 Batch    4/44   train_loss = 0.347
Epoch  99 Batch    9/44   train_loss = 0.391
Epoch  99 Batch   14/44   train_loss = 0.352
Epoch  99 Batch   19/44   train_loss = 0.351
Epoch  99 Batch   24/44   train_loss = 0.319
Epoch  99 Batch   29/44   train_loss = 0.340
Epoch  99 Batch   34/44   train_loss = 0.359
Epoch  99 Batch   39/44   train_loss = 0.334
Model Trained and Saved

Save Parameters

Save seq_length and save_dir for generating a new TV script.


In [35]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
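The implementation of `helper.save_params` isn't shown in this notebook. A plausible sketch, assuming it simply pickles the `(seq_length, save_dir)` tuple to a fixed file (the `params.p` path and function bodies below are assumptions, not the actual helper code):

```python
import pickle

def save_params(params, path='params.p'):
    # Persist (seq_length, save_dir) so the generation notebook can reload them
    with open(path, 'wb') as f:
        pickle.dump(params, f)

def load_params(path='params.p'):
    # Return the (seq_length, save_dir) tuple saved above
    with open(path, 'rb') as f:
        return pickle.load(f)
```

Pickling a plain tuple like this is enough here because both values are simple Python objects; anything graph-related is saved separately by the TensorFlow checkpoint.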

Checkpoint


In [36]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()

Implement Generate Functions

Get Tensors

Get tensors from loaded_graph using the graph's get_tensor_by_name() method. Get the tensors using the following names:

  • "input:0"
  • "initial_state:0"
  • "final_state:0"
  • "probs:0"

Return the tensors as the tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor).


In [37]:
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    input_data = loaded_graph.get_tensor_by_name('input:0')
    initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
    final_state = loaded_graph.get_tensor_by_name('final_state:0')
    probabilities = loaded_graph.get_tensor_by_name('probs:0')
    return input_data, initial_state, final_state, probabilities 


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)


Tests Passed

Choose Word

Implement the pick_word() function to select the next word using probabilities.


In [43]:
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Indices of the 3 most likely words, in descending order of probability
    likely_int_words = probabilities.argsort()[::-1][:3]
    # randint's upper bound is exclusive, so use 3 to pick among all three candidates
    pick_int_word = likely_int_words[np.random.randint(0, 3)]
    return int_to_vocab[pick_int_word]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)


Tests Passed
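The pick_word implementation above samples uniformly among the top three candidates. An alternative sketch, not part of the project code, is to sample the full distribution in proportion to the predicted probabilities (pick_word_sampled is a hypothetical name):

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # Draw a word id with probability proportional to the model's prediction
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]
```

Sampling the whole distribution adds variety but can occasionally surface very unlikely words; restricting to the top-k candidates, as done above, trades some variety for coherence.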

Generate TV Script

This will generate the TV script for you. Set gen_length to the length of the TV script you want to generate.


In [42]:
gen_length = 200
# homer_simpson, moe_szyslak, or barney_gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})
        
        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)
    
    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')
        
    print(tv_script)


INFO:tensorflow:Restoring parameters from ./save
moe_szyslak: yeah. now get the little gonna let him!
homer_simpson:(to moe, proudly go in! king""
moe_szyslak: you mean got the love while you wouldn't wouldn't? i could turn. i've been there's the worst, a planet of the same had..
lenny_leonard: to moe, moe? my one day off off that was all i got.
carl_carlson: aw, it's not the last of i cut?
moe_szyslak: yeah, not what you.
seymour_skinner:(joking) yeah, yeah.
marge_simpson:(not surprised.
moe_szyslak: that's on right, i was.(to barflies out from the" new") moe, this" churchy is so i really... a good book was my house,"" i'm in"..
homer_simpson: homer, i can't really know where about that. the guy i drink this someone!
lenny_leonard: just i'd be the guy.
moe_szyslak: hello.
homer_simpson:(chuckles noise, then: noise.) canyonero, i got that on the ball
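The token-replacement loop in the generation cell can be checked on a toy example. The token strings below are assumptions for illustration; the real token_dict is built during preprocessing earlier in the notebook:

```python
# Hypothetical token_dict entries, mapping punctuation to placeholder tokens
token_dict = {'.': '||Period||', '\n': '||Return||'}

# Generated words come out lowercase, so the tokens appear lowercased
script = 'hey ||period|| ||return|| homer'
for key, token in token_dict.items():
    # Replace each " ||token||" occurrence with the original punctuation
    script = script.replace(' ' + token.lower(), key)
# Clean up the space left after each newline
script = script.replace('\n ', '\n')
```

After the loop, `script` is `'hey.\nhomer'`: the placeholder tokens are gone and the punctuation is reattached to the preceding word.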

The TV Script is Nonsensical

It's OK if the TV script doesn't make any sense. We trained on less than a megabyte of text. To get good results, you'll have to use a smaller vocabulary or get more data. Luckily, there's more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.