TV Script Generation

In this project, you'll generate your own Simpsons TV scripts using RNNs, working with part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.

Get the Data

The data is already provided for you. You'll be using a subset of the original dataset, consisting of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.


In [80]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore the copyright notice at the start of the file, since we don't use it for analyzing the data
text = text[81:]

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [81]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.248091603053435
Number of lines: 4257
Average number of words in each line: 11.50434578341555

The sentences 0 to 10:
Moe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.
Bart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.
Moe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?
Moe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.
Moe_Szyslak: What's the matter Homer? You're not your normal effervescent self.
Homer_Simpson: I got my problems, Moe. Give me another one.
Moe_Szyslak: Homer, hey, you should not drink to forget your problems.
Barney_Gumble: Yeah, you should only drink to enhance your social skills.



Implement Preprocessing Functions

The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:

  • Lookup Table
  • Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:

  • A dictionary to go from the words to an id, which we'll call vocab_to_int
  • A dictionary to go from the id to the word, which we'll call int_to_vocab

Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)


In [83]:
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # The vocabulary is the set of unique words in the text
    vocab = set(text)
    
    # Map ids to words and words to ids
    int_to_vocab = dict(enumerate(vocab))
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    
    return vocab_to_int, int_to_vocab


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)


Tests Passed
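
As a quick sanity check (using a made-up word list, not the project data), the two dictionaries should be exact inverses of each other:

# Hypothetical example; any list of word tokens works here
words = 'moe gives homer another beer and another beer'.split()
w2i, i2w = create_lookup_tables(words)
# Every word round-trips through its id, and ids cover 0..len(vocab)-1
assert all(i2w[w2i[word]] == word for word in set(words))
assert sorted(i2w) == list(range(len(set(words))))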

Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:

  • Period ( . )
  • Comma ( , )
  • Quotation Mark ( " )
  • Semicolon ( ; )
  • Exclamation mark ( ! )
  • Question mark ( ? )
  • Left Parenthesis ( ( )
  • Right Parenthesis ( ) )
  • Dash ( -- )
  • Return ( \n )

This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".


In [85]:
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # Map each symbol to a token that can't be confused with a real word
    punctuation_dict = {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parenthesis||',
        ')': '||Right_Parenthesis||',
        '--': '||Dash||',
        '\n': '||Return||',
    }
    
    return punctuation_dict

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)


Tests Passed
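
To see what the table buys us, here's roughly how it gets applied before the text is split on spaces (a sketch of the idea, not the actual helper.preprocess_and_save_data code):

token_dict = token_lookup()
sample = "Hold on, I'll check."
for symbol, token in token_dict.items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
# ['Hold', 'on', '||Comma||', "I'll", 'check', '||Period||']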

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [86]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [87]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()

Build the Neural Network

You'll build the components necessary to build an RNN by implementing the following functions below:

  • get_inputs
  • get_init_cell
  • get_embed
  • build_rnn
  • build_nn
  • get_batches

Check the Version of TensorFlow and Access to GPU


In [88]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
/Users/fm61/anaconda/envs/image_class/lib/python3.6/site-packages/ipykernel_launcher.py:14: UserWarning: No GPU found. Please use a GPU to train your neural network.
  

Input

Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter.
  • Targets placeholder
  • Learning Rate placeholder

Return the placeholders in the following tuple (Input, Targets, LearningRate)


In [89]:
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # Input and targets both have shape [batch size, sequence length]
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='target')
    learning_rate = tf.placeholder(tf.float32, shape=None, name='lr')
    
    return inputs, targets, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)


Tests Passed
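
The name arguments matter: they let the tensors be fetched from the graph by name later on. A quick check (not part of the project):

with tf.Graph().as_default() as g:
    get_inputs()
    # The placeholder is recoverable by name from the graph
    print(g.get_tensor_by_name('input:0'))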

Build RNN Cell and Initialize

Stack one or more BasicLSTMCells in a MultiRNNCell.

  • The RNN size should be set using rnn_size
  • Initialize the cell state using the MultiRNNCell's zero_state() function
    • Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the following tuple (Cell, InitialState)


In [90]:
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initial state)
    """
    num_layers = 1
    keep_prob = 0.8
    
    def build_lstm_cell():
        # A basic LSTM cell with dropout applied to its output
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    
    # Stack the LSTM layers; build a fresh cell per layer so no weights are shared
    cell = tf.contrib.rnn.MultiRNNCell([build_lstm_cell() for _ in range(num_layers)])
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
    
    return cell, initial_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)


Tests Passed

Word Embedding

Apply embedding to input_data using TensorFlow. Return the embedded sequence.


In [91]:
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # Embedding matrix, initialized uniformly in [-1, 1)
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    
    # Look up the embedding vector for each word id in input_data
    embedded = tf.nn.embedding_lookup(embedding, input_data)
    return embedded


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)


Tests Passed
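
The lookup simply indexes rows of the embedding matrix, so the result gains an embed_dim axis on top of the input's shape. A quick illustration with made-up sizes:

with tf.Graph().as_default():
    ids = tf.placeholder(tf.int32, [None, None])          # [batch size, sequence length]
    embedded = get_embed(ids, vocab_size=5, embed_dim=4)
    print(embedded.get_shape().as_list())                 # [None, None, 4]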

Build RNN

You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.

Return the outputs and final state in the following tuple (Outputs, FinalState)


In [92]:
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # Unroll the cell over the input sequence
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(state, name='final_state')
    return outputs, final_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
  • Build RNN using cell and your build_rnn(cell, inputs) function.
  • Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)


In [93]:
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # Embed the input words, then run them through the RNN
    embedding = get_embed(input_data, vocab_size, embed_dim)
    output, final_state = build_rnn(cell, embedding)

    logits = tf.contrib.layers.fully_connected(output, vocab_size, 
                                               activation_fn=None, 
                                               weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                                               biases_initializer=tf.zeros_initializer())
    
    return logits, final_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)


Tests Passed
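
Before moving on, it can help to confirm that the logits come out as [batch size, sequence length, vocab size], which is what the loss in the training graph below expects. A quick sketch with made-up sizes:

with tf.Graph().as_default():
    inputs = tf.placeholder(tf.int32, [None, None])
    cell, _ = get_init_cell(tf.shape(inputs)[0], rnn_size=128)
    logits, final_state = build_nn(cell, 128, inputs, vocab_size=50, embed_dim=32)
    print(logits.get_shape().as_list())                   # [None, None, 50]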

Batches

Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:

  • The first element is a single batch of input with the shape [batch size, sequence length]
  • The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:

[
  # First Batch
  [
    # Batch of Input
    [[ 1  2  3], [ 7  8  9]],
    # Batch of targets
    [[ 2  3  4], [ 8  9 10]]
  ],

  # Second Batch
  [
    # Batch of Input
    [[ 4  5  6], [10 11 12]],
    # Batch of targets
    [[ 5  6  7], [11 12 13]]
  ]
]

In [94]:
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # Keep only as many words as fill complete batches
    n_batches = len(int_text) // (batch_size * seq_length)
    inputs = np.array(int_text[:n_batches * batch_size * seq_length])
    # Targets are the inputs shifted left by one word; np.roll wraps the
    # final target around to the first input word instead of dropping it
    targets = np.roll(inputs, -1)
    # Reshape to [batch size, -1] and split along the time axis
    input_batches = np.split(inputs.reshape(batch_size, -1), n_batches, 1)
    target_batches = np.split(targets.reshape(batch_size, -1), n_batches, 1)
    
    return np.array(list(zip(input_batches, target_batches)))


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)


Tests Passed
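
One detail worth flagging: because np.roll wraps around, the last target of the last batch comes out as the first input word (1) rather than the 13 shown in the worked example above; the provided test accepts this, as the pass above shows. Checking shapes against the example:

example = get_batches(list(range(1, 16)), 2, 3)
print(example.shape)       # (2, 2, 2, 3): batches, (input, target), batch size, seq length
print(example[1][1][1])    # [11 12  1] -- note the wrapped-around final target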

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set num_epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set embed_dim to the size of the embedding.
  • Set seq_length to the length of sequence.
  • Set learning_rate to the learning rate.
  • Set show_every_n_batches to how often, in batches, the neural network should print training progress.

In [109]:
# Number of Epochs
num_epochs = 20
# Batch Size
batch_size = 50
# RNN Size
rnn_size = 300
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats every n batches
show_every_n_batches = 1

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'

Build the Graph

Build the graph using the neural network you implemented.


In [110]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{}   train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')


Epoch   0 Batch    0/69   train_loss = 8.841
Epoch   0 Batch    1/69   train_loss = 8.141
Epoch   0 Batch    2/69   train_loss = 6.967
Epoch   0 Batch    3/69   train_loss = 6.629
Epoch   0 Batch    4/69   train_loss = 6.377
Epoch   0 Batch    5/69   train_loss = 6.205
Epoch   0 Batch    6/69   train_loss = 6.080
Epoch   0 Batch    7/69   train_loss = 6.049
Epoch   0 Batch    8/69   train_loss = 5.990
Epoch   0 Batch    9/69   train_loss = 5.916
Epoch   0 Batch   10/69   train_loss = 6.053
Epoch   0 Batch   11/69   train_loss = 5.902
Epoch   0 Batch   12/69   train_loss = 5.768
Epoch   0 Batch   13/69   train_loss = 5.667
Epoch   0 Batch   14/69   train_loss = 5.480
Epoch   0 Batch   15/69   train_loss = 5.693
Epoch   0 Batch   16/69   train_loss = 5.726
Epoch   0 Batch   17/69   train_loss = 5.571
Epoch   0 Batch   18/69   train_loss = 5.762
Epoch   0 Batch   19/69   train_loss = 5.545
Epoch   0 Batch   20/69   train_loss = 5.719
Epoch   0 Batch   21/69   train_loss = 5.433
Epoch   0 Batch   22/69   train_loss = 5.452
Epoch   0 Batch   23/69   train_loss = 5.540
Epoch   0 Batch   24/69   train_loss = 5.500
Epoch   0 Batch   25/69   train_loss = 5.634
Epoch   0 Batch   26/69   train_loss = 5.152
Epoch   0 Batch   27/69   train_loss = 5.333
Epoch   0 Batch   28/69   train_loss = 5.446
Epoch   0 Batch   29/69   train_loss = 5.356
Epoch   0 Batch   30/69   train_loss = 5.488
Epoch   0 Batch   31/69   train_loss = 5.200
Epoch   0 Batch   32/69   train_loss = 5.188
Epoch   0 Batch   33/69   train_loss = 5.313
Epoch   0 Batch   34/69   train_loss = 5.096
Epoch   0 Batch   35/69   train_loss = 5.211
Epoch   0 Batch   36/69   train_loss = 5.077
Epoch   0 Batch   37/69   train_loss = 5.271
Epoch   0 Batch   38/69   train_loss = 5.269
Epoch   0 Batch   39/69   train_loss = 5.433
Epoch   0 Batch   40/69   train_loss = 5.229
Epoch   0 Batch   41/69   train_loss = 5.278
Epoch   0 Batch   42/69   train_loss = 5.384
Epoch   0 Batch   43/69   train_loss = 5.261
Epoch   0 Batch   44/69   train_loss = 5.329
Epoch   0 Batch   45/69   train_loss = 5.164
Epoch   0 Batch   46/69   train_loss = 5.180
Epoch   0 Batch   47/69   train_loss = 5.184
Epoch   0 Batch   48/69   train_loss = 5.055
Epoch   0 Batch   49/69   train_loss = 5.152
Epoch   0 Batch   50/69   train_loss = 5.048
Epoch   0 Batch   51/69   train_loss = 5.367
Epoch   0 Batch   52/69   train_loss = 5.204
Epoch   0 Batch   53/69   train_loss = 5.049
Epoch   0 Batch   54/69   train_loss = 5.035
Epoch   0 Batch   55/69   train_loss = 4.986
Epoch   0 Batch   56/69   train_loss = 5.152
Epoch   0 Batch   57/69   train_loss = 5.240
Epoch   0 Batch   58/69   train_loss = 5.241
Epoch   0 Batch   59/69   train_loss = 5.136
Epoch   0 Batch   60/69   train_loss = 4.894
Epoch   0 Batch   61/69   train_loss = 5.146
Epoch   0 Batch   62/69   train_loss = 5.195
Epoch   0 Batch   63/69   train_loss = 5.052
Epoch   0 Batch   64/69   train_loss = 5.143
Epoch   0 Batch   65/69   train_loss = 5.076
Epoch   0 Batch   66/69   train_loss = 4.967
Epoch   0 Batch   67/69   train_loss = 5.068
Epoch   0 Batch   68/69   train_loss = 5.211
Epoch   1 Batch    0/69   train_loss = 4.831
Epoch   1 Batch    1/69   train_loss = 4.725
Epoch   1 Batch    2/69   train_loss = 4.804
Epoch   1 Batch    3/69   train_loss = 4.742
Epoch   1 Batch    4/69   train_loss = 4.750
Epoch   1 Batch    5/69   train_loss = 4.655
Epoch   1 Batch    6/69   train_loss = 4.612
Epoch   1 Batch    7/69   train_loss = 4.545
Epoch   1 Batch    8/69   train_loss = 4.690
Epoch   1 Batch    9/69   train_loss = 4.738
Epoch   1 Batch   10/69   train_loss = 4.769
Epoch   1 Batch   11/69   train_loss = 4.714
Epoch   1 Batch   12/69   train_loss = 4.566
Epoch   1 Batch   13/69   train_loss = 4.556
Epoch   1 Batch   14/69   train_loss = 4.368
Epoch   1 Batch   15/69   train_loss = 4.600
Epoch   1 Batch   16/69   train_loss = 4.680
Epoch   1 Batch   17/69   train_loss = 4.551
Epoch   1 Batch   18/69   train_loss = 4.685
Epoch   1 Batch   19/69   train_loss = 4.602
Epoch   1 Batch   20/69   train_loss = 4.664
Epoch   1 Batch   21/69   train_loss = 4.452
Epoch   1 Batch   22/69   train_loss = 4.517
Epoch   1 Batch   23/69   train_loss = 4.585
Epoch   1 Batch   24/69   train_loss = 4.591
Epoch   1 Batch   25/69   train_loss = 4.691
Epoch   1 Batch   26/69   train_loss = 4.296
Epoch   1 Batch   27/69   train_loss = 4.416
Epoch   1 Batch   28/69   train_loss = 4.529
Epoch   1 Batch   29/69   train_loss = 4.465
Epoch   1 Batch   30/69   train_loss = 4.603
Epoch   1 Batch   31/69   train_loss = 4.350
Epoch   1 Batch   32/69   train_loss = 4.366
Epoch   1 Batch   33/69   train_loss = 4.469
Epoch   1 Batch   34/69   train_loss = 4.325
Epoch   1 Batch   35/69   train_loss = 4.412
Epoch   1 Batch   36/69   train_loss = 4.265
Epoch   1 Batch   37/69   train_loss = 4.479
Epoch   1 Batch   38/69   train_loss = 4.464
Epoch   1 Batch   39/69   train_loss = 4.614
Epoch   1 Batch   40/69   train_loss = 4.399
Epoch   1 Batch   41/69   train_loss = 4.438
Epoch   1 Batch   42/69   train_loss = 4.522
Epoch   1 Batch   43/69   train_loss = 4.406
Epoch   1 Batch   44/69   train_loss = 4.564
Epoch   1 Batch   45/69   train_loss = 4.470
Epoch   1 Batch   46/69   train_loss = 4.396
Epoch   1 Batch   47/69   train_loss = 4.398
Epoch   1 Batch   48/69   train_loss = 4.332
Epoch   1 Batch   49/69   train_loss = 4.305
Epoch   1 Batch   50/69   train_loss = 4.255
Epoch   1 Batch   51/69   train_loss = 4.572
Epoch   1 Batch   52/69   train_loss = 4.384
Epoch   1 Batch   53/69   train_loss = 4.270
Epoch   1 Batch   54/69   train_loss = 4.316
Epoch   1 Batch   55/69   train_loss = 4.326
Epoch   1 Batch   56/69   train_loss = 4.389
Epoch   1 Batch   57/69   train_loss = 4.504
Epoch   1 Batch   58/69   train_loss = 4.466
Epoch   1 Batch   59/69   train_loss = 4.351
Epoch   1 Batch   60/69   train_loss = 4.174
Epoch   1 Batch   61/69   train_loss = 4.341
Epoch   1 Batch   62/69   train_loss = 4.411
Epoch   1 Batch   63/69   train_loss = 4.273
Epoch   1 Batch   64/69   train_loss = 4.364
Epoch   1 Batch   65/69   train_loss = 4.238
Epoch   1 Batch   66/69   train_loss = 4.243
Epoch   1 Batch   67/69   train_loss = 4.353
Epoch   1 Batch   68/69   train_loss = 4.433
Epoch   2 Batch    0/69   train_loss = 4.073
Epoch   2 Batch    1/69   train_loss = 4.133
Epoch   2 Batch    2/69   train_loss = 4.266
Epoch   2 Batch    3/69   train_loss = 4.152
Epoch   2 Batch    4/69   train_loss = 4.117
Epoch   2 Batch    5/69   train_loss = 4.046
Epoch   2 Batch    6/69   train_loss = 4.044
Epoch   2 Batch    7/69   train_loss = 4.033
Epoch   2 Batch    8/69   train_loss = 4.115
Epoch   2 Batch    9/69   train_loss = 4.178
Epoch   2 Batch   10/69   train_loss = 4.183
Epoch   2 Batch   11/69   train_loss = 4.128
Epoch   2 Batch   12/69   train_loss = 4.022
Epoch   2 Batch   13/69   train_loss = 4.032
Epoch   2 Batch   14/69   train_loss = 3.878
Epoch   2 Batch   15/69   train_loss = 4.081
Epoch   2 Batch   16/69   train_loss = 4.097
Epoch   2 Batch   17/69   train_loss = 3.978
Epoch   2 Batch   18/69   train_loss = 4.029
Epoch   2 Batch   19/69   train_loss = 4.037
Epoch   2 Batch   20/69   train_loss = 4.087
Epoch   2 Batch   21/69   train_loss = 3.925
Epoch   2 Batch   22/69   train_loss = 3.965
Epoch   2 Batch   23/69   train_loss = 4.037
Epoch   2 Batch   24/69   train_loss = 3.992
Epoch   2 Batch   25/69   train_loss = 4.094
Epoch   2 Batch   26/69   train_loss = 3.805
Epoch   2 Batch   27/69   train_loss = 3.869
Epoch   2 Batch   28/69   train_loss = 4.039
Epoch   2 Batch   29/69   train_loss = 3.873
Epoch   2 Batch   30/69   train_loss = 4.020
Epoch   2 Batch   31/69   train_loss = 3.903
Epoch   2 Batch   32/69   train_loss = 3.845
Epoch   2 Batch   33/69   train_loss = 3.918
Epoch   2 Batch   34/69   train_loss = 3.835
Epoch   2 Batch   35/69   train_loss = 3.889
Epoch   2 Batch   36/69   train_loss = 3.737
Epoch   2 Batch   37/69   train_loss = 3.886
Epoch   2 Batch   38/69   train_loss = 3.855
Epoch   2 Batch   39/69   train_loss = 3.973
Epoch   2 Batch   40/69   train_loss = 3.726
Epoch   2 Batch   41/69   train_loss = 3.799
Epoch   2 Batch   42/69   train_loss = 3.865
Epoch   2 Batch   43/69   train_loss = 3.853
Epoch   2 Batch   44/69   train_loss = 3.964
Epoch   2 Batch   45/69   train_loss = 3.908
Epoch   2 Batch   46/69   train_loss = 3.803
Epoch   2 Batch   47/69   train_loss = 3.838
Epoch   2 Batch   48/69   train_loss = 3.711
Epoch   2 Batch   49/69   train_loss = 3.715
Epoch   2 Batch   50/69   train_loss = 3.758
Epoch   2 Batch   51/69   train_loss = 3.955
Epoch   2 Batch   52/69   train_loss = 3.809
Epoch   2 Batch   53/69   train_loss = 3.735
Epoch   2 Batch   54/69   train_loss = 3.735
Epoch   2 Batch   55/69   train_loss = 3.845
Epoch   2 Batch   56/69   train_loss = 3.832
Epoch   2 Batch   57/69   train_loss = 3.786
Epoch   2 Batch   58/69   train_loss = 3.824
Epoch   2 Batch   59/69   train_loss = 3.705
Epoch   2 Batch   60/69   train_loss = 3.656
Epoch   2 Batch   61/69   train_loss = 3.752
Epoch   2 Batch   62/69   train_loss = 3.763
Epoch   2 Batch   63/69   train_loss = 3.730
Epoch   2 Batch   64/69   train_loss = 3.776
Epoch   2 Batch   65/69   train_loss = 3.658
Epoch   2 Batch   66/69   train_loss = 3.675
Epoch   2 Batch   67/69   train_loss = 3.709
Epoch   2 Batch   68/69   train_loss = 3.741
Epoch   3 Batch    0/69   train_loss = 3.495
Epoch   3 Batch    1/69   train_loss = 3.555
Epoch   3 Batch    2/69   train_loss = 3.677
Epoch   3 Batch    3/69   train_loss = 3.552
Epoch   3 Batch    4/69   train_loss = 3.561
Epoch   3 Batch    5/69   train_loss = 3.529
Epoch   3 Batch    6/69   train_loss = 3.564
Epoch   3 Batch    7/69   train_loss = 3.497
Epoch   3 Batch    8/69   train_loss = 3.621
Epoch   3 Batch    9/69   train_loss = 3.637
Epoch   3 Batch   10/69   train_loss = 3.627
Epoch   3 Batch   11/69   train_loss = 3.566
Epoch   3 Batch   12/69   train_loss = 3.488
Epoch   3 Batch   13/69   train_loss = 3.540
Epoch   3 Batch   14/69   train_loss = 3.419
Epoch   3 Batch   15/69   train_loss = 3.577
Epoch   3 Batch   16/69   train_loss = 3.570
Epoch   3 Batch   17/69   train_loss = 3.540
Epoch   3 Batch   18/69   train_loss = 3.514
Epoch   3 Batch   19/69   train_loss = 3.556
Epoch   3 Batch   20/69   train_loss = 3.573
Epoch   3 Batch   21/69   train_loss = 3.519
Epoch   3 Batch   22/69   train_loss = 3.506
Epoch   3 Batch   23/69   train_loss = 3.507
Epoch   3 Batch   24/69   train_loss = 3.525
Epoch   3 Batch   25/69   train_loss = 3.578
Epoch   3 Batch   26/69   train_loss = 3.412
Epoch   3 Batch   27/69   train_loss = 3.499
Epoch   3 Batch   28/69   train_loss = 3.527
Epoch   3 Batch   29/69   train_loss = 3.460
Epoch   3 Batch   30/69   train_loss = 3.508
Epoch   3 Batch   31/69   train_loss = 3.441
Epoch   3 Batch   32/69   train_loss = 3.436
Epoch   3 Batch   33/69   train_loss = 3.462
Epoch   3 Batch   34/69   train_loss = 3.336
Epoch   3 Batch   35/69   train_loss = 3.421
Epoch   3 Batch   36/69   train_loss = 3.360
Epoch   3 Batch   37/69   train_loss = 3.458
Epoch   3 Batch   38/69   train_loss = 3.401
Epoch   3 Batch   39/69   train_loss = 3.463
Epoch   3 Batch   40/69   train_loss = 3.262
Epoch   3 Batch   41/69   train_loss = 3.312
Epoch   3 Batch   42/69   train_loss = 3.417
Epoch   3 Batch   43/69   train_loss = 3.321
Epoch   3 Batch   44/69   train_loss = 3.444
Epoch   3 Batch   45/69   train_loss = 3.479
Epoch   3 Batch   46/69   train_loss = 3.345
Epoch   3 Batch   47/69   train_loss = 3.371
Epoch   3 Batch   48/69   train_loss = 3.293
Epoch   3 Batch   49/69   train_loss = 3.263
Epoch   3 Batch   50/69   train_loss = 3.314
Epoch   3 Batch   51/69   train_loss = 3.445
Epoch   3 Batch   52/69   train_loss = 3.298
Epoch   3 Batch   53/69   train_loss = 3.205
Epoch   3 Batch   54/69   train_loss = 3.240
Epoch   3 Batch   55/69   train_loss = 3.334
Epoch   3 Batch   56/69   train_loss = 3.298
Epoch   3 Batch   57/69   train_loss = 3.276
Epoch   3 Batch   58/69   train_loss = 3.295
Epoch   3 Batch   59/69   train_loss = 3.177
Epoch   3 Batch   60/69   train_loss = 3.173
Epoch   3 Batch   61/69   train_loss = 3.218
Epoch   3 Batch   62/69   train_loss = 3.186
Epoch   3 Batch   63/69   train_loss = 3.241
Epoch   3 Batch   64/69   train_loss = 3.183
Epoch   3 Batch   65/69   train_loss = 3.113
Epoch   3 Batch   66/69   train_loss = 3.192
Epoch   3 Batch   67/69   train_loss = 3.191
Epoch   3 Batch   68/69   train_loss = 3.259
Epoch   4 Batch    0/69   train_loss = 3.039
Epoch   4 Batch    1/69   train_loss = 3.145
Epoch   4 Batch    2/69   train_loss = 3.172
Epoch   4 Batch    3/69   train_loss = 3.091
Epoch   4 Batch    4/69   train_loss = 3.049
Epoch   4 Batch    5/69   train_loss = 3.125
Epoch   4 Batch    6/69   train_loss = 3.134
Epoch   4 Batch    7/69   train_loss = 3.109
Epoch   4 Batch    8/69   train_loss = 3.160
Epoch   4 Batch    9/69   train_loss = 3.198
Epoch   4 Batch   10/69   train_loss = 3.085
Epoch   4 Batch   11/69   train_loss = 3.127
Epoch   4 Batch   12/69   train_loss = 2.990
Epoch   4 Batch   13/69   train_loss = 3.091
Epoch   4 Batch   14/69   train_loss = 3.060
Epoch   4 Batch   15/69   train_loss = 3.175
Epoch   4 Batch   16/69   train_loss = 3.114
Epoch   4 Batch   17/69   train_loss = 3.103
Epoch   4 Batch   18/69   train_loss = 3.097
Epoch   4 Batch   19/69   train_loss = 3.150
Epoch   4 Batch   20/69   train_loss = 3.141
Epoch   4 Batch   21/69   train_loss = 3.100
Epoch   4 Batch   22/69   train_loss = 3.090
Epoch   4 Batch   23/69   train_loss = 3.087
Epoch   4 Batch   24/69   train_loss = 3.076
Epoch   4 Batch   25/69   train_loss = 3.122
Epoch   4 Batch   26/69   train_loss = 3.054
Epoch   4 Batch   27/69   train_loss = 3.096
Epoch   4 Batch   28/69   train_loss = 3.117
Epoch   4 Batch   29/69   train_loss = 3.037
Epoch   4 Batch   30/69   train_loss = 3.107
Epoch   4 Batch   31/69   train_loss = 2.988
Epoch   4 Batch   32/69   train_loss = 3.071
Epoch   4 Batch   33/69   train_loss = 3.041
Epoch   4 Batch   34/69   train_loss = 3.027
Epoch   4 Batch   35/69   train_loss = 3.101
Epoch   4 Batch   36/69   train_loss = 2.966
Epoch   4 Batch   37/69   train_loss = 2.992
Epoch   4 Batch   38/69   train_loss = 3.028
Epoch   4 Batch   39/69   train_loss = 3.033
Epoch   4 Batch   40/69   train_loss = 2.925
Epoch   4 Batch   41/69   train_loss = 2.874
Epoch   4 Batch   42/69   train_loss = 2.982
Epoch   4 Batch   43/69   train_loss = 2.974
Epoch   4 Batch   44/69   train_loss = 3.085
Epoch   4 Batch   45/69   train_loss = 3.140
Epoch   4 Batch   46/69   train_loss = 2.907
Epoch   4 Batch   47/69   train_loss = 2.991
Epoch   4 Batch   48/69   train_loss = 2.897
Epoch   4 Batch   49/69   train_loss = 2.915
Epoch   4 Batch   50/69   train_loss = 2.973
Epoch   4 Batch   51/69   train_loss = 2.981
Epoch   4 Batch   52/69   train_loss = 2.910
Epoch   4 Batch   53/69   train_loss = 2.818
Epoch   4 Batch   54/69   train_loss = 2.873
Epoch   4 Batch   55/69   train_loss = 3.028
Epoch   4 Batch   56/69   train_loss = 2.962
Epoch   4 Batch   57/69   train_loss = 2.899
Epoch   4 Batch   58/69   train_loss = 2.877
Epoch   4 Batch   59/69   train_loss = 2.781
Epoch   4 Batch   60/69   train_loss = 2.831
Epoch   4 Batch   61/69   train_loss = 2.840
Epoch   4 Batch   62/69   train_loss = 2.803
Epoch   4 Batch   63/69   train_loss = 2.829
Epoch   4 Batch   64/69   train_loss = 2.795
Epoch   4 Batch   65/69   train_loss = 2.781
Epoch   4 Batch   66/69   train_loss = 2.858
Epoch   4 Batch   67/69   train_loss = 2.799
Epoch   4 Batch   68/69   train_loss = 2.844
Epoch   5 Batch    0/69   train_loss = 2.711
Epoch   5 Batch    1/69   train_loss = 2.808
Epoch   5 Batch    2/69   train_loss = 2.803
Epoch   5 Batch    3/69   train_loss = 2.715
Epoch   5 Batch    4/69   train_loss = 2.721
Epoch   5 Batch    5/69   train_loss = 2.793
Epoch   5 Batch    6/69   train_loss = 2.796
Epoch   5 Batch    7/69   train_loss = 2.780
Epoch   5 Batch    8/69   train_loss = 2.827
Epoch   5 Batch    9/69   train_loss = 2.825
Epoch   5 Batch   10/69   train_loss = 2.733
Epoch   5 Batch   11/69   train_loss = 2.745
Epoch   5 Batch   12/69   train_loss = 2.689
Epoch   5 Batch   13/69   train_loss = 2.720
Epoch   5 Batch   14/69   train_loss = 2.760
Epoch   5 Batch   15/69   train_loss = 2.795
Epoch   5 Batch   16/69   train_loss = 2.729
Epoch   5 Batch   17/69   train_loss = 2.765
Epoch   5 Batch   18/69   train_loss = 2.714
Epoch   5 Batch   19/69   train_loss = 2.738
Epoch   5 Batch   20/69   train_loss = 2.751
Epoch   5 Batch   21/69   train_loss = 2.765
Epoch   5 Batch   22/69   train_loss = 2.795
Epoch   5 Batch   23/69   train_loss = 2.723
Epoch   5 Batch   24/69   train_loss = 2.695
Epoch   5 Batch   25/69   train_loss = 2.699
Epoch   5 Batch   26/69   train_loss = 2.723
Epoch   5 Batch   27/69   train_loss = 2.759
Epoch   5 Batch   28/69   train_loss = 2.738
Epoch   5 Batch   29/69   train_loss = 2.664
Epoch   5 Batch   30/69   train_loss = 2.754
Epoch   5 Batch   31/69   train_loss = 2.622
Epoch   5 Batch   32/69   train_loss = 2.736
Epoch   5 Batch   33/69   train_loss = 2.678
Epoch   5 Batch   34/69   train_loss = 2.709
Epoch   5 Batch   35/69   train_loss = 2.800
Epoch   5 Batch   36/69   train_loss = 2.698
Epoch   5 Batch   37/69   train_loss = 2.691
Epoch   5 Batch   38/69   train_loss = 2.702
Epoch   5 Batch   39/69   train_loss = 2.641
Epoch   5 Batch   40/69   train_loss = 2.669
Epoch   5 Batch   41/69   train_loss = 2.603
Epoch   5 Batch   42/69   train_loss = 2.664
Epoch   5 Batch   43/69   train_loss = 2.620
Epoch   5 Batch   44/69   train_loss = 2.752
Epoch   5 Batch   45/69   train_loss = 2.807
Epoch   5 Batch   46/69   train_loss = 2.576
Epoch   5 Batch   47/69   train_loss = 2.682
Epoch   5 Batch   48/69   train_loss = 2.655
Epoch   5 Batch   49/69   train_loss = 2.623
Epoch   5 Batch   50/69   train_loss = 2.718
Epoch   5 Batch   51/69   train_loss = 2.634
Epoch   5 Batch   52/69   train_loss = 2.615
Epoch   5 Batch   53/69   train_loss = 2.545
Epoch   5 Batch   54/69   train_loss = 2.591
Epoch   5 Batch   55/69   train_loss = 2.675
Epoch   5 Batch   56/69   train_loss = 2.591
Epoch   5 Batch   57/69   train_loss = 2.557
Epoch   5 Batch   58/69   train_loss = 2.553
Epoch   5 Batch   59/69   train_loss = 2.467
Epoch   5 Batch   60/69   train_loss = 2.510
Epoch   5 Batch   61/69   train_loss = 2.525
Epoch   5 Batch   62/69   train_loss = 2.503
Epoch   5 Batch   63/69   train_loss = 2.563
Epoch   5 Batch   64/69   train_loss = 2.502
Epoch   5 Batch   65/69   train_loss = 2.452
Epoch   5 Batch   66/69   train_loss = 2.573
Epoch   5 Batch   67/69   train_loss = 2.471
Epoch   5 Batch   68/69   train_loss = 2.494
Epoch   6 Batch    0/69   train_loss = 2.423
Epoch   6 Batch    1/69   train_loss = 2.486
Epoch   6 Batch    2/69   train_loss = 2.514
Epoch   6 Batch    3/69   train_loss = 2.447
Epoch   6 Batch    4/69   train_loss = 2.420
Epoch   6 Batch    5/69   train_loss = 2.493
Epoch   6 Batch    6/69   train_loss = 2.449
Epoch   6 Batch    7/69   train_loss = 2.485
Epoch   6 Batch    8/69   train_loss = 2.511
Epoch   6 Batch    9/69   train_loss = 2.529
Epoch   6 Batch   10/69   train_loss = 2.345
Epoch   6 Batch   11/69   train_loss = 2.471
Epoch   6 Batch   12/69   train_loss = 2.403
Epoch   6 Batch   13/69   train_loss = 2.441
Epoch   6 Batch   14/69   train_loss = 2.469
Epoch   6 Batch   15/69   train_loss = 2.497
Epoch   6 Batch   16/69   train_loss = 2.436
Epoch   6 Batch   17/69   train_loss = 2.568
Epoch   6 Batch   18/69   train_loss = 2.356
Epoch   6 Batch   19/69   train_loss = 2.409
Epoch   6 Batch   20/69   train_loss = 2.445
Epoch   6 Batch   21/69   train_loss = 2.517
Epoch   6 Batch   22/69   train_loss = 2.514
Epoch   6 Batch   23/69   train_loss = 2.365
Epoch   6 Batch   24/69   train_loss = 2.416
Epoch   6 Batch   25/69   train_loss = 2.404
Epoch   6 Batch   26/69   train_loss = 2.453
Epoch   6 Batch   27/69   train_loss = 2.426
Epoch   6 Batch   28/69   train_loss = 2.456
Epoch   6 Batch   29/69   train_loss = 2.391
Epoch   6 Batch   30/69   train_loss = 2.508
Epoch   6 Batch   31/69   train_loss = 2.430
Epoch   6 Batch   32/69   train_loss = 2.517
Epoch   6 Batch   33/69   train_loss = 2.418
Epoch   6 Batch   34/69   train_loss = 2.469
Epoch   6 Batch   35/69   train_loss = 2.546
Epoch   6 Batch   36/69   train_loss = 2.417
Epoch   6 Batch   37/69   train_loss = 2.404
Epoch   6 Batch   38/69   train_loss = 2.384
Epoch   6 Batch   39/69   train_loss = 2.336
Epoch   6 Batch   40/69   train_loss = 2.348
Epoch   6 Batch   41/69   train_loss = 2.288
Epoch   6 Batch   42/69   train_loss = 2.355
Epoch   6 Batch   43/69   train_loss = 2.352
Epoch   6 Batch   44/69   train_loss = 2.457
Epoch   6 Batch   45/69   train_loss = 2.558
Epoch   6 Batch   46/69   train_loss = 2.391
Epoch   6 Batch   47/69   train_loss = 2.412
Epoch   6 Batch   48/69   train_loss = 2.321
Epoch   6 Batch   49/69   train_loss = 2.313
Epoch   6 Batch   50/69   train_loss = 2.421
Epoch   6 Batch   51/69   train_loss = 2.352
Epoch   6 Batch   52/69   train_loss = 2.309
Epoch   6 Batch   53/69   train_loss = 2.294
Epoch   6 Batch   54/69   train_loss = 2.289
Epoch   6 Batch   55/69   train_loss = 2.507
Epoch   6 Batch   56/69   train_loss = 2.304
Epoch   6 Batch   57/69   train_loss = 2.293
Epoch   6 Batch   58/69   train_loss = 2.284
Epoch   6 Batch   59/69   train_loss = 2.257
Epoch   6 Batch   60/69   train_loss = 2.282
Epoch   6 Batch   61/69   train_loss = 2.308
Epoch   6 Batch   62/69   train_loss = 2.256
Epoch   6 Batch   63/69   train_loss = 2.294
Epoch   6 Batch   64/69   train_loss = 2.274
Epoch   6 Batch   65/69   train_loss = 2.173
Epoch   6 Batch   66/69   train_loss = 2.361
Epoch   6 Batch   67/69   train_loss = 2.275
Epoch   6 Batch   68/69   train_loss = 2.240
Epoch   7 Batch    0/69   train_loss = 2.231
Epoch   7 Batch    1/69   train_loss = 2.244
Epoch   7 Batch    2/69   train_loss = 2.253
Epoch   7 Batch    3/69   train_loss = 2.216
Epoch   7 Batch    4/69   train_loss = 2.155
Epoch   7 Batch    5/69   train_loss = 2.297
Epoch   7 Batch    6/69   train_loss = 2.259
Epoch   7 Batch    7/69   train_loss = 2.277
Epoch   7 Batch    8/69   train_loss = 2.251
Epoch   7 Batch    9/69   train_loss = 2.248
Epoch   7 Batch   10/69   train_loss = 2.177
Epoch   7 Batch   11/69   train_loss = 2.243
Epoch   7 Batch   12/69   train_loss = 2.184
Epoch   7 Batch   13/69   train_loss = 2.222
Epoch   7 Batch   14/69   train_loss = 2.236
Epoch   7 Batch   15/69   train_loss = 2.264
Epoch   7 Batch   16/69   train_loss = 2.096
Epoch   7 Batch   17/69   train_loss = 2.325
Epoch   7 Batch   18/69   train_loss = 2.180
Epoch   7 Batch   19/69   train_loss = 2.261
Epoch   7 Batch   20/69   train_loss = 2.112
Epoch   7 Batch   21/69   train_loss = 2.259
Epoch   7 Batch   22/69   train_loss = 2.261
Epoch   7 Batch   23/69   train_loss = 2.123
Epoch   7 Batch   24/69   train_loss = 2.138
Epoch   7 Batch   25/69   train_loss = 2.138
Epoch   7 Batch   26/69   train_loss = 2.210
Epoch   7 Batch   27/69   train_loss = 2.194
Epoch   7 Batch   28/69   train_loss = 2.172
Epoch   7 Batch   29/69   train_loss = 2.182
Epoch   7 Batch   30/69   train_loss = 2.195
Epoch   7 Batch   31/69   train_loss = 2.121
Epoch   7 Batch   32/69   train_loss = 2.250
Epoch   7 Batch   33/69   train_loss = 2.168
Epoch   7 Batch   34/69   train_loss = 2.187
Epoch   7 Batch   35/69   train_loss = 2.312
Epoch   7 Batch   36/69   train_loss = 2.153
Epoch   7 Batch   37/69   train_loss = 2.148
Epoch   7 Batch   38/69   train_loss = 2.192
Epoch   7 Batch   39/69   train_loss = 2.128
Epoch   7 Batch   40/69   train_loss = 2.098
Epoch   7 Batch   41/69   train_loss = 2.041
Epoch   7 Batch   42/69   train_loss = 2.175
Epoch   7 Batch   43/69   train_loss = 2.087
Epoch   7 Batch   44/69   train_loss = 2.222
Epoch   7 Batch   45/69   train_loss = 2.255
Epoch   7 Batch   46/69   train_loss = 2.124
Epoch   7 Batch   47/69   train_loss = 2.196
Epoch   7 Batch   48/69   train_loss = 2.110
Epoch   7 Batch   49/69   train_loss = 2.115
Epoch   7 Batch   50/69   train_loss = 2.193
Epoch   7 Batch   51/69   train_loss = 2.132
Epoch   7 Batch   52/69   train_loss = 2.091
Epoch   7 Batch   53/69   train_loss = 2.059
Epoch   7 Batch   54/69   train_loss = 2.099
Epoch   7 Batch   55/69   train_loss = 2.219
Epoch   7 Batch   56/69   train_loss = 2.131
Epoch   7 Batch   57/69   train_loss = 2.055
Epoch   7 Batch   58/69   train_loss = 2.037
Epoch   7 Batch   59/69   train_loss = 1.997
Epoch   7 Batch   60/69   train_loss = 2.092
Epoch   7 Batch   61/69   train_loss = 2.028
Epoch   7 Batch   62/69   train_loss = 2.058
Epoch   7 Batch   63/69   train_loss = 2.044
Epoch   7 Batch   64/69   train_loss = 2.050
Epoch   7 Batch   65/69   train_loss = 1.954
Epoch   7 Batch   66/69   train_loss = 2.114
Epoch   7 Batch   67/69   train_loss = 2.028
Epoch   7 Batch   68/69   train_loss = 1.989
Epoch   8 Batch    0/69   train_loss = 2.000
Epoch   8 Batch    1/69   train_loss = 2.002
Epoch   8 Batch    2/69   train_loss = 2.010
Epoch   8 Batch    3/69   train_loss = 2.038
Epoch   8 Batch    4/69   train_loss = 1.956
Epoch   8 Batch    5/69   train_loss = 2.061
Epoch   8 Batch    6/69   train_loss = 2.042
Epoch   8 Batch    7/69   train_loss = 2.092
Epoch   8 Batch    8/69   train_loss = 2.044
Epoch   8 Batch    9/69   train_loss = 2.097
Epoch   8 Batch   10/69   train_loss = 1.937
Epoch   8 Batch   11/69   train_loss = 2.036
Epoch   8 Batch   12/69   train_loss = 1.972
Epoch   8 Batch   13/69   train_loss = 1.976
Epoch   8 Batch   14/69   train_loss = 2.052
Epoch   8 Batch   15/69   train_loss = 2.060
Epoch   8 Batch   16/69   train_loss = 1.954
Epoch   8 Batch   17/69   train_loss = 2.072
Epoch   8 Batch   18/69   train_loss = 1.943
Epoch   8 Batch   19/69   train_loss = 1.972
Epoch   8 Batch   20/69   train_loss = 1.967
Epoch   8 Batch   21/69   train_loss = 2.019
Epoch   8 Batch   22/69   train_loss = 1.952
Epoch   8 Batch   23/69   train_loss = 1.946
Epoch   8 Batch   24/69   train_loss = 1.910
Epoch   8 Batch   25/69   train_loss = 1.906
Epoch   8 Batch   26/69   train_loss = 1.980
Epoch   8 Batch   27/69   train_loss = 1.957
Epoch   8 Batch   28/69   train_loss = 1.922
Epoch   8 Batch   29/69   train_loss = 1.929
Epoch   8 Batch   30/69   train_loss = 2.007
Epoch   8 Batch   31/69   train_loss = 2.012
Epoch   8 Batch   32/69   train_loss = 2.055
Epoch   8 Batch   33/69   train_loss = 1.955
Epoch   8 Batch   34/69   train_loss = 2.016
Epoch   8 Batch   35/69   train_loss = 2.112
Epoch   8 Batch   36/69   train_loss = 1.970
Epoch   8 Batch   37/69   train_loss = 1.955
Epoch   8 Batch   38/69   train_loss = 1.934
Epoch   8 Batch   39/69   train_loss = 1.878
Epoch   8 Batch   40/69   train_loss = 1.920
Epoch   8 Batch   41/69   train_loss = 1.819
Epoch   8 Batch   42/69   train_loss = 1.949
Epoch   8 Batch   43/69   train_loss = 1.908
Epoch   8 Batch   44/69   train_loss = 1.967
Epoch   8 Batch   45/69   train_loss = 2.073
Epoch   8 Batch   46/69   train_loss = 1.907
Epoch   8 Batch   47/69   train_loss = 1.991
Epoch   8 Batch   48/69   train_loss = 1.950
Epoch   8 Batch   49/69   train_loss = 1.891
Epoch   8 Batch   50/69   train_loss = 1.959
Epoch   8 Batch   51/69   train_loss = 1.913
Epoch   8 Batch   52/69   train_loss = 1.905
Epoch   8 Batch   53/69   train_loss = 1.868
Epoch   8 Batch   54/69   train_loss = 1.929
Epoch   8 Batch   55/69   train_loss = 2.036
Epoch   8 Batch   56/69   train_loss = 1.956
Epoch   8 Batch   57/69   train_loss = 1.839
Epoch   8 Batch   58/69   train_loss = 1.850
Epoch   8 Batch   59/69   train_loss = 1.841
Epoch   8 Batch   60/69   train_loss = 1.955
Epoch   8 Batch   61/69   train_loss = 1.875
Epoch   8 Batch   62/69   train_loss = 1.857
Epoch   8 Batch   63/69   train_loss = 1.861
Epoch   8 Batch   64/69   train_loss = 1.839
Epoch   8 Batch   65/69   train_loss = 1.814
Epoch   8 Batch   66/69   train_loss = 1.934
Epoch   8 Batch   67/69   train_loss = 1.840
Epoch   8 Batch   68/69   train_loss = 1.836
Epoch   9 Batch    0/69   train_loss = 1.801
Epoch   9 Batch    1/69   train_loss = 1.826
Epoch   9 Batch    2/69   train_loss = 1.881
Epoch   9 Batch    3/69   train_loss = 1.825
Epoch   9 Batch    4/69   train_loss = 1.815
Epoch   9 Batch    5/69   train_loss = 1.842
Epoch   9 Batch    6/69   train_loss = 1.845
Epoch   9 Batch    7/69   train_loss = 1.917
Epoch   9 Batch    8/69   train_loss = 1.838
Epoch   9 Batch    9/69   train_loss = 1.872
Epoch   9 Batch   10/69   train_loss = 1.741
Epoch   9 Batch   11/69   train_loss = 1.734
Epoch   9 Batch   12/69   train_loss = 1.826
Epoch   9 Batch   13/69   train_loss = 1.839
Epoch   9 Batch   14/69   train_loss = 1.848
Epoch   9 Batch   15/69   train_loss = 1.843
Epoch   9 Batch   16/69   train_loss = 1.787
Epoch   9 Batch   17/69   train_loss = 1.859
Epoch   9 Batch   18/69   train_loss = 1.743
Epoch   9 Batch   19/69   train_loss = 1.838
Epoch   9 Batch   20/69   train_loss = 1.751
Epoch   9 Batch   21/69   train_loss = 1.858
Epoch   9 Batch   22/69   train_loss = 1.810
Epoch   9 Batch   23/69   train_loss = 1.713
Epoch   9 Batch   24/69   train_loss = 1.777
Epoch   9 Batch   25/69   train_loss = 1.726
Epoch   9 Batch   26/69   train_loss = 1.823
Epoch   9 Batch   27/69   train_loss = 1.741
Epoch   9 Batch   28/69   train_loss = 1.786
Epoch   9 Batch   29/69   train_loss = 1.768
Epoch   9 Batch   30/69   train_loss = 1.811
Epoch   9 Batch   31/69   train_loss = 1.773
Epoch   9 Batch   32/69   train_loss = 1.891
Epoch   9 Batch   33/69   train_loss = 1.765
Epoch   9 Batch   34/69   train_loss = 1.849
Epoch   9 Batch   35/69   train_loss = 1.891
Epoch   9 Batch   36/69   train_loss = 1.827
Epoch   9 Batch   37/69   train_loss = 1.798
Epoch   9 Batch   38/69   train_loss = 1.765
Epoch   9 Batch   39/69   train_loss = 1.779
Epoch   9 Batch   40/69   train_loss = 1.781
Epoch   9 Batch   41/69   train_loss = 1.662
Epoch   9 Batch   42/69   train_loss = 1.800
Epoch   9 Batch   43/69   train_loss = 1.687
Epoch   9 Batch   44/69   train_loss = 1.811
Epoch   9 Batch   45/69   train_loss = 1.886
Epoch   9 Batch   46/69   train_loss = 1.771
Epoch   9 Batch   47/69   train_loss = 1.833
Epoch   9 Batch   48/69   train_loss = 1.767
Epoch   9 Batch   49/69   train_loss = 1.746
Epoch   9 Batch   50/69   train_loss = 1.838
Epoch   9 Batch   51/69   train_loss = 1.713
Epoch   9 Batch   52/69   train_loss = 1.696
Epoch   9 Batch   53/69   train_loss = 1.713
Epoch   9 Batch   54/69   train_loss = 1.790
Epoch   9 Batch   55/69   train_loss = 1.899
Epoch   9 Batch   56/69   train_loss = 1.827
Epoch   9 Batch   57/69   train_loss = 1.693
Epoch   9 Batch   58/69   train_loss = 1.703
Epoch   9 Batch   59/69   train_loss = 1.600
Epoch   9 Batch   60/69   train_loss = 1.767
Epoch   9 Batch   61/69   train_loss = 1.713
Epoch   9 Batch   62/69   train_loss = 1.701
Epoch   9 Batch   63/69   train_loss = 1.774
Epoch   9 Batch   64/69   train_loss = 1.736
Epoch   9 Batch   65/69   train_loss = 1.638
Epoch   9 Batch   66/69   train_loss = 1.702
Epoch   9 Batch   67/69   train_loss = 1.716
Epoch   9 Batch   68/69   train_loss = 1.699
Epoch  10 Batch    0/69   train_loss = 1.649
Epoch  10 Batch    1/69   train_loss = 1.719
Epoch  10 Batch    2/69   train_loss = 1.658
Epoch  10 Batch    3/69   train_loss = 1.712
Epoch  10 Batch    4/69   train_loss = 1.643
Epoch  10 Batch    5/69   train_loss = 1.724
Epoch  10 Batch    6/69   train_loss = 1.690
Epoch  10 Batch    7/69   train_loss = 1.761
Epoch  10 Batch    8/69   train_loss = 1.709
Epoch  10 Batch    9/69   train_loss = 1.711
Epoch  10 Batch   10/69   train_loss = 1.628
Epoch  10 Batch   11/69   train_loss = 1.688
Epoch  10 Batch   12/69   train_loss = 1.679
Epoch  10 Batch   13/69   train_loss = 1.723
Epoch  10 Batch   14/69   train_loss = 1.771
Epoch  10 Batch   15/69   train_loss = 1.719
Epoch  10 Batch   16/69   train_loss = 1.604
Epoch  10 Batch   17/69   train_loss = 1.669
Epoch  10 Batch   18/69   train_loss = 1.658
Epoch  10 Batch   19/69   train_loss = 1.667
Epoch  10 Batch   20/69   train_loss = 1.564
Epoch  10 Batch   21/69   train_loss = 1.684
Epoch  10 Batch   22/69   train_loss = 1.674
Epoch  10 Batch   23/69   train_loss = 1.593
Epoch  10 Batch   24/69   train_loss = 1.607
Epoch  10 Batch   25/69   train_loss = 1.581
Epoch  10 Batch   26/69   train_loss = 1.647
Epoch  10 Batch   27/69   train_loss = 1.579
Epoch  10 Batch   28/69   train_loss = 1.638
Epoch  10 Batch   29/69   train_loss = 1.601
Epoch  10 Batch   30/69   train_loss = 1.681
Epoch  10 Batch   31/69   train_loss = 1.694
Epoch  10 Batch   32/69   train_loss = 1.789
Epoch  10 Batch   33/69   train_loss = 1.601
Epoch  10 Batch   34/69   train_loss = 1.719
Epoch  10 Batch   35/69   train_loss = 1.765
Epoch  10 Batch   36/69   train_loss = 1.670
Epoch  10 Batch   37/69   train_loss = 1.661
Epoch  10 Batch   38/69   train_loss = 1.647
Epoch  10 Batch   39/69   train_loss = 1.583
Epoch  10 Batch   40/69   train_loss = 1.618
Epoch  10 Batch   41/69   train_loss = 1.581
Epoch  10 Batch   42/69   train_loss = 1.662
Epoch  10 Batch   43/69   train_loss = 1.550
Epoch  10 Batch   44/69   train_loss = 1.726
Epoch  10 Batch   45/69   train_loss = 1.804
Epoch  10 Batch   46/69   train_loss = 1.663
Epoch  10 Batch   47/69   train_loss = 1.696
Epoch  10 Batch   48/69   train_loss = 1.643
Epoch  10 Batch   49/69   train_loss = 1.626
Epoch  10 Batch   50/69   train_loss = 1.661
Epoch  10 Batch   51/69   train_loss = 1.602
Epoch  10 Batch   52/69   train_loss = 1.624
Epoch  10 Batch   53/69   train_loss = 1.592
Epoch  10 Batch   54/69   train_loss = 1.613
Epoch  10 Batch   55/69   train_loss = 1.758
Epoch  10 Batch   56/69   train_loss = 1.636
Epoch  10 Batch   57/69   train_loss = 1.599
Epoch  10 Batch   58/69   train_loss = 1.549
Epoch  10 Batch   59/69   train_loss = 1.548
Epoch  10 Batch   60/69   train_loss = 1.634
Epoch  10 Batch   61/69   train_loss = 1.613
Epoch  10 Batch   62/69   train_loss = 1.592
Epoch  10 Batch   63/69   train_loss = 1.633
Epoch  10 Batch   64/69   train_loss = 1.604
Epoch  10 Batch   65/69   train_loss = 1.528
Epoch  10 Batch   66/69   train_loss = 1.703
Epoch  10 Batch   67/69   train_loss = 1.569
Epoch  10 Batch   68/69   train_loss = 1.614
Epoch  11 Batch    0/69   train_loss = 1.534
Epoch  11 Batch    1/69   train_loss = 1.557
Epoch  11 Batch    2/69   train_loss = 1.621
Epoch  11 Batch    3/69   train_loss = 1.557
Epoch  11 Batch    4/69   train_loss = 1.533
Epoch  11 Batch    5/69   train_loss = 1.660
Epoch  11 Batch    6/69   train_loss = 1.578
Epoch  11 Batch    7/69   train_loss = 1.635
Epoch  11 Batch    8/69   train_loss = 1.624
Epoch  11 Batch    9/69   train_loss = 1.602
Epoch  11 Batch   10/69   train_loss = 1.558
Epoch  11 Batch   11/69   train_loss = 1.554
Epoch  11 Batch   12/69   train_loss = 1.550
Epoch  11 Batch   13/69   train_loss = 1.558
Epoch  11 Batch   14/69   train_loss = 1.675
Epoch  11 Batch   15/69   train_loss = 1.594
Epoch  11 Batch   16/69   train_loss = 1.520
Epoch  11 Batch   17/69   train_loss = 1.588
Epoch  11 Batch   18/69   train_loss = 1.475
Epoch  11 Batch   19/69   train_loss = 1.521
Epoch  11 Batch   20/69   train_loss = 1.458
Epoch  11 Batch   21/69   train_loss = 1.656
Epoch  11 Batch   22/69   train_loss = 1.557
Epoch  11 Batch   23/69   train_loss = 1.535
Epoch  11 Batch   24/69   train_loss = 1.480
Epoch  11 Batch   25/69   train_loss = 1.435
Epoch  11 Batch   26/69   train_loss = 1.607
Epoch  11 Batch   27/69   train_loss = 1.469
Epoch  11 Batch   28/69   train_loss = 1.555
Epoch  11 Batch   29/69   train_loss = 1.514
Epoch  11 Batch   30/69   train_loss = 1.544
Epoch  11 Batch   31/69   train_loss = 1.562
Epoch  11 Batch   32/69   train_loss = 1.657
Epoch  11 Batch   33/69   train_loss = 1.537
Epoch  11 Batch   34/69   train_loss = 1.627
Epoch  11 Batch   35/69   train_loss = 1.591
Epoch  11 Batch   36/69   train_loss = 1.576
Epoch  11 Batch   37/69   train_loss = 1.502
Epoch  11 Batch   38/69   train_loss = 1.499
Epoch  11 Batch   39/69   train_loss = 1.434
Epoch  11 Batch   40/69   train_loss = 1.528
Epoch  11 Batch   41/69   train_loss = 1.429
Epoch  11 Batch   42/69   train_loss = 1.552
Epoch  11 Batch   43/69   train_loss = 1.524
Epoch  11 Batch   44/69   train_loss = 1.584
Epoch  11 Batch   45/69   train_loss = 1.672
Epoch  11 Batch   46/69   train_loss = 1.574
Epoch  11 Batch   47/69   train_loss = 1.607
Epoch  11 Batch   48/69   train_loss = 1.558
Epoch  11 Batch   49/69   train_loss = 1.556
Epoch  11 Batch   50/69   train_loss = 1.594
Epoch  11 Batch   51/69   train_loss = 1.500
Epoch  11 Batch   52/69   train_loss = 1.528
Epoch  11 Batch   53/69   train_loss = 1.517
Epoch  11 Batch   54/69   train_loss = 1.584
Epoch  11 Batch   55/69   train_loss = 1.648
Epoch  11 Batch   56/69   train_loss = 1.550
Epoch  11 Batch   57/69   train_loss = 1.465
Epoch  11 Batch   58/69   train_loss = 1.425
Epoch  11 Batch   59/69   train_loss = 1.507
Epoch  11 Batch   60/69   train_loss = 1.509
Epoch  11 Batch   61/69   train_loss = 1.535
Epoch  11 Batch   62/69   train_loss = 1.469
Epoch  11 Batch   63/69   train_loss = 1.504
Epoch  11 Batch   64/69   train_loss = 1.523
Epoch  11 Batch   65/69   train_loss = 1.409
Epoch  11 Batch   66/69   train_loss = 1.533
Epoch  11 Batch   67/69   train_loss = 1.471
Epoch  11 Batch   68/69   train_loss = 1.493
Epoch  12 Batch    0/69   train_loss = 1.435
Epoch  12 Batch    1/69   train_loss = 1.455
Epoch  12 Batch    2/69   train_loss = 1.484
Epoch  12 Batch    3/69   train_loss = 1.521
Epoch  12 Batch    4/69   train_loss = 1.427
Epoch  12 Batch    5/69   train_loss = 1.482
Epoch  12 Batch    6/69   train_loss = 1.524
Epoch  12 Batch    7/69   train_loss = 1.512
Epoch  12 Batch    8/69   train_loss = 1.489
Epoch  12 Batch    9/69   train_loss = 1.528
Epoch  12 Batch   10/69   train_loss = 1.436
Epoch  12 Batch   11/69   train_loss = 1.409
Epoch  12 Batch   12/69   train_loss = 1.466
Epoch  12 Batch   13/69   train_loss = 1.483
Epoch  12 Batch   14/69   train_loss = 1.588
Epoch  12 Batch   15/69   train_loss = 1.449
Epoch  12 Batch   16/69   train_loss = 1.425
Epoch  12 Batch   17/69   train_loss = 1.464
Epoch  12 Batch   18/69   train_loss = 1.389
Epoch  12 Batch   19/69   train_loss = 1.480
Epoch  12 Batch   20/69   train_loss = 1.344
Epoch  12 Batch   21/69   train_loss = 1.538
Epoch  12 Batch   22/69   train_loss = 1.424
Epoch  12 Batch   23/69   train_loss = 1.446
Epoch  12 Batch   24/69   train_loss = 1.464
Epoch  12 Batch   25/69   train_loss = 1.309
Epoch  12 Batch   26/69   train_loss = 1.418
Epoch  12 Batch   27/69   train_loss = 1.347
Epoch  12 Batch   28/69   train_loss = 1.422
Epoch  12 Batch   29/69   train_loss = 1.347
Epoch  12 Batch   30/69   train_loss = 1.443
Epoch  12 Batch   31/69   train_loss = 1.501
Epoch  12 Batch   32/69   train_loss = 1.547
Epoch  12 Batch   33/69   train_loss = 1.365
Epoch  12 Batch   34/69   train_loss = 1.501
Epoch  12 Batch   35/69   train_loss = 1.531
Epoch  12 Batch   36/69   train_loss = 1.404
Epoch  12 Batch   37/69   train_loss = 1.463
Epoch  12 Batch   38/69   train_loss = 1.418
Epoch  12 Batch   39/69   train_loss = 1.387
Epoch  12 Batch   40/69   train_loss = 1.399
Epoch  12 Batch   41/69   train_loss = 1.364
Epoch  12 Batch   42/69   train_loss = 1.419
Epoch  12 Batch   43/69   train_loss = 1.397
Epoch  12 Batch   44/69   train_loss = 1.525
Epoch  12 Batch   45/69   train_loss = 1.539
Epoch  12 Batch   46/69   train_loss = 1.424
Epoch  12 Batch   47/69   train_loss = 1.421
Epoch  12 Batch   48/69   train_loss = 1.495
Epoch  12 Batch   49/69   train_loss = 1.478
Epoch  12 Batch   50/69   train_loss = 1.459
Epoch  12 Batch   51/69   train_loss = 1.394
Epoch  12 Batch   52/69   train_loss = 1.501
Epoch  12 Batch   53/69   train_loss = 1.415
Epoch  12 Batch   54/69   train_loss = 1.438
Epoch  12 Batch   55/69   train_loss = 1.639
Epoch  12 Batch   56/69   train_loss = 1.443
Epoch  12 Batch   57/69   train_loss = 1.414
Epoch  12 Batch   58/69   train_loss = 1.335
Epoch  12 Batch   59/69   train_loss = 1.384
Epoch  12 Batch   60/69   train_loss = 1.432
Epoch  12 Batch   61/69   train_loss = 1.414
Epoch  12 Batch   62/69   train_loss = 1.399
Epoch  12 Batch   63/69   train_loss = 1.436
Epoch  12 Batch   64/69   train_loss = 1.454
Epoch  12 Batch   65/69   train_loss = 1.323
Epoch  12 Batch   66/69   train_loss = 1.478
Epoch  12 Batch   67/69   train_loss = 1.358
Epoch  12 Batch   68/69   train_loss = 1.388
Epoch  13 Batch    0/69   train_loss = 1.339
Epoch  13 Batch    1/69   train_loss = 1.391
Epoch  13 Batch    2/69   train_loss = 1.448
Epoch  13 Batch    3/69   train_loss = 1.395
Epoch  13 Batch    4/69   train_loss = 1.353
Epoch  13 Batch    5/69   train_loss = 1.400
Epoch  13 Batch    6/69   train_loss = 1.395
Epoch  13 Batch    7/69   train_loss = 1.433
Epoch  13 Batch    8/69   train_loss = 1.458
Epoch  13 Batch    9/69   train_loss = 1.443
Epoch  13 Batch   10/69   train_loss = 1.382
Epoch  13 Batch   11/69   train_loss = 1.346
Epoch  13 Batch   12/69   train_loss = 1.415
Epoch  13 Batch   13/69   train_loss = 1.398
Epoch  13 Batch   14/69   train_loss = 1.536
Epoch  13 Batch   15/69   train_loss = 1.371
Epoch  13 Batch   16/69   train_loss = 1.283
Epoch  13 Batch   17/69   train_loss = 1.382
Epoch  13 Batch   18/69   train_loss = 1.310
Epoch  13 Batch   19/69   train_loss = 1.415
Epoch  13 Batch   20/69   train_loss = 1.313
Epoch  13 Batch   21/69   train_loss = 1.455
Epoch  13 Batch   22/69   train_loss = 1.418
Epoch  13 Batch   23/69   train_loss = 1.327
Epoch  13 Batch   24/69   train_loss = 1.261
Epoch  13 Batch   25/69   train_loss = 1.300
Epoch  13 Batch   26/69   train_loss = 1.363
Epoch  13 Batch   27/69   train_loss = 1.258
Epoch  13 Batch   28/69   train_loss = 1.392
Epoch  13 Batch   29/69   train_loss = 1.263
Epoch  13 Batch   30/69   train_loss = 1.388
Epoch  13 Batch   31/69   train_loss = 1.350
Epoch  13 Batch   32/69   train_loss = 1.429
Epoch  13 Batch   33/69   train_loss = 1.302
Epoch  13 Batch   34/69   train_loss = 1.423
Epoch  13 Batch   35/69   train_loss = 1.371
Epoch  13 Batch   36/69   train_loss = 1.362
Epoch  13 Batch   37/69   train_loss = 1.349
Epoch  13 Batch   38/69   train_loss = 1.286
Epoch  13 Batch   39/69   train_loss = 1.290
Epoch  13 Batch   40/69   train_loss = 1.322
Epoch  13 Batch   41/69   train_loss = 1.198
Epoch  13 Batch   42/69   train_loss = 1.403
Epoch  13 Batch   43/69   train_loss = 1.286
Epoch  13 Batch   44/69   train_loss = 1.418
Epoch  13 Batch   45/69   train_loss = 1.436
Epoch  13 Batch   46/69   train_loss = 1.345
Epoch  13 Batch   47/69   train_loss = 1.357
Epoch  13 Batch   48/69   train_loss = 1.403
Epoch  13 Batch   49/69   train_loss = 1.345
Epoch  13 Batch   50/69   train_loss = 1.381
Epoch  13 Batch   51/69   train_loss = 1.317
Epoch  13 Batch   52/69   train_loss = 1.380
Epoch  13 Batch   53/69   train_loss = 1.319
Epoch  13 Batch   54/69   train_loss = 1.395
Epoch  13 Batch   55/69   train_loss = 1.448
Epoch  13 Batch   56/69   train_loss = 1.368
Epoch  13 Batch   57/69   train_loss = 1.363
Epoch  13 Batch   58/69   train_loss = 1.314
Epoch  13 Batch   59/69   train_loss = 1.366
Epoch  13 Batch   60/69   train_loss = 1.377
Epoch  13 Batch   61/69   train_loss = 1.330
Epoch  13 Batch   62/69   train_loss = 1.332
Epoch  13 Batch   63/69   train_loss = 1.351
Epoch  13 Batch   64/69   train_loss = 1.357
Epoch  13 Batch   65/69   train_loss = 1.234
Epoch  13 Batch   66/69   train_loss = 1.381
Epoch  13 Batch   67/69   train_loss = 1.285
Epoch  13 Batch   68/69   train_loss = 1.274
Epoch  14 Batch    0/69   train_loss = 1.298
Epoch  14 Batch    1/69   train_loss = 1.322
Epoch  14 Batch    2/69   train_loss = 1.344
Epoch  14 Batch    3/69   train_loss = 1.316
Epoch  14 Batch    4/69   train_loss = 1.232
Epoch  14 Batch    5/69   train_loss = 1.285
Epoch  14 Batch    6/69   train_loss = 1.378
Epoch  14 Batch    7/69   train_loss = 1.378
Epoch  14 Batch    8/69   train_loss = 1.356
Epoch  14 Batch    9/69   train_loss = 1.343
Epoch  14 Batch   10/69   train_loss = 1.251
Epoch  14 Batch   11/69   train_loss = 1.330
Epoch  14 Batch   12/69   train_loss = 1.362
Epoch  14 Batch   13/69   train_loss = 1.293
Epoch  14 Batch   14/69   train_loss = 1.394
Epoch  14 Batch   15/69   train_loss = 1.385
Epoch  14 Batch   16/69   train_loss = 1.268
Epoch  14 Batch   17/69   train_loss = 1.330
Epoch  14 Batch   18/69   train_loss = 1.247
Epoch  14 Batch   19/69   train_loss = 1.368
Epoch  14 Batch   20/69   train_loss = 1.303
Epoch  14 Batch   21/69   train_loss = 1.387
Epoch  14 Batch   22/69   train_loss = 1.304
Epoch  14 Batch   23/69   train_loss = 1.284
Epoch  14 Batch   24/69   train_loss = 1.269
Epoch  14 Batch   25/69   train_loss = 1.197
Epoch  14 Batch   26/69   train_loss = 1.304
Epoch  14 Batch   27/69   train_loss = 1.262
Epoch  14 Batch   28/69   train_loss = 1.308
Epoch  14 Batch   29/69   train_loss = 1.294
Epoch  14 Batch   30/69   train_loss = 1.232
Epoch  14 Batch   31/69   train_loss = 1.299
Epoch  14 Batch   32/69   train_loss = 1.347
Epoch  14 Batch   33/69   train_loss = 1.274
Epoch  14 Batch   34/69   train_loss = 1.365
Epoch  14 Batch   35/69   train_loss = 1.286
Epoch  14 Batch   36/69   train_loss = 1.298
Epoch  14 Batch   37/69   train_loss = 1.306
Epoch  14 Batch   38/69   train_loss = 1.232
Epoch  14 Batch   39/69   train_loss = 1.228
Epoch  14 Batch   40/69   train_loss = 1.257
Epoch  14 Batch   41/69   train_loss = 1.239
Epoch  14 Batch   42/69   train_loss = 1.332
Epoch  14 Batch   43/69   train_loss = 1.201
Epoch  14 Batch   44/69   train_loss = 1.292
Epoch  14 Batch   45/69   train_loss = 1.368
Epoch  14 Batch   46/69   train_loss = 1.315
Epoch  14 Batch   47/69   train_loss = 1.354
Epoch  14 Batch   48/69   train_loss = 1.347
Epoch  14 Batch   49/69   train_loss = 1.254
Epoch  14 Batch   50/69   train_loss = 1.296
Epoch  14 Batch   51/69   train_loss = 1.178
Epoch  14 Batch   52/69   train_loss = 1.333
Epoch  14 Batch   53/69   train_loss = 1.310
Epoch  14 Batch   54/69   train_loss = 1.326
Epoch  14 Batch   55/69   train_loss = 1.381
Epoch  14 Batch   56/69   train_loss = 1.282
Epoch  14 Batch   57/69   train_loss = 1.320
Epoch  14 Batch   58/69   train_loss = 1.230
Epoch  14 Batch   59/69   train_loss = 1.235
Epoch  14 Batch   60/69   train_loss = 1.323
Epoch  14 Batch   61/69   train_loss = 1.274
Epoch  14 Batch   62/69   train_loss = 1.189
Epoch  14 Batch   63/69   train_loss = 1.315
Epoch  14 Batch   64/69   train_loss = 1.255
Epoch  14 Batch   65/69   train_loss = 1.164
Epoch  14 Batch   66/69   train_loss = 1.300
Epoch  14 Batch   67/69   train_loss = 1.273
Epoch  14 Batch   68/69   train_loss = 1.298
Epoch  15 Batch    0/69   train_loss = 1.236
Epoch  15 Batch    1/69   train_loss = 1.272
Epoch  15 Batch    2/69   train_loss = 1.264
Epoch  15 Batch    3/69   train_loss = 1.344
Epoch  15 Batch    4/69   train_loss = 1.243
Epoch  15 Batch    5/69   train_loss = 1.304
Epoch  15 Batch    6/69   train_loss = 1.354
Epoch  15 Batch    7/69   train_loss = 1.377
Epoch  15 Batch    8/69   train_loss = 1.331
Epoch  15 Batch    9/69   train_loss = 1.289
Epoch  15 Batch   10/69   train_loss = 1.277
Epoch  15 Batch   11/69   train_loss = 1.200
Epoch  15 Batch   12/69   train_loss = 1.278
Epoch  15 Batch   13/69   train_loss = 1.284
Epoch  15 Batch   14/69   train_loss = 1.331
Epoch  15 Batch   15/69   train_loss = 1.286
Epoch  15 Batch   16/69   train_loss = 1.256
Epoch  15 Batch   17/69   train_loss = 1.283
Epoch  15 Batch   18/69   train_loss = 1.243
Epoch  15 Batch   19/69   train_loss = 1.271
Epoch  15 Batch   20/69   train_loss = 1.173
Epoch  15 Batch   21/69   train_loss = 1.318
Epoch  15 Batch   22/69   train_loss = 1.285
Epoch  15 Batch   23/69   train_loss = 1.220
Epoch  15 Batch   24/69   train_loss = 1.192
Epoch  15 Batch   25/69   train_loss = 1.156
Epoch  15 Batch   26/69   train_loss = 1.281
Epoch  15 Batch   27/69   train_loss = 1.209
Epoch  15 Batch   28/69   train_loss = 1.298
Epoch  15 Batch   29/69   train_loss = 1.203
Epoch  15 Batch   30/69   train_loss = 1.246
Epoch  15 Batch   31/69   train_loss = 1.265
Epoch  15 Batch   32/69   train_loss = 1.315
Epoch  15 Batch   33/69   train_loss = 1.199
Epoch  15 Batch   34/69   train_loss = 1.303
Epoch  15 Batch   35/69   train_loss = 1.271
Epoch  15 Batch   36/69   train_loss = 1.244
Epoch  15 Batch   37/69   train_loss = 1.205
Epoch  15 Batch   38/69   train_loss = 1.237
Epoch  15 Batch   39/69   train_loss = 1.188
Epoch  15 Batch   40/69   train_loss = 1.184
Epoch  15 Batch   41/69   train_loss = 1.199
Epoch  15 Batch   42/69   train_loss = 1.233
Epoch  15 Batch   43/69   train_loss = 1.156
Epoch  15 Batch   44/69   train_loss = 1.226
Epoch  15 Batch   45/69   train_loss = 1.337
Epoch  15 Batch   46/69   train_loss = 1.287
Epoch  15 Batch   47/69   train_loss = 1.295
Epoch  15 Batch   48/69   train_loss = 1.235
Epoch  15 Batch   49/69   train_loss = 1.242
Epoch  15 Batch   50/69   train_loss = 1.309
Epoch  15 Batch   51/69   train_loss = 1.187
Epoch  15 Batch   52/69   train_loss = 1.280
Epoch  15 Batch   53/69   train_loss = 1.233
Epoch  15 Batch   54/69   train_loss = 1.341
Epoch  15 Batch   55/69   train_loss = 1.324
Epoch  15 Batch   56/69   train_loss = 1.283
Epoch  15 Batch   57/69   train_loss = 1.272
Epoch  15 Batch   58/69   train_loss = 1.239
Epoch  15 Batch   59/69   train_loss = 1.254
Epoch  15 Batch   60/69   train_loss = 1.248
Epoch  15 Batch   61/69   train_loss = 1.215
Epoch  15 Batch   62/69   train_loss = 1.232
Epoch  15 Batch   63/69   train_loss = 1.274
Epoch  15 Batch   64/69   train_loss = 1.186
Epoch  15 Batch   65/69   train_loss = 1.205
Epoch  15 Batch   66/69   train_loss = 1.276
Epoch  15 Batch   67/69   train_loss = 1.229
Epoch  15 Batch   68/69   train_loss = 1.240
Epoch  16 Batch    0/69   train_loss = 1.195
Epoch  16 Batch    1/69   train_loss = 1.193
Epoch  16 Batch    2/69   train_loss = 1.255
Epoch  16 Batch    3/69   train_loss = 1.303
Epoch  16 Batch    4/69   train_loss = 1.146
Epoch  16 Batch    5/69   train_loss = 1.306
Epoch  16 Batch    6/69   train_loss = 1.294
Epoch  16 Batch    7/69   train_loss = 1.299
Epoch  16 Batch    8/69   train_loss = 1.271
Epoch  16 Batch    9/69   train_loss = 1.290
Epoch  16 Batch   10/69   train_loss = 1.230
Epoch  16 Batch   11/69   train_loss = 1.230
Epoch  16 Batch   12/69   train_loss = 1.223
Epoch  16 Batch   13/69   train_loss = 1.289
Epoch  16 Batch   14/69   train_loss = 1.324
Epoch  16 Batch   15/69   train_loss = 1.302
Epoch  16 Batch   16/69   train_loss = 1.195
Epoch  16 Batch   17/69   train_loss = 1.212
Epoch  16 Batch   18/69   train_loss = 1.197
Epoch  16 Batch   19/69   train_loss = 1.208
Epoch  16 Batch   20/69   train_loss = 1.193
Epoch  16 Batch   21/69   train_loss = 1.268
Epoch  16 Batch   22/69   train_loss = 1.166
Epoch  16 Batch   23/69   train_loss = 1.167
Epoch  16 Batch   24/69   train_loss = 1.134
Epoch  16 Batch   25/69   train_loss = 1.106
Epoch  16 Batch   26/69   train_loss = 1.264
Epoch  16 Batch   27/69   train_loss = 1.197
Epoch  16 Batch   28/69   train_loss = 1.214
Epoch  16 Batch   29/69   train_loss = 1.115
Epoch  16 Batch   30/69   train_loss = 1.230
Epoch  16 Batch   31/69   train_loss = 1.224
Epoch  16 Batch   32/69   train_loss = 1.243
Epoch  16 Batch   33/69   train_loss = 1.180
Epoch  16 Batch   34/69   train_loss = 1.250
Epoch  16 Batch   35/69   train_loss = 1.190
Epoch  16 Batch   36/69   train_loss = 1.193
Epoch  16 Batch   37/69   train_loss = 1.205
Epoch  16 Batch   38/69   train_loss = 1.184
Epoch  16 Batch   39/69   train_loss = 1.168
Epoch  16 Batch   40/69   train_loss = 1.160
Epoch  16 Batch   41/69   train_loss = 1.112
Epoch  16 Batch   42/69   train_loss = 1.219
Epoch  16 Batch   43/69   train_loss = 1.097
Epoch  16 Batch   44/69   train_loss = 1.179
Epoch  16 Batch   45/69   train_loss = 1.301
Epoch  16 Batch   46/69   train_loss = 1.157
Epoch  16 Batch   47/69   train_loss = 1.292
Epoch  16 Batch   48/69   train_loss = 1.194
Epoch  16 Batch   49/69   train_loss = 1.240
Epoch  16 Batch   50/69   train_loss = 1.224
Epoch  16 Batch   51/69   train_loss = 1.210
Epoch  16 Batch   52/69   train_loss = 1.294
Epoch  16 Batch   53/69   train_loss = 1.173
Epoch  16 Batch   54/69   train_loss = 1.221
Epoch  16 Batch   55/69   train_loss = 1.374
Epoch  16 Batch   56/69   train_loss = 1.127
Epoch  16 Batch   57/69   train_loss = 1.247
Epoch  16 Batch   58/69   train_loss = 1.161
Epoch  16 Batch   59/69   train_loss = 1.213
Epoch  16 Batch   60/69   train_loss = 1.265
Epoch  16 Batch   61/69   train_loss = 1.148
Epoch  16 Batch   62/69   train_loss = 1.193
Epoch  16 Batch   63/69   train_loss = 1.221
Epoch  16 Batch   64/69   train_loss = 1.214
Epoch  16 Batch   65/69   train_loss = 1.108
Epoch  16 Batch   66/69   train_loss = 1.200
Epoch  16 Batch   67/69   train_loss = 1.231
Epoch  16 Batch   68/69   train_loss = 1.199
Epoch  17 Batch    0/69   train_loss = 1.121
Epoch  17 Batch    1/69   train_loss = 1.137
Epoch  17 Batch    2/69   train_loss = 1.225
Epoch  17 Batch    3/69   train_loss = 1.230
Epoch  17 Batch    4/69   train_loss = 1.084
Epoch  17 Batch    5/69   train_loss = 1.294
Epoch  17 Batch    6/69   train_loss = 1.258
Epoch  17 Batch    7/69   train_loss = 1.220
Epoch  17 Batch    8/69   train_loss = 1.258
Epoch  17 Batch    9/69   train_loss = 1.265
Epoch  17 Batch   10/69   train_loss = 1.193
Epoch  17 Batch   11/69   train_loss = 1.193
Epoch  17 Batch   12/69   train_loss = 1.212
Epoch  17 Batch   13/69   train_loss = 1.167
Epoch  17 Batch   14/69   train_loss = 1.242
Epoch  17 Batch   15/69   train_loss = 1.196
Epoch  17 Batch   16/69   train_loss = 1.145
Epoch  17 Batch   17/69   train_loss = 1.163
Epoch  17 Batch   18/69   train_loss = 1.162
Epoch  17 Batch   19/69   train_loss = 1.224
Epoch  17 Batch   20/69   train_loss = 1.145
Epoch  17 Batch   21/69   train_loss = 1.275
Epoch  17 Batch   22/69   train_loss = 1.163
Epoch  17 Batch   23/69   train_loss = 1.152
Epoch  17 Batch   24/69   train_loss = 1.089
Epoch  17 Batch   25/69   train_loss = 1.142
Epoch  17 Batch   26/69   train_loss = 1.244
Epoch  17 Batch   27/69   train_loss = 1.116
Epoch  17 Batch   28/69   train_loss = 1.204
Epoch  17 Batch   29/69   train_loss = 1.166
Epoch  17 Batch   30/69   train_loss = 1.197
Epoch  17 Batch   31/69   train_loss = 1.159
Epoch  17 Batch   32/69   train_loss = 1.272
Epoch  17 Batch   33/69   train_loss = 1.187
Epoch  17 Batch   34/69   train_loss = 1.243
Epoch  17 Batch   35/69   train_loss = 1.206
Epoch  17 Batch   36/69   train_loss = 1.127
Epoch  17 Batch   37/69   train_loss = 1.139
Epoch  17 Batch   38/69   train_loss = 1.105
Epoch  17 Batch   39/69   train_loss = 1.139
Epoch  17 Batch   40/69   train_loss = 1.094
Epoch  17 Batch   41/69   train_loss = 1.091
Epoch  17 Batch   42/69   train_loss = 1.209
Epoch  17 Batch   43/69   train_loss = 1.017
Epoch  17 Batch   44/69   train_loss = 1.200
Epoch  17 Batch   45/69   train_loss = 1.218
Epoch  17 Batch   46/69   train_loss = 1.118
Epoch  17 Batch   47/69   train_loss = 1.202
Epoch  17 Batch   48/69   train_loss = 1.132
Epoch  17 Batch   49/69   train_loss = 1.163
Epoch  17 Batch   50/69   train_loss = 1.141
Epoch  17 Batch   51/69   train_loss = 1.148
Epoch  17 Batch   52/69   train_loss = 1.140
Epoch  17 Batch   53/69   train_loss = 1.179
Epoch  17 Batch   54/69   train_loss = 1.174
Epoch  17 Batch   55/69   train_loss = 1.250
Epoch  17 Batch   56/69   train_loss = 1.173
Epoch  17 Batch   57/69   train_loss = 1.179
Epoch  17 Batch   58/69   train_loss = 1.125
Epoch  17 Batch   59/69   train_loss = 1.136
Epoch  17 Batch   60/69   train_loss = 1.222
Epoch  17 Batch   61/69   train_loss = 1.146
Epoch  17 Batch   62/69   train_loss = 1.155
Epoch  17 Batch   63/69   train_loss = 1.151
Epoch  17 Batch   64/69   train_loss = 1.112
Epoch  17 Batch   65/69   train_loss = 1.069
Epoch  17 Batch   66/69   train_loss = 1.204
Epoch  17 Batch   67/69   train_loss = 1.120
Epoch  17 Batch   68/69   train_loss = 1.232
Epoch  18 Batch    0/69   train_loss = 1.144
Epoch  18 Batch    1/69   train_loss = 1.151
Epoch  18 Batch    2/69   train_loss = 1.126
Epoch  18 Batch    3/69   train_loss = 1.134
Epoch  18 Batch    4/69   train_loss = 1.048
Epoch  18 Batch    5/69   train_loss = 1.235
Epoch  18 Batch    6/69   train_loss = 1.230
Epoch  18 Batch    7/69   train_loss = 1.246
Epoch  18 Batch    8/69   train_loss = 1.140
Epoch  18 Batch    9/69   train_loss = 1.223
Epoch  18 Batch   10/69   train_loss = 1.128
Epoch  18 Batch   11/69   train_loss = 1.118
Epoch  18 Batch   12/69   train_loss = 1.176

Save Parameters

Save seq_length and save_dir for generating a new TV script.


In [98]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
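
For context, save_params and its counterpart load_params are simple pickle wrappers in this project's helper.py; a minimal sketch of what they plausibly look like (the 'params.p' filename is an assumption):

import pickle

def save_params(params):
    # Persist (seq_length, save_dir) so the generation cells can reload them
    pickle.dump(params, open('params.p', 'wb'))

def load_params():
    # Used by the Checkpoint cell below
    return pickle.load(open('params.p', 'rb'))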

Checkpoint


In [99]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()

Implement Generate Functions

Get Tensors

Get tensors from loaded_graph using the function get_tensor_by_name(). Retrieve the tensors using the following names:

  • "input:0"
  • "initial_state:0"
  • "final_state:0"
  • "probs:0"

Return the tensors in the following tuple: (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)


In [100]:
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensors from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    # Look up each tensor by the name it was given when the graph was built
    input_tensor = loaded_graph.get_tensor_by_name("input:0")
    initial_state_tensor = loaded_graph.get_tensor_by_name("initial_state:0")
    final_state_tensor = loaded_graph.get_tensor_by_name("final_state:0")
    probs_tensor = loaded_graph.get_tensor_by_name("probs:0")
    return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)


Tests Passed
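
As an aside on the naming convention: "input:0" addresses output 0 of the graph operation named "input", which is why each name above carries the ":0" suffix. A tiny, self-contained sketch (hypothetical graph, TF1-style) illustrating the lookup:

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # The placeholder op is named "input"; its only output tensor is "input:0"
    tf.placeholder(tf.int32, [None, None], name='input')

print(g.get_tensor_by_name('input:0'))  # Tensor("input:0", shape=(?, ?), dtype=int32)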

Choose Word

Implement the pick_word() function to select the next word using probabilities.


In [101]:
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Zero out every probability except the single most likely word
    # (argsort puts the max last), then renormalize; np.random.choice
    # over the resulting one-point distribution amounts to greedy decoding.
    p = np.squeeze(probabilities)
    p[np.argsort(p)[:-1]] = 0
    p = p / np.sum(p)
    c = np.random.choice(len(int_to_vocab), 1, p=p)[0]
    return int_to_vocab[c]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)


Tests Passed
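
Note that the implementation above always keeps only the top-scoring word, so it behaves as greedy decoding and tends to produce repetitive text. As an alternative, here is a minimal sketch that samples from the full distribution instead (same signature; pick_word_sampled is a hypothetical name, not the graded solution):

import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # Draw the next word id in proportion to its predicted probability
    p = np.squeeze(probabilities)
    p = p / np.sum(p)  # renormalize to guard against floating-point drift
    word_id = np.random.choice(len(p), p=p)
    return int_to_vocab[word_id]

Sampling keeps the output varied at the cost of occasional low-probability words; a middle ground is to zero out all but the top few words before sampling.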

Generate TV Script

This will generate the TV script for you. Set gen_length to the length of the TV script you want to generate.


In [102]:
gen_length = 200
# homer_simpson, moe_szyslak, or barney_gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})
        
        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)
    
    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')
        
    print(tv_script)


---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-102-1db8abe06e71> in <module>()
      9 with tf.Session(graph=loaded_graph) as sess:
     10     # Load saved model
---> 11     loader = tf.train.import_meta_graph(load_dir + '.meta')
     12     loader.restore(sess, load_dir)
     13 

~/anaconda/envs/image_class/lib/python3.6/site-packages/tensorflow/python/training/saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs)
   1586   """
   1587   if not isinstance(meta_graph_or_file, meta_graph_pb2.MetaGraphDef):
-> 1588     meta_graph_def = meta_graph.read_meta_graph_file(meta_graph_or_file)
   1589   else:
   1590     meta_graph_def = meta_graph_or_file

~/anaconda/envs/image_class/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py in read_meta_graph_file(filename)
    400   meta_graph_def = meta_graph_pb2.MetaGraphDef()
    401   if not file_io.file_exists(filename):
--> 402     raise IOError("File %s does not exist." % filename)
    403   # First try to read it as a binary file.
    404   file_content = file_io.FileIO(filename, "rb").read()

OSError: File ./save.meta does not exist.
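
The error above means import_meta_graph() could not find './save.meta'. That file is typically written by the training cell's saver.save(sess, save_dir) call only after the training loop finishes, which is consistent with the interrupted training output above. A minimal pre-flight check before restoring (assuming load_dir is './save' as saved earlier):

import os
import tensorflow as tf

if not os.path.exists(load_dir + '.meta'):
    # latest_checkpoint returns None if no checkpoint has been written yet
    ckpt = tf.train.latest_checkpoint(os.path.dirname(load_dir) or '.')
    print('Missing {}.meta; latest checkpoint on disk: {}'.format(load_dir, ckpt))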

The TV Script is Nonsensical

It's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily, there's more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.