Language Translation

In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll train a sequence-to-sequence model on a dataset of English and French sentences so that it can translate new sentences from English to French.

Get the Data

Since training a model to translate the entire English language to French would take a very long time, we have provided you with a small portion of the English corpus.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [2]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence in target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing:

target_vocab_to_int['<EOS>']

You can get other word ids using source_vocab_to_int and target_vocab_to_int.
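
For example, here is the conversion on a single toy sentence pair. This is a minimal sketch with made-up vocabularies and ids, purely for illustration; the real dictionaries come from the preprocessing step:

# hypothetical vocabularies and ids, for illustration only
source_vocab_to_int = {'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7}
target_vocab_to_int = {'<EOS>': 1, 'new': 8, 'jersey': 9, 'est': 10, 'calme': 11}

source_ids = [source_vocab_to_int[word] for word in 'new jersey is quiet'.split()]
target_ids = [target_vocab_to_int[word] for word in 'new jersey est calme'.split()] \
             + [target_vocab_to_int['<EOS>']]

print(source_ids)  # [4, 5, 6, 7]
print(target_ids)  # [8, 9, 10, 11, 1]  <- the <EOS> id is appended to the target only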


In [3]:
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # TODO: Implement Function
    source_ids, target_ids = [], []
    source_split = source_text.split("\n")
    for s in source_split:
        ids = [source_vocab_to_int[word] for word in s.split()]
        source_ids.append(ids)
    target_split = target_text.split("\n")
    for s in target_split:
        ids = [target_vocab_to_int[word] for word in s.split()]
        ids.append(target_vocab_to_int['<EOS>'])
        target_ids.append(ids)

    return (source_ids, target_ids)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)


Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:

  • model_inputs
  • process_decoder_input
  • encoding_layer
  • decoding_layer_train
  • decoding_layer_infer
  • decoding_layer
  • seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
  • Targets placeholder with rank 2.
  • Learning rate placeholder with rank 0.
  • Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
  • Target sequence length placeholder named "target_sequence_length" with rank 1
  • Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
  • Source sequence length placeholder named "source_sequence_length" with rank 1

Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length).
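
For instance, the max target sequence length is simply a rank-0 tensor obtained by applying tf.reduce_max to the rank-1 length placeholder. A quick illustrative check (toy placeholder, not the graded function):

# toy demonstration of the rank-1 lengths -> rank-0 max relationship
import tensorflow as tf

toy_lengths = tf.placeholder(tf.int32, (None,))  # rank 1: one length per sequence in the batch
toy_max_len = tf.reduce_max(toy_lengths)         # rank 0 scalar

with tf.Session() as sess:
    print(sess.run(toy_max_len, {toy_lengths: [3, 5, 2]}))  # prints 5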


In [7]:
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """
    # TODO: Implement Function
    _input = tf.placeholder(tf.int32, [None, None], name="input")
    _targets = tf.placeholder(tf.int32, [None, None], name="targets")
    _lr = tf.placeholder(tf.float32, name="learning_rate")
    _keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    _target_sequence_length = tf.placeholder(tf.int32, (None,), name="target_sequence_length")
    _max_target_sequence_length = tf.reduce_max(_target_sequence_length, name="max_target_len")
    _source_sequence_length = tf.placeholder(tf.int32, (None,), name="source_sequence_length")
    return (
        _input, _targets, _lr, _keep_prob, _target_sequence_length, 
        _max_target_sequence_length, _source_sequence_length)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed

Process Decoder Input

Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
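
As a concrete illustration (the ids below are hypothetical: assume <GO> = 2 and <EOS> = 3), the transformation drops the last column of the batch and prepends a column of <GO> ids. A numpy sketch of what the TensorFlow ops in the next cell compute:

import numpy as np

target_data = np.array([[15, 16, 17, 3],
                        [20, 21, 22, 3]])

# drop the last word id from every sequence, then prepend the <GO> id (2)
decoder_input = np.concatenate([np.full((2, 1), 2), target_data[:, :-1]], axis=1)

print(decoder_input)
# [[ 2 15 16 17]
#  [ 2 20 21 22]]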


In [8]:
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placehoder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # TODO: Implement Function
    # strided_slice takes a begin corner, an end corner, and strides: begin [0, 0] is the
    # top-left element, end [batch_size, -1] stops one column short of the right edge
    # (dropping the last word id from every sequence), and strides [1, 1] keep every
    # remaining element
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    # build a batch_size x 1 column of '<GO>' ids and prepend it to the sliced targets
    # along the column axis (axis=1), so every sequence now starts with <GO>
    decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
    return decoder_input

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)


Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer:


In [9]:
from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, 
                   source_sequence_length, source_vocab_size, 
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    # TODO: Implement Function
    # embed_sequence maps each word id in rnn_inputs to a dense vector, producing a
    # [batch_size, sequence_length, encoding_embedding_size] tensor of embeddings
    enc_embedding = tf.contrib.layers.embed_sequence(
        rnn_inputs, source_vocab_size, encoding_embedding_size)
    
    # encoder
    def make_cell(rnn_size):
        # construct our cell and initialize
        # made of LSTM cells
        enc_cell = tf.contrib.rnn.DropoutWrapper(
            tf.contrib.rnn.LSTMCell(
                rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)),
            input_keep_prob=keep_prob)
        
        return enc_cell # this is just a single layer of our encoder
    
    # now to make the full multi_rnn_cell of num_layer encoding cells
    enc_cell = tf.contrib.rnn.MultiRNNCell(
        [make_cell(rnn_size) for _ in range(num_layers)])

    # run the stacked encoder cells over the embedded inputs
    enc_output, enc_state = tf.nn.dynamic_rnn(
        enc_cell, enc_embedding, sequence_length=source_sequence_length, dtype=tf.float32)
    
    return (enc_output, enc_state)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)


Tests Passed

Decoding - Training

Create a training decoding layer:


In [10]:
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, 
                         target_sequence_length, max_summary_length, 
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    # TODO: Implement Function
    # TrainingHelper feeds the ground-truth (embedded) target tokens to the decoder at
    # each time step; BasicDecoder uses it to read its inputs during training
    training_helper = tf.contrib.seq2seq.TrainingHelper(
        inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False)
    # wrap the decoder cell in dropout with the given keep probability
    dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob)
    # build the training decoder from the cell, the helper, and the encoder's final state
    training_decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, training_helper, encoder_state, output_layer)
    # perform dynamic decoding using the decoder
    x = tf.contrib.seq2seq.dynamic_decode(
        training_decoder, impute_finished=True, maximum_iterations=max_summary_length)
    basic_decoder_output = x[0]

    return basic_decoder_output



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)


Tests Passed

Decoding - Inference

Create an inference decoder:


In [11]:
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    # TODO: Implement Function
    # create a 1d tensor of <GO> codes as our start tokens
    start_tokens = tf.tile(
        tf.constant([start_of_sequence_id], dtype=tf.int32),
        [batch_size], name="start_tokens")
    
    # GreedyEmbeddingHelper is the inference-time helper: at every step it embeds the
    # argmax of the previous decoder output and feeds it back as the next input.
    # It takes the embedding matrix, a vector of start token ids, and a scalar end-of-sequence id.
    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
        dec_embeddings, start_tokens, end_of_sequence_id)
    

    # adding dropout wrapper to the decode cell
    dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob)
    
    # Basic decoder    
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, inference_helper, encoder_state, output_layer)
    
    # Perform dynamic decoding using the decoder
    x = tf.contrib.seq2seq.dynamic_decode(
        inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
    
    basic_decoder_output = x[0]
    
    return basic_decoder_output



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)


Tests Passed

Build the Decoding Layer

Implement decoding_layer() to create a Decoder RNN layer.

  • Embed the target sequences
  • Construct the decoder LSTM cell (just like you constructed the encoder cell above)
  • Create an output layer to map the outputs of the decoder to the elements of our vocabulary
  • Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
  • Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
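
A minimal sketch of how that sharing works (the scope and variable names below are illustrative only): creating variables inside tf.variable_scope("decode") and then re-entering the same scope with reuse=True returns the existing variables instead of creating new ones, so the training and inference decoders share weights.

import tensorflow as tf

with tf.Graph().as_default():
    with tf.variable_scope("decode"):
        w_train = tf.get_variable("w", shape=[3, 3])   # variable is created here

    with tf.variable_scope("decode", reuse=True):
        w_infer = tf.get_variable("w", shape=[3, 3])   # the same variable is returned

    print(w_train.name, w_infer.name)  # both print "decode/w:0"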


In [12]:
def decoding_layer(dec_input, encoder_state,
                   target_sequence_length, max_target_sequence_length,
                   rnn_size,
                   num_layers, target_vocab_to_int, target_vocab_size,
                   batch_size, keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :param decoding_embedding_size: Decoding embedding size
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # TODO: Implement Function
    # 1. Decoder embedding
    dec_embeddings = tf.Variable(tf.random_uniform(
        [target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    
    # 2. Construct the decoder cell
    def make_cell(rnn_size):
        # construct our cell and initialize
        # made of LSTM cells
        dec_cell = tf.contrib.rnn.DropoutWrapper(
            tf.contrib.rnn.LSTMCell(
                rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)),
            input_keep_prob=keep_prob)
        
        return dec_cell # this is just a single layer of our decoder
    
    # Stack layers
    dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])  
    
    # 3. Dense layer to translate the decoder's outputs at each time step into logits over
    # the target vocabulary
    output_layer = Dense(
        target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
    
    # 4. Set up training decoder
    with tf.variable_scope("decode"):
        train = decoding_layer_train(
            encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length,
            output_layer, keep_prob)
    
    # 5. Set up inference decoder
    with tf.variable_scope("decode", reuse=True):
        start_of_sequence_id = target_vocab_to_int['<GO>']
        end_of_sequence_id = target_vocab_to_int['<EOS>']
        infer = decoding_layer_infer(
            encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
            max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob)
    
    return (train, infer)



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
  • Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
  • Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.

In [13]:
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sentence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # TODO: Implement Function
    # 1. Encode the input
    _, enc_state = encoding_layer(
        input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size,
        enc_embedding_size)
    # 2. Process target data
    dec_input = process_decoder_input(
        target_data, target_vocab_to_int, batch_size)
    
    # 3. Decode the encoded input using the decoding layer
    train, infer = decoding_layer(
        dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size,
        num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)
    
    return train, infer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)


Tests Passed

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set num_layers to the number of layers.
  • Set encoding_embedding_size to the size of the embedding for the encoder.
  • Set decoding_embedding_size to the size of the embedding for the decoder.
  • Set learning_rate to the learning rate.
  • Set keep_probability to the Dropout keep probability.
  • Set display_step to the number of batches between debug output statements.

In [14]:
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 300
# Number of Layers
num_layers = 4
# Embedding Size
encoding_embedding_size = 64
decoding_embedding_size = 64
# Learning Rate
learning_rate = 0.0005
# Dropout Keep Probability
keep_probability = 0.75
display_step = 50

Build the Graph

Build the graph using the neural network you implemented.


In [15]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])

train_graph = tf.Graph()
with train_graph.as_default():
    input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()

    #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)

    train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
                                                   targets,
                                                   keep_prob,
                                                   batch_size,
                                                   source_sequence_length,
                                                   target_sequence_length,
                                                   max_target_sequence_length,
                                                   len(source_vocab_to_int),
                                                   len(target_vocab_to_int),
                                                   encoding_embedding_size,
                                                   decoding_embedding_size,
                                                   rnn_size,
                                                   num_layers,
                                                   target_vocab_to_int)


    training_logits = tf.identity(train_logits.rnn_output, name='logits')
    inference_logits = tf.identity(inference_logits.sample_id, name='predictions')

    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            training_logits,
            targets,
            masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

Batch and pad the source and target sequences


In [16]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]


def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size

        # Slice the right amount for the batch
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]

        # Pad
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))

        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))

        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))

        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
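
For instance, pad_sentence_batch turns a ragged batch into a rectangular one by appending the pad id to the shorter sequences (a pad id of 0 is assumed below purely for illustration):

batch = [[4, 5, 6], [7, 8], [9]]
print(pad_sentence_batch(batch, 0))
# [[4, 5, 6], [7, 8, 0], [9, 0, 0]]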

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [17]:
from tqdm import tqdm
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1])],
            'constant')

    return np.mean(np.equal(target, logits))

# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
                                                                                                             valid_target,
                                                                                                             batch_size,
                                                                                                             source_vocab_to_int['<PAD>'],
                                                                                                             target_vocab_to_int['<PAD>']))                                                                                                  
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in tqdm(range(epochs)):
        for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
                get_batches(train_source, train_target, batch_size,
                            source_vocab_to_int['<PAD>'],
                            target_vocab_to_int['<PAD>'])):

            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})


            if batch_i % display_step == 0 and batch_i > 0:


                batch_train_logits = sess.run(
                    inference_logits,
                    {input_data: source_batch,
                     source_sequence_length: sources_lengths,
                     target_sequence_length: targets_lengths,
                     keep_prob: 1.0})


                batch_valid_logits = sess.run(
                    inference_logits,
                    {input_data: valid_sources_batch,
                     source_sequence_length: valid_sources_lengths,
                     target_sequence_length: valid_targets_lengths,
                     keep_prob: 1.0})

                train_acc = get_accuracy(target_batch, batch_train_logits)

                valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)

                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')


  0%|          | 0/10 [00:00<?, ?it/s]
Epoch   0 Batch   50/2154 - Train Accuracy: 0.4013, Validation Accuracy: 0.4446, Loss: 2.7040
Epoch   0 Batch  100/2154 - Train Accuracy: 0.4286, Validation Accuracy: 0.4766, Loss: 2.3948
Epoch   0 Batch  150/2154 - Train Accuracy: 0.4524, Validation Accuracy: 0.4872, Loss: 1.8425
Epoch   0 Batch  200/2154 - Train Accuracy: 0.4461, Validation Accuracy: 0.4787, Loss: 1.8954
Epoch   0 Batch  250/2154 - Train Accuracy: 0.4906, Validation Accuracy: 0.5050, Loss: 1.6998
Epoch   0 Batch  300/2154 - Train Accuracy: 0.4424, Validation Accuracy: 0.4759, Loss: 1.8310
Epoch   0 Batch  350/2154 - Train Accuracy: 0.4422, Validation Accuracy: 0.4787, Loss: 1.5219
Epoch   0 Batch  400/2154 - Train Accuracy: 0.4781, Validation Accuracy: 0.4844, Loss: 1.4638
Epoch   0 Batch  450/2154 - Train Accuracy: 0.5023, Validation Accuracy: 0.4787, Loss: 1.3020
Epoch   0 Batch  500/2154 - Train Accuracy: 0.5280, Validation Accuracy: 0.5284, Loss: 1.2601
Epoch   0 Batch  550/2154 - Train Accuracy: 0.5759, Validation Accuracy: 0.5518, Loss: 1.2354
Epoch   0 Batch  600/2154 - Train Accuracy: 0.5016, Validation Accuracy: 0.5426, Loss: 1.1219
Epoch   0 Batch  650/2154 - Train Accuracy: 0.4875, Validation Accuracy: 0.5057, Loss: 1.2114
Epoch   0 Batch  700/2154 - Train Accuracy: 0.4901, Validation Accuracy: 0.5405, Loss: 1.0268
Epoch   0 Batch  750/2154 - Train Accuracy: 0.5203, Validation Accuracy: 0.5341, Loss: 1.0591
Epoch   0 Batch  800/2154 - Train Accuracy: 0.5600, Validation Accuracy: 0.5511, Loss: 1.0643
Epoch   0 Batch  850/2154 - Train Accuracy: 0.5578, Validation Accuracy: 0.5455, Loss: 0.9381
Epoch   0 Batch  900/2154 - Train Accuracy: 0.5197, Validation Accuracy: 0.5412, Loss: 0.9996
Epoch   0 Batch  950/2154 - Train Accuracy: 0.6012, Validation Accuracy: 0.5597, Loss: 0.9382
Epoch   0 Batch 1000/2154 - Train Accuracy: 0.5580, Validation Accuracy: 0.5611, Loss: 0.8380
Epoch   0 Batch 1050/2154 - Train Accuracy: 0.5117, Validation Accuracy: 0.5760, Loss: 0.9467
Epoch   0 Batch 1100/2154 - Train Accuracy: 0.5500, Validation Accuracy: 0.5675, Loss: 0.9565
Epoch   0 Batch 1150/2154 - Train Accuracy: 0.5773, Validation Accuracy: 0.5682, Loss: 0.8756
Epoch   0 Batch 1200/2154 - Train Accuracy: 0.5543, Validation Accuracy: 0.5831, Loss: 0.9819
Epoch   0 Batch 1250/2154 - Train Accuracy: 0.6328, Validation Accuracy: 0.5426, Loss: 0.7891
Epoch   0 Batch 1300/2154 - Train Accuracy: 0.5570, Validation Accuracy: 0.5639, Loss: 0.8361
Epoch   0 Batch 1350/2154 - Train Accuracy: 0.5984, Validation Accuracy: 0.5767, Loss: 0.7789
Epoch   0 Batch 1400/2154 - Train Accuracy: 0.5436, Validation Accuracy: 0.5646, Loss: 0.8160
Epoch   0 Batch 1450/2154 - Train Accuracy: 0.6217, Validation Accuracy: 0.5938, Loss: 0.7319
Epoch   0 Batch 1500/2154 - Train Accuracy: 0.6209, Validation Accuracy: 0.6009, Loss: 0.7740
Epoch   0 Batch 1550/2154 - Train Accuracy: 0.6367, Validation Accuracy: 0.6229, Loss: 0.7092
Epoch   0 Batch 1600/2154 - Train Accuracy: 0.6414, Validation Accuracy: 0.6023, Loss: 0.6999
Epoch   0 Batch 1650/2154 - Train Accuracy: 0.6213, Validation Accuracy: 0.6030, Loss: 0.6448
Epoch   0 Batch 1700/2154 - Train Accuracy: 0.6352, Validation Accuracy: 0.6115, Loss: 0.6454
Epoch   0 Batch 1750/2154 - Train Accuracy: 0.5813, Validation Accuracy: 0.5916, Loss: 0.6168
Epoch   0 Batch 1800/2154 - Train Accuracy: 0.5977, Validation Accuracy: 0.6264, Loss: 0.6329
Epoch   0 Batch 1850/2154 - Train Accuracy: 0.6628, Validation Accuracy: 0.6399, Loss: 0.6741
Epoch   0 Batch 1900/2154 - Train Accuracy: 0.6883, Validation Accuracy: 0.6101, Loss: 0.5738
Epoch   0 Batch 1950/2154 - Train Accuracy: 0.6242, Validation Accuracy: 0.6406, Loss: 0.5847
Epoch   0 Batch 2000/2154 - Train Accuracy: 0.6594, Validation Accuracy: 0.6342, Loss: 0.5683
Epoch   0 Batch 2050/2154 - Train Accuracy: 0.6382, Validation Accuracy: 0.6449, Loss: 0.6355
Epoch   0 Batch 2100/2154 - Train Accuracy: 0.6930, Validation Accuracy: 0.6250, Loss: 0.5350
Epoch   0 Batch 2150/2154 - Train Accuracy: 0.6778, Validation Accuracy: 0.6420, Loss: 0.5984
 10%|█         | 1/10 [08:39<1:17:53, 519.33s/it]
Epoch   1 Batch   50/2154 - Train Accuracy: 0.6316, Validation Accuracy: 0.6726, Loss: 0.5414
Epoch   1 Batch  100/2154 - Train Accuracy: 0.6161, Validation Accuracy: 0.6286, Loss: 0.5458
Epoch   1 Batch  150/2154 - Train Accuracy: 0.6592, Validation Accuracy: 0.6364, Loss: 0.4606
Epoch   1 Batch  200/2154 - Train Accuracy: 0.6758, Validation Accuracy: 0.6577, Loss: 0.4777
Epoch   1 Batch  250/2154 - Train Accuracy: 0.6922, Validation Accuracy: 0.6477, Loss: 0.5007
Epoch   1 Batch  300/2154 - Train Accuracy: 0.6817, Validation Accuracy: 0.6761, Loss: 0.4948
Epoch   1 Batch  350/2154 - Train Accuracy: 0.6773, Validation Accuracy: 0.6470, Loss: 0.4750
Epoch   1 Batch  400/2154 - Train Accuracy: 0.7047, Validation Accuracy: 0.6783, Loss: 0.4387
Epoch   1 Batch  450/2154 - Train Accuracy: 0.7328, Validation Accuracy: 0.6868, Loss: 0.4098
Epoch   1 Batch  500/2154 - Train Accuracy: 0.6801, Validation Accuracy: 0.6854, Loss: 0.4160
Epoch   1 Batch  550/2154 - Train Accuracy: 0.7433, Validation Accuracy: 0.6733, Loss: 0.4102
Epoch   1 Batch  600/2154 - Train Accuracy: 0.7492, Validation Accuracy: 0.6847, Loss: 0.3704
Epoch   1 Batch  650/2154 - Train Accuracy: 0.7023, Validation Accuracy: 0.6442, Loss: 0.3952
Epoch   1 Batch  700/2154 - Train Accuracy: 0.6176, Validation Accuracy: 0.6477, Loss: 0.3776
Epoch   1 Batch  750/2154 - Train Accuracy: 0.6664, Validation Accuracy: 0.6790, Loss: 0.3963
Epoch   1 Batch  800/2154 - Train Accuracy: 0.7467, Validation Accuracy: 0.6967, Loss: 0.3748
Epoch   1 Batch  850/2154 - Train Accuracy: 0.6875, Validation Accuracy: 0.7152, Loss: 0.3386
Epoch   1 Batch  900/2154 - Train Accuracy: 0.6595, Validation Accuracy: 0.6776, Loss: 0.3915
Epoch   1 Batch  950/2154 - Train Accuracy: 0.7122, Validation Accuracy: 0.6996, Loss: 0.3682
Epoch   1 Batch 1000/2154 - Train Accuracy: 0.7188, Validation Accuracy: 0.7159, Loss: 0.3131
Epoch   1 Batch 1050/2154 - Train Accuracy: 0.6961, Validation Accuracy: 0.6733, Loss: 0.3560
Epoch   1 Batch 1100/2154 - Train Accuracy: 0.6758, Validation Accuracy: 0.6974, Loss: 0.3406
Epoch   1 Batch 1150/2154 - Train Accuracy: 0.7319, Validation Accuracy: 0.7280, Loss: 0.3221
Epoch   1 Batch 1200/2154 - Train Accuracy: 0.7064, Validation Accuracy: 0.7266, Loss: 0.3813
Epoch   1 Batch 1250/2154 - Train Accuracy: 0.7797, Validation Accuracy: 0.7322, Loss: 0.2835
Epoch   1 Batch 1300/2154 - Train Accuracy: 0.7078, Validation Accuracy: 0.7216, Loss: 0.3148
Epoch   1 Batch 1350/2154 - Train Accuracy: 0.7734, Validation Accuracy: 0.7237, Loss: 0.2843
Epoch   1 Batch 1400/2154 - Train Accuracy: 0.7475, Validation Accuracy: 0.7031, Loss: 0.2734
Epoch   1 Batch 1450/2154 - Train Accuracy: 0.7648, Validation Accuracy: 0.7344, Loss: 0.2958
Epoch   1 Batch 1500/2154 - Train Accuracy: 0.6941, Validation Accuracy: 0.7507, Loss: 0.2716
Epoch   1 Batch 1550/2154 - Train Accuracy: 0.7703, Validation Accuracy: 0.7145, Loss: 0.2824
Epoch   1 Batch 1600/2154 - Train Accuracy: 0.7594, Validation Accuracy: 0.7415, Loss: 0.2675
Epoch   1 Batch 1650/2154 - Train Accuracy: 0.7999, Validation Accuracy: 0.7301, Loss: 0.2236
Epoch   1 Batch 1700/2154 - Train Accuracy: 0.7406, Validation Accuracy: 0.7223, Loss: 0.2298
Epoch   1 Batch 1750/2154 - Train Accuracy: 0.7961, Validation Accuracy: 0.7734, Loss: 0.2297
Epoch   1 Batch 1800/2154 - Train Accuracy: 0.7617, Validation Accuracy: 0.7749, Loss: 0.2132
Epoch   1 Batch 1850/2154 - Train Accuracy: 0.8076, Validation Accuracy: 0.7578, Loss: 0.2457
Epoch   1 Batch 1900/2154 - Train Accuracy: 0.8328, Validation Accuracy: 0.7486, Loss: 0.2144
Epoch   1 Batch 1950/2154 - Train Accuracy: 0.7836, Validation Accuracy: 0.7315, Loss: 0.2036
Epoch   1 Batch 2000/2154 - Train Accuracy: 0.7891, Validation Accuracy: 0.7678, Loss: 0.2092
Epoch   1 Batch 2050/2154 - Train Accuracy: 0.8158, Validation Accuracy: 0.7401, Loss: 0.2228
Epoch   1 Batch 2100/2154 - Train Accuracy: 0.8125, Validation Accuracy: 0.7450, Loss: 0.2001
Epoch   1 Batch 2150/2154 - Train Accuracy: 0.8147, Validation Accuracy: 0.7642, Loss: 0.2284
 20%|██        | 2/10 [16:57<1:08:23, 512.98s/it]
Epoch   2 Batch   50/2154 - Train Accuracy: 0.7377, Validation Accuracy: 0.7599, Loss: 0.1919
Epoch   2 Batch  100/2154 - Train Accuracy: 0.7024, Validation Accuracy: 0.7656, Loss: 0.2196
Epoch   2 Batch  150/2154 - Train Accuracy: 0.7976, Validation Accuracy: 0.7955, Loss: 0.1827
Epoch   2 Batch  200/2154 - Train Accuracy: 0.7680, Validation Accuracy: 0.7450, Loss: 0.1550
Epoch   2 Batch  250/2154 - Train Accuracy: 0.8375, Validation Accuracy: 0.7720, Loss: 0.1782
Epoch   2 Batch  300/2154 - Train Accuracy: 0.8051, Validation Accuracy: 0.7777, Loss: 0.1973
Epoch   2 Batch  350/2154 - Train Accuracy: 0.8086, Validation Accuracy: 0.7834, Loss: 0.1831
Epoch   2 Batch  400/2154 - Train Accuracy: 0.8766, Validation Accuracy: 0.8047, Loss: 0.1615
Epoch   2 Batch  450/2154 - Train Accuracy: 0.8328, Validation Accuracy: 0.7493, Loss: 0.1552
Epoch   2 Batch  500/2154 - Train Accuracy: 0.8314, Validation Accuracy: 0.7670, Loss: 0.1653
Epoch   2 Batch  550/2154 - Train Accuracy: 0.8423, Validation Accuracy: 0.7962, Loss: 0.1509
Epoch   2 Batch  600/2154 - Train Accuracy: 0.8577, Validation Accuracy: 0.8097, Loss: 0.1355
Epoch   2 Batch  650/2154 - Train Accuracy: 0.8242, Validation Accuracy: 0.7642, Loss: 0.1662
Epoch   2 Batch  700/2154 - Train Accuracy: 0.7673, Validation Accuracy: 0.7976, Loss: 0.1445
Epoch   2 Batch  750/2154 - Train Accuracy: 0.7961, Validation Accuracy: 0.7741, Loss: 0.1704
Epoch   2 Batch  800/2154 - Train Accuracy: 0.8207, Validation Accuracy: 0.7891, Loss: 0.1612
Epoch   2 Batch  850/2154 - Train Accuracy: 0.8078, Validation Accuracy: 0.7976, Loss: 0.1300
Epoch   2 Batch  900/2154 - Train Accuracy: 0.7936, Validation Accuracy: 0.7692, Loss: 0.1602
Epoch   2 Batch  950/2154 - Train Accuracy: 0.7854, Validation Accuracy: 0.7756, Loss: 0.1535
Epoch   2 Batch 1000/2154 - Train Accuracy: 0.8006, Validation Accuracy: 0.8033, Loss: 0.1269
Epoch   2 Batch 1050/2154 - Train Accuracy: 0.7773, Validation Accuracy: 0.7955, Loss: 0.1569
Epoch   2 Batch 1100/2154 - Train Accuracy: 0.8195, Validation Accuracy: 0.8224, Loss: 0.1403
Epoch   2 Batch 1150/2154 - Train Accuracy: 0.8454, Validation Accuracy: 0.8026, Loss: 0.1288
Epoch   2 Batch 1200/2154 - Train Accuracy: 0.8512, Validation Accuracy: 0.8636, Loss: 0.1624
Epoch   2 Batch 1250/2154 - Train Accuracy: 0.8359, Validation Accuracy: 0.8551, Loss: 0.1111
Epoch   2 Batch 1300/2154 - Train Accuracy: 0.7812, Validation Accuracy: 0.8288, Loss: 0.1307
Epoch   2 Batch 1350/2154 - Train Accuracy: 0.8641, Validation Accuracy: 0.8253, Loss: 0.1420
Epoch   2 Batch 1400/2154 - Train Accuracy: 0.8635, Validation Accuracy: 0.8246, Loss: 0.1036
Epoch   2 Batch 1450/2154 - Train Accuracy: 0.8380, Validation Accuracy: 0.8928, Loss: 0.1368
Epoch   2 Batch 1500/2154 - Train Accuracy: 0.8043, Validation Accuracy: 0.8558, Loss: 0.1171
Epoch   2 Batch 1550/2154 - Train Accuracy: 0.8633, Validation Accuracy: 0.8572, Loss: 0.1398
Epoch   2 Batch 1600/2154 - Train Accuracy: 0.7875, Validation Accuracy: 0.8445, Loss: 0.1306
Epoch   2 Batch 1650/2154 - Train Accuracy: 0.8690, Validation Accuracy: 0.8764, Loss: 0.0987
Epoch   2 Batch 1700/2154 - Train Accuracy: 0.8516, Validation Accuracy: 0.8516, Loss: 0.0956
Epoch   2 Batch 1750/2154 - Train Accuracy: 0.9023, Validation Accuracy: 0.8849, Loss: 0.1028
Epoch   2 Batch 1800/2154 - Train Accuracy: 0.8688, Validation Accuracy: 0.9062, Loss: 0.0918
Epoch   2 Batch 1850/2154 - Train Accuracy: 0.8668, Validation Accuracy: 0.8970, Loss: 0.1239
Epoch   2 Batch 1900/2154 - Train Accuracy: 0.9242, Validation Accuracy: 0.8707, Loss: 0.0988
Epoch   2 Batch 1950/2154 - Train Accuracy: 0.9211, Validation Accuracy: 0.9126, Loss: 0.0854
Epoch   2 Batch 2000/2154 - Train Accuracy: 0.9000, Validation Accuracy: 0.8729, Loss: 0.0944
Epoch   2 Batch 2050/2154 - Train Accuracy: 0.8536, Validation Accuracy: 0.8864, Loss: 0.1071
Epoch   2 Batch 2100/2154 - Train Accuracy: 0.9055, Validation Accuracy: 0.8643, Loss: 0.0703
Epoch   2 Batch 2150/2154 - Train Accuracy: 0.9315, Validation Accuracy: 0.9148, Loss: 0.0963
 30%|███       | 3/10 [25:18<59:26, 509.48s/it]  
Epoch   3 Batch   50/2154 - Train Accuracy: 0.8882, Validation Accuracy: 0.8970, Loss: 0.0928
Epoch   3 Batch  100/2154 - Train Accuracy: 0.8824, Validation Accuracy: 0.9290, Loss: 0.1053
Epoch   3 Batch  150/2154 - Train Accuracy: 0.9412, Validation Accuracy: 0.8849, Loss: 0.0731
Epoch   3 Batch  200/2154 - Train Accuracy: 0.9391, Validation Accuracy: 0.9169, Loss: 0.0556
Epoch   3 Batch  250/2154 - Train Accuracy: 0.9187, Validation Accuracy: 0.8679, Loss: 0.0842
Epoch   3 Batch  300/2154 - Train Accuracy: 0.9071, Validation Accuracy: 0.8750, Loss: 0.0930
Epoch   3 Batch  350/2154 - Train Accuracy: 0.9313, Validation Accuracy: 0.9013, Loss: 0.0742
Epoch   3 Batch  400/2154 - Train Accuracy: 0.9688, Validation Accuracy: 0.8821, Loss: 0.0617
Epoch   3 Batch  450/2154 - Train Accuracy: 0.9445, Validation Accuracy: 0.9027, Loss: 0.0770
Epoch   3 Batch  500/2154 - Train Accuracy: 0.9638, Validation Accuracy: 0.8991, Loss: 0.0737
Epoch   3 Batch  550/2154 - Train Accuracy: 0.9323, Validation Accuracy: 0.9027, Loss: 0.0692
Epoch   3 Batch  600/2154 - Train Accuracy: 0.9433, Validation Accuracy: 0.9134, Loss: 0.0546
Epoch   3 Batch  650/2154 - Train Accuracy: 0.9305, Validation Accuracy: 0.9020, Loss: 0.0633
Epoch   3 Batch  700/2154 - Train Accuracy: 0.8775, Validation Accuracy: 0.8885, Loss: 0.0651
Epoch   3 Batch  750/2154 - Train Accuracy: 0.9039, Validation Accuracy: 0.9070, Loss: 0.0769
Epoch   3 Batch  800/2154 - Train Accuracy: 0.9449, Validation Accuracy: 0.9297, Loss: 0.0653
Epoch   3 Batch  850/2154 - Train Accuracy: 0.9219, Validation Accuracy: 0.8906, Loss: 0.0477
Epoch   3 Batch  900/2154 - Train Accuracy: 0.9498, Validation Accuracy: 0.8714, Loss: 0.0730
Epoch   3 Batch  950/2154 - Train Accuracy: 0.9137, Validation Accuracy: 0.9233, Loss: 0.0704
Epoch   3 Batch 1000/2154 - Train Accuracy: 0.9323, Validation Accuracy: 0.9070, Loss: 0.0488
Epoch   3 Batch 1050/2154 - Train Accuracy: 0.9141, Validation Accuracy: 0.9112, Loss: 0.0747
Epoch   3 Batch 1100/2154 - Train Accuracy: 0.9187, Validation Accuracy: 0.9375, Loss: 0.0669
Epoch   3 Batch 1150/2154 - Train Accuracy: 0.9161, Validation Accuracy: 0.8977, Loss: 0.0747
Epoch   3 Batch 1200/2154 - Train Accuracy: 0.9079, Validation Accuracy: 0.9176, Loss: 0.0826
Epoch   3 Batch 1250/2154 - Train Accuracy: 0.9367, Validation Accuracy: 0.9055, Loss: 0.0452
Epoch   3 Batch 1300/2154 - Train Accuracy: 0.9211, Validation Accuracy: 0.9027, Loss: 0.0642
Epoch   3 Batch 1350/2154 - Train Accuracy: 0.9148, Validation Accuracy: 0.9176, Loss: 0.0561
Epoch   3 Batch 1400/2154 - Train Accuracy: 0.9276, Validation Accuracy: 0.9148, Loss: 0.0832
Epoch   3 Batch 1450/2154 - Train Accuracy: 0.9704, Validation Accuracy: 0.9091, Loss: 0.0694
Epoch   3 Batch 1500/2154 - Train Accuracy: 0.9276, Validation Accuracy: 0.8920, Loss: 0.0442
Epoch   3 Batch 1550/2154 - Train Accuracy: 0.9492, Validation Accuracy: 0.9354, Loss: 0.0703
Epoch   3 Batch 1600/2154 - Train Accuracy: 0.9180, Validation Accuracy: 0.9148, Loss: 0.0797
Epoch   3 Batch 1650/2154 - Train Accuracy: 0.9472, Validation Accuracy: 0.9354, Loss: 0.0533
Epoch   3 Batch 1700/2154 - Train Accuracy: 0.9414, Validation Accuracy: 0.9318, Loss: 0.0509
Epoch   3 Batch 1750/2154 - Train Accuracy: 0.9406, Validation Accuracy: 0.9119, Loss: 0.0502
Epoch   3 Batch 1800/2154 - Train Accuracy: 0.9445, Validation Accuracy: 0.9048, Loss: 0.0515
Epoch   3 Batch 1850/2154 - Train Accuracy: 0.9778, Validation Accuracy: 0.9219, Loss: 0.0725
Epoch   3 Batch 1900/2154 - Train Accuracy: 0.9625, Validation Accuracy: 0.9084, Loss: 0.0613
Epoch   3 Batch 1950/2154 - Train Accuracy: 0.9437, Validation Accuracy: 0.8885, Loss: 0.0442
Epoch   3 Batch 2000/2154 - Train Accuracy: 0.9680, Validation Accuracy: 0.8885, Loss: 0.0591
Epoch   3 Batch 2050/2154 - Train Accuracy: 0.9252, Validation Accuracy: 0.9155, Loss: 0.0616
Epoch   3 Batch 2100/2154 - Train Accuracy: 0.9367, Validation Accuracy: 0.8928, Loss: 0.0514
Epoch   3 Batch 2150/2154 - Train Accuracy: 0.9628, Validation Accuracy: 0.9155, Loss: 0.0578
 40%|████      | 4/10 [33:40<50:43, 507.19s/it]
Epoch   4 Batch   50/2154 - Train Accuracy: 0.9342, Validation Accuracy: 0.9190, Loss: 0.0643
Epoch   4 Batch  100/2154 - Train Accuracy: 0.9204, Validation Accuracy: 0.9006, Loss: 0.0683
Epoch   4 Batch  150/2154 - Train Accuracy: 0.9315, Validation Accuracy: 0.9041, Loss: 0.0399
Epoch   4 Batch  200/2154 - Train Accuracy: 0.9609, Validation Accuracy: 0.9261, Loss: 0.0332
Epoch   4 Batch  250/2154 - Train Accuracy: 0.9398, Validation Accuracy: 0.9119, Loss: 0.0482
Epoch   4 Batch  300/2154 - Train Accuracy: 0.9556, Validation Accuracy: 0.9318, Loss: 0.0543
Epoch   4 Batch  350/2154 - Train Accuracy: 0.9367, Validation Accuracy: 0.9219, Loss: 0.0433
Epoch   4 Batch  400/2154 - Train Accuracy: 0.9586, Validation Accuracy: 0.8913, Loss: 0.0481
Epoch   4 Batch  450/2154 - Train Accuracy: 0.9719, Validation Accuracy: 0.9254, Loss: 0.0433
Epoch   4 Batch  500/2154 - Train Accuracy: 0.9704, Validation Accuracy: 0.9070, Loss: 0.0559
Epoch   4 Batch  550/2154 - Train Accuracy: 0.9211, Validation Accuracy: 0.9183, Loss: 0.0474
Epoch   4 Batch  600/2154 - Train Accuracy: 0.9515, Validation Accuracy: 0.9283, Loss: 0.0343
Epoch   4 Batch  650/2154 - Train Accuracy: 0.9031, Validation Accuracy: 0.9325, Loss: 0.0636
Epoch   4 Batch  700/2154 - Train Accuracy: 0.8997, Validation Accuracy: 0.8999, Loss: 0.0382
Epoch   4 Batch  750/2154 - Train Accuracy: 0.9633, Validation Accuracy: 0.9474, Loss: 0.0495
Epoch   4 Batch  800/2154 - Train Accuracy: 0.9696, Validation Accuracy: 0.9347, Loss: 0.0372
Epoch   4 Batch  850/2154 - Train Accuracy: 0.9727, Validation Accuracy: 0.9517, Loss: 0.0307
Epoch   4 Batch  900/2154 - Train Accuracy: 0.9605, Validation Accuracy: 0.9105, Loss: 0.0536
Epoch   4 Batch  950/2154 - Train Accuracy: 0.9597, Validation Accuracy: 0.9631, Loss: 0.0368
Epoch   4 Batch 1000/2154 - Train Accuracy: 0.9435, Validation Accuracy: 0.9624, Loss: 0.0331
Epoch   4 Batch 1050/2154 - Train Accuracy: 0.9367, Validation Accuracy: 0.9453, Loss: 0.0650
Epoch   4 Batch 1100/2154 - Train Accuracy: 0.9547, Validation Accuracy: 0.9396, Loss: 0.0462
Epoch   4 Batch 1150/2154 - Train Accuracy: 0.9359, Validation Accuracy: 0.9446, Loss: 0.0496
Epoch   4 Batch 1200/2154 - Train Accuracy: 0.9169, Validation Accuracy: 0.9375, Loss: 0.0591
Epoch   4 Batch 1250/2154 - Train Accuracy: 0.9656, Validation Accuracy: 0.9276, Loss: 0.0315
Epoch   4 Batch 1300/2154 - Train Accuracy: 0.9523, Validation Accuracy: 0.9411, Loss: 0.0436
Epoch   4 Batch 1350/2154 - Train Accuracy: 0.9461, Validation Accuracy: 0.9411, Loss: 0.0451
Epoch   4 Batch 1400/2154 - Train Accuracy: 0.9819, Validation Accuracy: 0.9361, Loss: 0.0285
Epoch   4 Batch 1450/2154 - Train Accuracy: 0.9515, Validation Accuracy: 0.9332, Loss: 0.0448
Epoch   4 Batch 1500/2154 - Train Accuracy: 0.9720, Validation Accuracy: 0.9496, Loss: 0.0229
Epoch   4 Batch 1550/2154 - Train Accuracy: 0.9578, Validation Accuracy: 0.9496, Loss: 0.0495
Epoch   4 Batch 1600/2154 - Train Accuracy: 0.9367, Validation Accuracy: 0.9638, Loss: 0.0436
Epoch   4 Batch 1650/2154 - Train Accuracy: 0.9152, Validation Accuracy: 0.9801, Loss: 0.0430
Epoch   4 Batch 1700/2154 - Train Accuracy: 0.9273, Validation Accuracy: 0.9680, Loss: 0.0362
Epoch   4 Batch 1750/2154 - Train Accuracy: 0.9375, Validation Accuracy: 0.9474, Loss: 0.0415
Epoch   4 Batch 1800/2154 - Train Accuracy: 0.9711, Validation Accuracy: 0.9382, Loss: 0.0366
Epoch   4 Batch 1850/2154 - Train Accuracy: 0.9803, Validation Accuracy: 0.9581, Loss: 0.0463
Epoch   4 Batch 1900/2154 - Train Accuracy: 0.9828, Validation Accuracy: 0.9538, Loss: 0.0341
Epoch   4 Batch 1950/2154 - Train Accuracy: 0.9680, Validation Accuracy: 0.9822, Loss: 0.0264
Epoch   4 Batch 2000/2154 - Train Accuracy: 0.9648, Validation Accuracy: 0.9347, Loss: 0.0484
Epoch   4 Batch 2050/2154 - Train Accuracy: 0.9646, Validation Accuracy: 0.9375, Loss: 0.0624
Epoch   4 Batch 2100/2154 - Train Accuracy: 0.9805, Validation Accuracy: 0.9183, Loss: 0.0244
Epoch   4 Batch 2150/2154 - Train Accuracy: 0.9695, Validation Accuracy: 0.9389, Loss: 0.0383
 50%|█████     | 5/10 [41:59<42:03, 504.68s/it]
Epoch   5 Batch   50/2154 - Train Accuracy: 0.9350, Validation Accuracy: 0.9048, Loss: 0.0485
Epoch   5 Batch  100/2154 - Train Accuracy: 0.9301, Validation Accuracy: 0.9368, Loss: 0.0521
Epoch   5 Batch  150/2154 - Train Accuracy: 0.9420, Validation Accuracy: 0.9545, Loss: 0.0305
Epoch   5 Batch  200/2154 - Train Accuracy: 0.9875, Validation Accuracy: 0.9645, Loss: 0.0234
Epoch   5 Batch  250/2154 - Train Accuracy: 0.9625, Validation Accuracy: 0.9560, Loss: 0.0382
Epoch   5 Batch  300/2154 - Train Accuracy: 0.9605, Validation Accuracy: 0.9347, Loss: 0.0383
Epoch   5 Batch  350/2154 - Train Accuracy: 0.9867, Validation Accuracy: 0.9595, Loss: 0.0304
Epoch   5 Batch  400/2154 - Train Accuracy: 0.9836, Validation Accuracy: 0.9673, Loss: 0.0280
Epoch   5 Batch  450/2154 - Train Accuracy: 0.9891, Validation Accuracy: 0.9652, Loss: 0.0278
Epoch   5 Batch  500/2154 - Train Accuracy: 0.9745, Validation Accuracy: 0.9631, Loss: 0.0370
Epoch   5 Batch  550/2154 - Train Accuracy: 0.9405, Validation Accuracy: 0.9709, Loss: 0.0409
Epoch   5 Batch  600/2154 - Train Accuracy: 0.9564, Validation Accuracy: 0.9581, Loss: 0.0262
Epoch   5 Batch  650/2154 - Train Accuracy: 0.9313, Validation Accuracy: 0.9531, Loss: 0.0348
Epoch   5 Batch  700/2154 - Train Accuracy: 0.9367, Validation Accuracy: 0.9460, Loss: 0.0258
Epoch   5 Batch  750/2154 - Train Accuracy: 0.9703, Validation Accuracy: 0.9865, Loss: 0.0357
Epoch   5 Batch  800/2154 - Train Accuracy: 0.9786, Validation Accuracy: 0.9474, Loss: 0.0321
Epoch   5 Batch  850/2154 - Train Accuracy: 0.9680, Validation Accuracy: 0.9531, Loss: 0.0201
Epoch   5 Batch  900/2154 - Train Accuracy: 0.9531, Validation Accuracy: 0.9396, Loss: 0.0383
Epoch   5 Batch  950/2154 - Train Accuracy: 0.9819, Validation Accuracy: 0.9751, Loss: 0.0341
Epoch   5 Batch 1000/2154 - Train Accuracy: 0.9799, Validation Accuracy: 0.9624, Loss: 0.0205
Epoch   5 Batch 1050/2154 - Train Accuracy: 0.9297, Validation Accuracy: 0.9673, Loss: 0.0571
Epoch   5 Batch 1100/2154 - Train Accuracy: 0.9727, Validation Accuracy: 0.9545, Loss: 0.0367
Epoch   5 Batch 1150/2154 - Train Accuracy: 0.9416, Validation Accuracy: 0.9411, Loss: 0.0339
Epoch   5 Batch 1200/2154 - Train Accuracy: 0.9408, Validation Accuracy: 0.9503, Loss: 0.0508
Epoch   5 Batch 1250/2154 - Train Accuracy: 0.9734, Validation Accuracy: 0.9581, Loss: 0.0223
Epoch   5 Batch 1300/2154 - Train Accuracy: 0.9750, Validation Accuracy: 0.9680, Loss: 0.0268
Epoch   5 Batch 1350/2154 - Train Accuracy: 0.9391, Validation Accuracy: 0.9645, Loss: 0.0346
Epoch   5 Batch 1400/2154 - Train Accuracy: 0.9581, Validation Accuracy: 0.9616, Loss: 0.0207
Epoch   5 Batch 1450/2154 - Train Accuracy: 0.9778, Validation Accuracy: 0.9680, Loss: 0.0329
Epoch   5 Batch 1500/2154 - Train Accuracy: 0.9770, Validation Accuracy: 0.9709, Loss: 0.0232
Epoch   5 Batch 1550/2154 - Train Accuracy: 0.9734, Validation Accuracy: 0.9879, Loss: 0.0316
Epoch   5 Batch 1600/2154 - Train Accuracy: 0.9625, Validation Accuracy: 0.9709, Loss: 0.0330
Epoch   5 Batch 1650/2154 - Train Accuracy: 0.9509, Validation Accuracy: 0.9936, Loss: 0.0368
Epoch   5 Batch 1700/2154 - Train Accuracy: 0.9328, Validation Accuracy: 0.9929, Loss: 0.0230
Epoch   5 Batch 1750/2154 - Train Accuracy: 0.9250, Validation Accuracy: 0.9723, Loss: 0.0352
Epoch   5 Batch 1800/2154 - Train Accuracy: 0.9625, Validation Accuracy: 0.9936, Loss: 0.0323
Epoch   5 Batch 1850/2154 - Train Accuracy: 0.9778, Validation Accuracy: 0.9780, Loss: 0.0467
Epoch   5 Batch 1900/2154 - Train Accuracy: 0.9805, Validation Accuracy: 0.9879, Loss: 0.0264
Epoch   5 Batch 1950/2154 - Train Accuracy: 0.9984, Validation Accuracy: 0.9737, Loss: 0.0162
Epoch   5 Batch 2000/2154 - Train Accuracy: 0.9750, Validation Accuracy: 0.9567, Loss: 0.0350
Epoch   5 Batch 2050/2154 - Train Accuracy: 0.9630, Validation Accuracy: 0.9503, Loss: 0.0333
Epoch   5 Batch 2100/2154 - Train Accuracy: 0.9656, Validation Accuracy: 0.9737, Loss: 0.0239
Epoch   5 Batch 2150/2154 - Train Accuracy: 0.9829, Validation Accuracy: 0.9886, Loss: 0.0271
 60%|██████    | 6/10 [50:10<33:22, 500.72s/it]
Epoch   6 Batch   50/2154 - Train Accuracy: 0.9589, Validation Accuracy: 0.9879, Loss: 0.0383
Epoch   6 Batch  100/2154 - Train Accuracy: 0.9546, Validation Accuracy: 0.9645, Loss: 0.0595
Epoch   6 Batch  150/2154 - Train Accuracy: 0.9717, Validation Accuracy: 0.9482, Loss: 0.0244
Epoch   6 Batch  200/2154 - Train Accuracy: 0.9734, Validation Accuracy: 0.9893, Loss: 0.0147
Epoch   6 Batch  250/2154 - Train Accuracy: 0.9664, Validation Accuracy: 0.9709, Loss: 0.0188
Epoch   6 Batch  300/2154 - Train Accuracy: 0.9918, Validation Accuracy: 0.9659, Loss: 0.0276
Epoch   6 Batch  350/2154 - Train Accuracy: 0.9703, Validation Accuracy: 0.9659, Loss: 0.0205
Epoch   6 Batch  400/2154 - Train Accuracy: 0.9766, Validation Accuracy: 0.9702, Loss: 0.0229
Epoch   6 Batch  450/2154 - Train Accuracy: 0.9789, Validation Accuracy: 0.9872, Loss: 0.0191
Epoch   6 Batch  500/2154 - Train Accuracy: 0.9737, Validation Accuracy: 0.9943, Loss: 0.0232
Epoch   6 Batch  550/2154 - Train Accuracy: 0.9501, Validation Accuracy: 0.9759, Loss: 0.0285
Epoch   6 Batch  600/2154 - Train Accuracy: 0.9507, Validation Accuracy: 0.9638, Loss: 0.0230
Epoch   6 Batch  650/2154 - Train Accuracy: 0.9391, Validation Accuracy: 0.9879, Loss: 0.0276
Epoch   6 Batch  700/2154 - Train Accuracy: 0.9564, Validation Accuracy: 0.9680, Loss: 0.0294
Epoch   6 Batch  750/2154 - Train Accuracy: 0.9883, Validation Accuracy: 0.9936, Loss: 0.0283
Epoch   6 Batch  800/2154 - Train Accuracy: 0.9852, Validation Accuracy: 0.9766, Loss: 0.0222
Epoch   6 Batch  850/2154 - Train Accuracy: 0.9602, Validation Accuracy: 0.9652, Loss: 0.0179
Epoch   6 Batch  900/2154 - Train Accuracy: 0.9811, Validation Accuracy: 0.9503, Loss: 0.0299
Epoch   6 Batch  950/2154 - Train Accuracy: 0.9819, Validation Accuracy: 0.9695, Loss: 0.0213
Epoch   6 Batch 1000/2154 - Train Accuracy: 0.9747, Validation Accuracy: 0.9801, Loss: 0.0156
Epoch   6 Batch 1050/2154 - Train Accuracy: 0.9523, Validation Accuracy: 0.9936, Loss: 0.0444
Epoch   6 Batch 1100/2154 - Train Accuracy: 0.9789, Validation Accuracy: 0.9659, Loss: 0.0245
Epoch   6 Batch 1150/2154 - Train Accuracy: 0.9663, Validation Accuracy: 0.9659, Loss: 0.0294
Epoch   6 Batch 1200/2154 - Train Accuracy: 0.9572, Validation Accuracy: 0.9695, Loss: 0.0321
Epoch   6 Batch 1250/2154 - Train Accuracy: 0.9578, Validation Accuracy: 0.9723, Loss: 0.0159
Epoch   6 Batch 1300/2154 - Train Accuracy: 0.9922, Validation Accuracy: 0.9872, Loss: 0.0237
Epoch   6 Batch 1350/2154 - Train Accuracy: 0.9547, Validation Accuracy: 0.9638, Loss: 0.0322
Epoch   6 Batch 1400/2154 - Train Accuracy: 0.9704, Validation Accuracy: 0.9688, Loss: 0.0498
Epoch   6 Batch 1450/2154 - Train Accuracy: 0.9910, Validation Accuracy: 0.9759, Loss: 0.0241
Epoch   6 Batch 1500/2154 - Train Accuracy: 0.9836, Validation Accuracy: 0.9801, Loss: 0.0185
Epoch   6 Batch 1550/2154 - Train Accuracy: 0.9812, Validation Accuracy: 0.9879, Loss: 0.0323
Epoch   6 Batch 1600/2154 - Train Accuracy: 0.9555, Validation Accuracy: 0.9901, Loss: 0.0288
Epoch   6 Batch 1650/2154 - Train Accuracy: 0.9643, Validation Accuracy: 0.9936, Loss: 0.0238
Epoch   6 Batch 1700/2154 - Train Accuracy: 0.9719, Validation Accuracy: 0.9808, Loss: 0.0237
Epoch   6 Batch 1750/2154 - Train Accuracy: 0.9313, Validation Accuracy: 0.9744, Loss: 0.0351
Epoch   6 Batch 1800/2154 - Train Accuracy: 0.9625, Validation Accuracy: 0.9851, Loss: 0.0267
Epoch   6 Batch 1850/2154 - Train Accuracy: 0.9663, Validation Accuracy: 0.9901, Loss: 0.0341
Epoch   6 Batch 1900/2154 - Train Accuracy: 0.9898, Validation Accuracy: 0.9695, Loss: 0.0228
Epoch   6 Batch 1950/2154 - Train Accuracy: 0.9945, Validation Accuracy: 0.9737, Loss: 0.0123
Epoch   6 Batch 2000/2154 - Train Accuracy: 0.9789, Validation Accuracy: 0.9830, Loss: 0.0249
Epoch   6 Batch 2050/2154 - Train Accuracy: 0.9523, Validation Accuracy: 0.9780, Loss: 0.0297
Epoch   6 Batch 2100/2154 - Train Accuracy: 0.9820, Validation Accuracy: 0.9759, Loss: 0.0129
Epoch   6 Batch 2150/2154 - Train Accuracy: 0.9769, Validation Accuracy: 0.9943, Loss: 0.0187
 70%|███████   | 7/10 [58:21<24:53, 497.78s/it]
Epoch   7 Batch   50/2154 - Train Accuracy: 0.9589, Validation Accuracy: 0.9837, Loss: 0.0354
Epoch   7 Batch  100/2154 - Train Accuracy: 0.9479, Validation Accuracy: 0.9673, Loss: 0.0380
Epoch   7 Batch  150/2154 - Train Accuracy: 0.9576, Validation Accuracy: 0.9815, Loss: 0.0190
Epoch   7 Batch  200/2154 - Train Accuracy: 0.9852, Validation Accuracy: 0.9893, Loss: 0.0094
Epoch   7 Batch  250/2154 - Train Accuracy: 0.9719, Validation Accuracy: 0.9830, Loss: 0.0231
Epoch   7 Batch  300/2154 - Train Accuracy: 0.9926, Validation Accuracy: 0.9780, Loss: 0.0299
Epoch   7 Batch  350/2154 - Train Accuracy: 0.9609, Validation Accuracy: 0.9830, Loss: 0.0171
Epoch   7 Batch  400/2154 - Train Accuracy: 0.9695, Validation Accuracy: 0.9787, Loss: 0.0194
Epoch   7 Batch  450/2154 - Train Accuracy: 0.9656, Validation Accuracy: 0.9879, Loss: 0.0114
Epoch   7 Batch  500/2154 - Train Accuracy: 0.9827, Validation Accuracy: 0.9936, Loss: 0.0206
Epoch   7 Batch  550/2154 - Train Accuracy: 0.9635, Validation Accuracy: 0.9645, Loss: 0.0202
Epoch   7 Batch  600/2154 - Train Accuracy: 0.9638, Validation Accuracy: 0.9794, Loss: 0.0154
Epoch   7 Batch  650/2154 - Train Accuracy: 0.9375, Validation Accuracy: 0.9886, Loss: 0.0157
Epoch   7 Batch  700/2154 - Train Accuracy: 0.9531, Validation Accuracy: 0.9709, Loss: 0.0169
Epoch   7 Batch  750/2154 - Train Accuracy: 0.9711, Validation Accuracy: 0.9943, Loss: 0.0256
Epoch   7 Batch  800/2154 - Train Accuracy: 0.9860, Validation Accuracy: 0.9886, Loss: 0.0192
Epoch   7 Batch  850/2154 - Train Accuracy: 0.9500, Validation Accuracy: 0.9851, Loss: 0.0169
Epoch   7 Batch  900/2154 - Train Accuracy: 0.9803, Validation Accuracy: 0.9695, Loss: 0.0260
Epoch   7 Batch  950/2154 - Train Accuracy: 0.9762, Validation Accuracy: 0.9872, Loss: 0.0276
Epoch   7 Batch 1000/2154 - Train Accuracy: 0.9807, Validation Accuracy: 0.9851, Loss: 0.0129
Epoch   7 Batch 1050/2154 - Train Accuracy: 0.9516, Validation Accuracy: 0.9751, Loss: 0.0355
Epoch   7 Batch 1100/2154 - Train Accuracy: 0.9789, Validation Accuracy: 0.9695, Loss: 0.0222
Epoch   7 Batch 1150/2154 - Train Accuracy: 0.9482, Validation Accuracy: 0.9531, Loss: 0.0288
Epoch   7 Batch 1200/2154 - Train Accuracy: 0.9482, Validation Accuracy: 0.9794, Loss: 0.0262
Epoch   7 Batch 1250/2154 - Train Accuracy: 0.9969, Validation Accuracy: 0.9787, Loss: 0.0150
Epoch   7 Batch 1300/2154 - Train Accuracy: 0.9945, Validation Accuracy: 0.9915, Loss: 0.0113
Epoch   7 Batch 1350/2154 - Train Accuracy: 0.9766, Validation Accuracy: 0.9773, Loss: 0.0181
Epoch   7 Batch 1400/2154 - Train Accuracy: 0.9934, Validation Accuracy: 0.9780, Loss: 0.0166
Epoch   7 Batch 1450/2154 - Train Accuracy: 0.9762, Validation Accuracy: 0.9879, Loss: 0.0216
Epoch   7 Batch 1500/2154 - Train Accuracy: 0.9992, Validation Accuracy: 0.9659, Loss: 0.0080
Epoch   7 Batch 1550/2154 - Train Accuracy: 0.9945, Validation Accuracy: 0.9886, Loss: 0.0151
Epoch   7 Batch 1600/2154 - Train Accuracy: 0.9859, Validation Accuracy: 0.9815, Loss: 0.0249
Epoch   7 Batch 1650/2154 - Train Accuracy: 0.9784, Validation Accuracy: 0.9837, Loss: 0.0239
Epoch   7 Batch 1700/2154 - Train Accuracy: 0.9758, Validation Accuracy: 0.9943, Loss: 0.0191
Epoch   7 Batch 1750/2154 - Train Accuracy: 0.9016, Validation Accuracy: 0.9844, Loss: 0.0274
Epoch   7 Batch 1800/2154 - Train Accuracy: 0.9672, Validation Accuracy: 0.9830, Loss: 0.0224
Epoch   7 Batch 1850/2154 - Train Accuracy: 0.9753, Validation Accuracy: 0.9908, Loss: 0.0290
Epoch   7 Batch 1900/2154 - Train Accuracy: 0.9805, Validation Accuracy: 0.9787, Loss: 0.0184
Epoch   7 Batch 1950/2154 - Train Accuracy: 0.9992, Validation Accuracy: 0.9844, Loss: 0.0156
Epoch   7 Batch 2000/2154 - Train Accuracy: 0.9828, Validation Accuracy: 0.9844, Loss: 0.0280
Epoch   7 Batch 2050/2154 - Train Accuracy: 0.9671, Validation Accuracy: 0.9844, Loss: 0.0289
Epoch   7 Batch 2100/2154 - Train Accuracy: 0.9938, Validation Accuracy: 0.9702, Loss: 0.0195
Epoch   7 Batch 2150/2154 - Train Accuracy: 0.9754, Validation Accuracy: 0.9943, Loss: 0.0162
 80%|████████  | 8/10 [1:06:33<16:31, 495.83s/it]
Epoch   8 Batch   50/2154 - Train Accuracy: 0.9679, Validation Accuracy: 0.9844, Loss: 0.0350
Epoch   8 Batch  100/2154 - Train Accuracy: 0.9561, Validation Accuracy: 0.9844, Loss: 0.0374
Epoch   8 Batch  150/2154 - Train Accuracy: 0.9821, Validation Accuracy: 0.9744, Loss: 0.0176
Epoch   8 Batch  200/2154 - Train Accuracy: 0.9742, Validation Accuracy: 0.9837, Loss: 0.0103
Epoch   8 Batch  250/2154 - Train Accuracy: 0.9711, Validation Accuracy: 0.9822, Loss: 0.0233
Epoch   8 Batch  300/2154 - Train Accuracy: 0.9926, Validation Accuracy: 0.9837, Loss: 0.0217
Epoch   8 Batch  350/2154 - Train Accuracy: 0.9641, Validation Accuracy: 0.9794, Loss: 0.0156
Epoch   8 Batch  400/2154 - Train Accuracy: 0.9719, Validation Accuracy: 0.9830, Loss: 0.0116
Epoch   8 Batch  450/2154 - Train Accuracy: 0.9797, Validation Accuracy: 0.9879, Loss: 0.0125
Epoch   8 Batch  500/2154 - Train Accuracy: 0.9819, Validation Accuracy: 0.9837, Loss: 0.0144
Epoch   8 Batch  550/2154 - Train Accuracy: 0.9673, Validation Accuracy: 0.9893, Loss: 0.0194
Epoch   8 Batch  600/2154 - Train Accuracy: 0.9745, Validation Accuracy: 0.9751, Loss: 0.0162
Epoch   8 Batch  650/2154 - Train Accuracy: 0.9641, Validation Accuracy: 0.9759, Loss: 0.0155
Epoch   8 Batch  700/2154 - Train Accuracy: 0.9753, Validation Accuracy: 0.9801, Loss: 0.0166
Epoch   8 Batch  750/2154 - Train Accuracy: 0.9664, Validation Accuracy: 0.9936, Loss: 0.0215
Epoch   8 Batch  800/2154 - Train Accuracy: 0.9852, Validation Accuracy: 0.9822, Loss: 0.0178
Epoch   8 Batch  850/2154 - Train Accuracy: 0.9625, Validation Accuracy: 0.9759, Loss: 0.0160
Epoch   8 Batch  900/2154 - Train Accuracy: 0.9827, Validation Accuracy: 0.9886, Loss: 0.0247
Epoch   8 Batch  950/2154 - Train Accuracy: 0.9926, Validation Accuracy: 0.9787, Loss: 0.0155
Epoch   8 Batch 1000/2154 - Train Accuracy: 0.9792, Validation Accuracy: 0.9943, Loss: 0.0105
Epoch   8 Batch 1050/2154 - Train Accuracy: 0.9422, Validation Accuracy: 0.9858, Loss: 0.0331
Epoch   8 Batch 1100/2154 - Train Accuracy: 0.9695, Validation Accuracy: 0.9787, Loss: 0.0179
Epoch   8 Batch 1150/2154 - Train Accuracy: 0.9589, Validation Accuracy: 0.9837, Loss: 0.0466
Epoch   8 Batch 1200/2154 - Train Accuracy: 0.9523, Validation Accuracy: 0.9844, Loss: 0.0227
Epoch   8 Batch 1250/2154 - Train Accuracy: 0.9672, Validation Accuracy: 0.9936, Loss: 0.0124
Epoch   8 Batch 1300/2154 - Train Accuracy: 0.9992, Validation Accuracy: 0.9879, Loss: 0.0160
Epoch   8 Batch 1350/2154 - Train Accuracy: 0.9820, Validation Accuracy: 0.9787, Loss: 0.0289
Epoch   8 Batch 1400/2154 - Train Accuracy: 0.9942, Validation Accuracy: 0.9886, Loss: 0.0134
Epoch   8 Batch 1450/2154 - Train Accuracy: 0.9877, Validation Accuracy: 0.9851, Loss: 0.0154
Epoch   8 Batch 1500/2154 - Train Accuracy: 0.9992, Validation Accuracy: 0.9943, Loss: 0.0072
Epoch   8 Batch 1550/2154 - Train Accuracy: 0.9953, Validation Accuracy: 0.9851, Loss: 0.0157
Epoch   8 Batch 1600/2154 - Train Accuracy: 0.9883, Validation Accuracy: 0.9943, Loss: 0.0184
Epoch   8 Batch 1650/2154 - Train Accuracy: 0.9769, Validation Accuracy: 0.9844, Loss: 0.0128
Epoch   8 Batch 1700/2154 - Train Accuracy: 0.9852, Validation Accuracy: 0.9943, Loss: 0.0103
Epoch   8 Batch 1750/2154 - Train Accuracy: 0.9391, Validation Accuracy: 0.9837, Loss: 0.0233
Epoch   8 Batch 1800/2154 - Train Accuracy: 0.9750, Validation Accuracy: 0.9801, Loss: 0.0167
Epoch   8 Batch 1850/2154 - Train Accuracy: 0.9762, Validation Accuracy: 0.9943, Loss: 0.0288
Epoch   8 Batch 1900/2154 - Train Accuracy: 0.9867, Validation Accuracy: 0.9751, Loss: 0.0124
Epoch   8 Batch 1950/2154 - Train Accuracy: 0.9930, Validation Accuracy: 0.9844, Loss: 0.0054
Epoch   8 Batch 2000/2154 - Train Accuracy: 0.9984, Validation Accuracy: 0.9886, Loss: 0.0151
Epoch   8 Batch 2050/2154 - Train Accuracy: 0.9564, Validation Accuracy: 0.9751, Loss: 0.0229
Epoch   8 Batch 2100/2154 - Train Accuracy: 0.9992, Validation Accuracy: 0.9780, Loss: 0.0105
Epoch   8 Batch 2150/2154 - Train Accuracy: 0.9829, Validation Accuracy: 0.9801, Loss: 0.0198
 90%|█████████ | 9/10 [1:14:45<08:14, 494.67s/it]
Epoch   9 Batch   50/2154 - Train Accuracy: 0.9778, Validation Accuracy: 0.9893, Loss: 0.0194
Epoch   9 Batch  100/2154 - Train Accuracy: 0.9546, Validation Accuracy: 0.9716, Loss: 0.0376
Epoch   9 Batch  150/2154 - Train Accuracy: 0.9702, Validation Accuracy: 0.9616, Loss: 0.0119
Epoch   9 Batch  200/2154 - Train Accuracy: 0.9875, Validation Accuracy: 0.9801, Loss: 0.0071
Epoch   9 Batch  250/2154 - Train Accuracy: 0.9695, Validation Accuracy: 0.9950, Loss: 0.0128
Epoch   9 Batch  300/2154 - Train Accuracy: 0.9967, Validation Accuracy: 0.9929, Loss: 0.0170
Epoch   9 Batch  350/2154 - Train Accuracy: 0.9867, Validation Accuracy: 0.9744, Loss: 0.0110
Epoch   9 Batch  400/2154 - Train Accuracy: 0.9641, Validation Accuracy: 0.9936, Loss: 0.0141
Epoch   9 Batch  450/2154 - Train Accuracy: 0.9922, Validation Accuracy: 0.9808, Loss: 0.0060
Epoch   9 Batch  500/2154 - Train Accuracy: 0.9836, Validation Accuracy: 0.9893, Loss: 0.0131
Epoch   9 Batch  550/2154 - Train Accuracy: 0.9762, Validation Accuracy: 0.9815, Loss: 0.0114
Epoch   9 Batch  600/2154 - Train Accuracy: 0.9663, Validation Accuracy: 0.9822, Loss: 0.0109
Epoch   9 Batch  650/2154 - Train Accuracy: 0.9641, Validation Accuracy: 0.9886, Loss: 0.0142
Epoch   9 Batch  700/2154 - Train Accuracy: 0.9753, Validation Accuracy: 0.9936, Loss: 0.0144
Epoch   9 Batch  750/2154 - Train Accuracy: 0.9820, Validation Accuracy: 0.9950, Loss: 0.0090
Epoch   9 Batch  800/2154 - Train Accuracy: 0.9868, Validation Accuracy: 0.9886, Loss: 0.0150
Epoch   9 Batch  850/2154 - Train Accuracy: 0.9734, Validation Accuracy: 0.9851, Loss: 0.0137
Epoch   9 Batch  900/2154 - Train Accuracy: 0.9836, Validation Accuracy: 0.9744, Loss: 0.0143
Epoch   9 Batch  950/2154 - Train Accuracy: 0.9984, Validation Accuracy: 0.9680, Loss: 0.0117
Epoch   9 Batch 1000/2154 - Train Accuracy: 0.9695, Validation Accuracy: 0.9943, Loss: 0.0111
Epoch   9 Batch 1050/2154 - Train Accuracy: 0.9672, Validation Accuracy: 0.9943, Loss: 0.0286
Epoch   9 Batch 1100/2154 - Train Accuracy: 1.0000, Validation Accuracy: 0.9893, Loss: 0.0129
Epoch   9 Batch 1150/2154 - Train Accuracy: 0.9844, Validation Accuracy: 0.9844, Loss: 0.0148
Epoch   9 Batch 1200/2154 - Train Accuracy: 0.9794, Validation Accuracy: 1.0000, Loss: 0.0171
Epoch   9 Batch 1250/2154 - Train Accuracy: 0.9883, Validation Accuracy: 0.9943, Loss: 0.0103
Epoch   9 Batch 1300/2154 - Train Accuracy: 0.9914, Validation Accuracy: 0.9879, Loss: 0.0085
Epoch   9 Batch 1350/2154 - Train Accuracy: 0.9898, Validation Accuracy: 0.9879, Loss: 0.0159
Epoch   9 Batch 1400/2154 - Train Accuracy: 1.0000, Validation Accuracy: 0.9886, Loss: 0.0086
Epoch   9 Batch 1450/2154 - Train Accuracy: 0.9836, Validation Accuracy: 0.9893, Loss: 0.0108
Epoch   9 Batch 1500/2154 - Train Accuracy: 0.9992, Validation Accuracy: 0.9936, Loss: 0.0064
Epoch   9 Batch 1550/2154 - Train Accuracy: 0.9977, Validation Accuracy: 0.9808, Loss: 0.0119
Epoch   9 Batch 1600/2154 - Train Accuracy: 0.9867, Validation Accuracy: 0.9893, Loss: 0.0138
Epoch   9 Batch 1650/2154 - Train Accuracy: 0.9792, Validation Accuracy: 0.9943, Loss: 0.0151
Epoch   9 Batch 1700/2154 - Train Accuracy: 0.9742, Validation Accuracy: 0.9936, Loss: 0.0165
Epoch   9 Batch 1750/2154 - Train Accuracy: 0.9508, Validation Accuracy: 0.9851, Loss: 0.0260
Epoch   9 Batch 1800/2154 - Train Accuracy: 0.9750, Validation Accuracy: 0.9844, Loss: 0.0152
Epoch   9 Batch 1850/2154 - Train Accuracy: 0.9762, Validation Accuracy: 0.9943, Loss: 0.0209
Epoch   9 Batch 1900/2154 - Train Accuracy: 0.9797, Validation Accuracy: 0.9886, Loss: 0.0172
Epoch   9 Batch 1950/2154 - Train Accuracy: 1.0000, Validation Accuracy: 0.9993, Loss: 0.0072
Epoch   9 Batch 2000/2154 - Train Accuracy: 0.9828, Validation Accuracy: 0.9844, Loss: 0.0262
Epoch   9 Batch 2050/2154 - Train Accuracy: 0.9786, Validation Accuracy: 0.9844, Loss: 0.0158
Epoch   9 Batch 2100/2154 - Train Accuracy: 1.0000, Validation Accuracy: 0.9844, Loss: 0.0090
Epoch   9 Batch 2150/2154 - Train Accuracy: 0.9829, Validation Accuracy: 0.9794, Loss: 0.0150
100%|██████████| 10/10 [1:22:56<00:00, 493.78s/it]
Model Trained and Saved

Hyperparameter selections and results

embed_input  embed_output  batch_size  rnn_size  keep_prob  lr      layers  epochs  training_time  train_acc  val_acc  loss
30           30            128         50        0.50       0.0010  2       10      ?              0.6684     0.7248   0.3571
15           15            256         100       0.50       0.0010  2       10      09:10          0.7305     0.7228   0.3687
15           15            512         100       0.75       0.0010  2       10      04:56          0.6604     0.6671   0.4456
15           15            512         100       0.75       0.0005  2       10      04:53          0.6127     0.6345   0.6592
15           15            512         100       0.75       0.0005  2       30      14:50          0.8104     0.7900   0.2596
15           15            256         100       0.75       0.0010  2       20      18:02          0.8766     0.8699   0.1053
15           15            256         150       0.75       0.0010  2       20      18:27          0.9640     0.9350   0.0368
64           64            64          300       0.75       0.0005  4       10      1:22:56        0.9829     0.9794   0.0150

Test sentence

Input Word Ids:      [177, 128, 65, 66, 93, 46, 178]
English Words:       ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']

Prediction Word Ids: [119, 301, 7, 121, 31, 300, 69, 1]
French Words:        il a vu un camion jaune . <EOS>

Save Parameters

Save the batch_size and save_path parameters for inference.


In [18]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)

Checkpoint


In [19]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

Sentence to Sequence

To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

  • Convert the sentence to lowercase
  • Convert words into ids using vocab_to_int
    • Convert words not in the vocabulary to the <UNK> word id.

In [20]:
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    # Lowercase the sentence, then map each word to its id,
    # falling back to the <UNK> id for out-of-vocabulary words
    sentence = sentence.lower()
    seq = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()]
    
    return seq


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)


Tests Passed
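
As a quick sanity check, here's what the function does on a toy vocabulary. The toy_vocab_to_int dictionary and its ids below are made up purely for illustration; the real source_vocab_to_int mapping comes from the preprocessing step.

# Hypothetical toy vocabulary, for illustration only
toy_vocab_to_int = {'<UNK>': 2, 'he': 5, 'saw': 6, 'a': 7, 'truck': 8, '.': 9}

# 'PURPLE' is lowercased and is not in the vocabulary, so it falls back to the <UNK> id (2)
print(sentence_to_seq('He saw a PURPLE truck .', toy_vocab_to_int))
# [5, 6, 7, 2, 8, 9]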

Translate

This will translate translate_sentence from English to French.


In [21]:
translate_sentence = 'he saw a old yellow truck .'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,
                                         source_sequence_length: [len(translate_sentence)]*batch_size,
                                         keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in translate_logits]))
print('  French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))


INFO:tensorflow:Restoring parameters from checkpoints/dev
Input
  Word Ids:      [177, 128, 65, 66, 93, 46, 178]
  English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']

Prediction
  Word Ids:      [119, 301, 7, 121, 31, 300, 69, 1]
  French Words: il a vu un camion jaune . <EOS>
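
If you'd like to translate several sentences without re-running the cell above for each one, a small wrapper can restore the graph once and loop over the inputs. The following is a minimal sketch, assuming the same tensor names ('input:0', 'predictions:0', 'target_sequence_length:0', 'source_sequence_length:0', 'keep_prob:0') and the batch_size, load_path, source_vocab_to_int, and target_int_to_vocab values already defined in this notebook; the translate_many name itself is hypothetical.

def translate_many(sentences, load_path, batch_size):
    """Minimal sketch: restore the saved graph once and translate a list of English sentences."""
    results = []
    loaded_graph = tf.Graph()
    with tf.Session(graph=loaded_graph) as sess:
        # Load the saved model once
        loader = tf.train.import_meta_graph(load_path + '.meta')
        loader.restore(sess, load_path)

        input_data = loaded_graph.get_tensor_by_name('input:0')
        logits = loaded_graph.get_tensor_by_name('predictions:0')
        target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
        source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
        keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

        for sentence in sentences:
            seq = sentence_to_seq(sentence, source_vocab_to_int)
            # Same feed pattern as the cell above: repeat the sentence to fill a batch
            # and give the decoder twice the source length to generate into
            translate_logits = sess.run(logits, {input_data: [seq] * batch_size,
                                                 target_sequence_length: [len(seq) * 2] * batch_size,
                                                 source_sequence_length: [len(seq)] * batch_size,
                                                 keep_prob: 1.0})[0]
            results.append(' '.join(target_int_to_vocab[i] for i in translate_logits))
    return results

For example, translate_many(['he saw a old yellow truck .', 'paris is never cold during autumn .'], load_path, batch_size) would return the corresponding French strings as a list.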

Imperfect Translation

You might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words out of the thousands used in everyday English, you'll only see good results on sentences built from those words. A perfect translation isn't required for this project; however, if you want to build a better translation model, you'll need better data.

You can train on the WMT10 French-English corpus. That dataset has a larger vocabulary and covers a much richer range of topics. However, it will take days to train, so make sure you have a GPU and that the neural network performs well on the dataset we provided first. Just be sure to experiment with the WMT10 corpus after you've submitted this project.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.


In [ ]: