Language Translation

In this project, you're going to take a peek into the realm of neural machine translation. You'll train a sequence-to-sequence model on a dataset of English and French sentences so that it can translate new sentences from English to French.

Get the Data

Since training on the full English-to-French corpus would take a very long time, we have provided you with a small portion of the English corpus.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [2]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing:

target_vocab_to_int['<EOS>']

You can get other word ids using source_vocab_to_int and target_vocab_to_int.
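
Once you've implemented the function in the cell below, a quick sanity check with toy vocabularies (the ids here are made up purely for illustration) might look like this:

In [ ]:
# Hypothetical toy vocabularies, for illustration only
toy_source_vocab = {'hello': 0, 'world': 1}
toy_target_vocab = {'<EOS>': 2, 'bonjour': 3, 'monde': 4}
# Expected: ([[0, 1]], [[3, 4, 2]]), with the trailing <EOS> id on the target
print(text_to_ids('hello world', 'bonjour monde', toy_source_vocab, toy_target_vocab))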


In [3]:
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    source_sentences = source_text.split("\n")
    target_sentences = target_text.split("\n")
    source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]
    target_id_text = [[target_vocab_to_int[word] for word in (sentence + ' <EOS>').split()] for sentence in target_sentences]


    return source_id_text, target_id_text

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)


Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:

  • model_inputs
  • process_decoder_input
  • encoding_layer
  • decoding_layer_train
  • decoding_layer_infer
  • decoding_layer
  • seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
  • Targets placeholder with rank 2.
  • Learning rate placeholder with rank 0.
  • Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
  • Target sequence length placeholder named "target_sequence_length" with rank 1.
  • Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
  • Source sequence length placeholder named "source_sequence_length" with rank 1.

Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length).
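
For reference, "rank" here is just the number of tensor dimensions: rank 2 is a batch of id sequences, rank 1 is one length per sequence, and rank 0 is a scalar. For example (shapes only, names omitted):

tf.placeholder(tf.int32, [None, None])  # rank 2: [batch, time]
tf.placeholder(tf.int32, [None])        # rank 1: [batch]
tf.placeholder(tf.float32)              # rank 0: scalar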


In [7]:
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """
    input_placeholder = tf.placeholder(tf.int32, [None, None], name="input")
    targets_placeholder = tf.placeholder(tf.int32, [None, None], name="targets")
    learning_rate_placeholder = tf.placeholder(tf.float32, name="learning_rate")
    keep_prob_placeholder = tf.placeholder(tf.float32, name="keep_prob")
    target_sequence_length_placeholder = tf.placeholder(tf.int32, [None], name="target_sequence_length")
    max_target_sequence_length = tf.reduce_max(target_sequence_length_placeholder, name="max_target_len")
    source_sequence_length_placeholder = tf.placeholder(tf.int32, [None], name="source_sequence_length")
    
    return input_placeholder, targets_placeholder, learning_rate_placeholder, keep_prob_placeholder, target_sequence_length_placeholder, max_target_sequence_length, source_sequence_length_placeholder


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed

Process Decoder Input

Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
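
Concretely, with a toy batch and an assumed <GO> id of 1, the transformation looks like this (a standalone sketch using the same ops you'll need):

In [ ]:
# Toy target batch of ids, made up for illustration; assume <GO> has id 1
toy_targets = tf.constant([[10, 11, 12], [13, 14, 15]])
sliced = tf.strided_slice(toy_targets, [0, 0], [2, -1], [1, 1])  # drop each row's last id
toy_dec_input = tf.concat([tf.fill([2, 1], 1), sliced], 1)       # prepend the <GO> id
with tf.Session() as sess:
    print(sess.run(toy_dec_input))  # [[ 1 10 11] [ 1 13 14]]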


In [8]:
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    go_id = target_vocab_to_int["<GO>"]
    # Drop the last word id from every sequence in the batch
    init_list = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    # Prepend the <GO> id to every sequence
    preprocessed = tf.concat([tf.fill([batch_size, 1], go_id), init_list], 1)

    return preprocessed

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)


Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer:

  • Embed the encoder input using tf.contrib.layers.embed_sequence (see the shape sketch below)
  • Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
  • Pass the cell and embedded input to tf.nn.dynamic_rnn()
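
tf.contrib.layers.embed_sequence creates the embedding matrix and performs the lookup in one call; the shape contract (toy sizes, purely illustrative) is:

ids = tf.placeholder(tf.int32, [None, None])                      # [batch, time]
embedded = tf.contrib.layers.embed_sequence(ids, vocab_size=100, embed_dim=16)
# embedded has shape [batch, time, 16]: one 16-d vector per word id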


In [9]:
from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, 
                   source_sequence_length, source_vocab_size, 
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    # Embed the input word ids, then run them through a stacked, dropout-wrapped LSTM
    encoder_embedded_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
    lstm_cells = [tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)]
    lstm_cells = [tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) for cell in lstm_cells]
    encoder_stacked_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
    encoding_rnn_output, encoding_rnn_state = tf.nn.dynamic_rnn(
        encoder_stacked_cell, encoder_embedded_input,
        sequence_length=source_sequence_length, dtype=tf.float32)
    return encoding_rnn_output, encoding_rnn_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)


Tests Passed

Decoding - Training

Create a training decoding layer:

  • Create a tf.contrib.seq2seq.TrainingHelper, which feeds the ground-truth target embeddings to the decoder at each time step
  • Create a tf.contrib.seq2seq.BasicDecoder
  • Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode


In [10]:
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, 
                         target_sequence_length, max_summary_length, 
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    # Feed the ground-truth target embeddings to the decoder at each time step
    training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, training_helper, encoder_state, output_layer=output_layer
    )
    (training_decoder_outputs, _) = tf.contrib.seq2seq.dynamic_decode(
        decoder, impute_finished=True, maximum_iterations=max_summary_length
    )
    training_rnn_output = training_decoder_outputs.rnn_output
    sample_id = training_decoder_outputs.sample_id
    # Apply dropout to the decoder logits during training
    training_rnn_output_drop = tf.nn.dropout(training_rnn_output, keep_prob)

    return tf.contrib.seq2seq.BasicDecoderOutput(training_rnn_output_drop, sample_id)




"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)


Tests Passed

Decoding - Inference

Create an inference decoder:

  • Create a tf.contrib.seq2seq.GreedyEmbeddingHelper, which feeds back the embedding of the decoder's own previous prediction
  • Create a tf.contrib.seq2seq.BasicDecoder
  • Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode


In [11]:
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """

    # One start-of-sequence token per row in the batch
    start_tokens = tf.tile(
        tf.constant([start_of_sequence_id], dtype=tf.int32),
        [batch_size],
        name='start_tokens'
    )
    
    # At inference time, greedily feed back the embedding of the previous prediction
    embedding_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
        dec_embeddings, start_tokens, end_of_sequence_id
    )

    decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, embedding_helper, encoder_state, output_layer=output_layer
    )
    
    (inference_decoder_outputs, _) = tf.contrib.seq2seq.dynamic_decode(
        decoder, impute_finished=True, maximum_iterations=max_target_sequence_length
    )
    inference_rnn_output = inference_decoder_outputs.rnn_output
    sample_id = inference_decoder_outputs.sample_id
    inference_rnn_output_drop = tf.nn.dropout(inference_rnn_output, keep_prob)

    return tf.contrib.seq2seq.BasicDecoderOutput(inference_rnn_output_drop, sample_id)



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)


Tests Passed

Build the Decoding Layer

Implement decoding_layer() to create a Decoder RNN layer.

  • Embed the target sequences
  • Construct the decoder LSTM cell (just like you constructed the encoder cell above)
  • Create an output layer to map the outputs of the decoder to the elements of our vocabulary
  • Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
  • Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference, so the inference decoder reuses the weights learned during training.
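
The reuse pattern itself is small; here is a standalone sketch with a toy variable (not part of the model) showing that reuse=True fetches existing weights instead of creating new ones:

with tf.variable_scope("decode"):
    w = tf.get_variable("w", [2, 2])        # created here
with tf.variable_scope("decode", reuse=True):
    w_again = tf.get_variable("w", [2, 2])  # fetched, not recreated
print(w is w_again)  # True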


In [12]:
def decoding_layer(dec_input, encoder_state,
                   target_sequence_length, max_target_sequence_length,
                   rnn_size,
                   num_layers, target_vocab_to_int, target_vocab_size,
                   batch_size, keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :param decoding_embedding_size: Decoding embedding size
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # Target-side embedding matrix and lookup for the decoder input
    decoding_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    decoding_embed_input = tf.nn.embedding_lookup(decoding_embeddings, dec_input)
    
    lstm_cells = [tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)]
    lstm_cells = [tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) for cell in lstm_cells]
    decoder_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
    
    # Dense layer projecting decoder outputs onto the target vocabulary
    output_layer = Dense(
        target_vocab_size,
        kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)
    )

    with tf.variable_scope("decode"):
        training_logits = decoding_layer_train(
            encoder_state, decoder_cell, decoding_embed_input, 
            target_sequence_length, max_target_sequence_length, 
            output_layer, keep_prob
        )
        
    with tf.variable_scope("decode", reuse=True):
        inference_logits = decoding_layer_infer(
            encoder_state, decoder_cell, decoding_embeddings, 
            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, 
            target_vocab_size, output_layer, batch_size, keep_prob
        )
        
    return training_logits, inference_logits

    



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
  • Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
  • Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.

In [13]:
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sentence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    (_, encoder_state) = encoding_layer(
        input_data, rnn_size, num_layers,
        keep_prob, source_sequence_length, source_vocab_size,
        enc_embedding_size
    )

    decoder_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
    training_decoder_output, inference_decoder_output = decoding_layer(
        decoder_input, encoder_state, target_sequence_length,
        max_target_sentence_length, rnn_size, num_layers,
        target_vocab_to_int, target_vocab_size, batch_size, 
        keep_prob, dec_embedding_size
    )
    return training_decoder_output, inference_decoder_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)


Tests Passed

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set num_layers to the number of layers.
  • Set encoding_embedding_size to the size of the embedding for the encoder.
  • Set decoding_embedding_size to the size of the embedding for the decoder.
  • Set learning_rate to the learning rate.
  • Set keep_probability to the Dropout keep probability.
  • Set display_step to the number of batches between debug output statements.

In [48]:
# Number of Epochs
epochs = 25
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.6
display_step = 10

Build the Graph

Build the graph using the neural network you implemented.


In [49]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])

train_graph = tf.Graph()
with train_graph.as_default():
    input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()

    #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)

    train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
                                                   targets,
                                                   keep_prob,
                                                   batch_size,
                                                   source_sequence_length,
                                                   target_sequence_length,
                                                   max_target_sequence_length,
                                                   len(source_vocab_to_int),
                                                   len(target_vocab_to_int),
                                                   encoding_embedding_size,
                                                   decoding_embedding_size,
                                                   rnn_size,
                                                   num_layers,
                                                   target_vocab_to_int)


    training_logits = tf.identity(train_logits.rnn_output, name='logits')
    inference_logits = tf.identity(inference_logits.sample_id, name='predictions')

    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            training_logits,
            targets,
            masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

Batch and pad the source and target sequences


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]


def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size

        # Slice the right amount for the batch
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]

        # Pad
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))

        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))

        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))

        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
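
A quick way to see what the padding does (toy ids, assuming a <PAD> id of 0):

In [ ]:
print(pad_sentence_batch([[5, 6, 7], [8]], 0))
# [[5, 6, 7], [8, 0, 0]]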

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1])],
            'constant')

    return np.mean(np.equal(target, logits))

# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
                                                                                                             valid_target,
                                                                                                             batch_size,
                                                                                                             source_vocab_to_int['<PAD>'],
                                                                                                             target_vocab_to_int['<PAD>']))                                                                                                  
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
                get_batches(train_source, train_target, batch_size,
                            source_vocab_to_int['<PAD>'],
                            target_vocab_to_int['<PAD>'])):

            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})


            if batch_i % display_step == 0 and batch_i > 0:


                batch_train_logits = sess.run(
                    inference_logits,
                    {input_data: source_batch,
                     source_sequence_length: sources_lengths,
                     target_sequence_length: targets_lengths,
                     keep_prob: 1.0})


                batch_valid_logits = sess.run(
                    inference_logits,
                    {input_data: valid_sources_batch,
                     source_sequence_length: valid_sources_lengths,
                     target_sequence_length: valid_targets_lengths,
                     keep_prob: 1.0})

                train_acc = get_accuracy(target_batch, batch_train_logits)

                valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)

                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')


Epoch   0 Batch   10/269 - Train Accuracy: 0.2966, Validation Accuracy: 0.3638, Loss: 4.5097
Epoch   0 Batch   20/269 - Train Accuracy: 0.3695, Validation Accuracy: 0.4249, Loss: 4.1080
Epoch   0 Batch   30/269 - Train Accuracy: 0.3219, Validation Accuracy: 0.3536, Loss: 3.8329
Epoch   0 Batch   40/269 - Train Accuracy: 0.4230, Validation Accuracy: 0.4719, Loss: 3.7811
Epoch   0 Batch   50/269 - Train Accuracy: 0.4483, Validation Accuracy: 0.5009, Loss: 3.6247
Epoch   0 Batch   60/269 - Train Accuracy: 0.5023, Validation Accuracy: 0.5114, Loss: 3.4271
Epoch   0 Batch   70/269 - Train Accuracy: 0.5121, Validation Accuracy: 0.5218, Loss: 3.4070
Epoch   0 Batch   80/269 - Train Accuracy: 0.4661, Validation Accuracy: 0.4838, Loss: 3.3466
Epoch   0 Batch   90/269 - Train Accuracy: 0.3901, Validation Accuracy: 0.4665, Loss: 3.4063
Epoch   0 Batch  100/269 - Train Accuracy: 0.4916, Validation Accuracy: 0.4993, Loss: 3.1930
Epoch   0 Batch  110/269 - Train Accuracy: 0.4916, Validation Accuracy: 0.5171, Loss: 3.2106
Epoch   0 Batch  120/269 - Train Accuracy: 0.4171, Validation Accuracy: 0.4786, Loss: 3.2197
Epoch   0 Batch  130/269 - Train Accuracy: 0.4719, Validation Accuracy: 0.5185, Loss: 3.1357
Epoch   0 Batch  140/269 - Train Accuracy: 0.5326, Validation Accuracy: 0.5478, Loss: 3.0107
Epoch   0 Batch  150/269 - Train Accuracy: 0.5345, Validation Accuracy: 0.5568, Loss: 3.0497
Epoch   0 Batch  160/269 - Train Accuracy: 0.4918, Validation Accuracy: 0.5088, Loss: 2.9505
Epoch   0 Batch  170/269 - Train Accuracy: 0.5459, Validation Accuracy: 0.5615, Loss: 2.8902
Epoch   0 Batch  180/269 - Train Accuracy: 0.5509, Validation Accuracy: 0.5623, Loss: 2.8384
Epoch   0 Batch  190/269 - Train Accuracy: 0.5576, Validation Accuracy: 0.5757, Loss: 2.8118
Epoch   0 Batch  200/269 - Train Accuracy: 0.5584, Validation Accuracy: 0.5759, Loss: 2.8356
Epoch   0 Batch  210/269 - Train Accuracy: 0.5414, Validation Accuracy: 0.5475, Loss: 2.7796
Epoch   0 Batch  220/269 - Train Accuracy: 0.5745, Validation Accuracy: 0.5791, Loss: 2.7169
Epoch   0 Batch  230/269 - Train Accuracy: 0.5720, Validation Accuracy: 0.5845, Loss: 2.7332
Epoch   0 Batch  240/269 - Train Accuracy: 0.6175, Validation Accuracy: 0.5913, Loss: 2.6683
Epoch   0 Batch  250/269 - Train Accuracy: 0.5580, Validation Accuracy: 0.5861, Loss: 2.6817
Epoch   0 Batch  260/269 - Train Accuracy: 0.5613, Validation Accuracy: 0.5819, Loss: 2.6875
Epoch   1 Batch   10/269 - Train Accuracy: 0.5726, Validation Accuracy: 0.5933, Loss: 2.6571
Epoch   1 Batch   20/269 - Train Accuracy: 0.5676, Validation Accuracy: 0.5872, Loss: 2.6379
Epoch   1 Batch   30/269 - Train Accuracy: 0.5911, Validation Accuracy: 0.5922, Loss: 2.6333
Epoch   1 Batch   40/269 - Train Accuracy: 0.5842, Validation Accuracy: 0.6060, Loss: 2.6444
Epoch   1 Batch   50/269 - Train Accuracy: 0.5787, Validation Accuracy: 0.6062, Loss: 2.6104
Epoch   1 Batch   60/269 - Train Accuracy: 0.6087, Validation Accuracy: 0.6031, Loss: 2.5239
Epoch   1 Batch   70/269 - Train Accuracy: 0.6153, Validation Accuracy: 0.6056, Loss: 2.5390
Epoch   1 Batch   80/269 - Train Accuracy: 0.6242, Validation Accuracy: 0.6226, Loss: 2.5571
Epoch   1 Batch   90/269 - Train Accuracy: 0.5761, Validation Accuracy: 0.6118, Loss: 2.5594
Epoch   1 Batch  100/269 - Train Accuracy: 0.6575, Validation Accuracy: 0.6309, Loss: 2.4917
Epoch   1 Batch  110/269 - Train Accuracy: 0.6057, Validation Accuracy: 0.6251, Loss: 2.4824
Epoch   1 Batch  120/269 - Train Accuracy: 0.6177, Validation Accuracy: 0.6351, Loss: 2.5119
Epoch   1 Batch  130/269 - Train Accuracy: 0.6098, Validation Accuracy: 0.6461, Loss: 2.5095
Epoch   1 Batch  140/269 - Train Accuracy: 0.6485, Validation Accuracy: 0.6388, Loss: 2.5058
Epoch   1 Batch  150/269 - Train Accuracy: 0.6401, Validation Accuracy: 0.6511, Loss: 2.4427
Epoch   1 Batch  160/269 - Train Accuracy: 0.6441, Validation Accuracy: 0.6492, Loss: 2.4719
Epoch   1 Batch  170/269 - Train Accuracy: 0.6596, Validation Accuracy: 0.6548, Loss: 2.4519
Epoch   1 Batch  180/269 - Train Accuracy: 0.6437, Validation Accuracy: 0.6514, Loss: 2.4582
Epoch   1 Batch  190/269 - Train Accuracy: 0.6647, Validation Accuracy: 0.6594, Loss: 2.4165
Epoch   1 Batch  200/269 - Train Accuracy: 0.6557, Validation Accuracy: 0.6650, Loss: 2.4339
Epoch   1 Batch  210/269 - Train Accuracy: 0.6753, Validation Accuracy: 0.6732, Loss: 2.4146
Epoch   1 Batch  220/269 - Train Accuracy: 0.6752, Validation Accuracy: 0.6715, Loss: 2.3792
Epoch   1 Batch  230/269 - Train Accuracy: 0.6700, Validation Accuracy: 0.6816, Loss: 2.3888
Epoch   1 Batch  240/269 - Train Accuracy: 0.7037, Validation Accuracy: 0.6810, Loss: 2.3370
Epoch   1 Batch  250/269 - Train Accuracy: 0.6755, Validation Accuracy: 0.6935, Loss: 2.3599
Epoch   1 Batch  260/269 - Train Accuracy: 0.6729, Validation Accuracy: 0.7006, Loss: 2.4035
Epoch   2 Batch   10/269 - Train Accuracy: 0.6900, Validation Accuracy: 0.7028, Loss: 2.3466
Epoch   2 Batch   20/269 - Train Accuracy: 0.7105, Validation Accuracy: 0.7140, Loss: 2.3684
Epoch   2 Batch   30/269 - Train Accuracy: 0.7117, Validation Accuracy: 0.7081, Loss: 2.3845
Epoch   2 Batch   40/269 - Train Accuracy: 0.7183, Validation Accuracy: 0.7207, Loss: 2.3575
Epoch   2 Batch   50/269 - Train Accuracy: 0.7045, Validation Accuracy: 0.7360, Loss: 2.3610
Epoch   2 Batch   60/269 - Train Accuracy: 0.7334, Validation Accuracy: 0.7283, Loss: 2.3586
Epoch   2 Batch   70/269 - Train Accuracy: 0.7449, Validation Accuracy: 0.7301, Loss: 2.3382
Epoch   2 Batch   80/269 - Train Accuracy: 0.7524, Validation Accuracy: 0.7471, Loss: 2.3423
Epoch   2 Batch   90/269 - Train Accuracy: 0.7207, Validation Accuracy: 0.7522, Loss: 2.3566
Epoch   2 Batch  100/269 - Train Accuracy: 0.7647, Validation Accuracy: 0.7485, Loss: 2.3355
Epoch   2 Batch  110/269 - Train Accuracy: 0.7548, Validation Accuracy: 0.7625, Loss: 2.2768
Epoch   2 Batch  120/269 - Train Accuracy: 0.7701, Validation Accuracy: 0.7745, Loss: 2.3000
Epoch   2 Batch  130/269 - Train Accuracy: 0.7721, Validation Accuracy: 0.7544, Loss: 2.2455
Epoch   2 Batch  140/269 - Train Accuracy: 0.7961, Validation Accuracy: 0.7734, Loss: 2.2561
Epoch   2 Batch  150/269 - Train Accuracy: 0.7964, Validation Accuracy: 0.7835, Loss: 2.2635
Epoch   2 Batch  160/269 - Train Accuracy: 0.7964, Validation Accuracy: 0.7864, Loss: 2.2377
Epoch   2 Batch  170/269 - Train Accuracy: 0.7820, Validation Accuracy: 0.7939, Loss: 2.2647
Epoch   2 Batch  180/269 - Train Accuracy: 0.8155, Validation Accuracy: 0.7913, Loss: 2.2381
Epoch   2 Batch  190/269 - Train Accuracy: 0.8155, Validation Accuracy: 0.8055, Loss: 2.2198
Epoch   2 Batch  200/269 - Train Accuracy: 0.8014, Validation Accuracy: 0.8176, Loss: 2.2379
Epoch   2 Batch  210/269 - Train Accuracy: 0.8301, Validation Accuracy: 0.8241, Loss: 2.2049
Epoch   2 Batch  220/269 - Train Accuracy: 0.8323, Validation Accuracy: 0.8249, Loss: 2.2189
Epoch   2 Batch  230/269 - Train Accuracy: 0.8402, Validation Accuracy: 0.8360, Loss: 2.2255
Epoch   2 Batch  240/269 - Train Accuracy: 0.8590, Validation Accuracy: 0.8451, Loss: 2.1723
Epoch   2 Batch  250/269 - Train Accuracy: 0.8354, Validation Accuracy: 0.8376, Loss: 2.2015
Epoch   2 Batch  260/269 - Train Accuracy: 0.8311, Validation Accuracy: 0.8425, Loss: 2.1967
Epoch   3 Batch   10/269 - Train Accuracy: 0.8729, Validation Accuracy: 0.8596, Loss: 2.1477
Epoch   3 Batch   20/269 - Train Accuracy: 0.8705, Validation Accuracy: 0.8627, Loss: 2.1857
Epoch   3 Batch   30/269 - Train Accuracy: 0.8594, Validation Accuracy: 0.8644, Loss: 2.1630
Epoch   3 Batch   40/269 - Train Accuracy: 0.8585, Validation Accuracy: 0.8708, Loss: 2.1395
Epoch   3 Batch   50/269 - Train Accuracy: 0.8586, Validation Accuracy: 0.8730, Loss: 2.1718
Epoch   3 Batch   60/269 - Train Accuracy: 0.8709, Validation Accuracy: 0.8729, Loss: 2.1160
Epoch   3 Batch   70/269 - Train Accuracy: 0.8744, Validation Accuracy: 0.8708, Loss: 2.1736
Epoch   3 Batch   80/269 - Train Accuracy: 0.8934, Validation Accuracy: 0.8888, Loss: 2.1409
Epoch   3 Batch   90/269 - Train Accuracy: 0.8745, Validation Accuracy: 0.8794, Loss: 2.1545
Epoch   3 Batch  100/269 - Train Accuracy: 0.8921, Validation Accuracy: 0.8864, Loss: 2.0968
Epoch   3 Batch  110/269 - Train Accuracy: 0.8752, Validation Accuracy: 0.8878, Loss: 2.1329
Epoch   3 Batch  120/269 - Train Accuracy: 0.8882, Validation Accuracy: 0.8815, Loss: 2.1657
Epoch   3 Batch  130/269 - Train Accuracy: 0.8863, Validation Accuracy: 0.8723, Loss: 2.1475
Epoch   3 Batch  140/269 - Train Accuracy: 0.9014, Validation Accuracy: 0.8943, Loss: 2.1251
Epoch   3 Batch  150/269 - Train Accuracy: 0.8964, Validation Accuracy: 0.8983, Loss: 2.1306
Epoch   3 Batch  160/269 - Train Accuracy: 0.9060, Validation Accuracy: 0.9053, Loss: 2.1086
Epoch   3 Batch  170/269 - Train Accuracy: 0.8909, Validation Accuracy: 0.8918, Loss: 2.1068
Epoch   3 Batch  180/269 - Train Accuracy: 0.9103, Validation Accuracy: 0.9001, Loss: 2.1260
Epoch   3 Batch  190/269 - Train Accuracy: 0.9023, Validation Accuracy: 0.9098, Loss: 2.0831
Epoch   3 Batch  200/269 - Train Accuracy: 0.9017, Validation Accuracy: 0.8977, Loss: 2.1202
Epoch   3 Batch  210/269 - Train Accuracy: 0.9058, Validation Accuracy: 0.9090, Loss: 2.1017
Epoch   3 Batch  220/269 - Train Accuracy: 0.9142, Validation Accuracy: 0.9048, Loss: 2.0739
Epoch   3 Batch  230/269 - Train Accuracy: 0.9145, Validation Accuracy: 0.9110, Loss: 2.1126
Epoch   3 Batch  240/269 - Train Accuracy: 0.9074, Validation Accuracy: 0.9024, Loss: 2.0836
Epoch   3 Batch  250/269 - Train Accuracy: 0.9114, Validation Accuracy: 0.9023, Loss: 2.1141
Epoch   3 Batch  260/269 - Train Accuracy: 0.9005, Validation Accuracy: 0.8983, Loss: 2.0676
Epoch   4 Batch   10/269 - Train Accuracy: 0.9285, Validation Accuracy: 0.9100, Loss: 2.0954
Epoch   4 Batch   20/269 - Train Accuracy: 0.9226, Validation Accuracy: 0.9117, Loss: 2.0269
Epoch   4 Batch   30/269 - Train Accuracy: 0.9273, Validation Accuracy: 0.9148, Loss: 2.0301
Epoch   4 Batch   40/269 - Train Accuracy: 0.9032, Validation Accuracy: 0.9240, Loss: 2.0781
Epoch   4 Batch   50/269 - Train Accuracy: 0.9023, Validation Accuracy: 0.9170, Loss: 2.0942
Epoch   4 Batch   60/269 - Train Accuracy: 0.9301, Validation Accuracy: 0.9231, Loss: 2.0694
Epoch   4 Batch   70/269 - Train Accuracy: 0.9223, Validation Accuracy: 0.9198, Loss: 2.1041
Epoch   4 Batch   80/269 - Train Accuracy: 0.9264, Validation Accuracy: 0.9180, Loss: 2.1036
Epoch   4 Batch   90/269 - Train Accuracy: 0.9188, Validation Accuracy: 0.9257, Loss: 2.0605
Epoch   4 Batch  100/269 - Train Accuracy: 0.9322, Validation Accuracy: 0.9275, Loss: 2.0645
Epoch   4 Batch  110/269 - Train Accuracy: 0.9319, Validation Accuracy: 0.9228, Loss: 2.0736
Epoch   4 Batch  120/269 - Train Accuracy: 0.9374, Validation Accuracy: 0.9228, Loss: 2.0476
Epoch   4 Batch  130/269 - Train Accuracy: 0.9313, Validation Accuracy: 0.9256, Loss: 2.0668
Epoch   4 Batch  140/269 - Train Accuracy: 0.9241, Validation Accuracy: 0.9324, Loss: 2.0492
Epoch   4 Batch  150/269 - Train Accuracy: 0.9261, Validation Accuracy: 0.9272, Loss: 2.1218
Epoch   4 Batch  160/269 - Train Accuracy: 0.9306, Validation Accuracy: 0.9327, Loss: 2.0656
Epoch   4 Batch  170/269 - Train Accuracy: 0.9264, Validation Accuracy: 0.9244, Loss: 2.0667
Epoch   4 Batch  180/269 - Train Accuracy: 0.9298, Validation Accuracy: 0.9312, Loss: 2.0797
Epoch   4 Batch  190/269 - Train Accuracy: 0.9310, Validation Accuracy: 0.9296, Loss: 2.0588
Epoch   4 Batch  200/269 - Train Accuracy: 0.9395, Validation Accuracy: 0.9367, Loss: 2.0665
Epoch   4 Batch  210/269 - Train Accuracy: 0.9355, Validation Accuracy: 0.9431, Loss: 2.0685
Epoch   4 Batch  220/269 - Train Accuracy: 0.9430, Validation Accuracy: 0.9304, Loss: 2.0515
Epoch   4 Batch  230/269 - Train Accuracy: 0.9406, Validation Accuracy: 0.9363, Loss: 2.0451
Epoch   4 Batch  240/269 - Train Accuracy: 0.9437, Validation Accuracy: 0.9438, Loss: 2.0467
Epoch   4 Batch  250/269 - Train Accuracy: 0.9393, Validation Accuracy: 0.9434, Loss: 2.0159
Epoch   4 Batch  260/269 - Train Accuracy: 0.9364, Validation Accuracy: 0.9355, Loss: 2.0207
Epoch   5 Batch   10/269 - Train Accuracy: 0.9424, Validation Accuracy: 0.9254, Loss: 2.0299
Epoch   5 Batch   20/269 - Train Accuracy: 0.9432, Validation Accuracy: 0.9423, Loss: 2.0115
Epoch   5 Batch   30/269 - Train Accuracy: 0.9400, Validation Accuracy: 0.9342, Loss: 2.0591
Epoch   5 Batch   40/269 - Train Accuracy: 0.9352, Validation Accuracy: 0.9379, Loss: 2.0428
Epoch   5 Batch   50/269 - Train Accuracy: 0.9194, Validation Accuracy: 0.9308, Loss: 2.0665
Epoch   5 Batch   60/269 - Train Accuracy: 0.9390, Validation Accuracy: 0.9498, Loss: 2.0290
Epoch   5 Batch   70/269 - Train Accuracy: 0.9398, Validation Accuracy: 0.9456, Loss: 2.0502
Epoch   5 Batch   80/269 - Train Accuracy: 0.9547, Validation Accuracy: 0.9433, Loss: 2.0386
Epoch   5 Batch   90/269 - Train Accuracy: 0.9529, Validation Accuracy: 0.9474, Loss: 2.0717
Epoch   5 Batch  100/269 - Train Accuracy: 0.9472, Validation Accuracy: 0.9413, Loss: 2.0392
Epoch   5 Batch  110/269 - Train Accuracy: 0.9342, Validation Accuracy: 0.9428, Loss: 2.0365
Epoch   5 Batch  120/269 - Train Accuracy: 0.9462, Validation Accuracy: 0.9447, Loss: 2.0524
Epoch   5 Batch  130/269 - Train Accuracy: 0.9523, Validation Accuracy: 0.9451, Loss: 2.0667
Epoch   5 Batch  140/269 - Train Accuracy: 0.9408, Validation Accuracy: 0.9480, Loss: 2.0096
Epoch   5 Batch  150/269 - Train Accuracy: 0.9450, Validation Accuracy: 0.9463, Loss: 2.0427
Epoch   5 Batch  160/269 - Train Accuracy: 0.9445, Validation Accuracy: 0.9499, Loss: 2.0687
Epoch   5 Batch  170/269 - Train Accuracy: 0.9437, Validation Accuracy: 0.9523, Loss: 2.0679
Epoch   5 Batch  180/269 - Train Accuracy: 0.9536, Validation Accuracy: 0.9487, Loss: 2.0011
Epoch   5 Batch  190/269 - Train Accuracy: 0.9474, Validation Accuracy: 0.9449, Loss: 1.9697
Epoch   5 Batch  200/269 - Train Accuracy: 0.9392, Validation Accuracy: 0.9471, Loss: 2.0376
Epoch   5 Batch  210/269 - Train Accuracy: 0.9388, Validation Accuracy: 0.9496, Loss: 2.0281
Epoch   5 Batch  220/269 - Train Accuracy: 0.9482, Validation Accuracy: 0.9490, Loss: 2.0722
Epoch   5 Batch  230/269 - Train Accuracy: 0.9513, Validation Accuracy: 0.9537, Loss: 2.0310
Epoch   5 Batch  240/269 - Train Accuracy: 0.9479, Validation Accuracy: 0.9493, Loss: 2.0124
Epoch   5 Batch  250/269 - Train Accuracy: 0.9503, Validation Accuracy: 0.9525, Loss: 1.9873
Epoch   5 Batch  260/269 - Train Accuracy: 0.9445, Validation Accuracy: 0.9490, Loss: 2.0442
Epoch   6 Batch   10/269 - Train Accuracy: 0.9579, Validation Accuracy: 0.9450, Loss: 2.0421
Epoch   6 Batch   20/269 - Train Accuracy: 0.9562, Validation Accuracy: 0.9557, Loss: 1.9941
Epoch   6 Batch   30/269 - Train Accuracy: 0.9528, Validation Accuracy: 0.9509, Loss: 2.0210
Epoch   6 Batch   40/269 - Train Accuracy: 0.9438, Validation Accuracy: 0.9528, Loss: 2.0097
Epoch   6 Batch   50/269 - Train Accuracy: 0.9445, Validation Accuracy: 0.9446, Loss: 2.0141
Epoch   6 Batch   60/269 - Train Accuracy: 0.9477, Validation Accuracy: 0.9473, Loss: 2.0367
Epoch   6 Batch   70/269 - Train Accuracy: 0.9450, Validation Accuracy: 0.9416, Loss: 2.0111
Epoch   6 Batch   80/269 - Train Accuracy: 0.9607, Validation Accuracy: 0.9498, Loss: 1.9891
Epoch   6 Batch   90/269 - Train Accuracy: 0.9571, Validation Accuracy: 0.9561, Loss: 2.0191
Epoch   6 Batch  100/269 - Train Accuracy: 0.9620, Validation Accuracy: 0.9577, Loss: 2.0183
Epoch   6 Batch  110/269 - Train Accuracy: 0.9461, Validation Accuracy: 0.9513, Loss: 1.9896
Epoch   6 Batch  120/269 - Train Accuracy: 0.9538, Validation Accuracy: 0.9482, Loss: 2.0178
Epoch   6 Batch  130/269 - Train Accuracy: 0.9603, Validation Accuracy: 0.9502, Loss: 2.0707
Epoch   6 Batch  140/269 - Train Accuracy: 0.9499, Validation Accuracy: 0.9585, Loss: 2.0340
Epoch   6 Batch  150/269 - Train Accuracy: 0.9510, Validation Accuracy: 0.9614, Loss: 2.0369
Epoch   6 Batch  160/269 - Train Accuracy: 0.9520, Validation Accuracy: 0.9602, Loss: 1.9848
Epoch   6 Batch  170/269 - Train Accuracy: 0.9546, Validation Accuracy: 0.9585, Loss: 2.0172
Epoch   6 Batch  180/269 - Train Accuracy: 0.9687, Validation Accuracy: 0.9656, Loss: 2.0468
Epoch   6 Batch  190/269 - Train Accuracy: 0.9613, Validation Accuracy: 0.9583, Loss: 2.0354
Epoch   6 Batch  200/269 - Train Accuracy: 0.9561, Validation Accuracy: 0.9616, Loss: 2.0616
Epoch   6 Batch  210/269 - Train Accuracy: 0.9623, Validation Accuracy: 0.9612, Loss: 2.0275
Epoch   6 Batch  220/269 - Train Accuracy: 0.9554, Validation Accuracy: 0.9582, Loss: 2.0334
Epoch   6 Batch  230/269 - Train Accuracy: 0.9611, Validation Accuracy: 0.9579, Loss: 1.9970
Epoch   6 Batch  240/269 - Train Accuracy: 0.9671, Validation Accuracy: 0.9598, Loss: 2.0509
Epoch   6 Batch  250/269 - Train Accuracy: 0.9610, Validation Accuracy: 0.9516, Loss: 1.9968
Epoch   6 Batch  260/269 - Train Accuracy: 0.9634, Validation Accuracy: 0.9593, Loss: 2.0286
Epoch   7 Batch   10/269 - Train Accuracy: 0.9711, Validation Accuracy: 0.9648, Loss: 2.0141
Epoch   7 Batch   20/269 - Train Accuracy: 0.9562, Validation Accuracy: 0.9582, Loss: 2.0189
Epoch   7 Batch   30/269 - Train Accuracy: 0.9569, Validation Accuracy: 0.9648, Loss: 2.0658
Epoch   7 Batch   40/269 - Train Accuracy: 0.9551, Validation Accuracy: 0.9590, Loss: 2.0398
Epoch   7 Batch   50/269 - Train Accuracy: 0.9553, Validation Accuracy: 0.9630, Loss: 2.0126
Epoch   7 Batch   60/269 - Train Accuracy: 0.9523, Validation Accuracy: 0.9497, Loss: 2.0088
Epoch   7 Batch   70/269 - Train Accuracy: 0.9622, Validation Accuracy: 0.9578, Loss: 1.9794
Epoch   7 Batch   80/269 - Train Accuracy: 0.9711, Validation Accuracy: 0.9473, Loss: 2.0026
Epoch   7 Batch   90/269 - Train Accuracy: 0.9676, Validation Accuracy: 0.9580, Loss: 2.0103
Epoch   7 Batch  100/269 - Train Accuracy: 0.9681, Validation Accuracy: 0.9662, Loss: 1.9829
Epoch   7 Batch  110/269 - Train Accuracy: 0.9604, Validation Accuracy: 0.9663, Loss: 2.0024
Epoch   7 Batch  120/269 - Train Accuracy: 0.9546, Validation Accuracy: 0.9631, Loss: 2.0495
Epoch   7 Batch  130/269 - Train Accuracy: 0.9623, Validation Accuracy: 0.9626, Loss: 2.0053
Epoch   7 Batch  140/269 - Train Accuracy: 0.9562, Validation Accuracy: 0.9581, Loss: 2.0837
Epoch   7 Batch  150/269 - Train Accuracy: 0.9634, Validation Accuracy: 0.9577, Loss: 2.0087
Epoch   7 Batch  160/269 - Train Accuracy: 0.9633, Validation Accuracy: 0.9656, Loss: 2.0330
Epoch   7 Batch  170/269 - Train Accuracy: 0.9685, Validation Accuracy: 0.9631, Loss: 1.9923
Epoch   7 Batch  180/269 - Train Accuracy: 0.9696, Validation Accuracy: 0.9632, Loss: 2.0531
Epoch   7 Batch  190/269 - Train Accuracy: 0.9620, Validation Accuracy: 0.9620, Loss: 2.0262
Epoch   7 Batch  200/269 - Train Accuracy: 0.9617, Validation Accuracy: 0.9585, Loss: 2.0273
Epoch   7 Batch  210/269 - Train Accuracy: 0.9626, Validation Accuracy: 0.9511, Loss: 2.0256
Epoch   7 Batch  220/269 - Train Accuracy: 0.9624, Validation Accuracy: 0.9653, Loss: 2.0181
Epoch   7 Batch  230/269 - Train Accuracy: 0.9638, Validation Accuracy: 0.9649, Loss: 2.0124
Epoch   7 Batch  240/269 - Train Accuracy: 0.9701, Validation Accuracy: 0.9613, Loss: 2.0331
Epoch   7 Batch  250/269 - Train Accuracy: 0.9699, Validation Accuracy: 0.9606, Loss: 2.0142
Epoch   7 Batch  260/269 - Train Accuracy: 0.9637, Validation Accuracy: 0.9618, Loss: 2.0262
Epoch   8 Batch   10/269 - Train Accuracy: 0.9771, Validation Accuracy: 0.9608, Loss: 1.9967
Epoch   8 Batch   20/269 - Train Accuracy: 0.9642, Validation Accuracy: 0.9618, Loss: 2.0427
Epoch   8 Batch   30/269 - Train Accuracy: 0.9645, Validation Accuracy: 0.9669, Loss: 2.0364
Epoch   8 Batch   40/269 - Train Accuracy: 0.9583, Validation Accuracy: 0.9608, Loss: 1.9862
Epoch   8 Batch   50/269 - Train Accuracy: 0.9464, Validation Accuracy: 0.9641, Loss: 2.0551
Epoch   8 Batch   60/269 - Train Accuracy: 0.9647, Validation Accuracy: 0.9616, Loss: 2.0414
Epoch   8 Batch   70/269 - Train Accuracy: 0.9560, Validation Accuracy: 0.9521, Loss: 2.0407
Epoch   8 Batch   80/269 - Train Accuracy: 0.9656, Validation Accuracy: 0.9638, Loss: 2.0032
Epoch   8 Batch   90/269 - Train Accuracy: 0.9675, Validation Accuracy: 0.9596, Loss: 2.0225
Epoch   8 Batch  100/269 - Train Accuracy: 0.9681, Validation Accuracy: 0.9635, Loss: 2.0203
Epoch   8 Batch  110/269 - Train Accuracy: 0.9718, Validation Accuracy: 0.9634, Loss: 2.0580
Epoch   8 Batch  120/269 - Train Accuracy: 0.9555, Validation Accuracy: 0.9664, Loss: 2.0175
Epoch   8 Batch  130/269 - Train Accuracy: 0.9648, Validation Accuracy: 0.9634, Loss: 2.0175
Epoch   8 Batch  140/269 - Train Accuracy: 0.9604, Validation Accuracy: 0.9639, Loss: 1.9935
Epoch   8 Batch  150/269 - Train Accuracy: 0.9715, Validation Accuracy: 0.9671, Loss: 1.9851
Epoch   8 Batch  160/269 - Train Accuracy: 0.9680, Validation Accuracy: 0.9698, Loss: 2.0395
Epoch   8 Batch  170/269 - Train Accuracy: 0.9660, Validation Accuracy: 0.9670, Loss: 1.9903
Epoch   8 Batch  180/269 - Train Accuracy: 0.9714, Validation Accuracy: 0.9671, Loss: 2.0164
Epoch   8 Batch  190/269 - Train Accuracy: 0.9695, Validation Accuracy: 0.9668, Loss: 1.9879
Epoch   8 Batch  200/269 - Train Accuracy: 0.9712, Validation Accuracy: 0.9665, Loss: 1.9899
Epoch   8 Batch  210/269 - Train Accuracy: 0.9748, Validation Accuracy: 0.9687, Loss: 2.0251
Epoch   8 Batch  220/269 - Train Accuracy: 0.9633, Validation Accuracy: 0.9646, Loss: 1.9779
Epoch   8 Batch  230/269 - Train Accuracy: 0.9746, Validation Accuracy: 0.9780, Loss: 1.9971
Epoch   8 Batch  240/269 - Train Accuracy: 0.9743, Validation Accuracy: 0.9694, Loss: 2.0731
Epoch   8 Batch  250/269 - Train Accuracy: 0.9714, Validation Accuracy: 0.9688, Loss: 2.0069
Epoch   8 Batch  260/269 - Train Accuracy: 0.9777, Validation Accuracy: 0.9704, Loss: 2.0356
Epoch   9 Batch   10/269 - Train Accuracy: 0.9714, Validation Accuracy: 0.9723, Loss: 2.0130
Epoch   9 Batch   20/269 - Train Accuracy: 0.9747, Validation Accuracy: 0.9732, Loss: 2.0461
Epoch   9 Batch   30/269 - Train Accuracy: 0.9634, Validation Accuracy: 0.9701, Loss: 1.9886
Epoch   9 Batch   40/269 - Train Accuracy: 0.9713, Validation Accuracy: 0.9662, Loss: 2.0583
Epoch   9 Batch   50/269 - Train Accuracy: 0.9578, Validation Accuracy: 0.9686, Loss: 2.0622
Epoch   9 Batch   60/269 - Train Accuracy: 0.9718, Validation Accuracy: 0.9656, Loss: 2.0280
Epoch   9 Batch   70/269 - Train Accuracy: 0.9722, Validation Accuracy: 0.9696, Loss: 2.0604
Epoch   9 Batch   80/269 - Train Accuracy: 0.9773, Validation Accuracy: 0.9680, Loss: 2.0295
Epoch   9 Batch   90/269 - Train Accuracy: 0.9712, Validation Accuracy: 0.9695, Loss: 1.9710
Epoch   9 Batch  100/269 - Train Accuracy: 0.9718, Validation Accuracy: 0.9692, Loss: 2.0430
Epoch   9 Batch  110/269 - Train Accuracy: 0.9747, Validation Accuracy: 0.9706, Loss: 2.0349
Epoch   9 Batch  120/269 - Train Accuracy: 0.9719, Validation Accuracy: 0.9705, Loss: 2.0030
Epoch   9 Batch  130/269 - Train Accuracy: 0.9738, Validation Accuracy: 0.9691, Loss: 2.0490
Epoch   9 Batch  140/269 - Train Accuracy: 0.9701, Validation Accuracy: 0.9713, Loss: 2.0180
Epoch   9 Batch  150/269 - Train Accuracy: 0.9738, Validation Accuracy: 0.9751, Loss: 2.0185
Epoch   9 Batch  160/269 - Train Accuracy: 0.9772, Validation Accuracy: 0.9756, Loss: 1.9893
Epoch   9 Batch  170/269 - Train Accuracy: 0.9746, Validation Accuracy: 0.9732, Loss: 2.0082
Epoch   9 Batch  180/269 - Train Accuracy: 0.9818, Validation Accuracy: 0.9753, Loss: 2.0038
Epoch   9 Batch  190/269 - Train Accuracy: 0.9643, Validation Accuracy: 0.9751, Loss: 2.0133
Epoch   9 Batch  200/269 - Train Accuracy: 0.9766, Validation Accuracy: 0.9650, Loss: 1.9756
Epoch   9 Batch  210/269 - Train Accuracy: 0.9702, Validation Accuracy: 0.9713, Loss: 2.0235
Epoch   9 Batch  220/269 - Train Accuracy: 0.9719, Validation Accuracy: 0.9753, Loss: 2.0196
Epoch   9 Batch  230/269 - Train Accuracy: 0.9779, Validation Accuracy: 0.9722, Loss: 1.9973
Epoch   9 Batch  240/269 - Train Accuracy: 0.9744, Validation Accuracy: 0.9751, Loss: 2.0203
Epoch   9 Batch  250/269 - Train Accuracy: 0.9652, Validation Accuracy: 0.9660, Loss: 1.9810
Epoch   9 Batch  260/269 - Train Accuracy: 0.9755, Validation Accuracy: 0.9724, Loss: 2.0051
Epoch  10 Batch   10/269 - Train Accuracy: 0.9796, Validation Accuracy: 0.9668, Loss: 2.0126
Epoch  10 Batch   20/269 - Train Accuracy: 0.9727, Validation Accuracy: 0.9697, Loss: 1.9987
Epoch  10 Batch   30/269 - Train Accuracy: 0.9744, Validation Accuracy: 0.9769, Loss: 2.0395
Epoch  10 Batch   40/269 - Train Accuracy: 0.9765, Validation Accuracy: 0.9729, Loss: 2.0174
Epoch  10 Batch   50/269 - Train Accuracy: 0.9581, Validation Accuracy: 0.9709, Loss: 2.0044
Epoch  10 Batch   60/269 - Train Accuracy: 0.9752, Validation Accuracy: 0.9649, Loss: 2.0120
Epoch  10 Batch   70/269 - Train Accuracy: 0.9762, Validation Accuracy: 0.9727, Loss: 2.0233
Epoch  10 Batch   80/269 - Train Accuracy: 0.9801, Validation Accuracy: 0.9741, Loss: 1.9796
Epoch  10 Batch   90/269 - Train Accuracy: 0.9769, Validation Accuracy: 0.9731, Loss: 2.0222
Epoch  10 Batch  100/269 - Train Accuracy: 0.9734, Validation Accuracy: 0.9737, Loss: 2.0009
Epoch  10 Batch  110/269 - Train Accuracy: 0.9782, Validation Accuracy: 0.9717, Loss: 2.0205
Epoch  10 Batch  120/269 - Train Accuracy: 0.9764, Validation Accuracy: 0.9661, Loss: 2.0420
Epoch  10 Batch  130/269 - Train Accuracy: 0.9722, Validation Accuracy: 0.9664, Loss: 2.0108
Epoch  10 Batch  140/269 - Train Accuracy: 0.9735, Validation Accuracy: 0.9781, Loss: 2.0103
Epoch  10 Batch  150/269 - Train Accuracy: 0.9727, Validation Accuracy: 0.9760, Loss: 1.9808
Epoch  10 Batch  160/269 - Train Accuracy: 0.9764, Validation Accuracy: 0.9744, Loss: 2.0131
Epoch  10 Batch  170/269 - Train Accuracy: 0.9778, Validation Accuracy: 0.9719, Loss: 2.0638
Epoch  10 Batch  180/269 - Train Accuracy: 0.9807, Validation Accuracy: 0.9724, Loss: 1.9954
Epoch  10 Batch  190/269 - Train Accuracy: 0.9747, Validation Accuracy: 0.9652, Loss: 1.9953
Epoch  10 Batch  200/269 - Train Accuracy: 0.9848, Validation Accuracy: 0.9668, Loss: 2.0273
Epoch  10 Batch  210/269 - Train Accuracy: 0.9778, Validation Accuracy: 0.9743, Loss: 1.9601
Epoch  10 Batch  220/269 - Train Accuracy: 0.9730, Validation Accuracy: 0.9715, Loss: 2.0025
Epoch  10 Batch  230/269 - Train Accuracy: 0.9771, Validation Accuracy: 0.9752, Loss: 2.0347
Epoch  10 Batch  240/269 - Train Accuracy: 0.9761, Validation Accuracy: 0.9732, Loss: 2.0179
Epoch  10 Batch  250/269 - Train Accuracy: 0.9784, Validation Accuracy: 0.9728, Loss: 1.9972
Epoch  10 Batch  260/269 - Train Accuracy: 0.9855, Validation Accuracy: 0.9754, Loss: 2.0260
Epoch  11 Batch   10/269 - Train Accuracy: 0.9762, Validation Accuracy: 0.9707, Loss: 1.9653
Epoch  11 Batch   20/269 - Train Accuracy: 0.9800, Validation Accuracy: 0.9717, Loss: 2.0157
Epoch  11 Batch   30/269 - Train Accuracy: 0.9767, Validation Accuracy: 0.9777, Loss: 1.9961
Epoch  11 Batch   40/269 - Train Accuracy: 0.9711, Validation Accuracy: 0.9684, Loss: 2.0004
Epoch  11 Batch   50/269 - Train Accuracy: 0.9592, Validation Accuracy: 0.9749, Loss: 1.9859
Epoch  11 Batch   60/269 - Train Accuracy: 0.9803, Validation Accuracy: 0.9724, Loss: 1.9786
Epoch  11 Batch   70/269 - Train Accuracy: 0.9835, Validation Accuracy: 0.9738, Loss: 2.0115
Epoch  11 Batch   80/269 - Train Accuracy: 0.9785, Validation Accuracy: 0.9714, Loss: 2.0092
Epoch  11 Batch   90/269 - Train Accuracy: 0.9727, Validation Accuracy: 0.9741, Loss: 1.9896
Epoch  11 Batch  100/269 - Train Accuracy: 0.9813, Validation Accuracy: 0.9779, Loss: 2.0187
Epoch  11 Batch  110/269 - Train Accuracy: 0.9782, Validation Accuracy: 0.9749, Loss: 1.9594
Epoch  11 Batch  120/269 - Train Accuracy: 0.9764, Validation Accuracy: 0.9706, Loss: 1.9842
Epoch  11 Batch  130/269 - Train Accuracy: 0.9812, Validation Accuracy: 0.9709, Loss: 2.0631
Epoch  11 Batch  140/269 - Train Accuracy: 0.9732, Validation Accuracy: 0.9784, Loss: 2.0489
Epoch  11 Batch  150/269 - Train Accuracy: 0.9757, Validation Accuracy: 0.9739, Loss: 1.9915
Epoch  11 Batch  160/269 - Train Accuracy: 0.9768, Validation Accuracy: 0.9741, Loss: 2.0065
Epoch  11 Batch  170/269 - Train Accuracy: 0.9747, Validation Accuracy: 0.9799, Loss: 2.0159
Epoch  11 Batch  180/269 - Train Accuracy: 0.9813, Validation Accuracy: 0.9758, Loss: 2.0027
Epoch  11 Batch  190/269 - Train Accuracy: 0.9803, Validation Accuracy: 0.9763, Loss: 2.0297
Epoch  11 Batch  200/269 - Train Accuracy: 0.9856, Validation Accuracy: 0.9737, Loss: 1.9902
Epoch  11 Batch  210/269 - Train Accuracy: 0.9798, Validation Accuracy: 0.9747, Loss: 2.0306
Epoch  11 Batch  220/269 - Train Accuracy: 0.9737, Validation Accuracy: 0.9767, Loss: 2.0252
Epoch  11 Batch  230/269 - Train Accuracy: 0.9807, Validation Accuracy: 0.9761, Loss: 2.0127
Epoch  11 Batch  240/269 - Train Accuracy: 0.9758, Validation Accuracy: 0.9769, Loss: 2.0153
Epoch  11 Batch  250/269 - Train Accuracy: 0.9809, Validation Accuracy: 0.9776, Loss: 2.0053
Epoch  11 Batch  260/269 - Train Accuracy: 0.9805, Validation Accuracy: 0.9742, Loss: 2.0388
Epoch  12 Batch   10/269 - Train Accuracy: 0.9759, Validation Accuracy: 0.9770, Loss: 1.9826
Epoch  12 Batch   20/269 - Train Accuracy: 0.9827, Validation Accuracy: 0.9717, Loss: 2.0206
Epoch  12 Batch   30/269 - Train Accuracy: 0.9792, Validation Accuracy: 0.9724, Loss: 1.9775
Epoch  12 Batch   40/269 - Train Accuracy: 0.9840, Validation Accuracy: 0.9749, Loss: 1.9757
Epoch  12 Batch   50/269 - Train Accuracy: 0.9634, Validation Accuracy: 0.9768, Loss: 2.0495
Epoch  12 Batch   60/269 - Train Accuracy: 0.9774, Validation Accuracy: 0.9767, Loss: 2.0345
Epoch  12 Batch   70/269 - Train Accuracy: 0.9827, Validation Accuracy: 0.9830, Loss: 2.0124
Epoch  12 Batch   80/269 - Train Accuracy: 0.9769, Validation Accuracy: 0.9721, Loss: 1.9800
Epoch  12 Batch   90/269 - Train Accuracy: 0.9776, Validation Accuracy: 0.9756, Loss: 2.0369
Epoch  12 Batch  100/269 - Train Accuracy: 0.9769, Validation Accuracy: 0.9754, Loss: 1.9750
Epoch  12 Batch  110/269 - Train Accuracy: 0.9805, Validation Accuracy: 0.9759, Loss: 2.0337
Epoch  12 Batch  120/269 - Train Accuracy: 0.9750, Validation Accuracy: 0.9719, Loss: 1.9669
Epoch  12 Batch  130/269 - Train Accuracy: 0.9811, Validation Accuracy: 0.9744, Loss: 1.9853
Epoch  12 Batch  140/269 - Train Accuracy: 0.9700, Validation Accuracy: 0.9826, Loss: 1.9933
Epoch  12 Batch  150/269 - Train Accuracy: 0.9814, Validation Accuracy: 0.9835, Loss: 2.0486
Epoch  12 Batch  160/269 - Train Accuracy: 0.9828, Validation Accuracy: 0.9838, Loss: 1.9932
Epoch  12 Batch  170/269 - Train Accuracy: 0.9752, Validation Accuracy: 0.9751, Loss: 2.0126
Epoch  12 Batch  180/269 - Train Accuracy: 0.9814, Validation Accuracy: 0.9804, Loss: 2.0292
Epoch  12 Batch  190/269 - Train Accuracy: 0.9772, Validation Accuracy: 0.9781, Loss: 1.9505
Epoch  12 Batch  200/269 - Train Accuracy: 0.9888, Validation Accuracy: 0.9764, Loss: 1.9775
Epoch  12 Batch  210/269 - Train Accuracy: 0.9790, Validation Accuracy: 0.9702, Loss: 2.0250
Epoch  12 Batch  220/269 - Train Accuracy: 0.9810, Validation Accuracy: 0.9765, Loss: 2.0202
Epoch  12 Batch  230/269 - Train Accuracy: 0.9786, Validation Accuracy: 0.9836, Loss: 2.0084
Epoch  12 Batch  240/269 - Train Accuracy: 0.9795, Validation Accuracy: 0.9790, Loss: 2.0199
Epoch  12 Batch  250/269 - Train Accuracy: 0.9812, Validation Accuracy: 0.9830, Loss: 2.0304
Epoch  12 Batch  260/269 - Train Accuracy: 0.9795, Validation Accuracy: 0.9761, Loss: 2.0089
Epoch  13 Batch   10/269 - Train Accuracy: 0.9792, Validation Accuracy: 0.9744, Loss: 2.0275
Epoch  13 Batch   20/269 - Train Accuracy: 0.9738, Validation Accuracy: 0.9690, Loss: 2.0140
Epoch  13 Batch   30/269 - Train Accuracy: 0.9817, Validation Accuracy: 0.9719, Loss: 2.0034
Epoch  13 Batch   40/269 - Train Accuracy: 0.9818, Validation Accuracy: 0.9766, Loss: 2.0286
Epoch  13 Batch   50/269 - Train Accuracy: 0.9596, Validation Accuracy: 0.9825, Loss: 2.0315
Epoch  13 Batch   60/269 - Train Accuracy: 0.9793, Validation Accuracy: 0.9810, Loss: 2.0360
Epoch  13 Batch   70/269 - Train Accuracy: 0.9783, Validation Accuracy: 0.9745, Loss: 2.0241
Epoch  13 Batch   80/269 - Train Accuracy: 0.9834, Validation Accuracy: 0.9731, Loss: 2.0060
Epoch  13 Batch   90/269 - Train Accuracy: 0.9754, Validation Accuracy: 0.9779, Loss: 2.0090
Epoch  13 Batch  100/269 - Train Accuracy: 0.9813, Validation Accuracy: 0.9767, Loss: 2.0501
Epoch  13 Batch  110/269 - Train Accuracy: 0.9886, Validation Accuracy: 0.9770, Loss: 1.9775
Epoch  13 Batch  120/269 - Train Accuracy: 0.9822, Validation Accuracy: 0.9760, Loss: 2.0288
Epoch  13 Batch  130/269 - Train Accuracy: 0.9853, Validation Accuracy: 0.9773, Loss: 1.9671
Epoch  13 Batch  140/269 - Train Accuracy: 0.9815, Validation Accuracy: 0.9821, Loss: 2.0104
Epoch  13 Batch  150/269 - Train Accuracy: 0.9819, Validation Accuracy: 0.9789, Loss: 2.0133
Epoch  13 Batch  160/269 - Train Accuracy: 0.9856, Validation Accuracy: 0.9812, Loss: 1.9701
Epoch  13 Batch  170/269 - Train Accuracy: 0.9767, Validation Accuracy: 0.9750, Loss: 1.9899
Epoch  13 Batch  180/269 - Train Accuracy: 0.9781, Validation Accuracy: 0.9756, Loss: 2.0194
Epoch  13 Batch  190/269 - Train Accuracy: 0.9825, Validation Accuracy: 0.9829, Loss: 1.9984
Epoch  13 Batch  200/269 - Train Accuracy: 0.9912, Validation Accuracy: 0.9775, Loss: 1.9929
Epoch  13 Batch  210/269 - Train Accuracy: 0.9847, Validation Accuracy: 0.9741, Loss: 2.0026
Epoch  13 Batch  220/269 - Train Accuracy: 0.9792, Validation Accuracy: 0.9739, Loss: 1.9890
Epoch  13 Batch  230/269 - Train Accuracy: 0.9807, Validation Accuracy: 0.9761, Loss: 1.9994
Epoch  13 Batch  240/269 - Train Accuracy: 0.9812, Validation Accuracy: 0.9802, Loss: 2.0155
Epoch  13 Batch  250/269 - Train Accuracy: 0.9868, Validation Accuracy: 0.9808, Loss: 2.0068
Epoch  13 Batch  260/269 - Train Accuracy: 0.9898, Validation Accuracy: 0.9826, Loss: 2.0080
Epoch  14 Batch   10/269 - Train Accuracy: 0.9781, Validation Accuracy: 0.9811, Loss: 1.9863
Epoch  14 Batch   20/269 - Train Accuracy: 0.9784, Validation Accuracy: 0.9760, Loss: 2.0224
Epoch  14 Batch   30/269 - Train Accuracy: 0.9813, Validation Accuracy: 0.9806, Loss: 1.9844
Epoch  14 Batch   40/269 - Train Accuracy: 0.9819, Validation Accuracy: 0.9803, Loss: 1.9769
Epoch  14 Batch   50/269 - Train Accuracy: 0.9732, Validation Accuracy: 0.9798, Loss: 2.0287
Epoch  14 Batch   60/269 - Train Accuracy: 0.9803, Validation Accuracy: 0.9790, Loss: 2.0317
Epoch  14 Batch   70/269 - Train Accuracy: 0.9875, Validation Accuracy: 0.9826, Loss: 2.0297
Epoch  14 Batch   80/269 - Train Accuracy: 0.9874, Validation Accuracy: 0.9814, Loss: 1.9850
Epoch  14 Batch   90/269 - Train Accuracy: 0.9791, Validation Accuracy: 0.9776, Loss: 2.0428
Epoch  14 Batch  100/269 - Train Accuracy: 0.9881, Validation Accuracy: 0.9806, Loss: 2.0013
Epoch  14 Batch  110/269 - Train Accuracy: 0.9871, Validation Accuracy: 0.9833, Loss: 2.0085
Epoch  14 Batch  120/269 - Train Accuracy: 0.9859, Validation Accuracy: 0.9773, Loss: 1.9582
Epoch  14 Batch  130/269 - Train Accuracy: 0.9854, Validation Accuracy: 0.9748, Loss: 2.0137
Epoch  14 Batch  140/269 - Train Accuracy: 0.9834, Validation Accuracy: 0.9751, Loss: 2.0218
Epoch  14 Batch  150/269 - Train Accuracy: 0.9834, Validation Accuracy: 0.9773, Loss: 1.9975
Epoch  14 Batch  160/269 - Train Accuracy: 0.9827, Validation Accuracy: 0.9782, Loss: 2.0337
Epoch  14 Batch  170/269 - Train Accuracy: 0.9793, Validation Accuracy: 0.9723, Loss: 1.9885
Epoch  14 Batch  180/269 - Train Accuracy: 0.9879, Validation Accuracy: 0.9782, Loss: 1.9928
Epoch  14 Batch  190/269 - Train Accuracy: 0.9843, Validation Accuracy: 0.9782, Loss: 1.9787
Epoch  14 Batch  200/269 - Train Accuracy: 0.9924, Validation Accuracy: 0.9799, Loss: 1.9909
Epoch  14 Batch  210/269 - Train Accuracy: 0.9852, Validation Accuracy: 0.9808, Loss: 1.9923
Epoch  14 Batch  220/269 - Train Accuracy: 0.9766, Validation Accuracy: 0.9802, Loss: 1.9905
Epoch  14 Batch  230/269 - Train Accuracy: 0.9882, Validation Accuracy: 0.9821, Loss: 1.9913
Epoch  14 Batch  240/269 - Train Accuracy: 0.9868, Validation Accuracy: 0.9811, Loss: 1.9934
Epoch  14 Batch  250/269 - Train Accuracy: 0.9821, Validation Accuracy: 0.9791, Loss: 1.9880
Epoch  14 Batch  260/269 - Train Accuracy: 0.9891, Validation Accuracy: 0.9788, Loss: 1.9933
Epoch  15 Batch   10/269 - Train Accuracy: 0.9776, Validation Accuracy: 0.9749, Loss: 1.9997
Epoch  15 Batch   20/269 - Train Accuracy: 0.9835, Validation Accuracy: 0.9755, Loss: 1.9918
Epoch  15 Batch   30/269 - Train Accuracy: 0.9759, Validation Accuracy: 0.9798, Loss: 1.9797
Epoch  15 Batch   40/269 - Train Accuracy: 0.9792, Validation Accuracy: 0.9786, Loss: 2.0251
Epoch  15 Batch   50/269 - Train Accuracy: 0.9808, Validation Accuracy: 0.9829, Loss: 1.9864
Epoch  15 Batch   60/269 - Train Accuracy: 0.9771, Validation Accuracy: 0.9772, Loss: 2.0125
Epoch  15 Batch   70/269 - Train Accuracy: 0.9868, Validation Accuracy: 0.9789, Loss: 2.0163
Epoch  15 Batch   80/269 - Train Accuracy: 0.9840, Validation Accuracy: 0.9814, Loss: 2.0530
Epoch  15 Batch   90/269 - Train Accuracy: 0.9815, Validation Accuracy: 0.9778, Loss: 1.9675
Epoch  15 Batch  100/269 - Train Accuracy: 0.9824, Validation Accuracy: 0.9816, Loss: 1.9877
Epoch  15 Batch  110/269 - Train Accuracy: 0.9904, Validation Accuracy: 0.9752, Loss: 2.0162
Epoch  15 Batch  120/269 - Train Accuracy: 0.9862, Validation Accuracy: 0.9789, Loss: 2.0038
Epoch  15 Batch  130/269 - Train Accuracy: 0.9864, Validation Accuracy: 0.9763, Loss: 2.0244
Epoch  15 Batch  140/269 - Train Accuracy: 0.9863, Validation Accuracy: 0.9755, Loss: 2.0319
Epoch  15 Batch  150/269 - Train Accuracy: 0.9803, Validation Accuracy: 0.9762, Loss: 1.9789
Epoch  15 Batch  160/269 - Train Accuracy: 0.9889, Validation Accuracy: 0.9846, Loss: 2.0124
Epoch  15 Batch  170/269 - Train Accuracy: 0.9847, Validation Accuracy: 0.9816, Loss: 1.9811
Epoch  15 Batch  180/269 - Train Accuracy: 0.9826, Validation Accuracy: 0.9802, Loss: 2.0550
Epoch  15 Batch  190/269 - Train Accuracy: 0.9814, Validation Accuracy: 0.9761, Loss: 2.0152
Epoch  15 Batch  200/269 - Train Accuracy: 0.9915, Validation Accuracy: 0.9803, Loss: 1.9691
Epoch  15 Batch  210/269 - Train Accuracy: 0.9808, Validation Accuracy: 0.9692, Loss: 2.0226
Epoch  15 Batch  220/269 - Train Accuracy: 0.9768, Validation Accuracy: 0.9782, Loss: 2.0362
Epoch  15 Batch  230/269 - Train Accuracy: 0.9819, Validation Accuracy: 0.9813, Loss: 2.0040
Epoch  15 Batch  240/269 - Train Accuracy: 0.9856, Validation Accuracy: 0.9809, Loss: 2.0066
Epoch  15 Batch  250/269 - Train Accuracy: 0.9849, Validation Accuracy: 0.9780, Loss: 2.0154
Epoch  15 Batch  260/269 - Train Accuracy: 0.9899, Validation Accuracy: 0.9814, Loss: 1.9955
Epoch  16 Batch   10/269 - Train Accuracy: 0.9870, Validation Accuracy: 0.9742, Loss: 2.0109
Epoch  16 Batch   20/269 - Train Accuracy: 0.9834, Validation Accuracy: 0.9747, Loss: 2.0149
Epoch  16 Batch   30/269 - Train Accuracy: 0.9838, Validation Accuracy: 0.9805, Loss: 2.0322
Epoch  16 Batch   40/269 - Train Accuracy: 0.9822, Validation Accuracy: 0.9771, Loss: 1.9909
Epoch  16 Batch   50/269 - Train Accuracy: 0.9721, Validation Accuracy: 0.9804, Loss: 2.0175
Epoch  16 Batch   60/269 - Train Accuracy: 0.9817, Validation Accuracy: 0.9828, Loss: 2.0242
Epoch  16 Batch   70/269 - Train Accuracy: 0.9900, Validation Accuracy: 0.9819, Loss: 2.0225
Epoch  16 Batch   80/269 - Train Accuracy: 0.9863, Validation Accuracy: 0.9825, Loss: 2.0292
Epoch  16 Batch   90/269 - Train Accuracy: 0.9780, Validation Accuracy: 0.9804, Loss: 1.9803
Epoch  16 Batch  100/269 - Train Accuracy: 0.9805, Validation Accuracy: 0.9832, Loss: 1.9944
Epoch  16 Batch  110/269 - Train Accuracy: 0.9818, Validation Accuracy: 0.9773, Loss: 1.9386
Epoch  16 Batch  120/269 - Train Accuracy: 0.9817, Validation Accuracy: 0.9748, Loss: 2.0394
Epoch  16 Batch  130/269 - Train Accuracy: 0.9779, Validation Accuracy: 0.9749, Loss: 2.0373
Epoch  16 Batch  140/269 - Train Accuracy: 0.9813, Validation Accuracy: 0.9772, Loss: 2.0117
Epoch  16 Batch  150/269 - Train Accuracy: 0.9866, Validation Accuracy: 0.9798, Loss: 1.9966
Epoch  16 Batch  160/269 - Train Accuracy: 0.9870, Validation Accuracy: 0.9798, Loss: 1.9737
Epoch  16 Batch  170/269 - Train Accuracy: 0.9776, Validation Accuracy: 0.9759, Loss: 2.0345
Epoch  16 Batch  180/269 - Train Accuracy: 0.9898, Validation Accuracy: 0.9758, Loss: 1.9936
Epoch  16 Batch  190/269 - Train Accuracy: 0.9829, Validation Accuracy: 0.9785, Loss: 1.9610
Epoch  16 Batch  200/269 - Train Accuracy: 0.9947, Validation Accuracy: 0.9780, Loss: 2.0224
Epoch  16 Batch  210/269 - Train Accuracy: 0.9835, Validation Accuracy: 0.9720, Loss: 1.9758
Epoch  16 Batch  220/269 - Train Accuracy: 0.9784, Validation Accuracy: 0.9740, Loss: 1.9542
Epoch  16 Batch  230/269 - Train Accuracy: 0.9890, Validation Accuracy: 0.9742, Loss: 2.0063
Epoch  16 Batch  240/269 - Train Accuracy: 0.9817, Validation Accuracy: 0.9815, Loss: 2.0358
Epoch  16 Batch  250/269 - Train Accuracy: 0.9832, Validation Accuracy: 0.9787, Loss: 2.0275
Epoch  16 Batch  260/269 - Train Accuracy: 0.9893, Validation Accuracy: 0.9711, Loss: 2.0206
Epoch  17 Batch   10/269 - Train Accuracy: 0.9857, Validation Accuracy: 0.9769, Loss: 1.9988
Epoch  17 Batch   20/269 - Train Accuracy: 0.9850, Validation Accuracy: 0.9836, Loss: 2.0057
Epoch  17 Batch   30/269 - Train Accuracy: 0.9829, Validation Accuracy: 0.9804, Loss: 1.9897
Epoch  17 Batch   40/269 - Train Accuracy: 0.9874, Validation Accuracy: 0.9791, Loss: 2.0086
Epoch  17 Batch   50/269 - Train Accuracy: 0.9832, Validation Accuracy: 0.9859, Loss: 2.0119
Epoch  17 Batch   60/269 - Train Accuracy: 0.9846, Validation Accuracy: 0.9790, Loss: 1.9695
Epoch  17 Batch   70/269 - Train Accuracy: 0.9893, Validation Accuracy: 0.9776, Loss: 1.9646
Epoch  17 Batch   80/269 - Train Accuracy: 0.9901, Validation Accuracy: 0.9795, Loss: 2.0176
Epoch  17 Batch   90/269 - Train Accuracy: 0.9760, Validation Accuracy: 0.9782, Loss: 1.9767
Epoch  17 Batch  100/269 - Train Accuracy: 0.9840, Validation Accuracy: 0.9782, Loss: 1.9823
Epoch  17 Batch  110/269 - Train Accuracy: 0.9886, Validation Accuracy: 0.9805, Loss: 1.9921
Epoch  17 Batch  120/269 - Train Accuracy: 0.9846, Validation Accuracy: 0.9789, Loss: 2.0089
Epoch  17 Batch  130/269 - Train Accuracy: 0.9850, Validation Accuracy: 0.9803, Loss: 2.0150
Epoch  17 Batch  140/269 - Train Accuracy: 0.9860, Validation Accuracy: 0.9862, Loss: 2.0221
Epoch  17 Batch  150/269 - Train Accuracy: 0.9881, Validation Accuracy: 0.9800, Loss: 1.9699
Epoch  17 Batch  160/269 - Train Accuracy: 0.9881, Validation Accuracy: 0.9813, Loss: 2.0203
Epoch  17 Batch  170/269 - Train Accuracy: 0.9814, Validation Accuracy: 0.9806, Loss: 2.0094
Epoch  17 Batch  180/269 - Train Accuracy: 0.9901, Validation Accuracy: 0.9798, Loss: 1.9861
Epoch  17 Batch  190/269 - Train Accuracy: 0.9851, Validation Accuracy: 0.9727, Loss: 1.9728
Epoch  17 Batch  200/269 - Train Accuracy: 0.9878, Validation Accuracy: 0.9776, Loss: 2.0072
Epoch  17 Batch  210/269 - Train Accuracy: 0.9896, Validation Accuracy: 0.9717, Loss: 1.9843
Epoch  17 Batch  220/269 - Train Accuracy: 0.9797, Validation Accuracy: 0.9799, Loss: 2.0232
Epoch  17 Batch  230/269 - Train Accuracy: 0.9851, Validation Accuracy: 0.9785, Loss: 2.0034
Epoch  17 Batch  240/269 - Train Accuracy: 0.9869, Validation Accuracy: 0.9783, Loss: 1.9723
Epoch  17 Batch  250/269 - Train Accuracy: 0.9875, Validation Accuracy: 0.9735, Loss: 2.0703
Epoch  17 Batch  260/269 - Train Accuracy: 0.9896, Validation Accuracy: 0.9765, Loss: 1.9990
Epoch  18 Batch   10/269 - Train Accuracy: 0.9861, Validation Accuracy: 0.9798, Loss: 1.9953
Epoch  18 Batch   20/269 - Train Accuracy: 0.9861, Validation Accuracy: 0.9798, Loss: 2.0270
Epoch  18 Batch   30/269 - Train Accuracy: 0.9794, Validation Accuracy: 0.9805, Loss: 1.9947
Epoch  18 Batch   40/269 - Train Accuracy: 0.9849, Validation Accuracy: 0.9782, Loss: 1.9933
Epoch  18 Batch   50/269 - Train Accuracy: 0.9822, Validation Accuracy: 0.9784, Loss: 2.0006
Epoch  18 Batch   60/269 - Train Accuracy: 0.9840, Validation Accuracy: 0.9781, Loss: 1.9881
Epoch  18 Batch   70/269 - Train Accuracy: 0.9906, Validation Accuracy: 0.9769, Loss: 2.0022
Epoch  18 Batch   80/269 - Train Accuracy: 0.9896, Validation Accuracy: 0.9750, Loss: 2.0066
Epoch  18 Batch   90/269 - Train Accuracy: 0.9805, Validation Accuracy: 0.9739, Loss: 2.0123
Epoch  18 Batch  100/269 - Train Accuracy: 0.9840, Validation Accuracy: 0.9775, Loss: 2.0228
Epoch  18 Batch  110/269 - Train Accuracy: 0.9895, Validation Accuracy: 0.9778, Loss: 1.9917
Epoch  18 Batch  120/269 - Train Accuracy: 0.9817, Validation Accuracy: 0.9711, Loss: 2.0090
Epoch  18 Batch  130/269 - Train Accuracy: 0.9905, Validation Accuracy: 0.9751, Loss: 2.0280
Epoch  18 Batch  140/269 - Train Accuracy: 0.9855, Validation Accuracy: 0.9851, Loss: 2.0046
Epoch  18 Batch  150/269 - Train Accuracy: 0.9879, Validation Accuracy: 0.9757, Loss: 1.9963
Epoch  18 Batch  160/269 - Train Accuracy: 0.9911, Validation Accuracy: 0.9813, Loss: 2.0561
Epoch  18 Batch  170/269 - Train Accuracy: 0.9837, Validation Accuracy: 0.9820, Loss: 1.9404
Epoch  18 Batch  180/269 - Train Accuracy: 0.9900, Validation Accuracy: 0.9821, Loss: 1.9695
Epoch  18 Batch  190/269 - Train Accuracy: 0.9819, Validation Accuracy: 0.9827, Loss: 2.0080
Epoch  18 Batch  200/269 - Train Accuracy: 0.9885, Validation Accuracy: 0.9790, Loss: 1.9921
Epoch  18 Batch  210/269 - Train Accuracy: 0.9902, Validation Accuracy: 0.9764, Loss: 1.9907
Epoch  18 Batch  220/269 - Train Accuracy: 0.9832, Validation Accuracy: 0.9803, Loss: 2.0037
Epoch  18 Batch  230/269 - Train Accuracy: 0.9838, Validation Accuracy: 0.9761, Loss: 1.9621
Epoch  18 Batch  240/269 - Train Accuracy: 0.9869, Validation Accuracy: 0.9835, Loss: 2.0175
Epoch  18 Batch  250/269 - Train Accuracy: 0.9900, Validation Accuracy: 0.9822, Loss: 2.0297
Epoch  18 Batch  260/269 - Train Accuracy: 0.9910, Validation Accuracy: 0.9767, Loss: 2.0130
Epoch  19 Batch   10/269 - Train Accuracy: 0.9932, Validation Accuracy: 0.9814, Loss: 1.9666
Epoch  19 Batch   20/269 - Train Accuracy: 0.9922, Validation Accuracy: 0.9836, Loss: 1.9739
Epoch  19 Batch   30/269 - Train Accuracy: 0.9839, Validation Accuracy: 0.9827, Loss: 1.9972
Epoch  19 Batch   40/269 - Train Accuracy: 0.9864, Validation Accuracy: 0.9795, Loss: 2.0003
Epoch  19 Batch   50/269 - Train Accuracy: 0.9850, Validation Accuracy: 0.9813, Loss: 2.0132
Epoch  19 Batch   60/269 - Train Accuracy: 0.9877, Validation Accuracy: 0.9865, Loss: 2.0093
Epoch  19 Batch   70/269 - Train Accuracy: 0.9908, Validation Accuracy: 0.9822, Loss: 1.9599
Epoch  19 Batch   80/269 - Train Accuracy: 0.9928, Validation Accuracy: 0.9796, Loss: 1.9992
Epoch  19 Batch   90/269 - Train Accuracy: 0.9848, Validation Accuracy: 0.9838, Loss: 1.9995
Epoch  19 Batch  100/269 - Train Accuracy: 0.9892, Validation Accuracy: 0.9797, Loss: 2.0075
Epoch  19 Batch  110/269 - Train Accuracy: 0.9911, Validation Accuracy: 0.9755, Loss: 2.0199
Epoch  19 Batch  120/269 - Train Accuracy: 0.9880, Validation Accuracy: 0.9821, Loss: 2.0107
Epoch  19 Batch  130/269 - Train Accuracy: 0.9842, Validation Accuracy: 0.9826, Loss: 2.0011
Epoch  20 Batch  110/269 - Train Accuracy: 0.9900, Validation Accuracy: 0.9788, Loss: 1.9477
Epoch  20 Batch  120/269 - Train Accuracy: 0.9800, Validation Accuracy: 0.9787, Loss: 2.0294
Epoch  20 Batch  130/269 - Train Accuracy: 0.9910, Validation Accuracy: 0.9795, Loss: 1.9813
Epoch  20 Batch  140/269 - Train Accuracy: 0.9800, Validation Accuracy: 0.9793, Loss: 1.9997
Epoch  20 Batch  150/269 - Train Accuracy: 0.9851, Validation Accuracy: 0.9774, Loss: 1.9992
Epoch  20 Batch  160/269 - Train Accuracy: 0.9905, Validation Accuracy: 0.9740, Loss: 2.0051
Epoch  20 Batch  170/269 - Train Accuracy: 0.9821, Validation Accuracy: 0.9798, Loss: 1.9933
Epoch  20 Batch  180/269 - Train Accuracy: 0.9912, Validation Accuracy: 0.9775, Loss: 2.0219
Epoch  20 Batch  190/269 - Train Accuracy: 0.9871, Validation Accuracy: 0.9787, Loss: 1.9809
Epoch  20 Batch  200/269 - Train Accuracy: 0.9947, Validation Accuracy: 0.9742, Loss: 2.0190
Epoch  20 Batch  210/269 - Train Accuracy: 0.9898, Validation Accuracy: 0.9715, Loss: 2.0281
Epoch  20 Batch  220/269 - Train Accuracy: 0.9854, Validation Accuracy: 0.9733, Loss: 2.0061
Epoch  20 Batch  230/269 - Train Accuracy: 0.9894, Validation Accuracy: 0.9750, Loss: 2.0009
Epoch  20 Batch  240/269 - Train Accuracy: 0.9897, Validation Accuracy: 0.9763, Loss: 2.0177
Epoch  20 Batch  250/269 - Train Accuracy: 0.9904, Validation Accuracy: 0.9803, Loss: 1.9770
Epoch  20 Batch  260/269 - Train Accuracy: 0.9949, Validation Accuracy: 0.9771, Loss: 1.9706
Epoch  21 Batch   10/269 - Train Accuracy: 0.9851, Validation Accuracy: 0.9798, Loss: 1.9743
Epoch  21 Batch   20/269 - Train Accuracy: 0.9898, Validation Accuracy: 0.9826, Loss: 2.0361
Epoch  21 Batch   30/269 - Train Accuracy: 0.9859, Validation Accuracy: 0.9791, Loss: 1.9800
Epoch  21 Batch   40/269 - Train Accuracy: 0.9873, Validation Accuracy: 0.9784, Loss: 2.0112
Epoch  21 Batch   50/269 - Train Accuracy: 0.9880, Validation Accuracy: 0.9863, Loss: 2.0063
Epoch  21 Batch   60/269 - Train Accuracy: 0.9854, Validation Accuracy: 0.9829, Loss: 2.0007
Epoch  21 Batch   70/269 - Train Accuracy: 0.9916, Validation Accuracy: 0.9817, Loss: 2.0032
Epoch  21 Batch   80/269 - Train Accuracy: 0.9888, Validation Accuracy: 0.9813, Loss: 2.0213
Epoch  21 Batch   90/269 - Train Accuracy: 0.9914, Validation Accuracy: 0.9773, Loss: 2.0094
Epoch  21 Batch  100/269 - Train Accuracy: 0.9892, Validation Accuracy: 0.9790, Loss: 2.0410
Epoch  21 Batch  110/269 - Train Accuracy: 0.9934, Validation Accuracy: 0.9790, Loss: 1.9823
Epoch  21 Batch  120/269 - Train Accuracy: 0.9876, Validation Accuracy: 0.9836, Loss: 2.0115
Epoch  21 Batch  130/269 - Train Accuracy: 0.9887, Validation Accuracy: 0.9790, Loss: 1.9980
Epoch  21 Batch  140/269 - Train Accuracy: 0.9826, Validation Accuracy: 0.9854, Loss: 2.0002
Epoch  21 Batch  150/269 - Train Accuracy: 0.9860, Validation Accuracy: 0.9832, Loss: 1.9681
Epoch  21 Batch  160/269 - Train Accuracy: 0.9902, Validation Accuracy: 0.9831, Loss: 1.9628
Epoch  21 Batch  170/269 - Train Accuracy: 0.9888, Validation Accuracy: 0.9806, Loss: 1.9884
Epoch  21 Batch  180/269 - Train Accuracy: 0.9877, Validation Accuracy: 0.9827, Loss: 1.9856
Epoch  21 Batch  190/269 - Train Accuracy: 0.9875, Validation Accuracy: 0.9801, Loss: 2.0090
Epoch  21 Batch  200/269 - Train Accuracy: 0.9967, Validation Accuracy: 0.9800, Loss: 2.0572
Epoch  21 Batch  210/269 - Train Accuracy: 0.9907, Validation Accuracy: 0.9788, Loss: 1.9778
Epoch  21 Batch  220/269 - Train Accuracy: 0.9894, Validation Accuracy: 0.9743, Loss: 1.9875
Epoch  21 Batch  230/269 - Train Accuracy: 0.9790, Validation Accuracy: 0.9822, Loss: 1.9624
Epoch  21 Batch  240/269 - Train Accuracy: 0.9916, Validation Accuracy: 0.9822, Loss: 2.0483
Epoch  21 Batch  250/269 - Train Accuracy: 0.9916, Validation Accuracy: 0.9814, Loss: 1.9486
Epoch  21 Batch  260/269 - Train Accuracy: 0.9893, Validation Accuracy: 0.9806, Loss: 1.9532
Epoch  22 Batch   10/269 - Train Accuracy: 0.9905, Validation Accuracy: 0.9782, Loss: 2.0107
Epoch  22 Batch   20/269 - Train Accuracy: 0.9923, Validation Accuracy: 0.9795, Loss: 1.9838
Epoch  22 Batch   30/269 - Train Accuracy: 0.9869, Validation Accuracy: 0.9805, Loss: 1.9851
Epoch  22 Batch   40/269 - Train Accuracy: 0.9914, Validation Accuracy: 0.9862, Loss: 1.9853
Epoch  22 Batch   50/269 - Train Accuracy: 0.9860, Validation Accuracy: 0.9872, Loss: 1.9486
Epoch  22 Batch   60/269 - Train Accuracy: 0.9856, Validation Accuracy: 0.9894, Loss: 1.9814
Epoch  22 Batch   70/269 - Train Accuracy: 0.9934, Validation Accuracy: 0.9856, Loss: 1.9859
Epoch  22 Batch   80/269 - Train Accuracy: 0.9855, Validation Accuracy: 0.9822, Loss: 2.0316
Epoch  22 Batch   90/269 - Train Accuracy: 0.9918, Validation Accuracy: 0.9841, Loss: 1.9651
Epoch  22 Batch  100/269 - Train Accuracy: 0.9870, Validation Accuracy: 0.9821, Loss: 1.9732
Epoch  22 Batch  110/269 - Train Accuracy: 0.9924, Validation Accuracy: 0.9857, Loss: 1.9733
Epoch  22 Batch  120/269 - Train Accuracy: 0.9889, Validation Accuracy: 0.9798, Loss: 1.9671
Epoch  22 Batch  130/269 - Train Accuracy: 0.9884, Validation Accuracy: 0.9836, Loss: 2.0383
Epoch  22 Batch  140/269 - Train Accuracy: 0.9905, Validation Accuracy: 0.9865, Loss: 2.0317
Epoch  22 Batch  150/269 - Train Accuracy: 0.9914, Validation Accuracy: 0.9862, Loss: 1.9884
Epoch  22 Batch  160/269 - Train Accuracy: 0.9928, Validation Accuracy: 0.9827, Loss: 1.9991
Epoch  22 Batch  170/269 - Train Accuracy: 0.9871, Validation Accuracy: 0.9839, Loss: 2.0140
Epoch  22 Batch  180/269 - Train Accuracy: 0.9904, Validation Accuracy: 0.9826, Loss: 1.9751
Epoch  22 Batch  190/269 - Train Accuracy: 0.9910, Validation Accuracy: 0.9834, Loss: 2.0002
Epoch  22 Batch  200/269 - Train Accuracy: 0.9952, Validation Accuracy: 0.9820, Loss: 1.9660
Epoch  22 Batch  210/269 - Train Accuracy: 0.9889, Validation Accuracy: 0.9783, Loss: 1.9828
Epoch  22 Batch  220/269 - Train Accuracy: 0.9888, Validation Accuracy: 0.9775, Loss: 1.9816
Epoch  22 Batch  230/269 - Train Accuracy: 0.9887, Validation Accuracy: 0.9799, Loss: 1.9857
Epoch  22 Batch  240/269 - Train Accuracy: 0.9857, Validation Accuracy: 0.9782, Loss: 1.9877
Epoch  22 Batch  250/269 - Train Accuracy: 0.9894, Validation Accuracy: 0.9791, Loss: 1.9600
Epoch  22 Batch  260/269 - Train Accuracy: 0.9887, Validation Accuracy: 0.9755, Loss: 1.9990
Epoch  23 Batch   10/269 - Train Accuracy: 0.9931, Validation Accuracy: 0.9841, Loss: 1.9600
Epoch  23 Batch   20/269 - Train Accuracy: 0.9938, Validation Accuracy: 0.9837, Loss: 1.9799
Epoch  23 Batch   30/269 - Train Accuracy: 0.9847, Validation Accuracy: 0.9779, Loss: 1.9940
Epoch  23 Batch   40/269 - Train Accuracy: 0.9880, Validation Accuracy: 0.9696, Loss: 1.9972
Epoch  23 Batch   50/269 - Train Accuracy: 0.9827, Validation Accuracy: 0.9786, Loss: 2.0395
Epoch  23 Batch   60/269 - Train Accuracy: 0.9869, Validation Accuracy: 0.9812, Loss: 2.0106
Epoch  23 Batch   70/269 - Train Accuracy: 0.9901, Validation Accuracy: 0.9762, Loss: 2.0103
Epoch  23 Batch   80/269 - Train Accuracy: 0.9905, Validation Accuracy: 0.9783, Loss: 2.0207
Epoch  23 Batch   90/269 - Train Accuracy: 0.9840, Validation Accuracy: 0.9795, Loss: 2.0007
Epoch  23 Batch  100/269 - Train Accuracy: 0.9891, Validation Accuracy: 0.9811, Loss: 1.9732
Epoch  23 Batch  110/269 - Train Accuracy: 0.9942, Validation Accuracy: 0.9839, Loss: 1.9582
Epoch  23 Batch  120/269 - Train Accuracy: 0.9846, Validation Accuracy: 0.9785, Loss: 1.9732
Epoch  23 Batch  130/269 - Train Accuracy: 0.9893, Validation Accuracy: 0.9828, Loss: 2.0048
Epoch  23 Batch  140/269 - Train Accuracy: 0.9865, Validation Accuracy: 0.9854, Loss: 2.0049
Epoch  23 Batch  150/269 - Train Accuracy: 0.9885, Validation Accuracy: 0.9821, Loss: 1.9945
Epoch  23 Batch  160/269 - Train Accuracy: 0.9900, Validation Accuracy: 0.9809, Loss: 1.9899
Epoch  23 Batch  170/269 - Train Accuracy: 0.9886, Validation Accuracy: 0.9835, Loss: 2.0570
Epoch  23 Batch  180/269 - Train Accuracy: 0.9965, Validation Accuracy: 0.9822, Loss: 1.9832
Epoch  23 Batch  190/269 - Train Accuracy: 0.9891, Validation Accuracy: 0.9781, Loss: 2.0164
Epoch  23 Batch  200/269 - Train Accuracy: 0.9978, Validation Accuracy: 0.9781, Loss: 1.9910
Epoch  23 Batch  210/269 - Train Accuracy: 0.9885, Validation Accuracy: 0.9795, Loss: 2.0187
Epoch  23 Batch  220/269 - Train Accuracy: 0.9875, Validation Accuracy: 0.9751, Loss: 1.9807
Epoch  23 Batch  230/269 - Train Accuracy: 0.9890, Validation Accuracy: 0.9779, Loss: 2.0048
Epoch  23 Batch  240/269 - Train Accuracy: 0.9885, Validation Accuracy: 0.9803, Loss: 1.9992
Epoch  23 Batch  250/269 - Train Accuracy: 0.9929, Validation Accuracy: 0.9800, Loss: 2.0025
Epoch  23 Batch  260/269 - Train Accuracy: 0.9921, Validation Accuracy: 0.9786, Loss: 2.0044
Epoch  24 Batch   10/269 - Train Accuracy: 0.9928, Validation Accuracy: 0.9805, Loss: 2.0332
Epoch  24 Batch   20/269 - Train Accuracy: 0.9946, Validation Accuracy: 0.9806, Loss: 2.0059
Epoch  24 Batch   30/269 - Train Accuracy: 0.9896, Validation Accuracy: 0.9772, Loss: 2.0326
Epoch  24 Batch   40/269 - Train Accuracy: 0.9946, Validation Accuracy: 0.9845, Loss: 1.9944
Epoch  24 Batch   50/269 - Train Accuracy: 0.9897, Validation Accuracy: 0.9828, Loss: 1.9746
Epoch  24 Batch   60/269 - Train Accuracy: 0.9911, Validation Accuracy: 0.9820, Loss: 1.9781
Epoch  24 Batch   70/269 - Train Accuracy: 0.9953, Validation Accuracy: 0.9779, Loss: 1.9749
Epoch  24 Batch   80/269 - Train Accuracy: 0.9909, Validation Accuracy: 0.9805, Loss: 1.9386
Epoch  24 Batch   90/269 - Train Accuracy: 0.9891, Validation Accuracy: 0.9789, Loss: 1.9697
Epoch  24 Batch  100/269 - Train Accuracy: 0.9903, Validation Accuracy: 0.9793, Loss: 1.9664
Epoch  24 Batch  110/269 - Train Accuracy: 0.9934, Validation Accuracy: 0.9804, Loss: 1.9824
Epoch  24 Batch  120/269 - Train Accuracy: 0.9897, Validation Accuracy: 0.9798, Loss: 2.0109
Epoch  24 Batch  130/269 - Train Accuracy: 0.9937, Validation Accuracy: 0.9804, Loss: 1.9841
Epoch  24 Batch  140/269 - Train Accuracy: 0.9909, Validation Accuracy: 0.9835, Loss: 1.9509
Epoch  24 Batch  150/269 - Train Accuracy: 0.9882, Validation Accuracy: 0.9854, Loss: 2.0031
Epoch  24 Batch  160/269 - Train Accuracy: 0.9926, Validation Accuracy: 0.9826, Loss: 1.9797
Epoch  24 Batch  170/269 - Train Accuracy: 0.9853, Validation Accuracy: 0.9828, Loss: 2.0160
Epoch  24 Batch  180/269 - Train Accuracy: 0.9961, Validation Accuracy: 0.9790, Loss: 1.9769
Epoch  24 Batch  190/269 - Train Accuracy: 0.9887, Validation Accuracy: 0.9806, Loss: 2.0015
Epoch  24 Batch  200/269 - Train Accuracy: 0.9954, Validation Accuracy: 0.9844, Loss: 2.0536
Epoch  24 Batch  210/269 - Train Accuracy: 0.9912, Validation Accuracy: 0.9773, Loss: 2.0058
Epoch  24 Batch  220/269 - Train Accuracy: 0.9941, Validation Accuracy: 0.9802, Loss: 1.9793
Epoch  24 Batch  230/269 - Train Accuracy: 0.9868, Validation Accuracy: 0.9808, Loss: 1.9841
Epoch  24 Batch  240/269 - Train Accuracy: 0.9924, Validation Accuracy: 0.9786, Loss: 1.9968
Epoch  24 Batch  250/269 - Train Accuracy: 0.9930, Validation Accuracy: 0.9776, Loss: 1.9926
Epoch  24 Batch  260/269 - Train Accuracy: 0.9925, Validation Accuracy: 0.9782, Loss: 2.0071
Model Trained and Saved

Save Parameters

Save the batch_size and save_path parameters for inference.


In [58]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)

Checkpoint


In [59]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
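
The helper module's save_params/load_params implementations aren't shown in this notebook. Purely as an assumption, a minimal pickle-based pair along these lines would suffice (the file name params.p is a guess, not necessarily the project's actual choice):

import pickle

def save_params(params):
    # Persist a value (here, the checkpoint save_path) across notebook sessions
    with open('params.p', 'wb') as out_file:
        pickle.dump(params, out_file)

def load_params():
    # Restore whatever save_params stored
    with open('params.p', 'rb') as in_file:
        return pickle.load(in_file)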

Sentence to Sequence

To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

  • Convert the sentence to lowercase
  • Convert words into ids using vocab_to_int
    • Convert words not in the vocabulary to the <UNK> word id.

In [60]:
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    unk_int = vocab_to_int['<UNK>']

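    # Lowercase the sentence, split on whitespace, and fall back to the
    # <UNK> id for any word that is missing from the vocabulary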
    word_ids = [vocab_to_int.get(word, unk_int) for word in sentence.lower().split()]
    return word_ids


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)


Tests Passed
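
As a quick illustration (with a hypothetical toy vocabulary, not the project's real mapping), casing and out-of-vocabulary words are handled like this:

toy_vocab_to_int = {'<UNK>': 2, 'he': 3, 'saw': 4}
print(sentence_to_seq('He saw Zanzibar', toy_vocab_to_int))
# [3, 4, 2] -- 'zanzibar' is not in the vocabulary, so it maps to the <UNK> id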

Translate

This will translate translate_sentence from English to French.


In [63]:
translate_sentence = 'he saw a old yellow truck .'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

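    # The sentence is repeated batch_size times to fill a complete batch, and the
    # decoder is given up to twice the source length to finish its translation.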
    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,
                                         source_sequence_length: [len(translate_sentence)]*batch_size,
                                         keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in translate_logits]))
print('  French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))


INFO:tensorflow:Restoring parameters from checkpoints/dev
Input
  Word Ids:      [44, 33, 74, 59, 167, 38, 88]
  English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']

Prediction
  Word Ids:      [130, 214, 4, 266, 85, 185, 15, 312, 1]
  French Words: il a vu une vieille voiture jaune . <EOS>

Imperfect Translation

You might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words out of the thousands used in everyday English, you'll only see good results on sentences built from those words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.
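
One way to see in advance whether a sentence will translate well is to check which of its words fall outside the source vocabulary (a small sketch reusing the source_vocab_to_int mapping loaded in the Checkpoint cell):

new_sentence = 'the stock market is quiet today .'
oov_words = [word for word in new_sentence.lower().split()
             if word not in source_vocab_to_int]
print(oov_words)  # these words become <UNK> and will likely translate poorly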

You can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and covers a richer range of topics. However, it will take days to train on, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.