Language Translation

In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence-to-sequence model on a dataset of English and French sentences so that it can translate new sentences from English to French.

Get the Data

Since training a model to translate the entire English language to French would take a very long time, we have provided you with a small portion of the English corpus.


In [8]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [2]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing:

target_vocab_to_int['<EOS>']

You can get other word ids using source_vocab_to_int and target_vocab_to_int.
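For example, with a hypothetical toy vocabulary (not the project's actual mappings), a target sentence becomes a list of word ids with the <EOS> id appended:

# Hypothetical toy vocabulary, for illustration only
target_vocab_to_int = {'<EOS>': 1, 'il': 2, 'fait': 3, 'froid': 4}

sentence = 'il fait froid'
id_sentence = [target_vocab_to_int[word] for word in sentence.split()]
id_sentence.append(target_vocab_to_int['<EOS>'])
print(id_sentence)  # [2, 3, 4, 1]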


In [3]:
def single_text_to_ids(text, vocab_to_int, add_EOS):
    """Convert each newline-separated sentence into a list of word ids,
    optionally appending the <EOS> id."""
    id_text = []
    for sentence in text.split('\n'):
        id_sentence = [vocab_to_int[word] for word in sentence.split()]
        if add_EOS:
            id_sentence.append(vocab_to_int['<EOS>'])
        id_text.append(id_sentence)
    return id_text

def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # TODO: Implement Function
    source_id_text = single_text_to_ids(source_text, source_vocab_to_int, False)
    target_id_text = single_text_to_ids(target_text, target_vocab_to_int, True)
    
    return source_id_text, target_id_text

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)


Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

Checkpoint

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [9]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.


In [10]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:

  • model_inputs
  • process_decoding_input
  • encoding_layer
  • decoding_layer_train
  • decoding_layer_infer
  • decoding_layer
  • seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
  • Targets placeholder with rank 2.
  • Learning rate placeholder with rank 0.
  • Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.

Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)


In [11]:
def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
    # TODO: Implement Function
    input = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    return input, targets, learning_rate, keep_prob

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed

Process Decoding Input

Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
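To see the intended transformation concretely, here is a sketch using NumPy instead of TensorFlow, with a hypothetical <GO> id of 0: each target sequence loses its last word id and gains <GO> at the front.

import numpy as np

go_id = 0  # hypothetical <GO> id, for illustration only
target_batch = np.array([[11, 12, 13],
                         [21, 22, 23]])

# Drop the last word id from each sequence, then prepend <GO>
ending = target_batch[:, :-1]
dec_input = np.concatenate([np.full((2, 1), go_id, dtype=target_batch.dtype), ending], axis=1)
print(dec_input)
# [[ 0 11 12]
#  [ 0 21 22]]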


In [12]:
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # TODO: Implement Function
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
    
    return dec_input

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)


Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
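A minimal sketch of the call, assuming the TF 1.0 contrib API used in this project: tf.nn.dynamic_rnn returns both the per-step outputs and the final state, and the encoder only needs to hand the final state on to the decoder.

import tensorflow as tf

# Sketch only: a single-layer cell over placeholder inputs
cell = tf.contrib.rnn.BasicLSTMCell(8)
rnn_inputs = tf.placeholder(tf.float32, [None, None, 4])  # [batch, time, features]
outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
# outputs has shape [batch, time, rnn_size]; final_state is what the encoder returns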


In [13]:
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :return: RNN state
    """
    # TODO: Implement Function
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    enc_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)

    _, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)

    return enc_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)


Tests Passed

Decoding - Training

Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.


In [14]:
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
                         output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Train Logits
    """
    # TODO: Implement Function
    # Training Decoder
    train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
    dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
    train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)

    # Apply output function
    train_logits = output_fn(train_pred)
    
    return train_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)


Tests Passed

In [15]:
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
                         maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param maximum_length: The maximum allowed time steps to decode
    :param vocab_size: Size of vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Inference Logits
    """
    # TODO: Implement Function
    # Inference Decoder
    infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
        output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, 
        maximum_length, vocab_size)

    inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
    
    return inference_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)


Tests Passed

Build the Decoding Layer

Implement decoding_layer() to create a Decoder RNN layer.

  • Create an RNN cell for decoding using rnn_size and num_layers.
  • Create the output function using a lambda to transform its input, logits, to class logits.
  • Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
  • Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
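A minimal sketch of that sharing mechanism, assuming TF 1.0 variable scopes: reopening a scope with reuse=True makes tf.get_variable resolve to the variables created the first time, which is how the training and inference decoders end up sharing weights.

import tensorflow as tf

with tf.variable_scope("demo"):
    w1 = tf.get_variable("w", shape=[2])   # variable is created here
with tf.variable_scope("demo", reuse=True):
    w2 = tf.get_variable("w", shape=[2])   # same variable is looked up, not re-created
print(w1 is w2)  # True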


In [27]:
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
                   num_layers, target_vocab_to_int, keep_prob):
    """
    Create decoding layer
    :param dec_embed_input: Decoder embedded input
    :param dec_embeddings: Decoder embeddings
    :param encoder_state: The encoded state
    :param vocab_size: Size of vocabulary
    :param sequence_length: Sequence Length
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param keep_prob: Dropout keep probability
    :return: Tuple of (Training Logits, Inference Logits)
    """
    # TODO: Implement Function    
    # Decoder RNNs
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)

    with tf.variable_scope("decoding") as decoding_scope:
        # Output Layer
        output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
    
        train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)

    with tf.variable_scope("decoding", reuse=True) as decoding_scope:
        start_of_sequence_id = target_vocab_to_int['<GO>']
        end_of_sequence_id = target_vocab_to_int['<EOS>']
        maximum_length = sequence_length - 1
        inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)
    
    return train_logits, inference_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to the input data for the encoder.
  • Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
  • Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
  • Apply embedding to the target data for the decoder.
  • Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).

In [17]:
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param sequence_length: Sequence Length
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training Logits, Inference Logits)
    """
    # TODO: Implement Function
    
    #Apply embedding to the input data for the encoder.
    enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
    
    #Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
    enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
    
    #Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
    dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
    
    #Apply embedding to the target data for the decoder.
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    
    #Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
    train_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)

    return train_logits, inference_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)


Tests Passed

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set num_layers to the number of layers.
  • Set encoding_embedding_size to the size of the embedding for the encoder.
  • Set decoding_embedding_size to the size of the embedding for the decoder.
  • Set learning_rate to the learning rate.
  • Set keep_probability to the Dropout keep probability.

In [18]:
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5

Build the Graph

Build the graph using the neural network you implemented.


In [19]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])

train_graph = tf.Graph()
with train_graph.as_default():
    input_data, targets, lr, keep_prob = model_inputs()
    sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)
    
    train_logits, inference_logits = seq2seq_model(
        tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
        encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)

    tf.identity(inference_logits, 'logits')
    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            train_logits,
            targets,
            tf.ones([input_shape[0], sequence_length]))

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [20]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time

def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1]), (0,0)],
            'constant')

    return np.mean(np.equal(target, np.argmax(logits, 2)))

train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]

valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch) in enumerate(
                helper.batch_data(train_source, train_target, batch_size)):
            start_time = time.time()
            
            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 sequence_length: target_batch.shape[1],
                 keep_prob: keep_probability})
            
            batch_train_logits = sess.run(
                inference_logits,
                {input_data: source_batch, keep_prob: 1.0})
            batch_valid_logits = sess.run(
                inference_logits,
                {input_data: valid_source, keep_prob: 1.0})
                
            train_acc = get_accuracy(target_batch, batch_train_logits)
            valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
            end_time = time.time()
            if batch_i % 10 == 0:
                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')


Epoch   0 Batch    0/269 - Train Accuracy:  0.242, Validation Accuracy:  0.310, Loss:  5.904
Epoch   0 Batch   10/269 - Train Accuracy:  0.257, Validation Accuracy:  0.332, Loss:  4.702
Epoch   0 Batch   20/269 - Train Accuracy:  0.309, Validation Accuracy:  0.373, Loss:  3.844
Epoch   0 Batch   30/269 - Train Accuracy:  0.352, Validation Accuracy:  0.391, Loss:  3.343
Epoch   0 Batch   40/269 - Train Accuracy:  0.358, Validation Accuracy:  0.419, Loss:  3.242
Epoch   0 Batch   50/269 - Train Accuracy:  0.396, Validation Accuracy:  0.449, Loss:  3.084
Epoch   0 Batch   60/269 - Train Accuracy:  0.425, Validation Accuracy:  0.446, Loss:  2.771
Epoch   0 Batch   70/269 - Train Accuracy:  0.438, Validation Accuracy:  0.458, Loss:  2.718
Epoch   0 Batch   80/269 - Train Accuracy:  0.431, Validation Accuracy:  0.453, Loss:  2.616
Epoch   0 Batch   90/269 - Train Accuracy:  0.415, Validation Accuracy:  0.474, Loss:  2.744
Epoch   0 Batch  100/269 - Train Accuracy:  0.475, Validation Accuracy:  0.485, Loss:  2.435
Epoch   0 Batch  110/269 - Train Accuracy:  0.457, Validation Accuracy:  0.488, Loss:  2.401
Epoch   0 Batch  120/269 - Train Accuracy:  0.450, Validation Accuracy:  0.498, Loss:  2.428
Epoch   0 Batch  130/269 - Train Accuracy:  0.442, Validation Accuracy:  0.496, Loss:  2.366
Epoch   0 Batch  140/269 - Train Accuracy:  0.495, Validation Accuracy:  0.506, Loss:  2.136
Epoch   0 Batch  150/269 - Train Accuracy:  0.487, Validation Accuracy:  0.511, Loss:  2.049
Epoch   0 Batch  160/269 - Train Accuracy:  0.490, Validation Accuracy:  0.517, Loss:  1.974
Epoch   0 Batch  170/269 - Train Accuracy:  0.484, Validation Accuracy:  0.513, Loss:  1.901
Epoch   0 Batch  180/269 - Train Accuracy:  0.475, Validation Accuracy:  0.498, Loss:  1.821
Epoch   0 Batch  190/269 - Train Accuracy:  0.454, Validation Accuracy:  0.475, Loss:  1.767
Epoch   0 Batch  200/269 - Train Accuracy:  0.442, Validation Accuracy:  0.478, Loss:  1.787
Epoch   0 Batch  210/269 - Train Accuracy:  0.434, Validation Accuracy:  0.459, Loss:  1.672
Epoch   0 Batch  220/269 - Train Accuracy:  0.450, Validation Accuracy:  0.453, Loss:  1.576
Epoch   0 Batch  230/269 - Train Accuracy:  0.431, Validation Accuracy:  0.464, Loss:  1.606
Epoch   0 Batch  240/269 - Train Accuracy:  0.498, Validation Accuracy:  0.488, Loss:  1.444
Epoch   0 Batch  250/269 - Train Accuracy:  0.420, Validation Accuracy:  0.470, Loss:  1.575
Epoch   0 Batch  260/269 - Train Accuracy:  0.428, Validation Accuracy:  0.474, Loss:  1.555
Epoch   1 Batch    0/269 - Train Accuracy:  0.433, Validation Accuracy:  0.476, Loss:  1.506
Epoch   1 Batch   10/269 - Train Accuracy:  0.427, Validation Accuracy:  0.469, Loss:  1.464
Epoch   1 Batch   20/269 - Train Accuracy:  0.410, Validation Accuracy:  0.469, Loss:  1.432
Epoch   1 Batch   30/269 - Train Accuracy:  0.450, Validation Accuracy:  0.481, Loss:  1.347
Epoch   1 Batch   40/269 - Train Accuracy:  0.440, Validation Accuracy:  0.475, Loss:  1.369
Epoch   1 Batch   50/269 - Train Accuracy:  0.437, Validation Accuracy:  0.477, Loss:  1.367
Epoch   1 Batch   60/269 - Train Accuracy:  0.488, Validation Accuracy:  0.494, Loss:  1.229
Epoch   1 Batch   70/269 - Train Accuracy:  0.505, Validation Accuracy:  0.509, Loss:  1.252
Epoch   1 Batch   80/269 - Train Accuracy:  0.515, Validation Accuracy:  0.511, Loss:  1.209
Epoch   1 Batch   90/269 - Train Accuracy:  0.447, Validation Accuracy:  0.491, Loss:  1.280
Epoch   1 Batch  100/269 - Train Accuracy:  0.514, Validation Accuracy:  0.503, Loss:  1.159
Epoch   1 Batch  110/269 - Train Accuracy:  0.521, Validation Accuracy:  0.529, Loss:  1.163
Epoch   1 Batch  120/269 - Train Accuracy:  0.490, Validation Accuracy:  0.526, Loss:  1.193
Epoch   1 Batch  130/269 - Train Accuracy:  0.486, Validation Accuracy:  0.530, Loss:  1.200
Epoch   1 Batch  140/269 - Train Accuracy:  0.532, Validation Accuracy:  0.534, Loss:  1.119
Epoch   1 Batch  150/269 - Train Accuracy:  0.524, Validation Accuracy:  0.539, Loss:  1.110
Epoch   1 Batch  160/269 - Train Accuracy:  0.525, Validation Accuracy:  0.540, Loss:  1.094
Epoch   1 Batch  170/269 - Train Accuracy:  0.535, Validation Accuracy:  0.546, Loss:  1.061
Epoch   1 Batch  180/269 - Train Accuracy:  0.529, Validation Accuracy:  0.540, Loss:  1.048
Epoch   1 Batch  190/269 - Train Accuracy:  0.530, Validation Accuracy:  0.541, Loss:  1.025
Epoch   1 Batch  200/269 - Train Accuracy:  0.522, Validation Accuracy:  0.549, Loss:  1.071
Epoch   1 Batch  210/269 - Train Accuracy:  0.548, Validation Accuracy:  0.546, Loss:  1.012
Epoch   1 Batch  220/269 - Train Accuracy:  0.540, Validation Accuracy:  0.547, Loss:  0.969
Epoch   1 Batch  230/269 - Train Accuracy:  0.544, Validation Accuracy:  0.548, Loss:  1.005
Epoch   1 Batch  240/269 - Train Accuracy:  0.574, Validation Accuracy:  0.554, Loss:  0.916
Epoch   1 Batch  250/269 - Train Accuracy:  0.526, Validation Accuracy:  0.552, Loss:  1.001
Epoch   1 Batch  260/269 - Train Accuracy:  0.537, Validation Accuracy:  0.554, Loss:  1.012
Epoch   2 Batch    0/269 - Train Accuracy:  0.542, Validation Accuracy:  0.566, Loss:  1.008
Epoch   2 Batch   10/269 - Train Accuracy:  0.535, Validation Accuracy:  0.560, Loss:  0.972
Epoch   2 Batch   20/269 - Train Accuracy:  0.559, Validation Accuracy:  0.575, Loss:  0.963
Epoch   2 Batch   30/269 - Train Accuracy:  0.564, Validation Accuracy:  0.576, Loss:  0.917
Epoch   2 Batch   40/269 - Train Accuracy:  0.568, Validation Accuracy:  0.581, Loss:  0.950
Epoch   2 Batch   50/269 - Train Accuracy:  0.549, Validation Accuracy:  0.581, Loss:  0.950
Epoch   2 Batch   60/269 - Train Accuracy:  0.592, Validation Accuracy:  0.593, Loss:  0.873
Epoch   2 Batch   70/269 - Train Accuracy:  0.594, Validation Accuracy:  0.593, Loss:  0.893
Epoch   2 Batch   80/269 - Train Accuracy:  0.610, Validation Accuracy:  0.596, Loss:  0.868
Epoch   2 Batch   90/269 - Train Accuracy:  0.554, Validation Accuracy:  0.594, Loss:  0.935
Epoch   2 Batch  100/269 - Train Accuracy:  0.604, Validation Accuracy:  0.599, Loss:  0.854
Epoch   2 Batch  110/269 - Train Accuracy:  0.586, Validation Accuracy:  0.603, Loss:  0.852
Epoch   2 Batch  120/269 - Train Accuracy:  0.583, Validation Accuracy:  0.602, Loss:  0.889
Epoch   2 Batch  130/269 - Train Accuracy:  0.567, Validation Accuracy:  0.599, Loss:  0.903
Epoch   2 Batch  140/269 - Train Accuracy:  0.613, Validation Accuracy:  0.610, Loss:  0.858
Epoch   2 Batch  150/269 - Train Accuracy:  0.594, Validation Accuracy:  0.601, Loss:  0.851
Epoch   2 Batch  160/269 - Train Accuracy:  0.589, Validation Accuracy:  0.600, Loss:  0.833
Epoch   2 Batch  170/269 - Train Accuracy:  0.600, Validation Accuracy:  0.611, Loss:  0.815
Epoch   2 Batch  180/269 - Train Accuracy:  0.596, Validation Accuracy:  0.606, Loss:  0.816
Epoch   2 Batch  190/269 - Train Accuracy:  0.602, Validation Accuracy:  0.609, Loss:  0.795
Epoch   2 Batch  200/269 - Train Accuracy:  0.586, Validation Accuracy:  0.610, Loss:  0.844
Epoch   2 Batch  210/269 - Train Accuracy:  0.607, Validation Accuracy:  0.610, Loss:  0.798
Epoch   2 Batch  220/269 - Train Accuracy:  0.615, Validation Accuracy:  0.612, Loss:  0.772
Epoch   2 Batch  230/269 - Train Accuracy:  0.605, Validation Accuracy:  0.612, Loss:  0.798
Epoch   2 Batch  240/269 - Train Accuracy:  0.632, Validation Accuracy:  0.616, Loss:  0.726
Epoch   2 Batch  250/269 - Train Accuracy:  0.588, Validation Accuracy:  0.614, Loss:  0.809
Epoch   2 Batch  260/269 - Train Accuracy:  0.595, Validation Accuracy:  0.615, Loss:  0.827
Epoch   3 Batch    0/269 - Train Accuracy:  0.605, Validation Accuracy:  0.617, Loss:  0.819
Epoch   3 Batch   10/269 - Train Accuracy:  0.602, Validation Accuracy:  0.620, Loss:  0.795
Epoch   3 Batch   20/269 - Train Accuracy:  0.606, Validation Accuracy:  0.622, Loss:  0.792
Epoch   3 Batch   30/269 - Train Accuracy:  0.619, Validation Accuracy:  0.624, Loss:  0.752
Epoch   3 Batch   40/269 - Train Accuracy:  0.607, Validation Accuracy:  0.622, Loss:  0.783
Epoch   3 Batch   50/269 - Train Accuracy:  0.609, Validation Accuracy:  0.622, Loss:  0.784
Epoch   3 Batch   60/269 - Train Accuracy:  0.627, Validation Accuracy:  0.625, Loss:  0.730
Epoch   3 Batch   70/269 - Train Accuracy:  0.637, Validation Accuracy:  0.628, Loss:  0.747
Epoch   3 Batch   80/269 - Train Accuracy:  0.636, Validation Accuracy:  0.632, Loss:  0.726
Epoch   3 Batch   90/269 - Train Accuracy:  0.587, Validation Accuracy:  0.629, Loss:  0.785
Epoch   3 Batch  100/269 - Train Accuracy:  0.652, Validation Accuracy:  0.633, Loss:  0.717
Epoch   3 Batch  110/269 - Train Accuracy:  0.617, Validation Accuracy:  0.628, Loss:  0.718
Epoch   3 Batch  120/269 - Train Accuracy:  0.611, Validation Accuracy:  0.626, Loss:  0.751
Epoch   3 Batch  130/269 - Train Accuracy:  0.602, Validation Accuracy:  0.637, Loss:  0.768
Epoch   3 Batch  140/269 - Train Accuracy:  0.631, Validation Accuracy:  0.629, Loss:  0.743
Epoch   3 Batch  150/269 - Train Accuracy:  0.625, Validation Accuracy:  0.632, Loss:  0.719
Epoch   3 Batch  160/269 - Train Accuracy:  0.625, Validation Accuracy:  0.634, Loss:  0.712
Epoch   3 Batch  170/269 - Train Accuracy:  0.627, Validation Accuracy:  0.642, Loss:  0.701
Epoch   3 Batch  180/269 - Train Accuracy:  0.633, Validation Accuracy:  0.641, Loss:  0.685
Epoch   3 Batch  190/269 - Train Accuracy:  0.627, Validation Accuracy:  0.636, Loss:  0.687
Epoch   3 Batch  200/269 - Train Accuracy:  0.632, Validation Accuracy:  0.646, Loss:  0.718
Epoch   3 Batch  210/269 - Train Accuracy:  0.638, Validation Accuracy:  0.639, Loss:  0.670
Epoch   3 Batch  220/269 - Train Accuracy:  0.643, Validation Accuracy:  0.644, Loss:  0.662
Epoch   3 Batch  230/269 - Train Accuracy:  0.633, Validation Accuracy:  0.647, Loss:  0.689
Epoch   3 Batch  240/269 - Train Accuracy:  0.654, Validation Accuracy:  0.644, Loss:  0.622
Epoch   3 Batch  250/269 - Train Accuracy:  0.624, Validation Accuracy:  0.646, Loss:  0.695
Epoch   3 Batch  260/269 - Train Accuracy:  0.613, Validation Accuracy:  0.648, Loss:  0.713
Epoch   4 Batch    0/269 - Train Accuracy:  0.629, Validation Accuracy:  0.647, Loss:  0.703
Epoch   4 Batch   10/269 - Train Accuracy:  0.629, Validation Accuracy:  0.650, Loss:  0.684
Epoch   4 Batch   20/269 - Train Accuracy:  0.641, Validation Accuracy:  0.653, Loss:  0.685
Epoch   4 Batch   30/269 - Train Accuracy:  0.645, Validation Accuracy:  0.653, Loss:  0.657
Epoch   4 Batch   40/269 - Train Accuracy:  0.634, Validation Accuracy:  0.650, Loss:  0.692
Epoch   4 Batch   50/269 - Train Accuracy:  0.638, Validation Accuracy:  0.650, Loss:  0.679
Epoch   4 Batch   60/269 - Train Accuracy:  0.649, Validation Accuracy:  0.651, Loss:  0.623
Epoch   4 Batch   70/269 - Train Accuracy:  0.657, Validation Accuracy:  0.650, Loss:  0.652
Epoch   4 Batch   80/269 - Train Accuracy:  0.668, Validation Accuracy:  0.655, Loss:  0.632
Epoch   4 Batch   90/269 - Train Accuracy:  0.611, Validation Accuracy:  0.657, Loss:  0.687
Epoch   4 Batch  100/269 - Train Accuracy:  0.675, Validation Accuracy:  0.655, Loss:  0.621
Epoch   4 Batch  110/269 - Train Accuracy:  0.637, Validation Accuracy:  0.650, Loss:  0.628
Epoch   4 Batch  120/269 - Train Accuracy:  0.641, Validation Accuracy:  0.655, Loss:  0.650
Epoch   4 Batch  130/269 - Train Accuracy:  0.631, Validation Accuracy:  0.661, Loss:  0.659
Epoch   4 Batch  140/269 - Train Accuracy:  0.648, Validation Accuracy:  0.655, Loss:  0.639
Epoch   4 Batch  150/269 - Train Accuracy:  0.668, Validation Accuracy:  0.659, Loss:  0.626
Epoch   4 Batch  160/269 - Train Accuracy:  0.655, Validation Accuracy:  0.663, Loss:  0.613
Epoch   4 Batch  170/269 - Train Accuracy:  0.668, Validation Accuracy:  0.671, Loss:  0.608
Epoch   4 Batch  180/269 - Train Accuracy:  0.667, Validation Accuracy:  0.672, Loss:  0.607
Epoch   4 Batch  190/269 - Train Accuracy:  0.651, Validation Accuracy:  0.666, Loss:  0.594
Epoch   4 Batch  200/269 - Train Accuracy:  0.659, Validation Accuracy:  0.670, Loss:  0.624
Epoch   4 Batch  210/269 - Train Accuracy:  0.672, Validation Accuracy:  0.671, Loss:  0.592
Epoch   4 Batch  220/269 - Train Accuracy:  0.671, Validation Accuracy:  0.675, Loss:  0.577
Epoch   4 Batch  230/269 - Train Accuracy:  0.671, Validation Accuracy:  0.680, Loss:  0.597
Epoch   4 Batch  240/269 - Train Accuracy:  0.705, Validation Accuracy:  0.683, Loss:  0.539
Epoch   4 Batch  250/269 - Train Accuracy:  0.674, Validation Accuracy:  0.677, Loss:  0.602
Epoch   4 Batch  260/269 - Train Accuracy:  0.659, Validation Accuracy:  0.676, Loss:  0.613
Epoch   5 Batch    0/269 - Train Accuracy:  0.680, Validation Accuracy:  0.680, Loss:  0.607
Epoch   5 Batch   10/269 - Train Accuracy:  0.678, Validation Accuracy:  0.686, Loss:  0.594
Epoch   5 Batch   20/269 - Train Accuracy:  0.692, Validation Accuracy:  0.684, Loss:  0.593
Epoch   5 Batch   30/269 - Train Accuracy:  0.680, Validation Accuracy:  0.685, Loss:  0.566
Epoch   5 Batch   40/269 - Train Accuracy:  0.666, Validation Accuracy:  0.683, Loss:  0.592
Epoch   5 Batch   50/269 - Train Accuracy:  0.670, Validation Accuracy:  0.683, Loss:  0.587
Epoch   5 Batch   60/269 - Train Accuracy:  0.683, Validation Accuracy:  0.689, Loss:  0.541
Epoch   5 Batch   70/269 - Train Accuracy:  0.679, Validation Accuracy:  0.682, Loss:  0.572
Epoch   5 Batch   80/269 - Train Accuracy:  0.683, Validation Accuracy:  0.683, Loss:  0.551
Epoch   5 Batch   90/269 - Train Accuracy:  0.640, Validation Accuracy:  0.691, Loss:  0.592
Epoch   5 Batch  100/269 - Train Accuracy:  0.702, Validation Accuracy:  0.690, Loss:  0.544
Epoch   5 Batch  110/269 - Train Accuracy:  0.680, Validation Accuracy:  0.684, Loss:  0.547
Epoch   5 Batch  120/269 - Train Accuracy:  0.673, Validation Accuracy:  0.684, Loss:  0.560
Epoch   5 Batch  130/269 - Train Accuracy:  0.677, Validation Accuracy:  0.692, Loss:  0.576
Epoch   5 Batch  140/269 - Train Accuracy:  0.679, Validation Accuracy:  0.683, Loss:  0.560
Epoch   5 Batch  150/269 - Train Accuracy:  0.697, Validation Accuracy:  0.691, Loss:  0.555
Epoch   5 Batch  160/269 - Train Accuracy:  0.692, Validation Accuracy:  0.688, Loss:  0.538
Epoch   5 Batch  170/269 - Train Accuracy:  0.688, Validation Accuracy:  0.696, Loss:  0.527
Epoch   5 Batch  180/269 - Train Accuracy:  0.703, Validation Accuracy:  0.694, Loss:  0.535
Epoch   5 Batch  190/269 - Train Accuracy:  0.679, Validation Accuracy:  0.688, Loss:  0.526
Epoch   5 Batch  200/269 - Train Accuracy:  0.692, Validation Accuracy:  0.692, Loss:  0.543
Epoch   5 Batch  210/269 - Train Accuracy:  0.703, Validation Accuracy:  0.685, Loss:  0.523
Epoch   5 Batch  220/269 - Train Accuracy:  0.702, Validation Accuracy:  0.685, Loss:  0.499
Epoch   5 Batch  230/269 - Train Accuracy:  0.705, Validation Accuracy:  0.699, Loss:  0.518
Epoch   5 Batch  240/269 - Train Accuracy:  0.727, Validation Accuracy:  0.696, Loss:  0.478
Epoch   5 Batch  250/269 - Train Accuracy:  0.699, Validation Accuracy:  0.702, Loss:  0.524
Epoch   5 Batch  260/269 - Train Accuracy:  0.691, Validation Accuracy:  0.699, Loss:  0.535
Epoch   6 Batch    0/269 - Train Accuracy:  0.724, Validation Accuracy:  0.716, Loss:  0.535
Epoch   6 Batch   10/269 - Train Accuracy:  0.695, Validation Accuracy:  0.711, Loss:  0.531
Epoch   6 Batch   20/269 - Train Accuracy:  0.721, Validation Accuracy:  0.709, Loss:  0.521
Epoch   6 Batch   30/269 - Train Accuracy:  0.719, Validation Accuracy:  0.727, Loss:  0.501
Epoch   6 Batch   40/269 - Train Accuracy:  0.720, Validation Accuracy:  0.730, Loss:  0.519
Epoch   6 Batch   50/269 - Train Accuracy:  0.700, Validation Accuracy:  0.712, Loss:  0.512
Epoch   6 Batch   60/269 - Train Accuracy:  0.732, Validation Accuracy:  0.727, Loss:  0.476
Epoch   6 Batch   70/269 - Train Accuracy:  0.716, Validation Accuracy:  0.718, Loss:  0.506
Epoch   6 Batch   80/269 - Train Accuracy:  0.746, Validation Accuracy:  0.737, Loss:  0.485
Epoch   6 Batch   90/269 - Train Accuracy:  0.701, Validation Accuracy:  0.721, Loss:  0.517
Epoch   6 Batch  100/269 - Train Accuracy:  0.753, Validation Accuracy:  0.721, Loss:  0.472
Epoch   6 Batch  110/269 - Train Accuracy:  0.727, Validation Accuracy:  0.734, Loss:  0.485
Epoch   6 Batch  120/269 - Train Accuracy:  0.727, Validation Accuracy:  0.746, Loss:  0.503
Epoch   6 Batch  130/269 - Train Accuracy:  0.743, Validation Accuracy:  0.750, Loss:  0.511
Epoch   6 Batch  140/269 - Train Accuracy:  0.726, Validation Accuracy:  0.736, Loss:  0.493
Epoch   6 Batch  150/269 - Train Accuracy:  0.743, Validation Accuracy:  0.745, Loss:  0.467
Epoch   6 Batch  160/269 - Train Accuracy:  0.741, Validation Accuracy:  0.733, Loss:  0.474
Epoch   6 Batch  170/269 - Train Accuracy:  0.743, Validation Accuracy:  0.749, Loss:  0.460
Epoch   6 Batch  180/269 - Train Accuracy:  0.749, Validation Accuracy:  0.746, Loss:  0.462
Epoch   6 Batch  190/269 - Train Accuracy:  0.734, Validation Accuracy:  0.737, Loss:  0.459
Epoch   6 Batch  200/269 - Train Accuracy:  0.741, Validation Accuracy:  0.751, Loss:  0.482
Epoch   6 Batch  210/269 - Train Accuracy:  0.745, Validation Accuracy:  0.741, Loss:  0.456
Epoch   6 Batch  220/269 - Train Accuracy:  0.747, Validation Accuracy:  0.744, Loss:  0.443
Epoch   6 Batch  230/269 - Train Accuracy:  0.752, Validation Accuracy:  0.754, Loss:  0.460
Epoch   6 Batch  240/269 - Train Accuracy:  0.764, Validation Accuracy:  0.752, Loss:  0.412
Epoch   6 Batch  250/269 - Train Accuracy:  0.741, Validation Accuracy:  0.745, Loss:  0.463
Epoch   6 Batch  260/269 - Train Accuracy:  0.735, Validation Accuracy:  0.746, Loss:  0.471
Epoch   7 Batch    0/269 - Train Accuracy:  0.759, Validation Accuracy:  0.763, Loss:  0.480
Epoch   7 Batch   10/269 - Train Accuracy:  0.754, Validation Accuracy:  0.765, Loss:  0.457
Epoch   7 Batch   20/269 - Train Accuracy:  0.756, Validation Accuracy:  0.757, Loss:  0.452
Epoch   7 Batch   30/269 - Train Accuracy:  0.752, Validation Accuracy:  0.761, Loss:  0.437
Epoch   7 Batch   40/269 - Train Accuracy:  0.752, Validation Accuracy:  0.762, Loss:  0.458
Epoch   7 Batch   50/269 - Train Accuracy:  0.751, Validation Accuracy:  0.766, Loss:  0.459
Epoch   7 Batch   60/269 - Train Accuracy:  0.768, Validation Accuracy:  0.768, Loss:  0.424
Epoch   7 Batch   70/269 - Train Accuracy:  0.769, Validation Accuracy:  0.762, Loss:  0.447
Epoch   7 Batch   80/269 - Train Accuracy:  0.769, Validation Accuracy:  0.771, Loss:  0.425
Epoch   7 Batch   90/269 - Train Accuracy:  0.742, Validation Accuracy:  0.758, Loss:  0.462
Epoch   7 Batch  100/269 - Train Accuracy:  0.778, Validation Accuracy:  0.759, Loss:  0.415
Epoch   7 Batch  110/269 - Train Accuracy:  0.763, Validation Accuracy:  0.776, Loss:  0.425
Epoch   7 Batch  120/269 - Train Accuracy:  0.755, Validation Accuracy:  0.763, Loss:  0.436
Epoch   7 Batch  130/269 - Train Accuracy:  0.776, Validation Accuracy:  0.771, Loss:  0.448
Epoch   7 Batch  140/269 - Train Accuracy:  0.754, Validation Accuracy:  0.773, Loss:  0.436
Epoch   7 Batch  150/269 - Train Accuracy:  0.769, Validation Accuracy:  0.776, Loss:  0.423
Epoch   7 Batch  160/269 - Train Accuracy:  0.775, Validation Accuracy:  0.780, Loss:  0.417
Epoch   7 Batch  170/269 - Train Accuracy:  0.776, Validation Accuracy:  0.777, Loss:  0.411
Epoch   7 Batch  180/269 - Train Accuracy:  0.779, Validation Accuracy:  0.775, Loss:  0.410
Epoch   7 Batch  190/269 - Train Accuracy:  0.761, Validation Accuracy:  0.784, Loss:  0.409
Epoch   7 Batch  200/269 - Train Accuracy:  0.778, Validation Accuracy:  0.771, Loss:  0.425
Epoch   7 Batch  210/269 - Train Accuracy:  0.778, Validation Accuracy:  0.761, Loss:  0.405
Epoch   7 Batch  220/269 - Train Accuracy:  0.785, Validation Accuracy:  0.777, Loss:  0.390
Epoch   7 Batch  230/269 - Train Accuracy:  0.778, Validation Accuracy:  0.788, Loss:  0.406
Epoch   7 Batch  240/269 - Train Accuracy:  0.807, Validation Accuracy:  0.783, Loss:  0.369
Epoch   7 Batch  250/269 - Train Accuracy:  0.792, Validation Accuracy:  0.781, Loss:  0.412
Epoch   7 Batch  260/269 - Train Accuracy:  0.774, Validation Accuracy:  0.788, Loss:  0.419
Epoch   8 Batch    0/269 - Train Accuracy:  0.787, Validation Accuracy:  0.781, Loss:  0.423
Epoch   8 Batch   10/269 - Train Accuracy:  0.786, Validation Accuracy:  0.785, Loss:  0.414
Epoch   8 Batch   20/269 - Train Accuracy:  0.788, Validation Accuracy:  0.789, Loss:  0.401
Epoch   8 Batch   30/269 - Train Accuracy:  0.779, Validation Accuracy:  0.785, Loss:  0.395
Epoch   8 Batch   40/269 - Train Accuracy:  0.775, Validation Accuracy:  0.796, Loss:  0.403
Epoch   8 Batch   50/269 - Train Accuracy:  0.781, Validation Accuracy:  0.788, Loss:  0.412
Epoch   8 Batch   60/269 - Train Accuracy:  0.799, Validation Accuracy:  0.795, Loss:  0.373
Epoch   8 Batch   70/269 - Train Accuracy:  0.794, Validation Accuracy:  0.786, Loss:  0.395
Epoch   8 Batch   80/269 - Train Accuracy:  0.803, Validation Accuracy:  0.791, Loss:  0.377
Epoch   8 Batch   90/269 - Train Accuracy:  0.780, Validation Accuracy:  0.791, Loss:  0.412
Epoch   8 Batch  100/269 - Train Accuracy:  0.811, Validation Accuracy:  0.796, Loss:  0.369
Epoch   8 Batch  110/269 - Train Accuracy:  0.779, Validation Accuracy:  0.793, Loss:  0.378
Epoch   8 Batch  120/269 - Train Accuracy:  0.793, Validation Accuracy:  0.782, Loss:  0.385
Epoch   8 Batch  130/269 - Train Accuracy:  0.797, Validation Accuracy:  0.792, Loss:  0.393
Epoch   8 Batch  140/269 - Train Accuracy:  0.792, Validation Accuracy:  0.790, Loss:  0.400
Epoch   8 Batch  150/269 - Train Accuracy:  0.800, Validation Accuracy:  0.808, Loss:  0.374
Epoch   8 Batch  160/269 - Train Accuracy:  0.794, Validation Accuracy:  0.795, Loss:  0.372
Epoch   8 Batch  170/269 - Train Accuracy:  0.800, Validation Accuracy:  0.802, Loss:  0.367
Epoch   8 Batch  180/269 - Train Accuracy:  0.795, Validation Accuracy:  0.801, Loss:  0.366
Epoch   8 Batch  190/269 - Train Accuracy:  0.784, Validation Accuracy:  0.799, Loss:  0.357
Epoch   8 Batch  200/269 - Train Accuracy:  0.793, Validation Accuracy:  0.805, Loss:  0.380
Epoch   8 Batch  210/269 - Train Accuracy:  0.810, Validation Accuracy:  0.809, Loss:  0.363
Epoch   8 Batch  220/269 - Train Accuracy:  0.813, Validation Accuracy:  0.799, Loss:  0.350
Epoch   8 Batch  230/269 - Train Accuracy:  0.799, Validation Accuracy:  0.803, Loss:  0.354
Epoch   8 Batch  240/269 - Train Accuracy:  0.828, Validation Accuracy:  0.805, Loss:  0.331
Epoch   8 Batch  250/269 - Train Accuracy:  0.811, Validation Accuracy:  0.806, Loss:  0.372
Epoch   8 Batch  260/269 - Train Accuracy:  0.796, Validation Accuracy:  0.800, Loss:  0.377
Epoch   9 Batch    0/269 - Train Accuracy:  0.814, Validation Accuracy:  0.815, Loss:  0.379
Epoch   9 Batch   10/269 - Train Accuracy:  0.807, Validation Accuracy:  0.810, Loss:  0.369
Epoch   9 Batch   20/269 - Train Accuracy:  0.816, Validation Accuracy:  0.816, Loss:  0.362
Epoch   9 Batch   30/269 - Train Accuracy:  0.801, Validation Accuracy:  0.812, Loss:  0.348
Epoch   9 Batch   40/269 - Train Accuracy:  0.807, Validation Accuracy:  0.817, Loss:  0.372
Epoch   9 Batch   50/269 - Train Accuracy:  0.804, Validation Accuracy:  0.825, Loss:  0.370
Epoch   9 Batch   60/269 - Train Accuracy:  0.810, Validation Accuracy:  0.813, Loss:  0.332
Epoch   9 Batch   70/269 - Train Accuracy:  0.825, Validation Accuracy:  0.821, Loss:  0.351
Epoch   9 Batch   80/269 - Train Accuracy:  0.830, Validation Accuracy:  0.824, Loss:  0.342
Epoch   9 Batch   90/269 - Train Accuracy:  0.809, Validation Accuracy:  0.814, Loss:  0.366
Epoch   9 Batch  100/269 - Train Accuracy:  0.833, Validation Accuracy:  0.823, Loss:  0.329
Epoch   9 Batch  110/269 - Train Accuracy:  0.819, Validation Accuracy:  0.817, Loss:  0.341
Epoch   9 Batch  120/269 - Train Accuracy:  0.814, Validation Accuracy:  0.815, Loss:  0.349
Epoch   9 Batch  130/269 - Train Accuracy:  0.831, Validation Accuracy:  0.827, Loss:  0.365
Epoch   9 Batch  140/269 - Train Accuracy:  0.813, Validation Accuracy:  0.826, Loss:  0.356
Epoch   9 Batch  150/269 - Train Accuracy:  0.819, Validation Accuracy:  0.829, Loss:  0.338
Epoch   9 Batch  160/269 - Train Accuracy:  0.837, Validation Accuracy:  0.835, Loss:  0.336
Epoch   9 Batch  170/269 - Train Accuracy:  0.825, Validation Accuracy:  0.827, Loss:  0.326
Epoch   9 Batch  180/269 - Train Accuracy:  0.832, Validation Accuracy:  0.831, Loss:  0.326
Epoch   9 Batch  190/269 - Train Accuracy:  0.824, Validation Accuracy:  0.829, Loss:  0.322
Epoch   9 Batch  200/269 - Train Accuracy:  0.826, Validation Accuracy:  0.834, Loss:  0.346
Epoch   9 Batch  210/269 - Train Accuracy:  0.839, Validation Accuracy:  0.832, Loss:  0.322
Epoch   9 Batch  220/269 - Train Accuracy:  0.847, Validation Accuracy:  0.837, Loss:  0.317
Epoch   9 Batch  230/269 - Train Accuracy:  0.822, Validation Accuracy:  0.817, Loss:  0.325
Epoch   9 Batch  240/269 - Train Accuracy:  0.857, Validation Accuracy:  0.835, Loss:  0.296
Epoch   9 Batch  250/269 - Train Accuracy:  0.837, Validation Accuracy:  0.831, Loss:  0.335
Epoch   9 Batch  260/269 - Train Accuracy:  0.812, Validation Accuracy:  0.833, Loss:  0.343
Epoch  10 Batch    0/269 - Train Accuracy:  0.832, Validation Accuracy:  0.834, Loss:  0.337
Epoch  10 Batch   10/269 - Train Accuracy:  0.838, Validation Accuracy:  0.838, Loss:  0.332
Epoch  10 Batch   20/269 - Train Accuracy:  0.842, Validation Accuracy:  0.829, Loss:  0.326
Epoch  10 Batch   30/269 - Train Accuracy:  0.837, Validation Accuracy:  0.849, Loss:  0.317
Epoch  10 Batch   40/269 - Train Accuracy:  0.827, Validation Accuracy:  0.833, Loss:  0.335
Epoch  10 Batch   50/269 - Train Accuracy:  0.814, Validation Accuracy:  0.843, Loss:  0.337
Epoch  10 Batch   60/269 - Train Accuracy:  0.836, Validation Accuracy:  0.841, Loss:  0.302
Epoch  10 Batch   70/269 - Train Accuracy:  0.845, Validation Accuracy:  0.839, Loss:  0.329
Epoch  10 Batch   80/269 - Train Accuracy:  0.852, Validation Accuracy:  0.842, Loss:  0.313
Epoch  10 Batch   90/269 - Train Accuracy:  0.832, Validation Accuracy:  0.842, Loss:  0.333
Epoch  10 Batch  100/269 - Train Accuracy:  0.852, Validation Accuracy:  0.848, Loss:  0.299
Epoch  10 Batch  110/269 - Train Accuracy:  0.842, Validation Accuracy:  0.845, Loss:  0.309
Epoch  10 Batch  120/269 - Train Accuracy:  0.835, Validation Accuracy:  0.833, Loss:  0.313
Epoch  10 Batch  130/269 - Train Accuracy:  0.834, Validation Accuracy:  0.835, Loss:  0.320
Epoch  10 Batch  140/269 - Train Accuracy:  0.836, Validation Accuracy:  0.848, Loss:  0.317
Epoch  10 Batch  150/269 - Train Accuracy:  0.840, Validation Accuracy:  0.844, Loss:  0.310
Epoch  10 Batch  160/269 - Train Accuracy:  0.853, Validation Accuracy:  0.843, Loss:  0.304
Epoch  10 Batch  170/269 - Train Accuracy:  0.852, Validation Accuracy:  0.844, Loss:  0.288
Epoch  10 Batch  180/269 - Train Accuracy:  0.855, Validation Accuracy:  0.845, Loss:  0.302
Epoch  10 Batch  190/269 - Train Accuracy:  0.853, Validation Accuracy:  0.843, Loss:  0.298
Epoch  10 Batch  200/269 - Train Accuracy:  0.836, Validation Accuracy:  0.849, Loss:  0.317
Epoch  10 Batch  210/269 - Train Accuracy:  0.857, Validation Accuracy:  0.851, Loss:  0.299
Epoch  10 Batch  220/269 - Train Accuracy:  0.854, Validation Accuracy:  0.853, Loss:  0.287
Epoch  10 Batch  230/269 - Train Accuracy:  0.849, Validation Accuracy:  0.854, Loss:  0.289
Epoch  10 Batch  240/269 - Train Accuracy:  0.865, Validation Accuracy:  0.847, Loss:  0.273
Epoch  10 Batch  250/269 - Train Accuracy:  0.866, Validation Accuracy:  0.841, Loss:  0.303
Epoch  10 Batch  260/269 - Train Accuracy:  0.832, Validation Accuracy:  0.851, Loss:  0.314
Epoch  11 Batch    0/269 - Train Accuracy:  0.856, Validation Accuracy:  0.852, Loss:  0.313
Epoch  11 Batch   10/269 - Train Accuracy:  0.857, Validation Accuracy:  0.858, Loss:  0.291
Epoch  11 Batch   20/269 - Train Accuracy:  0.857, Validation Accuracy:  0.853, Loss:  0.293
Epoch  11 Batch   30/269 - Train Accuracy:  0.852, Validation Accuracy:  0.848, Loss:  0.281
Epoch  11 Batch   40/269 - Train Accuracy:  0.844, Validation Accuracy:  0.851, Loss:  0.305
Epoch  11 Batch   50/269 - Train Accuracy:  0.835, Validation Accuracy:  0.862, Loss:  0.298
Epoch  11 Batch   60/269 - Train Accuracy:  0.866, Validation Accuracy:  0.858, Loss:  0.276
Epoch  11 Batch   70/269 - Train Accuracy:  0.863, Validation Accuracy:  0.862, Loss:  0.291
Epoch  11 Batch   80/269 - Train Accuracy:  0.866, Validation Accuracy:  0.855, Loss:  0.275
Epoch  11 Batch   90/269 - Train Accuracy:  0.856, Validation Accuracy:  0.862, Loss:  0.308
Epoch  11 Batch  100/269 - Train Accuracy:  0.877, Validation Accuracy:  0.861, Loss:  0.274
Epoch  11 Batch  110/269 - Train Accuracy:  0.864, Validation Accuracy:  0.860, Loss:  0.264
Epoch  11 Batch  120/269 - Train Accuracy:  0.863, Validation Accuracy:  0.852, Loss:  0.292
Epoch  11 Batch  130/269 - Train Accuracy:  0.855, Validation Accuracy:  0.854, Loss:  0.287
Epoch  11 Batch  140/269 - Train Accuracy:  0.858, Validation Accuracy:  0.860, Loss:  0.293
Epoch  11 Batch  150/269 - Train Accuracy:  0.853, Validation Accuracy:  0.854, Loss:  0.278
Epoch  11 Batch  160/269 - Train Accuracy:  0.867, Validation Accuracy:  0.866, Loss:  0.275
Epoch  11 Batch  170/269 - Train Accuracy:  0.869, Validation Accuracy:  0.863, Loss:  0.267
Epoch  11 Batch  180/269 - Train Accuracy:  0.868, Validation Accuracy:  0.851, Loss:  0.270
Epoch  11 Batch  190/269 - Train Accuracy:  0.863, Validation Accuracy:  0.866, Loss:  0.267
Epoch  11 Batch  200/269 - Train Accuracy:  0.859, Validation Accuracy:  0.853, Loss:  0.282
Epoch  11 Batch  210/269 - Train Accuracy:  0.864, Validation Accuracy:  0.871, Loss:  0.274
Epoch  11 Batch  220/269 - Train Accuracy:  0.873, Validation Accuracy:  0.867, Loss:  0.263
Epoch  11 Batch  230/269 - Train Accuracy:  0.867, Validation Accuracy:  0.863, Loss:  0.268
Epoch  11 Batch  240/269 - Train Accuracy:  0.889, Validation Accuracy:  0.865, Loss:  0.247
Epoch  11 Batch  250/269 - Train Accuracy:  0.875, Validation Accuracy:  0.862, Loss:  0.280
Epoch  11 Batch  260/269 - Train Accuracy:  0.856, Validation Accuracy:  0.867, Loss:  0.280
Epoch  12 Batch    0/269 - Train Accuracy:  0.871, Validation Accuracy:  0.869, Loss:  0.280
Epoch  12 Batch   10/269 - Train Accuracy:  0.867, Validation Accuracy:  0.863, Loss:  0.264
Epoch  12 Batch   20/269 - Train Accuracy:  0.877, Validation Accuracy:  0.866, Loss:  0.268
Epoch  12 Batch   30/269 - Train Accuracy:  0.867, Validation Accuracy:  0.871, Loss:  0.260
Epoch  12 Batch   40/269 - Train Accuracy:  0.853, Validation Accuracy:  0.869, Loss:  0.278
Epoch  12 Batch   50/269 - Train Accuracy:  0.841, Validation Accuracy:  0.866, Loss:  0.277
Epoch  12 Batch   60/269 - Train Accuracy:  0.872, Validation Accuracy:  0.869, Loss:  0.252
Epoch  12 Batch   70/269 - Train Accuracy:  0.875, Validation Accuracy:  0.862, Loss:  0.267
Epoch  12 Batch   80/269 - Train Accuracy:  0.878, Validation Accuracy:  0.860, Loss:  0.257
Epoch  12 Batch   90/269 - Train Accuracy:  0.870, Validation Accuracy:  0.872, Loss:  0.277
Epoch  12 Batch  100/269 - Train Accuracy:  0.881, Validation Accuracy:  0.861, Loss:  0.250
Epoch  12 Batch  110/269 - Train Accuracy:  0.865, Validation Accuracy:  0.868, Loss:  0.252
Epoch  12 Batch  120/269 - Train Accuracy:  0.880, Validation Accuracy:  0.864, Loss:  0.260
Epoch  12 Batch  130/269 - Train Accuracy:  0.872, Validation Accuracy:  0.863, Loss:  0.265
Epoch  12 Batch  140/269 - Train Accuracy:  0.862, Validation Accuracy:  0.857, Loss:  0.268
Epoch  12 Batch  150/269 - Train Accuracy:  0.862, Validation Accuracy:  0.867, Loss:  0.256
Epoch  12 Batch  160/269 - Train Accuracy:  0.875, Validation Accuracy:  0.872, Loss:  0.253
Epoch  12 Batch  170/269 - Train Accuracy:  0.873, Validation Accuracy:  0.874, Loss:  0.252
Epoch  12 Batch  180/269 - Train Accuracy:  0.878, Validation Accuracy:  0.863, Loss:  0.248
Epoch  12 Batch  190/269 - Train Accuracy:  0.872, Validation Accuracy:  0.867, Loss:  0.245
Epoch  12 Batch  200/269 - Train Accuracy:  0.873, Validation Accuracy:  0.872, Loss:  0.257
Epoch  12 Batch  210/269 - Train Accuracy:  0.878, Validation Accuracy:  0.876, Loss:  0.242
Epoch  12 Batch  220/269 - Train Accuracy:  0.875, Validation Accuracy:  0.873, Loss:  0.238
Epoch  12 Batch  230/269 - Train Accuracy:  0.884, Validation Accuracy:  0.877, Loss:  0.248
Epoch  12 Batch  240/269 - Train Accuracy:  0.899, Validation Accuracy:  0.872, Loss:  0.227
Epoch  12 Batch  250/269 - Train Accuracy:  0.882, Validation Accuracy:  0.875, Loss:  0.249
Epoch  12 Batch  260/269 - Train Accuracy:  0.869, Validation Accuracy:  0.875, Loss:  0.254
Epoch  13 Batch    0/269 - Train Accuracy:  0.886, Validation Accuracy:  0.878, Loss:  0.276
Epoch  13 Batch   10/269 - Train Accuracy:  0.887, Validation Accuracy:  0.880, Loss:  0.245
Epoch  13 Batch   20/269 - Train Accuracy:  0.880, Validation Accuracy:  0.874, Loss:  0.247
Epoch  13 Batch   30/269 - Train Accuracy:  0.876, Validation Accuracy:  0.871, Loss:  0.236
Epoch  13 Batch   40/269 - Train Accuracy:  0.864, Validation Accuracy:  0.876, Loss:  0.256
Epoch  13 Batch   50/269 - Train Accuracy:  0.859, Validation Accuracy:  0.877, Loss:  0.264
Epoch  13 Batch   60/269 - Train Accuracy:  0.888, Validation Accuracy:  0.882, Loss:  0.231
Epoch  13 Batch   70/269 - Train Accuracy:  0.882, Validation Accuracy:  0.881, Loss:  0.252
Epoch  13 Batch   80/269 - Train Accuracy:  0.889, Validation Accuracy:  0.878, Loss:  0.242
Epoch  13 Batch   90/269 - Train Accuracy:  0.874, Validation Accuracy:  0.873, Loss:  0.254
Epoch  13 Batch  100/269 - Train Accuracy:  0.897, Validation Accuracy:  0.883, Loss:  0.223
Epoch  13 Batch  110/269 - Train Accuracy:  0.887, Validation Accuracy:  0.884, Loss:  0.229
Epoch  13 Batch  120/269 - Train Accuracy:  0.892, Validation Accuracy:  0.875, Loss:  0.241
Epoch  13 Batch  130/269 - Train Accuracy:  0.886, Validation Accuracy:  0.878, Loss:  0.242
Epoch  13 Batch  140/269 - Train Accuracy:  0.872, Validation Accuracy:  0.878, Loss:  0.254
Epoch  13 Batch  150/269 - Train Accuracy:  0.883, Validation Accuracy:  0.889, Loss:  0.235
Epoch  13 Batch  160/269 - Train Accuracy:  0.882, Validation Accuracy:  0.890, Loss:  0.234
Epoch  13 Batch  170/269 - Train Accuracy:  0.887, Validation Accuracy:  0.886, Loss:  0.222
Epoch  13 Batch  180/269 - Train Accuracy:  0.890, Validation Accuracy:  0.880, Loss:  0.227
Epoch  13 Batch  190/269 - Train Accuracy:  0.883, Validation Accuracy:  0.880, Loss:  0.225
Epoch  13 Batch  200/269 - Train Accuracy:  0.884, Validation Accuracy:  0.881, Loss:  0.236
Epoch  13 Batch  210/269 - Train Accuracy:  0.895, Validation Accuracy:  0.886, Loss:  0.228
Epoch  13 Batch  220/269 - Train Accuracy:  0.895, Validation Accuracy:  0.889, Loss:  0.219
Epoch  13 Batch  230/269 - Train Accuracy:  0.892, Validation Accuracy:  0.892, Loss:  0.224
Epoch  13 Batch  240/269 - Train Accuracy:  0.897, Validation Accuracy:  0.881, Loss:  0.209
Epoch  13 Batch  250/269 - Train Accuracy:  0.892, Validation Accuracy:  0.892, Loss:  0.232
Epoch  13 Batch  260/269 - Train Accuracy:  0.875, Validation Accuracy:  0.894, Loss:  0.236
Epoch  14 Batch    0/269 - Train Accuracy:  0.899, Validation Accuracy:  0.886, Loss:  0.245
Epoch  14 Batch   10/269 - Train Accuracy:  0.897, Validation Accuracy:  0.888, Loss:  0.222
Epoch  14 Batch   20/269 - Train Accuracy:  0.893, Validation Accuracy:  0.886, Loss:  0.233
Epoch  14 Batch   30/269 - Train Accuracy:  0.888, Validation Accuracy:  0.895, Loss:  0.220
Epoch  14 Batch   40/269 - Train Accuracy:  0.876, Validation Accuracy:  0.884, Loss:  0.239
Epoch  14 Batch   50/269 - Train Accuracy:  0.868, Validation Accuracy:  0.889, Loss:  0.240
Epoch  14 Batch   60/269 - Train Accuracy:  0.902, Validation Accuracy:  0.900, Loss:  0.210
Epoch  14 Batch   70/269 - Train Accuracy:  0.897, Validation Accuracy:  0.894, Loss:  0.227
Epoch  14 Batch   80/269 - Train Accuracy:  0.889, Validation Accuracy:  0.891, Loss:  0.219
Epoch  14 Batch   90/269 - Train Accuracy:  0.886, Validation Accuracy:  0.892, Loss:  0.234
Epoch  14 Batch  100/269 - Train Accuracy:  0.908, Validation Accuracy:  0.899, Loss:  0.208
Epoch  14 Batch  110/269 - Train Accuracy:  0.896, Validation Accuracy:  0.899, Loss:  0.209
Epoch  14 Batch  120/269 - Train Accuracy:  0.896, Validation Accuracy:  0.896, Loss:  0.222
Epoch  14 Batch  130/269 - Train Accuracy:  0.899, Validation Accuracy:  0.882, Loss:  0.227
Epoch  14 Batch  140/269 - Train Accuracy:  0.879, Validation Accuracy:  0.893, Loss:  0.230
Epoch  14 Batch  150/269 - Train Accuracy:  0.893, Validation Accuracy:  0.897, Loss:  0.214
Epoch  14 Batch  160/269 - Train Accuracy:  0.900, Validation Accuracy:  0.898, Loss:  0.213
Epoch  14 Batch  170/269 - Train Accuracy:  0.898, Validation Accuracy:  0.894, Loss:  0.216
Epoch  14 Batch  180/269 - Train Accuracy:  0.904, Validation Accuracy:  0.897, Loss:  0.212
Epoch  14 Batch  190/269 - Train Accuracy:  0.895, Validation Accuracy:  0.890, Loss:  0.209
Epoch  14 Batch  200/269 - Train Accuracy:  0.894, Validation Accuracy:  0.902, Loss:  0.224
Epoch  14 Batch  210/269 - Train Accuracy:  0.888, Validation Accuracy:  0.900, Loss:  0.215
Epoch  14 Batch  220/269 - Train Accuracy:  0.892, Validation Accuracy:  0.893, Loss:  0.201
Epoch  14 Batch  230/269 - Train Accuracy:  0.902, Validation Accuracy:  0.895, Loss:  0.207
Epoch  14 Batch  240/269 - Train Accuracy:  0.914, Validation Accuracy:  0.907, Loss:  0.184
Epoch  14 Batch  250/269 - Train Accuracy:  0.906, Validation Accuracy:  0.908, Loss:  0.205
Epoch  14 Batch  260/269 - Train Accuracy:  0.890, Validation Accuracy:  0.902, Loss:  0.216
Epoch  15 Batch    0/269 - Train Accuracy:  0.903, Validation Accuracy:  0.890, Loss:  0.231
Epoch  15 Batch   10/269 - Train Accuracy:  0.903, Validation Accuracy:  0.903, Loss:  0.204
Epoch  15 Batch   20/269 - Train Accuracy:  0.901, Validation Accuracy:  0.898, Loss:  0.217
Epoch  15 Batch   30/269 - Train Accuracy:  0.897, Validation Accuracy:  0.903, Loss:  0.203
Epoch  15 Batch   40/269 - Train Accuracy:  0.880, Validation Accuracy:  0.909, Loss:  0.219
Epoch  15 Batch   50/269 - Train Accuracy:  0.881, Validation Accuracy:  0.904, Loss:  0.229
Epoch  15 Batch   60/269 - Train Accuracy:  0.910, Validation Accuracy:  0.906, Loss:  0.192
Epoch  15 Batch   70/269 - Train Accuracy:  0.904, Validation Accuracy:  0.906, Loss:  0.221
Epoch  15 Batch   80/269 - Train Accuracy:  0.911, Validation Accuracy:  0.908, Loss:  0.196
Epoch  15 Batch   90/269 - Train Accuracy:  0.902, Validation Accuracy:  0.912, Loss:  0.213
Epoch  15 Batch  100/269 - Train Accuracy:  0.909, Validation Accuracy:  0.904, Loss:  0.205
Epoch  15 Batch  110/269 - Train Accuracy:  0.896, Validation Accuracy:  0.907, Loss:  0.196
Epoch  15 Batch  120/269 - Train Accuracy:  0.906, Validation Accuracy:  0.909, Loss:  0.205
Epoch  15 Batch  130/269 - Train Accuracy:  0.902, Validation Accuracy:  0.911, Loss:  0.211
Epoch  15 Batch  140/269 - Train Accuracy:  0.888, Validation Accuracy:  0.900, Loss:  0.218
Epoch  15 Batch  150/269 - Train Accuracy:  0.905, Validation Accuracy:  0.909, Loss:  0.198
Epoch  15 Batch  160/269 - Train Accuracy:  0.906, Validation Accuracy:  0.907, Loss:  0.198
Epoch  15 Batch  170/269 - Train Accuracy:  0.899, Validation Accuracy:  0.908, Loss:  0.192
Epoch  15 Batch  180/269 - Train Accuracy:  0.904, Validation Accuracy:  0.892, Loss:  0.195
Epoch  15 Batch  190/269 - Train Accuracy:  0.905, Validation Accuracy:  0.907, Loss:  0.197
Epoch  15 Batch  200/269 - Train Accuracy:  0.902, Validation Accuracy:  0.909, Loss:  0.201
Epoch  15 Batch  210/269 - Train Accuracy:  0.900, Validation Accuracy:  0.913, Loss:  0.181
Epoch  15 Batch  220/269 - Train Accuracy:  0.904, Validation Accuracy:  0.905, Loss:  0.194
Epoch  15 Batch  230/269 - Train Accuracy:  0.916, Validation Accuracy:  0.916, Loss:  0.195
Epoch  15 Batch  240/269 - Train Accuracy:  0.917, Validation Accuracy:  0.912, Loss:  0.180
Epoch  15 Batch  250/269 - Train Accuracy:  0.917, Validation Accuracy:  0.915, Loss:  0.193
Epoch  15 Batch  260/269 - Train Accuracy:  0.905, Validation Accuracy:  0.911, Loss:  0.210
Epoch  16 Batch    0/269 - Train Accuracy:  0.914, Validation Accuracy:  0.911, Loss:  0.208
Epoch  16 Batch   10/269 - Train Accuracy:  0.904, Validation Accuracy:  0.914, Loss:  0.191
Epoch  16 Batch   20/269 - Train Accuracy:  0.909, Validation Accuracy:  0.919, Loss:  0.190
Epoch  16 Batch   30/269 - Train Accuracy:  0.907, Validation Accuracy:  0.919, Loss:  0.182
Epoch  16 Batch   40/269 - Train Accuracy:  0.892, Validation Accuracy:  0.917, Loss:  0.206
Epoch  16 Batch   50/269 - Train Accuracy:  0.894, Validation Accuracy:  0.914, Loss:  0.196
Epoch  16 Batch   60/269 - Train Accuracy:  0.916, Validation Accuracy:  0.915, Loss:  0.172
Epoch  16 Batch   70/269 - Train Accuracy:  0.910, Validation Accuracy:  0.920, Loss:  0.193
Epoch  16 Batch   80/269 - Train Accuracy:  0.912, Validation Accuracy:  0.912, Loss:  0.197
Epoch  16 Batch   90/269 - Train Accuracy:  0.910, Validation Accuracy:  0.918, Loss:  0.199
Epoch  16 Batch  100/269 - Train Accuracy:  0.922, Validation Accuracy:  0.915, Loss:  0.176
Epoch  16 Batch  110/269 - Train Accuracy:  0.913, Validation Accuracy:  0.920, Loss:  0.181
Epoch  16 Batch  120/269 - Train Accuracy:  0.916, Validation Accuracy:  0.923, Loss:  0.188
Epoch  16 Batch  130/269 - Train Accuracy:  0.918, Validation Accuracy:  0.915, Loss:  0.197
Epoch  16 Batch  140/269 - Train Accuracy:  0.897, Validation Accuracy:  0.919, Loss:  0.200
Epoch  16 Batch  150/269 - Train Accuracy:  0.909, Validation Accuracy:  0.918, Loss:  0.191
Epoch  16 Batch  160/269 - Train Accuracy:  0.908, Validation Accuracy:  0.917, Loss:  0.180
Epoch  16 Batch  170/269 - Train Accuracy:  0.908, Validation Accuracy:  0.917, Loss:  0.176
Epoch  16 Batch  180/269 - Train Accuracy:  0.914, Validation Accuracy:  0.920, Loss:  0.175
Epoch  16 Batch  190/269 - Train Accuracy:  0.912, Validation Accuracy:  0.915, Loss:  0.181
Epoch  16 Batch  200/269 - Train Accuracy:  0.907, Validation Accuracy:  0.919, Loss:  0.185
Epoch  16 Batch  210/269 - Train Accuracy:  0.903, Validation Accuracy:  0.916, Loss:  0.182
Epoch  16 Batch  220/269 - Train Accuracy:  0.908, Validation Accuracy:  0.922, Loss:  0.175
Epoch  16 Batch  230/269 - Train Accuracy:  0.914, Validation Accuracy:  0.918, Loss:  0.182
Epoch  16 Batch  240/269 - Train Accuracy:  0.933, Validation Accuracy:  0.921, Loss:  0.172
Epoch  16 Batch  250/269 - Train Accuracy:  0.915, Validation Accuracy:  0.922, Loss:  0.175
Epoch  16 Batch  260/269 - Train Accuracy:  0.918, Validation Accuracy:  0.927, Loss:  0.182
Epoch  17 Batch    0/269 - Train Accuracy:  0.919, Validation Accuracy:  0.918, Loss:  0.193
Epoch  17 Batch   10/269 - Train Accuracy:  0.918, Validation Accuracy:  0.924, Loss:  0.169
Epoch  17 Batch   20/269 - Train Accuracy:  0.916, Validation Accuracy:  0.928, Loss:  0.180
Epoch  17 Batch   30/269 - Train Accuracy:  0.910, Validation Accuracy:  0.923, Loss:  0.179
Epoch  17 Batch   40/269 - Train Accuracy:  0.898, Validation Accuracy:  0.919, Loss:  0.191
Epoch  17 Batch   50/269 - Train Accuracy:  0.900, Validation Accuracy:  0.931, Loss:  0.194
Epoch  17 Batch   60/269 - Train Accuracy:  0.919, Validation Accuracy:  0.922, Loss:  0.160
Epoch  17 Batch   70/269 - Train Accuracy:  0.915, Validation Accuracy:  0.918, Loss:  0.187
Epoch  17 Batch   80/269 - Train Accuracy:  0.920, Validation Accuracy:  0.925, Loss:  0.171
Epoch  17 Batch   90/269 - Train Accuracy:  0.914, Validation Accuracy:  0.918, Loss:  0.181
Epoch  17 Batch  100/269 - Train Accuracy:  0.926, Validation Accuracy:  0.926, Loss:  0.171
Epoch  17 Batch  110/269 - Train Accuracy:  0.907, Validation Accuracy:  0.919, Loss:  0.169
Epoch  17 Batch  120/269 - Train Accuracy:  0.914, Validation Accuracy:  0.930, Loss:  0.178
Epoch  17 Batch  130/269 - Train Accuracy:  0.922, Validation Accuracy:  0.924, Loss:  0.171
Epoch  17 Batch  140/269 - Train Accuracy:  0.906, Validation Accuracy:  0.924, Loss:  0.190
Epoch  17 Batch  150/269 - Train Accuracy:  0.917, Validation Accuracy:  0.926, Loss:  0.168
Epoch  17 Batch  160/269 - Train Accuracy:  0.910, Validation Accuracy:  0.931, Loss:  0.172
Epoch  17 Batch  170/269 - Train Accuracy:  0.918, Validation Accuracy:  0.927, Loss:  0.159
Epoch  17 Batch  180/269 - Train Accuracy:  0.922, Validation Accuracy:  0.926, Loss:  0.165
Epoch  17 Batch  190/269 - Train Accuracy:  0.922, Validation Accuracy:  0.923, Loss:  0.168
Epoch  17 Batch  200/269 - Train Accuracy:  0.914, Validation Accuracy:  0.927, Loss:  0.176
Epoch  17 Batch  210/269 - Train Accuracy:  0.924, Validation Accuracy:  0.926, Loss:  0.158
Epoch  17 Batch  220/269 - Train Accuracy:  0.915, Validation Accuracy:  0.927, Loss:  0.170
Epoch  17 Batch  230/269 - Train Accuracy:  0.927, Validation Accuracy:  0.930, Loss:  0.167
Epoch  17 Batch  240/269 - Train Accuracy:  0.928, Validation Accuracy:  0.924, Loss:  0.146
Epoch  17 Batch  250/269 - Train Accuracy:  0.920, Validation Accuracy:  0.928, Loss:  0.173
Epoch  17 Batch  260/269 - Train Accuracy:  0.913, Validation Accuracy:  0.927, Loss:  0.174
Epoch  18 Batch    0/269 - Train Accuracy:  0.931, Validation Accuracy:  0.930, Loss:  0.180
Epoch  18 Batch   10/269 - Train Accuracy:  0.917, Validation Accuracy:  0.925, Loss:  0.160
Epoch  18 Batch   20/269 - Train Accuracy:  0.914, Validation Accuracy:  0.931, Loss:  0.171
Epoch  18 Batch   30/269 - Train Accuracy:  0.919, Validation Accuracy:  0.927, Loss:  0.159
Epoch  18 Batch   40/269 - Train Accuracy:  0.899, Validation Accuracy:  0.926, Loss:  0.171
Epoch  18 Batch   50/269 - Train Accuracy:  0.905, Validation Accuracy:  0.934, Loss:  0.174
Epoch  18 Batch   60/269 - Train Accuracy:  0.929, Validation Accuracy:  0.929, Loss:  0.152
Epoch  18 Batch   70/269 - Train Accuracy:  0.920, Validation Accuracy:  0.926, Loss:  0.169
Epoch  18 Batch   80/269 - Train Accuracy:  0.922, Validation Accuracy:  0.927, Loss:  0.159
Epoch  18 Batch   90/269 - Train Accuracy:  0.924, Validation Accuracy:  0.926, Loss:  0.168
Epoch  18 Batch  100/269 - Train Accuracy:  0.930, Validation Accuracy:  0.930, Loss:  0.157
Epoch  18 Batch  110/269 - Train Accuracy:  0.922, Validation Accuracy:  0.929, Loss:  0.151
Epoch  18 Batch  120/269 - Train Accuracy:  0.929, Validation Accuracy:  0.932, Loss:  0.160
Epoch  18 Batch  130/269 - Train Accuracy:  0.923, Validation Accuracy:  0.930, Loss:  0.159
Epoch  18 Batch  140/269 - Train Accuracy:  0.910, Validation Accuracy:  0.931, Loss:  0.172
Epoch  18 Batch  150/269 - Train Accuracy:  0.920, Validation Accuracy:  0.935, Loss:  0.163
Epoch  18 Batch  160/269 - Train Accuracy:  0.916, Validation Accuracy:  0.933, Loss:  0.166
Epoch  18 Batch  170/269 - Train Accuracy:  0.923, Validation Accuracy:  0.937, Loss:  0.161
Epoch  18 Batch  180/269 - Train Accuracy:  0.924, Validation Accuracy:  0.939, Loss:  0.154
Epoch  18 Batch  190/269 - Train Accuracy:  0.922, Validation Accuracy:  0.928, Loss:  0.150
Epoch  18 Batch  200/269 - Train Accuracy:  0.917, Validation Accuracy:  0.932, Loss:  0.158
Epoch  18 Batch  210/269 - Train Accuracy:  0.921, Validation Accuracy:  0.933, Loss:  0.150
Epoch  18 Batch  220/269 - Train Accuracy:  0.922, Validation Accuracy:  0.931, Loss:  0.149
Epoch  18 Batch  230/269 - Train Accuracy:  0.927, Validation Accuracy:  0.931, Loss:  0.153
Epoch  18 Batch  240/269 - Train Accuracy:  0.929, Validation Accuracy:  0.932, Loss:  0.142
Epoch  18 Batch  250/269 - Train Accuracy:  0.927, Validation Accuracy:  0.934, Loss:  0.157
Epoch  18 Batch  260/269 - Train Accuracy:  0.914, Validation Accuracy:  0.925, Loss:  0.165
Epoch  19 Batch    0/269 - Train Accuracy:  0.929, Validation Accuracy:  0.930, Loss:  0.164
Epoch  19 Batch   10/269 - Train Accuracy:  0.922, Validation Accuracy:  0.927, Loss:  0.148
Epoch  19 Batch   20/269 - Train Accuracy:  0.924, Validation Accuracy:  0.937, Loss:  0.148
Epoch  19 Batch   30/269 - Train Accuracy:  0.914, Validation Accuracy:  0.934, Loss:  0.153
Epoch  19 Batch   40/269 - Train Accuracy:  0.911, Validation Accuracy:  0.927, Loss:  0.163
Epoch  19 Batch   50/269 - Train Accuracy:  0.908, Validation Accuracy:  0.936, Loss:  0.165
Epoch  19 Batch   60/269 - Train Accuracy:  0.932, Validation Accuracy:  0.935, Loss:  0.141
Epoch  19 Batch   70/269 - Train Accuracy:  0.925, Validation Accuracy:  0.935, Loss:  0.163
Epoch  19 Batch   80/269 - Train Accuracy:  0.933, Validation Accuracy:  0.940, Loss:  0.152
Epoch  19 Batch   90/269 - Train Accuracy:  0.932, Validation Accuracy:  0.939, Loss:  0.156
Epoch  19 Batch  100/269 - Train Accuracy:  0.932, Validation Accuracy:  0.937, Loss:  0.147
Epoch  19 Batch  110/269 - Train Accuracy:  0.928, Validation Accuracy:  0.928, Loss:  0.149
Epoch  19 Batch  120/269 - Train Accuracy:  0.936, Validation Accuracy:  0.938, Loss:  0.145
Epoch  19 Batch  130/269 - Train Accuracy:  0.929, Validation Accuracy:  0.930, Loss:  0.160
Epoch  19 Batch  140/269 - Train Accuracy:  0.918, Validation Accuracy:  0.940, Loss:  0.163
Epoch  19 Batch  150/269 - Train Accuracy:  0.928, Validation Accuracy:  0.941, Loss:  0.146
Epoch  19 Batch  160/269 - Train Accuracy:  0.927, Validation Accuracy:  0.941, Loss:  0.142
Epoch  19 Batch  170/269 - Train Accuracy:  0.923, Validation Accuracy:  0.941, Loss:  0.142
Epoch  19 Batch  180/269 - Train Accuracy:  0.936, Validation Accuracy:  0.941, Loss:  0.145
Epoch  19 Batch  190/269 - Train Accuracy:  0.928, Validation Accuracy:  0.938, Loss:  0.142
Epoch  19 Batch  200/269 - Train Accuracy:  0.919, Validation Accuracy:  0.939, Loss:  0.145
Epoch  19 Batch  210/269 - Train Accuracy:  0.924, Validation Accuracy:  0.935, Loss:  0.138
Epoch  19 Batch  220/269 - Train Accuracy:  0.924, Validation Accuracy:  0.941, Loss:  0.137
Epoch  19 Batch  230/269 - Train Accuracy:  0.934, Validation Accuracy:  0.943, Loss:  0.138
Epoch  19 Batch  240/269 - Train Accuracy:  0.940, Validation Accuracy:  0.941, Loss:  0.128
Epoch  19 Batch  250/269 - Train Accuracy:  0.928, Validation Accuracy:  0.938, Loss:  0.140
Epoch  19 Batch  260/269 - Train Accuracy:  0.919, Validation Accuracy:  0.937, Loss:  0.147
Model Trained and Saved

Save Parameters

Save the batch_size and save_path parameters for inference.


In [21]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
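
helper.py is provided with the project, so its internals aren't shown here. As a rough sketch of what such a save/load pair could look like (the pickle approach and the 'params.p' filename are assumptions, not the project's confirmed implementation):

import pickle

# Hypothetical sketch of a save/load pair for the checkpoint path --
# the real helper.py may differ
def save_params_sketch(params, path='params.p'):
    """Persist the parameters (here, the checkpoint path) to disk."""
    with open(path, 'wb') as out_file:
        pickle.dump(params, out_file)

def load_params_sketch(path='params.p'):
    """Restore whatever save_params_sketch stored."""
    with open(path, 'rb') as in_file:
        return pickle.load(in_file)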

Checkpoint


In [22]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

Sentence to Sequence

To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

  • Convert the sentence to lowercase
  • Convert words into ids using vocab_to_int
    • Convert words not in the vocabulary to the <UNK> word id.

In [23]:
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    # Lowercase the sentence, then map each word to its id,
    # falling back to the <UNK> id for out-of-vocabulary words
    id_seq = [vocab_to_int.get(word, vocab_to_int['<UNK>'])
              for word in sentence.lower().split()]

    return id_seq


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)


Tests Passed
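
To see the <UNK> fallback in action, here is a quick check against a toy vocabulary (the dictionary and its word ids below are made up purely for illustration -- the real vocab_to_int comes from preprocessing):

# Toy vocabulary for illustration only
toy_vocab_to_int = {'<UNK>': 2, 'he': 3, 'saw': 4, 'a': 5, 'truck': 6, '.': 7}

# 'spaceship' is out of vocabulary, so it maps to the <UNK> id (2)
print(sentence_to_seq('He saw a spaceship .', toy_vocab_to_int))
# [3, 4, 5, 2, 7]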

Translate

The cell below translates translate_sentence from English to French.


In [24]:
translate_sentence = 'he saw a old yellow truck .'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('logits:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in np.argmax(translate_logits, 1)]))
print('  French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))


Input
  Word Ids:      [32, 191, 208, 110, 204, 184, 69]
  English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']

Prediction
  Word Ids:      [347, 169, 186, 348, 70, 12, 67, 204, 1]
  French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', 'jaune', '.', '<EOS>']
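
If you want the prediction as a plain sentence rather than parallel id/word lists, a small post-processing step like this (a sketch, not part of the project code) will do:

# Convert the predicted ids to words and drop the <EOS> marker before joining
pred_ids = np.argmax(translate_logits, 1)
pred_words = [target_int_to_vocab[i] for i in pred_ids]
print(' '.join(word for word in pred_words if word != '<EOS>'))
# il a vu un vieux camion jaune .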

Imperfect Translation

You might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words out of the many thousands used in everyday English, you'll only see good results for sentences built from those words. Additionally, the translations in this dataset were made by Google Translate, so the translations themselves aren't particularly good. (We apologize to the French speakers out there!) Thankfully, for this project, you don't need a perfect translation. However, if you want to build a better translation model, you'll need better data.

You can train on the WMT10 French-English corpus. That dataset has a larger vocabulary and covers a richer range of topics. However, it will take days to train on, so make sure you have a GPU and that the neural network performs well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.