Language Translation

In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll train a sequence-to-sequence model on a dataset of English and French sentences so that it can translate new sentences from English to French.

Get the Data

Since a model covering the whole of English-to-French translation would take a very long time to train, we have provided you with a small portion of the English corpus.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [2]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing:

target_vocab_to_int['<EOS>']

You can get other word ids using source_vocab_to_int and target_vocab_to_int.
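
For intuition, here is a minimal sketch of the mapping text_to_ids() should produce, using a tiny hypothetical vocabulary (not the project's real one):

    # Hypothetical toy vocabularies, for illustration only
    source_vocab_to_int = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3,
                           'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7, '.': 8}
    target_vocab_to_int = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3,
                           'new': 4, 'jersey': 5, 'est': 6, 'calme': 7, '.': 8}

    source_ids = [source_vocab_to_int[word] for word in 'new jersey is quiet .'.split()]
    # -> [4, 5, 6, 7, 8]
    target_ids = ([target_vocab_to_int[word] for word in 'new jersey est calme .'.split()]
                  + [target_vocab_to_int['<EOS>']])
    # -> [4, 5, 6, 7, 8, 1], note the appended <EOS> id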


In [3]:
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # TODO: Implement Function

    # Convert each source sentence to a list of word ids
    source_id_text = [[source_vocab_to_int.get(word, 0) for word in sentence.split()]
                      for sentence in source_text.split('\n')]

    # Convert each target sentence to word ids and append the <EOS> id
    eos_id = target_vocab_to_int['<EOS>']
    target_id_text = [[target_vocab_to_int.get(word, 0) for word in sentence.split()] + [eos_id]
                      for sentence in target_text.split('\n')]

    return source_id_text, target_id_text

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)


Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:

  • model_inputs
  • process_decoding_input
  • encoding_layer
  • decoding_layer_train
  • decoding_layer_infer
  • decoding_layer
  • seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
  • Targets placeholder with rank 2.
  • Learning rate placeholder with rank 0.
  • Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.

Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability).


In [29]:
def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
    # TODO: Implement Function
    input_text = tf.placeholder(tf.int32,[None, None], name="input")
    target_text = tf.placeholder(tf.int32,[None, None], name="targets")
    learning_rate = tf.placeholder(tf.float32, name="learning_rate")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")

    return input_text, target_text, learning_rate, keep_prob

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed

Process Decoding Input

Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
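
To make the transformation concrete, here is a minimal sketch on a hypothetical batch of two toy sequences (the ids are made up for illustration): the last id of each row is dropped and the <GO> id is prepended.

    import tensorflow as tf

    go_id = 3                                     # hypothetical <GO> id
    target_data = tf.constant([[11, 12, 13, 1],   # 1 stands in for <EOS>
                               [21, 22, 23, 1]])

    ending = tf.strided_slice(target_data, [0, 0], [2, -1], [1, 1])  # drop the last column
    dec_input = tf.concat([tf.fill([2, 1], go_id), ending], 1)       # prepend <GO>

    with tf.Session() as sess:
        print(sess.run(dec_input))
        # [[ 3 11 12 13]
        #  [ 3 21 22 23]]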


In [8]:
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # TODO: Implement Function
    
    # Drop the last word id from every sequence in the batch, then prepend the <GO> id
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)

    return dec_input

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)


Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().


In [9]:
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :return: RNN state
    """
    # TODO: Implement Function
    
    # Stack LSTM cells and apply dropout to the RNN outputs
    enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
    enc_cell_drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
    # Only the final encoder state is passed on to the decoder
    _, enc_state = tf.nn.dynamic_rnn(enc_cell_drop, rnn_inputs, dtype=tf.float32)
    
    return enc_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)


Tests Passed

Decoding - Training

Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.


In [10]:
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
                         output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Train Logits
    """
    # TODO: Implement Function
    
    train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)

    train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)

    # Apply dropout to the decoder outputs before the output layer
    train_pred = tf.nn.dropout(train_pred, keep_prob)
    train_logits = output_fn(train_pred)

    return train_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)


Tests Passed

In [11]:
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
                         maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param maximum_length: Maximum length of the output sequences
    :param vocab_size: Size of vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Inference Logits
    """
    # TODO: Implement Function
    infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
        output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
        maximum_length, vocab_size)
    inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)

    # Dropout is not applied at inference time, so keep_prob is intentionally unused here
    return inference_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)


Tests Passed

Build the Decoding Layer

Implement decoding_layer() to create a Decoder RNN layer.

  • Create an RNN cell for decoding using rnn_size and num_layers.
  • Create the output function using a lambda to transform its input, logits, to class logits.
  • Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
  • Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
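
If variable scopes are new to you, the minimal sketch below (independent of this project, purely illustrative) shows how reuse=True makes a second scope return an existing variable instead of creating a new one; the decoding layer relies on the same mechanism to share weights between the training and inference decoders.

    import tensorflow as tf

    with tf.variable_scope("demo"):
        v1 = tf.get_variable("weights", shape=[2, 2])   # created here

    with tf.variable_scope("demo", reuse=True):
        v2 = tf.get_variable("weights")                 # returns the existing variable

    print(v1 is v2)   # True: both names refer to the same underlying variable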


In [20]:
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
                   num_layers, target_vocab_to_int, keep_prob):
    """
    Create decoding layer
    :param dec_embed_input: Decoder embedded input
    :param dec_embeddings: Decoder embeddings
    :param encoder_state: The encoded state
    :param vocab_size: Size of vocabulary
    :param sequence_length: Sequence Length
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param keep_prob: Dropout keep probability
    :return: Tuple of (Training Logits, Inference Logits)
    """
    # TODO: Implement Function
    
    # Stacked LSTM decoder cell with dropout on the outputs
    dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
    dec_cell_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)

    # Output layer: project decoder outputs to vocabulary-sized logits.
    # decoding_scope is looked up when the lambda is called, inside the scopes below.
    output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None,
                                                            scope=decoding_scope)

    with tf.variable_scope("decoding") as decoding_scope:
        train_logits = decoding_layer_train(encoder_state, dec_cell_drop, dec_embed_input,
                                            sequence_length, decoding_scope, output_fn, keep_prob)

    # Reuse the same decoding variables for inference
    with tf.variable_scope("decoding", reuse=True) as decoding_scope:
        infer_logits = decoding_layer_infer(encoder_state, dec_cell_drop, dec_embeddings,
                                            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
                                            sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)

    return train_logits, infer_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to the input data for the encoder.
  • Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
  • Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
  • Apply embedding to the target data for the decoder.
  • Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).

In [65]:
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param sequence_length: Sequence Length
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training Logits, Inference Logits)
    """
    # TODO: Implement Function
    
    # Embed the source input and encode it; only the final encoder state is needed
    embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
    encoder_state = encoding_layer(embed_input, rnn_size, num_layers, keep_prob)

    # Prepend <GO> and drop the last id, then embed the target input for the decoder
    processed_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size)
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, processed_target_data)

    # Decode into training and inference logits
    train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,
                                                sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
    
    
    return train_logits, infer_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)


Tests Passed

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set num_layers to the number of layers.
  • Set encoding_embedding_size to the size of the embedding for the encoder.
  • Set decoding_embedding_size to the size of the embedding for the decoder.
  • Set learning_rate to the learning rate.
  • Set keep_probability to the Dropout keep probability.

In [ ]:
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 200
# Number of Layers
num_layers = 30
# Embedding Size
encoding_embedding_size = 64
decoding_embedding_size = 64
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8

Build the Graph

Build the graph using the neural network you implemented.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])

train_graph = tf.Graph()
with train_graph.as_default():
    input_data, targets, lr, keep_prob = model_inputs()
    sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)
    
    train_logits, inference_logits = seq2seq_model(
        tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
        encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)

    tf.identity(inference_logits, 'logits')
    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            train_logits,
            targets,
            tf.ones([input_shape[0], sequence_length]))

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [68]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time

def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1]), (0,0)],
            'constant')

    return np.mean(np.equal(target, np.argmax(logits, 2)))

train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]

valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch) in enumerate(
                helper.batch_data(train_source, train_target, batch_size)):
            start_time = time.time()
            
            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 sequence_length: target_batch.shape[1],
                 keep_prob: keep_probability})
            
            batch_train_logits = sess.run(
                inference_logits,
                {input_data: source_batch, keep_prob: 1.0})
            batch_valid_logits = sess.run(
                inference_logits,
                {input_data: valid_source, keep_prob: 1.0})
                
            train_acc = get_accuracy(target_batch, batch_train_logits)
            valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
            end_time = time.time()
            print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
                  .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')


Epoch   0 Batch    0/538 - Train Accuracy:  0.268, Validation Accuracy:  0.345, Loss:  5.881
Epoch   0 Batch    1/538 - Train Accuracy:  0.264, Validation Accuracy:  0.345, Loss:  5.873
Epoch   0 Batch    2/538 - Train Accuracy:  0.285, Validation Accuracy:  0.345, Loss:  5.853
Epoch   0 Batch    3/538 - Train Accuracy:  0.261, Validation Accuracy:  0.345, Loss:  5.789
Epoch   0 Batch    4/538 - Train Accuracy:  0.248, Validation Accuracy:  0.328, Loss:  5.475
Epoch   0 Batch    5/538 - Train Accuracy:  0.275, Validation Accuracy:  0.328, Loss:  4.725
Epoch   0 Batch    6/538 - Train Accuracy:  0.299, Validation Accuracy:  0.345, Loss:  4.363
Epoch   0 Batch    7/538 - Train Accuracy:  0.279, Validation Accuracy:  0.346, Loss:  4.301
Epoch   0 Batch    8/538 - Train Accuracy:  0.280, Validation Accuracy:  0.347, Loss:  4.096
Epoch   0 Batch    9/538 - Train Accuracy:  0.279, Validation Accuracy:  0.347, Loss:  3.970
Epoch   0 Batch   10/538 - Train Accuracy:  0.260, Validation Accuracy:  0.347, Loss:  3.906
Epoch   0 Batch   11/538 - Train Accuracy:  0.288, Validation Accuracy:  0.361, Loss:  3.721
Epoch   0 Batch   12/538 - Train Accuracy:  0.296, Validation Accuracy:  0.371, Loss:  3.664
Epoch   0 Batch   13/538 - Train Accuracy:  0.348, Validation Accuracy:  0.375, Loss:  3.372
Epoch   0 Batch   14/538 - Train Accuracy:  0.307, Validation Accuracy:  0.375, Loss:  3.488
Epoch   0 Batch   15/538 - Train Accuracy:  0.345, Validation Accuracy:  0.374, Loss:  3.262
Epoch   0 Batch   16/538 - Train Accuracy:  0.329, Validation Accuracy:  0.371, Loss:  3.238
Epoch   0 Batch   17/538 - Train Accuracy:  0.312, Validation Accuracy:  0.374, Loss:  3.360
Epoch   0 Batch   18/538 - Train Accuracy:  0.311, Validation Accuracy:  0.379, Loss:  3.336
Epoch   0 Batch   19/538 - Train Accuracy:  0.306, Validation Accuracy:  0.377, Loss:  3.326
Epoch   0 Batch   20/538 - Train Accuracy:  0.338, Validation Accuracy:  0.380, Loss:  3.156
Epoch   0 Batch   21/538 - Train Accuracy:  0.278, Validation Accuracy:  0.384, Loss:  3.369
Epoch   0 Batch   22/538 - Train Accuracy:  0.318, Validation Accuracy:  0.384, Loss:  3.231
Epoch   0 Batch   23/538 - Train Accuracy:  0.322, Validation Accuracy:  0.380, Loss:  3.197
Epoch   0 Batch   24/538 - Train Accuracy:  0.337, Validation Accuracy:  0.379, Loss:  3.115
Epoch   0 Batch   25/538 - Train Accuracy:  0.319, Validation Accuracy:  0.384, Loss:  3.230
Epoch   0 Batch   26/538 - Train Accuracy:  0.311, Validation Accuracy:  0.379, Loss:  3.193
Epoch   0 Batch   27/538 - Train Accuracy:  0.329, Validation Accuracy:  0.390, Loss:  3.165
Epoch   0 Batch   28/538 - Train Accuracy:  0.383, Validation Accuracy:  0.386, Loss:  2.890
Epoch   0 Batch   29/538 - Train Accuracy:  0.347, Validation Accuracy:  0.386, Loss:  3.036
Epoch   0 Batch   30/538 - Train Accuracy:  0.322, Validation Accuracy:  0.390, Loss:  3.175
Epoch   0 Batch   31/538 - Train Accuracy:  0.355, Validation Accuracy:  0.390, Loss:  3.006
Epoch   0 Batch   32/538 - Train Accuracy:  0.342, Validation Accuracy:  0.393, Loss:  3.046
Epoch   0 Batch   33/538 - Train Accuracy:  0.351, Validation Accuracy:  0.385, Loss:  3.002
Epoch   0 Batch   34/538 - Train Accuracy:  0.329, Validation Accuracy:  0.390, Loss:  3.139
Epoch   0 Batch   35/538 - Train Accuracy:  0.317, Validation Accuracy:  0.390, Loss:  3.084
Epoch   0 Batch   36/538 - Train Accuracy:  0.352, Validation Accuracy:  0.389, Loss:  2.974
Epoch   0 Batch   37/538 - Train Accuracy:  0.323, Validation Accuracy:  0.385, Loss:  3.063
Epoch   0 Batch   38/538 - Train Accuracy:  0.318, Validation Accuracy:  0.392, Loss:  3.093
Epoch   0 Batch   39/538 - Train Accuracy:  0.330, Validation Accuracy:  0.396, Loss:  3.081
Epoch   0 Batch   40/538 - Train Accuracy:  0.388, Validation Accuracy:  0.396, Loss:  2.807
Epoch   0 Batch   41/538 - Train Accuracy:  0.341, Validation Accuracy:  0.399, Loss:  3.025
Epoch   0 Batch   42/538 - Train Accuracy:  0.336, Validation Accuracy:  0.396, Loss:  3.021
Epoch   0 Batch   43/538 - Train Accuracy:  0.341, Validation Accuracy:  0.396, Loss:  3.071
Epoch   0 Batch   44/538 - Train Accuracy:  0.330, Validation Accuracy:  0.395, Loss:  3.048
Epoch   0 Batch   45/538 - Train Accuracy:  0.362, Validation Accuracy:  0.395, Loss:  2.927
Epoch   0 Batch   46/538 - Train Accuracy:  0.330, Validation Accuracy:  0.393, Loss:  3.032
Epoch   0 Batch   47/538 - Train Accuracy:  0.363, Validation Accuracy:  0.396, Loss:  2.929
Epoch   0 Batch   48/538 - Train Accuracy:  0.370, Validation Accuracy:  0.395, Loss:  2.913
Epoch   0 Batch   49/538 - Train Accuracy:  0.325, Validation Accuracy:  0.397, Loss:  3.058
Epoch   0 Batch   50/538 - Train Accuracy:  0.341, Validation Accuracy:  0.397, Loss:  2.971
Epoch   0 Batch   51/538 - Train Accuracy:  0.289, Validation Accuracy:  0.396, Loss:  3.170
Epoch   0 Batch   52/538 - Train Accuracy:  0.344, Validation Accuracy:  0.397, Loss:  3.011
Epoch   0 Batch   53/538 - Train Accuracy:  0.387, Validation Accuracy:  0.399, Loss:  2.768
Epoch   0 Batch   54/538 - Train Accuracy:  0.346, Validation Accuracy:  0.399, Loss:  2.977
Epoch   0 Batch   55/538 - Train Accuracy:  0.333, Validation Accuracy:  0.396, Loss:  2.985
Epoch   0 Batch   56/538 - Train Accuracy:  0.364, Validation Accuracy:  0.396, Loss:  2.888
Epoch   0 Batch   57/538 - Train Accuracy:  0.327, Validation Accuracy:  0.396, Loss:  3.006
Epoch   0 Batch   58/538 - Train Accuracy:  0.322, Validation Accuracy:  0.396, Loss:  2.995
Epoch   0 Batch   59/538 - Train Accuracy:  0.331, Validation Accuracy:  0.396, Loss:  2.976
Epoch   0 Batch   60/538 - Train Accuracy:  0.335, Validation Accuracy:  0.396, Loss:  2.948
Epoch   0 Batch   61/538 - Train Accuracy:  0.331, Validation Accuracy:  0.396, Loss:  2.969
Epoch   0 Batch   62/538 - Train Accuracy:  0.361, Validation Accuracy:  0.396, Loss:  2.842
Epoch   0 Batch   63/538 - Train Accuracy:  0.365, Validation Accuracy:  0.396, Loss:  2.839
Epoch   0 Batch   64/538 - Train Accuracy:  0.360, Validation Accuracy:  0.397, Loss:  2.864
Epoch   0 Batch   65/538 - Train Accuracy:  0.323, Validation Accuracy:  0.396, Loss:  3.014
Epoch   0 Batch   66/538 - Train Accuracy:  0.358, Validation Accuracy:  0.396, Loss:  2.847
Epoch   0 Batch   67/538 - Train Accuracy:  0.339, Validation Accuracy:  0.397, Loss:  2.941
Epoch   0 Batch   68/538 - Train Accuracy:  0.361, Validation Accuracy:  0.395, Loss:  2.762
Epoch   0 Batch   69/538 - Train Accuracy:  0.333, Validation Accuracy:  0.396, Loss:  2.941
Epoch   0 Batch   70/538 - Train Accuracy:  0.361, Validation Accuracy:  0.396, Loss:  2.813
Epoch   0 Batch   71/538 - Train Accuracy:  0.333, Validation Accuracy:  0.396, Loss:  2.924
Epoch   0 Batch   72/538 - Train Accuracy:  0.364, Validation Accuracy:  0.396, Loss:  2.766
Epoch   0 Batch   73/538 - Train Accuracy:  0.328, Validation Accuracy:  0.396, Loss:  2.954
Epoch   0 Batch   74/538 - Train Accuracy:  0.370, Validation Accuracy:  0.398, Loss:  2.805
Epoch   0 Batch   75/538 - Train Accuracy:  0.368, Validation Accuracy:  0.398, Loss:  2.803
Epoch   0 Batch   76/538 - Train Accuracy:  0.332, Validation Accuracy:  0.401, Loss:  2.924
Epoch   0 Batch   77/538 - Train Accuracy:  0.324, Validation Accuracy:  0.401, Loss:  2.931
Epoch   0 Batch   78/538 - Train Accuracy:  0.368, Validation Accuracy:  0.400, Loss:  2.812
Epoch   0 Batch   79/538 - Train Accuracy:  0.362, Validation Accuracy:  0.400, Loss:  2.747
Epoch   0 Batch   80/538 - Train Accuracy:  0.343, Validation Accuracy:  0.401, Loss:  2.904
Epoch   0 Batch   81/538 - Train Accuracy:  0.335, Validation Accuracy:  0.401, Loss:  2.896
Epoch   0 Batch   82/538 - Train Accuracy:  0.339, Validation Accuracy:  0.403, Loss:  2.874
Epoch   0 Batch   83/538 - Train Accuracy:  0.338, Validation Accuracy:  0.398, Loss:  2.876
Epoch   0 Batch   84/538 - Train Accuracy:  0.365, Validation Accuracy:  0.398, Loss:  2.793
Epoch   0 Batch   85/538 - Train Accuracy:  0.383, Validation Accuracy:  0.398, Loss:  2.691
Epoch   0 Batch   86/538 - Train Accuracy:  0.341, Validation Accuracy:  0.396, Loss:  2.876
Epoch   0 Batch   87/538 - Train Accuracy:  0.329, Validation Accuracy:  0.399, Loss:  2.894
Epoch   0 Batch   88/538 - Train Accuracy:  0.338, Validation Accuracy:  0.396, Loss:  2.882
Epoch   0 Batch   89/538 - Train Accuracy:  0.338, Validation Accuracy:  0.396, Loss:  2.857
Epoch   0 Batch   90/538 - Train Accuracy:  0.362, Validation Accuracy:  0.396, Loss:  2.777
Epoch   0 Batch   91/538 - Train Accuracy:  0.330, Validation Accuracy:  0.397, Loss:  2.863
Epoch   0 Batch   92/538 - Train Accuracy:  0.338, Validation Accuracy:  0.401, Loss:  2.876
Epoch   0 Batch   93/538 - Train Accuracy:  0.337, Validation Accuracy:  0.401, Loss:  2.841
Epoch   0 Batch   94/538 - Train Accuracy:  0.329, Validation Accuracy:  0.396, Loss:  2.890
Epoch   0 Batch   95/538 - Train Accuracy:  0.395, Validation Accuracy:  0.396, Loss:  2.603
Epoch   0 Batch   96/538 - Train Accuracy:  0.367, Validation Accuracy:  0.398, Loss:  2.720
Epoch   0 Batch   97/538 - Train Accuracy:  0.336, Validation Accuracy:  0.398, Loss:  2.841
Epoch   0 Batch   98/538 - Train Accuracy:  0.373, Validation Accuracy:  0.398, Loss:  2.706
Epoch   0 Batch   99/538 - Train Accuracy:  0.330, Validation Accuracy:  0.401, Loss:  2.904
Epoch   0 Batch  100/538 - Train Accuracy:  0.337, Validation Accuracy:  0.401, Loss:  2.832
Epoch   0 Batch  101/538 - Train Accuracy:  0.335, Validation Accuracy:  0.403, Loss:  2.844
Epoch   0 Batch  102/538 - Train Accuracy:  0.341, Validation Accuracy:  0.403, Loss:  2.878
Epoch   0 Batch  103/538 - Train Accuracy:  0.360, Validation Accuracy:  0.403, Loss:  2.771
Epoch   0 Batch  104/538 - Train Accuracy:  0.368, Validation Accuracy:  0.403, Loss:  2.732
Epoch   0 Batch  105/538 - Train Accuracy:  0.360, Validation Accuracy:  0.401, Loss:  2.733
Epoch   0 Batch  106/538 - Train Accuracy:  0.334, Validation Accuracy:  0.400, Loss:  2.836
Epoch   0 Batch  107/538 - Train Accuracy:  0.326, Validation Accuracy:  0.400, Loss:  2.860
Epoch   0 Batch  108/538 - Train Accuracy:  0.348, Validation Accuracy:  0.401, Loss:  2.816
Epoch   0 Batch  109/538 - Train Accuracy:  0.342, Validation Accuracy:  0.401, Loss:  2.797
Epoch   0 Batch  110/538 - Train Accuracy:  0.337, Validation Accuracy:  0.402, Loss:  2.871
Epoch   0 Batch  111/538 - Train Accuracy:  0.372, Validation Accuracy:  0.402, Loss:  2.696
Epoch   0 Batch  112/538 - Train Accuracy:  0.336, Validation Accuracy:  0.402, Loss:  2.838
Epoch   0 Batch  113/538 - Train Accuracy:  0.338, Validation Accuracy:  0.403, Loss:  2.861
Epoch   0 Batch  114/538 - Train Accuracy:  0.370, Validation Accuracy:  0.403, Loss:  2.706
Epoch   0 Batch  115/538 - Train Accuracy:  0.339, Validation Accuracy:  0.403, Loss:  2.800
Epoch   0 Batch  116/538 - Train Accuracy:  0.371, Validation Accuracy:  0.401, Loss:  2.727
Epoch   0 Batch  117/538 - Train Accuracy:  0.371, Validation Accuracy:  0.401, Loss:  2.710
Epoch   0 Batch  118/538 - Train Accuracy:  0.365, Validation Accuracy:  0.400, Loss:  2.703
Epoch   0 Batch  119/538 - Train Accuracy:  0.371, Validation Accuracy:  0.400, Loss:  2.682
Epoch   0 Batch  120/538 - Train Accuracy:  0.331, Validation Accuracy:  0.402, Loss:  2.820
Epoch   0 Batch  121/538 - Train Accuracy:  0.388, Validation Accuracy:  0.398, Loss:  2.611
Epoch   0 Batch  122/538 - Train Accuracy:  0.361, Validation Accuracy:  0.402, Loss:  2.695
Epoch   0 Batch  123/538 - Train Accuracy:  0.377, Validation Accuracy:  0.398, Loss:  2.678
Epoch   0 Batch  124/538 - Train Accuracy:  0.393, Validation Accuracy:  0.403, Loss:  2.589
Epoch   0 Batch  125/538 - Train Accuracy:  0.366, Validation Accuracy:  0.398, Loss:  2.709
Epoch   0 Batch  126/538 - Train Accuracy:  0.396, Validation Accuracy:  0.398, Loss:  2.600
Epoch   0 Batch  127/538 - Train Accuracy:  0.329, Validation Accuracy:  0.401, Loss:  2.845
Epoch   0 Batch  128/538 - Train Accuracy:  0.365, Validation Accuracy:  0.400, Loss:  2.679
Epoch   0 Batch  129/538 - Train Accuracy:  0.367, Validation Accuracy:  0.400, Loss:  2.707
Epoch   0 Batch  130/538 - Train Accuracy:  0.366, Validation Accuracy:  0.398, Loss:  2.684
Epoch   0 Batch  131/538 - Train Accuracy:  0.338, Validation Accuracy:  0.402, Loss:  2.844
Epoch   0 Batch  132/538 - Train Accuracy:  0.366, Validation Accuracy:  0.400, Loss:  2.679
Epoch   0 Batch  133/538 - Train Accuracy:  0.382, Validation Accuracy:  0.400, Loss:  2.611
Epoch   0 Batch  134/538 - Train Accuracy:  0.332, Validation Accuracy:  0.402, Loss:  2.874
Epoch   0 Batch  135/538 - Train Accuracy:  0.353, Validation Accuracy:  0.384, Loss:  2.709
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-68-29c4deffcdd2> in <module>()
     42                  lr: learning_rate,
     43                  sequence_length: target_batch.shape[1],
---> 44                  keep_prob: keep_probability})
     45 
     46             batch_train_logits = sess.run(

/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    963     if final_fetches or final_targets:
    964       results = self._do_run(handle, final_targets, final_fetches,
--> 965                              feed_dict_string, options, run_metadata)
    966     else:
    967       results = []

/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1013     if handle is None:
   1014       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015                            target_list, options, run_metadata)
   1016     else:
   1017       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1020   def _do_call(self, fn, *args):
   1021     try:
-> 1022       return fn(*args)
   1023     except errors.OpError as e:
   1024       message = compat.as_text(e.message)

/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1002         return tf_session.TF_Run(session, options,
   1003                                  feed_dict, fetch_list, target_list,
-> 1004                                  status, run_metadata)
   1005 
   1006     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

Save Parameters

Save the batch_size and save_path parameters for inference.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)

Checkpoint


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

Sentence to Sequence

To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

  • Convert the sentence to lowercase
  • Convert words into ids using vocab_to_int
    • Convert words not in the vocabulary to the <UNK> word id, as shown in the sketch below.
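
For example, with a hypothetical toy vocabulary in which 'tiger' does not appear, a sketch of the expected behaviour would be:

    vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, '.': 13}   # toy ids, illustration only

    ids = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in 'He saw a tiger .'.lower().split()]
    # -> [10, 11, 12, 2, 13], 'tiger' falls back to the <UNK> id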

In [ ]:
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    # Lowercase the sentence and map each word to its id,
    # falling back to the <UNK> id for out-of-vocabulary words
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)

Translate

This will translate translate_sentence from English to French.


In [ ]:
translate_sentence = 'he saw a old yellow truck .'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('logits:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in np.argmax(translate_logits, 1)]))
print('  French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))

Imperfect Translation

You might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands used in everyday English, you're only going to see good results on sentences built from those words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.

You can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and covers a richer range of topics. However, it will take days to train on, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.