Language Translation

Get the Data


In [1]:
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data


In [2]:
view_sentence_range = (0, 10)

import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

Implement Preprocessing Function

Text to Word Ids

Turn the source and target text into word ids so the network can work with numbers, and add the <EOS> word id to the end of each target sentence.


In [31]:
def text_to_ids(src_text, tar_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param src_text: String that contains all the source text.
    :param tar_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    
    EOS = target_vocab_to_int['<EOS>']
    src_id, tar_id = [], []
    src_text, tar_text = src_text.split('\n'), tar_text.split('\n')  # Split on new line    
    assert len(src_text) == len(tar_text)
    
    for i in range(len(src_text)):
        src_sentence_int = [source_vocab_to_int[w] for w in src_text[i].split()]  # Split on words
        src_id.append(src_sentence_int)

        tar_sentence_int = [target_vocab_to_int[w] for w in tar_text[i].split()]
        tar_sentence_int.append(EOS)  # Append <EOS> so the decoder knows where each sentence ends
        tar_id.append(tar_sentence_int)
        # print(tar_text[i], src_text[i], src_id)  # uncomment for debug
    
    return src_id, tar_id


tests.test_text_to_ids(text_to_ids)


Tests Passed
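
To make the conversion concrete, here is a minimal sketch using a made-up two-sentence corpus and hypothetical toy vocabularies (the real vocabularies are built by helper.preprocess_and_save_data below):

# Hypothetical toy vocabularies, just to illustrate the id conversion above
toy_source_vocab = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3, 'hello': 4, 'world': 5}
toy_target_vocab = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3, 'bonjour': 4, 'monde': 5}

toy_src_ids, toy_tar_ids = text_to_ids('hello world', 'bonjour monde',
                                       toy_source_vocab, toy_target_vocab)
print(toy_src_ids)  # [[4, 5]]
print(toy_tar_ids)  # [[4, 5, 1]]  <- the <EOS> id is appended to the target sentence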

Preprocess all the data and save it


In [32]:
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

Check Point


In [33]:
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU


In [34]:
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Build the Neural Network

Input


In [135]:
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """

    inputs = tf.placeholder(tf.int32, [None, None], "input")
    targets = tf.placeholder(tf.int32, [None, None], "targets")
    lr = tf.placeholder(tf.float32, name="learning_rate")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    tar_seq_len = tf.placeholder(tf.int32, (None,), "target_sequence_length")
    max_tar_seq_len = tf.reduce_max(tar_seq_len, name="max_target_length")
    src_seq_len = tf.placeholder(tf.int32, (None,), "source_sequence_length")
    
    return inputs, targets, lr, keep_prob, tar_seq_len, max_tar_seq_len, src_seq_len


tests.test_model_inputs(model_inputs)


Tests Passed

Process Decoder Input

Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the <GO> id to the beginning of each batch.


In [136]:
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """

    # Drop the last word id of every sequence in the batch
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    # Prepend the <GO> id to every sequence
    go = tf.fill([batch_size, 1], target_vocab_to_int['<GO>'])
    dec_input = tf.concat([go, ending], 1)

    return dec_input


tests.test_process_encoding_input(process_decoder_input)


Tests Passed
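
To see what this does to a batch, here is a plain-Python sketch (hypothetical word ids, batch of two sentences) of the same transformation that the tf.strided_slice / tf.fill / tf.concat graph performs:

go_id = 3  # hypothetical <GO> id
toy_batch = [[10, 11, 12, 1],   # each row ends with the <EOS> id (1) from text_to_ids
             [20, 21, 22, 1]]

# Drop the last id of every row, then prepend <GO>
toy_decoder_input = [[go_id] + row[:-1] for row in toy_batch]
print(toy_decoder_input)  # [[3, 10, 11, 12], [3, 20, 21, 22]]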

Encoding


In [165]:
from imp import reload
reload(tests)


def make_cell(rnn_size):
    # Single LSTM cell with a small uniform weight initializer
    init = tf.random_uniform_initializer(-.1, .1, seed=2)
    cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=init)
    return cell


def encoding_layer(rnn_inputs, rnn_len, num_layers, keep_prob, 
                   s_seq_len, src_vocab_size, 
                   e_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param src_vocab_size: vocabulary size of source data
    :param e_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
   
    e_in = tf.contrib.layers.embed_sequence(rnn_inputs, src_vocab_size, e_embedding_size)

    e = [make_cell(rnn_len) for _ in range(num_layers)]
    
    e_cell = tf.contrib.rnn.MultiRNNCell(e)
    
    e_out, e_state = tf.nn.dynamic_rnn(e_cell, e_in, s_seq_len, dtype=tf.float32)
    
    return e_out, e_state


tests.test_encoding_layer(encoding_layer)


Tests Passed

Decoding - Training


In [166]:
from tensorflow.contrib import seq2seq

def decoding_layer_train(e_state, dec_cell, dec_embed_input, 
                         tar_seq_len, max_summary_length, 
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    
    # t == training  d == decoder
    
    t_helper = seq2seq.TrainingHelper(dec_embed_input, tar_seq_len, time_major=False)
    
    t_d = seq2seq.BasicDecoder(dec_cell, t_helper, e_state, output_layer)
    
    t_d_out, _ = seq2seq.dynamic_decode(t_d, impute_finished=True, 
                                        maximum_iterations=max_summary_length)
    
    
    return t_d_out



tests.test_decoding_layer_train(decoding_layer_train)


Tests Passed

Decoding - Inference

Create the inference decoder. At inference time there is no ground-truth target, so a greedy embedding helper feeds the decoder's own predictions back as its next input.


In [139]:
def decoding_layer_infer(e_state, dec_cell, dec_embeddings, go_id,
                         end_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param e_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TenorFlow Variable Scope for decoding
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
     
    
    s_ids = tf.tile(tf.constant([go_id], tf.int32), [batch_size], name='start_tokens')
    
    i_helper = seq2seq.GreedyEmbeddingHelper(dec_embeddings, s_ids, end_id)
    
    i_dec = seq2seq.BasicDecoder(dec_cell, i_helper, e_state, output_layer)
    
    i_d_out, _ = seq2seq.dynamic_decode(i_dec, impute_finished=True, maximum_iterations=max_target_sequence_length)
    
    return i_d_out



tests.test_decoding_layer_infer(decoding_layer_infer)


Tests Passed

Build the Decoding Layer


In [167]:
def decoding_layer(d_input, e_state,
                   t_seq_len, max_t_seq_len,
                   rnn_len, num_layers, target_vocab_to_int, target_vocab_len,
                   batch_size, keep_prob, d_embedding_len):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    
    go_id = target_vocab_to_int["<GO>"]
    end_id = target_vocab_to_int["<EOS>"]
    
    # Embedding
    t_v_len = len(target_vocab_to_int)
    d_embeddings = tf.Variable(tf.random_uniform([t_v_len, d_embedding_len]))
    d_embed_input = tf.nn.embedding_lookup(d_embeddings, d_input)
    
    # Decoder cell
    d = [make_cell(rnn_len) for _ in range(num_layers)]
    d_cell = tf.contrib.rnn.MultiRNNCell(d)
    
    # Output
    init_d = tf.truncated_normal_initializer(mean = 0.0, stddev = .1) 
    output_layer = Dense(target_vocab_len, kernel_initializer = init_d)
    
    
    with tf.variable_scope("decode"):
        
        t_d_out = decoding_layer_train(e_state, d_cell, d_embed_input, 
                                   t_seq_len, max_t_seq_len, output_layer, keep_prob)
    
    
    with tf.variable_scope("decode", reuse=True):
        
        i_d_out = decoding_layer_infer(e_state, d_cell, d_embeddings, go_id,
                         end_id, max_t_seq_len,
                         target_vocab_len, output_layer, batch_size, keep_prob)
        
    
    return t_d_out, i_d_out



tests.test_decoding_layer(decoding_layer)


Tests Passed

Build the Neural Network


In [168]:
def seq2seq_model(input_data, targets, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sequence_length,
                  source_vocab_size, target_vocab_size,
                  e_embedding_size, d_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Decoder embedding size
    :param dec_embedding_size: Encoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    
    _, e_state = encoding_layer(input_data, 
                                  rnn_size, 
                                  num_layers, keep_prob,
                                  source_sequence_length,
                                  source_vocab_size, e_embedding_size)
    
    d_input = process_decoder_input(targets, target_vocab_to_int, batch_size)
    
    t_d_out, i_d_out = decoding_layer(d_input, e_state, target_sequence_length, 
                                      max_target_sequence_length, rnn_size, num_layers, 
                                      target_vocab_to_int, target_vocab_size, 
                                      batch_size, keep_prob, d_embedding_size)
    
    return t_d_out, i_d_out



tests.test_seq2seq_model(seq2seq_model)


Tests Passed

Neural Network Training


In [150]:
epochs = 3
batch_size = 256
rnn_size = 256
num_layers = 3
encoding_embedding_size = 512
decoding_embedding_size = 512
learning_rate = .001
keep_probability = .5
display_step = 20

Build the Graph


In [151]:
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in target_int_text])

train_graph = tf.Graph()
with train_graph.as_default():
    input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()

    #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)

    train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
                                                   targets,
                                                   keep_prob,
                                                   batch_size,
                                                   source_sequence_length,
                                                   target_sequence_length,
                                                   max_target_sequence_length,
                                                   len(source_vocab_to_int),
                                                   len(target_vocab_to_int),
                                                   encoding_embedding_size,
                                                   decoding_embedding_size,
                                                   rnn_size,
                                                   num_layers,
                                                   target_vocab_to_int)


    training_logits = tf.identity(train_logits.rnn_output, name='logits')
    inference_logits = tf.identity(inference_logits.sample_id, name='predictions')

    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            training_logits,
            targets,
            masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

Batch and pad the source and target sequences


In [152]:
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]


def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size

        # Slice the right amount for the batch
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]

        # Pad
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))

        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))

        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))

        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
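
As a quick illustration, here is what pad_sentence_batch does to a made-up batch (hypothetical ids, assuming a <PAD> id of 0):

toy_batch = [[4, 5, 1],
             [6, 1]]
print(pad_sentence_batch(toy_batch, 0))  # [[4, 5, 1], [6, 1, 0]]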

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [153]:
def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1])],
            'constant')

    return np.mean(np.equal(target, logits))

# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
                                                                                                             valid_target,
                                                                                                             batch_size,
                                                                                                             source_vocab_to_int['<PAD>'],
                                                                                                             target_vocab_to_int['<PAD>']))                                                                                                  
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
                get_batches(train_source, train_target, batch_size,
                            source_vocab_to_int['<PAD>'],
                            target_vocab_to_int['<PAD>'])):

            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})


            if batch_i % display_step == 0 and batch_i > 0:


                batch_train_logits = sess.run(
                    inference_logits,
                    {input_data: source_batch,
                     source_sequence_length: sources_lengths,
                     target_sequence_length: targets_lengths,
                     keep_prob: 1.0})


                batch_valid_logits = sess.run(
                    inference_logits,
                    {input_data: valid_sources_batch,
                     source_sequence_length: valid_sources_lengths,
                     target_sequence_length: valid_targets_lengths,
                     keep_prob: 1.0})

                train_acc = get_accuracy(target_batch, batch_train_logits)

                valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)

                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')


Epoch   0 Batch   20/538 - Train Accuracy: 0.4014, Validation Accuracy: 0.4405, Loss: 2.7497
Epoch   0 Batch   40/538 - Train Accuracy: 0.5110, Validation Accuracy: 0.5059, Loss: 2.2436
Epoch   0 Batch   60/538 - Train Accuracy: 0.4557, Validation Accuracy: 0.5078, Loss: 2.0795
Epoch   0 Batch   80/538 - Train Accuracy: 0.4871, Validation Accuracy: 0.5458, Loss: 1.8658
Epoch   0 Batch  100/538 - Train Accuracy: 0.4906, Validation Accuracy: 0.5295, Loss: 1.6215
Epoch   0 Batch  120/538 - Train Accuracy: 0.5068, Validation Accuracy: 0.5382, Loss: 1.5076
Epoch   0 Batch  140/538 - Train Accuracy: 0.4816, Validation Accuracy: 0.5407, Loss: 1.5012
Epoch   0 Batch  160/538 - Train Accuracy: 0.5359, Validation Accuracy: 0.5586, Loss: 1.2461
Epoch   0 Batch  180/538 - Train Accuracy: 0.5491, Validation Accuracy: 0.5542, Loss: 1.1868
Epoch   0 Batch  200/538 - Train Accuracy: 0.5133, Validation Accuracy: 0.5460, Loss: 1.1371
Epoch   0 Batch  220/538 - Train Accuracy: 0.5506, Validation Accuracy: 0.5865, Loss: 1.0426
Epoch   0 Batch  240/538 - Train Accuracy: 0.5572, Validation Accuracy: 0.5831, Loss: 1.0060
Epoch   0 Batch  260/538 - Train Accuracy: 0.5720, Validation Accuracy: 0.5911, Loss: 0.9165
Epoch   0 Batch  280/538 - Train Accuracy: 0.6135, Validation Accuracy: 0.6026, Loss: 0.8337
Epoch   0 Batch  300/538 - Train Accuracy: 0.6004, Validation Accuracy: 0.6055, Loss: 0.7928
Epoch   0 Batch  320/538 - Train Accuracy: 0.6209, Validation Accuracy: 0.6035, Loss: 0.7702
Epoch   0 Batch  340/538 - Train Accuracy: 0.5859, Validation Accuracy: 0.6106, Loss: 0.7820
Epoch   0 Batch  360/538 - Train Accuracy: 0.5867, Validation Accuracy: 0.6126, Loss: 0.7299
Epoch   0 Batch  380/538 - Train Accuracy: 0.5926, Validation Accuracy: 0.6218, Loss: 0.6683
Epoch   0 Batch  400/538 - Train Accuracy: 0.6460, Validation Accuracy: 0.6293, Loss: 0.6300
Epoch   0 Batch  420/538 - Train Accuracy: 0.6521, Validation Accuracy: 0.6502, Loss: 0.6034
Epoch   0 Batch  440/538 - Train Accuracy: 0.6465, Validation Accuracy: 0.6452, Loss: 0.6006
Epoch   0 Batch  460/538 - Train Accuracy: 0.6549, Validation Accuracy: 0.6680, Loss: 0.5388
Epoch   0 Batch  480/538 - Train Accuracy: 0.7028, Validation Accuracy: 0.6674, Loss: 0.5105
Epoch   0 Batch  500/538 - Train Accuracy: 0.7315, Validation Accuracy: 0.7163, Loss: 0.4557
Epoch   0 Batch  520/538 - Train Accuracy: 0.6967, Validation Accuracy: 0.7287, Loss: 0.4838
Epoch   1 Batch   20/538 - Train Accuracy: 0.7522, Validation Accuracy: 0.7502, Loss: 0.4202
Epoch   1 Batch   40/538 - Train Accuracy: 0.7955, Validation Accuracy: 0.7674, Loss: 0.3502
Epoch   1 Batch   60/538 - Train Accuracy: 0.7979, Validation Accuracy: 0.7869, Loss: 0.3636
Epoch   1 Batch   80/538 - Train Accuracy: 0.7896, Validation Accuracy: 0.7962, Loss: 0.3484
Epoch   1 Batch  100/538 - Train Accuracy: 0.8086, Validation Accuracy: 0.8029, Loss: 0.2905
Epoch   1 Batch  120/538 - Train Accuracy: 0.8434, Validation Accuracy: 0.8294, Loss: 0.2649
Epoch   1 Batch  140/538 - Train Accuracy: 0.8316, Validation Accuracy: 0.8308, Loss: 0.2816
Epoch   1 Batch  160/538 - Train Accuracy: 0.8478, Validation Accuracy: 0.8388, Loss: 0.2209
Epoch   1 Batch  180/538 - Train Accuracy: 0.8644, Validation Accuracy: 0.8565, Loss: 0.2157
Epoch   1 Batch  200/538 - Train Accuracy: 0.8635, Validation Accuracy: 0.8516, Loss: 0.1955
Epoch   1 Batch  220/538 - Train Accuracy: 0.8568, Validation Accuracy: 0.8469, Loss: 0.1750
Epoch   1 Batch  240/538 - Train Accuracy: 0.8793, Validation Accuracy: 0.8649, Loss: 0.1768
Epoch   1 Batch  260/538 - Train Accuracy: 0.8402, Validation Accuracy: 0.8780, Loss: 0.1700
Epoch   1 Batch  280/538 - Train Accuracy: 0.9033, Validation Accuracy: 0.8599, Loss: 0.1297
Epoch   1 Batch  300/538 - Train Accuracy: 0.8999, Validation Accuracy: 0.8823, Loss: 0.1300
Epoch   1 Batch  320/538 - Train Accuracy: 0.8776, Validation Accuracy: 0.8974, Loss: 0.1240
Epoch   1 Batch  340/538 - Train Accuracy: 0.9074, Validation Accuracy: 0.9052, Loss: 0.1136
Epoch   1 Batch  360/538 - Train Accuracy: 0.8969, Validation Accuracy: 0.9082, Loss: 0.1069
Epoch   1 Batch  380/538 - Train Accuracy: 0.9229, Validation Accuracy: 0.9018, Loss: 0.0968
Epoch   1 Batch  400/538 - Train Accuracy: 0.9262, Validation Accuracy: 0.9153, Loss: 0.0997
Epoch   1 Batch  420/538 - Train Accuracy: 0.9439, Validation Accuracy: 0.9141, Loss: 0.0856
Epoch   1 Batch  440/538 - Train Accuracy: 0.9088, Validation Accuracy: 0.8986, Loss: 0.0970
Epoch   1 Batch  460/538 - Train Accuracy: 0.9007, Validation Accuracy: 0.9201, Loss: 0.0935
Epoch   1 Batch  480/538 - Train Accuracy: 0.9379, Validation Accuracy: 0.9150, Loss: 0.0754
Epoch   1 Batch  500/538 - Train Accuracy: 0.9411, Validation Accuracy: 0.9169, Loss: 0.0593
Epoch   1 Batch  520/538 - Train Accuracy: 0.9266, Validation Accuracy: 0.9183, Loss: 0.0726
Epoch   2 Batch   20/538 - Train Accuracy: 0.9392, Validation Accuracy: 0.9347, Loss: 0.0665
Epoch   2 Batch   40/538 - Train Accuracy: 0.9371, Validation Accuracy: 0.9343, Loss: 0.0506
Epoch   2 Batch   60/538 - Train Accuracy: 0.9285, Validation Accuracy: 0.9336, Loss: 0.0587
Epoch   2 Batch   80/538 - Train Accuracy: 0.9432, Validation Accuracy: 0.9359, Loss: 0.0620
Epoch   2 Batch  100/538 - Train Accuracy: 0.9447, Validation Accuracy: 0.9345, Loss: 0.0490
Epoch   2 Batch  120/538 - Train Accuracy: 0.9602, Validation Accuracy: 0.9430, Loss: 0.0420
Epoch   2 Batch  140/538 - Train Accuracy: 0.9316, Validation Accuracy: 0.9357, Loss: 0.0664
Epoch   2 Batch  160/538 - Train Accuracy: 0.9386, Validation Accuracy: 0.9384, Loss: 0.0443
Epoch   2 Batch  180/538 - Train Accuracy: 0.9477, Validation Accuracy: 0.9292, Loss: 0.0509
Epoch   2 Batch  200/538 - Train Accuracy: 0.9623, Validation Accuracy: 0.9382, Loss: 0.0417
Epoch   2 Batch  220/538 - Train Accuracy: 0.9336, Validation Accuracy: 0.9380, Loss: 0.0502
Epoch   2 Batch  240/538 - Train Accuracy: 0.9439, Validation Accuracy: 0.9409, Loss: 0.0490
Epoch   2 Batch  260/538 - Train Accuracy: 0.9366, Validation Accuracy: 0.9370, Loss: 0.0481
Epoch   2 Batch  280/538 - Train Accuracy: 0.9671, Validation Accuracy: 0.9434, Loss: 0.0356
Epoch   2 Batch  300/538 - Train Accuracy: 0.9401, Validation Accuracy: 0.9499, Loss: 0.0461
Epoch   2 Batch  320/538 - Train Accuracy: 0.9487, Validation Accuracy: 0.9451, Loss: 0.0410
Epoch   2 Batch  340/538 - Train Accuracy: 0.9506, Validation Accuracy: 0.9572, Loss: 0.0393
Epoch   2 Batch  360/538 - Train Accuracy: 0.9553, Validation Accuracy: 0.9572, Loss: 0.0377
Epoch   2 Batch  380/538 - Train Accuracy: 0.9584, Validation Accuracy: 0.9487, Loss: 0.0344
Epoch   2 Batch  400/538 - Train Accuracy: 0.9650, Validation Accuracy: 0.9510, Loss: 0.0402
Epoch   2 Batch  420/538 - Train Accuracy: 0.9613, Validation Accuracy: 0.9423, Loss: 0.0397
Epoch   2 Batch  440/538 - Train Accuracy: 0.9617, Validation Accuracy: 0.9489, Loss: 0.0401
Epoch   2 Batch  460/538 - Train Accuracy: 0.9554, Validation Accuracy: 0.9496, Loss: 0.0427
Epoch   2 Batch  480/538 - Train Accuracy: 0.9615, Validation Accuracy: 0.9632, Loss: 0.0345
Epoch   2 Batch  500/538 - Train Accuracy: 0.9799, Validation Accuracy: 0.9400, Loss: 0.0249
Epoch   2 Batch  520/538 - Train Accuracy: 0.9611, Validation Accuracy: 0.9576, Loss: 0.0328
Model Trained and Saved

Save Parameters

Save the batch_size and save_path parameters for inference.


In [169]:
# Save parameters for checkpoint
helper.save_params(save_path)

Checkpoint


In [155]:
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

Sentence to Sequence


In [156]:
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    s = sentence.lower()
    
    unknown = vocab_to_int["<UNK>"]
    
    w = [vocab_to_int.get(w, unknown) for w in s.split()]
    
    return w


tests.test_sentence_to_seq(sentence_to_seq)


Tests Passed
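
For example, with a hypothetical miniature vocabulary, words missing from the vocabulary fall back to the <UNK> id:

toy_vocab = {'<UNK>': 2, 'he': 7, 'saw': 8, 'a': 9, 'truck': 10}  # hypothetical ids
print(sentence_to_seq('He saw a shiny truck', toy_vocab))  # [7, 8, 9, 2, 10] -- 'shiny' maps to <UNK>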

Translate

This will translate translate_sentence from English to French.


In [164]:
translate_sentence = 'paris is relaxing during december , but it is usually chilly in october.'


translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,
                                         source_sequence_length: [len(translate_sentence)]*batch_size,
                                         keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in translate_logits]))
print('  French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))


INFO:tensorflow:Restoring parameters from checkpoints/dev
Input
  Word Ids:      [116, 223, 20, 205, 169, 198, 217, 21, 223, 94, 124, 28, 2]
  English Words: ['paris', 'is', 'relaxing', 'during', 'december', ',', 'but', 'it', 'is', 'usually', 'chilly', 'in', '<UNK>']

Prediction
  Word Ids:      [45, 70, 221, 75, 7, 152, 197, 178, 70, 167, 47, 75, 56, 11, 1]
  French Words: paris est relaxant en décembre , mais il est généralement froid en octobre . <EOS>

Imperfect Translation

You might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words out of the thousands used in everyday English, you're only going to see good results with sentences built from those words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.

You can train on the WMT10 French-English corpus. That dataset has a larger vocabulary and is richer in the topics it covers. However, it will take days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.


In [ ]: