Language Translation

In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.

Get the Data

Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.


In [1]:
# The Udacity sequence_to_sequence_implementation course exercise was used heavily as a
# reference for this project. A mentor in live help suggested working through it again
# first, which proved vital.
# See: /deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb

In [2]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data

Play around with view_sentence_range to view different parts of the data.


In [3]:
view_sentence_range = (10, 110)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))


Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 10 to 110:
the lime is her least liked fruit , but the banana is my least liked .
he saw a old yellow truck .
india is rainy during june , and it is sometimes warm in november .
that cat was my most loved animal .
he dislikes grapefruit , limes , and lemons .
her least liked fruit is the lemon , but his least liked is the grapefruit .
california is never cold during february , but it is sometimes freezing in june .
china is usually pleasant during autumn , and it is usually quiet in october .
paris is never freezing during november , but it is wonderful in october .
the united states is never rainy during january , but it is sometimes mild in october .
china is usually pleasant during november , and it is never quiet in october .
the united states is never nice during february , but it is sometimes pleasant in april .
india is never busy during autumn , and it is mild in spring .
paris is mild during summer , but it is usually busy in april .
france is never cold during september , and it is snowy in october .
california is never cold during may , and it is sometimes chilly in march .
he dislikes lemons , grapes , and mangoes.
their favorite fruit is the mango , but our favorite is the pear .
france is sometimes quiet during may , and it is never chilly in august .
paris is never pleasant during september , and it is beautiful in autumn .
he dislikes apples , peaches , and grapes .
california is usually freezing during december , and it is busy in april .
your most feared animal is that shark .
paris is usually wet during august , and it is never dry in november .
paris is usually beautiful during september , and it is usually snowy in november .
the united states is never wet during january , but it is usually hot in october .
we like oranges , mangoes , and grapes .
they like pears , apples , and mangoes .
she dislikes that little red truck .
the grapefruit is my most loved fruit , but the banana is her most loved .
france is snowy during may , and it is never busy in autumn .
china is usually mild during winter , but it is never busy in february .
china is never nice during july , but it is usually snowy in spring .
california is busy during november , but it is rainy in autumn .
china is warm during spring , and it is sometimes cold in february .
california is usually beautiful during winter , but it is never busy in february .
france is wonderful during november , but it is sometimes hot in september .
india is usually pleasant during november , but it is never relaxing in july .
the united states is never freezing during autumn , but it is never busy in june .
paris is sometimes warm during june , but it is usually hot in july .
paris is never hot during summer , and it is usually mild in winter .
she disliked a rusty yellow car .
france is usually quiet during november , but it is sometimes warm in february .
new jersey is never wet during november , and it is mild in august .
we like peaches , pears , and strawberries .
the orange is her least liked fruit , but the grapefruit is their least liked .
china is never rainy during november , and it is quiet in january .
china is relaxing during march , but it is sometimes snowy in september .
paris is wonderful during march , but it is usually pleasant in june .
new jersey is chilly during autumn , and it is sometimes pleasant in spring .
california is never freezing during october , but it is usually quiet in june .
new jersey is freezing during winter , but it is sometimes wonderful in january .
i like grapes , pears , and strawberries .
the lemon is my most loved fruit , but the strawberry is our most loved .
china is usually dry during march , but it is nice in november .
paris is pleasant during december , but it is never nice in november .
china is freezing during july , but it is relaxing in january .
the apple is our least favorite fruit , but the orange is her least favorite .
he dislikes grapes , grapefruit , and bananas.
he dislikes apples , mangoes , and strawberries .
china is hot during july , but it is never pleasant in january .
india is usually dry during april , and it is freezing in february .
she is going to the united states next summer .
france is sometimes rainy during february , and it is usually quiet in spring .
i plan to visit california next may .
california is never wet during november , and it is sometimes pleasant in september .
they like lemons , limes , and grapefruit .
the united states is never beautiful during march , and it is usually relaxing in summer .
elephants were his most feared animals .
the strawberry is their least favorite fruit , but the apple is our least favorite.
they are going to france next june .
he likes strawberries , oranges , and limes .
california is never pleasant during winter , and it is sometimes wonderful in december .
the apple is our least favorite fruit , but the mango is their least favorite .
she likes strawberries , oranges , and bananas .
california is beautiful during january , and it is pleasant in february .
they dislike grapes , mangoes , and limes .
she dislikes lemons , grapes , and oranges .
our least favorite fruit is the banana , but your least favorite is the grape .
california is never cold during december , but it is usually warm in may .
california is never busy during february , and it is usually hot in june .
the united states is sometimes beautiful during november , and it is never rainy in march .
new jersey is sometimes hot during march , and it is beautiful in fall .
the grapefruit is our least favorite fruit , but the orange is your least favorite .
i like grapefruit , limes , and pears .
she dislikes lemons , strawberries , and grapes .
paris is usually chilly during fall , but it is sometimes rainy in july .
she likes mangoes , apples , and bananas .
she is driving the old yellow truck .
we like peaches , mangoes , and oranges .
new jersey is usually quiet during fall , but it is usually warm in april .
california is never rainy during winter , and it is usually mild in summer .
california is sometimes cold during winter , but it is sometimes chilly in autumn .
she likes lemons , pears , and oranges.
we like strawberries , bananas , and oranges .
india is never snowy during september , but it is sometimes dry in spring .
india is sometimes rainy during january , and it is pleasant in february .
paris is never relaxing during march , but it is usually freezing in autumn .
india is chilly during summer , and it is sometimes beautiful in september .
she likes peaches , limes , and mangoes .

French sentences 10 to 110:
la chaux est son moins aimé des fruits , mais la banane est mon moins aimé.
il a vu un vieux camion jaune .
inde est pluvieux en juin , et il est parfois chaud en novembre .
ce chat était mon animal le plus aimé .
il n'aime pamplemousse , citrons verts et les citrons .
son fruit est moins aimé le citron , mais son moins aimé est le pamplemousse .
californie ne fait jamais froid en février , mais il est parfois le gel en juin .
chine est généralement agréable en automne , et il est généralement calme en octobre .
paris est jamais le gel en novembre , mais il est merveilleux en octobre .
les états-unis est jamais pluvieux en janvier , mais il est parfois doux en octobre .
chine est généralement agréable en novembre , et il est jamais tranquille en octobre .
les états-unis est jamais agréable en février , mais il est parfois agréable en avril .
l' inde est jamais occupé à l'automne , et il est doux au printemps .
paris est doux pendant l' été , mais il est généralement occupé en avril .
france ne fait jamais froid en septembre , et il est neigeux en octobre .
californie ne fait jamais froid au mois de mai , et il est parfois frisquet en mars .
il déteste les citrons , les raisins et les mangues .
leur fruit préféré est la mangue , mais notre préféré est la poire .
la france est parfois calme au mois de mai , et il est jamais froid en août .
paris est jamais agréable en septembre , et il est beau à l' automne .
il déteste les pommes , les pêches et les raisins .
la californie est le gel habituellement en décembre , et il est occupé en avril .
votre animal le plus redouté est que le requin .
paris est généralement humide au mois d' août , et il est jamais sec en novembre .
paris est généralement beau en septembre , et il est généralement enneigée en novembre .
les états-unis est jamais humide en janvier , mais il est généralement chaud en octobre .
nous aimons les oranges , les mangues et les raisins .
ils aiment les poires , les pommes et les mangues .
elle déteste ce petit camion rouge .
le pamplemousse est mon fruit le plus cher , mais la banane est la plus aimée .
la france est la neige au mois de mai , et il est jamais trop de monde à l' automne .
chine est généralement doux pendant l' hiver , mais il est jamais occupé en février .
chine est jamais agréable en juillet , mais il est généralement enneigée au printemps .
californie est occupé au mois de novembre , mais il est pluvieux à l' automne .
chine est chaud au printemps , et il est parfois froid en février .
californie est généralement beau pendant l' hiver , mais il est jamais occupé en février .
france est merveilleux au mois de novembre , mais il est parfois chaud en septembre .
l' inde est généralement agréable en novembre , mais il est jamais relaxant en juillet .
les états-unis ne sont jamais gel pendant l' automne , mais il est jamais occupé en juin .
paris est parfois chaud en juin , mais il est généralement chaud en juillet .
paris est jamais chaude pendant l' été , et il est généralement doux en hiver .
elle n'aimait pas une voiture jaune rouillée .
la france est généralement calme en novembre , mais il est parfois chaud en février .
new jersey est jamais humide au mois de novembre , et il est doux au mois d' août .
nous aimons les pêches , les poires et les fraises .
l'orange est la moins aimé des fruits , mais le pamplemousse est leur moins aimé .
chine est jamais pluvieux en novembre , et il est calme en janvier .
chine est relaxant au mois de mars , mais il est parfois enneigée en septembre .
paris est merveilleux au mois de mars , mais il est généralement agréable en juin .
new jersey est froid au cours de l' automne , et il est parfois agréable au printemps .
la californie est jamais le gel en octobre , mais il est généralement calme en juin .
new jersey est le gel pendant l' hiver , mais il est parfois merveilleux en janvier .
j'aime les raisins , les poires et les fraises .
le citron est mon fruit le plus aimé , mais la fraise est notre plus aimé .
chine est généralement sec en mars , mais il est agréable en novembre .
paris est agréable en décembre , mais il est jamais agréable en novembre .
chine gèle en juillet , mais il est relaxant en janvier .
la pomme est notre fruit préféré moins , mais l'orange est son moins préféré .
il déteste les raisins , le pamplemousse et les bananes .
il déteste les pommes , les mangues et les fraises .
chine est chaud en juillet , mais il est jamais agréable en janvier .
l' inde est généralement sec en avril , et il gèle en février .
elle va aux états-unis l' été prochain .
la france est parfois pluvieux en février , et il est généralement calme au printemps .
je prévois de visiter la californie en mai prochain .
california est jamais humide en novembre , et il est parfois agréable en septembre .
ils aiment les citrons , citrons verts et le pamplemousse .
les états-unis est jamais belle en mars , et il est relaxant habituellement en été .
les éléphants étaient ses animaux les plus redoutés .
la fraise est leur fruit préféré moins , mais la pomme est notre moins préféré .
ils vont en france en juin prochain .
il aime les fraises , les oranges et les citrons verts .
california est jamais agréable pendant l' hiver , et il est parfois merveilleux en décembre .
la pomme est notre fruit préféré moins , mais la mangue est leur moins préférée .
elle aime les fraises , les oranges et les bananes .
californie est beau en janvier , et il est agréable en février .
ils n'aiment pas les raisins , mangues et citrons verts .
elle déteste les citrons , les raisins et les oranges .
notre fruit préféré moins est la banane , mais votre moins préféré est le raisin .
californie ne fait jamais froid en décembre , mais il est habituellement chaud en mai .
california est jamais occupé en février , et il est généralement chaud en juin .
les états-unis est parfois belle au mois de novembre , et il est jamais pluvieux en mars .
new jersey est parfois chaud en mars , et il est beau à l' automne .
le pamplemousse est notre fruit préféré moins , mais l'orange est votre préféré moins .
i comme le pamplemousse , citrons verts , et les poires .
elle déteste les citrons , les fraises et les raisins .
paris est généralement froid à l'automne , mais il est parfois pluvieux en juillet .
elle aime les mangues , les pommes et les bananes .
elle conduit le vieux camion jaune .
nous aimons les pêches , les mangues et les oranges .
new jersey est généralement calme au cours de l' automne , mais il est généralement chaud en avril .
california est jamais pluvieux pendant l' hiver , et il est généralement doux en été .
california est parfois froid pendant l' hiver , mais il est parfois frisquet à l' automne .
elle aime les citrons , les poires et les oranges .
nous aimons les fraises , les bananes et les oranges .
l' inde est jamais de neige au mois de septembre , mais il est parfois sec au printemps .
l' inde est parfois pluvieux en janvier , et il est agréable en février .
paris est jamais relaxant au mois de mars , mais il gèle habituellement à l' automne .
l' inde est froid pendant l' été , et il est parfois beau en septembre .
elle aime les pêches , citrons verts et les mangues .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing:

target_vocab_to_int['<EOS>']

You can get other word ids using source_vocab_to_int and target_vocab_to_int.
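
As a quick illustration (not part of the graded cell), here is what the conversion should produce for a single sentence pair. This is only a sketch with made-up vocabularies; the real dictionaries come from the helper code.

# Toy example of the word-to-id conversion; the two vocabularies below are made up.
source_vocab = {'<PAD>': 0, '<UNK>': 1, 'new': 2, 'jersey': 3, 'is': 4, 'wet': 5}
target_vocab = {'<PAD>': 0, '<UNK>': 1, '<GO>': 2, '<EOS>': 3,
                'new': 4, 'jersey': 5, 'est': 6, 'humide': 7}

source_sentence = 'new jersey is wet'
target_sentence = 'new jersey est humide'

source_ids = [source_vocab[word] for word in source_sentence.split()]
target_ids = [target_vocab[word] for word in target_sentence.split()] + [target_vocab['<EOS>']]

print(source_ids)  # [2, 3, 4, 5]
print(target_ids)  # [4, 5, 6, 7, 3]  <- note the trailing <EOS> id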


In [4]:
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # <EOS> ("end of sentence") is appended to every target sentence so the network
    # can learn when a translation should end.
    # Reference: https://www.tensorflow.org/tutorials/seq2seq
    # (Live support also pointed to https://github.com/nicolas-ivanov/tf_seq2seq_chatbot/issues/15)
    end_of_seq = target_vocab_to_int['<EOS>']

    # Sentences are newline-separated and words are space-separated, so build one
    # list of word ids per sentence. (Live help confirmed this list-comprehension
    # form is equivalent to the nested loops tried first.)
    source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]
                      for sentence in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [end_of_seq]
                      for sentence in target_text.split('\n')]

    return source_id_text, target_id_text

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)


Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)



Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU


In [7]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:

  • model_inputs
  • process_decoder_input
  • encoding_layer
  • decoding_layer_train
  • decoding_layer_infer
  • decoding_layer
  • seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
  • Targets placeholder with rank 2.
  • Learning rate placeholder with rank 0.
  • Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
  • Target sequence length placeholder named "target_sequence_length" with rank 1
  • Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
  • Source sequence length placeholder named "source_sequence_length" with rank 1

Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)


In [8]:
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """
    # Docs: https://www.tensorflow.org/api_docs/python/tf/placeholder

    # Word-id inputs and targets (rank 2: batch x time); int32 to match the embedding lookups
    input_data = tf.placeholder(dtype=tf.int32, shape=[None, None], name="input")
    targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name="target")

    # Scalar hyperparameter placeholders
    lr = tf.placeholder(dtype=tf.float32, name="lr")
    keep_prob = tf.placeholder(dtype=tf.float32, name="keep_prob")

    # Per-sequence length vectors (rank 1). The explicit [None] shape on
    # target_sequence_length matters: TrainingHelper later requires sequence_length
    # to be a vector, and an unshaped placeholder has unknown rank.
    target_sequence_length = tf.placeholder(dtype=tf.int32, shape=[None], name="target_sequence_length")
    max_target_len = tf.reduce_max(target_sequence_length, name='max_target_len')
    source_sequence_length = tf.placeholder(dtype=tf.int32, shape=[None], name='source_sequence_length')

    return input_data, targets, lr, keep_prob, target_sequence_length, max_target_len, source_sequence_length


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed

Process Decoder Input

Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
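
As a sanity check (not part of the graded cell), here is the NumPy equivalent of that transformation on a toy batch of target ids; the id values and the <GO> id of 2 are made up for illustration.

import numpy as np

go_id = 2                                 # made-up <GO> id
target_batch = np.array([[15, 6, 9, 3],   # 3 standing in for <EOS>
                         [22, 7, 3, 0]])  # 0 standing in for <PAD>

ending = target_batch[:, :-1]             # drop the last word id of every sequence
dec_input = np.concatenate([np.full((target_batch.shape[0], 1), go_id), ending], axis=1)

print(dec_input)
# [[ 2 15  6  9]
#  [ 2 22  7  3]]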


In [9]:
### Reference: process_decoder_input in the Udacity seq2seq class exercise
### (deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb), which performs
### the same strided_slice + concat transformation implemented below.

def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placehoder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # Slice off the last word id of every sequence in the batch, then prepend the
    # <GO> id so the decoder knows where each target sequence starts.
    # Docs: https://www.tensorflow.org/api_docs/python/tf/strided_slice
    # See also: https://stackoverflow.com/questions/41380126/what-does-tf-strided-slice-do
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)

    return dec_input

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)


Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer:


In [10]:
from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, 
                   source_sequence_length, source_vocab_size, 
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    # Based on "2.1 Encoder" in the Udacity seq2seq exercise
    # (deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb).
    # Docs:
    #   https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence
    #   https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell
    #   https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper
    #   https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn

    # Embed the source word ids
    embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)

    def make_cell(rnn_size):
        # LSTM cell with dropout applied to its outputs (output_keep_prob)
        enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
                                           initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
        enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
        return enc_cell

    # Stack num_layers cells and run the dynamic RNN over the embedded input
    enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
    enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, embed_input,
                                              sequence_length=source_sequence_length, dtype=tf.float32)

    return enc_output, enc_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)


Tests Passed

Decoding - Training

Create a training decoding layer:


In [11]:
######
#  Note: this notebook only runs correctly on TensorFlow 1.1 -- not 1.0 and not 1.2.
#  Other versions fail with cryptic errors in the seq2seq contrib APIs, which cost
#  about half a day of debugging near the submission deadline even though the code was fine.
######

In [12]:
# Based on the Udacity sequence_to_sequence_implementation exercise; searching that
# notebook (Ctrl+F) for "TrainingHelper" leads to its decoding_layer() function,
# which covers what this cell requires.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, 
                         target_sequence_length, max_summary_length, 
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    # Helper for the training process; used by BasicDecoder to read the embedded inputs.
    training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
                                                        sequence_length=target_sequence_length,
                                                        time_major=False)

    # Basic decoder seeded with the encoder's final state
    training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                       training_helper,
                                                       encoder_state,
                                                       output_layer)

    # Perform dynamic decoding using the decoder
    training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
                                                                   impute_finished=True,
                                                                   maximum_iterations=max_summary_length)

    return training_decoder_output
    


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)


Tests Passed

Decoding - Inference

Create inference decoder:


In [13]:
#########################
#
#  Based on the inference decoder in the same decoding_layer() function of the
#  Udacity seq2seq exercise referenced in the previous cell.
#
#########################



def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    # Based strongly on the class seq2seq material.

    # Tile the <GO> id across the batch as the start tokens for greedy decoding.
    # Docs: https://www.tensorflow.org/api_docs/python/tf/tile
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
                           [batch_size], name='start_tokens')

    # Helper that feeds the embedding of the previous greedy prediction back in at each step.
    # Docs: https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper
    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
                                                                start_tokens,
                                                                end_of_sequence_id)

    # Basic decoder seeded with the encoder's final state
    # Docs: https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper,
                                                        encoder_state, output_layer)

    # Perform dynamic decoding using the decoder
    # Docs: https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode
    inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
                                                                    impute_finished=True,
                                                                    maximum_iterations=max_target_sequence_length)

    return inference_decoder_output



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)


Tests Passed

Build the Decoding Layer

Implement decoding_layer() to create a Decoder RNN layer.

  • Embed the target sequences
  • Construct the decoder LSTM cell (just like you constructed the encoder cell above)
  • Create an output layer to map the outputs of the decoder to the elements of our vocabulary
  • Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
  • Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
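
As a quick illustration (not part of the graded cell), here is the variable-scope sharing pattern in isolation. This is only a minimal sketch for TensorFlow 1.x; the scope name "decode" matches the cell below, but the variable "w" is made up for the example.

# Minimal sketch of sharing variables between a training path and an inference path.
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    with tf.variable_scope("decode"):
        w_train = tf.get_variable("w", shape=[3, 3])   # variable is created here
    with tf.variable_scope("decode", reuse=True):
        w_infer = tf.get_variable("w", shape=[3, 3])   # fetches the existing variable

print(w_train.name, w_infer.name)  # both resolve to decode/w:0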


In [14]:
#
## As suggested by a Udacity TA in live support, this cell is based largely on the
## decoding_layer() function in the Udacity seq2seq tutorial material
## (deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb); the training and
## inference branches below follow that reference closely.
#


def decoding_layer(dec_input, encoder_state,
                   target_sequence_length, max_target_sequence_length,
                   rnn_size,
                   num_layers, target_vocab_to_int, target_vocab_size,
                   batch_size, keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :param decoding_embedding_size: Decoding embedding size
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # 1. Decoder embedding
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)

    # 2. Construct the decoder cell (same construction as the encoder cell)
    def make_cell(rnn_size):
        dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
                                           initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
        return dec_cell

    dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])

    # 3. Dense layer to translate the decoder's output at each time
    # step into a choice from the target vocabulary
    output_layer = Dense(target_vocab_size,
                         kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))

    # 4. Training decoder
    with tf.variable_scope("decode"):
        training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                                                       target_sequence_length, max_target_sequence_length,
                                                       output_layer, keep_prob)

    # 5. Inference decoder, reusing the parameters trained above
    with tf.variable_scope("decode", reuse=True):
        inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
                                                        target_vocab_to_int['<GO>'],
                                                        target_vocab_to_int['<EOS>'],
                                                        max_target_sequence_length, target_vocab_size,
                                                        output_layer, batch_size, keep_prob)

    return training_decoder_output, inference_decoder_output



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)


Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
  • Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
  • Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.

In [15]:
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sentence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # Encode the source sequences
    enc_output, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
                                           source_sequence_length, source_vocab_size, enc_embedding_size)

    # Prepare the decoder input (<GO> prepended, last word id dropped)
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)

    # Decode the encoded input
    training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state,
                                                                       target_sequence_length,
                                                                       max_target_sentence_length,
                                                                       rnn_size, num_layers,
                                                                       target_vocab_to_int, target_vocab_size,
                                                                       batch_size, keep_prob,
                                                                       dec_embedding_size)

    return training_decoder_output, inference_decoder_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)


Tests Passed

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set num_layers to the number of layers.
  • Set encoding_embedding_size to the size of the embedding for the encoder.
  • Set decoding_embedding_size to the size of the embedding for the decoder.
  • Set learning_rate to the learning rate.
  • Set keep_probability to the Dropout keep probability
  • Set display_step to state how many steps between each debug output statement

In [16]:
# Hyperparameters are expected to be in a similar range to those used in the seq2seq lesson.


# Number of Epochs
epochs = 16
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.75  # above 0.5 so most activations survive, but low enough that dropout still regularizes
display_step = 32

Build the Graph

Build the graph using the neural network you implemented.


In [17]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])

train_graph = tf.Graph()
with train_graph.as_default():
    input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()

    #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)

    train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
                                                   targets,
                                                   keep_prob,
                                                   batch_size,
                                                   source_sequence_length,
                                                   target_sequence_length,
                                                   max_target_sequence_length,
                                                   len(source_vocab_to_int),
                                                   len(target_vocab_to_int),
                                                   encoding_embedding_size,
                                                   decoding_embedding_size,
                                                   rnn_size,
                                                   num_layers,
                                                   target_vocab_to_int)


    training_logits = tf.identity(train_logits.rnn_output, name='logits')
    inference_logits = tf.identity(inference_logits.sample_id, name='predictions')

    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            training_logits,
            targets,
            masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)



Batch and pad the source and target sequences


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]


def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size

        # Slice the right amount for the batch
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]

        # Pad
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))

        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))

        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))

        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
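
As a quick illustration (not part of the assignment), here is how the pad_sentence_batch function defined above behaves on a toy batch, assuming a <PAD> id of 0:

# Toy demonstration of pad_sentence_batch; the word ids and the pad id (0) are made up.
toy_batch = [[4, 8, 15], [16, 23], [42]]
print(pad_sentence_batch(toy_batch, 0))
# [[4, 8, 15], [16, 23, 0], [42, 0, 0]]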

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1])],
            'constant')

    return np.mean(np.equal(target, logits))

# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
                                                                                                             valid_target,
                                                                                                             batch_size,
                                                                                                             source_vocab_to_int['<PAD>'],
                                                                                                             target_vocab_to_int['<PAD>']))                                                                                                  
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
                get_batches(train_source, train_target, batch_size,
                            source_vocab_to_int['<PAD>'],
                            target_vocab_to_int['<PAD>'])):

            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})


            if batch_i % display_step == 0 and batch_i > 0:


                batch_train_logits = sess.run(
                    inference_logits,
                    {input_data: source_batch,
                     source_sequence_length: sources_lengths,
                     target_sequence_length: targets_lengths,
                     keep_prob: 1.0})


                batch_valid_logits = sess.run(
                    inference_logits,
                    {input_data: valid_sources_batch,
                     source_sequence_length: valid_sources_lengths,
                     target_sequence_length: valid_targets_lengths,
                     keep_prob: 1.0})

                train_acc = get_accuracy(target_batch, batch_train_logits)

                valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)

                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')

Save Parameters

Save the batch_size and save_path parameters for inference.


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)

Checkpoint


In [ ]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

Sentence to Sequence

To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

  • Convert the sentence to lowercase
  • Convert words into ids using vocab_to_int
    • Convert words not in the vocabulary, to the <UNK> word id.

In [ ]:
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    # Lowercase the sentence, split it into words, and map each word to its id,
    # falling back to the <UNK> id for words that are not in the vocabulary.
    unk_id = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
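
As a quick usage sketch (not part of the graded cell), with a made-up vocabulary the function behaves like this; out-of-vocabulary words fall back to the <UNK> id:

# Toy vocabulary for illustration only; the real vocab_to_int comes from helper.load_preprocess().
toy_vocab_to_int = {'<PAD>': 0, '<UNK>': 1, 'he': 2, 'saw': 3, 'a': 4, 'truck': 5}
print(sentence_to_seq('He saw a purple truck', toy_vocab_to_int))
# [2, 3, 4, 1, 5]  <- 'purple' is not in the vocabulary, so it maps to <UNK> (1)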

Translate

This will translate translate_sentence from English to French.


In [ ]:
# translate_sentence = 'he saw a old yellow truck .'
# Note: "a old" (rather than "an old") is not a typo here -- the dataset sentences themselves
# contain "he saw a old yellow truck .", so the suggested input matches the training data.

translate_sentence = "There once was a man from Nantucket."

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,
                                         source_sequence_length: [len(translate_sentence)]*batch_size,
                                         keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in translate_logits]))
print('  French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))

Imperfect Translation

You might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words out of the thousands used in the language, you're only going to see good results on sentences that use these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.

You can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take days to train on, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.