In [0]:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Preparing text to use with TensorFlow models

The high level steps to prepare text to be used in a machine learning model are:

  1. Tokenize the words to get numerical values for them.
  2. Create numerical sequences of the sentences.
  3. Adjust the sequences to all be the same length.

In this colab, you learn how to use padding to make the sequences all the same length.

Import the classes you need


In [0]:
# Import Tokenizer and pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

Write some sentences

Feel free to write your own sentences here.


In [0]:
sentences = [
    'My favorite food is ice cream',
    'do you like ice cream too?',
    'My dog likes ice cream!',
    "your favorite flavor of icecream is chocolate",
    "chocolate isn't good for dogs",
    "your dog, your cat, and your parrot prefer broccoli"
]
print(sentences)

Create the Tokenizer and define an out of vocabulary token

When creating the Tokenizer, you can specify the maximum number of words to keep in the dictionary. You can also specify a token to represent words that are out of the vocabulary (OOV), in other words, words that are not in the dictionary. This OOV token will be used when you create sequences for sentences that contain words that are not in the word index.


In [0]:
tokenizer = Tokenizer(num_words = 100, oov_token="<OOV>")

Tokenize the words


In [0]:
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
print(word_index)

Turn sentences into sequences

Each word now has a unique number in the word index. However, words in a sentence are in a specific order. You can't just randomly mix up words and have the outcome be a sentence.

For example, although "chocolate isn't good for dogs" is a perfectly fine sentence, "dogs isn't for chocolate good" does not make sense as a sentence.

So the next step in representing text in a way that can be meaningfully used by machine learning programs is to create numerical sequences that represent the sentences in the text.

Each sentence will be converted into a sequence where each word is replaced by its number in the word index.


In [0]:
sequences = tokenizer.texts_to_sequences(sentences)
print(sequences)
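
The num_words argument passed to the Tokenizer above has no visible effect here, because these sentences contain far fewer than 100 distinct words. As a quick sketch of what it does (the second tokenizer and the limit of 5 below are only for illustration), here is the same text with a much smaller vocabulary limit. The full word index is still built by fit_on_texts, but when making sequences only the words with the lowest indices keep their own numbers; everything else falls back to the OOV token.


In [0]:
# Illustration only: a second tokenizer with a much smaller vocabulary limit.
# With num_words=5, only indices below 5 are kept when making sequences:
# the OOV token (1) plus the 3 most frequent words. Every other word is
# replaced by the OOV token's number.
small_tokenizer = Tokenizer(num_words=5, oov_token="<OOV>")
small_tokenizer.fit_on_texts(sentences)
print(small_tokenizer.texts_to_sequences(sentences))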

Make the sequences all the same length

Later, when you feed the sequences into a neural network to train a model, the sequences all need to be uniform in size. Currently the sequences have varied lengths, so the next step is to make them all the same size, by padding them with zeros and/or truncating them.

Use tf.keras.preprocessing.sequence.pad_sequences to add zeros to the sequences to make them all the same length. By default, the padding goes at the start of the sequences, but you can specify padding at the end.

You can optionally specify the maximum length to pad the sequences to. Sequences that are longer than the specified maximum length will be truncated. By default, sequences are truncated from the beginning, but you can specify truncating from the end.

If you don't provide the max length, then the sequences are padded to match the length of the longest sentence.

For all the options when padding and truncating sequences, see https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences


In [0]:
padded = pad_sequences(sequences)
print("\nWord Index = " , word_index)
print("\nSequences = " , sequences)
print("\nPadded Sequences:")
print(padded)

In [0]:
# Specify a max length for the padded sequences
padded = pad_sequences(sequences, maxlen=15)
print(padded)

In [0]:
# Put the padding at the end of the sequences
padded = pad_sequences(sequences, maxlen=15, padding="post")
print(padded)

In [0]:
# Limit the length of the sequences; you will see some sequences get truncated
padded = pad_sequences(sequences, maxlen=3)
print(padded)
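
In the cell above, the extra words were dropped from the beginning of each sequence, which is the default truncating behavior. To keep the start of each sentence and drop words from the end instead, pass truncating="post". For example:


In [0]:
# Truncate from the end of the sequences instead of the beginning
padded = pad_sequences(sequences, maxlen=3, truncating="post")
print(padded)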

What happens if some of the sentences contain words that are not in the word index?

Here's where the "out of vocabulary" token is used. Try generating sequences for some sentences that have words that are not in the word index.


In [0]:
# Try turning sentences that contain words that 
# aren't in the word index into sequences.
# Add your own sentences to the test_data
test_data = [
    "my best friend's favorite ice cream flavor is strawberry",
    "my dog's best friend is a manatee"
]
print(test_data)

# Remind ourselves which number corresponds to the
# out of vocabulary token in the word index
print("<OOV> has the number", word_index['<OOV>'], "in the word index.")

# Convert the test sentences to sequences
test_seq = tokenizer.texts_to_sequences(test_data)
print("\nTest Sequence = ", test_seq)

# Pad the new sequences
padded = pad_sequences(test_seq, maxlen=10)
print("\nPadded Test Sequence: ")

# Notice that "1" appears in the sequence wherever there's a word 
# that's not in the word index
print(padded)
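
As mentioned earlier, the point of padding is so the sequences can be batched together and fed into a neural network. As a rough sketch only (the vocabulary size and embedding dimension below are arbitrary example values, and building a full model is beyond the scope of this colab), a padded batch like the one above can be passed straight into an Embedding layer:


In [0]:
# Sketch only: input_dim and output_dim here are arbitrary example values.
import tensorflow as tf

embedding_layer = tf.keras.layers.Embedding(input_dim=100, output_dim=8)
embedded = embedding_layer(padded)

# One 8-dimensional vector per token, including the padding zeros
print(embedded.shape)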