Apply word2vec to a dataset

  1. Download some data
  2. Train on it a bit
  3. Visualize the result

By the end of this Notebook, you should be able to:

  • Apply word2vec to data
    • Download some data
    • Train on it a bit
    • Visualize the result
  • Describe the pre-processing steps needed for word2vec
  • Explain why t-SNE is a preferred algorithm for high-dimensional visualization

This notebook is based on a TensorFlow tutorial.


In [54]:
reset -fs

In [55]:
import collections
import math
import os
from pprint import pprint
import random
import urllib.request
import zipfile

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf 
from sklearn.manifold import TSNE

%matplotlib inline

Step 1: Download the data.


In [20]:
url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
    """Download a file if not present, and make sure it's the right size."""
    if not os.path.exists(filename):
        filename, _ = urllib.request.urlretrieve(url + filename, filename)
    
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified', filename)
    else:
        print(statinfo.st_size)
        raise Exception(
            'Failed to verify ' + filename + '. Can you get to it with a browser?')
    
    return filename

filename = maybe_download('text8.zip', 31344016)


Found and verified text8.zip

☝️ While this code is running, preview the code ahead.


In [21]:
# Read the data into a list of strings.
def read_data(filename):
    """Extract the first file enclosed in a zip file as a list of words"""
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

words = read_data(filename)
print('Dataset size: {:,} words'.format(len(words)))


Dataset size: 17,005,207 words

In [22]:
# Let's have a peek
words[-10:]


Out[22]:
['in', 'one', 'nine', 'six', 'three', 'one', 'nine', 'six', 'five', 'b']

Notice: none of the words are capitalized, and there is no punctuation.
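
If you want to verify this yourself, here is a quick spot check over the head of the corpus (a minimal sketch, not part of the original tutorial; the string module is imported only for its punctuation set):

import string

sample = words[:100000]
assert all(w == w.lower() for w in sample)                        # no capital letters
assert not any(set(w) & set(string.punctuation) for w in sample)  # no punctuation characters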


Step 2: Build the dictionary.


In [23]:
vocabulary_size = 50000

def build_dataset(words):
    """Replace rare words with the UNK token, which stands for "unknown".

    UNK acts as a dustbin category: all of the low-count words are swept
    into a single group.
    """
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary
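
If collections.Counter.most_common is unfamiliar, here is a tiny illustration on a made-up word list (the toy list is purely for demonstration and is not part of the dataset):

toy = ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the', 'cat']
collections.Counter(toy).most_common(3)
# [('the', 3), ('cat', 2), ('sat', 1)] -- ordering among equal counts may vary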

In [24]:
data, count, dictionary, reverse_dictionary = build_dataset(words)
del words  # Hint to reduce memory.

In [42]:
len(data)  # data maps each word to its rank index, so we never have to reference the strings themselves


Out[42]:
17005207

In [40]:
# dictionary # word: rank

In [36]:
# reverse_dictionary # rank: word

In [25]:
print('Most common words:')
pprint(count[:5])


Most common words:
[['UNK', 418391],
 ('the', 1061396),
 ('of', 593677),
 ('and', 416629),
 ('one', 411764)]

In [27]:
print('Least common words:')
pprint(count[-5:])


Least common words:
[('econometric', 9),
 ('gotthold', 9),
 ('barzani', 9),
 ('polypropylene', 9),
 ('housewives', 9)]

In [28]:
print('Sample data:')
pprint(list(zip(data[:10], [reverse_dictionary[i] for i in data[:10]])))


Sample data:
[(5240, 'anarchism'),
 (3081, 'originated'),
 (12, 'as'),
 (6, 'a'),
 (195, 'term'),
 (2, 'of'),
 (3136, 'abuse'),
 (46, 'first'),
 (59, 'used'),
 (156, 'against')]

Step 3: Function to generate a training batch for the skip-gram model.


In [44]:
data_index = 0

def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1 # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [ skip_window ]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels

In [47]:
batch, labels = generate_batch(batch_size=8, 
                               num_skips=2, 
                               skip_window=1)

In [46]:
for i in range(8):
    print(batch[i], reverse_dictionary[batch[i]],
          '->', labels[i, 0], reverse_dictionary[labels[i, 0]])


3081 originated -> 5240 anarchism
3081 originated -> 12 as
12 as -> 6 a
12 as -> 3081 originated
6 a -> 195 term
6 a -> 12 as
195 term -> 2 of
195 term -> 6 a
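
The printout above pairs each center word with a randomly chosen neighbor from its window. As a sanity check, here is a minimal sketch (not part of the original tutorial) that enumerates every (center, context) pair for a skip_window of 1 over a toy sequence:

toy_seq = ['anarchism', 'originated', 'as', 'a', 'term']
window = 1  # same role as skip_window above

for pos, center in enumerate(toy_seq):
    for offset in range(-window, window + 1):
        context_pos = pos + offset
        if offset != 0 and 0 <= context_pos < len(toy_seq):
            print(center, '->', toy_seq[context_pos])
# e.g. originated -> anarchism, originated -> as, as -> originated, as -> a, ...

generate_batch produces the same kind of pairs, except that it samples num_skips of them per center word (without repeats) rather than emitting all of them.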

Step 4: Build and train a skip-gram model.


In [49]:
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.

# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64    # Number of negative examples to sample.
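
Because valid_examples is drawn at random from the 100 lowest IDs, it consists of very frequent words (possibly including UNK). To see which words were picked on this run, you could map the IDs back through reverse_dictionary (a one-line sketch; the result changes each time the cell above is re-run):

[reverse_dictionary[i] for i in valid_examples]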

In [50]:
# TensorFlow setup
graph = tf.Graph()

with graph.as_default():

    # Input data
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Ops and variables pinned to the CPU because of missing GPU implementation
    with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)

        # Construct the variables for the NCE loss
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

    # Compute the average NCE loss for the batch.
    # tf.nce_loss automatically draws a new sample of the negative labels each
    # time we evaluate the loss.
    loss = tf.reduce_mean(
      tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
                     num_sampled, vocabulary_size))

    # Construct the SGD optimizer using a learning rate of 1.0.
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

    # Compute the cosine similarity between minibatch examples and all embeddings.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(
      normalized_embeddings, valid_dataset)
    similarity = tf.matmul(
      valid_embeddings, normalized_embeddings, transpose_b=True)

    # Add variable initializer.
    init = tf.initialize_all_variables()

Step 5: Begin training.


In [51]:
num_steps = 1  # Bump this up to 100001 for a real training run.

with tf.Session(graph=graph) as session:
    # We must initialize all variables before we use them.
    init.run()
    print("Initialized")

    average_loss = 0
    for step in range(num_steps):
        batch_inputs, batch_labels = generate_batch(
            batch_size, num_skips, skip_window)
        feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}

        # We perform one update step by evaluating the optimizer op (including it
        # in the list of returned values for session.run()).
        _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += loss_val

        if step % 2000 == 0:
            if step > 0:
                average_loss /= 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print("Average loss at step ", step, ": ", average_loss)
            average_loss = 0

        # Note that this is expensive (~20% slowdown if computed every 500 steps)
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in range(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8 # number of nearest neighbors
                nearest = (-sim[i, :]).argsort()[1:top_k+1]
                log_str = "Nearest to '%s':" % valid_word
                for k in range(top_k):
                    close_word = reverse_dictionary[nearest[k]]
                    log_str = "%s %s," % (log_str, close_word)
                print(log_str)
    final_embeddings = normalized_embeddings.eval()


Initialized
Average loss at step  0 :  290.268981934
Nearest to 'after': mescaline, courland, tank, lx, tuamotu, turbines, compassionate, worsen,
Nearest to 'into': thatcher, rigorously, resolution, indulgence, mehmet, loretta, blockading, vinland,
Nearest to 'on': begs, ebbinghaus, petsamo, humorists, byrd, depose, melodramatic, beyond,
Nearest to 'they': avl, snatch, hamlets, boasting, ellen, hbox, vied, cultures,
Nearest to 'these': mobygames, wai, wedgwood, ziggy, magus, completes, typhus, exchangeable,
Nearest to 'two': beheld, hs, escalating, expectation, withstood, upwelling, nad, shattered,
Nearest to 'be': liquid, ifvs, pastry, stylization, lepidus, annalen, beatrice, crossing,
Nearest to 'war': protective, roskilde, petri, lion, propyl, lise, sheryl, policeman,
Nearest to 'have': disparities, subdivisions, municipal, quieter, slaying, fdp, derrick, level,
Nearest to 'see': gi, dawson, interwoven, sha, meditation, mia, paws, fees,
Nearest to 'new': definitive, nightclubs, gallant, composers, kruger, marines, reinvention, tove,
Nearest to 'would': hide, bye, prefect, kjv, sensory, valid, cheated, ticonderoga,
Nearest to 'four': attempts, corning, godwin, materialistic, graceful, tac, neighbour, logo,
Nearest to 'UNK': icty, busing, acme, fusion, essai, haitians, theorizing, observational,
Nearest to 'with': fortnight, reaction, ill, ravine, odi, oecd, crewed, datatype,
Nearest to 'only': supp, steak, despotism, antonin, pbk, smoking, exhorts, lcd,
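
With num_steps set to 1, the neighbors above are essentially random; they only become meaningful after a full training run. Once training has finished, final_embeddings (already L2-normalized) lives outside the graph, so you can query nearest neighbors for any vocabulary word with plain NumPy. A minimal sketch, assuming the query word (here 'history', an illustrative choice) is in the 50,000-word vocabulary:

def nearest_neighbors(word, k=8):
    """Return the k words whose embeddings are closest to `word` by cosine similarity."""
    idx = dictionary[word]
    sims = final_embeddings @ final_embeddings[idx]  # rows are unit length, so a dot product is a cosine similarity
    nearest = (-sims).argsort()[1:k + 1]             # position 0 is the word itself, so skip it
    return [reverse_dictionary[i] for i in nearest]

nearest_neighbors('history')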

Step 6: Visualize the embeddings.

We'll use t-SNE.

t-SNE is a cool way to visualize high-dimensional datasets by reducing the number of dimensions (here, from 128 down to 2) while keeping similar points close together.


In [52]:
tsne = TSNE(perplexity=30, 
            n_components=2, 
            init='pca', 
            n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [reverse_dictionary[i] for i in range(plot_only)]

n_words_to_visualize = 40

for i, label in enumerate(labels[:n_words_to_visualize]):
    x, y = low_dim_embs[i, :]
    plt.scatter(x, y)
    plt.annotate(label,
                 xy=(x, y),
                 xytext=(5, 2),
                 textcoords='offset points',
                 ha='right',
                 va='bottom')


What do you see?

How would you describe the relationships?


Let's render and save more samples.


In [53]:
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
    assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
    plt.figure(figsize=(18, 18))  #in inches
    for i, label in enumerate(labels):
        x, y = low_dim_embs[i,:]
        plt.scatter(x, y)
        plt.annotate(label,
                     xy=(x, y),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')

    plt.savefig(filename)

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [reverse_dictionary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels)


Take a look at the saved embeddings. Are they good?

Are there any surprises?

What could we do to improve the embeddings?


If you are feeling adventurous, check out the complete TensorFlow version of word2vec.


Summary

  • You need a lot of data to get good word2vec embeddings
  • The training takes some time
  • TensorFlow simplifies the model fitting of word2vec
  • You should always visually inspect your results





Bonus Material

Run Google's TensorFlow code


In [4]:
# %run word2vec_basic.py

Let's look at the updated embeddings: