Deep Learning

Assignment 5

The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.


In [1]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE



Download the data from the source website if necessary.


In [2]:
url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
  if not os.path.exists(filename):
    filename, _ = urlretrieve(url + filename, filename)
  statinfo = os.stat(filename)
  if statinfo.st_size == expected_bytes:
    print('Found and verified %s' % filename)
  else:
    print(statinfo.st_size)
    raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
  return filename

filename = maybe_download('text8.zip', 31344016)


Found and verified text8.zip

Read the data into a list of words.


In [3]:
def read_data(filename):
  """Extract the first file enclosed in a zip file as a list of words"""
  with zipfile.ZipFile(filename) as f:
    data = tf.compat.as_str(f.read(f.namelist()[0])).split()
  return data
  
words = read_data(filename)
print('Data size %d' % len(words))


Data size 17005207

Build the dictionary and replace rare words with UNK token.


In [4]:
vocabulary_size = 50000

def build_dataset(words):
  count = [['UNK', -1]]
  count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
  dictionary = dict()
  for word, _ in count:
    dictionary[word] = len(dictionary)
  data = list()
  unk_count = 0
  for word in words:
    if word in dictionary:
      index_into_count = dictionary[word]
    else:
      index_into_count = 0  # dictionary['UNK']
      unk_count = unk_count + 1
    data.append(index_into_count)
  count[0][1] = unk_count
  reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
  return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words  # Hint to reduce memory.


Most common words (+UNK) [['UNK', 418391], ('the', 1061396), ('of', 593677), ('and', 416629), ('one', 411764)]
Sample data [5239, 3084, 12, 6, 195, 2, 3137, 46, 59, 156]

Function to generate a training batch for the skip-gram model.


In [5]:
data_index = 0

def generate_batch(batch_size, num_skips, skip_window):
  global data_index
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1 # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size // num_skips):
    target = skip_window  # target label at the center of the buffer
    targets_to_avoid = [ skip_window ]
    for j in range(num_skips):
      while target in targets_to_avoid:
        target = random.randint(0, span - 1)
      targets_to_avoid.append(target)
      batch[i * num_skips + j] = buffer[skip_window]
      labels[i * num_skips + j, 0] = buffer[target]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  return batch, labels

print('data:', [reverse_dictionary[di] for di in data[:8]])

for num_skips, skip_window in [(2, 1), (4, 2)]:
    data_index = 0
    batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
    print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
    print('    batch:', [reverse_dictionary[bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(8)])


data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first']

with num_skips = 2 and skip_window = 1:
    batch: ['originated', 'originated', 'as', 'as', 'a', 'a', 'term', 'term']
    labels: ['anarchism', 'as', 'originated', 'a', 'as', 'term', 'of', 'a']

with num_skips = 4 and skip_window = 2:
    batch: ['as', 'as', 'as', 'as', 'a', 'a', 'a', 'a']
    labels: ['originated', 'term', 'anarchism', 'a', 'as', 'of', 'originated', 'term']

Train a skip-gram model.


In [6]:
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):

  # Input data.
  train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
  
  # Variables.
  embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
  softmax_weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                         stddev=1.0 / math.sqrt(embedding_size)))
  softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
  
  # Model.
  # Look up embeddings for inputs.
  embed = tf.nn.embedding_lookup(embeddings, train_dataset)
  # Compute the softmax loss, using a sample of the negative labels each time.
  loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
                               train_labels, num_sampled, vocabulary_size))
  
  # Optimizer.
  # Note: The optimizer will optimize the softmax_weights AND the embeddings.
  # This is because the embeddings are defined as a variable quantity and the
  # optimizer's `minimize` method will by default modify all variable quantities 
  # that contribute to the tensor it is passed.
  # See docs on `tf.train.Optimizer.minimize()` for more details.
  optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
  
  # Compute the similarity between minibatch examples and all embeddings.
  # We use the cosine distance:
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(
    normalized_embeddings, valid_dataset)
  similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
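
The note in the cell above says minimize() updates every trainable variable that feeds the loss, embeddings included. If you instead wanted to train only the softmax parameters and keep the embeddings fixed, minimize() accepts a var_list argument (a sketch, not used in this assignment):

In [ ]:
# Hypothetical variant: restrict updates to the softmax parameters so the
# embeddings stay frozen. Built inside the same graph as the loss above.
with graph.as_default():
  optimizer_frozen = tf.train.AdagradOptimizer(1.0).minimize(
      loss, var_list=[softmax_weights, softmax_biases])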

In [7]:
num_steps = 100001

with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  print('Initialized')
  average_loss = 0
  for step in range(num_steps):
    batch_data, batch_labels = generate_batch(
      batch_size, num_skips, skip_window)
    feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
    _, l = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += l
    if step % 2000 == 0:
      if step > 0:
        average_loss = average_loss / 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step %d: %f' % (step, average_loss))
      average_loss = 0
    # note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 10000 == 0:
      sim = similarity.eval()
      for i in range(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8 # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k+1]
        log = 'Nearest to %s:' % valid_word
        for k in range(top_k):
          close_word = reverse_dictionary[nearest[k]]
          log = '%s %s,' % (log, close_word)
        print(log)
  final_embeddings = normalized_embeddings.eval()


Initialized
Average loss at step 0: 8.092085
Nearest to states: iscariot, thousand, roadmap, fita, underbelly, cooper, harappa, cosmography,
Nearest to its: topless, callaghan, flagellates, letterboxing, poured, perrin, imperfect, concatenation,
Nearest to UNK: locates, suitable, anatomical, surmised, circe, lazy, counting, taney,
Nearest to can: anoint, abhidhamma, valiantly, hillside, sustains, catches, hannover, asteraceae,
Nearest to seven: kazimierz, aborigines, abdominal, ramayana, agrarianism, scavenging, hesse, lesley,
Nearest to for: holistic, loyalty, travers, eutyches, hermitian, wells, metallurgical, thrive,
Nearest to the: imprecise, wreath, suborders, dotted, divorce, overlay, warring, plosives,
Nearest to eight: iridescent, showings, pictorial, amor, wellington, plectrum, mayfield, dorne,
Nearest to people: colorful, seeded, robin, hilberg, felines, sinister, chisinau, tours,
Nearest to it: marv, catenary, jaffa, flame, observable, macbeth, subdued, wardrobe,
Nearest to and: puente, pastry, akiva, ola, prepositions, neologisms, conceiving, psi,
Nearest to this: archivist, winston, rzeczypospolitej, hypothesis, israelite, fmd, inflate, procure,
Nearest to more: achaeans, honey, sara, hemingway, magnesians, superb, capitalists, quarters,
Nearest to see: delved, shrimp, magdalen, buddhism, serapeum, voltage, install, lithuanian,
Nearest to new: reestablishment, vito, illa, venezuela, monotonous, emphatic, parr, battista,
Nearest to i: slim, conceals, magnetometer, amending, vidal, partner, spires, niger,
Average loss at step 2000: 4.362636
Average loss at step 4000: 3.869744
Average loss at step 6000: 3.788122
Average loss at step 8000: 3.689819
Average loss at step 10000: 3.607537
Nearest to states: underbelly, iscariot, muon, irving, anglian, teleology, misinterpretation, aids,
Nearest to its: the, their, his, andre, rejuvenated, topless, imprecise, sverdlovsk,
Nearest to UNK: khaldun, arf, naturalists, gash, cluny, stanford, flanders, cleanliness,
Nearest to can: may, must, would, evoking, compensatory, mormons, baptists, markup,
Nearest to seven: six, eight, three, nine, four, five, zero, two,
Nearest to for: in, of, with, cliques, baccalaureate, intensified, and, saucers,
Nearest to the: its, a, this, his, their, an, tsunami, latham,
Nearest to eight: six, nine, five, seven, four, three, zero, two,
Nearest to people: colorful, leno, abide, robin, seeded, sinister, screwed, different,
Nearest to it: he, this, there, which, they, not, arid, that,
Nearest to and: or, mathcal, american, noir, he, s, whey, of,
Nearest to this: it, the, which, polonia, kayaking, a, zy, coda,
Nearest to more: hemingway, achaeans, superb, dysprosium, less, not, hanford, slovenians,
Nearest to see: monkey, racers, travelogue, h, proprietary, colonna, rufus, delved,
Nearest to new: vito, battista, nahuatl, gestis, cobra, singaporean, dries, relentlessly,
Nearest to i: we, you, matti, vidal, slim, minamoto, garments, sought,
Average loss at step 12000: 3.609063
Average loss at step 14000: 3.566319
Average loss at step 16000: 3.411614
Average loss at step 18000: 3.454830
Average loss at step 20000: 3.538693
Nearest to states: muon, irving, aquifers, underbelly, sejm, teleology, fourteen, aids,
Nearest to its: their, the, his, sverdlovsk, andre, rejuvenated, her, another,
Nearest to UNK: gash, moniker, huggins, mr, nilotic, funnel, theologian, berlitz,
Nearest to can: may, would, must, will, might, should, could, cannot,
Nearest to seven: eight, six, five, nine, four, three, zero, two,
Nearest to for: of, cliques, in, by, epa, neuch, julian, mcneil,
Nearest to the: its, their, his, this, a, imprecise, quay, signaling,
Nearest to eight: seven, six, nine, four, five, three, two, zero,
Nearest to people: leno, sinister, colorful, nurses, cuyahoga, robin, screwed, abide,
Nearest to it: he, this, there, which, they, she, not, batavia,
Nearest to and: or, but, while, in, from, slipped, mathcal, which,
Nearest to this: it, which, the, a, any, that, magen, rossini,
Nearest to more: less, most, dysprosium, arid, achaeans, hemingway, stop, not,
Nearest to see: monkey, proprietary, bluebeard, h, travelogue, racers, ets, monism,
Nearest to new: battista, vito, gestis, carniola, benevolent, reestablishment, chalice, assr,
Nearest to i: ii, kidnappings, mondegreens, minamoto, we, troop, g, slim,
Average loss at step 22000: 3.501102
Average loss at step 24000: 3.487492
Average loss at step 26000: 3.484972
Average loss at step 28000: 3.478751
Average loss at step 30000: 3.504336
Nearest to states: aquifers, irving, kingdom, apartheid, nations, cisc, muon, fourteen,
Nearest to its: their, the, his, her, sverdlovsk, any, another, extant,
Nearest to UNK: tarquinius, ari, donald, cytoplasm, wehrmacht, gash, animists, quotations,
Nearest to can: may, would, must, will, could, might, should, cannot,
Nearest to seven: nine, six, four, eight, five, three, two, one,
Nearest to for: with, nmr, epa, accepted, cliques, saucers, pohnpei, including,
Nearest to the: their, its, his, our, a, sheng, lachlan, any,
Nearest to eight: nine, six, seven, four, five, three, zero, two,
Nearest to people: leno, countries, contemplated, nurses, sinister, screwed, baptize, children,
Nearest to it: he, there, this, she, they, which, batavia, also,
Nearest to and: or, sassanian, but, denaturation, who, cashmere, microtubule, in,
Nearest to this: it, which, any, what, that, incumbents, not, rossini,
Nearest to more: less, most, better, gasoline, very, hemingway, extremely, achaeans,
Nearest to see: monkey, ets, bluebeard, proprietary, diels, manila, sula, constructs,
Nearest to new: gestis, vito, battista, cobra, quorum, major, assr, carniola,
Nearest to i: ii, we, kidnappings, they, g, t, you, minamoto,
Average loss at step 32000: 3.497522
Average loss at step 34000: 3.491208
Average loss at step 36000: 3.450717
Average loss at step 38000: 3.301454
Average loss at step 40000: 3.425423
Nearest to states: kingdom, nations, aquifers, countries, irving, sejm, cisc, cpc,
Nearest to its: their, his, her, the, ateneo, sverdlovsk, rejuvenated, hemlock,
Nearest to UNK: n, b, walter, greatly, khalid, keeper, unifying, quaker,
Nearest to can: may, will, must, would, could, might, should, cannot,
Nearest to seven: five, six, eight, nine, four, three, zero, two,
Nearest to for: in, including, with, of, epa, pohnpei, while, fluffy,
Nearest to the: a, this, its, their, his, each, bandleader, some,
Nearest to eight: nine, seven, six, four, five, three, zero, one,
Nearest to people: leno, scholars, nurses, languages, critics, experts, baptize, crackdown,
Nearest to it: he, there, she, this, they, batavia, which, what,
Nearest to and: or, but, including, in, somber, cytokines, nikolay, like,
Nearest to this: which, it, that, the, what, another, rossini, viciously,
Nearest to more: less, most, better, very, extremely, temptations, greater, gasoline,
Nearest to see: monkey, list, diels, ets, ogre, bfbs, condensate, kemerovo,
Nearest to new: gestis, vito, reference, battista, executed, earns, assr, rink,
Nearest to i: we, ii, you, they, t, he, g, minamoto,
Average loss at step 42000: 3.434284
Average loss at step 44000: 3.454989
Average loss at step 46000: 3.450317
Average loss at step 48000: 3.350161
Average loss at step 50000: 3.380651
Nearest to states: kingdom, nations, aquifers, irving, transshipment, canidae, sejm, fourteen,
Nearest to its: their, his, the, her, ateneo, sverdlovsk, another, rejuvenated,
Nearest to UNK: la, et, salon, al, heracleidae, khaldun, de, com,
Nearest to can: may, would, must, could, will, might, should, cannot,
Nearest to seven: six, eight, nine, four, three, five, zero, two,
Nearest to for: while, locate, saucers, epa, pohnpei, including, lomond, manipulative,
Nearest to the: its, their, this, his, quay, a, nesmith, any,
Nearest to eight: six, seven, nine, four, five, three, one, two,
Nearest to people: leno, scholars, men, mitotic, children, critics, baptize, duffy,
Nearest to it: he, there, she, they, this, still, companionship, batavia,
Nearest to and: or, but, while, including, geographic, monolingual, of, touring,
Nearest to this: which, it, the, another, he, tonnes, majorities, pontus,
Nearest to more: less, most, very, better, extremely, arid, merrimack, bracelets,
Nearest to see: monkey, reputations, include, bluebeard, list, ets, containing, ogre,
Nearest to new: gestis, reference, rink, assr, basement, elan, vito, special,
Nearest to i: we, ii, you, they, g, topple, t, iff,
Average loss at step 52000: 3.437191
Average loss at step 54000: 3.428425
Average loss at step 56000: 3.435758
Average loss at step 58000: 3.391363
Average loss at step 60000: 3.394956
Nearest to states: kingdom, nations, countries, canidae, irving, elections, sejm, transshipment,
Nearest to its: their, his, the, her, pathway, ateneo, our, fancher,
Nearest to UNK: et, bantu, mudd, literally, valentino, des, cleanliness, mvs,
Nearest to can: may, could, must, would, will, might, should, cannot,
Nearest to seven: six, eight, four, five, nine, three, one, zero,
Nearest to for: of, pohnpei, including, blog, in, eutyches, epa, ejection,
Nearest to the: its, a, their, any, quay, some, every, petrochemical,
Nearest to eight: nine, six, four, seven, five, three, zero, one,
Nearest to people: scholars, leno, men, children, critics, others, countries, experts,
Nearest to it: he, there, this, she, what, companionship, still, they,
Nearest to and: or, but, although, while, including, usable, than, see,
Nearest to this: which, it, that, there, majorities, atomism, the, a,
Nearest to more: less, most, better, very, larger, rather, extremely, greater,
Nearest to see: include, monkey, containing, bluebeard, list, jardines, kinnear, but,
Nearest to new: rink, assr, gestis, reference, lowest, printing, battista, sq,
Nearest to i: we, you, ii, they, g, t, boito, everton,
Average loss at step 62000: 3.239573
Average loss at step 64000: 3.248174
Average loss at step 66000: 3.404684
Average loss at step 68000: 3.390529
Average loss at step 70000: 3.360296
Nearest to states: kingdom, nations, countries, state, transshipment, sejm, fourteen, viol,
Nearest to its: their, his, the, her, ghaznavid, sverdlovsk, compression, andre,
Nearest to UNK: mudd, salon, luck, vases, wikipedia, wrinkled, khalid, rivest,
Nearest to can: may, could, would, must, will, might, cannot, should,
Nearest to seven: six, eight, four, nine, five, zero, three, one,
Nearest to for: pohnpei, in, including, cliques, forgeries, fluffy, constitutes, manipulative,
Nearest to the: their, any, its, this, a, each, these, another,
Nearest to eight: nine, six, seven, four, five, zero, three, two,
Nearest to people: children, men, scholars, others, leno, critics, baptize, crackdown,
Nearest to it: he, she, there, this, they, still, batavia, we,
Nearest to and: or, but, while, monolingual, of, than, like, which,
Nearest to this: it, which, the, that, another, there, he, what,
Nearest to more: less, most, very, better, extremely, larger, rather, smaller,
Nearest to see: include, list, monkey, containing, but, reputations, bluebeard, bfbs,
Nearest to new: rink, assr, amdahl, battista, gestis, wreckage, printing, tennis,
Nearest to i: we, you, ii, g, thurgood, they, boito, topple,
Average loss at step 72000: 3.372821
Average loss at step 74000: 3.350133
Average loss at step 76000: 3.313731
Average loss at step 78000: 3.346594
Average loss at step 80000: 3.381728
Nearest to states: kingdom, nations, state, transshipment, sejm, fourteen, mutualism, abdallah,
Nearest to its: their, her, his, the, our, your, ghaznavid, jingles,
Nearest to UNK: itis, el, producer, purporting, theologian, memes, r, l,
Nearest to can: could, may, will, must, would, might, cannot, should,
Nearest to seven: eight, six, five, four, nine, three, zero, one,
Nearest to for: pohnpei, fluffy, including, after, potent, forgeries, ebu, cliques,
Nearest to the: its, a, this, their, his, quay, petrochemical, morpheus,
Nearest to eight: six, seven, nine, four, five, three, zero, one,
Nearest to people: men, children, scholars, others, critics, leno, individuals, women,
Nearest to it: he, there, she, this, they, never, not, still,
Nearest to and: or, including, but, revolting, than, while, theatrically, ablaze,
Nearest to this: it, which, the, another, what, itself, there, majorities,
Nearest to more: less, very, most, better, extremely, rather, larger, smaller,
Nearest to see: include, monkey, bluebeard, includes, list, fulltext, effendi, alessandri,
Nearest to new: reference, battista, gestis, wreckage, qualitative, assr, printing, alabama,
Nearest to i: you, we, ii, g, iii, boito, amalric, topple,
Average loss at step 82000: 3.407842
Average loss at step 84000: 3.407591
Average loss at step 86000: 3.387259
Average loss at step 88000: 3.351482
Average loss at step 90000: 3.360822
Nearest to states: kingdom, nations, fourteen, abdallah, transshipment, state, countries, encrypting,
Nearest to its: their, his, her, the, our, sverdlovsk, whose, kilts,
Nearest to UNK: mr, stratford, keynes, cac, nat, ilium, johnston, seasonal,
Nearest to can: could, may, must, will, would, might, should, cannot,
Nearest to seven: five, eight, nine, four, six, three, one, zero,
Nearest to for: including, after, pohnpei, forgeries, suprema, if, seismology, when,
Nearest to the: its, their, quay, his, any, this, a, petrochemical,
Nearest to eight: seven, nine, five, six, four, three, zero, two,
Nearest to people: children, men, women, scholars, critics, individuals, person, writers,
Nearest to it: he, she, there, this, they, itself, batavia, never,
Nearest to and: or, but, including, while, although, universala, geelong, which,
Nearest to this: it, which, another, majorities, any, the, what, some,
Nearest to more: less, very, most, better, extremely, larger, rather, highly,
Nearest to see: list, include, monkey, references, consonantal, ets, includes, companionship,
Nearest to new: wreckage, reference, alabama, qualitative, gestis, risings, old, missionaries,
Nearest to i: we, you, g, ii, r, iii, boito, jumbo,
Average loss at step 92000: 3.399357
Average loss at step 94000: 3.247228
Average loss at step 96000: 3.360328
Average loss at step 98000: 3.243456
Average loss at step 100000: 3.361214
Nearest to states: kingdom, nations, transshipment, abdallah, state, irving, countries, aquifers,
Nearest to its: their, his, her, the, our, whose, sverdlovsk, drug,
Nearest to UNK: avalanche, shark, duarte, sinuous, discredited, priestess, buchanan, r,
Nearest to can: may, could, must, will, would, should, might, cannot,
Nearest to seven: eight, nine, six, five, four, zero, three, two,
Nearest to for: when, including, suprema, after, epa, polyalphabetic, if, despite,
Nearest to the: your, his, their, its, a, our, sheng, some,
Nearest to eight: seven, nine, six, five, four, three, two, zero,
Nearest to people: children, critics, men, scholars, individuals, countries, women, speakers,
Nearest to it: he, she, there, this, they, never, what, batavia,
Nearest to and: or, but, while, although, when, including, like, who,
Nearest to this: which, it, another, what, he, some, enver, itself,
Nearest to more: less, most, very, better, extremely, greater, larger, quite,
Nearest to see: list, references, includes, monkey, bluebeard, include, consonantal, can,
Nearest to new: reference, wreckage, special, different, rorty, alabama, qualitative, logbook,
Nearest to i: we, you, ii, they, iii, aldous, g, he,

In [8]:
num_points = 400

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
# Why Euclidean distance here, and not cosine? Because the rows of
# final_embeddings were L2-normalized above: for unit vectors
# ||u - v||^2 = 2 - 2 * (u . v), so Euclidean and cosine distance rank
# nearest neighbors identically.
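
A quick numerical check of that identity (a sketch; the rows of final_embeddings were L2-normalized in the training cell, so the two values should agree up to float error):

In [ ]:
# For unit vectors u and v: ||u - v||^2 = 2 - 2 * (u . v).
u, v = final_embeddings[0], final_embeddings[1]
print(np.sum((u - v) ** 2))    # squared Euclidean distance
print(2 - 2 * np.dot(u, v))    # 2 - 2 * cosine similarity -- same value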

In [9]:
def plot(embeddings, labels):
  assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
  pylab.figure(figsize=(15,15))  # in inches
  for i, label in enumerate(labels):
    x, y = embeddings[i,:]
    pylab.scatter(x, y)
    pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
                   ha='right', va='bottom')
  pylab.show()

words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)

Problem

An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. For example, with skip_window = 1 the context ['anarchism', 'as'] predicts the center word 'originated'. Implement and evaluate a CBOW model trained on the text8 dataset.



In [16]:
data_index = 0

def generate_batch(batch_size, skip_window):
  assert skip_window == 1 # Handling of this value is hard-coded here.
  global data_index
  batch  = np.ndarray(shape=(batch_size, 2), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2*skip_window + 1 # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size):
    target = skip_window  # target label at the center of the buffer
    batch[i, 0] = buffer[skip_window-1]
    batch[i, 1] = buffer[skip_window+1]
    labels[i, 0] = buffer[target]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  return batch, labels

print('data:', [reverse_dictionary[di] for di in data[:8]])

for skip_window in [1]:
    data_index = 0
    batch, labels = generate_batch(batch_size=8, skip_window=skip_window)
    print('\nwith skip_window = %d:' % skip_window)
    print('    batch:', [[reverse_dictionary[m] for m in bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(8)])


data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first']

with skip_window = 1:
    batch: [['anarchism', 'as'], ['originated', 'a'], ['as', 'term'], ['a', 'of'], ['term', 'abuse'], ['of', 'first'], ['abuse', 'used'], ['first', 'against']]
    labels: ['originated', 'as', 'a', 'term', 'of', 'abuse', 'first', 'used']
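
The generator above hard-codes skip_window == 1. A generalized version (a sketch, assuming the CBOW context is all 2*skip_window surrounding words) could look like this:

In [ ]:
def generate_batch_any_window(batch_size, skip_window):
  """CBOW batch for any skip_window: row i of batch holds the
  2*skip_window context word ids surrounding the i-th center word."""
  global data_index
  span = 2 * skip_window + 1  # [ skip_window target skip_window ]
  batch = np.ndarray(shape=(batch_size, span - 1), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size):
    # The context is every position in the window except the center.
    batch[i, :] = [buffer[j] for j in range(span) if j != skip_window]
    labels[i, 0] = buffer[skip_window]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  return batch, labels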

In [22]:
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):

  # Input data.
  span = 2*skip_window + 1 # [ skip_window target skip_window ]
  train_dataset = tf.placeholder(tf.int32, shape=[batch_size, (span-1)])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
  
  # Variables.
  embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
  softmax_weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                         stddev=1.0 / math.sqrt(embedding_size)))
  softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
  
  # Model.
  # Look up embeddings for inputs.
  assert skip_window == 1 # Handling of this value is hard-coded here.
  embed0 = tf.nn.embedding_lookup(embeddings, train_dataset[:,0])
  embed1 = tf.nn.embedding_lookup(embeddings, train_dataset[:,1])
  embed = (embed0 + embed1)/(span-1)
  # Compute the softmax loss, using a sample of the negative labels each time.
  loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
                               train_labels, num_sampled, vocabulary_size))
  
  # Optimizer.
  optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
  
  # Compute the similarity between minibatch examples and all embeddings.
  # We use the cosine distance:
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(
    normalized_embeddings, valid_dataset)
  similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))

In [23]:
num_steps = 100001

with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  print('Initialized')
  average_loss = 0
  for step in range(num_steps):
    batch_data, batch_labels = generate_batch(
      batch_size, skip_window)
    feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
    _, l = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += l
    if step % 2000 == 0:
      if step > 0:
        average_loss = average_loss / 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step %d: %f' % (step, average_loss))
      average_loss = 0
    # note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 10000 == 0:
      sim = similarity.eval()
      for i in range(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8 # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k+1]
        log = 'Nearest to %s:' % valid_word
        for k in range(top_k):
          close_word = reverse_dictionary[nearest[k]]
          log = '%s %s,' % (log, close_word)
        print(log)
  final_embeddings = normalized_embeddings.eval()


Initialized
Average loss at step 0: 7.845082
Nearest to however: serviced, farina, harley, miranda, tanning, bubbled, hodges, fairies,
Nearest to known: castes, nursia, mediation, gymnasts, serials, bradshaw, scientist, blow,
Nearest to after: ghidorah, oyly, gau, microtubules, austronesian, tfl, rehearsing, patan,
Nearest to for: alsace, goofy, nomadic, daggers, sep, diatonic, shrank, autocorrelation,
Nearest to nine: necklaces, dosage, lockport, question, trisomy, manoeuvre, lori, experimenter,
Nearest to six: characterise, resignation, chiba, pentafluoride, firms, apostolic, aged, jaspers,
Nearest to only: apocalypse, saving, planet, leben, mnt, crowd, soriano, archaically,
Nearest to up: unorganized, detachments, moesia, contractual, ayyavazhi, gaians, idol, helene,
Nearest to some: mountaineering, hilbert, reina, america, counterintelligence, boilers, emmylou, setting,
Nearest to most: representational, lite, lindsay, squaw, villains, extensively, eclipses, accumulated,
Nearest to many: confusing, bedside, axons, bale, reappearance, shores, talkative, mitford,
Nearest to was: solutions, synchronize, characters, satsuma, judith, archetypical, played, micah,
Nearest to see: whipping, nashwaak, pinky, proportional, merci, declassified, communism, valign,
Nearest to used: cossack, aerials, adolphe, colombians, krause, reservation, assistive, soundtrack,
Nearest to states: inexpensively, runoff, decaying, balkan, kc, unleashing, enclave, totality,
Nearest to their: presume, calibration, clearly, ali, constraint, handicapped, knees, algeria,
Average loss at step 2000: 4.031571
Average loss at step 4000: 3.511246
Average loss at step 6000: 3.345694
Average loss at step 8000: 3.195586
Average loss at step 10000: 3.144050
Nearest to however: betrayed, serviced, armstrong, where, around, made, jockey, chicken,
Nearest to known: castes, nursia, eaten, serials, such, foss, phoned, pianos,
Nearest to after: before, synodic, though, motherhood, existent, runway, roxy, giving,
Nearest to for: into, conflicted, milking, cma, exhibited, pessoa, platoon, fraternity,
Nearest to nine: eight, six, seven, five, zero, four, three, two,
Nearest to six: eight, nine, five, three, seven, four, zero, two,
Nearest to only: farmed, irradiation, concordance, riefenstahl, fools, contarini, bead, prefixes,
Nearest to up: moesia, idol, detachments, gaians, springtime, helene, detention, refinements,
Nearest to some: many, cordillera, any, several, the, mcduck, america, mountaineering,
Nearest to most: lite, more, tobin, theseus, moonlight, bond, subsidised, aviation,
Nearest to many: some, these, confusing, inherit, those, theseus, encoder, other,
Nearest to was: is, has, were, had, phylogenetics, became, be, ferus,
Nearest to see: research, axe, whipping, nsw, declassified, specialising, waldemar, agates,
Nearest to used: krause, current, continued, oswaldo, diseases, boots, held, cvs,
Nearest to states: inexpensively, runoff, kc, enclave, totality, bigfoot, optic, retiring,
Nearest to their: his, its, the, this, constraint, her, labial, ammianus,
Average loss at step 12000: 3.171129
Average loss at step 14000: 3.125909
Average loss at step 16000: 3.150987
Average loss at step 18000: 3.102135
Average loss at step 20000: 2.988063
Nearest to however: but, serviced, betrayed, where, cryonics, that, although, armstrong,
Nearest to known: eaten, castes, foss, such, serials, itself, regarded, nursia,
Nearest to after: before, until, though, giving, motherhood, during, dara, rehearsing,
Nearest to for: shenandoah, csicop, curtin, conflicted, cma, ibook, exhibited, constans,
Nearest to nine: eight, six, seven, four, five, zero, three, pedagogic,
Nearest to six: eight, five, seven, four, nine, three, zero, two,
Nearest to only: farmed, irradiation, transverse, crowd, always, prefixes, contarini, rage,
Nearest to up: moesia, detachments, him, mordor, masked, lipid, helene, incline,
Nearest to some: many, these, any, several, cordillera, chymotrypsin, most, all,
Nearest to most: more, lite, moonlight, superb, subsidised, dbms, theseus, greenwich,
Nearest to many: some, these, various, those, several, confusing, inherit, apportioned,
Nearest to was: is, has, were, had, became, be, ferus, been,
Nearest to see: include, but, oncoming, declassified, vitality, research, specialising, hein,
Nearest to used: held, cvs, seen, continued, oswaldo, published, krause, carrier,
Nearest to states: inexpensively, kingdom, runoff, kc, cbs, anything, actress, welding,
Nearest to their: its, his, her, the, constraint, whose, various, reznor,
Average loss at step 22000: 3.050668
Average loss at step 24000: 3.023262
Average loss at step 26000: 2.988972
Average loss at step 28000: 3.015502
Average loss at step 30000: 2.991117
Nearest to however: but, that, although, where, serviced, betrayed, picker, osorio,
Nearest to known: eaten, foss, regarded, such, serials, castes, required, itself,
Nearest to after: before, until, during, when, though, giving, within, lieberman,
Nearest to for: shenandoah, without, cma, ibook, curtin, csicop, diatonic, iom,
Nearest to nine: eight, seven, six, four, five, zero, three, two,
Nearest to six: four, eight, five, seven, nine, three, two, zero,
Nearest to only: farmed, always, hirohito, still, disturbed, irradiation, flips, otherwise,
Nearest to up: him, detachments, moesia, down, out, masked, lipid, mordor,
Nearest to some: many, several, these, any, most, various, all, the,
Nearest to most: more, some, greenwich, senator, dbms, many, superb, subsidised,
Nearest to many: some, several, various, these, those, inherit, locale, other,
Nearest to was: is, became, had, has, were, be, been, being,
Nearest to see: include, but, oncoming, vitality, peacekeepers, specialising, hein, according,
Nearest to used: held, seen, found, published, written, reported, continued, cvs,
Nearest to states: kingdom, inexpensively, countries, runoff, uses, anything, bigfoot, horses,
Nearest to their: its, his, her, the, whose, our, constraint, wandered,
Average loss at step 32000: 2.838615
Average loss at step 34000: 2.944799
Average loss at step 36000: 2.935265
Average loss at step 38000: 2.943349
Average loss at step 40000: 2.917834
Nearest to however: but, although, that, picker, serviced, where, overall, osorio,
Nearest to known: eaten, regarded, defined, foss, required, used, seen, castes,
Nearest to after: before, during, until, when, though, giving, heliocentric, lieberman,
Nearest to for: cma, csicop, shenandoah, curtin, falklands, without, codec, modeled,
Nearest to nine: eight, seven, six, five, four, three, zero, two,
Nearest to six: five, four, eight, seven, three, nine, zero, two,
Nearest to only: farmed, always, hirohito, easily, flips, substitute, disturbed, still,
Nearest to up: down, out, him, detachments, off, moesia, mordor, incline,
Nearest to some: many, several, various, these, most, any, all, certain,
Nearest to most: more, some, many, less, greenwich, blended, locale, dbms,
Nearest to many: some, several, various, those, numerous, these, most, inherit,
Nearest to was: is, became, has, were, had, be, been, being,
Nearest to see: include, oncoming, but, peacekeepers, vitality, pound, axe, according,
Nearest to used: found, seen, held, written, known, reported, cvs, published,
Nearest to states: kingdom, inexpensively, countries, nations, buildings, anything, schlick, archaeologist,
Nearest to their: its, his, her, the, our, whose, your, facet,
Average loss at step 42000: 2.934740
Average loss at step 44000: 2.943604
Average loss at step 46000: 2.905441
Average loss at step 48000: 2.871509
Average loss at step 50000: 2.845164
Nearest to however: but, although, that, picker, though, especially, administer, where,
Nearest to known: eaten, regarded, used, defined, such, foss, required, seen,
Nearest to after: before, during, when, until, without, though, despite, lieberman,
Nearest to for: without, shenandoah, curtin, cma, csicop, iom, during, codec,
Nearest to nine: eight, seven, six, five, four, zero, three, two,
Nearest to six: eight, seven, five, four, nine, three, zero, two,
Nearest to only: always, farmed, either, easily, substitute, still, otherwise, disturbed,
Nearest to up: out, down, him, off, detachments, destry, mordor, moesia,
Nearest to some: many, several, any, these, various, certain, most, all,
Nearest to most: more, less, many, some, greenwich, locale, dbms, superb,
Nearest to many: some, several, various, those, these, numerous, most, all,
Nearest to was: is, became, had, were, has, been, be, becomes,
Nearest to see: include, oncoming, pound, axe, but, asterisk, montag, according,
Nearest to used: seen, found, held, known, reported, written, depicted, responsible,
Nearest to states: kingdom, nations, countries, inexpensively, buildings, us, state, isaurian,
Nearest to their: its, his, her, our, your, whose, the, these,
Average loss at step 52000: 2.882532
Average loss at step 54000: 2.864678
Average loss at step 56000: 2.856049
Average loss at step 58000: 2.759572
Average loss at step 60000: 2.823049
Nearest to however: but, although, that, though, especially, tango, while, picker,
Nearest to known: defined, eaten, regarded, seen, used, such, required, foss,
Nearest to after: before, when, during, until, without, despite, though, lieberman,
Nearest to for: without, csicop, cma, shenandoah, rejuvenation, curtin, iom, conflicted,
Nearest to nine: eight, seven, six, five, four, zero, three, two,
Nearest to six: seven, eight, four, five, nine, three, zero, two,
Nearest to only: always, either, otherwise, farmed, still, easily, flips, substitute,
Nearest to up: out, down, off, him, destry, detachments, popolo, back,
Nearest to some: many, several, any, most, these, certain, various, this,
Nearest to most: more, some, less, many, greenwich, locale, dbms, blended,
Nearest to many: some, several, various, those, numerous, these, all, few,
Nearest to was: is, became, has, had, were, be, becomes, ferus,
Nearest to see: include, but, oncoming, andries, according, refers, mande, asterisk,
Nearest to used: held, found, seen, reported, depicted, written, known, required,
Nearest to states: kingdom, nations, countries, state, buildings, schlick, isaurian, inexpensively,
Nearest to their: its, his, her, our, your, whose, the, my,
Average loss at step 62000: 2.829922
Average loss at step 64000: 2.776876
Average loss at step 66000: 2.768646
Average loss at step 68000: 2.715828
Average loss at step 70000: 2.777806
Nearest to however: but, although, though, that, while, especially, picker, administer,
Nearest to known: defined, regarded, eaten, used, such, required, seen, believed,
Nearest to after: before, during, when, until, without, despite, though, while,
Nearest to for: without, csicop, shenandoah, cma, during, curtin, conflicted, titian,
Nearest to nine: eight, seven, six, five, four, three, zero, two,
Nearest to six: seven, eight, five, nine, four, three, zero, two,
Nearest to only: always, farmed, either, easily, kristina, irradiation, initially, flips,
Nearest to up: out, off, down, him, destry, back, together, detachments,
Nearest to some: many, several, certain, these, various, numerous, most, any,
Nearest to most: more, less, some, many, greenwich, especially, dbms, blended,
Nearest to many: some, several, various, numerous, those, these, all, few,
Nearest to was: is, became, has, were, had, be, becomes, been,
Nearest to see: include, but, oncoming, asterisk, known, according, andries, hosts,
Nearest to used: seen, found, reported, held, depicted, appears, known, described,
Nearest to states: kingdom, nations, countries, state, us, buildings, isaurian, pedestrians,
Nearest to their: its, his, her, our, your, whose, the, my,
Average loss at step 72000: 2.798618
Average loss at step 74000: 2.650664
Average loss at step 76000: 2.803110
Average loss at step 78000: 2.821768
Average loss at step 80000: 2.781568
Nearest to however: but, although, though, administer, especially, while, that, picker,
Nearest to known: regarded, defined, used, eaten, required, seen, believed, such,
Nearest to after: before, when, during, without, despite, until, while, lieberman,
Nearest to for: csicop, without, during, after, when, despite, curtin, shenandoah,
Nearest to nine: eight, six, seven, five, four, three, zero, two,
Nearest to six: eight, five, seven, nine, four, three, zero, two,
Nearest to only: either, farmed, always, easily, kristina, initially, flips, irradiation,
Nearest to up: down, out, off, him, back, together, destry, forth,
Nearest to some: many, several, certain, most, any, these, various, those,
Nearest to most: more, many, some, less, greenwich, especially, locale, orkney,
Nearest to many: some, several, various, numerous, those, most, few, all,
Nearest to was: is, became, has, were, had, becomes, been, be,
Nearest to see: include, but, oncoming, external, includes, mande, andries, hosts,
Nearest to used: depicted, seen, found, reported, known, designed, described, required,
Nearest to states: nations, kingdom, countries, state, buildings, schlick, isaurian, archaeologist,
Nearest to their: its, his, her, our, your, whose, the, my,
Average loss at step 82000: 2.675634
Average loss at step 84000: 2.771019
Average loss at step 86000: 2.743354
Average loss at step 88000: 2.758232
Average loss at step 90000: 2.743511
Nearest to however: but, although, though, that, while, especially, administer, picker,
Nearest to known: regarded, defined, eaten, called, used, seen, required, such,
Nearest to after: before, during, when, without, despite, until, while, though,
Nearest to for: csicop, without, cma, shenandoah, curtin, conflicted, falklands, constans,
Nearest to nine: eight, seven, six, four, five, three, zero, two,
Nearest to six: eight, seven, four, five, nine, three, zero, two,
Nearest to only: either, always, kristina, farmed, disturbed, flips, easily, hirohito,
Nearest to up: down, off, out, him, back, together, forth, them,
Nearest to some: many, several, certain, various, most, these, any, numerous,
Nearest to most: more, many, less, some, greenwich, especially, locale, various,
Nearest to many: some, several, various, numerous, those, most, few, both,
Nearest to was: is, has, became, had, becomes, were, be, treasured,
Nearest to see: include, but, oncoming, list, andries, includes, external, palazzo,
Nearest to used: depicted, found, seen, reported, responsible, designed, described, available,
Nearest to states: nations, countries, kingdom, state, buildings, schlick, us, isaurian,
Nearest to their: its, his, her, our, your, whose, my, the,
Average loss at step 92000: 2.685750
Average loss at step 94000: 2.729221
Average loss at step 96000: 2.708846
Average loss at step 98000: 2.385564
Average loss at step 100000: 2.420342
Nearest to however: but, although, though, while, that, especially, tango, occur,
Nearest to known: regarded, defined, eaten, used, seen, called, required, believed,
Nearest to after: before, when, during, without, despite, while, though, until,
Nearest to for: without, after, csicop, cma, conflicted, during, sulfuric, flick,
Nearest to nine: eight, seven, six, five, four, three, zero, two,
Nearest to six: seven, four, five, nine, eight, three, zero, two,
Nearest to only: either, kristina, always, farmed, easily, hirohito, jacquard, irradiation,
Nearest to up: down, off, out, back, him, together, forth, them,
Nearest to some: many, several, certain, various, any, most, these, numerous,
Nearest to most: more, less, especially, greenwich, some, many, locale, blended,
Nearest to many: some, several, various, numerous, those, few, all, both,
Nearest to was: is, had, became, were, has, be, wrote, been,
Nearest to see: include, external, oncoming, list, familia, includes, refer, andries,
Nearest to used: depicted, found, seen, described, known, reported, perceived, designed,
Nearest to states: nations, kingdom, countries, state, us, buildings, sociolinguistics, isaurian,
Nearest to their: its, his, her, our, your, whose, my, the,
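
One way to evaluate the CBOW embeddings is to project them with t-SNE, mirroring the skip-gram visualization above (a sketch reusing the plot() helper and the CBOW final_embeddings):

In [ ]:
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)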

In [ ]: