CS 224D Assignment #2

Part 2: Recurrent Neural Networks

This notebook will provide starter code, testing snippets, and additional guidance for implementing the Recurrent Neural Network Language Model (RNNLM) described in Part 2 of the handout.

Please complete parts (a), (b), and (c) of Part 2 before beginning this notebook.


In [1]:
import sys, os
from numpy import *
import matplotlib
from matplotlib.pyplot import *
%matplotlib inline
matplotlib.rcParams['savefig.dpi'] = 100

%load_ext autoreload
%autoreload 2

(e): Implement a Recurrent Neural Network Language Model

Follow the instructions on the handout to implement your model in rnnlm.py, then use the code below to test.


In [2]:
from rnnlm import RNNLM
# Gradient check on toy data, for speed
random.seed(10)
wv_dummy = random.randn(10,50)
model = RNNLM(L0 = wv_dummy, U0 = wv_dummy,
              alpha=0.005, rseed=10, bptt=4)
model.grad_check(array([1,2,3]), array([2,3,4]))

Prepare Vocabulary and Load PTB Data

We've prepared a list of the vocabulary in the Penn Treebank, along with each word's absolute count and unigram frequency. The document loader code below will "canonicalize" words, replace anything outside the kept vocabulary with the "UUUNKKK" token, and then convert the data to lists of word indices.


In [3]:
from data_utils import utils as du
import pandas as pd

# Load the vocabulary
vocab = pd.read_table("data/lm/vocab.ptb.txt", header=None, sep=r"\s+",
                      index_col=0, names=['count', 'freq'])

# Choose how many top words to keep
vocabsize = 2000
num_to_word = dict(enumerate(vocab.index[:vocabsize]))
word_to_num = du.invert_dict(num_to_word)
##
# Below needed for 'adj_loss': DO NOT CHANGE
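# fraction_lost = fraction of non-UUUNKKK tokens whose word type fell outside
# the top `vocabsize` words (the loader will map all of these to UUUNKKK)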
fraction_lost = float(sum([vocab['count'][word] for word in vocab.index
                           if (not word in word_to_num) 
                               and (not word == "UUUNKKK")]))
fraction_lost /= sum([vocab['count'][word] for word in vocab.index
                      if (not word == "UUUNKKK")])
print "Retained %d words from %d (%.02f%% of all tokens)" % (vocabsize, len(vocab),
                                                             100*(1-fraction_lost))
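
To make the UUUNKKK fallback and index conversion concrete, here is a small, purely illustrative check (it assumes, as should be the case, that UUUNKKK and the sentence markers <s> and </s> survive the vocabulary cut):


In [ ]:
# Illustrative only: any word outside the kept vocabulary falls back to UUUNKKK.
sent = ["<s>", "the", "flibbertigibbet", "</s>"]  # made-up out-of-vocabulary word
print [word_to_num.get(w, word_to_num["UUUNKKK"]) for w in sent]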

Load the datasets using the vocabulary in word_to_num. Our starter code handles this for you, and also generates lists of lists X and Y, corresponding to input words and target words.

(Of course, the target words are just the input words, shifted by one position, but it can be cleaner and less error-prone to keep them separate.)
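
Concretely, if a sentence is the index list s = [w0, w1, w2, w3], then X gets s[:-1] = [w0, w1, w2] and Y gets s[1:] = [w1, w2, w3], so each input word is paired with the word the model should predict next.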


In [4]:
# Load the training set
docs = du.load_dataset('data/lm/ptb-train.txt')
S_train = du.docs_to_indices(docs, word_to_num)
X_train, Y_train = du.seqs_to_lmXY(S_train)

# Load the dev set (for tuning hyperparameters)
docs = du.load_dataset('data/lm/ptb-dev.txt')
S_dev = du.docs_to_indices(docs, word_to_num)
X_dev, Y_dev = du.seqs_to_lmXY(S_dev)

# Load the test set (final evaluation only)
docs = du.load_dataset('data/lm/ptb-test.txt')
S_test = du.docs_to_indices(docs, word_to_num)
X_test, Y_test = du.seqs_to_lmXY(S_test)

# Display some sample data
print " ".join(d[0] for d in docs[7])
print S_test[7]

(f): Train and evaluate your model

Once your model passes the gradient check, let's run it on some real language!

You should randomly initialize the word vectors as Gaussian noise, i.e. $L_{ij} \sim \mathcal{N}(0,0.1)$ and $U_{ij} \sim \mathcal{N}(0,0.1)$; the function random.randn may be helpful here.

As in Part 1, you should tune hyperparameters to get a good model.


In [5]:
hdim = 100 # dimension of hidden layer = dimension of word vectors
random.seed(10)
L0 = zeros((vocabsize, hdim)) # replace with random init, 
                              # or do in RNNLM.__init__()
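# One possible random init (sketch only; do it here or inside RNNLM.__init__).
# Note: the handout's N(0, 0.1) may mean variance 0.1 -- if so, use
# sqrt(0.1) * random.randn(...) instead of 0.1 * random.randn(...).
# L0 = 0.1 * random.randn(vocabsize, hdim)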
# test parameters; you probably want to change these
model = RNNLM(L0, U0 = L0, alpha=0.1, rseed=10, bptt=1)

# Gradient check is going to take a *long* time here
# since it's quadratic-time in the number of parameters.
# run at your own risk... (but do check this!)
# model.grad_check(array([1,2,3]), array([2,3,4]))

In [6]:
#### YOUR CODE HERE ####

##
# Pare down to a smaller dataset, for speed
# (optional; for your final model you should train on the full set)
ntrain = len(Y_train)
X = X_train[:ntrain]
Y = Y_train[:ntrain]
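
# One possible training call (sketch only -- this assumes your RNNLM inherits
# the starter code's NNBase.train_sgd interface; adapt the names and arguments
# to whatever you actually implemented, and tune alpha/bptt on the dev set):
# idxiter = random.permutation(len(Y))   # one randomized pass over the data
# model.train_sgd(X, Y, idxiter=idxiter, printevery=1000, costevery=10000)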




#### END YOUR CODE ####

In [7]:
## Evaluate cross-entropy loss on the dev set,
## then convert to perplexity for your writeup
dev_loss = model.compute_mean_loss(X_dev, Y_dev)
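# Perplexity is just exp(cross-entropy), so it can be handy to print both here.
print "Dev loss: %.03f  (perplexity: %.03f)" % (dev_loss, exp(dev_loss))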

The performance of the model is skewed somewhat by the large number of UUUNKKK tokens; if these make up 1/6 of the dataset, that's a sizeable fraction we're just waving our hands at. Naively, the model gets credit for predicting these tokens that it doesn't really deserve; the formula below roughly removes this contribution from the average loss. Don't worry about how it's derived, but do report both scores; it helps us compare across models with different vocabulary sizes.


In [21]:
## DO NOT CHANGE THIS CELL ##
# Report your numbers, after computing dev_loss above.
def adjust_loss(loss, funk, q, mode='basic'):
    if mode == 'basic':
        # remove freebies only: score if had no UUUNKKK
        return (loss + funk*log(funk))/(1 - funk)
    else:
        # remove freebies, replace with best prediction on remaining
        return loss + funk*log(funk) - funk*log(q)
# q = best unigram frequency from omitted vocab
# this is the best expected loss out of that set
q = vocab.freq[vocabsize] / sum(vocab.freq[vocabsize:])
print "Unadjusted: %.03f" % exp(dev_loss)
print "Adjusted for missing vocab: %.03f" % exp(adjust_loss(dev_loss, fraction_lost, q))

Save Model Parameters


In [16]:
##
# Save to .npy files; should only be a few MB total
assert(min(model.sparams.L.shape) <= 100)  # hidden dimension shouldn't be too big
assert(max(model.sparams.L.shape) <= 5000) # vocab size shouldn't be too big
save("rnnlm.L.npy", model.sparams.L)
save("rnnlm.U.npy", model.params.U)
save("rnnlm.H.npy", model.params.H)

(g): Generating Data

Once you've trained your model to your satisfaction, let's use it to generate some sentences!

Implement the generate_sequence function in rnnlm.py, and call it below.


In [19]:
def seq_to_words(seq):
    return [num_to_word[s] for s in seq]
    
seq, J = model.generate_sequence(word_to_num["<s>"], 
                                 word_to_num["</s>"], 
                                 maxlen=100)
print J
# print seq
print " ".join(seq_to_words(seq))

BONUS: Use the unigram distribution given in the vocab table to fill in any UUUNKKK tokens in your generated sequences with words that we omitted from the vocabulary. You'll want to use list(vocab.index) to get a list of words, and vocab.freq to get a list of corresponding frequencies.


In [20]:
# Replace UUUNKKK with a random unigram,
# drawn from vocab that we skipped
from nn.math import MultinomialSampler, multinomial_sample
def fill_unknowns(words):
    #### YOUR CODE HERE ####
    ret = words # do nothing; replace this
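    # One possible approach (sketch only): sample replacements from the unigram
    # distribution over the words we cut from the vocabulary. This assumes
    # multinomial_sample(p) draws an index from the distribution p, as the
    # import above suggests; numpy's random.choice(unk_words, p=p) would also work.
    # unk_words = list(vocab.index[vocabsize:])
    # p = array(vocab.freq[vocabsize:]); p = p / p.sum()
    # ret = [unk_words[multinomial_sample(p)] if w == "UUUNKKK" else w
    #        for w in words]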
    

    #### END YOUR CODE ####
    return ret
    
print " ".join(fill_unknowns(seq_to_words(seq)))
