Anna KaRNNa

In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.


In [1]:
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf

First we'll load the text file and convert it into integers for our network to use.


In [2]:
with open('anna.txt', 'r') as f:
    text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

In [3]:
text[:100]


Out[3]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'

In [4]:
chars[:100]


Out[4]:
array([ 9, 35, 40, 33, 57, 45, 74, 54, 44, 71, 71, 71,  5, 40, 33, 33, 81,
       54, 32, 40,  0, 17, 15, 17, 45,  4, 54, 40, 74, 45, 54, 40, 15, 15,
       54, 40, 15, 17,  1, 45, 23, 54, 45, 70, 45, 74, 81, 54, 20, 29, 35,
       40, 33, 33, 81, 54, 32, 40,  0, 17, 15, 81, 54, 17,  4, 54, 20, 29,
       35, 40, 33, 33, 81, 54, 17, 29, 54, 17, 57,  4, 54, 41,  2, 29, 71,
        2, 40, 81, 11, 71, 71, 55, 70, 45, 74, 81, 57, 35, 17, 29], dtype=int32)

Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.

Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.

The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.


In [5]:
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    """ 
    Split character data into training and validation sets, inputs and targets for each set.
    
    Arguments
    ---------
    chars: character array
    batch_size: Size of examples in each of batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set
    
    
    Returns train_x, train_y, val_x, val_y
    """
    
    
    slice_size = batch_size * num_steps
    n_batches = int(len(chars) / slice_size)
    
    # Drop the last few characters to make only full batches
    x = chars[: n_batches*slice_size]
    y = chars[1: n_batches*slice_size + 1]
    
    # Split the data into batch_size slices, then stack them into a 2D matrix 
    x = np.stack(np.split(x, batch_size))
    y = np.stack(np.split(y, batch_size))
    
    # Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    
    # Split into training and validation sets, keep the virst split_frac batches for training
    split_idx = int(n_batches*split_frac)
    train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
    val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
    
    return train_x, train_y, val_x, val_y

In [6]:
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)

In [7]:
train_x.shape


Out[7]:
(10, 178400)

In [8]:
train_x[:,:10]


Out[8]:
array([[ 9, 35, 40, 33, 57, 45, 74, 54, 44, 71],
       [ 6, 29, 18, 54, 35, 45, 54,  0, 41, 70],
       [54, 30, 40, 57, 30, 35, 17, 29, 52, 54],
       [41, 57, 35, 45, 74, 54,  2, 41, 20, 15],
       [54, 57, 35, 45, 54, 15, 40, 29, 18, 56],
       [54, 46, 35, 74, 41, 20, 52, 35, 54, 15],
       [57, 54, 57, 41, 71, 18, 41, 11, 71, 71],
       [41, 54, 35, 45, 74,  4, 45, 15, 32, 76],
       [35, 40, 57, 54, 17,  4, 54, 57, 35, 45],
       [45, 74,  4, 45, 15, 32, 54, 40, 29, 18]], dtype=int32)

I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.


In [9]:
def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape
    
    n_batches = int(slice_size/num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]

In [10]:
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
              learning_rate=0.001, grad_clip=5, sampling=False):
        
    if sampling == True:
        batch_size, num_steps = 1, 1

    tf.reset_default_graph()
    
    # Declare placeholders we'll feed into the graph
    
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')


    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
    y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
    
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    
    # Build the RNN layers
    
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)

    initial_state = cell.zero_state(batch_size, tf.float32)

    # Run the data through the RNN layers
    rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
    outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
    
    final_state = tf.identity(state, name='final_state')
    
    # Reshape output so it's a bunch of rows, one row for each cell output
    
    seq_output = tf.concat(outputs, axis=1,name='seq_output')
    output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
    
    # Now connect the RNN putputs to a softmax layer and calculate the cost
    softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
                           name='softmax_w')
    softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
    logits = tf.matmul(output, softmax_w) + softmax_b

    preds = tf.nn.softmax(logits, name='predictions')
    
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
    cost = tf.reduce_mean(loss, name='cost')

    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))

    # Export the nodes 
    export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
                    'keep_prob', 'cost', 'preds', 'optimizer']
    Graph = namedtuple('Graph', export_nodes)
    local_dict = locals()
    graph = Graph(*[local_dict[each] for each in export_nodes])
    
    return graph

Hyperparameters

Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.


In [11]:
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001

Write out the graph for TensorBoard


In [12]:
model = build_rnn(len(vocab),
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=lstm_size,
                  num_layers=num_layers)

with tf.Session() as sess:
    
    sess.run(tf.global_variables_initializer())
    file_writer = tf.summary.FileWriter('./logs/1', sess.graph)

Training

Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.


In [13]:
!mkdir -p checkpoints/anna

In [14]:
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)

model = build_rnn(len(vocab), 
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=lstm_size,
                  num_layers=num_layers)

saver = tf.train.Saver(max_to_keep=100)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/anna20.ckpt')
    
    n_batches = int(train_x.shape[1]/num_steps)
    iterations = n_batches * epochs
    for e in range(epochs):
        
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
            iteration = e*n_batches + b
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: 0.5,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], 
                                                 feed_dict=feed)
            loss += batch_loss
            end = time.time()
            print('Epoch {}/{} '.format(e+1, epochs),
                  'Iteration {}/{}'.format(iteration, iterations),
                  'Training loss: {:.4f}'.format(loss/b),
                  '{:.4f} sec/batch'.format((end-start)))
        
            
            if (iteration%save_every_n == 0) or (iteration == iterations):
                # Check performance, notice dropout has been set to 1
                val_loss = []
                new_state = sess.run(model.initial_state)
                for x, y in get_batch([val_x, val_y], num_steps):
                    feed = {model.inputs: x,
                            model.targets: y,
                            model.keep_prob: 1.,
                            model.initial_state: new_state}
                    batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
                    val_loss.append(batch_loss)

                print('Validation loss:', np.mean(val_loss),
                      'Saving checkpoint!')
                saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))


Epoch 1/1  Iteration 1/178 Training loss: 4.4180 4.1024 sec/batch
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-049284c61b0f> in <module>()
     33                     model.initial_state: new_state}
     34             batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], 
---> 35                                                  feed_dict=feed)
     36             loss += batch_loss
     37             end = time.time()

/home/leiming/anaconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/leiming/anaconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    912     # Validate and process feed_dict.
    913     if feed_dict:
--> 914       feed_dict = nest.flatten_dict_items(feed_dict)
    915       for feed, feed_val in feed_dict.items():
    916         for subfeed, subfeed_val in _feed_fn(feed, feed_val):

/home/leiming/anaconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/tensorflow/python/util/nest.py in flatten_dict_items(dictionary)
    185             "Could not flatten dictionary. Key had %d elements, but value had "
    186             "%d elements. Key: %s, value: %s."
--> 187             % (len(flat_i), len(flat_v), flat_i, flat_v))
    188       for new_i, new_v in zip(flat_i, flat_v):
    189         if new_i in flat_dictionary:

ValueError: Could not flatten dictionary. Key had 4 elements, but value had 1 elements. Key: [<tf.Tensor 'zeros:0' shape=(100, 512) dtype=float32>, <tf.Tensor 'zeros_1:0' shape=(100, 512) dtype=float32>, <tf.Tensor 'zeros_2:0' shape=(100, 512) dtype=float32>, <tf.Tensor 'zeros_3:0' shape=(100, 512) dtype=float32>], value: [array([[[[  2.86697894e-02,  -5.10910153e-03,  -1.98138016e-03, ...,
           -1.58672687e-02,  -6.64659962e-02,  -1.81783214e-02],
         [ -3.96342063e-03,  -6.89288750e-02,   1.00940457e-02, ...,
           -1.27626136e-02,   2.73811235e-03,  -1.22035928e-02],
         [  5.30566499e-02,  -4.61589638e-03,   4.29103822e-02, ...,
            1.38960686e-03,  -4.37773764e-04,  -1.55963805e-02],
         ..., 
         [  3.44972238e-02,  -3.01060826e-02,  -1.08117247e-02, ...,
            9.52159241e-03,   2.55398778e-03,   5.14930114e-04],
         [  4.52914089e-02,  -1.30921043e-02,   4.05563861e-02, ...,
           -4.15220484e-02,  -2.13628393e-02,  -4.00927961e-02],
         [  3.60463653e-03,   1.22420490e-02,   3.28983739e-02, ...,
            4.23125131e-03,  -1.72241535e-02,  -1.60153601e-02]],

        [[  1.45985372e-02,  -2.56750593e-03,  -1.00777752e-03, ...,
           -8.02794937e-03,  -3.37034576e-02,  -9.22572799e-03],
         [ -1.93809974e-03,  -3.38789150e-02,   5.08370064e-03, ...,
           -6.48310408e-03,   1.39243645e-03,  -6.14049239e-03],
         [  2.64454912e-02,  -2.32524099e-03,   2.17813551e-02, ...,
            7.03043479e-04,  -2.13736930e-04,  -7.60009279e-03],
         ..., 
         [  1.67636815e-02,  -1.49564287e-02,  -5.48222847e-03, ...,
            4.76719532e-03,   1.27645989e-03,   2.54478073e-04],
         [  2.27012262e-02,  -6.51515555e-03,   2.03002430e-02, ...,
           -2.12148484e-02,  -1.07016200e-02,  -1.97726116e-02],
         [  1.81230437e-03,   6.27639983e-03,   1.64407752e-02, ...,
            2.07970617e-03,  -8.45650025e-03,  -7.97568634e-03]]],


       [[[  1.84987485e-02,  -2.35856269e-02,  -5.92927821e-03, ...,
           -1.41464267e-02,  -1.62357945e-04,  -1.32902805e-02],
         [ -1.88082259e-03,   1.11094788e-02,  -2.51391297e-03, ...,
            9.32754669e-03,   2.03387961e-02,   8.47234973e-04],
         [ -2.28405744e-02,  -2.92753726e-02,  -1.56428888e-02, ...,
           -7.78405089e-03,  -1.26364753e-02,  -3.69471568e-03],
         ..., 
         [ -2.22901045e-03,  -1.20795127e-02,  -2.51087500e-03, ...,
           -4.55584377e-05,   4.18073544e-03,   2.69784033e-02],
         [  8.32579564e-03,  -1.17952684e-02,  -9.40148346e-03, ...,
            8.75993725e-03,  -3.11325677e-02,  -8.44056066e-03],
         [  6.92128297e-03,   1.70341227e-02,  -6.80788746e-03, ...,
            2.02331915e-02,  -2.57781036e-02,  -1.01484032e-02]],

        [[  9.31584742e-03,  -1.18595418e-02,  -2.97191367e-03, ...,
           -7.12001976e-03,  -8.08914847e-05,  -6.64519100e-03],
         [ -9.49673238e-04,   5.62221836e-03,  -1.25023106e-03, ...,
            4.63232864e-03,   1.02165770e-02,   4.24776081e-04],
         [ -1.15149785e-02,  -1.46569759e-02,  -7.77793257e-03, ...,
           -3.92548740e-03,  -6.24509761e-03,  -1.82784547e-03],
         ..., 
         [ -1.10984803e-03,  -6.06776029e-03,  -1.25748210e-03, ...,
           -2.28484114e-05,   2.09222548e-03,   1.33908959e-02],
         [  4.18542186e-03,  -5.88641269e-03,  -4.69004549e-03, ...,
            4.39291541e-03,  -1.54262576e-02,  -4.22227941e-03],
         [  3.46391369e-03,   8.54856800e-03,  -3.37907462e-03, ...,
            1.00530097e-02,  -1.28550725e-02,  -5.02844015e-03]]]], dtype=float32)].

In [ ]:
tf.train.get_checkpoint_state('checkpoints/anna')

Sampling

Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.

The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.


In [ ]:
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

In [ ]:
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    prime = "Far"
    samples = [c for c in prime]
    model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state], 
                                         feed_dict=feed)

        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])

        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state], 
                                         feed_dict=feed)

            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])
        
    return ''.join(samples)

In [ ]:
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)

In [ ]:
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

In [ ]:
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

In [ ]:
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)