A Network Tour of Data Science

      Xavier Bresson, Winter 2016/17

Assignment 3: Recurrent Neural Networks


In [121]:
# Import libraries
import tensorflow as tf
import numpy as np
import collections
import os

In [122]:
# Load text data
data = open(os.path.join('datasets', 'text_ass_6.txt'), 'r').read() # must be simple plain text file
print('Text data:',data)
chars = list(set(data))
print('\nSingle characters:',chars)
data_len, vocab_size = len(data), len(chars)
print('\nText data has %d characters, %d unique.' % (data_len, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
print('\nMapping characters to numbers:',char_to_ix)
print('\nMapping numbers to characters:',ix_to_char)


Text data: hello world! is a very simple program in most programming languages often used to illustrate the basic syntax of a programming language

Single characters: ['g', 'n', 'e', '!', 'p', 'o', 'c', 'h', 'y', 'r', 'v', 'l', ' ', 'f', 'w', 'i', 't', 'd', 'm', 'a', 'b', 'x', 'u', 's']

Text data has 135 characters, 24 unique.

Mapping characters to numbers: {'o': 5, 'g': 0, ' ': 12, 'n': 1, 'w': 14, 'e': 2, 'i': 15, 't': 16, 'v': 10, 'u': 22, '!': 3, 'd': 17, 'p': 4, 'a': 19, 'b': 20, 'x': 21, 'c': 6, 'h': 7, 'm': 18, 'r': 9, 'l': 11, 'f': 13, 's': 23, 'y': 8}

Mapping numbers to characters: {0: 'g', 1: 'n', 2: 'e', 3: '!', 4: 'p', 5: 'o', 6: 'c', 7: 'h', 8: 'y', 9: 'r', 10: 'v', 11: 'l', 12: ' ', 13: 'f', 14: 'w', 15: 'i', 16: 't', 17: 'd', 18: 'm', 19: 'a', 20: 'b', 21: 'x', 22: 'u', 23: 's'}

Goal

The goal is to define with TensorFlow a vanilla recurrent neural network (RNN) model:

$$ \begin{aligned} h_t &= \textrm{tanh}(W_h h_{t-1} + W_x x_t + b_h)\\ y_t &= W_y h_t + b_y \end{aligned} $$

to predict a sequence of characters. $x_t \in \mathbb{R}^D$ is the input character of the RNN, one-hot encoded over a dictionary of size $D$. $y_t \in \mathbb{R}^D$ is the character predicted (through a distribution function) by the RNN system. $h_t \in \mathbb{R}^H$ is the memory of the RNN, called the hidden state at time $t$. Its dimensionality $H$ is chosen arbitrarily. The variables of the system are $W_h \in \mathbb{R}^{H\times H}$, $W_x \in \mathbb{R}^{H\times D}$, $W_y \in \mathbb{R}^{D\times H}$, $b_h \in \mathbb{R}^H$, and $b_y \in \mathbb{R}^D$.

The number of time steps of the RNN is $T$; that is, we learn sequences of data of length $T$: $x_t$ for $t=0,...,T-1$.
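To make the recurrence concrete, here is a minimal NumPy sketch of a single time step, following the column-vector convention of the equations above (the toy dimensions and variable values are illustrative, not part of the assignment):

import numpy as np

D, H = 24, 120                                      # dictionary size, hidden size (toy values)
Wh = 0.01 * np.identity(H)                          # hidden-to-hidden weights
Wx = np.random.randn(H, D) * np.sqrt(6 / (D + H))   # input-to-hidden weights
Wy = np.random.randn(D, H) * np.sqrt(6 / (D + H))   # hidden-to-output weights
bh, by = np.zeros(H), np.zeros(D)

h = np.zeros(H)                                     # initial hidden state h_{t=0}
x = np.zeros(D); x[7] = 1.0                         # one-hot input character x_t

h = np.tanh(Wh.dot(h) + Wx.dot(x) + bh)             # h_t = tanh(W_h h_{t-1} + W_x x_t + b_h)
y = Wy.dot(h) + by                                  # y_t = W_y h_t + b_y (unnormalized scores)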


In [123]:
# hyperparameters of RNN
batch_size = 3                                  # batch size
batch_len = data_len // batch_size              # batch length
T = 5                                           # temporal length
epoch_size = (batch_len - 1) // T               # nb of iterations to get one epoch (the -1 leaves room for the one-step-shifted targets)
D = vocab_size                                  # data dimension = nb of unique characters
H = 5*D                                         # size of hidden state, the memory layer

print('data_len=',data_len,' batch_size=',batch_size,' batch_len=',
      batch_len,' T=',T,' epoch_size=',epoch_size,' D=',D)


data_len= 135  batch_size= 3  batch_len= 45  T= 5  epoch_size= 8  D= 24

Step 1

Initialize input variables of the computational graph:
(1) xin of size batch_size x T x D and type tf.float32. Each input character is encoded as a one-hot vector of size D.
(2) Ytarget of size batch_size x T and type tf.int64. Each target character is encoded by a value in {0,...,D-1}.
(3) hin of size batch_size x H and type tf.float32.


In [124]:
# input variables of computational graph (CG)
xin = tf.placeholder(tf.float32, [batch_size, T, D])
Ytarget = tf.placeholder(tf.int64, [batch_size, T])
hin = tf.placeholder(tf.float32, [batch_size, H])

Step 2

Define the variables of the computational graph:
(1) $W_x$ is a random variable of shape D x H drawn from a normal distribution with variance $\frac{6}{D+H}$ (this matches the Xavier/Glorot initialization heuristic)
(2) $W_h$ is the identity matrix multiplied by the constant $0.01$
(3) $W_y$ is a random variable of shape H x D drawn from a normal distribution with variance $\frac{6}{D+H}$
(4) $b_h$ and $b_y$ are zero vectors of size H and D respectively


In [125]:
# Model variables
Wx = tf.Variable(tf.random_normal([D,H], mean=0.0, stddev=tf.sqrt(6/(D+H)), dtype=tf.float32))
Wh = tf.Variable(initial_value=0.01*np.identity(H), dtype=tf.float32)
Wy = tf.Variable(tf.random_normal([H,D], mean=0.0, stddev=tf.sqrt(6/(D+H)), dtype=tf.float32))
bh = tf.Variable(tf.zeros(H))
by = tf.Variable(tf.zeros(D))

Step 3

Implement the recursive formula:

$$ \begin{aligned} h_t &= \textrm{tanh}(W_h h_{t-1} + W_x x_t + b_h)\\ y_t &= W_y h_t + b_y \end{aligned} $$

with $h_{t=0}=hin$.

Hints:
(1) You may use the functions tf.split(), enumerate(), tf.squeeze(), tf.matmul(), tf.tanh(), tf.transpose(), list.append(), and tf.pack().
(2) You may use a matrix Y of shape batch_size x T x D. Recall that Ytarget has the shape batch_size x T.


In [126]:
# Vanilla RNN implementation
# Here we create the training graph: at each time step the known input is fed directly.
Y = []
ht = hin

Xt = tf.split(1, T, xin)    # T tensors of shape batch_size x 1 x D (TF 0.x argument order)

for i, xt in enumerate(Xt):
    xt = tf.squeeze(xt)                                      # batch_size x D
    ht = tf.tanh(tf.matmul(ht,Wh) + tf.matmul(xt,Wx) + bh)   # h_t = tanh(W_h h_{t-1} + W_x x_t + b_h)
    yt = tf.matmul(ht,Wy) + by                               # y_t = W_y h_t + b_y
    Y.append(yt)

Y = tf.pack(Y, axis=1)      # batch_size x T x D

print('Y=',Y.get_shape())
print('Ytarget=',Ytarget.get_shape())


Y= (3, 5, 24)
Ytarget= (3, 5)

Step 4

Perplexity loss is defined as the exponential of the average cross-entropy per time step:

$$ \textrm{perplexity} = \exp\left( \frac{1}{T} \sum_{t=0}^{T-1} H(\hat{y}_t, y_t) \right), \quad H(\hat{y}_t, y_t) = - \sum_{d=0}^{D-1} \hat{y}_{t,d} \log\big( \textrm{softmax}(y_t)_d \big), $$

where $\hat{y}_t$ is the one-hot encoding of the target character at time $t$.


In [127]:
# Perplexity
logits = tf.reshape(Y,[batch_size*T,D])   # flatten the predictions to (batch_size*T) x D
weights = tf.ones([batch_size*T])         # uniform weight for every sequence position
cross_entropy_perplexity = tf.nn.seq2seq.sequence_loss_by_example([logits],[Ytarget],[weights])
cross_entropy_perplexity = tf.reduce_sum(cross_entropy_perplexity) / batch_size
loss = cross_entropy_perplexity
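For intuition, the quantity optimized above is the average softmax cross-entropy over all sequence positions; the following NumPy sketch mirrors that computation under the same shapes (the helper names are illustrative, not the TensorFlow internals):

import numpy as np

def softmax(z):
    # subtract the rowwise max for numerical stability before exponentiating
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sequence_cross_entropy(logits, targets, batch_size):
    # logits: (batch_size*T, D) scores, targets: (batch_size*T,) integer labels
    probs = softmax(logits)
    nll = -np.log(probs[np.arange(len(targets)), targets])  # per-position negative log-likelihood
    return nll.sum() / batch_size

# An epoch-level perplexity is then np.exp(total_cost / total_time_steps),
# exactly as computed in the training loop of Step 7.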

Step 5

Implement the optimization of the loss function.

Hint: You may use function tf.train.GradientDescentOptimizer().


In [128]:
# Optimization
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

Step 6

Implement the prediction scheme: given an input character, e.g. "h", the RNN should predict the following characters, e.g. "ello".

Hints:
(1) You should use the learned RNN.
(2) You may use functions tf.one_hot(), tf.nn.softmax(), tf.argmax().


In [129]:
idx_pred = tf.placeholder(tf.int64)  # index of the input seed character
Ypred = []
ht = tf.zeros([1,H])                 # initial memory h_{t=0} = 0

xt = tf.one_hot(idx_pred, D, on_value=1.0, off_value=0.0, dtype=tf.float32)  # one-hot seed

for i in range(T):

    ht = tf.tanh(tf.matmul(ht,Wh) + tf.matmul(xt,Wx) + bh)
    yt_test = tf.matmul(ht,Wy) + by

    idx_yt = tf.argmax(yt_test, 1)   # greedy decoding: index of the most likely next character

    xt = tf.one_hot(idx_yt, D, 1.0, 0.0, dtype=tf.float32)  # feed the prediction back as next input
    Ypred.append(idx_yt)

Ypred = tf.convert_to_tensor(Ypred)  # shape T x 1

In [130]:
# Prepare train data matrix of size "batch_size x batch_len"
data_ix = [char_to_ix[ch] for ch in data[:data_len]]
train_data = np.array(data_ix)
print('original train set shape',train_data.shape)
train_data = np.reshape(train_data[:batch_size*batch_len], [batch_size,batch_len])
print('pre-processed train set shape',train_data.shape)


original train set shape (135,)
pre-processed train set shape (3, 45)

In [131]:
# The following function transforms an integer value d in {0,...,D-1} into a one-hot vector, that is a
# vector of dimension D which has value 1 at index d, and 0 otherwise
from scipy.sparse import coo_matrix
def convert_to_one_hot(a,max_val=None):
    N = a.size
    data = np.ones(N,dtype=int)
    sparse_out = coo_matrix((data,(np.arange(N),a.ravel())), shape=(N,max_val))
    return np.array(sparse_out.todense())
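A quick sanity check of this helper on arbitrary values:

# Each row is the one-hot encoding of the corresponding input value
print(convert_to_one_hot(np.array([0, 2, 1]), max_val=4))
# [[1 0 0 0]
#  [0 0 1 0]
#  [0 1 0 0]]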

Step 7

Run the computational graph with batches of training data.
Predict the sequence of characters starting from the character "h".

Hints:
(1) The initial memory $h_{t=0}$ is 0.
(2) Run the computational graph to optimize the perplexity loss, and to predict the sequence of characters starting from the character "h".


In [132]:
# Run CG
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
h0 = np.zeros([batch_size,H], dtype=float)   # initial memory h_{t=0} = 0
indices = collections.deque()
costs = 0.0; epoch_iters = 0
idx = char_to_ix['h']                        # seed character for the prediction
for n in range(50):

    # Batch extraction: refill the index queue at the start of each epoch
    if len(indices) < 1:
        indices.extend(range(epoch_size))
        costs = 0.0; epoch_iters = 0
    i = indices.popleft()
    batch_x = train_data[:,i*T:(i+1)*T]
    batch_x = convert_to_one_hot(batch_x,D)
    batch_x = np.reshape(batch_x,[batch_size,T,D])
    batch_y = train_data[:,i*T+1:(i+1)*T+1]  # targets = inputs shifted by one character
    loss_value,_,Ypredicted = sess.run([loss,train_step,Ypred], feed_dict={xin: batch_x, Ytarget: batch_y, hin: h0, idx_pred: [idx]})

    # Perplexity
    costs += loss_value
    epoch_iters += T
    perplexity = np.exp(costs/epoch_iters)

    if not n%1:
        txt = ''.join(ix_to_char[ix] for ix in list(Ypredicted[:,0]))
        print('\nn=',n,', perplexity value=',perplexity)
        print('starting char=',ix_to_char[idx], ', predicted sequences=',txt)

sess.close()


n= 0 , perplexity value= 23.6872702192
starting char= h , predicted sequences= ycxvl

n= 1 , perplexity value= 26.7329419914
starting char= h , predicted sequences= elloo

n= 2 , perplexity value= 26.9027156873
starting char= h , predicted sequences= ello 

n= 3 , perplexity value= 26.9634226204
starting char= h , predicted sequences= ellnn

n= 4 , perplexity value= 25.9725609885
starting char= h , predicted sequences= ell  

n= 5 , perplexity value= 24.2692911776
starting char= h , predicted sequences=      

n= 6 , perplexity value= 22.3688664804
starting char= h , predicted sequences= e pao

n= 7 , perplexity value= 21.689843211
starting char= h , predicted sequences= er gr

n= 8 , perplexity value= 8.19883195178
starting char= h , predicted sequences= el  a

n= 9 , perplexity value= 10.2143491041
starting char= h , predicted sequences= ello 

n= 10 , perplexity value= 10.0169961897
starting char= h , predicted sequences= ello 

n= 11 , perplexity value= 11.095749839
starting char= h , predicted sequences= elln 

n= 12 , perplexity value= 11.2429189353
starting char= h , predicted sequences= ella 

n= 13 , perplexity value= 10.5273687261
starting char= h , predicted sequences= e  a 

n= 14 , perplexity value= 9.65883961953
starting char= h , predicted sequences= el os

n= 15 , perplexity value= 9.41029128871
starting char= h , predicted sequences= el oo

n= 16 , perplexity value= 4.78727548009
starting char= h , predicted sequences= el us

n= 17 , perplexity value= 5.56246987926
starting char= h , predicted sequences= ello 

n= 18 , perplexity value= 5.46096966442
starting char= h , predicted sequences= ello 

n= 19 , perplexity value= 6.20036602103
starting char= h , predicted sequences= ello 

n= 20 , perplexity value= 6.33829522537
starting char= h , predicted sequences= ello 

n= 21 , perplexity value= 6.01225274799
starting char= h , predicted sequences= e  of

n= 22 , perplexity value= 5.63575878402
starting char= h , predicted sequences= ello 

n= 23 , perplexity value= 5.5578521916
starting char= h , predicted sequences= ello 

n= 24 , perplexity value= 3.15020916745
starting char= h , predicted sequences= ellun

n= 25 , perplexity value= 3.7076367112
starting char= h , predicted sequences= ello 

n= 26 , perplexity value= 3.64443733376
starting char= h , predicted sequences= ello 

n= 27 , perplexity value= 4.0764196004
starting char= h , predicted sequences= ello 

n= 28 , perplexity value= 4.13485683599
starting char= h , predicted sequences= ello 

n= 29 , perplexity value= 3.90559118064
starting char= h , predicted sequences= ello 

n= 30 , perplexity value= 3.67428217478
starting char= h , predicted sequences= ello 

n= 31 , perplexity value= 3.7094760448
starting char= h , predicted sequences= ello 

n= 32 , perplexity value= 2.39024591867
starting char= h , predicted sequences= ello 

n= 33 , perplexity value= 2.83207879823
starting char= h , predicted sequences= ello 

n= 34 , perplexity value= 2.78733486637
starting char= h , predicted sequences= ello 

n= 35 , perplexity value= 3.06903194797
starting char= h , predicted sequences= ello 

n= 36 , perplexity value= 3.0035398011
starting char= h , predicted sequences= ello 

n= 37 , perplexity value= 2.83347169229
starting char= h , predicted sequences= ello 

n= 38 , perplexity value= 2.67992859499
starting char= h , predicted sequences= ello 

n= 39 , perplexity value= 2.70209843581
starting char= h , predicted sequences= ello 

n= 40 , perplexity value= 1.89004013163
starting char= h , predicted sequences= ello 

n= 41 , perplexity value= 2.19761275376
starting char= h , predicted sequences= ello 

n= 42 , perplexity value= 2.17253552028
starting char= h , predicted sequences= ello 

n= 43 , perplexity value= 2.38502646182
starting char= h , predicted sequences= ello 

n= 44 , perplexity value= 2.3324132862
starting char= h , predicted sequences= ello 

n= 45 , perplexity value= 2.2347428007
starting char= h , predicted sequences= ello 

n= 46 , perplexity value= 2.13854602038
starting char= h , predicted sequences= ello 

n= 47 , perplexity value= 2.14417857953
starting char= h , predicted sequences= ello 

n= 48 , perplexity value= 1.690387996
starting char= h , predicted sequences= ello 

n= 49 , perplexity value= 1.87531294395
starting char= h , predicted sequences= ello 

In [ ]: