Hello and welcome. In this notebook, we will go over what Language Modelling is and build a Recurrent Neural Network model based on the Long Short-Term Memory (LSTM) unit, training and benchmarking it on the Penn Treebank. By the end of this notebook, you should understand how TensorFlow builds and executes an RNN model for Language Modelling.
By now, you should have an understanding of how Recurrent Networks work -- a specialized model to process sequential data by keeping track of the "state" or context. In this notebook, we go over a TensorFlow code snippet for creating a model focused on Language Modelling -- a very relevant task that is the cornerstone of many different linguistic problems such as Speech Recognition, Machine Translation and Image Captioning. For this, we will be using the Penn Treebank, which is an often-used dataset for benchmarking Language Modelling models.
Language Modelling, to put it simply, is the task of assigning probabilities to sequences of words. This means that, given a context of one or a few words in the language the model was trained on, the model should be able to tell which words or sequences of words are most probable next in the sentence. Language Modelling is one of the most important tasks in Natural Language Processing.
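To make this concrete, a language model factors the probability of a whole sentence into a product of next-word probabilities; this standard chain-rule decomposition is what the network below approximates, one time step at a time:
$$P(w_1, w_2, \ldots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \ldots, w_{t-1})$$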
In this example, one can see the predictions for the next word of a sentence, given the context "This is an". As you can see, this boils down to a sequential data analysis task -- you are given a word or a sequence of words (the input data), and, given the context (the state), you need to find out what is the next word (the prediction). This kind of analysis is very important for language-related tasks such as Speech Recognition, Machine Translation, Image Captioning, Text Correction and many other very relevant problems.
As the above image shows, Recurrent Network models fit this problem like a glove. Alongside LSTM and its capacity to maintain the model's state for over one thousand time steps, we have all the tools we need to undertake this problem. The goal for this notebook is to create a model that can reach low levels of perplexity on our desired dataset.
For Language Modelling problems, perplexity is the standard way to gauge performance. Perplexity is simply a measure of how well a probabilistic model predicts a sample. Put another way, low perplexity means a higher degree of confidence in the predictions the model makes. Therefore, the lower the perplexity, the better.
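Concretely, for N target words, perplexity is the exponential of the average negative log-probability the model assigns to them; this is exactly what the np.exp(costs / iters) expression used later in this notebook computes:
$$\text{Perplexity} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N} \ln p(w_i \mid w_1, \ldots, w_{i-1})\right)$$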
Historically, datasets big enough for Natural Language Processing have been hard to come by. This is in part because sentences need to be broken down and tagged with a high degree of correctness -- otherwise the models trained on them cannot be accurate. This means that we need a large amount of data, annotated by or at least corrected by humans. This is, of course, not an easy task at all.
The Penn Treebank, or PTB for short, is a dataset maintained by the University of Pennsylvania. It is huge -- there are over four million and eight hundred thousand annotated words in it, all corrected by humans. It is composed of many different sources, from abstracts of Department of Energy papers to texts from the Library of America. Since it is verifiably correct and of such a huge size, the Penn Treebank has been used time and time again as a benchmark dataset for Language Modelling.
The dataset is divided into different kinds of annotations, such as Part-of-Speech, syntactic and semantic skeletons. For this example, we will simply use a sample of clean, non-annotated words (with the exception of one tag, <unk>, which is used for rare words such as uncommon proper nouns). This means that we just want to predict what the next words will be, not what they mean in context or what their grammatical classes are in a given sentence.
For better processing, in this example we will make use of word embeddings, which are a way of representing words (or sentence structures) as n-dimensional vectors of real numbers, where n is a reasonably high number such as 200 or 500. Basically, we assign each word a randomly initialized vector and feed those vectors into the network to be processed. After a number of training iterations, these vectors are expected to take on values that help the network correctly predict what it needs to -- in our case, the probable next word in the sentence. This has been shown to be very effective in Natural Language Processing tasks, and is common practice.
$$Vec("Example") = [0.02, 0.00, 0.00, 0.92, 0.30,...]$$
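As a toy illustration of this idea (plain numpy, with a made-up four-word vocabulary that is not part of the Penn Treebank), each word id simply indexes a row of a randomly initialized matrix:

import numpy as np

n = 200                                                    # embedding dimensionality
toy_vocab = {"this": 0, "is": 1, "an": 2, "example": 3}    # made-up vocabulary
toy_embedding = np.random.uniform(-0.1, 0.1, (len(toy_vocab), n))   # one random vector per word

sentence = ["this", "is", "an", "example"]
sentence_vectors = toy_embedding[[toy_vocab[w] for w in sentence]]  # look up each word's vector
print(sentence_vectors.shape)   # (4, 200): one 200-dimensional vector per word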
Word embeddings tend to group similarly used words close together in the vector space. For example, if we use t-SNE (a dimensionality reduction algorithm often used for visualization) to flatten our vectors into a 2-dimensional space and use the words these vectors represent as their labels, we might see something like this:
As you can see, words that are frequently used together, in place of each other, or in the same positions tend to be grouped together -- the stronger these correlations, the closer they sit. For example, "None" is semantically close to "Zero", while a phrase that uses "Italy" could probably also fit "Germany" with little damage to the sentence structure. This kind of vectorial "closeness" for similar words is a great indicator of a well-built model.
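As a rough sketch of how such a picture could be produced (this is not part of this notebook's pipeline: it assumes a trained embedding matrix embedding_array of shape [vocab_size x hidden_size] and a word_to_id dictionary like the one the PTB reader builds internally, and it uses scikit-learn and matplotlib, which are not imported elsewhere in this notebook):

from sklearn.manifold import TSNE    # assumes scikit-learn is installed
import matplotlib.pyplot as plt      # assumes matplotlib is installed

def plot_embeddings(embedding_array, word_to_id, words_to_plot):
    # Keep only the words we actually have ids for
    words = [w for w in words_to_plot if w in word_to_id]
    vectors = embedding_array[[word_to_id[w] for w in words]]
    # Project the high-dimensional vectors down to 2-D
    # (t-SNE's "perplexity" parameter is unrelated to language-model perplexity)
    coords = TSNE(n_components=2, perplexity=min(5, len(words) - 1)).fit_transform(vectors)
    plt.figure(figsize=(6, 6))
    plt.scatter(coords[:, 0], coords[:, 1])
    for (x_coord, y_coord), w in zip(coords, words):
        plt.annotate(w, (x_coord, y_coord))
    plt.show()

After training, calling this with a handful of related words would show whether their vectors ended up close together.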
We need to import the necessary modules for our code: numpy and tensorflow, obviously, plus time and os. We also import the PTB reader helper module (downloaded below as penn_treebank_reader.py; it originally shipped with TensorFlow as tensorflow.models.rnn.ptb.reader), which takes care of reading the input data from the dataset we download further down. The RNN-building functions themselves are used through tf.contrib.rnn and tf.nn.dynamic_rnn.
If you want to learn more, take a look at https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/
In [1]:
import time
import numpy as np
import tensorflow as tf
import os
print('TensorFlow version: ', tf.__version__)
In [2]:
tf.reset_default_graph()
In [3]:
if not os.path.isfile('./penn_treebank_reader.py'):
    print('Downloading penn_treebank_reader.py...')
    !wget -q -O ../../data/Penn_Treebank/ptb.zip https://ibm.box.com/shared/static/z2yvmhbskc45xd2a9a4kkn6hg4g4kj5r.zip
    !unzip -o ../../data/Penn_Treebank/ptb.zip -d ../../data/Penn_Treebank
    !cp ../../data/Penn_Treebank/ptb/reader.py ./penn_treebank_reader.py
else:
    print('Using local penn_treebank_reader.py...')
In [4]:
import penn_treebank_reader as reader
In [5]:
if not os.path.isfile('../../data/Penn_Treebank/simple_examples.tgz'):
    !wget -O ../../data/Penn_Treebank/simple_examples.tgz http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
    !tar xzf ../../data/Penn_Treebank/simple_examples.tgz -C ../../data/Penn_Treebank/
Additionally, for the sake of making it easy to play around with the model's hyperparameters, we can declare them beforehand. Feel free to change these -- you will see a difference in performance each time you change those!
In [9]:
#Initial weight scale
init_scale = 0.1
#Initial learning rate
learning_rate = 1.0
#Maximum permissible norm for the gradient (For gradient clipping -- another measure against Exploding Gradients)
max_grad_norm = 5
#The number of layers in our model
num_layers = 2
#The total number of recurrence steps, i.e., the number of time steps when our RNN is "unfolded"
num_steps = 20
#The number of processing units (neurons) in the hidden layers
hidden_size = 200
#The maximum number of epochs trained with the initial learning rate
max_epoch = 4
#The total number of epochs in training
max_max_epoch = 13
#The probability for keeping data in the Dropout Layer (This is an optimization, but is outside our scope for this notebook!)
#At 1, we ignore the Dropout Layer wrapping.
keep_prob = 1
#The decay for the learning rate
decay = 0.5
#The size for each batch of data
batch_size = 30
#The size of our vocabulary
vocab_size = 10000
#Training flag to separate training from testing
is_training = 1
#Data directory for our dataset
data_dir = "../../data/Penn_Treebank/simple-examples/data/"
Some clarifications on the LSTM architecture, based on the arguments above (a shape walkthrough sketch follows this list):
Network structure: the model stacks num_layers = 2 LSTM layers, so the output of the first layer becomes the input of the second, and the network is unrolled for num_steps = 20 recurrence steps per training example.
Hidden layer: each LSTM layer has hidden_size = 200 hidden units, which is also the dimensionality of the word embeddings and of each layer's output.
Input layer: the input placeholder has shape [batch_size, num_steps] = [30, 20] and holds word ids; after the embedding lookup it becomes a [30, 20, 200] tensor of word vectors.
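As a quick sanity check, here is a minimal shape walkthrough (plain Python, using only the hyperparameter values declared above) of how one mini-batch flows through the model we build below:

# Shape walkthrough for the hyperparameters above (illustration only)
b, t, h, v = 30, 20, 200, 10000   # batch_size, num_steps, hidden_size, vocab_size
print("word ids fed to the model :", (b, t))        # [30, 20]
print("after embedding lookup    :", (b, t, h))     # [30, 20, 200]
print("stacked LSTM outputs      :", (b, t, h))     # [30, 20, 200]
print("reshaped for the softmax  :", (b * t, h))    # [600, 200]
print("logits over the vocabulary:", (b * t, v))    # [600, 10000]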
There is a lot to be done and a ton of information to process at the same time, so go over this code slowly. It may seem complex at first, but if you try to relate what you just learned about language modelling to the code you see, you should be able to understand it.
This code is adapted from the PTBModel example bundled with the TensorFlow source code.
The story starts from data:
First we start an interactive session:
In [10]:
session=tf.InteractiveSession()
In [11]:
# Reads the data and separates it into training data, validation data and testing data
raw_data = reader.ptb_raw_data(data_dir)
train_data, valid_data, test_data, _ = raw_data
Let's read just one mini-batch now and feed it to our network:
In [12]:
itera = reader.ptb_iterator(train_data, batch_size, num_steps)
first_touple=next(itera)
x=first_touple[0]
y=first_touple[1]
In [13]:
x.shape
Out[13]:
Let's look at 3 sentences of our input x:
In [14]:
x[0:3]
Out[14]:
In [15]:
size = hidden_size
We define 2 placeholders to be fed with mini-batches, that is, with x and y:
In [16]:
_input_data = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30#20]
_targets = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30#20]
Let's define a feed dictionary, and use it later to feed the placeholders with our first mini-batch:
In [17]:
feed_dict={_input_data:x, _targets:y}
For example, we can use it to feed _input_data:
In [18]:
session.run(_input_data,feed_dict)
Out[18]:
In this step, we create the stacked LSTM, which is a 2 layer LSTM network:
In [19]:
stacked_lstm = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(hidden_size, forget_bias=0.0)
for _ in range(num_layers)]
)
Also, we initialize the states of the network:
For each LSTM layer there are 2 state matrices, c_state and m_state: c_state is the cell state and m_state is the hidden (memory) state. Each holds one row per sentence in the batch, so with 200 hidden units per LSTM layer and a batch size of 30, each state is a matrix of size [30x200].
In [20]:
_initial_state = stacked_lstm.zero_state(batch_size, tf.float32)
_initial_state
Out[20]:
Let's look at the states; they are all zero for now:
In [21]:
session.run(_initial_state,feed_dict)
Out[21]:
In [22]:
try:
embedding = tf.get_variable("embedding", [vocab_size, hidden_size]) #[10000x200]
except ValueError:
pass
embedding.get_shape().as_list()
Out[22]:
In [23]:
session.run(tf.global_variables_initializer())
session.run(embedding, feed_dict)
Out[23]:
embedding_lookup goes through each row of _input_data and, for each word id in the row/sentence, finds the corresponding vector in embedding. It creates a [30x20x200] tensor, so the first element of inputs (the first sentence) is a matrix of shape 20x200, in which each row is the vector representing one word of the sentence.
In [24]:
# Define where to get the data for our embeddings from
inputs = tf.nn.embedding_lookup(embedding, _input_data) #shape=(30, 20, 200)
In [25]:
inputs
Out[25]:
In [26]:
session.run(inputs[0], feed_dict)
Out[26]:
tf.nn.dynamic_rnn() creates a recurrent neural network using stacked_lstm, which is an instance of RNNCell.
The input should be a Tensor of shape [batch_size, max_time, ...]; in our case it is (30, 20, 200).
This method returns a pair (outputs, new_state), where outputs is a tensor of shape [batch_size, max_time, cell.output_size] holding the output of the top LSTM layer at every time step, and new_state is the final state of the network after processing the mini-batch.
In [27]:
outputs, new_state = tf.nn.dynamic_rnn(stacked_lstm, inputs, initial_state=_initial_state)
So, let's look at the outputs. The output of the stacked LSTM comes from the 200 hidden units of the top layer at each of the 20 time steps. Later, we will use a linear (softmax) layer to map these 200-dimensional outputs onto the 10,000-word vocabulary:
In [28]:
outputs
Out[28]:
In [29]:
session.run(tf.global_variables_initializer())
session.run(outputs[0], feed_dict)
Out[29]:
Let's reshape the output tensor from [30 x 20 x 200] to [600 x 200]:
In [30]:
output = tf.reshape(outputs, [-1, size])
output
Out[30]:
In [31]:
session.run(output[0], feed_dict)
Out[31]:
In [32]:
softmax_w = tf.get_variable("softmax_w", [size, vocab_size]) #[200x10000]
softmax_b = tf.get_variable("softmax_b", [vocab_size]) #[1x10000]
logits = tf.matmul(output, softmax_w) + softmax_b
In [33]:
session.run(tf.global_variables_initializer())
logi = session.run(logits, feed_dict)
logi.shape
Out[33]:
In [34]:
First_word_output_probability = logi[0]
First_word_output_probability.shape
Out[34]:
In [35]:
embedding_array= session.run(embedding, feed_dict)
np.argmax(First_word_output_probability)
Out[35]:
So, what is the ground truth for the first word of the first sentence?
In [36]:
y[0][0]
Out[36]:
Also, you can get it from the targets tensor, and, if you want, look up its embedding vector:
In [37]:
_targets
Out[37]:
It is time to compare the logits with the targets.
In [38]:
targ = session.run(tf.reshape(_targets, [-1]), feed_dict)
In [39]:
first_word_target_code= targ[0]
first_word_target_code
Out[39]:
In [40]:
first_word_target_vec = session.run( tf.nn.embedding_lookup(embedding, targ[0]))
first_word_target_vec
Out[40]:
Now we want to define our objective function. Our objective is to minimize the loss function, that is, to minimize the average negative log probability of the target words:
$$\text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}$$
This function is already implemented and available in TensorFlow through sequence_loss_by_example, so we can just use it here. sequence_loss_by_example computes the weighted cross-entropy loss for a sequence of logits (per example). A toy numeric illustration follows the argument list below.
Its arguments:
logits: List of 2D Tensors of shape [batch_size x num_decoder_symbols].
targets: List of 1D batch-sized int32 Tensors of the same length as logits.
weights: List of 1D batch-sized float-Tensors of the same length as logits.
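To see what this loss measures at a single position, here is a toy numpy illustration (a made-up 3-word vocabulary, not the real model): the per-word loss is the negative log of the softmax probability assigned to the target word.

import numpy as np

toy_logits = np.array([2.0, 1.0, 0.1])    # made-up scores over a 3-word toy vocabulary
target_id = 0                             # suppose the true next word has id 0
probs = np.exp(toy_logits) / np.sum(np.exp(toy_logits))   # softmax over the toy vocabulary
loss_for_this_word = -np.log(probs[target_id])            # cross-entropy at the target word
print(probs)                # approximately [0.659, 0.242, 0.099]
print(loss_for_this_word)   # approximately 0.417, i.e. -ln(p_target) for this one position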
In [41]:
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example([logits], [tf.reshape(_targets, [-1])],[tf.ones([batch_size * num_steps])])
loss is a 1D float Tensor of length batch_size * num_steps (600 here): the negative log probability (log-perplexity) for each target word.
In [42]:
session.run(loss, feed_dict)
Out[42]:
In [43]:
cost = tf.reduce_sum(loss) / batch_size
session.run(tf.global_variables_initializer())
session.run(cost, feed_dict)
Out[43]:
Now, let's store the new state as the final state:
In [44]:
#
final_state = new_state
In [45]:
# Create a variable for the learning rate
lr = tf.Variable(0.0, trainable=False)
# Create the gradient descent optimizer with our learning rate
optimizer = tf.train.GradientDescentOptimizer(lr)
When defining a variable, if you pass trainable=True (the default), the Variable() constructor automatically adds it to the graph collection GraphKeys.TRAINABLE_VARIABLES. Then, using tf.trainable_variables(), you can get all variables created with trainable=True.
In [46]:
# Get all TensorFlow variables marked as "trainable" (i.e. all of them except _lr, which we just created)
tvars = tf.trainable_variables()
tvars
Out[46]:
We can find the name and scope of these variables (note that the next cell keeps only a subset of them):
In [47]:
tvars=tvars[3:]
In [48]:
[v.name for v in tvars]
Out[48]:
In [49]:
cost
Out[49]:
In [50]:
tvars
Out[50]:
The gradient of a function is its rate of change: a vector (a direction to move) that points in the direction of greatest increase of the function, computed by the derivative operation.
First, let's recall the gradient using a toy example: $$ z=\left(2x^2+3xy\right)$$
In [51]:
var_x = tf.placeholder(tf.float32)
var_y = tf.placeholder(tf.float32)
func_test = 2.0*var_x*var_x + 3.0*var_x*var_y
session.run(tf.global_variables_initializer())
feed={var_x:1.0,var_y:2.0}
session.run(func_test, feed)
Out[51]:
The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors—including variables. tf.gradients(func,xs) constructs symbolic partial derivatives of sum of func w.r.t. x in xs.
Now, let's look at the derivative w.r.t. var_x: $$ \frac{\partial \:}{\partial \:x}\left(2x^2+3xy\right)=4x+3y $$
In [52]:
var_grad = tf.gradients(func_test, [var_x])
session.run(var_grad,feed)
Out[52]:
The derivative w.r.t. var_y: $$ \frac{\partial \:}{\partial \:y}\left(2x^2+3xy\right)=3x $$
In [53]:
var_grad = tf.gradients(func_test, [var_y])
session.run(var_grad,feed)
Out[53]:
Now, we can look at the gradients w.r.t. all variables:
In [54]:
tf.gradients(cost, tvars)
Out[54]:
In [55]:
grad_t_list = tf.gradients(cost, tvars)
#sess.run(grad_t_list,feed_dict)
Now we have a list of gradient tensors, grad_t_list. We can use it to compute the clipped tensors. clip_by_global_norm clips the values of multiple tensors by the ratio of the sum of their norms.
clip_by_global_norm takes the gradient list as input and returns 2 things:
list_clipped: the list of tensors, each scaled down (if necessary) so that their joint global norm does not exceed the given clip norm (max_grad_norm here)
global_norm: a scalar tensor holding the global norm of the original, unclipped tensors
A toy numeric illustration of this clipping follows.
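Here is a toy numpy illustration (made-up numbers, not the model's real gradients) of what clip_by_global_norm computes: every tensor in the list is scaled by clip_norm / max(global_norm, clip_norm), where global_norm is the L2 norm of all entries of all tensors taken together.

import numpy as np

t_list = [np.array([3.0, 4.0]), np.array([12.0])]            # toy "gradients"
clip_norm = 5.0
global_norm = np.sqrt(sum(np.sum(t ** 2) for t in t_list))   # sqrt(9 + 16 + 144) = 13.0
scale = clip_norm / max(global_norm, clip_norm)              # 5/13, so clipping kicks in
clipped = [t * scale for t in t_list]
print(global_norm)   # 13.0
print(clipped)       # approximately [array([1.15, 1.54]), array([4.62])] -- new global norm is 5.0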
In [56]:
max_grad_norm
Out[56]:
In [57]:
# Define the gradient clipping threshold
grads, _ = tf.clip_by_global_norm(grad_t_list, max_grad_norm)
grads
Out[57]:
In [58]:
session.run(grads,feed_dict)
Out[58]:
In [59]:
# Create the training TensorFlow Operation through our optimizer
train_op = optimizer.apply_gradients(zip(grads, tvars))
In [60]:
session.run(tf.global_variables_initializer())
session.run(train_op,feed_dict)
We learned how the model is built step by step. Now, let's create a class that represents our model. This class needs a few things:
We have to set the model's parameters according to our defined hyperparameters
We have to create the placeholders for our input data and expected outputs (the target data)
We have to create the LSTM cell structure and connect it with the RNN structure
We have to create the word embeddings and point them to the input data
We have to create the input structure for our RNN
We have to instantiate our RNN model and retrieve the outputs and the state
We need to create a logistic unit to return the probability of the output word
We need to define the loss and cost functions for the model's learning to work
We need to create the training operation for our model
In [61]:
class PTBModel(object):
def __init__(self, is_training):
######################################
# Setting parameters for ease of use #
######################################
self.batch_size = batch_size
self.num_steps = num_steps
size = hidden_size
self.vocab_size = vocab_size
###############################################################################
# Creating placeholders for our input data and expected outputs (target data) #
###############################################################################
self._input_data = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30#20]
self._targets = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30#20]
##########################################################################
# Creating the LSTM cell structure and connect it with the RNN structure #
##########################################################################
# Create the LSTM unit.
# This creates only the structure for the LSTM and has to be associated with a RNN unit still.
# The argument of BasicLSTMCell (size=200) is the size of the hidden layer, that is, the number of hidden units of the LSTM cell.
# Size is the same as the size of our hidden layer, and no bias is added to the Forget Gate.
# LSTM cell processes one word at a time and computes probabilities of the possible continuations of the sentence.
lstm_cells = []
reuse = tf.get_variable_scope().reuse
for _ in range(num_layers):
cell = tf.contrib.rnn.BasicLSTMCell(size, forget_bias=0.0, reuse=reuse)
if is_training and keep_prob < 1:
# Unless you changed keep_prob, this won't actually execute -- this is a dropout wrapper for our LSTM unit
# This is an optimization of the LSTM output, but is not needed at all
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
lstm_cells.append(cell)
# By taking in the LSTM cells as parameters, the MultiRNNCell function junctions the LSTM units to the RNN units.
# RNN cell composed sequentially of multiple simple cells.
stacked_lstm = tf.contrib.rnn.MultiRNNCell(lstm_cells)
# Define the initial state, i.e., the model state for the very first data point
# It initialize the state of the LSTM memory. The memory state of the network is initialized with a vector of zeros and gets updated after reading each word.
self._initial_state = stacked_lstm.zero_state(batch_size, tf.float32)
####################################################################
# Creating the word embeddings and pointing them to the input data #
####################################################################
with tf.device("/cpu:0"):
# Create the embeddings for our input data. Size is hidden size.
# Uses default variable initializer
embedding = tf.get_variable("embedding", [vocab_size, size]) #[10000x200]
# Define where to get the data for our embeddings from
inputs = tf.nn.embedding_lookup(embedding, self._input_data)
# Unless you changed keep_prob, this won't actually execute -- this is a dropout addition for our inputs
# This is an optimization of the input processing and is not needed at all
if is_training and keep_prob < 1:
inputs = tf.nn.dropout(inputs, keep_prob)
############################################
# Creating the input structure for our RNN #
############################################
# For the (commented-out) static unrolling below, the input structure would be 20 tensors of shape [30x200]; with dynamic_rnn we keep the single [30x20x200] tensor
# Considering each word is represented by a 200-dimensional vector and each batch holds 30 sentences of 20 words, the embedded input has shape [30x20x200]
#inputs = [tf.squeeze(input_, [1]) for input_ in tf.split(1, num_steps, inputs)]
# The input structure is fed from the embeddings, which are filled in by the input data
# Feeding a batch of b sentences to a RNN:
# In step 1, first word of each of the b sentences (in a batch) is input in parallel.
# In step 2, second word of each of the b sentences is input in parallel.
# The parallelism is only for efficiency.
# Each sentence in a batch is handled in parallel, but the network sees one word of a sentence at a time and does the computations accordingly.
# All the computations involving the words of all sentences in a batch at a given time step are done in parallel.
####################################################################################################
# Instantiating our RNN model and retrieving the structure for returning the outputs and the state #
####################################################################################################
outputs, state = tf.nn.dynamic_rnn(stacked_lstm, inputs, initial_state=self._initial_state)
#########################################################################
# Creating a logistic unit to return the probability of the output word #
#########################################################################
output = tf.reshape(outputs, [-1, size])
softmax_w = tf.get_variable("softmax_w", [size, vocab_size]) #[200x10000]
softmax_b = tf.get_variable("softmax_b", [vocab_size]) #[1x10000]
logits = tf.matmul(output, softmax_w) + softmax_b
#########################################################################
# Defining the loss and cost functions for the model's learning to work #
#########################################################################
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example([logits], [tf.reshape(self._targets, [-1])],
[tf.ones([batch_size * num_steps])])
self._cost = cost = tf.reduce_sum(loss) / batch_size
# Store the final state
self._final_state = state
#Everything after this point is relevant only for training
if not is_training:
return
#################################################
# Creating the Training Operation for our Model #
#################################################
# Create a variable for the learning rate
self._lr = tf.Variable(0.0, trainable=False)
# Get all TensorFlow variables marked as "trainable" (i.e. all of them except _lr, which we just created)
tvars = tf.trainable_variables()
# Define the gradient clipping threshold
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), max_grad_norm)
# Create the gradient descent optimizer with our learning rate
optimizer = tf.train.GradientDescentOptimizer(self.lr)
# Create the training TensorFlow Operation through our optimizer
self._train_op = optimizer.apply_gradients(zip(grads, tvars))
# Helper functions for our LSTM RNN class
# Assign the learning rate for this model
def assign_lr(self, session, lr_value):
session.run(tf.assign(self.lr, lr_value))
# Returns the input data for this model at a point in time
@property
def input_data(self):
return self._input_data
# Returns the targets for this model at a point in time
@property
def targets(self):
return self._targets
# Returns the initial state for this model
@property
def initial_state(self):
return self._initial_state
# Returns the defined Cost
@property
def cost(self):
return self._cost
# Returns the final state for this model
@property
def final_state(self):
return self._final_state
# Returns the current learning rate for this model
@property
def lr(self):
return self._lr
# Returns the training operation defined for this model
@property
def train_op(self):
return self._train_op
With that, the actual structure of our Recurrent Neural Network with Long Short-Term Memory is finished. What remains for us to do is to create the methods that run it through time -- that is, the run_epoch method to be run at each epoch and a main script which ties all of this together.
What our run_epoch method should do is take our input data and feed it to the relevant operations. It returns, at the very least, the current result of the cost function.
In [62]:
##########################################################################################################################
# run_epoch takes as parameters the current session, the model instance, the data to be fed, and the operation to be run #
##########################################################################################################################
def run_epoch(session, m, data, eval_op, verbose=False):
#Define the epoch size based on the length of the data, batch size and the number of steps
epoch_size = ((len(data) // m.batch_size) - 1) // m.num_steps
start_time = time.time()
costs = 0.0
iters = 0
#state = m.initial_state.eval()
#m.initial_state = tf.convert_to_tensor(m.initial_state)
#state = m.initial_state.eval()
state = session.run(m.initial_state)
#For each step and data point
for step, (x, y) in enumerate(reader.ptb_iterator(data, m.batch_size, m.num_steps)):
#Evaluate and return cost, state by running cost, final_state and the function passed as parameter
cost, state, _ = session.run([m.cost, m.final_state, eval_op],
{m.input_data: x,
m.targets: y,
m.initial_state: state})
#Add returned cost to costs (which keeps track of the total costs for this epoch)
costs += cost
#Add number of steps to iteration counter
iters += m.num_steps
if verbose and (step % 10) == 0:
print("({:.2%}) Perplexity={:.3f} Speed={:.0f} wps".format(
step * 1.0 / epoch_size,
np.exp(costs / iters),
iters * m.batch_size / (time.time() - start_time))
)
# Returns the Perplexity rating for us to keep track of how the model is evolving
return np.exp(costs / iters)
Now, we create the main loop to tie everything together. The code here reads the data from the data directory using the reader helper module, and then trains the model and evaluates it on both a validation and a testing subset of the data.
In [63]:
# Reads the data and separates it into training data, validation data and testing data
raw_data = reader.ptb_raw_data(data_dir)
train_data, valid_data, test_data, _ = raw_data
In [64]:
#Initializes the Execution Graph and the Session
with tf.Graph().as_default(), tf.Session() as session:
initializer = tf.random_uniform_initializer(-init_scale,init_scale)
# Instantiates the model for training
# tf.variable_scope add a prefix to the variables created with tf.get_variable
with tf.variable_scope("model", reuse=None, initializer=initializer):
m = PTBModel(is_training=True)
# Reuses the trained parameters for the validation and testing models
# They are different instances but use the same variables for weights and biases,
# they just don't change when data is input
with tf.variable_scope("model", reuse=True, initializer=initializer):
mvalid = PTBModel(is_training=False)
mtest = PTBModel(is_training=False)
#Initialize all variables
tf.global_variables_initializer().run()
# Set initial learning rate
m.assign_lr(session=session, lr_value=learning_rate)
for i in range(max_max_epoch):
print("Epoch %d : Learning rate: %.3f" % (i + 1, session.run(m.lr)))
# Run the loop for this epoch in the training model
train_perplexity = run_epoch(session, m, train_data, m.train_op,
verbose=True)
print("Epoch %d : Train Perplexity: %.3f" % (i + 1, train_perplexity))
# Run the loop for this epoch in the validation model
valid_perplexity = run_epoch(session, mvalid, valid_data, tf.no_op())
print("Epoch %d : Valid Perplexity: %.3f" % (i + 1, valid_perplexity))
# Define the decay for the next epoch
lr_decay = decay * ((max_max_epoch - i) / max_max_epoch)
# Set the decayed learning rate as the learning rate for the next epoch
m.assign_lr(session, learning_rate * lr_decay)
# Run the loop in the testing model to see how effective was our training
test_perplexity = run_epoch(session, mtest, test_data, tf.no_op())
print("Test Perplexity: %.3f" % test_perplexity)
As you can see, the model's perplexity rating drops very quickly after a few iterations. As was elaborated before, lower Perplexity means that the model is more certain about its prediction. As such, we can be sure that this model is performing well!
This is the end of the Applying Recurrent Neural Networks to Text Processing notebook. Hopefully you now have a better understanding of Recurrent Neural Networks and how to implement one utilizing TensorFlow. Thank you for reading this notebook, and good luck on your studies.
Created by Walter Gomes de Amorim Junior, Saeed Aghabozorgi