RECURRENT NETWORKS IN DEEP LEARNING

The Long Short-Term Memory Model

Hello and welcome to this notebook. In this notebook, we will go over the concepts of the Long Short-Term Memory (LSTM) model, a refinement of the original Recurrent Neural Network model. By the end of this notebook, you should understand the Long Short-Term Memory model, the benefits it brings, the problems it solves, and its inner workings and calculations.

The Problem to be Solved

Long Short-Term Memory, or LSTM for short, is one of the proposed solutions or upgrades to the Recurrent Neural Network model. The Recurrent Neural Network is a specialized type of Neural Network that solves the issue of maintaining context for sequential data -- such as weather measurements, stock prices, or gene sequences. At each iterative step, the processing unit takes in an input and the current state of the network, and produces an output and a new state that is fed back into the network.

*Representation of a Recurrent Neural Network*
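
To make that recurrence concrete, here is a minimal numpy sketch of a single step of a plain Recurrent Network; the function and weight names (rnn_step, W_x, W_s, b) are illustrative, not taken from any particular library.

import numpy as np

def rnn_step(x_t, state_prev, W_x, W_s, b):
    # Combine the current input with the previous state, squash with tanh,
    # and use the result both as the output and as the new state.
    state_new = np.tanh(W_x @ x_t + W_s @ state_prev + b)
    output = state_new  # in the simplest RNN, the output is just the new state
    return output, state_new

# Toy usage: input size 2, state size 3, small random weights
rng = np.random.RandomState(0)
out, s1 = rnn_step(rng.randn(2), np.zeros(3),
                   0.1 * rng.randn(3, 2), 0.1 * rng.randn(3, 3), np.zeros(3))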

However, this model has some problems. It is very computationally expensive to maintain the state for a large number of units, even more so over a long period of time. Additionally, Recurrent Networks are very sensitive to changes in their parameters. As such, they are prone to problems during Gradient Descent optimization: the gradients either grow exponentially (Exploding Gradient) or shrink to near zero (Vanishing Gradient), both of which greatly harm a model's learning capability.

Long Short-Term Memory: What is it?

To solve these problems, Hochreiter and Schmidhuber published a paper in 1997 describing a way to keep information over long periods of time and to address the oversensitivity to parameter changes, i.e., to make backpropagation through Recurrent Networks more viable.

The Long Short-Term Memory, as it was called, was an abstraction of how computer memory works. It is "bundled" with whatever processing unit is implemented in the Recurrent Network, although outside of its flow, and is responsible for keeping, reading, and outputting information for the model. The way it works is simple: you have a linear unit, which is the information cell itself, surrounded by three logistic gates responsible for maintaining the data. One gate is for writing data into the information cell, one is for reading data out of the information cell, and the last one is for keeping or forgetting data depending on the needs of the network.

Thanks to that, it not only solves the problem of keeping states -- the network can choose to forget data whenever the information is no longer needed -- it also solves the gradient problems, since the logistic gates have a very well-behaved derivative.

Long Short-Term Memory Architecture

As seen before, the Long Short-Term Memory is composed of a linear unit surrounded by three logistic gates. The names for these gates vary from place to place, but the most common ones are the "Input" or "Write" Gate, which handles the writing of data into the information cell; the "Output" or "Read" Gate, which handles the sending of data back into the Recurrent Network; and the "Keep" or "Forget" Gate, which handles the maintaining and modification of the data stored in the information cell.

*Diagram of the Long Short-Term Memory Unit*

The three gates are the centerpiece of the LSTM unit. The gates, when activated by the network, perform their respective functions. For example, the Input Gate will write whatever data it is passed onto the information cell, the Output Gate will return whatever data is in the information cell, and the Keep Gate will maintain the data in the information cell. These gates are analog and multiplicative, and as such, can modify the data based on the signal they are sent.


For example, a typical flow of operations for the LSTM unit is as follows: first, the Keep Gate has to decide whether to keep or forget the data currently stored in memory. It receives both the input and the state of the Recurrent Network and passes them through its Sigmoid activation. A value of 1 means that the LSTM unit should keep the data stored perfectly, and a value of 0 means that it should forget it entirely. Consider $S_{t-1}$ as the incoming (previous) state, $x_t$ as the incoming input, and $W_k$, $B_k$ as the weight and bias for the Keep Gate. Additionally, consider $Old_{t-1}$ as the data previously in memory. What happens can be summarized by these equations:
$$K_t = \sigma(W_k \times [S_{t-1},x_t] + B_k)$$

$$Old_t = K_t \times Old_{t-1}$$
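
For instance, with illustrative toy numbers: if the Keep Gate outputs $K_t = 0.9$ and the memory currently holds $Old_{t-1} = [2.0, -1.0]$, then $Old_t = [1.8, -0.9]$ -- most of the old information survives, only slightly attenuated.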

As you can see, $Old_{t-1}$ is multiplied by the value returned by the Keep Gate -- the result is what remains in the memory cell. Then, the input and state are passed on to the Input Gate, which applies another Sigmoid activation. Concurrently, the input is processed as usual by whatever processing unit is implemented in the network, and the result is then multiplied by the Input Gate's Sigmoid activation, much like in the Keep Gate. Consider $W_i$ and $B_i$ as the weight and bias for the Input Gate, and $C_t$ as the result of the processing of the inputs by the Recurrent Network.
$$I_t = \sigma(W_i\times[S_{t-1},x_t]+B_i)$$

$$New_t = I_t \times C_t$$

$New_t$ is the new data to be input into the memory cell. This is then added to whatever value is still stored in memory.
$$Cell_t = Old_t + New_t$$
We now have the candidate data which is to be kept in the memory cell. The Keep and Input Gates work together in an analog manner, making it possible to keep part of the old data and add only part of the new data. Consider, however, what would happen if the Keep (Forget) Gate were set to 0 and the Input Gate were set to 1:
$$Old_t = 0 \times Old_{t-1}$$

$$New_t = 1 \times C_t$$

$$Cell_t = C_t$$

The old data would be totally forgotten and the new data would overwrite it completely.

The Output Gate functions in a similar manner. To decide what we should output, we take the input data and state and pass them through a Sigmoid function as usual. The contents of our memory cell, however, are passed through a Tanh function to bind them between -1 and 1. Consider $W_o$ and $B_o$ as the weight and bias for the Output Gate.
$$O_t = \sigma(W_o \times [S_{t-1},x_t] + B_o)$$

$$Output_t = O_t \times \tanh(Cell_t)$$


And that $Output_t$ is what is output into the Recurrent Network.
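
Putting the whole walkthrough together, here is a minimal numpy sketch of one LSTM step. The helper names (sigmoid, lstm_step) and the tanh candidate layer with weights W_c, B_c are illustrative assumptions, not part of the original text; the gate equations themselves mirror the ones above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, S_prev, Old_prev, W_k, B_k, W_i, B_i, W_c, B_c, W_o, B_o):
    z = np.concatenate([S_prev, x_t])      # [S_{t-1}, x_t]
    K_t = sigmoid(W_k @ z + B_k)           # Keep/Forget gate
    Old_t = K_t * Old_prev                 # part of the old memory that survives
    I_t = sigmoid(W_i @ z + B_i)           # Input/Write gate
    C_t = np.tanh(W_c @ z + B_c)           # candidate data (tanh processing unit, assumed)
    New_t = I_t * C_t                      # gated new data
    Cell_t = Old_t + New_t                 # updated memory cell
    O_t = sigmoid(W_o @ z + B_o)           # Output/Read gate
    Output_t = O_t * np.tanh(Cell_t)       # what goes back into the Recurrent Network
    return Output_t, Cell_t

# Toy usage: hidden size 3, input size 2, small random weights
n_s, n_x = 3, 2
rng = np.random.RandomState(0)
mk_W = lambda: 0.1 * rng.randn(n_s, n_s + n_x)   # one weight matrix per gate
out_t, cell_t = lstm_step(rng.randn(n_x), np.zeros(n_s), np.zeros(n_s),
                          mk_W(), np.zeros(n_s), mk_W(), np.zeros(n_s),
                          mk_W(), np.zeros(n_s), mk_W(), np.zeros(n_s))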


*The Logistic Function plotted*

As mentioned many times, all three gates are logistic. The reason for this is that it is very easy to backpropagate through them, and as such, it is possible for the model to learn exactly how it is supposed to use this structure. This is one of the reasons LSTM is such a strong architecture. Additionally, this helps with the gradient problems by manipulating values through the gates themselves -- by passing the inputs and outputs through the gates, we now have an easily differentiable function modifying our inputs.
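
Concretely, the logistic (sigmoid) function $\sigma(x) = \frac{1}{1+e^{-x}}$ has the convenient derivative

$$\sigma'(x) = \sigma(x) \times (1 - \sigma(x))$$

so the gradient of each gate can be computed directly from the gate's own output and is always bounded, which keeps backpropagation through the gates well behaved.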

As for the problem of storing many states over a long period of time, the LSTM handles this by keeping only the information that is necessary and forgetting it as soon as it is no longer needed. LSTMs are therefore an elegant solution to both problems.

LSTM basics

Let's first create a tiny LSTM network sample to understand the architecture of LSTM networks.

We need to import the necessary modules for our code: numpy and tensorflow. The RNN building blocks used below (LSTMCell, BasicLSTMCell, and MultiRNNCell) come from the tf.contrib.rnn module, and tf.nn.dynamic_rnn takes care of unrolling the network over time.

If you want to learn more, take a look at https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/


In [1]:
import numpy as np
import tensorflow as tf

In [2]:
tf.reset_default_graph()

In [3]:
sess = tf.InteractiveSession()

We want to create a network that has only one LSTM cell. The LSTM cell here has 3 hidden units (LSTM_CELL_SIZE = 3), so we need state vectors of matching size as well. The state is a tuple with 2 elements, each of shape [batch_size x LSTM_CELL_SIZE], i.e., [2 x 3]: the cell state c, which carries information to the next time step, and the hidden state h, which is passed to the next layer/output.


In [4]:
LSTM_CELL_SIZE = 3 
with tf.variable_scope('basic_lstm_cell'):
    try:
        lstm_cell = tf.contrib.rnn.LSTMCell(LSTM_CELL_SIZE, state_is_tuple=True, reuse=False)
    except ValueError:
        print('LSTM already exists in the current scope. Reset the TF graph to re-create it.')
    sample_input = tf.constant([[1,2,3,4,3,2],[3,2,2,2,2,2]],dtype=tf.float32)
    state = (tf.zeros([2,LSTM_CELL_SIZE]),)*2
    output, state_new = lstm_cell(sample_input, state)

In [5]:
sess.run(tf.global_variables_initializer())

In [6]:
print (sess.run(sample_input))


[[ 1.  2.  3.  4.  3.  2.]
 [ 3.  2.  2.  2.  2.  2.]]

As we can see, the states are initialized with zeros:


In [7]:
cell_state, h_state = sess.run(state)
print('Cell state shape: ', cell_state.shape)
print('Hidden state shape: ', h_state.shape)
print(cell_state)
print(h_state)


Cell state shape:  (2, 3)
Hidden state shape:  (2, 3)
[[ 0.  0.  0.]
 [ 0.  0.  0.]]
[[ 0.  0.  0.]
 [ 0.  0.  0.]]

Let's look at the output and the new state of the network:


In [8]:
print (sess.run(output))


[[-0.20051736  0.17376861  0.10328128]
 [-0.55597687  0.29114026  0.14375289]]

In [9]:
c_new, h_new = sess.run(state_new)
print(c_new)
print(h_new)


[[-0.28083256  0.18005869  0.12710069]
 [-0.8096109   0.32536143  0.17687449]]
[[-0.20051736  0.17376861  0.10328128]
 [-0.55597687  0.29114026  0.14375289]]

Stacked LSTM basics

What if we want to build an RNN with stacked LSTM cells, i.e., a multi-layer LSTM?

The input should be a Tensor of shape [batch_size, max_time, input_dimension]; in our case it will be (2, 3, 6).


In [10]:
sample_LSTM_CELL_SIZE = 3  # 3 hidden nodes in each LSTM cell (this does not have to equal the number of time steps)
sample_batch_size = 2
sample_input = tf.constant([
    [[1,2,3,4,3,2], 
    [1,2,1,1,1,2],
    [1,2,2,2,2,2]],
    
    [[1,2,3,4,3,2],
    [3,2,2,1,1,2],
    [0,0,0,0,3,2]]
],dtype=tf.float32)
num_layers = 3

with tf.variable_scope("stacked_lstm"):
    try:
        multi_lstm_cell = tf.contrib.rnn.MultiRNNCell(
            [tf.contrib.rnn.BasicLSTMCell(sample_LSTM_CELL_SIZE, state_is_tuple=True) for _ in range(num_layers)]
        )
        _initial_state = multi_lstm_cell.zero_state(sample_batch_size, tf.float32)
        outputs, new_state =  tf.nn.dynamic_rnn(multi_lstm_cell, 
                                                sample_input,
                                                dtype=tf.float32, 
                                                initial_state=_initial_state)    
    except ValueError:
        print('Stacked LSTM already exists in the current scope. Reset the TF graph to re-create it.')
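
Before running the session, you can sanity-check the static shapes of the tensors (an optional check; it assumes the cells above were created without hitting the except branch):

print(sample_input.get_shape())   # (2, 3, 6): [batch_size, max_time, input_dimension]
print(outputs.get_shape())        # (2, 3, 3): [batch_size, max_time, sample_LSTM_CELL_SIZE]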

In [11]:
sess.run(tf.global_variables_initializer())

In [12]:
print (sess.run(sample_input))


[[[ 1.  2.  3.  4.  3.  2.]
  [ 1.  2.  1.  1.  1.  2.]
  [ 1.  2.  2.  2.  2.  2.]]

 [[ 1.  2.  3.  4.  3.  2.]
  [ 3.  2.  2.  1.  1.  2.]
  [ 0.  0.  0.  0.  3.  2.]]]

In [13]:
sess.run(_initial_state)


Out[13]:
(LSTMStateTuple(c=array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32), h=array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)),
 LSTMStateTuple(c=array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32), h=array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)),
 LSTMStateTuple(c=array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32), h=array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)))

In [14]:
print (sess.run(new_state))


(LSTMStateTuple(c=array([[-0.06243251, -0.42666471, -0.66819465],
       [-0.54530811,  0.63006109,  0.19364132]], dtype=float32), h=array([[-0.03435314, -0.03299662, -0.51462108],
       [-0.37309262,  0.13184208,  0.0641093 ]], dtype=float32)), LSTMStateTuple(c=array([[ 0.24988043, -0.0357448 , -0.20049879],
       [-0.03750936, -0.10067128,  0.03815747]], dtype=float32), h=array([[ 0.13592206, -0.01893157, -0.10592181],
       [-0.0199372 , -0.04886958,  0.02061478]], dtype=float32)), LSTMStateTuple(c=array([[ 0.09395637,  0.00929645, -0.01139056],
       [ 0.00369226,  0.01093869,  0.00342935]], dtype=float32), h=array([[ 0.04628225,  0.00462313, -0.00587444],
       [ 0.0018322 ,  0.0054282 ,  0.0017295 ]], dtype=float32)))

In [15]:
print (sess.run(output))


[[-0.30990523  0.10787404  0.05974748]
 [-0.13730974  0.05931989  0.05300466]]

This is the end of this lesson. Hopefully, you now have a deeper, more intuitive understanding of the LSTM model. Thank you for reading this notebook, and good luck with your studies.