Using wrappers for Gensim models for working with Keras

This tutorial is about using gensim models as a part of your Keras models.

The wrappers available (as of now) are :

  • Word2Vec (uses the function get_embedding_layer defined in gensim.models.keyedvectors)

Word2Vec

To use Word2Vec, we import the corresponding module.


In [3]:
from gensim.models import word2vec


Using TensorFlow backend.

Next we create a dummy set of sentences to train our Word2Vec model.


In [4]:
sentences = [
    ['human', 'interface', 'computer'],
    ['survey', 'user', 'computer', 'system', 'response', 'time'],
    ['eps', 'user', 'interface', 'system'],
    ['system', 'human', 'system', 'eps'],
    ['user', 'response', 'time'],
    ['trees'],
    ['graph', 'trees'],
    ['graph', 'minors', 'trees'],
    ['graph', 'minors', 'survey']
]

Then, we create the Word2Vec model by passing appropriate parameters.


In [5]:
model = word2vec.Word2Vec(sentences, size=100, min_count=1, hs=1)


WARNING:gensim.models.word2vec:under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

Integration with Keras : Cosine Similarity Task

As an example of integration of Gensim's Word2Vec model with Keras, we consider a word similarity task where we compute the cosine distance as a measure of similarity between the two words.


In [6]:
import numpy as np
from keras.engine import Input
from keras.models import Model
from keras.layers.merge import dot

We would use the layer returned by the function get_embedding_layer in the Keras model.


In [7]:
wv = model.wv
embedding_layer = wv.get_embedding_layer()

Next, we construct the Keras model.


In [8]:
input_a = Input(shape=(1,), dtype='int32', name='input_a')
input_b = Input(shape=(1,), dtype='int32', name='input_b')
embedding_a = embedding_layer(input_a)
embedding_b = embedding_layer(input_b)
similarity = dot([embedding_a, embedding_b], axes=2, normalize=True)

keras_model = Model(input=[input_a, input_b], output=similarity)
keras_model.compile(optimizer='sgd', loss='mse')


/home/chinmaya13/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:7: UserWarning: Update your `Model` call to the Keras 2 API: `Model(outputs=Tensor("do..., inputs=[<tf.Tenso...)`

Now, we input the two words which we wish to compare and retrieve the value predicted by the model as the similarity score of the two words.


In [9]:
word_a = 'graph'
word_b = 'trees'
# output is the cosine distance between the two words (as a similarity measure)
output = keras_model.predict([np.asarray([model.wv.vocab[word_a].index]), np.asarray([model.wv.vocab[word_b].index])])

print output


[[[ 0.00596689]]]

Integration with Keras : 20NewsGroups Task

To see how Gensim's Word2Vec model could be integrated with Keras while dealing with a real supervised (classification) task, we consider the 20NewsGroups task. Here, we take a smaller version of this data by taking a subset of the documents to be classified.

First, we import the necessary modules.


In [10]:
import os
import sys
import keras
import numpy as np

from gensim.models import word2vec

from keras.models import Model
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
from keras.layers import Input, Dense, Flatten
from keras.layers import Conv1D, MaxPooling1D

from sklearn.datasets import fetch_20newsgroups

As the first step of the task, we iterate over the folder in which our text samples are stored, and format them into a list of samples. Also, we prepare at the same time a list of class indices matching the samples.


In [11]:
texts = []  # list of text samples
texts_w2v = []  # used to train the word embeddings
labels = []  # list of label ids

#using 3 categories for training the classifier
data = fetch_20newsgroups(subset='train', categories=['alt.atheism', 'comp.graphics', 'sci.space'])

for index in range(len(data)):
    label_id = data.target[index]
    file_data = data.data[index]
    i = file_data.find('\n\n')  # skip header
    if i > 0:
        file_data = file_data[i:]
    try:
        curr_str = str(file_data)
        sentence_list = curr_str.split('\n')
        for sentence in sentence_list:
            sentence = (sentence.strip()).lower()
            texts.append(sentence)
            texts_w2v.append(sentence.split(' '))
            labels.append(label_id)
    except:
        None

Then, we format our text samples and labels into tensors that can be fed into a neural network. To do this, we rely on Keras utilities keras.preprocessing.text.Tokenizer and keras.preprocessing.sequence.pad_sequences.


In [12]:
MAX_SEQUENCE_LENGTH = 1000

# Vectorize the text samples into a 2D integer tensor
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

# word_index = tokenizer.word_index
data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
labels = to_categorical(np.asarray(labels))

x_train = data
y_train = labels

As the next step, we prepare the embedding layer to be used in our actual Keras model.


In [13]:
Keras_w2v = word2vec.Word2Vec(min_count=1)
Keras_w2v.build_vocab(texts_w2v)
Keras_w2v.train(texts, total_examples=Keras_w2v.corpus_count, epochs=Keras_w2v.iter)
Keras_w2v_wv = Keras_w2v.wv
embedding_layer = Keras_w2v_wv.get_embedding_layer()


WARNING:gensim.models.word2vec:under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

Finally, we create a small 1D convnet to solve our classification problem.


In [14]:
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x)  # global max pooling
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(y_train.shape[1], activation='softmax')(x)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])

model.fit(x_train, y_train, epochs=5)


Epoch 1/5
137/137 [==============================] - 2s - loss: 1.0051 - acc: 0.4088     
Epoch 2/5
137/137 [==============================] - 2s - loss: 0.9640 - acc: 0.4891     
Epoch 3/5
137/137 [==============================] - 2s - loss: 0.8881 - acc: 0.4891     
Epoch 4/5
137/137 [==============================] - 2s - loss: 0.9136 - acc: 0.4453     
Epoch 5/5
137/137 [==============================] - 2s - loss: 0.8823 - acc: 0.4891     
Out[14]:
<keras.callbacks.History at 0x7fe154eea2d0>

As can be seen from the results above, the accuracy obtained is not that high. This is because of the small size of training data used and we could expect to obtain better accuracy for training data of larger size.

Integration with Keras : Another classification task

In this task, we train our model to predict the category of the input text. We start by importing the relevant modules and libraries :


In [20]:
from keras.models import Sequential
from keras.layers import Dropout
from keras.regularizers import l2
from keras.models import Model
from keras.engine import Input
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
from gensim.models import keyedvectors
from collections import defaultdict

import pandas as pd

We now define some global variables and utility functions which would be used in the code further :


In [21]:
# global variables

nb_filters = 1200  # number of filters
n_gram = 2  # n-gram, or window size of CNN/ConvNet
maxlen = 15  # maximum number of words in a sentence
vecsize = 300  # length of the embedded vectors in the model 
cnn_dropout = 0.0  # dropout rate for CNN/ConvNet
final_activation = 'softmax'  # activation function. Options: softplus, softsign, relu, tanh, sigmoid, hard_sigmoid, linear.
dense_wl2reg = 0.0  # dense_wl2reg: L2 regularization coefficient
dense_bl2reg = 0.0  # dense_bl2reg: L2 regularization coefficient for bias
optimizer = 'adam'  # optimizer for gradient descent. Options: sgd, rmsprop, adagrad, adadelta, adam, adamax, nadam

# utility functions

def retrieve_csvdata_as_dict(filepath):
    """
    Retrieve the training data in a CSV file, with the first column being the
    class labels, and second column the text data. It returns a dictionary with
    the class labels as keys, and a list of short texts as the value for each key.
    """
    df = pd.read_csv(filepath)
    category_col, descp_col = df.columns.values.tolist()
    shorttextdict = dict()
    for category, descp in zip(df[category_col], df[descp_col]):
        if type(descp) == str:
            shorttextdict.setdefault(category, []).append(descp)
    return shorttextdict

def subjectkeywords():
    """
    Return an example data set, with three subjects and corresponding keywords.
    This is in the format of the training input.
    """
    data_path = os.path.join(os.getcwd(), 'datasets/keras_classifier_training_data.csv')
    return retrieve_csvdata_as_dict(data_path)

def convert_trainingdata(classdict):
    """
    Convert the training data into format put into the neural networks.
    """
    classlabels = classdict.keys()
    lblidx_dict = dict(zip(classlabels, range(len(classlabels))))

    # tokenize the words, and determine the word length
    phrases = []
    indices = []
    for label in classlabels:
        for shorttext in classdict[label]:
            shorttext = shorttext if type(shorttext) == str else ''
            category_bucket = [0]*len(classlabels)
            category_bucket[lblidx_dict[label]] = 1
            indices.append(category_bucket)
            phrases.append(shorttext)

    return classlabels, phrases, indices

def process_text(text):
    """ 
    Process the input text by tokenizing and padding it.
    """
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(text)
    x_train = tokenizer.texts_to_sequences(text)

    x_train = pad_sequences(x_train, maxlen=maxlen)
    return x_train

We create our word2vec model first. We could either train our model or user pre-trained vectors.


In [22]:
# we are training our Word2Vec model here
w2v_training_data_path = os.path.join(os.getcwd(), 'datasets/word_vectors_training_data.txt')
input_data = word2vec.LineSentence(w2v_training_data_path)
w2v_model = word2vec.Word2Vec(input_data, size=300)
w2v_model_wv = w2v_model.wv

# Alternatively we could have imported pre-trained word-vectors like : 
# w2v_model_wv = keyedvectors.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
# The dataset 'GoogleNews-vectors-negative300.bin.gz' can be downloaded from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit


WARNING:gensim.models.word2vec:under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

We load the training data for the Keras model.


In [23]:
trainclassdict = subjectkeywords()

nb_labels = len(trainclassdict)  # number of class labels

Next, we create out Keras model.


In [24]:
# get embedding layer corresponding to our trained Word2Vec model
embedding_layer = w2v_model_wv.get_embedding_layer()

# create a convnet to solve our classification task
sequence_input = Input(shape=(maxlen,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(filters=nb_filters, kernel_size=n_gram, padding='valid', activation='relu', input_shape=(maxlen, vecsize))(embedded_sequences)
x = MaxPooling1D(pool_size=maxlen - n_gram + 1)(x)
x = Flatten()(x)
preds = Dense(nb_labels, activation=final_activation, kernel_regularizer=l2(dense_wl2reg), bias_regularizer=l2(dense_bl2reg))(x)

Next, we train the classifier.


In [25]:
classlabels, x_train, y_train = convert_trainingdata(trainclassdict)

tokenizer = Tokenizer()
tokenizer.fit_on_texts(x_train)
x_train = tokenizer.texts_to_sequences(x_train)

x_train = pad_sequences(x_train, maxlen=maxlen)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
fit_ret_val = model.fit(x_train, y_train, epochs=10)


Epoch 1/10
45/45 [==============================] - 0s - loss: 1.1154 - acc: 0.2222     
Epoch 2/10
45/45 [==============================] - 0s - loss: 1.0949 - acc: 0.3333     
Epoch 3/10
45/45 [==============================] - 0s - loss: 1.0426 - acc: 0.8667     
Epoch 4/10
45/45 [==============================] - 0s - loss: 0.8931 - acc: 0.9556     
Epoch 5/10
45/45 [==============================] - 0s - loss: 0.6967 - acc: 0.9778     
Epoch 6/10
45/45 [==============================] - 0s - loss: 0.4727 - acc: 0.9556     
Epoch 7/10
45/45 [==============================] - 0s - loss: 0.2991 - acc: 0.9778     
Epoch 8/10
45/45 [==============================] - 0s - loss: 0.1795 - acc: 0.9778     
Epoch 9/10
45/45 [==============================] - 0s - loss: 0.1218 - acc: 0.9778     
Epoch 10/10
45/45 [==============================] - 0s - loss: 0.0889 - acc: 0.9778     

Our classifier is now ready to predict classes for input data.


In [26]:
input_text = 'artificial intelligence'

matrix = process_text(input_text)

predictions = model.predict(matrix)

# get the actual categories from output
scoredict = {}
for idx, classlabel in zip(range(len(classlabels)), classlabels):
    scoredict[classlabel] = predictions[0][idx]

print scoredict


{'mathematics': 0.96289372, 'physics': 0.025273025, 'theology': 0.011833278}

The result above clearly suggests (~ 98% probability!) that the input artificial intelligence should belong to the category mathematics, which conforms very well with the expected output in this case. In general, the output could depend on several factors including the number of filters for the conv-net, the training data for the word-vectors, the training data for the classifier etc.


In [ ]: