Using wrappers for the Scikit-learn API

This tutorial shows how to use gensim models as part of your scikit-learn workflow with the help of the wrappers found in gensim.sklearn_integration.sklearn_wrapper_gensim_ldaModel.

The wrappers available (as of now) are:

  • LdaModel (gensim.sklearn_integration.sklearn_wrapper_gensim_ldaModel.SklearnWrapperLdaModel), which implements gensim's LdaModel in a scikit-learn interface

LdaModel

To use the LdaModel wrapper, begin by importing it:


In [1]:
from gensim.sklearn_integration.sklearn_wrapper_gensim_ldaModel import SklearnWrapperLdaModel

Next, we create a dummy set of texts and convert it into a corpus:


In [2]:
from gensim.corpora import Dictionary
texts = [
    ['complier', 'system', 'computer'],
    ['eulerian', 'node', 'cycle', 'graph', 'tree', 'path'],
    ['graph', 'flow', 'network', 'graph'],
    ['loading', 'computer', 'system'],
    ['user', 'server', 'system'],
    ['tree', 'hamiltonian'],
    ['graph', 'trees'],
    ['computer', 'kernel', 'malfunction', 'computer'],
    ['server', 'system', 'computer'],
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
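doc2bow maps each document to a sparse list of (token_id, count) pairs. The same idea can be sketched in plain Python (a conceptual illustration with a toy vocabulary, not gensim's implementation):

```python
from collections import Counter

def to_bow(tokens, token2id):
    """Map a token list to sorted (token_id, count) pairs, like doc2bow."""
    counts = Counter(tokens)
    return sorted((token2id[t], c) for t, c in counts.items() if t in token2id)

# a toy vocabulary; real ids come from the Dictionary object
token2id = {'graph': 0, 'flow': 1, 'network': 2}
print(to_bow(['graph', 'flow', 'network', 'graph'], token2id))
# [(0, 2), (1, 1), (2, 1)]
```

Tokens absent from the vocabulary are simply dropped, which is also how gensim handles out-of-vocabulary words at doc2bow time.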

Then we run the LdaModel on it:


In [3]:
model = SklearnWrapperLdaModel(num_topics=2, id2word=dictionary, iterations=20, random_state=1)
model.fit(corpus)
model.print_topics(2)


WARNING:gensim.models.ldamodel:too few updates, training might not converge; consider increasing the number of passes or iterations to improve accuracy
Out[3]:
[(0,
  u'0.164*"computer" + 0.117*"system" + 0.105*"graph" + 0.061*"server" + 0.057*"tree" + 0.046*"malfunction" + 0.045*"kernel" + 0.045*"complier" + 0.043*"loading" + 0.039*"hamiltonian"'),
 (1,
  u'0.102*"graph" + 0.083*"system" + 0.072*"tree" + 0.064*"server" + 0.059*"user" + 0.059*"computer" + 0.057*"trees" + 0.056*"eulerian" + 0.055*"node" + 0.052*"flow"')]
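Each topic string above is a ' + '-separated list of weight*"word" terms. If you need the pairs programmatically, a small parser (a sketch assuming exactly this string format) looks like:

```python
def parse_topic(topic_str):
    """Split a gensim-style topic string into (word, weight) pairs."""
    pairs = []
    for term in topic_str.split(' + '):
        weight, word = term.split('*')
        pairs.append((word.strip('"'), float(weight)))
    return pairs

topic = '0.164*"computer" + 0.117*"system" + 0.105*"graph"'
print(parse_topic(topic))
# [('computer', 0.164), ('system', 0.117), ('graph', 0.105)]
```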

Integration with Sklearn

To provide a better example of how the wrapper can be used with scikit-learn, let's use sklearn's CountVectorizer. For this example we will use the 20 Newsgroups dataset, restricted to the categories rec.sport.baseball and sci.crypt, and generate topics from it.


In [4]:
import numpy as np
from gensim import matutils
from gensim.models.ldamodel import LdaModel
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from gensim.sklearn_integration.sklearn_wrapper_gensim_ldaModel import SklearnWrapperLdaModel

In [5]:
rand = np.random.RandomState(1)  # set seed for reproducible results
cats = ['rec.sport.baseball', 'sci.crypt']
data = fetch_20newsgroups(subset='train',
                        categories=cats,
                        shuffle=True)

Next, we use CountVectorizer to convert the collection of text documents to a matrix of token counts.


In [6]:
vec = CountVectorizer(min_df=10, stop_words='english')

X = vec.fit_transform(data.data)
vocab = vec.get_feature_names()  # vocabulary to be converted to id2word

id2word = dict(enumerate(vocab))

Next, we fit the wrapper on X, passing id2word at construction time.


In [7]:
obj = SklearnWrapperLdaModel(id2word=id2word, num_topics=5, passes=20)
lda = obj.fit(X)
lda.print_topics()


Out[7]:
[(0,
  u'0.018*"cryptography" + 0.018*"face" + 0.017*"fierkelab" + 0.008*"abuse" + 0.007*"constitutional" + 0.007*"collection" + 0.007*"finish" + 0.007*"150" + 0.007*"fast" + 0.006*"difference"'),
 (1,
  u'0.022*"corporate" + 0.022*"accurate" + 0.012*"chance" + 0.008*"decipher" + 0.008*"example" + 0.008*"basically" + 0.008*"dawson" + 0.008*"cases" + 0.008*"consideration" + 0.008*"follow"'),
 (2,
  u'0.034*"argue" + 0.031*"456" + 0.031*"arithmetic" + 0.024*"courtesy" + 0.020*"beastmaster" + 0.019*"bitnet" + 0.015*"false" + 0.015*"classified" + 0.014*"cubs" + 0.014*"digex"'),
 (3,
  u'0.108*"abroad" + 0.089*"asking" + 0.060*"cryptography" + 0.035*"certain" + 0.030*"ciphertext" + 0.030*"book" + 0.028*"69" + 0.028*"demand" + 0.028*"87" + 0.027*"cracking"'),
 (4,
  u'0.022*"clark" + 0.019*"authentication" + 0.017*"candidates" + 0.016*"decryption" + 0.015*"attempt" + 0.013*"creation" + 0.013*"1993apr5" + 0.013*"acceptable" + 0.013*"algorithms" + 0.013*"employer"')]

Using the wrapper together with Scikit-learn's Logistic Regression

Now let's use sklearn's logistic regression classifier to distinguish between the two categories. Ideally, we should get positive weights for words related to cryptography and negative weights for words related to baseball.


In [8]:
from sklearn import linear_model

In [9]:
def print_features(clf, vocab, n=10):
    '''Print the top positive and negative features of a fitted linear classifier.'''
    coef = clf.coef_[0]
    print('Positive features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[::-1][:n] if coef[j] > 0])))
    print('Negative features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[:n] if coef[j] < 0])))
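The np.argsort pattern above just ranks feature indices by their coefficient. The same top-n selection can be written in plain Python (a self-contained sketch, independent of the classifier):

```python
def top_features(coef, vocab, n=10):
    """Return the n (word, weight) pairs with the largest positive weights."""
    order = sorted(range(len(coef)), key=lambda j: coef[j], reverse=True)
    return [(vocab[j], coef[j]) for j in order[:n] if coef[j] > 0]

coef = [0.5, -1.2, 2.0, 0.0]
vocab = ['key', 'ball', 'clipper', 'edu']
print(top_features(coef, vocab, n=2))
# [('clipper', 2.0), ('key', 0.5)]
```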

In [10]:
clf = linear_model.LogisticRegression(penalty='l1', C=0.1)  # l1 penalty used
clf.fit(X, data.target)
print_features(clf, vocab)


Positive features: clipper:1.50 code:1.24 key:1.04 encryption:0.95 chip:0.37 nsa:0.37 government:0.36 uk:0.36 org:0.23 cryptography:0.23
Negative features: baseball:-1.32 game:-0.71 year:-0.61 team:-0.38 edu:-0.27 games:-0.26 players:-0.23 ball:-0.17 season:-0.14 phillies:-0.11
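Under the hood, the fitted classifier scores each document by the dot product of its token counts with these weights (plus an intercept): a positive score means sci.crypt, a negative one means rec.sport.baseball. A stdlib sketch of that decision function, using hypothetical weights echoing the output above:

```python
def decision_score(counts, coef, intercept=0.0):
    """Linear decision function: sum of weight * count over the document's tokens."""
    return sum(coef.get(tok, 0.0) * c for tok, c in counts.items()) + intercept

# hypothetical weights, loosely based on the printed features above
coef = {'clipper': 1.50, 'encryption': 0.95, 'baseball': -1.32, 'game': -0.71}

crypto_doc = {'clipper': 2, 'encryption': 1}
sports_doc = {'baseball': 1, 'game': 2}
print(decision_score(crypto_doc, coef))  # positive score -> sci.crypt
print(decision_score(sports_doc, coef))  # negative score -> rec.sport.baseball
```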
