This notebook explains the basic ideas behind the open-source NMF implementation in Gensim, including code examples for applying NMF to text processing.
Gensim's Online Non-Negative Matrix Factorization (NMF, NNMF, ONMF) implementation is based on Renbo Zhao, Vincent Y. F. Tan: Online Nonnegative Matrix Factorization with Outliers, 2016, and is optimized for extremely large, sparse, streamed inputs. Such inputs are common in NLP, where unsupervised training is run on massive text corpora.
Why Online? Because corpora and datasets in modern ML can be very large, and RAM is limited. Unlike batch algorithms, online algorithms learn iteratively, streaming through the available training examples, without loading the entire dataset into RAM or requiring random-access to the data examples.
Why Non-Negative? Because non-negativity leads to more interpretable, sparse "human-friendly" topics. This is in contrast to e.g. SVD (another popular matrix factorization method with super-efficient implementation in Gensim), which produces dense negative factors and thus harder-to-interpret topics.
Matrix factorizations are the cornerstone of modern machine learning. They can be used either directly (recommendation systems, bi-clustering, image compression, topic modeling…) or as internal routines in more complex deep learning algorithms.
Terminology:

- corpus: a stream of input documents = training examples
- batch: a chunk of the input corpus, a word-document matrix mini-batch that fits in RAM
- W: the word-topic matrix (to be learned; stored in the resulting model)
- h: the topic-document matrix (to be learned; not stored, but inferred for documents on-the-fly)
- A, B: matrices that accumulate information from consecutive chunks; A = h.dot(ht), B = v.dot(ht)

The idea behind the algorithm is as follows:
Initialize W, A and B matrices

for batch in input corpus batches:
    infer h:
        do a coordinate gradient descent step to find h that minimizes ||batch - Wh|| in L2 norm
        bound h so that it is non-negative
    update A and B:
        A = h.dot(ht)
        B = batch.dot(ht)
    update W:
        do a gradient descent step to find W that minimizes 0.5*trace(WtWA) - trace(WtB)
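To make these update rules concrete, here is a minimal NumPy sketch of a single online update for one mini-batch. It only illustrates the idea above and is not Gensim's actual implementation: it uses a fixed learning rate and plain projected gradient steps instead of the coordinate descent solver, and omits the outlier handling from the paper; all names (online_nmf_step, lr, etc.) are made up for this example.

import numpy as np

def online_nmf_step(batch, W, A, B, h_iters=50, w_iters=50, lr=1e-3):
    """One simplified online NMF update for a single mini-batch.

    batch: dense (n_words, n_docs) chunk of the corpus
    W:     current (n_words, n_topics) word-topic matrix
    A, B:  accumulator matrices carried over from previous chunks
    """
    n_topics = W.shape[1]

    # Infer h for this batch: minimize ||batch - Wh|| with projected gradient steps.
    h = np.random.rand(n_topics, batch.shape[1])
    for _ in range(h_iters):
        grad_h = W.T @ (W @ h - batch)
        h = np.maximum(h - lr * grad_h, 0.0)       # bound h so that it is non-negative

    # Accumulate information from this chunk.
    A += h @ h.T
    B += batch @ h.T

    # Update W: the gradient of 0.5*trace(WtWA) - trace(WtB) with respect to W is W.dot(A) - B.
    for _ in range(w_iters):
        W = np.maximum(W - lr * (W @ A - B), 0.0)  # keep W non-negative

    return W, h, A, B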
Let's import the models we'll be using throughout this tutorial (numpy==1.14.2, matplotlib==3.0.2, pandas==0.24.1, sklearn==0.19.1, gensim==3.7.1) and set up logging at INFO level.
Gensim uses logging generously to inform users what's going on. Eyeballing the logs is a good sanity check, to make sure everything is working as expected.
Only numpy and gensim are actually needed to train and use NMF. The other imports are used only to make our life a little easier in this tutorial.
In [1]:
import logging
import time
from contextlib import contextmanager
import os
from multiprocessing import Process
import psutil
import numpy as np
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
import gensim.downloader
from gensim import matutils, utils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, TfidfModel
from gensim.models.basemodel import BaseTopicModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Let's load the notorious 20 Newsgroups dataset from Gensim's repository of pre-trained models and corpora:
In [2]:
newsgroups = gensim.downloader.load('20-newsgroups')
categories = [
'alt.atheism',
'comp.graphics',
'rec.motorcycles',
'talk.politics.mideast',
'sci.space'
]
categories = {name: idx for idx, name in enumerate(categories)}
Create a train/test split:
In [3]:
random_state = RandomState(42)
trainset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'train'
])
random_state.shuffle(trainset)
testset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'test'
])
random_state.shuffle(testset)
We'll use very simple preprocessing with stemming to tokenize each document. YMMV; in your application, use whatever preprocessing makes sense in your domain. Correctly preparing the input has major impact on any subsequent ML training.
In [4]:
train_documents = [preprocess_string(doc['data']) for doc in trainset]
test_documents = [preprocess_string(doc['data']) for doc in testset]
Let's create a mapping between tokens and their ids. Another option would be a HashDictionary, saving ourselves one pass over the training documents.
In [5]:
dictionary = Dictionary(train_documents)
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=20000) # filter out too in/frequent tokens
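The HashDictionary alternative mentioned above would look roughly like this; it hashes tokens to ids on the fly, so no initial pass over the corpus is needed (the id_range value here is just an illustration):

from gensim.corpora import HashDictionary

hash_dictionary = HashDictionary(id_range=20000)  # token ids come from hashing, no training pass needed
hashed_train_corpus = [hash_dictionary.doc2bow(doc) for doc in train_documents]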
Let's vectorize the training corpus into the bag-of-words format. We'll train LDA on the BOW corpus, and the NMF models on a TF-IDF corpus:
In [6]:
tfidf = TfidfModel(dictionary=dictionary)
train_corpus = [
dictionary.doc2bow(document)
for document
in train_documents
]
test_corpus = [
dictionary.doc2bow(document)
for document
in test_documents
]
train_corpus_tfidf = list(tfidf[train_corpus])
test_corpus_tfidf = list(tfidf[test_corpus])
Here we simply stored the bag-of-words vectors in a list, but Gensim accepts any iterable as input, including streamed ones. To learn more about memory-efficient input iterables, see our Data Streaming in Python: Generators, Iterators, Iterables tutorial.
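For example, a corpus streamed straight from disk, one document at a time, could look something like the sketch below (the class name and the documents.txt file are purely illustrative):

class StreamedTfidfCorpus:
    """Yield TF-IDF bag-of-words vectors one document at a time, without loading the file into RAM."""
    def __init__(self, path, dictionary, tfidf):
        self.path = path
        self.dictionary = dictionary
        self.tfidf = tfidf

    def __iter__(self):
        with open(self.path) as infile:
            for line in infile:  # one raw document per line
                yield self.tfidf[self.dictionary.doc2bow(preprocess_string(line))]

# streamed_corpus = StreamedTfidfCorpus('documents.txt', dictionary, tfidf)
# nmf = GensimNmf(corpus=streamed_corpus, num_topics=5, id2word=dictionary)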
Notable model parameters:

- kappa (float, optional): Gradient descent step size. A larger value makes the model train faster, but may lead to non-convergence if set too large.
- w_max_iter (int, optional): Maximum number of iterations to train W for each batch.
- w_stop_condition (float, optional): If the error difference gets smaller than this, training of W stops for the current batch.
- h_r_max_iter (int, optional): Maximum number of iterations to train h for each batch.
- h_r_stop_condition (float, optional): If the error difference gets smaller than this, training of h stops for the current batch.
Learn an NMF model with 5 topics:
In [7]:
%%time
nmf = GensimNmf(
corpus=train_corpus_tfidf,
num_topics=5,
id2word=dictionary,
chunksize=1000,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
kappa=1,
)
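The defaults for the remaining parameters listed above usually work fine. If training were too slow to converge or too unstable on your data, you could adjust them along these lines; the values below are purely illustrative, not tuned recommendations:

nmf_tuned = GensimNmf(
    corpus=train_corpus_tfidf,
    num_topics=5,
    id2word=dictionary,
    kappa=0.5,               # smaller gradient step: slower but more stable training
    w_max_iter=300,          # allow more W iterations per batch
    w_stop_condition=1e-4,
    h_r_max_iter=100,        # allow more h iterations per batch
    h_r_stop_condition=1e-3,
)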
In [8]:
nmf.show_topics()
Out[8]:
Topic coherence measures how often the most frequent tokens from each topic co-occur in the same document. Larger is better.
In [9]:
CoherenceModel(
model=nmf,
corpus=test_corpus_tfidf,
coherence='u_mass'
).get_coherence()
Out[9]:
In [10]:
print(testset[0]['data'])
print('=' * 100)
print("Topics: {}".format(nmf[test_corpus[0]]))
In [11]:
word = dictionary[0]
print("Word: {}".format(word))
print("Topics: {}".format(nmf.get_term_topics(word)))
Density is the fraction of non-zero elements in a matrix.
In [12]:
def density(matrix):
return (matrix > 0).mean()
Term-topic matrix of shape (words, topics).
In [13]:
print("Density: {}".format(density(nmf._W)))
Topic-document matrix for the last batch, of shape (topics, batch).
In [14]:
print("Density: {}".format(density(nmf._h)))
We'll benchmark three unsupervised models on the 20 Newsgroups dataset: Gensim NMF, Sklearn NMF and Gensim LDA.
20 Newsgroups also contains a label for each document, which will let us evaluate the trained models on a downstream classification task, using the unsupervised document topics as input features.
We'll track these metrics as we train and test NMF on the 20 Newsgroups corpus we created above:

- train_time: time to train a model
- mean_ram: mean RAM consumption during training
- max_ram: maximum RAM consumption during training
- coherence: coherence score (larger is better)
- l2_norm: L2 norm of v - Wh (smaller is better; not defined for LDA)
- f1: F1 score on the task of news topic classification (larger is better)
In [15]:
fixed_params = dict(
chunksize=1000,
num_topics=5,
id2word=dictionary,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
)
In [16]:
@contextmanager
def measure_ram(output, tick=5):
def _measure_ram(pid, output, tick=tick):
py = psutil.Process(pid)
with open(output, 'w') as outfile:
while True:
memory = py.memory_info().rss
outfile.write("{}\n".format(memory))
outfile.flush()
time.sleep(tick)
pid = os.getpid()
p = Process(target=_measure_ram, args=(pid, output, tick))
p.start()
yield
p.terminate()
def get_train_time_and_ram(func, name, tick=5):
memprof_filename = "{}.memprof".format(name)
start = time.time()
with measure_ram(memprof_filename, tick=tick):
result = func()
elapsed_time = pd.to_timedelta(time.time() - start, unit='s').round('s')
memprof_df = pd.read_csv(memprof_filename, header=None, squeeze=True)  # one RSS sample per line, no header
mean_ram = "{} MB".format(
int(memprof_df.mean() // 2 ** 20),
)
max_ram = "{} MB".format(int(memprof_df.max() // 2 ** 20))
return elapsed_time, mean_ram, max_ram, result
def get_f1(model, train_corpus, X_test, y_train, y_test):
if isinstance(model, SklearnNmf):
dense_train_corpus = matutils.corpus2dense(
train_corpus,
num_terms=model.components_.shape[1],
)
X_train = model.transform(dense_train_corpus.T)
else:
X_train = np.zeros((len(train_corpus), model.num_topics))
for bow_id, bow in enumerate(train_corpus):
for topic_id, word_count in model.get_document_topics(bow):
X_train[bow_id, topic_id] = word_count
log_reg = LogisticRegressionCV(multi_class='multinomial', cv=5)
log_reg.fit(X_train, y_train)
pred_labels = log_reg.predict(X_test)
return f1_score(y_test, pred_labels, average='micro')
def get_sklearn_topics(model, top_n=5):
    # Normalize the factor matrix into a (words, topics) matrix of topic weights.
    topic_probas = model.components_.T
    topic_probas = topic_probas / topic_probas.sum(axis=0)

    # Count zero weights per topic (the scaling below doesn't change the ordering).
    sparsity = np.zeros(topic_probas.shape[1])
    for row in topic_probas:
        sparsity += (row == 0)
    sparsity /= topic_probas.shape[1]

    # Keep only the `top_n` sparsest topics.
    topic_probas = topic_probas[:, sparsity.argsort()[::-1]][:, :top_n]

    # For each kept topic, pick its 10 highest-weighted tokens.
    token_indices = topic_probas.argsort(axis=0)[:-11:-1, :]
    topic_probas.sort(axis=0)
    topic_probas = topic_probas[:-11:-1, :]

    # Format the topics the same way Gensim's show_topics() does.
    topics = []
    for topic_idx in range(topic_probas.shape[1]):
        tokens = [
            model.id2word[token_idx]
            for token_idx
            in token_indices[:, topic_idx]
        ]
        topic = (
            '{}*"{}"'.format(round(proba, 3), token)
            for proba, token
            in zip(topic_probas[:, topic_idx], tokens)
        )
        topic = " + ".join(topic)
        topics.append((topic_idx, topic))
    return topics
def get_metrics(model, test_corpus, train_corpus=None, y_train=None, y_test=None, dictionary=None):
if isinstance(model, SklearnNmf):
model.get_topics = lambda: model.components_
model.show_topics = lambda top_n: get_sklearn_topics(model, top_n)
model.id2word = dictionary
W = model.get_topics().T
dense_test_corpus = matutils.corpus2dense(
test_corpus,
num_terms=W.shape[0],
)
if isinstance(model, SklearnNmf):
H = model.transform(dense_test_corpus.T).T
else:
H = np.zeros((model.num_topics, len(test_corpus)))
for bow_id, bow in enumerate(test_corpus):
for topic_id, word_count in model.get_document_topics(bow):
H[topic_id, bow_id] = word_count
l2_norm = None
if not isinstance(model, LdaModel):
pred_factors = W.dot(H)
l2_norm = np.linalg.norm(pred_factors - dense_test_corpus)
l2_norm = round(l2_norm, 4)
f1 = None
if train_corpus and y_train and y_test:
f1 = get_f1(model, train_corpus, H.T, y_train, y_test)
f1 = round(f1, 4)
model.normalize = True
coherence = CoherenceModel(
model=model,
corpus=test_corpus,
coherence='u_mass'
).get_coherence()
coherence = round(coherence, 4)
topics = model.show_topics(5)
model.normalize = False
return dict(
coherence=coherence,
l2_norm=l2_norm,
f1=f1,
topics=topics,
)
In [17]:
tm_metrics = pd.DataFrame(columns=['model', 'train_time', 'coherence', 'l2_norm', 'f1', 'topics'])
y_train = [doc['target'] for doc in trainset]
y_test = [doc['target'] for doc in testset]
# LDA metrics
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(
corpus=train_corpus,
**fixed_params,
),
'lda',
1,
)
row.update(get_metrics(
lda, test_corpus, train_corpus, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Sklearn NMF metrics
row = {}
row['model'] = 'sklearn_nmf'
train_dense_corpus_tfidf = matutils.corpus2dense(train_corpus_tfidf, len(dictionary)).T
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: SklearnNmf(n_components=5, random_state=42).fit(train_dense_corpus_tfidf),
'sklearn_nmf',
1,
)
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test, dictionary,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Gensim NMF metrics
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], gensim_nmf = get_train_time_and_ram(
lambda: GensimNmf(
normalize=False,
corpus=train_corpus_tfidf,
**fixed_params
),
'gensim_nmf',
0.5,
)
row.update(get_metrics(
gensim_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
tm_metrics.replace(np.nan, '-', inplace=True)
In [18]:
tm_metrics.drop('topics', axis=1)
Out[18]:
Let's inspect the 5 topics learned by each of the three models:
In [19]:
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
Subjectively, Gensim and Sklearn NMFs are on par with each other; LDA looks a bit worse.
This section shows how to train an NMF model on a large text corpus, the entire English Wikipedia: 2.6 billion words, in 23.1 million article sections across 5 million Wikipedia articles.
The data preprocessing takes a while, and we'll be comparing multiple models, so reserve about FIXME hours and some 20 GB of disk space to go through the following notebook cells in full. You'll need gensim>=3.7.1, numpy, tqdm, pandas, psutil, joblib and sklearn.
In [20]:
# Re-import modules from scratch, so that this Section doesn't rely on any previous cells.
import itertools
import json
import logging
import time
import os
from smart_open import smart_open
import psutil
import numpy as np
import scipy.sparse
from contextlib import contextmanager
from multiprocessing import Process
from tqdm import tqdm, tqdm_notebook
import joblib
import pandas as pd
from sklearn.decomposition.nmf import NMF as SklearnNmf
import gensim.downloader
from gensim import matutils
from gensim.corpora import MmCorpus, Dictionary
from gensim.models import LdaModel, LdaMulticore, CoherenceModel, TfidfModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.utils import simple_preprocess
tqdm.pandas()
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
We'll use the gensim.downloader to download a parsed Wikipedia dump (6.1 GB disk space):
In [21]:
data = gensim.downloader.load("wiki-english-20171001")
Print the titles and sections of the first Wikipedia article, as a little sanity check:
In [22]:
article = next(iter(data))
print("Article: %r\n" % article['title'])
for section_title, section_text in zip(article['section_titles'], article['section_texts']):
print("Section title: %r" % section_title)
print("Section text: %s…\n" % section_text[:100].replace('\n', ' ').strip())
Let's create a Python generator function that streams through the downloaded Wikipedia dump and preprocesses (tokenizes, lower-cases) each article:
In [23]:
def wikidump2tokens(articles):
"""Stream through the Wikipedia dump, yielding a list of tokens for each article."""
for article in articles:
article_section_texts = [
" ".join([title, text])
for title, text
in zip(article['section_titles'], article['section_texts'])
]
article_tokens = simple_preprocess(" ".join(article_section_texts))
yield article_tokens
Create a word-to-id mapping, in order to vectorize texts. This makes a full pass over the Wikipedia corpus and takes ~3.5 hours:
In [24]:
if os.path.exists('wiki.dict'):
# If we already stored the Dictionary in a previous run, simply load it, to save time.
dictionary = Dictionary.load('wiki.dict')
else:
dictionary = Dictionary(wikidump2tokens(data))
# Keep only the 30,000 most frequent vocabulary terms, after filtering away terms
# that are too frequent/too infrequent.
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=30000)
dictionary.save('wiki.dict')
When training NMF with a single pass over the input corpus ("online"), we simply vectorize each raw text straight from the input storage:
In [25]:
vector_stream = (dictionary.doc2bow(article) for article in wikidump2tokens(data))
For the purposes of this tutorial though, we'll serialize ("cache") the vectorized bag-of-words vectors to disk, to a wiki.mm file in MatrixMarket format. The reason is that we'll be re-using the vectorized articles multiple times, for the different models in our benchmarks, and also shuffling them, so it makes sense to amortize the vectorization time by persisting the resulting vectors to disk.
So, let's stream through the preprocessed sparse Wikipedia bag-of-words matrix while storing it to disk. This step takes about 3 hours and needs 38 GB of disk space:
In [26]:
class RandomSplitCorpus(MmCorpus):
"""
Use the fact that MmCorpus supports random indexing, and create a streamed
corpus in shuffled order, including a train/test split for evaluation.
"""
def __init__(self, random_seed=42, testset=False, testsize=1000, *args, **kwargs):
super().__init__(*args, **kwargs)
random_state = np.random.RandomState(random_seed)
self.indices = random_state.permutation(range(self.num_docs))
test_nnz = sum(len(self[doc_idx]) for doc_idx in self.indices[:testsize])
if testset:
self.indices = self.indices[:testsize]
self.num_docs = testsize
self.num_nnz = test_nnz
else:
self.indices = self.indices[testsize:]
self.num_docs -= testsize
self.num_nnz -= test_nnz
def __iter__(self):
for doc_id in self.indices:
yield self[doc_id]
In [27]:
if not os.path.exists('wiki.mm'):
MmCorpus.serialize('wiki.mm', vector_stream, progress_cnt=100000)
if not os.path.exists('wiki_tfidf.mm'):
    # Build the TF-IDF model from the Wikipedia dictionary created above
    # (the `tfidf` trained in the 20 Newsgroups section uses a different dictionary).
    tfidf = TfidfModel(dictionary=dictionary)
    MmCorpus.serialize('wiki_tfidf.mm', tfidf[MmCorpus('wiki.mm')], progress_cnt=100000)
In [28]:
# Load back the vectors as two lazily-streamed train/test iterables.
train_corpus = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki.mm',
)
test_corpus = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki.mm',
)
train_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki_tfidf.mm',
)
test_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki_tfidf.mm',
)
This is only needed to run the Sklearn NMF on Wikipedia, for comparison in the benchmarks below. Sklearn expects in-memory scipy sparse input, not on-the-fly vector streams. Needs additional ~2 GB of disk space.
Skip this step if you don't need the Sklearn's NMF benchmark, and only want to run Gensim's NMF.
In [29]:
if not os.path.exists('wiki_train_csr.npz'):
scipy.sparse.save_npz(
'wiki_train_csr.npz',
matutils.corpus2csc(train_corpus_tfidf, len(dictionary)).T,
)
We'll track these metrics as we train and test NMF on the Wikipedia corpus we created above:

- train_time: time to train a model
- mean_ram: mean RAM consumption during training
- max_ram: maximum RAM consumption during training
- coherence: coherence score (larger is better)
- l2_norm: L2 norm of v - Wh (smaller is better; not defined for LDA)

Define a dataframe in which we'll store the recorded metrics:
In [30]:
tm_metrics = pd.DataFrame(columns=[
'model', 'train_time', 'mean_ram', 'max_ram', 'coherence', 'l2_norm', 'topics',
])
Define common parameters, to be shared by all evaluated models:
In [31]:
params = dict(
chunksize=2000,
num_topics=50,
id2word=dictionary,
passes=1,
eval_every=10,
minimum_probability=0,
random_state=42,
)
In [32]:
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], nmf = get_train_time_and_ram(
lambda: GensimNmf(normalize=False, corpus=train_corpus_tfidf, **params),
'gensim_nmf',
1,
)
In [33]:
print(row)
nmf.save('gensim_nmf.model')
In [34]:
nmf = GensimNmf.load('gensim_nmf.model')
row.update(get_metrics(nmf, test_corpus_tfidf))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
In [35]:
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(corpus=train_corpus, **params),
'lda',
1,
)
In [36]:
print(row)
lda.save('lda.model')
In [37]:
lda = LdaModel.load('lda.model')
row.update(get_metrics(lda, test_corpus))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Careful! Sklearn loads the entire input Wikipedia matrix into RAM. Even though the matrix is sparse, you'll need FIXME GB of free RAM to run the cell below.
In [38]:
row = {}
row['model'] = 'sklearn_nmf'
sklearn_nmf = SklearnNmf(n_components=50, tol=1e-2, random_state=42)
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: sklearn_nmf.fit(scipy.sparse.load_npz('wiki_train_csr.npz')),
'sklearn_nmf',
10,
)
print(row)
joblib.dump(sklearn_nmf, 'sklearn_nmf.joblib')
Out[38]:
In [39]:
sklearn_nmf = joblib.load('sklearn_nmf.joblib')
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, dictionary=dictionary,
))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
In [40]:
tm_metrics.replace(np.nan, '-', inplace=True)
tm_metrics.drop(['topics', 'f1'], axis=1)
Out[40]:
Gensim's online NMF outperforms Sklearn's NMF in terms of speed and RAM consumption:
2x faster.
Uses ~20x less memory.
About 8GB of Sklearn's RAM comes from the in-memory input matrices, which, in contrast to Gensim NMF, cannot be streamed iteratively. But even if we forget about the huge input size, Sklearn NMF uses about 2-8 GB of RAM – significantly more than Gensim NMF or LDA.
L2 norm and coherence are a bit worse.
Compared to Gensim's LDA, Gensim NMF also gives superior results:
In [41]:
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
It seems all three models successfully learned useful topics from the Wikipedia corpus.
The NMF algorithm in Gensim is optimized for extremely large (sparse) text corpora, but it will also work on vectors from other domains!
Let's compare our model to other factorization algorithms on dense image vectors and check out the results.
To do that, we'll patch sklearn's Faces dataset decompositions example.
In [42]:
import logging
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
from sklearn.model_selection import ParameterGrid
import gensim.downloader
from gensim import matutils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, LdaMulticore
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
In [43]:
from sklearn.base import BaseEstimator, TransformerMixin
import scipy.sparse as sparse
class NmfWrapper(BaseEstimator, TransformerMixin):
def __init__(self, bow_matrix, **kwargs):
self.corpus = sparse.csc.csc_matrix(bow_matrix)
self.nmf = GensimNmf(**kwargs)
def fit(self, X):
    self.nmf.update(self.corpus)
    return self  # return the fitted estimator, following the sklearn convention
@property
def components_(self):
return self.nmf.get_topics()
Adapted from the excellent Scikit-learn tutorial (BSD license):
Turn off the logger due to the large number of INFO messages emitted during training.
In [44]:
gensim.models.nmf.logger.propagate = False
In [45]:
"""
============================
Faces dataset decompositions
============================
This example applies to :ref:`olivetti_faces` different unsupervised
matrix decomposition (dimension reduction) methods from the module
:py:mod:`sklearn.decomposition` (see the documentation chapter
:ref:`decompositions`) .
"""
print(__doc__)
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
rng = RandomState(0)
# #############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=n_col, n_row=n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
# #############################################################################
# List of the different estimators, whether to center and transpose the
# problem, and whether the transformer uses the clustering API.
estimators = [
('Eigenfaces - PCA using randomized SVD',
decomposition.PCA(n_components=n_components, svd_solver='randomized',
whiten=True),
True),
('Non-negative components - NMF (Sklearn)',
decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3),
False),
('Non-negative components - NMF (Gensim)',
NmfWrapper(
bow_matrix=faces.T,
chunksize=3,
eval_every=400,
passes=2,
id2word={idx: idx for idx in range(faces.shape[1])},
num_topics=n_components,
minimum_probability=0,
random_state=42,
),
False),
('Independent components - FastICA',
decomposition.FastICA(n_components=n_components, whiten=True),
True),
('Sparse comp. - MiniBatchSparsePCA',
decomposition.MiniBatchSparsePCA(n_components=n_components, alpha=0.8,
n_iter=100, batch_size=3,
random_state=rng),
True),
('MiniBatchDictionaryLearning',
decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
n_iter=50, batch_size=3,
random_state=rng),
True),
('Cluster centers - MiniBatchKMeans',
MiniBatchKMeans(n_clusters=n_components, tol=1e-3, batch_size=20,
max_iter=50, random_state=rng),
True),
('Factor Analysis components - FA',
decomposition.FactorAnalysis(n_components=n_components, max_iter=2),
True),
]
# #############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces_centered[:n_components])
# #############################################################################
# Do the estimation and plot it
for name, estimator, center in estimators:
print("Extracting the top %d %s..." % (n_components, name))
t0 = time.time()
data = faces
if center:
data = faces_centered
estimator.fit(data)
train_time = (time.time() - t0)
print("done in %0.3fs" % train_time)
if hasattr(estimator, 'cluster_centers_'):
components_ = estimator.cluster_centers_
else:
components_ = estimator.components_
# Plot an image representing the pixelwise variance provided by the
# estimator e.g its noise_variance_ attribute. The Eigenfaces estimator,
# via the PCA decomposition, also provides a scalar noise_variance_
# (the mean of pixelwise variance) that cannot be displayed as an image
# so we skip it.
if (hasattr(estimator, 'noise_variance_') and
estimator.noise_variance_.ndim > 0): # Skip the Eigenfaces case
plot_gallery("Pixelwise variance",
estimator.noise_variance_.reshape(1, -1), n_col=1,
n_row=1)
plot_gallery('%s - Train time %.1fs' % (name, train_time),
components_[:n_components])
plt.show()
As you can see, Gensim's NMF implementation is slower than Sklearn's on dense vectors, while achieving comparable quality.
Gensim NMF is an extremely fast and memory-optimized model. Use it to obtain interpretable topics, as an alternative to SVD / LDA.
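As a quick recap, a minimal end-to-end usage pattern looks roughly like this (the three toy documents are made up for illustration):

from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.models.nmf import Nmf
from gensim.parsing.preprocessing import preprocess_string

toy_docs = [
    "Graphics cards render 3D images quickly.",
    "The motorcycle engine roared down the highway.",
    "NASA launched a new probe into deep space.",
]
toy_tokens = [preprocess_string(doc) for doc in toy_docs]

toy_dictionary = Dictionary(toy_tokens)
toy_tfidf = TfidfModel(dictionary=toy_dictionary)
toy_corpus = [toy_tfidf[toy_dictionary.doc2bow(tokens)] for tokens in toy_tokens]

toy_nmf = Nmf(corpus=toy_corpus, num_topics=2, id2word=toy_dictionary, random_state=42)
print(toy_nmf.show_topics())
print(toy_nmf[toy_corpus[0]])  # topic distribution for the first document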
The NMF implementation in Gensim was created by Timofey Yefimov as a part of his RARE Technologies Student Incubator graduation project.
In [ ]: