# Topic Models

wangchengjun@nju.edu.cn

On the eve of China's 2014 college entrance examination (gaokao), Baidu "drew on a massive corpus of model essays and on search data, and used a probabilistic topic model to predict the likely directions of the 2014 gaokao essay prompts." As shown in the figure above, six topics were identified: time, life, nation, education, mind, and development. Each topic in turn contains a set of concrete keywords; for example, the topic "life" corresponds to: ordinary, freedom, beauty, dream, striving, youth, happiness, and loneliness.

# latent Dirichlet allocation (LDA)

The simplest topic model (on which all others are based) is latent Dirichlet allocation (LDA).

• LDA is a generative model that infers unobserved meanings from a large set of observations.

## Reference

• Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Mach Learn Res. 2003; 3: 993–1022.
• Blei DM, Lafferty JD. Correction: a correlated topic model of science. Ann Appl Stat. 2007; 1: 634.
• Blei DM. Probabilistic topic models. Commun ACM. 2012; 55: 55–65.
• Chandra Y, Jiang LC, Wang C-J (2016) Mining Social Entrepreneurship Strategies Using Topic Modeling. PLoS ONE 11(3): e0151342. doi:10.1371/journal.pone.0151342

• Topic models assume that each document contains a mixture of topics
• Topics are considered latent/unobserved variables that stand between the documents and terms

It is impossible to directly assess the relationships between topics and documents and between topics and terms.

• What can be directly observed is the distribution of terms over documents, which is known as the document term matrix (DTM).

Topic models algorithmically identify the best set of latent variables (topics) that can best explain the observed distribution of terms in the documents.

The DTM is further decomposed into two matrices:

• a term-topic matrix (TTM)
• a topic-document matrix (TDM)

Each document can be assigned to a primary topic that demonstrates the highest topic-document probability and can then be linked to other topics with declining probabilities.
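To make the decomposition concrete, here is a toy numerical sketch (all numbers are made up for illustration): the columns of the TTM are distributions over terms, the columns of the TDM are distributions over topics, and their product reconstructs the expected distribution of terms in each document.

```python
import numpy as np

# Toy example: 5 terms, 2 topics, 3 documents (hypothetical numbers).
ttm = np.array([[0.5, 0.0],
                [0.3, 0.1],
                [0.2, 0.1],
                [0.0, 0.4],
                [0.0, 0.4]])       # term-topic matrix; columns sum to 1
tdm = np.array([[0.9, 0.5, 0.1],
                [0.1, 0.5, 0.9]])  # topic-document matrix; columns sum to 1

# The product approximates the (normalized) document-term structure:
dtm_expected = ttm.dot(tdm)        # shape (terms, documents)
print(dtm_expected.shape)          # (5, 3)
print(dtm_expected.sum(axis=0))    # each document's term distribution sums to 1
```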

Assume there are K topics in D documents, and denote the topics by $\phi_{1:K}$.

Each topic $\phi_k$ is a distribution over the fixed vocabulary of the given documents.

The topic proportion in the document is denoted as $\theta_d$.

• e.g., the kth topic's proportion in document d is $\theta_{d, k}$.

Let $w_{d,n}$ denote the nth term in document d.

Further, topic models assign topics to a document and its terms.

• For example, the topic assigned to document d is denoted as $z_d$,
• and the topic assigned to the nth term in document d is denoted as $z_{d,n}$.

According to Blei et al., the joint distribution of $\phi_{1:K}$, $\theta_{1:D}$, $z_{1:D}$, and $w_{1:D}$ given by the generative process of LDA can be expressed as:

$p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{1:D}) =$

$\prod_{i=1}^{K} p(\phi_i) \prod_{d=1}^{D} p(\theta_d) \left(\prod_{n=1}^{N} p(z_{d,n} \mid \theta_d) \times p(w_{d,n} \mid \phi_{1:K}, z_{d,n})\right)$

Note that $\phi_{1:K}$, $\theta_{1:D}$, and $z_{1:D}$ are latent, unobservable variables. Thus, the computational challenge of LDA is to compute their conditional distribution given the observed words $w_{d,n}$ in the documents.

Accordingly, the posterior distribution of LDA can be expressed as:

$p(\phi_{1:K}, \theta_{1:D}, z_{1:D} \mid w_{1:D}) = \frac{p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{1:D})}{p(w_{1:D})}$

Because the number of possible topic structures is exponentially large, computing the posterior of LDA exactly is intractable. Topic modeling therefore relies on efficient algorithms that approximate this posterior.

There are two categories of approximation algorithms:

• sampling-based algorithms
• variational algorithms

Using the Gibbs sampling method, we can build a Markov chain over the sequence of random variables (see Eq 1). Sampling from the chain's limiting distribution approximates the posterior.
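A minimal collapsed Gibbs sampler makes this concrete. The sketch below is for illustration only: the toy corpus and all names here are hypothetical, and gensim's variational implementation (used in the rest of this notebook) is what we actually rely on.

```python
import numpy as np

def gibbs_lda(docs, K, V, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    # z[d][n] is the topic assigned to the n-th token of document d
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    ndk = np.zeros((len(docs), K))  # document-topic counts
    nkw = np.zeros((K, V))          # topic-term counts
    nk = np.zeros(K)                # tokens per topic
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[d][n]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                # remove the token's current assignment from the counts
                k = z[d][n]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # resample from the full conditional p(z_{d,n} = k | rest)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][n] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z, ndk, nkw

# toy corpus over a vocabulary of 6 term ids, with two clear "topics"
docs = [[0, 1, 2, 0, 1], [3, 4, 5, 3, 4], [0, 2, 1, 2], [5, 3, 4, 5]]
z, ndk, nkw = gibbs_lda(docs, K=2, V=6)
print(ndk)  # per-document topic counts; each row sums to the document length
```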

# Gensim

Unfortunately, scikit-learn did not support latent Dirichlet allocation at the time of writing (recent versions provide `sklearn.decomposition.LatentDirichletAllocation`).

Therefore, we are going to use the gensim package in Python.

Gensim was developed by Radim Řehůřek, a machine learning researcher and consultant in the Czech Republic. We must start by installing it, which we can do by running the following command:

    pip install gensim



In [1]:

%matplotlib inline
from __future__ import print_function
from wordcloud import WordCloud
from gensim import corpora, models, similarities,  matutils
import matplotlib.pyplot as plt
import numpy as np



http://www.cs.princeton.edu/~blei/lda-c/ap.tgz

Unzip the data and put them into /Users/chengjun/bigdata/ap/



In [2]:

corpus = corpora.BleiCorpus('/Users/chengjun/bigdata/ap/ap.dat', '/Users/chengjun/bigdata/ap/vocab.txt')




In [53]:

' '.join(dir(corpus))




Out[53]:

'__class__ __delattr__ __dict__ __doc__ __format__ __getattribute__ __getitem__ __hash__ __init__ __iter__ __len__ __module__ __new__ __reduce__ __reduce_ex__ __repr__ __setattr__ __sizeof__ __str__ __subclasshook__ __weakref__ _smart_save docbyoffset fname id2word index length line2doc load save save_corpus serialize'




In [112]:

corpus.id2word.items()[:3]




Out[112]:

[(0, u'i'), (1, u'new'), (2, u'percent')]



# Build the topic model



In [3]:

NUM_TOPICS = 100




In [4]:

model = models.ldamodel.LdaModel(
    corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=None)




/Users/chengjun/anaconda/lib/python2.7/site-packages/gensim/models/ldamodel.py:379: VisibleDeprecationWarning: non integer (and non boolean) array-likes will not be accepted as indices in the future
/Users/chengjun/anaconda/lib/python2.7/site-packages/gensim/models/ldamodel.py:404: VisibleDeprecationWarning: non integer (and non boolean) array-likes will not be accepted as indices in the future
sstats[:, ids] += numpy.outer(expElogthetad.T, cts / phinorm)
/Users/chengjun/anaconda/lib/python2.7/site-packages/gensim/models/ldamodel.py:659: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
score += numpy.sum(cnt * logsumexp(Elogthetad + Elogbeta[:, id]) for id, cnt in doc)




In [54]:

' '.join(dir(model))




Out[54]:

'__class__ __delattr__ __dict__ __doc__ __format__ __getattribute__ __getitem__ __hash__ __init__ __module__ __new__ __reduce__ __reduce_ex__ __repr__ __setattr__ __sizeof__ __str__ __subclasshook__ __weakref__ _apply _smart_save alpha bound chunksize clear decay dispatcher distributed do_estep do_mstep eta eval_every expElogbeta gamma_threshold id2word inference iterations load log_perplexity num_terms num_topics num_updates numworkers offset optimize_alpha passes print_topic print_topics save show_topic show_topics state sync_state top_topics update update_alpha update_every'



# We can see the list of topics a document refers to

by using the `model[doc]` syntax:



In [5]:

document_topics = [model[c] for c in corpus]




In [6]:

# how many topics does one document cover?
document_topics[2]




Out[6]:

[(4, 0.052137616168702404),
(15, 0.12876321417584932),
(23, 0.04379009859453694),
(33, 0.14116424594514373),
(40, 0.010625828901655247),
(44, 0.056086598893535212),
(46, 0.027328228413054385),
(48, 0.10533999426518473),
(50, 0.071357565067439274),
(60, 0.13825487429448413),
(63, 0.012913111922677734),
(66, 0.02058986239907246),
(71, 0.013526739269049862),
(78, 0.030533118118060393),
(80, 0.024940171016118957),
(82, 0.042514905130635307),
(89, 0.01240829189515686),
(96, 0.058391317548739517)]




In [7]:

# The first topic
# format: weight, term
model.show_topic(0, 10)




Out[7]:

[(0.029097259282066554, u'creek'),
(0.026251396149414099, u'evacuated'),
(0.018583586332033605, u'water'),
(0.012544795715575586, u'flooding'),
(0.010683809061860247, u'homes'),
(0.010339512883379582, u'fire'),
(0.0091349290520717219, u'million'),
(0.0085188831799051287, u'people'),
(0.0083042708108851237, u'chris')]




In [8]:

# The 100th topic (index 99)
# format: weight, term
model.show_topic(99, 10)




Out[8]:

[(0.0186781684377769, u'horton'),
(0.0068139538571449811, u'procedure'),
(0.0059927410778672224, u'fournier'),
(0.0055935660795590871, u'willie'),
(0.0052769957060853937, u'states'),
(0.0052531257304195313, u'furlough'),
(0.004750767493231554, u'maryland'),
(0.0040773817728890502, u'i'),
(0.0039145679465123285, u'new')]




In [213]:

words = model.show_topic(0, 5)
words




Out[213]:

[(0.029097259282066554, u'creek'),
(0.026251396149414099, u'evacuated'),
(0.018583586332033605, u'water'),
(0.012544795715575586, u'flooding'),
(0.010683809061860247, u'homes')]




In [215]:

model.show_topics(4)




Out[215]:

[u'0.017*percent + 0.014*cat + 0.011*bendjedid + 0.008*estate + 0.007*orbiter + 0.006*study + 0.005*filipino + 0.005*economy + 0.005*oil + 0.005*debts',
u'0.019*heroes + 0.015*sources + 0.014*mock + 0.013*cocaine + 0.012*guida + 0.008*monroe + 0.007*paul + 0.007*federal + 0.007*calif + 0.006*fill',
u'0.029*creek + 0.026*evacuated + 0.019*water + 0.013*flooding + 0.011*homes + 0.010*fire + 0.009*million + 0.009*roads + 0.009*people + 0.008*chris',
u'0.027*soviet + 0.008*union + 0.006*state + 0.005*government + 0.005*people + 0.005*national + 0.005*committee + 0.005*afghan + 0.005*heart + 0.004*president']




In [214]:

for f, w in words[:10]:
    print(f, w)




0.0290972592821 creek
0.0262513961494 evacuated
0.018583586332 water
0.0125447957156 flooding
0.0106838090619 homes




In [39]:

# write out topics, 10 terms each, with weights
for ti in range(model.num_topics):
    words = model.show_topic(ti, 10)
    tf = sum(f for f, w in words)
    with open('/Users/chengjun/github/cjc2016/data/topics_term_weight.txt', 'a') as output:
        for f, w in words:
            line = str(ti) + '\t' + w + '\t' + str(f/tf)
            output.write(line + '\n')




In [40]:

# We first identify the most discussed topic, i.e., the one with the
# highest total weight
topics = matutils.corpus2dense(model[corpus], num_terms=model.num_topics)
weight = topics.sum(1)
max_topic = weight.argmax()




In [216]:

# Get the top 64 words for this topic
# Without the argument, show_topic would return only 10 words
words = model.show_topic(max_topic, 64)
words = np.array(words).T
words_freq=[float(i)*10000000 for i in words[0]]
words = zip(words[1], words_freq)




In [219]:

wordcloud = WordCloud().generate_from_frequencies(words)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()







In [104]:

num_topics_used = [len(model[doc]) for doc in corpus]

fig,ax = plt.subplots()
ax.hist(num_topics_used, np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
fig.tight_layout()
#fig.savefig('Figure_04_01.png')






# We can see that about 150 documents have 5 topics,

• while the majority deal with around 10 to 12 of them.
• No document talks about more than 20 topics.


In [109]:

# Now, repeat the same exercise using alpha=1.0
# You can edit the constant below to play around with this parameter
ALPHA = 1.0
model1 = models.ldamodel.LdaModel(
    corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=ALPHA)

num_topics_used1 = [len(model1[doc]) for doc in corpus]




In [108]:

fig,ax = plt.subplots()
ax.hist([num_topics_used, num_topics_used1], np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
# The coordinates below were fit by trial and error to look good
plt.text(9, 223, r'default alpha')
plt.text(26, 156, 'alpha=1.0')
fig.tight_layout()






# Read and clean the data



In [113]:

with open('/Users/chengjun/bigdata/ap/ap.txt', 'r') as f:
    dat = f.readlines()




In [130]:

dat[:6]




Out[130]:

['<DOC>\n',
'<DOCNO> AP881218-0003 </DOCNO>\n',
'<TEXT>\n',
" A 16-year-old student at a private Baptist school who allegedly killed one teacher and wounded another before firing into a filled classroom apparently just snapped,'' the school's pastor said. I don't know how it could have happened,'' said George Sweet, pastor of Atlantic Shores Baptist Church. This is a good, Christian school. We pride ourselves on discipline. Our kids are good kids.'' The Atlantic Shores Christian School sophomore was arrested and charged with first-degree murder, attempted murder, malicious assault and related felony charges for the Friday morning shooting. Police would not release the boy's name because he is a juvenile, but neighbors and relatives identified him as Nicholas Elliott. Police said the student was tackled by a teacher and other students when his semiautomatic pistol jammed as he fired on the classroom as the students cowered on the floor crying Jesus save us! God save us!'' Friends and family said the boy apparently was troubled by his grandmother's death and the divorce of his parents and had been tormented by classmates. Nicholas' grandfather, Clarence Elliott Sr., said Saturday that the boy's parents separated about four years ago and his maternal grandmother, Channey Williams, died last year after a long illness. The grandfather also said his grandson was fascinated with guns. The boy was always talking about guns,'' he said. He knew a lot about them. He knew all the names of them _ none of those little guns like a .32 or a .22 or nothing like that. He liked the big ones.'' The slain teacher was identified as Karen H. Farley, 40. The wounded teacher, 37-year-old Sam Marino, was in serious condition Saturday with gunshot wounds in the shoulder. Police said the boy also shot at a third teacher, Susan Allen, 31, as she fled from the room where Marino was shot. He then shot Marino again before running to a third classroom where a Bible class was meeting. 
The youngster shot the glass out of a locked door before opening fire, police spokesman Lewis Thurston said. When the youth's pistol jammed, he was tackled by teacher Maurice Matteson, 24, and other students, Thurston said. Once you see what went on in there, it's a miracle that we didn't have more people killed,'' Police Chief Charles R. Wall said. Police didn't have a motive, Detective Tom Zucaro said, but believe the boy's primary target was not a teacher but a classmate. Officers found what appeared to be three Molotov cocktails in the boy's locker and confiscated the gun and several spent shell casings. Fourteen rounds were fired before the gun jammed, Thurston said. The gun, which the boy carried to school in his knapsack, was purchased by an adult at the youngster's request, Thurston said, adding that authorities have interviewed the adult, whose name is being withheld pending an investigation by the federal Bureau of Alcohol, Tobacco and Firearms. The shootings occurred in a complex of four portable classrooms for junior and senior high school students outside the main building of the 4-year-old school. The school has 500 students in kindergarten through 12th grade. Police said they were trying to reconstruct the sequence of events and had not resolved who was shot first. The body of Ms. Farley was found about an hour after the shootings behind a classroom door.\n",
' </TEXT>\n',
'</DOC>\n']




In [194]:

dat[4].strip()[0]




Out[194]:

'<'




In [195]:

docs = []
for i in dat[:100]:
    if i.strip()[0] != '<':
        docs.append(i)




In [196]:

def clean_doc(doc):
    doc = doc.replace('.', '').replace(',', '')
    doc = doc.replace('"', '')
    doc = doc.replace('_', '').replace("'", '')
    doc = doc.replace('!', '')
    return doc
docs = [clean_doc(doc) for doc in docs]
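The same cleaning can be written more compactly with a regular expression; this sketch (function name is hypothetical) assumes the punctuation set above is all we want to strip:

```python
import re

def clean_doc_re(doc):
    # remove . , " ' _ ! in a single pass
    return re.sub(r"[.,\"'_!]", '', doc)

print(clean_doc_re("Don't panic, it's fine!"))  # Dont panic its fine
```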




In [197]:

texts = [doc.lower().split() for doc in docs]




In [198]:

from nltk.corpus import stopwords
stop = stopwords.words('english')




In [222]:

' '.join(stop)




Out[222]:

u'i me my myself we our ours ourselves you your yours yourself yourselves he him his himself she her hers herself it its itself they them their theirs themselves what which who whom this that these those am is are was were be been being have has had having do does did doing a an the and but if or because as until while of at by for with about against between into through during before after above below to from up down in out on off over under again further then once here there when where why how all any both each few more most other some such no nor not only own same so than too very s t can will just don should now d ll m o re ve y ain aren couldn didn doesn hadn hasn haven isn ma mightn mustn needn shan shouldn wasn weren won wouldn'




In [223]:

stop.append('said')




In [199]:

from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
    for token in text:
        frequency[token] += 1

texts = [[token for token in text if frequency[token] > 1 and token not in stop]
         for text in texts]




In [200]:

docs[8]




Out[200]:

' Here is a summary of developments in forest and brush fires in Western states:\n'




In [203]:

' '.join(texts[9])




Out[203]:

'stirbois 2 man extreme-right national front party le pen died saturday automobile police said 43 stirbois political meeting friday city dreux miles west paris traveling toward capital car ran police said stirbois national front member party since born paris law headed business stirbois several extreme-right political joining national front 1977 percent vote local elections west paris highest vote percentage candidate year half later deputy dreux stirbois deputy national 1986 lost elections last national front founded le pen frances government death priority first years presidential elections le pen percent vote national front could'




In [204]:

dictionary = corpora.Dictionary(texts)
lda_corpus = [dictionary.doc2bow(text) for text in texts]
# The function doc2bow() simply counts the number of occurrences of each distinct word,
# converts the word to its integer word id, and returns the result as a sparse vector.
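What doc2bow does can be sketched in plain Python (a hypothetical mini-example with a made-up vocabulary, not the gensim API itself):

```python
from collections import Counter

# toy vocabulary mapping, analogous to dictionary.token2id
token2id = {'topic': 0, 'model': 1, 'data': 2}

def to_bow(tokens):
    # count occurrences of each distinct token and map it to its id,
    # returning the sparse (id, count) pairs doc2bow would produce
    counts = Counter(token2id[t] for t in tokens)
    return sorted(counts.items())

print(to_bow(['topic', 'model', 'topic']))  # [(0, 2), (1, 1)]
```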




In [205]:

lda_model = models.ldamodel.LdaModel(
    lda_corpus, num_topics=NUM_TOPICS, id2word=dictionary, alpha=None)




In [206]:

import pyLDAvis.gensim

ap_data = pyLDAvis.gensim.prepare(lda_model, lda_corpus, dictionary)




/Users/chengjun/anaconda/lib/python2.7/site-packages/skbio/stats/ordination/_principal_coordinate_analysis.py:102: RuntimeWarning: The result contains negative eigenvalues. Please compare their magnitude with the magnitude of some of the largest positive eigenvalues. If the negative ones are smaller, it's probably safe to ignore them, but if they are large in magnitude, the results won't be useful. See the Notes section for more details. The smallest eigenvalue is -0.147573272219 and the largest is 0.684022396432.
RuntimeWarning




In [221]:

pyLDAvis.enable_notebook()
pyLDAvis.display(ap_data)




Out[221]:




In [220]:

pyLDAvis.save_html(ap_data, '/Users/chengjun/github/cjc2016/vis/ap_ldavis.html')



# Further Reading

Willi Richert, Luis Pedro Coelho, 2013, Building Machine Learning Systems with Python. Chapter 4. Packt Publishing.

Chandra Y, Jiang LC, Wang C-J (2016) Mining Social Entrepreneurship Strategies Using Topic Modeling. PLoS ONE 11(3): e0151342. doi:10.1371/journal.pone.0151342