http://v.youku.com/v_show/id_XMzA3OTA5MjUy.html
This talk by Jean-Baptiste Michel and Erez Lieberman Aiden is phenomenal. The associated article is also well worth checking out: Michel, J.-B., et al. (2011). Quantitative Analysis of Culture Using Millions of Digitized Books. Science, 331, 176–182.
Try the Google Books Ngram Viewer yourself: https://books.google.com/ngrams/
Since the unique words in each document represent only a small subset of all the words in the bag-of-words vocabulary, the feature vectors will consist mostly of zeros, which is why we call them sparse.
"Bag of words" (词袋): in information retrieval, the bag-of-words model assumes that a text's word order, grammar, and syntax can be ignored, treating the text simply as a collection (a combination) of words. Each word is assumed to occur independently of every other word, as if the author, at any position in the text, had chosen the word without being influenced by the preceding sentences. Although this assumption simplifies natural language, it makes the text easy to model.
In some situations, however, the assumption is unreasonable. In personalized news recommendation, for example, bag of words runs into trouble: suppose user A is interested in the phrase "南京醉酒驾车事故" (the Nanjing drunk-driving accident). Because bag of words discards order and syntax, the system concludes that the user is interested in "南京" (Nanjing), "醉酒" (drunk), "驾车" (driving), and "事故" (accident) separately, and may therefore recommend news about, say, a Nanjing bus ("公交车") accident, which is clearly not what the user wants.
One remedy is to extract whole phrases (for example with methods such as SCPCD), or to use higher-order (order ≥ 2) statistical language models such as bigrams and trigrams, which preserve word order; this amounts to a bag of bigrams or a bag of trigrams (see the sketch below) and alleviates the problem to some extent. In short, whether the bag-of-words model is appropriate must be judged case by case: wherever word order, grammar, and syntax cannot be ignored, bag of words should not be used.
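To make the bag-of-bigrams idea concrete, here is a minimal sketch (not from the original text) that counts adjacent word pairs in a token list, so that a pair such as 醉酒/驾车 survives as one unit rather than as two unrelated words:

```python
def bag_of_ngrams(tokens, n=2):
    """Count the n-grams (default: bigrams) occurring in a list of tokens."""
    counts = {}
    for i in range(len(tokens) - n + 1):
        gram = ' '.join(tokens[i:i + n])
        counts[gram] = counts.get(gram, 0) + 1
    return counts

print(bag_of_ngrams([u'南京', u'醉酒', u'驾车', u'事故']))
# three bigrams, each with count 1: 南京 醉酒 / 醉酒 驾车 / 驾车 事故
```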
A document-term matrix or term-document matrix is a mathematical matrix that describes the frequency of terms that occur in a collection of documents.
In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms.
There are various schemes for determining the value that each entry in the matrix should take; one such scheme is tf-idf. Document-term matrices are widely used in natural language processing.
D1 = "I like databases"
D2 = "I hate databases"
|    | I | like | hate | databases |
|----|---|------|------|-----------|
| D1 | 1 | 1 | 0 | 1 |
| D2 | 1 | 0 | 1 | 1 |
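As a quick check, the same document-term matrix can be built with scikit-learn's CountVectorizer (an added sketch, not part of the original notebook; the default token_pattern drops one-letter tokens such as "I", so it is relaxed here, and the columns come out in alphabetical order):

```python
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

docs_example = ['I like databases', 'I hate databases']
# keep one-letter words too, which the default token_pattern would discard
vec = CountVectorizer(token_pattern=r'(?u)\b\w+\b')
dtm = vec.fit_transform(docs_example)
print(pd.DataFrame(dtm.toarray(), columns=vec.get_feature_names(), index=['D1', 'D2']))
```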
In [1]:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
bag = count.fit_transform(docs)
In [6]:
' '.join(dir(count))
Out[6]:
In [8]:
count.get_feature_names()
Out[8]:
In [2]:
print(count.vocabulary_)
In [5]:
type(bag)
Out[5]:
In [3]:
print(bag.toarray())
In [12]:
import pandas as pd
pd.DataFrame(bag.toarray(), columns = count.get_feature_names())
Out[12]:
The sequence of items in the bag-of-words model that we just created is also called the 1-gram or unigram model.
The choice of the number n in the n-gram model depends on the particular application.
The CountVectorizer class in scikit-learn allows us to use different n-gram models via its ngram_range parameter.
While a 1-gram representation is used by default, we could switch to a 2-gram representation by initializing a new CountVectorizer instance with ngram_range=(2,2), as in the sketch below.
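A minimal sketch of the 2-gram variant, reusing the docs array defined above (this cell is an added illustration, not part of the original notebook):

```python
from sklearn.feature_extraction.text import CountVectorizer

# bigram bag-of-words: every feature is a pair of adjacent words
count_2gram = CountVectorizer(ngram_range=(2, 2))
bag_2gram = count_2gram.fit_transform(docs)
print(count_2gram.get_feature_names())   # e.g. 'the sun', 'sun is', 'is shining', ...
print(bag_2gram.toarray())
```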
Scikit-learn implements yet another transformer, the TfidfTransformer, that takes the raw term frequencies from CountVectorizer as input and transforms them into tf-idfs:
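With the settings used below (smooth_idf=True, norm='l2'), scikit-learn computes, for term $t$ in document $d$ over $n$ documents,

$$\text{idf}(t) = \ln\frac{1 + n}{1 + \text{df}(t)} + 1, \qquad \text{tf-idf}(t, d) = \text{tf}(t, d) \times \text{idf}(t),$$

and then divides each document vector by its L2 norm. The manual calculation for the word "is" a few cells below follows exactly this recipe.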
In [15]:
np.set_printoptions(precision=2)
In [16]:
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
In [17]:
bag = tfidf.fit_transform(count.fit_transform(docs))
pd.DataFrame(bag.toarray(), columns = count.get_feature_names())
Out[17]:
In [18]:
# tf-idf of a single term: "is" occurs twice in the third document (tf = 2) and appears in all 3 documents (df = 3)
tf_is = 2
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
In [19]:
# raw (unnormalized) tf-idf values of the terms in the last document
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
Out[19]:
In [20]:
# tf-idf values after L2 normalization
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
Out[20]:
In [1]:
with open('/Users/chengjun/github/cjc2016/data/gov_reports1954-2016.txt', 'r') as f:
reports = f.readlines()
In [2]:
len(reports)
Out[2]:
In [3]:
print reports[32][:1000]
In [4]:
%matplotlib inline
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import sys
import numpy as np
from collections import defaultdict
import statsmodels.api as sm
from wordcloud import WordCloud
import jieba
import matplotlib
import gensim
from gensim import corpora, models, similarities
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
matplotlib.rcParams['font.sans-serif'] = ['Microsoft YaHei'] # set the default font so Chinese labels render correctly
matplotlib.rc("savefig", dpi=400)
In [6]:
import jieba
seg_list = jieba.cut("我来到北京清华大学", cut_all=True)
print("Full Mode: " + "/ ".join(seg_list))  # full mode
seg_list = jieba.cut("我来到北京清华大学", cut_all=False)
print("Default Mode: " + "/ ".join(seg_list))  # accurate mode
seg_list = jieba.cut("他来到了网易杭研大厦")  # accurate mode is the default
print(", ".join(seg_list))
seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所,后在日本京都大学深造")  # search-engine mode
print(", ".join(seg_list))
In [7]:
filename = '/Users/chengjun/github/cjc2016/data/stopwords.txt'
stopwords = {}
# one stopword per line; decode so the keys match the unicode tokens produced by jieba
with open(filename, 'r') as f:
    for line in f:
        word = line.rstrip().decode('utf-8')
        if word:
            stopwords[word] = 1
In [8]:
adding_stopwords = [u'我们', u'要', u'地', u'有', u'这', u'人',
u'发展',u'建设',u'加强',u'继续',u'对',u'等',u'推进',u'工作',u'增加']
for s in adding_stopwords: stopwords[s]=10
In [14]:
import jieba.analyse
txt = reports[-1]
tf = jieba.analyse.extract_tags(txt, topK=200, withWeight=True)
In [262]:
print u"、".join([i[0] for i in tf[:50]])
In [267]:
plt.hist([i[1] for i in tf])
plt.show()
In [264]:
tr = jieba.analyse.textrank(txt,topK=200, withWeight=True)
print u"、".join([i[0] for i in tr[:50]])
In [268]:
plt.hist([i[1] for i in tr])
plt.show()
In [75]:
import pandas as pd
def keywords(index):
    """Scatter tf-idf weight against TextRank weight for the report `index` positions from the end."""
    txt = reports[-index]
    tf = jieba.analyse.extract_tags(txt, topK=200, withWeight=True)
    tr = jieba.analyse.textrank(txt, topK=200, withWeight=True)
    tfdata = pd.DataFrame(tf, columns=['word', 'tfidf'])
    trdata = pd.DataFrame(tr, columns=['word', 'textrank'])
    worddata = pd.merge(tfdata, trdata, on='word')
    plt.plot(worddata.tfidf, worddata.textrank, linestyle='', marker='.')
    for i in range(len(worddata.word)):
        plt.text(worddata.tfidf[i], worddata.textrank[i], worddata.word[i],
                 fontsize=worddata.textrank[i]*15, color='red', rotation=0)
    plt.title(txt[:4])  # the first four characters of each report are its year
    plt.xlabel('Tf-Idf')
    plt.ylabel('TextRank')
    plt.show()
In [80]:
keywords(1)
In [269]:
keywords(2)
In [270]:
keywords(3)
In [59]:
def wordcloudplot(txt, year):
    """Draw a word cloud of the space-separated text `txt`, titled with `year`."""
    wordcloud = WordCloud(font_path='/Users/chengjun/github/cjc2016/data/msyh.ttf').generate(txt)
    # display the generated image
    plt.imshow(wordcloud)
    plt.title(year)
    plt.axis("off")
    #plt.show()
In [326]:
txt = reports[-1]
tfidf200= jieba.analyse.extract_tags(txt, topK=200, withWeight=False)
seg_list = jieba.cut(txt, cut_all=False)
seg_list = [i for i in seg_list if i in tfidf200]
txt200 = r' '.join(seg_list)
wordcloudplot(txt200, txt[:4])
In [334]:
wordfreq = defaultdict(int)
for i in seg_list:
wordfreq[i] +=1
wordfreq = [[i, wordfreq[i]] for i in wordfreq]
wordfreq.sort(key= lambda x:x[1], reverse = True )
print u"、 ".join([ i[0] + u'(' + str(i[1]) +u')' for i in wordfreq ])
In [70]:
#jieba.add_word('股灾', freq=100, tag=None)
txt = reports[-1]
seg_list = jieba.cut(txt, cut_all=False)
seg_list = [i for i in seg_list if i not in stopwords]
txt = r' '.join(seg_list)
wordcloudplot(txt, txt[:4])
#file_path = '/Users/chengjun/GitHub/cjc2016/figures/wordcloud-' + txt[:4] + '.png'
#plt.savefig(file_path,dpi = 300, bbox_inches="tight",transparent = True)
In [113]:
#jieba.add_word('股灾', freq=100, tag=None)
for txt in reports:
seg_list = jieba.cut(txt, cut_all=False)
seg_list = [i for i in seg_list if i not in stopwords]
txt = r' '.join(seg_list)
wordcloudplot(txt, txt[:4])
file_path = '/Users/chengjun/GitHub/cjc2016/figure/wordcloud-' + txt[:4] + '.png'
plt.savefig(file_path,dpi = 400, bbox_inches="tight",transparent = True)
In [335]:
import jieba.analyse
wordset = []
for txt in reports:
    top200 = jieba.analyse.textrank(txt, topK=200, withWeight=False)
    for w in top200:
        if w not in wordset:
            wordset.append(w)
In [336]:
len(wordset)
Out[336]:
In [337]:
print ' '.join(wordset)
In [338]:
from collections import defaultdict
data = defaultdict(dict)
years = [int(i[:4]) for i in reports]
for i in wordset:
for year in years:
data[i][year] = 0
In [339]:
for txt in reports:
year = int(txt[:4])
top1000= jieba.analyse.textrank(txt, topK=1000, withWeight=True)
for ww in top1000:
word, weight = ww
if word in wordset:
data[word][year]+= weight
In [340]:
word_weight = []
for i in data:
word_weight.append([i, np.sum(data[i].values())])
In [341]:
word_weight.sort(key= lambda x:x[1], reverse = True )
top50 = [i[0] for i in word_weight[:50]]
In [342]:
print ' '.join(top50)
In [343]:
def plotEvolution(word, color, linestyle, marker):
    """Plot how the TextRank weight of `word` evolves across years."""
    cx = data[word]
    years = sorted(cx.keys())            # sort so the line follows chronological order
    weights = [cx[y] for y in years]
    plt.plot(years, weights, color=color,
             linestyle=linestyle, marker=marker, label=word)
    plt.legend(loc=2, fontsize=8)
    plt.ylabel(u'词语重要性')
In [349]:
plotEvolution(u'民主', 'g', '-', '>')
plotEvolution(u'法制', 'b', '-', 's')
In [363]:
plotEvolution(u'动能', 'b', '-', 's')
plotEvolution(u'互联网', 'g', '-', '>')
In [364]:
plotEvolution(u'工业', 'y', '-', '<')
plotEvolution(u'农业', 'r', '-', 'o')
plotEvolution(u'制造业', 'b', '-', 's')
plotEvolution(u'服务业', 'g', '-', '>')
In [362]:
plotEvolution(u'教育', 'r', '-', 'o')
plotEvolution(u'社会保障', 'b', '-', 's')
plotEvolution(u'医疗', 'g', '-', '>')
In [356]:
plotEvolution(u'环境', 'b', '-', 's')
plotEvolution(u'住房', 'purple', '-', 'o')
In [357]:
plotEvolution(u'发展', 'y', '-', '<')
plotEvolution(u'经济', 'r', '-', 'o')
plotEvolution(u'改革', 'b', '-', 's')
plotEvolution(u'创新', 'g', '-', '>')
In [359]:
plotEvolution(u'社会主义', 'r', '-', 'o')
plotEvolution(u'马克思主义', 'b', '-', 's')
In [208]:
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',5)
for k, word in enumerate(top50[:5]):
    years = sorted(data[word].keys())[-40:]   # last 40 years, in chronological order
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [207]:
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',5)
for k, word in enumerate(top50[5:10]):
    years = sorted(data[word].keys())[-40:]
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [206]:
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',5)
for k, word in enumerate(top50[10:15]):
    years = sorted(data[word].keys())[-40:]
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [205]:
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',5)
for k, word in enumerate(top50[15:20]):
    years = sorted(data[word].keys())[-40:]
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [204]:
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',5)
for k, word in enumerate(top50[20:25]):
    years = sorted(data[word].keys())[-40:]
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [202]:
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',5)
for k, word in enumerate(top50[25:30]):
    years = sorted(data[word].keys())[-30:]
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [209]:
from sklearn import metrics
from sklearn.metrics import pairwise_distances
dataX = []
wordX = []
for word in top50:
    years = sorted(data[word].keys())[-40:]   # last 40 years, in chronological order
    dataX.append([data[word][y] for y in years])
    wordX.append(word)
dataX = np.array(dataX)
In [210]:
dataX
Out[210]:
In [211]:
import numpy as np
from sklearn.cluster import KMeans
silhouette_scores = []
# try k = 2..9 clusters and record the silhouette score for each
for cluster_num in range(2, 10):
    kmeans_model = KMeans(n_clusters=cluster_num, random_state=1).fit(dataX)
    labels = kmeans_model.labels_
    sscore = metrics.silhouette_score(dataX, labels, metric='euclidean')
    silhouette_scores.append(sscore)
fig = plt.figure(figsize=(4, 2), facecolor='white')
plt.plot(range(2, 10), silhouette_scores)
plt.xlabel('# Clusters')
plt.ylabel('Silhouette Score')
plt.show()
The score is bounded between -1 for incorrect clustering and +1 for highly dense clustering. Scores around zero indicate overlapping clusters. The score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster.
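For reference, the silhouette of a single sample is defined (a standard formula, not stated in the original) as

$$ s = \frac{b - a}{\max(a, b)}, $$

where $a$ is the mean distance to the other points in the same cluster and $b$ is the mean distance to the points in the nearest other cluster; the score reported above is the mean of $s$ over all samples.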
In [212]:
kmeans_model = KMeans(n_clusters=2, random_state=1).fit(dataX)
labels = kmeans_model.labels_
labels
Out[212]:
In [213]:
print ' '.join(wordX)
In [214]:
print '\t'.join([wordX[index] for index in np.where(labels==0)[0]])
In [215]:
word_cluster1 = [wordX[index] for index in np.where(labels==0)[0]]
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',10)
for k, word in enumerate(word_cluster1[:10]):
    years = sorted(data[word].keys())[-30:]
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [216]:
print '\t'.join([wordX[index] for index in np.where(labels==1)[0]])
In [217]:
word_cluster2 = [wordX[index] for index in np.where(labels==1)[0]]
fig = plt.figure(figsize=(12, 4),facecolor='white')
cmap = cm.get_cmap('rainbow_r',10)
for k, word in enumerate(word_cluster2[:10]):
    years = sorted(data[word].keys())[-30:]
    tfidfs = [data[word][y] for y in years]
plt.plot(years, tfidfs, color=cmap(k), linestyle='-',marker='.',label= word)
plt.legend(loc=1,fontsize=8)
plt.show()
In [137]:
import jieba.posseg as pseg
words = pseg.cut("我爱北京天安门")
for w in words:
    print w.word, w.flag   # each token and its part-of-speech flag
Part-of-speech tag set for Chinese text
Ag  adjectival morpheme. The adjective code is a; the morpheme code g is prefixed with A.
a   adjective. First letter of the English word "adjective".
ad  adverbial adjective (an adjective used directly as an adverbial). Adjective code a and adverb code d combined.
an  nominal adjective (an adjective that functions as a noun). Adjective code a and noun code n combined.
b   distinguishing word. Initial of the Chinese character 别.
c   conjunction. First letter of "conjunction".
Dg  adverbial morpheme. The adverb code is d; the morpheme code g is prefixed with D.
d   adverb. Second letter of "adverb" (the first letter is already used for adjectives).
e   interjection. First letter of "exclamation".
f   locative word. From the Chinese character 方.
g   morpheme. Most morphemes can serve as the "root" of compound words; initial of 根 ("root").
h   prefix component. First letter of "head".
i   idiom. First letter of "idiom".
j   abbreviation. Initial of 简 ("abbreviated").
k   suffix component.
l   fixed expression. Not yet a full idiom, somewhat "provisional"; initial of 临.
m   numeral. Third letter of "numeral" (n and u are already taken).
Ng  nominal morpheme. The noun code is n; the morpheme code g is prefixed with N.
n   noun. First letter of "noun".
nr  person name. Noun code n plus the initial of 人 (ren).
ns  place name. Noun code n plus the locative code s.
nt  organization name. The initial of 团 is t; noun code n and t combined.
nz  other proper noun. The initial of 专 is z; noun code n and z combined.
o   onomatopoeia. First letter of "onomatopoeia".
p   preposition. First letter of "prepositional".
q   measure word. First letter of "quantity".
r   pronoun. Second letter of "pronoun" (p is taken by prepositions).
s   place word. First letter of "space".
Tg  temporal morpheme. The time-word code is t; the morpheme code g is prefixed with T.
t   time word. First letter of "time".
u   particle. From the English word "auxiliary".
Vg  verbal morpheme. The verb code is v; the morpheme code g is prefixed with V.
v   verb. First letter of "verb".
vd  adverbial verb (a verb used directly as an adverbial). Verb and adverb codes combined.
vn  nominal verb (a verb that functions as a noun). Verb and noun codes combined.
w   punctuation.
x   non-morpheme character. A bare symbol; x conventionally stands for unknowns and symbols.
y   modal particle. Initial of 语.
z   status word. From the first letter of the initial (zh) of 状.
An alternative, condensed tag set: a: adjective; b: distinguishing word; c: conjunction; d: adverb; e: interjection; g: morpheme character; h: prefix component; i: fixed expression; j: abbreviation; k: suffix component; m: numeral; n: common noun; nd: locative noun; nh: person name; ni: organization name; nl: place noun; ns: place name; nt: time word; nz: other proper noun; o: onomatopoeia; p: preposition; q: measure word; r: pronoun; u: particle; v: verb; wp: punctuation; ws: string; x: non-morpheme character
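As a small illustration of how these tags can be used (an added sketch, not from the original notebook), the snippet below keeps only noun-like tokens, i.e. those whose flag starts with n (common nouns, person/place/organization names, and so on):

```python
import jieba.posseg as pseg

def keep_nouns(txt):
    # keep tokens whose part-of-speech flag starts with 'n'
    return [w.word for w in pseg.cut(txt) if w.flag.startswith('n')]

print(' '.join(keep_nouns(u'我爱北京天安门')))
# expected to keep place names such as 北京 and 天安门
```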
In [230]:
def getCorpus(data):
    processed_docs = [tokenize(doc) for doc in data]   # `tokenize` is not shown in this section; see the sketch after this cell
    word_count_dict = gensim.corpora.Dictionary(processed_docs)
    print "In the corpus there are", len(word_count_dict), "unique tokens"
    # keep tokens that appear in at least 5 documents and in no more than 20% of the documents
    word_count_dict.filter_extremes(no_below=5, no_above=0.2)
    print "After filtering, in the corpus there are only", len(word_count_dict), "unique tokens"
    bag_of_words_corpus = [word_count_dict.doc2bow(pdoc) for pdoc in processed_docs]
    return bag_of_words_corpus, word_count_dict
def cleancntxt(txt, stopwords):
    """Tokenize Chinese text with jieba, drop stopwords and spaces, and keep only the top-1000 tf-idf terms."""
    tfidf1000 = jieba.analyse.extract_tags(txt, topK=1000, withWeight=False)
    seg_generator = jieba.cut(txt, cut_all=False)
    seg_list = [i for i in seg_generator if i not in stopwords]
    seg_list = [i for i in seg_list if i != u' ']
    seg_list = [i for i in seg_list if i in tfidf1000]
    return seg_list
def getCnCorpus(data):
    # uses the global `stopwords` dict loaded earlier
    processed_docs = [cleancntxt(doc, stopwords) for doc in data]
    word_count_dict = gensim.corpora.Dictionary(processed_docs)
    print "In the corpus there are", len(word_count_dict), "unique tokens"
    #word_count_dict.filter_extremes(no_below=5, no_above=0.2)
    # i.e. keep tokens in at least 5 documents and in no more than 20% of the documents
    print "After filtering, in the corpus there are only", len(word_count_dict), "unique tokens"
    bag_of_words_corpus = [word_count_dict.doc2bow(pdoc) for pdoc in processed_docs]
    return bag_of_words_corpus, word_count_dict
def inferTopicNumber(bag_of_words_corpus, num, word_count_dict):
lda_model = gensim.models.LdaModel(bag_of_words_corpus, num_topics=num, id2word=word_count_dict, passes=10)
_ = lda_model.print_topics(-1) #use _ for throwaway variables.
logperplexity = lda_model.log_perplexity(bag_of_words_corpus)
return logperplexity
def ppnumplot(topicnum, logperplexity):
    # line plot of perplexity against the number of topics
    plt.plot(topicnum, logperplexity, color="red", linewidth=2)
    plt.xlabel("Number of Topics")
    plt.ylabel("Perplexity")
    plt.show()
# define some commonly used helper functions
def flushPrint(variable):
    # overwrite the same line to show progress every 100 items
    if variable % 100 == 0:
        sys.stdout.write('\r')
        sys.stdout.write('%s' % variable)
        sys.stdout.flush()
def top(data):
for i in data:
print i
def freq(data):
dtable = defaultdict(int)
for i in data:
dtable[i] += 1
return dtable
def sortdict(data):
    '''data is a dict; return its items sorted by value in descending order'''
    return sorted(data.items(), key=lambda x: x[1], reverse=True)
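getCorpus above relies on a tokenize helper that is not defined in this section. A minimal sketch consistent with the gensim imports at the top of the notebook (simple_preprocess and STOPWORDS) might look as follows; the version used in the original may differ:

```python
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS

def tokenize(doc):
    # lowercase, strip punctuation, and drop common English stopwords
    return [token for token in simple_preprocess(doc) if token not in STOPWORDS]
```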
In [118]:
import urllib2
from bs4 import BeautifulSoup
import sys
url2016 = 'http://news.xinhuanet.com/fortune/2016-03/05/c_128775704.htm'
content = urllib2.urlopen(url2016).read()
soup = BeautifulSoup(content, 'html.parser')  # specify a parser explicitly
In [231]:
gov_report_2016 = [s.text for s in soup('p')]
for i in gov_report_2016[:10]:print i
In [232]:
def clean_txt(txt):
for i in [u'、', u',', u'—', u'!', u'。', u'《', u'》', u'(', u')']:
txt = txt.replace(i, ' ')
return txt
In [233]:
gov_report_2016 = [clean_txt(i) for i in gov_report_2016]
In [234]:
for i in gov_report_2016[:10]:print i
In [227]:
len(gov_report_2016[5:-1])
Out[227]:
In [243]:
jieba.add_word(u'屠呦呦', freq=None, tag=None)
#del_word(word)
print ' '.join(cleancntxt(u'屠呦呦获得了诺贝尔医学奖。', stopwords))
In [244]:
processed_docs = [cleancntxt(doc, stopwords) for doc in gov_report_2016[5:-1]]
word_count_dict = gensim.corpora.Dictionary(processed_docs)
print "In the corpus there are", len(word_count_dict), "unique tokens"
# word_count_dict.filter_extremes(no_below=5, no_above=0.2)  # keep tokens in at least 5 documents and in no more than 20% of the documents
# print "After filtering, in the corpus there are only", len(word_count_dict), "unique tokens"
bag_of_words_corpus = [word_count_dict.doc2bow(pdoc) for pdoc in processed_docs]
In [245]:
tfidf = models.TfidfModel(bag_of_words_corpus )
corpus_tfidf = tfidf[bag_of_words_corpus ]
lda_model = gensim.models.LdaModel(corpus_tfidf, num_topics=20, id2word=word_count_dict, passes=10)
#lda_model = gensim.models.LdaMulticore(corpus_tfidf, num_topics=10, id2word=word_count_dict, passes=10)
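To inspect which topics the trained model assigns to an individual paragraph, you can transform that paragraph's bag-of-words vector and query the model with it (an added sketch using the objects built above; not part of the original notebook):

```python
# topic mixture of the first paragraph: a list of (topic_id, probability) pairs
first_bow = bag_of_words_corpus[0]
print(lda_model[tfidf[first_bow]])
```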
In [246]:
perplexity_list = [inferTopicNumber(bag_of_words_corpus, num, word_count_dict) for num in [5, 15, 20, 25, 30, 35, 40 ]]
In [247]:
plt.plot([5, 15, 20, 25, 30, 35, 40], perplexity_list)
Out[247]:
In [252]:
topictermlist = lda_model.print_topics(-1)
top_words = [[j.split('*')[1] for j in i.split(' + ')] for i in topictermlist]
for i in top_words: print " ".join(i) + '\n'
In [249]:
top_words_shares = [[j.split('*')[0] for j in i.split(' + ')] for i in topictermlist]
top_words_shares = [map(float, i) for i in top_words_shares]
def weightvalue(x):
return (x - np.min(top_words_shares))*40/(np.max(top_words_shares) -np.min(top_words_shares)) + 10
top_words_shares = [map(weightvalue, i) for i in top_words_shares]
def plotTopics(mintopics, maxtopics):
num_top_words = 10
plt.rcParams['figure.figsize'] = (10.0, 4.0)
n = 0
for t in range(mintopics , maxtopics):
plt.subplot(2, 15, n + 1) # plot numbering starts with 1
plt.ylim(0, num_top_words) # stretch the y-axis to accommodate the words
plt.xticks([]) # remove x-axis markings ('ticks')
plt.yticks([]) # remove y-axis markings ('ticks')
plt.title(u'主题 #{}'.format(t+1), size = 5)
words = top_words[t][0:num_top_words ]
words_shares = top_words_shares[t][0:num_top_words ]
for i, (word, share) in enumerate(zip(words, words_shares)):
plt.text(0.05, num_top_words-i-0.9, word, fontsize= np.log(share*10))
n += 1
In [250]:
plotTopics(0, 10)
In [251]:
plotTopics(10, 20)
In [ ]: