Applying LDA: Topic Analysis of Hillary Clinton's Emails


In [1]:
# coding:utf-8
import numpy as np
import pandas as pd
import re

In [2]:
df = pd.read_csv("HillaryEmails.csv")
# The original email data contains many NaN values; drop them outright.
df = df[['Id','ExtractedBodyText']].dropna()

Text preprocessing:

Write a set of regular expressions tailored to the email bodies:


In [3]:
def clean_email_text(text):
    text = text.replace('\n', " ")  # remove newlines
    text = re.sub(r"-", " ", text)  # split hyphenated words into two words
    text = re.sub(r"\d+/\d+/\d+", "", text)  # dates are not meaningful for topic modeling
    text = re.sub(r"[0-2]?[0-9]:[0-6][0-9]", "", text)  # times, not meaningful
    text = re.sub(r"[\w]+@[\.\w]+", "", text)  # email addresses, not meaningful
    text = re.sub(r"https?://[A-Za-z0-9./%&=?\-_#]+", "", text)  # URLs, not meaningful
    pure_text = ''
    # Filter out any remaining special characters (digits, punctuation, etc.)
    for letter in text:
        # Keep only letters and spaces
        if letter.isalpha() or letter == ' ':
            pure_text += letter
    # Finally, drop the single-character fragments left over after stripping
    # special characters, so that only meaningful words remain.
    text = ' '.join(word for word in pure_text.split() if len(word) > 1)
    return text

In [4]:
docs = df['ExtractedBodyText']
docs = docs.apply(lambda s: clean_email_text(s))

In [5]:
docs.head(1).values


Out[5]:
array([ 'Thursday March PM Latest How Syria is aiding Qaddafi and more Sid hrc memo syria aiding libya docx hrc memo syria aiding libya docx March For Hillary'], dtype=object)

Pull out all of the email bodies as an array.


In [6]:
doclist = docs.values

Building the LDA model:

We build the model with Gensim.

First, the text data, currently in the form

[[one email as a single string], [another email as a single string], ...]

has to be converted into the tokenized corpus format Gensim expects:

[[first, email, tokens, here], [second, email, tokens, here], [how, is, the, weather, today], ...]

Import the libraries:


In [7]:
from gensim import corpora, models, similarities
import gensim

You could use NLTK's built-in stopword list; here we write one out by hand (a sketch of the NLTK alternative follows the list below).


In [8]:
stoplist = ['very', 'ourselves', 'am', 'doesn', 'through', 'me', 'against', 'up', 'just', 'her', 'ours', 
            'couldn', 'because', 'is', 'isn', 'it', 'only', 'in', 'such', 'too', 'mustn', 'under', 'their', 
            'if', 'to', 'my', 'himself', 'after', 'why', 'while', 'can', 'each', 'itself', 'his', 'all', 'once', 
            'herself', 'more', 'our', 'they', 'hasn', 'on', 'ma', 'them', 'its', 'where', 'did', 'll', 'you', 
            'didn', 'nor', 'as', 'now', 'before', 'those', 'yours', 'from', 'who', 'was', 'm', 'been', 'will', 
            'into', 'same', 'how', 'some', 'of', 'out', 'with', 's', 'being', 't', 'mightn', 'she', 'again', 'be', 
            'by', 'shan', 'have', 'yourselves', 'needn', 'and', 'are', 'o', 'these', 'further', 'most', 'yourself', 
            'having', 'aren', 'here', 'he', 'were', 'but', 'this', 'myself', 'own', 'we', 'so', 'i', 'does', 'both', 
            'when', 'between', 'd', 'had', 'the', 'y', 'has', 'down', 'off', 'than', 'haven', 'whom', 'wouldn', 
            'should', 've', 'over', 'themselves', 'few', 'then', 'hadn', 'what', 'until', 'won', 'no', 'about', 
            'any', 'that', 'for', 'shouldn', 'don', 'do', 'there', 'doing', 'an', 'or', 'ain', 'hers', 'wasn', 
            'weren', 'above', 'a', 'at', 'your', 'theirs', 'below', 'other', 'not', 're', 'him', 'during', 'which']
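
If you would rather not maintain this list by hand, NLTK ships an equivalent English stopword list. A minimal sketch, assuming NLTK is installed and the stopwords corpus has been downloaded:

import nltk
# nltk.download('stopwords')  # one-time download of the stopwords corpus
from nltk.corpus import stopwords

nltk_stoplist = stopwords.words('english')  # list of common English stopwords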

Tokenization:

For English, splitting on whitespace is sufficient.

For Chinese, you can use CoreNLP, HanLP, jieba, and so on (see the jieba sketch below).
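
For reference, a minimal Chinese-segmentation sketch using jieba (assuming jieba is installed; the sample sentence is arbitrary):

import jieba  # pip install jieba

words = list(jieba.cut("今天天气怎么样"))  # e.g. ['今天', '天气', '怎么样']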


In [9]:
texts = [[word for word in doc.lower().split() if word not in stoplist] for doc in doclist]

In [10]:
texts[0]  # one email after tokenization and stopword removal


Out[10]:
['thursday',
 'march',
 'pm',
 'latest',
 'syria',
 'aiding',
 'qaddafi',
 'sid',
 'hrc',
 'memo',
 'syria',
 'aiding',
 'libya',
 'docx',
 'hrc',
 'memo',
 'syria',
 'aiding',
 'libya',
 'docx',
 'march',
 'hillary']

Build the corpus:

Using the bag-of-words approach, map each word to an integer id and turn each document into a vector of (id, count) pairs:


In [11]:
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

In [12]:
corpus[13]  # the 14th email contains 5 meaningful words (after preprocessing and stopword removal); word 34 appears once, word 505 appears once, and so on


Out[12]:
[(34, 1), (505, 1), (506, 1), (507, 1), (508, 1)]

Build the model:


In [13]:
lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=20)

In [14]:
lda.print_topic(10, topn=5)  # the top-5 most heavily weighted words in topic 10


Out[14]:
'0.020*"senate" + 0.012*"nuclear" + 0.010*"missile" + 0.008*"us" + 0.008*"pm"'

In [15]:
lda.print_topics(num_topics=20, num_words=5)  # print all 20 topics


Out[15]:
[(0, '0.020*"call" + 0.016*"ok" + 0.014*"pls" + 0.013*"thx" + 0.013*"see"'),
 (1,
  '0.013*"pm" + 0.010*"huma" + 0.010*"sullivan" + 0.009*"monday" + 0.009*"fw"'),
 (2,
  '0.006*"obama" + 0.005*"clips" + 0.005*"strategic" + 0.005*"israel" + 0.004*"would"'),
 (3,
  '0.013*"bloomberg" + 0.010*"call" + 0.009*"kurdistan" + 0.007*"dont" + 0.007*"got"'),
 (4,
  '0.007*"us" + 0.006*"security" + 0.006*"state" + 0.006*"cheryl" + 0.005*"international"'),
 (5, '0.010*"fyi" + 0.008*"mr" + 0.007*"said" + 0.005*"us" + 0.005*"new"'),
 (6,
  '0.010*"message" + 0.008*"please" + 0.008*"china" + 0.006*"chinese" + 0.005*"email"'),
 (7,
  '0.012*"assistant" + 0.011*"secretary" + 0.011*"lona" + 0.011*"state" + 0.009*"call"'),
 (8,
  '0.007*"us" + 0.005*"work" + 0.004*"also" + 0.004*"would" + 0.004*"well"'),
 (9,
  '0.008*"mtg" + 0.006*"negotiating" + 0.006*"call" + 0.005*"book" + 0.005*"bill"'),
 (10,
  '0.020*"senate" + 0.012*"nuclear" + 0.010*"missile" + 0.008*"us" + 0.008*"pm"'),
 (11,
  '0.017*"pm" + 0.012*"office" + 0.009*"time" + 0.009*"state" + 0.008*"meeting"'),
 (12,
  '0.028*"israeli" + 0.014*"palestinian" + 0.013*"part" + 0.011*"settlements" + 0.010*"release"'),
 (13,
  '0.011*"percent" + 0.006*"obama" + 0.006*"republicans" + 0.005*"said" + 0.005*"democrats"'),
 (14,
  '0.012*"party" + 0.008*"would" + 0.006*"one" + 0.005*"company" + 0.004*"new"'),
 (15,
  '0.021*"print" + 0.016*"pls" + 0.007*"prefer" + 0.007*"pis" + 0.005*"district"'),
 (16,
  '0.010*"us" + 0.008*"un" + 0.006*"state" + 0.006*"would" + 0.005*"people"'),
 (17,
  '0.007*"new" + 0.007*"would" + 0.006*"us" + 0.005*"one" + 0.005*"obama"'),
 (18,
  '0.013*"hikers" + 0.009*"get" + 0.005*"kurdish" + 0.005*"bauer" + 0.005*"know"'),
 (19,
  '0.077*"pm" + 0.032*"office" + 0.028*"secretarys" + 0.020*"meeting" + 0.020*"room"')]