Latent Dirichlet Allocation

Data

We will be using articles from NPR (National Public Radio), obtained from their website, www.npr.org.


In [1]:
import pandas as pd

In [2]:
npr = pd.read_csv('npr.csv')

In [3]:
npr.head()


Out[3]:
Article
0 In the Washington of 2016, even when the polic...
1 Donald Trump has used Twitter — his prefe...
2 Donald Trump is unabashedly praising Russian...
3 Updated at 2:50 p. m. ET, Russian President Vl...
4 From photography, illustration and video, to d...

Notice that we don't have the topic of each article! Let's use LDA to attempt to discover topic clusters among the articles.

Preprocessing


In [4]:
from sklearn.feature_extraction.text import CountVectorizer

max_df: float in range [0.0, 1.0] or int, default=1.0
When building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If a float, the parameter represents a proportion of documents; if an int, absolute counts. This parameter is ignored if vocabulary is not None.

min_df: float in range [0.0, 1.0] or int, default=1
When building the vocabulary, ignore terms that have a document frequency strictly lower than the given threshold. This value is also called the cut-off in the literature. If a float, the parameter represents a proportion of documents; if an int, absolute counts. This parameter is ignored if vocabulary is not None.
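To see what these two thresholds do in practice, here's a minimal sketch on a made-up three-document corpus (toy_docs and its contents are invented purely for illustration):

# Tiny made-up corpus to illustrate max_df / min_df filtering.
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["the cat sat", "the dog sat", "the bird flew"]

# max_df=0.95 drops 'the' (it appears in 100% of the documents);
# min_df=2 drops every word that appears in only one document.
toy_cv = CountVectorizer(max_df=0.95, min_df=2)
toy_cv.fit(toy_docs)
print(toy_cv.get_feature_names())   # ['sat']  (get_feature_names_out() in newer scikit-learn)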


In [5]:
cv = CountVectorizer(max_df=0.95, min_df=2, stop_words='english')

In [6]:
dtm = cv.fit_transform(npr['Article'])

In [7]:
dtm


Out[7]:
<11992x54777 sparse matrix of type '<class 'numpy.int64'>'
	with 3033388 stored elements in Compressed Sparse Row format>

LDA


In [8]:
from sklearn.decomposition import LatentDirichletAllocation

In [39]:
LDA = LatentDirichletAllocation(n_components=7,random_state=42)

In [40]:
# This can take a while; we're dealing with a large number of documents!
LDA.fit(dtm)


Out[40]:
LatentDirichletAllocation(batch_size=128, doc_topic_prior=None,
             evaluate_every=-1, learning_decay=0.7,
             learning_method='batch', learning_offset=10.0,
             max_doc_update_iter=100, max_iter=10, mean_change_tol=0.001,
             n_components=7, n_jobs=None, n_topics=None, perp_tol=0.1,
             random_state=42, topic_word_prior=None,
             total_samples=1000000.0, verbose=0)

Showing Stored Words


In [41]:
len(cv.get_feature_names())


Out[41]:
54777

In [42]:
import random

In [43]:
# Print 10 random words from the vocabulary to get a feel for the feature space
for i in range(10):
    random_word_id = random.randint(0,54776)
    print(cv.get_feature_names()[random_word_id])


cred
fairly
occupational
temer
tamil
closest
condone
breathes
tendrils
pivot

In [44]:
for i in range(10):
    random_word_id = random.randint(0,54776)
    print(cv.get_feature_names()[random_word_id])


foremothers
mocoa
ellroy
liron
ally
discouraged
utterance
provo
videgaray
archivist

Showing Top Words Per Topic


In [46]:
len(LDA.components_)


Out[46]:
7

In [47]:
LDA.components_


Out[47]:
array([[8.64332806e+00, 2.38014333e+03, 1.42900522e-01, ...,
        1.43006821e-01, 1.42902042e-01, 1.42861626e-01],
       [2.76191749e+01, 5.36394437e+02, 1.42857148e-01, ...,
        1.42861973e-01, 1.42857147e-01, 1.42906875e-01],
       [7.22783888e+00, 8.24033986e+02, 1.42857148e-01, ...,
        6.14236247e+00, 2.14061364e+00, 1.42923753e-01],
       ...,
       [3.11488651e+00, 3.50409655e+02, 1.42857147e-01, ...,
        1.42859912e-01, 1.42857146e-01, 1.42866614e-01],
       [4.61486388e+01, 5.14408600e+01, 3.14281373e+00, ...,
        1.43107628e-01, 1.43902481e-01, 2.14271779e+00],
       [4.93991422e-01, 4.18841042e+02, 1.42857151e-01, ...,
        1.42857146e-01, 1.43760101e-01, 1.42866201e-01]])

In [48]:
len(LDA.components_[0])


Out[48]:
54777

In [49]:
single_topic = LDA.components_[0]

In [50]:
# Returns the indices that would sort this array (ascending, so the last entries point to the top words).
single_topic.argsort()


Out[50]:
array([ 2475, 18302, 35285, ..., 22673, 42561, 42993], dtype=int64)

In [51]:
# A word with one of the lowest weights in this topic
single_topic[18302]


Out[51]:
0.14285714309286987

In [52]:
# Word with the highest weight in this topic (most representative)
single_topic[42993]


Out[52]:
6247.245510521082

In [53]:
# Top 10 words for this topic:
single_topic.argsort()[-10:]


Out[53]:
array([33390, 36310, 21228, 10425, 31464,  8149, 36283, 22673, 42561,
       42993], dtype=int64)

In [54]:
top_word_indices = single_topic.argsort()[-10:]

In [55]:
for index in top_word_indices:
    print(cv.get_feature_names()[index])


new
percent
government
company
million
care
people
health
said
says

These look like business articles, perhaps. Let's confirm by using .transform() on our vectorized articles to attach a topic label. But first, let's view all 7 topics that were found.


In [56]:
for index,topic in enumerate(LDA.components_):
    print(f'THE TOP 15 WORDS FOR TOPIC #{index}')
    print([cv.get_feature_names()[i] for i in topic.argsort()[-15:]])
    print('\n')


THE TOP 15 WORDS FOR TOPIC #0
['companies', 'money', 'year', 'federal', '000', 'new', 'percent', 'government', 'company', 'million', 'care', 'people', 'health', 'said', 'says']


THE TOP 15 WORDS FOR TOPIC #1
['military', 'house', 'security', 'russia', 'government', 'npr', 'reports', 'says', 'news', 'people', 'told', 'police', 'president', 'trump', 'said']


THE TOP 15 WORDS FOR TOPIC #2
['way', 'world', 'family', 'home', 'day', 'time', 'water', 'city', 'new', 'years', 'food', 'just', 'people', 'like', 'says']


THE TOP 15 WORDS FOR TOPIC #3
['time', 'new', 'don', 'years', 'medical', 'disease', 'patients', 'just', 'children', 'study', 'like', 'women', 'health', 'people', 'says']


THE TOP 15 WORDS FOR TOPIC #4
['voters', 'vote', 'election', 'party', 'new', 'obama', 'court', 'republican', 'campaign', 'people', 'state', 'president', 'clinton', 'said', 'trump']


THE TOP 15 WORDS FOR TOPIC #5
['years', 'going', 've', 'life', 'don', 'new', 'way', 'music', 'really', 'time', 'know', 'think', 'people', 'just', 'like']


THE TOP 15 WORDS FOR TOPIC #6
['student', 'years', 'data', 'science', 'university', 'people', 'time', 'schools', 'just', 'education', 'new', 'like', 'students', 'school', 'says']
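If you want to reuse this, the loop above can be wrapped in a small helper function (a sketch; the name show_top_words is our own, not part of scikit-learn):

def show_top_words(model, vectorizer, n_top=15):
    """Print the n_top highest-weighted words for each topic of a fitted model."""
    words = vectorizer.get_feature_names()   # get_feature_names_out() in newer scikit-learn
    for index, topic in enumerate(model.components_):
        print(f'THE TOP {n_top} WORDS FOR TOPIC #{index}')
        print([words[i] for i in topic.argsort()[-n_top:]])
        print('\n')

# Produces the same output as the loop above:
# show_top_words(LDA, cv, n_top=15)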


Attaching Discovered Topic Labels to Original Articles


In [57]:
dtm


Out[57]:
<11992x54777 sparse matrix of type '<class 'numpy.int64'>'
	with 3033388 stored elements in Compressed Sparse Row format>

In [58]:
dtm.shape


Out[58]:
(11992, 54777)

In [59]:
len(npr)


Out[59]:
11992

In [60]:
topic_results = LDA.transform(dtm)

In [61]:
topic_results.shape


Out[61]:
(11992, 7)

In [62]:
topic_results[0]


Out[62]:
array([1.61040465e-02, 6.83341493e-01, 2.25376318e-04, 2.25369288e-04,
       2.99652737e-01, 2.25479379e-04, 2.25497980e-04])

In [63]:
topic_results[0].round(2)


Out[63]:
array([0.02, 0.68, 0.  , 0.  , 0.3 , 0.  , 0.  ])

In [64]:
topic_results[0].argmax()


Out[64]:
1

This means that our model thinks that the first article belongs to topic #1.
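The same pipeline also works for an article the model has never seen: vectorize it with the already-fitted CountVectorizer, then call LDA.transform(). Here's a sketch, with a made-up snippet of text standing in for a real article:

# Hypothetical new article text, invented here purely for illustration.
new_article = ["The senate voted on the new health care bill after a long election campaign."]

# Reuse the *fitted* vectorizer (transform only, never refit on new data),
# then get the topic distribution from the fitted LDA model.
new_dtm = cv.transform(new_article)
new_topic_probs = LDA.transform(new_dtm)   # shape (1, 7): one probability per topic

print(new_topic_probs.round(2))
print('Most likely topic:', new_topic_probs.argmax(axis=1)[0])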

Combining with Original Data


In [65]:
npr.head()


Out[65]:
Article
0 In the Washington of 2016, even when the polic...
1 Donald Trump has used Twitter — his prefe...
2 Donald Trump is unabashedly praising Russian...
3 Updated at 2:50 p. m. ET, Russian President Vl...
4 From photography, illustration and video, to d...

In [66]:
topic_results.argmax(axis=1)


Out[66]:
array([1, 1, 1, ..., 3, 4, 0], dtype=int64)

In [67]:
npr['Topic'] = topic_results.argmax(axis=1)

In [68]:
npr.head(10)


Out[68]:
Article Topic
0 In the Washington of 2016, even when the polic... 1
1 Donald Trump has used Twitter — his prefe... 1
2 Donald Trump is unabashedly praising Russian... 1
3 Updated at 2:50 p. m. ET, Russian President Vl... 1
4 From photography, illustration and video, to d... 2
5 I did not want to join yoga class. I hated tho... 3
6 With a who has publicly supported the debunk... 3
7 I was standing by the airport exit, debating w... 2
8 If movies were trying to be more realistic, pe... 3
9 Eighteen years ago, on New Year’s Eve, David F... 2
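As an optional finishing touch, the numeric labels can be mapped to human-readable names. The names below are just our guesses based on the top-15 word lists printed earlier; they are not produced by the model:

# Guessed, human-readable names for each topic number, based on the top-15 word lists above.
topic_names = {0: 'business & health policy',
               1: 'politics & security',
               2: 'everyday life',
               3: 'health & medicine',
               4: 'elections',
               5: 'arts & culture',
               6: 'education'}

npr['Topic Name'] = npr['Topic'].map(topic_names)
npr.head()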

Great work!