Non-Negative Matrix Factorization

Let's repeat the topic modeling task from the previous lecture, but this time we will use NMF instead of LDA.

Data

We will be using articles scraped from NPR (National Public Radio), obtained from their website, www.npr.org.


In [1]:
import pandas as pd

In [2]:
npr = pd.read_csv('npr.csv')

In [3]:
npr.head()


Out[3]:
Article
0 In the Washington of 2016, even when the polic...
1 Donald Trump has used Twitter — his prefe...
2 Donald Trump is unabashedly praising Russian...
3 Updated at 2:50 p. m. ET, Russian President Vl...
4 From photography, illustration and video, to d...

Notice how we don't have the topic of the articles! Let's use NMF to attempt to discover clusters among the articles.

Preprocessing


In [4]:
from sklearn.feature_extraction.text import TfidfVectorizer

max_df: float in range [0.0, 1.0] or int, default=1.0
When building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if int, absolute document counts. This parameter is ignored if vocabulary is not None.

min_df: float in range [0.0, 1.0] or int, default=1
When building the vocabulary, ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents; if int, absolute document counts. This parameter is ignored if vocabulary is not None.
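To see these two filters in action before applying them to the NPR data, here is a sketch on a hypothetical four-document mini-corpus (the corpus and variable names are made up for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical mini-corpus to illustrate max_df and min_df
docs = [
    "the cat sat",
    "the dog sat",
    "the cat ran",
    "the dog ran",
]

# 'the' appears in 100% of the documents, so max_df=0.95 drops it;
# every remaining word appears in at least 2 documents, so min_df=2 keeps it
tfidf_demo = TfidfVectorizer(max_df=0.95, min_df=2)
tfidf_demo.fit(docs)
print(sorted(tfidf_demo.vocabulary_))  # ['cat', 'dog', 'ran', 'sat']
```

With the real corpus below, the same settings discard corpus-wide filler words (present in more than 95% of articles) and rare words seen in fewer than 2 articles.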


In [5]:
tfidf = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')

In [6]:
dtm = tfidf.fit_transform(npr['Article'])

In [7]:
dtm


Out[7]:
<11992x54777 sparse matrix of type '<class 'numpy.float64'>'
	with 3033388 stored elements in Compressed Sparse Row format>

NMF


In [8]:
from sklearn.decomposition import NMF

In [11]:
nmf_model = NMF(n_components=7,random_state=42)

In [12]:
# This can take a while; we're dealing with a large number of documents!
nmf_model.fit(dtm)


Out[12]:
NMF(alpha=0.0, beta_loss='frobenius', init=None, l1_ratio=0.0, max_iter=200,
  n_components=7, random_state=42, shuffle=False, solver='cd', tol=0.0001,
  verbose=0)
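Under the hood, NMF approximates our document-term matrix V as a product of two smaller non-negative matrices: W (document-topic weights, what .transform() returns later) and H (topic-term weights, stored in .components_). A minimal sketch on a hypothetical random matrix (all names here are made up, not part of the lecture's pipeline):

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical small non-negative matrix standing in for a document-term matrix
rng = np.random.RandomState(42)
V = rng.rand(6, 10)            # 6 "documents" x 10 "terms"

model = NMF(n_components=2, random_state=42, max_iter=500)
W = model.fit_transform(V)     # document-topic weights, shape (6, 2)
H = model.components_          # topic-term weights,    shape (2, 10)

print(W.shape, H.shape)        # (6, 2) (2, 10)
# W @ H is a low-rank, non-negative approximation of V
```

For our data, W will be 11992 x 7 and H will be 7 x 54777.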

Displaying Topics


In [13]:
len(tfidf.get_feature_names())


Out[13]:
54777

In [14]:
import random

In [15]:
for i in range(10):
    random_word_id = random.randint(0,54776)
    print(tfidf.get_feature_names()[random_word_id])


blocked
microcephalic
floorboards
seville
embryology
indictments
saver
purview
micrograms
fluorescence

In [16]:
for i in range(10):
    random_word_id = random.randint(0,54776)
    print(tfidf.get_feature_names()[random_word_id])


distinctively
pointless
28th
trinity
andes
loren
florence
bioterrorists
bolstering
typeface

In [19]:
len(nmf_model.components_)


Out[19]:
7

In [21]:
nmf_model.components_


Out[21]:
array([[0.00000000e+00, 2.49950821e-01, 0.00000000e+00, ...,
        1.70313822e-03, 2.37544362e-04, 0.00000000e+00],
       [0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
        0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
       [0.00000000e+00, 8.22048918e-02, 0.00000000e+00, ...,
        0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
       ...,
       [0.00000000e+00, 3.12379960e-02, 0.00000000e+00, ...,
        0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
       [5.89723338e-03, 0.00000000e+00, 1.50186440e-03, ...,
        7.06428924e-04, 5.85500542e-04, 6.89536542e-04],
       [4.01763234e-03, 5.31643833e-02, 0.00000000e+00, ...,
        0.00000000e+00, 0.00000000e+00, 0.00000000e+00]])

In [22]:
len(nmf_model.components_[0])


Out[22]:
54777

In [23]:
single_topic = nmf_model.components_[0]

In [24]:
# Returns the indices that would sort this array in ascending order,
# so the last entries correspond to the highest-weighted words.
single_topic.argsort()


Out[24]:
array([    0, 27208, 27206, ..., 36283, 54692, 42993], dtype=int64)
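The argsort-then-slice pattern used below can be illustrated on a tiny hypothetical array:

```python
import numpy as np

# Toy weights standing in for one row of nmf_model.components_
weights = np.array([0.1, 2.0, 0.0, 0.7])
order = weights.argsort()  # ascending: indices of smallest to largest weight
print(order)               # [2 0 3 1]
print(order[-2:])          # top-2 indices (highest weights last): [3 1]
```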

In [25]:
# Word least representative of this topic
single_topic[18302]


Out[25]:
0.0

In [26]:
# Word most representative of this topic
single_topic[42993]


Out[26]:
2.005055165418576

In [49]:
# Top 10 words for this topic:
single_topic.argsort()[-10:]


Out[49]:
array([10421, 31464,     1, 54403, 28659, 54412, 36310, 33390, 36283,
       42993], dtype=int64)

In [27]:
top_word_indices = single_topic.argsort()[-10:]

In [28]:
for index in top_word_indices:
    print(tfidf.get_feature_names()[index])


disease
percent
women
virus
study
water
food
people
zika
says

These look like health articles. Let's confirm by using .transform() on our vectorized articles to attach a topic label number. But first, let's view all 7 topics found.


In [30]:
for index,topic in enumerate(nmf_model.components_):
    print(f'THE TOP 15 WORDS FOR TOPIC #{index}')
    print([tfidf.get_feature_names()[i] for i in topic.argsort()[-15:]])
    print('\n')


THE TOP 15 WORDS FOR TOPIC #0
['new', 'research', 'like', 'patients', 'health', 'disease', 'percent', 'women', 'virus', 'study', 'water', 'food', 'people', 'zika', 'says']


THE TOP 15 WORDS FOR TOPIC #1
['gop', 'pence', 'presidential', 'russia', 'administration', 'election', 'republican', 'obama', 'white', 'house', 'donald', 'campaign', 'said', 'president', 'trump']


THE TOP 15 WORDS FOR TOPIC #2
['senate', 'house', 'people', 'act', 'law', 'tax', 'plan', 'republicans', 'affordable', 'obamacare', 'coverage', 'medicaid', 'insurance', 'care', 'health']


THE TOP 15 WORDS FOR TOPIC #3
['officers', 'syria', 'security', 'department', 'law', 'isis', 'russia', 'government', 'state', 'attack', 'president', 'reports', 'court', 'said', 'police']


THE TOP 15 WORDS FOR TOPIC #4
['primary', 'cruz', 'election', 'democrats', 'percent', 'party', 'delegates', 'vote', 'state', 'democratic', 'hillary', 'campaign', 'voters', 'sanders', 'clinton']


THE TOP 15 WORDS FOR TOPIC #5
['love', 've', 'don', 'album', 'way', 'time', 'song', 'life', 'really', 'know', 'people', 'think', 'just', 'music', 'like']


THE TOP 15 WORDS FOR TOPIC #6
['teacher', 'state', 'high', 'says', 'parents', 'devos', 'children', 'college', 'kids', 'teachers', 'student', 'education', 'schools', 'school', 'students']


Attaching Discovered Topic Labels to Original Articles


In [31]:
dtm


Out[31]:
<11992x54777 sparse matrix of type '<class 'numpy.float64'>'
	with 3033388 stored elements in Compressed Sparse Row format>

In [32]:
dtm.shape


Out[32]:
(11992, 54777)

In [33]:
len(npr)


Out[33]:
11992

In [34]:
topic_results = nmf_model.transform(dtm)

In [35]:
topic_results.shape


Out[35]:
(11992, 7)

In [36]:
topic_results[0]


Out[36]:
array([0.        , 0.12075603, 0.00140297, 0.05919954, 0.01518909,
       0.        , 0.        ])

In [37]:
topic_results[0].round(2)


Out[37]:
array([0.  , 0.12, 0.  , 0.06, 0.02, 0.  , 0.  ])

In [38]:
topic_results[0].argmax()


Out[38]:
1

This means that our model thinks that the first article belongs to topic #1.

Combining with Original Data


In [39]:
npr.head()


Out[39]:
Article
0 In the Washington of 2016, even when the polic...
1 Donald Trump has used Twitter — his prefe...
2 Donald Trump is unabashedly praising Russian...
3 Updated at 2:50 p. m. ET, Russian President Vl...
4 From photography, illustration and video, to d...

In [40]:
topic_results.argmax(axis=1)


Out[40]:
array([1, 1, 1, ..., 0, 4, 3], dtype=int64)

In [41]:
npr['Topic'] = topic_results.argmax(axis=1)

In [42]:
npr.head(10)


Out[42]:
Article Topic
0 In the Washington of 2016, even when the polic... 1
1 Donald Trump has used Twitter — his prefe... 1
2 Donald Trump is unabashedly praising Russian... 1
3 Updated at 2:50 p. m. ET, Russian President Vl... 3
4 From photography, illustration and video, to d... 6
5 I did not want to join yoga class. I hated tho... 5
6 With a who has publicly supported the debunk... 0
7 I was standing by the airport exit, debating w... 0
8 If movies were trying to be more realistic, pe... 0
9 Eighteen years ago, on New Year’s Eve, David F... 5
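As an optional finishing touch, the numeric topic codes can be mapped to human-readable labels with a dictionary and .map(). The label strings below are only guesses based on each topic's top words; a stand-in dataframe is used here so the sketch is self-contained:

```python
import pandas as pd

# Hypothetical labels guessed from each topic's top 15 words; adjust to taste
topic_labels = {0: 'health', 1: 'election', 2: 'healthcare', 3: 'politics',
                4: 'campaign', 5: 'music', 6: 'education'}

# Stand-in for the npr dataframe's Topic column produced above
df = pd.DataFrame({'Topic': [1, 1, 3, 6, 5, 0]})
df['Topic Label'] = df['Topic'].map(topic_labels)
print(df)
```

On the real data this would be `npr['Topic Label'] = npr['Topic'].map(topic_labels)`.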

Great work!