Makers, scientists, influencers and many other people share their ideas, products and innovations on Twitter. Finding information about a specific topic in such a huge network is hard. Our aim is to find users who tweet about the same topic, so that people interested in the same subjects can be brought together as a community. In this project we focused on maker communities and influencers in computer-science-related areas such as ML, robotics, 3D printing and Arduino. We worked on 1,118 users and approximately 3,250,000 tweets.
There are established methods such as LDA and NMF for this problem; we want to investigate KL-BNMF in addition and see whether it is an applicable solution candidate.
The language of Twitter is generally close to everyday speech. People share their ideas and emotions at any time of the day. Unlike ordinary text, tweets can include hashtags, emoticons, pictures, videos, GIFs, URLs, etc. Even the plain-text part of a tweet may contain misspelled words. Apart from this, a single user may tweet in several languages; for example, one tweet may be in Turkish and another in English. So we need to clean the tweets up before using them. The applied steps are: removing URLs and mentions, tokenization and lowercasing, stop-word removal, filtering out accounts that are not mainly in English or that keep too few tokens, and stemming.
Importing the necessary libraries.
In [2]:
import langid
import logging
import nltk
import numpy as np
import re
import os
import sys
import time
from collections import defaultdict
from string import digits
import pyLDAvis.gensim
import pyLDAvis.sklearn
from gensim import corpora, models, similarities, matutils
import networkx as nx
import string
import math
import pickle
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.datasets import fetch_20newsgroups
from sklearn.cluster import KMeans
from collections import Counter
import scipy.io
from scipy import sparse
In [2]:
# Helpers to count words
def totalWordCount(tList):
    # Sums len() over the documents (word counts once the documents are tokenized)
    totalWords = 0
    for tt in tList:
        totalWords += len(tt)
    return totalWords

def totalWordCount2(corpus):
    # corpus: list of bag-of-words documents, each a list of (word_id, count) tuples
    totalWords = 0
    for corp in corpus:
        for c in corp:
            totalWords += c[1]
    return totalWords
We have already collected the tweets of 900 random followers of TRTWorld's Twitter account. You can also find the Twitter API code we used in this repo.
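For reference, a collection along these lines can be sketched with tweepy 3.x as below; the placeholder credentials, the follower sample handling and the file naming are illustrative assumptions, not the exact script in the repo:

import tweepy

# Placeholder credentials -- replace with your own application keys
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Take a sample of followers of @TRTWorld and dump each user's timeline to a file
follower_ids = api.followers_ids(screen_name="TRTWorld")
for uid in follower_ids[:900]:
    try:
        timeline = api.user_timeline(user_id=uid, count=200, tweet_mode="extended")
    except tweepy.TweepError:
        continue  # skip protected or suspended accounts
    with open("tweets3/%d.txt" % uid, "w", encoding="utf-8") as f:
        f.write(" ".join(status.full_text for status in timeline))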
Here we read each user's tweets from file and keep them in a list (tweetsList) if the file contains more than 2,000 words.
In [5]:
tweetsList = []
userList = []
for file in os.listdir("tweets3"):
    path = "tweets3\\" + file
    f = open(path, 'r', encoding='utf-8')
    fread = f.read()
    if (len(fread.split()) > 2000):
        tweetsList.append(fread)
        userList.append(file[0:len(file)-4])
    f.close()
print("Number of Users: %d" %(len(tweetsList)))
print("Total Number of Words: %d" %(totalWordCount(tweetsList)))
In [6]:
# Remove mentions and URLs (http and https) from the raw text
def remove_urls(text):
    text = re.sub(r"(?:\@|http?\://)\S+", "", text)
    text = re.sub(r"(?:\@|https?\://)\S+", "", text)
    return text

def doc_rm_urls():
    return [ remove_urls(tweets) for tweets in tweetsList]

tweetsList = doc_rm_urls()
print("Total Number of Words: %d" %(totalWordCount(tweetsList)))
Tokenization is the process of splitting text into words, phrases or other meaningful elements called tokens. We use words as our tokens. To better process the text and to create a dictionary and a corpus, we tokenized all the tweets and converted them to lower case. We used the nltk RegexpTokenizer for this.
In [7]:
# This returns a list of tokens / single words for each user
def tokenize_tweet():
    '''
    Tokenizes the raw text of each document
    '''
    tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    return [ tokenizer.tokenize(t.lower()) for t in tweetsList]

tweetsList = tokenize_tweet()
print("Total Number of Words: " + str(totalWordCount(tweetsList)))
Stop words are usually the most common words in a language. Being so common makes them uninformative, and sometimes misleading, for decision making, so they are generally filtered out. We used the nltk English stop-word list, added some Twitter-specific words we determined ourselves, and also dropped single-character tokens.
In [8]:
# Remove stop words
stoplist_tw=['amp','get','got','hey','hmm','hoo','hop','iep','let','ooo','par',
             'pdt','pln','pst','wha','yep','yer','aest','didn','nzdt','via',
             'one','com','new','like','great','make','top','awesome','best',
             'good','wow','yes','say','yay','would','thanks','thank','going',
             'new','use','should','could','best','really','see','want','nice',
             'while','know', 'rt', 'http', 'https']
stoplist = set(nltk.corpus.stopwords.words("english") + stoplist_tw)
# If we ever filter by token length, be careful about tokens like '3d'
tweetsList = [[token for token in tweets if token not in stoplist and len(token) > 1]
              for tweets in tweetsList]
print("Total Number of Words: " + str(totalWordCount(tweetsList)))
In [9]:
# Delete Accounts whose tweets are not majorly in English
tweetsList2 = [tweets for tweets in tweetsList if langid.classify(' '.join(tweets))[0] == 'en']
print("Number of Users: " + str(len(tweetsList2)))
print("Total Number of Words: " + str(totalWordCount(tweetsList2)))
After all this preprocessing we have removed many words from the original tweets. Some accounts that are probably not mainly in English, but still contain enough English words to pass the language filter, were affected more yet remained in the corpus. To eliminate these misleading accounts we deleted those with 200 or fewer remaining tokens.
In [10]:
# Delete accounts with 200 or fewer remaining tokens
tweetsList2 = [tweets for tweets in tweetsList2 if len(tweets) > 200]
print("Number of Users: " + str(len(tweetsList2)))
In [11]:
# Recover the user ids of the accounts that survived the filters
userList2 = []
for i in range(len(tweetsList2)):
    for j in range(i, len(tweetsList)):
        if tweetsList2[i] == tweetsList[j]:
            userList2.append(userList[j])
            break
For grammatical reasons, documents use different forms of a word, such as organize, organizes, and organizing. Additionally, there are families of derivationally related words with similar meanings, such as democracy, democratic, and democratization. The goal of stemming is to reduce inflectional forms, and sometimes derivationally related forms, of a word to a common base form. The nltk library offers mainly three stemmers for English: Lancaster, Porter and Snowball. We chose the Snowball stemmer because it uses a more refined algorithm than the Porter stemmer (Snowball is also called Porter2) and is less aggressive than Lancaster.
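As a quick illustration of the difference (an aside, not part of the original pipeline), the three nltk stemmers can be compared on a few words:

import nltk

porter = nltk.stem.PorterStemmer()
snowball = nltk.stem.SnowballStemmer('english')
lancaster = nltk.stem.LancasterStemmer()

# Compare the three stemmers; Lancaster typically cuts words down the most
for word in ['organizing', 'democratization', 'robotics', 'printing']:
    print(word, porter.stem(word), snowball.stem(word), lancaster.stem(word))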
In [12]:
# Porter Stemmer and Snowball Stemmer (Porter2) - We used the Snowball Stemmer
# http://stackoverflow.com/questions/10554052/what-are-the-major-differences-and-benefits-of-porter-and-lancaster-stemming-alg
sno = nltk.stem.SnowballStemmer('english')
tweetsList2 = [[sno.stem(token) for token in tweets]
               for tweets in tweetsList2]
print("Total Number of Words: " + str(totalWordCount(tweetsList2)))
To use the preprocessed Twitter data with topic modeling algorithms, we need to put it into a shape they understand. The bag-of-words representation is a perfect fit for these kinds of algorithms. We first created a dictionary that assigns an integer id to every word in our preprocessed Twitter data, and then created the corpus. Each element of the corpus corresponds to one Twitter account and consists of (word id, count) tuples giving how many times each dictionary word occurs in that account's tweets. We used the very useful Python library Gensim to create the dictionary and the corpus.
In [13]:
# Build a dictionary where for each document each word has its own id
dictionary = corpora.Dictionary(tweetsList2)
dictionary.compactify()
# Build the corpus: vectors with occurence of each word for each document
# convert tokenized documents to vectors
corpus = [dictionary.doc2bow(tweets) for tweets in tweetsList2]
print(dictionary)
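To make the (word id, count) structure concrete, here is a small check (not from the original notebook) that prints a few entries of the first document together with the words they map to:

# Show a few (word_id, count) pairs of the first account
for word_id, count in corpus[0][:5]:
    print(word_id, dictionary[word_id], count)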
In [16]:
# Remove words that appear at most 10 times in the whole corpus
dictCtr = np.zeros(len(dictionary))
for c in corpus:
    for tuples in c:
        dictCtr[tuples[0]] = dictCtr[tuples[0]] + tuples[1]
badids = []
for i in range(len(dictCtr)):
    if dictCtr[i] < 11:
        badids.append(i)
dictionary.filter_tokens(bad_ids=badids)
dictionary.compactify()
corpus = [dictionary.doc2bow(tweets) for tweets in tweetsList2]
print(dictionary)
In [17]:
# Rebuild each document as a plain string (the sklearn vectorizers expect raw text)
tweetList = []
for c in corpus:
    s = ''
    for tokens in c:
        s = s + ((dictionary[tokens[0]]+' ') * tokens[1])
    tweetList.append(s)
print("Number of Users: %d" %(len(tweetList)))
print("Total Number of Words: %d" %(totalWordCount2(corpus)))
As an alternative representation, we also trained a Word2Vec model on the preprocessed tweets; in the next cells the word vectors are clustered with k-means into 2,000 clusters, every word is replaced by the word closest to its cluster center, and the counts are normalized by cluster size.
In [18]:
wordModel = models.Word2Vec(tweetsList2, size=30, window=5, min_count=11, workers=4)
print(wordModel)
In [19]:
#print(len(wordModel.wv.index2word))
vocab = wordModel.wv.index2word
wordvectors = wordModel.wv[vocab]
In [20]:
kmeansList = np.asarray(wordvectors).astype(np.float64)
kmeans = KMeans(n_clusters=2000).fit(kmeansList)
In [21]:
clusters = {}
labels = {}
centers = []
inVocab = {}
for i in range(0,2000):
    clusters[i] = []
for i, label in enumerate(kmeans.labels_):
    clusters[label].append(vocab[i])
    labels[vocab[i]] = label
for c in kmeans.cluster_centers_:
    centers.append(wordModel.similar_by_vector(c)[0][0])
for v in vocab:
    inVocab[v] = 1
In [22]:
# Change words in tweets with their cluster center words
tweets2 = [[centers[labels[r]] for r in row if r in inVocab]
for row in tweetsList2]
In [23]:
# Build a dictionary where for each document each word has its own id
dictionaryVW = corpora.Dictionary(tweets2)
dictionaryVW.compactify()
# Build the corpus: vectors with occurence of each word for each document
# convert tokenized documents to vectors
corpusVW = [dictionaryVW.doc2bow(tweets) for tweets in tweets2]
print(dictionaryVW)
In [24]:
# Normalize word counts by dividing by the number of words in the corresponding cluster
corpusVW = [[(r[0], int(math.ceil(r[1]/ len(clusters[labels[dictionaryVW[r[0]]]]))) ) for r in row]
            for row in corpusVW]
In [25]:
tweetListVW = []
for c in corpusVW:
    s = ''
    for tokens in c:
        s = s + ((dictionaryVW[tokens[0]]+' ') * tokens[1])
    tweetListVW.append(s)
print("Number of Users: %d" %(len(tweetListVW)))
print("Total Number of Words: %d" %(totalWordCount2(corpusVW)))
This code is converted from Cemgil's MATLAB code (https://www.cmpe.boun.edu.tr/~cemgil/bnmf/index.html). In this Bayesian NMF model the observed count matrix is assumed to be Poisson-distributed with mean T·V, where the factors T and V have Gamma priors; the functions below perform variational Bayes inference for this model.
In [5]:
# %load gnmf_solvebynewton.py
from __future__ import division
import numpy as np
import scipy as sp
from scipy import special
import numpy.matlib as npm  # matlib for repmat; aliased so it does not clash with the dimension M below

def gnmf_solvebynewton(c, a0 = None):
    # Solve log(a) - psi(a) + 1 - c = 0 for a by Newton's method
    if a0 is None:
        a0 = 0.1 * np.ones(np.shape(c))
    M, N = np.shape(a0)
    if len(np.shape(c)) == 0:
        Mc, Nc = 1, 1
    else:
        Mc, Nc = np.shape(c)
    a = None
    cond = 0
    if (M == Mc and N == Nc):
        a = a0
        cond = 1
    elif (Mc == 1 and Nc > 1):
        cond = 2
        a = a0[0,:]
    elif (Mc > 1 and Nc == 1):
        cond = 3
        a = a0[:,0]
    elif (Mc == 1 and Nc == 1):
        cond = 4
        a = a0[0,0]
    a2 = None
    for index in range(10):
        a2 = a - (np.log(a) - special.polygamma(0,a) + 1 - c) / (1/a - special.polygamma(1,a))
        # Halve any iterate that went negative instead of accepting it
        idx = np.where(a2 < 0)
        if len(idx[0]) > 0:
            if isinstance(a, float):
                a2 = a / 2
            else:
                a2[idx] = a[idx] / 2
        a = a2
    if(cond == 2):
        a = npm.repmat(a, M, 1)
    elif(cond == 3):
        a = npm.repmat(a, 1, N)
    elif(cond == 4):
        a = a * np.ones([M,N])
    return a
In [6]:
# %load gnmf_vb_poisson_mult_fast.py
from __future__ import division
import math
import numpy as np
import scipy as sp
from scipy import special
import numpy.matlib as npm  # matlib for repmat; aliased so it does not clash with the mask matrix M below

def gnmf_vb_poisson_mult_fast(x,
                              a_tm,
                              b_tm,
                              a_ve,
                              b_ve,
                              EPOCH = 1000,
                              Method = 'vb',
                              Update = np.inf,
                              tie_a_ve = 'clamp',
                              tie_b_ve = 'clamp',
                              tie_a_tm = 'clamp',
                              tie_b_tm = 'clamp',
                              print_period = 500
                              ):
    # Result initialization
    g = dict()
    g['E_T'] = None
    g['E_logT'] = None
    g['E_V'] = None
    g['E_logV'] = None
    g['Bound'] = None
    g['a_ve'] = None
    g['b_ve'] = None
    g['a_tm'] = None
    g['b_tm'] = None
    logm = np.vectorize(math.log)
    W = x.shape[0]
    K = x.shape[1]
    I = b_tm.shape[1]
    M = ~np.isnan(x)          # mask of observed entries
    X = np.zeros(x.shape)
    X[M] = x[M]
    t_init = np.random.gamma(a_tm, b_tm/a_tm)
    v_init = np.random.gamma(a_ve, b_ve/a_ve)
    L_t = t_init
    L_v = v_init
    E_t = t_init
    E_v = v_init
    Sig_t = t_init
    Sig_v = v_init
    B = np.zeros([1,EPOCH])
    gammalnX = special.gammaln(X+1)
    for e in range(1, EPOCH+1):
        LtLv = L_t.dot(L_v)
        tmp = X / (LtLv)
        # check transpose
        Sig_t = L_t * (tmp.dot(L_v.T))
        Sig_v = L_v * (L_t.T.dot(tmp))
        alpha_tm = a_tm + Sig_t
        beta_tm = 1/((a_tm/b_tm) + M.dot(E_v.T))
        E_t = alpha_tm * (beta_tm)
        alpha_ve = a_ve + Sig_v
        beta_ve = 1/((a_ve/b_ve) + E_t.T.dot(M))
        E_v = alpha_ve * (beta_ve)
        # Compute the bound
        if(e % 10 == 1):
            print("*", end='')
        if(e % print_period == 1 or e == EPOCH):
            g['E_T'] = E_t
            g['E_logT'] = logm(L_t)
            g['E_V'] = E_v
            g['E_logV'] = logm(L_v)
            g['Bound'] = -np.sum(np.sum(M * (g['E_T'].dot(g['E_V'])) + gammalnX))\
                + np.sum(np.sum(-X * ( ((L_t * g['E_logT']).dot(L_v) + L_t.dot(L_v * g['E_logV']))/(LtLv) - logm(LtLv) ) ))\
                + np.sum(np.sum((-a_tm/b_tm)* g['E_T'] - special.gammaln(a_tm) + a_tm * logm(a_tm /b_tm)))\
                + np.sum(np.sum((-a_ve/b_ve)* g['E_V'] - special.gammaln(a_ve) + a_ve * logm(a_ve /b_ve)))\
                + np.sum(np.sum( special.gammaln(alpha_tm) + alpha_tm * logm(beta_tm) + 1))\
                + np.sum(np.sum(special.gammaln(alpha_ve) + alpha_ve * logm(beta_ve) + 1 ))
            g['a_ve'] = a_ve
            g['b_ve'] = b_ve
            g['a_tm'] = a_tm
            g['b_tm'] = b_tm
            print()
            print( g['Bound'], a_ve.flatten()[0], b_ve.flatten()[0], a_tm.flatten()[0], b_tm.flatten()[0])
        if (e == EPOCH):
            break
        L_t = np.exp(special.psi(alpha_tm)) * beta_tm
        L_v = np.exp(special.psi(alpha_ve)) * beta_ve
        Z = None
        if( e > Update):
            # Update the hyperparameters unless they are clamped
            if(not tie_a_tm == 'clamp'):
                Z = (E_t / b_tm) - (logm(L_t) - logm(b_tm))
            if(tie_a_tm == 'free'):
                a_tm = gnmf_solvebynewton(Z, a0=a_tm)
            elif(tie_a_tm == 'rows'):
                a_tm = gnmf_solvebynewton(np.sum(Z,0)/W, a0=a_tm)
            elif(tie_a_tm == 'cols'):
                a_tm = gnmf_solvebynewton(np.sum(Z,1)/I, a0=a_tm)
            elif(tie_a_tm == 'tie_all'):
                #print(np.sum(Z)/(W * I))
                #print(a_tm)
                a_tm = gnmf_solvebynewton(np.sum(Z)/(W * I), a0=a_tm)
            if(tie_b_tm == 'free'):
                b_tm = E_t
            elif(tie_b_tm == 'rows'):
                b_tm = npm.repmat(np.sum(a_tm * E_t,0)/np.sum(a_tm,0),W,1)
            elif(tie_b_tm == 'cols'):
                b_tm = npm.repmat(np.sum(a_tm * E_t,1)/np.sum(a_tm,1),1,I)
            elif(tie_b_tm == 'tie_all'):
                b_tm = (np.sum(a_tm*E_t)/ np.sum(a_tm)) * np.ones([W,I])
            if(not tie_a_ve == 'clamp'):
                Z = (E_v / b_ve) - (logm(L_v) - logm(b_ve))
            if(tie_a_ve == 'free'):
                a_ve = gnmf_solvebynewton(Z, a0=a_ve)
            elif(tie_a_ve == 'rows'):
                a_ve = gnmf_solvebynewton(np.sum(Z,0)/I, a0=a_ve)
            elif(tie_a_ve == 'cols'):
                a_ve = gnmf_solvebynewton(np.sum(Z,1)/K, a0=a_ve)
            elif(tie_a_ve == 'tie_all'):
                a_ve = gnmf_solvebynewton(np.sum(Z)/(I * K), a0=a_ve)
            if(tie_b_ve == 'free'):
                b_ve = E_v
            elif(tie_b_ve == 'rows'):
                b_ve = npm.repmat(np.sum(a_ve * E_v,0)/np.sum(a_ve,0),I,1)
            elif(tie_b_ve == 'cols'):
                b_ve = npm.repmat(np.sum(a_ve * E_v,1)/np.sum(a_ve,1),1,K)
            elif(tie_b_ve == 'tie_all'):
                b_ve = (np.sum(a_ve*E_v)/ np.sum(a_ve)) * np.ones([I,K])
    return g
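Before applying the routine to the Twitter data, here is a minimal sanity check (not part of the original notebook) that draws a small synthetic matrix from the Gamma-Poisson model assumed above, similar to the commented-out generator in the later cells, and factorizes it back:

# Generate a small synthetic count matrix from the assumed Gamma-Poisson model
np.random.seed(0)
W, K, I = 50, 40, 5
a_tm = 10 * np.ones([W, I])
b_tm = np.ones([W, I])
a_ve = np.ones([I, K])
b_ve = 10 * np.ones([I, K])
T = np.random.gamma(a_tm, b_tm)
V = np.random.gamma(a_ve, b_ve)
x = np.random.poisson(T.dot(V)).astype(float)

# Factorize it back with the variational Bayes routine defined above
res = gnmf_vb_poisson_mult_fast(x, a_tm, b_tm, a_ve, b_ve,
                                EPOCH=200, Update=10,
                                tie_a_ve='tie_all', tie_b_ve='tie_all',
                                tie_a_tm='tie_all', tie_b_tm='tie_all')
print(res['E_T'].shape, res['E_V'].shape)   # (50, 5) and (5, 40)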
In [8]:
n_topics = 17
n_top_words = 7
n_top_topics = 3

def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        sm = sum(topic)
        print("Topic #%d:" % topic_idx)
        for i in topic.argsort()[:-n_top_words - 1:-1]:
            print("(%s, %lf) " %(feature_names[i], topic[i]/sm), end='')
        print()
        print()

def print_top_words2(H, feature_names, n_top_words):
    for topic_idx, topic in enumerate(H):
        sm = sum(topic)
        print("Topic #%d:" % topic_idx)
        for i in topic.argsort()[:-n_top_words - 1:-1]:
            print("(%s, %lf) " %(feature_names[i], topic[i]/sm), end='')
        print()
        print()

def print_top_topics(doc_topic, user, n_top_topics):
    for i in doc_topic[user].argsort()[:-n_top_topics - 1:-1]:
        print("(%d, %lf) " %(i, doc_topic[user][i]), end='')

def topicAndWords(model, doc_topic, user, feature_names):
    model_comp = model.components_
    for i in doc_topic[user].argsort()[:-3 - 1:-1]:
        print("(%d, %lf) " %(i, doc_topic[user][i]), end='')
        sm = sum(model_comp[i])
        for j in model_comp[i].argsort()[:-3 - 1:-1]:
            print("(%s, %lf) " %(feature_names[j], model_comp[i][j]/sm), end='')
        print()
In [8]:
n_samples = len(tweetList)
n_features = len(dictionary)
# Use tf-idf features for NMF.
print("Extracting tf-idf features for NMF...")
tfidf_vectorizer = TfidfVectorizer(max_features=n_features)
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(tweetList)
print("done in %0.3fs." % (time() - t0))
# Fit the NMF model
print("Fitting the NMF model with tf-idf features, " "n_samples=%d and n_features=%d..." % (n_samples, n_features))
t0 = time()
nmf = NMF(n_components=n_topics, random_state=1, alpha=.1, l1_ratio=.5).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model:")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
#http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/sklearn.ipynb#topic=0&lambda=1&term=
nmf_vis_data = pyLDAvis.sklearn.prepare(nmf, tfidf, tfidf_vectorizer)
pyLDAvis.display(nmf_vis_data)
Out[8]:
In [20]:
n_samples = len(tweetListVW)
n_features = len(dictionaryVW)
# Use tf-idf features for NMF.
print("Extracting tf-idf features for NMF...")
tfidf_vectorizerWV = TfidfVectorizer(max_features=n_features)
t0 = time()
tfidfWV = tfidf_vectorizerWV.fit_transform(tweetListVW)
print("done in %0.3fs." % (time() - t0))
# Fit the NMF model
print("Fitting the NMF model with tf-idf features, " "n_samples=%d and n_features=%d..." % (n_samples, n_features))
t0 = time()
nmfWV = NMF(n_components=n_topics, random_state=1, alpha=.1, l1_ratio=.5).fit(tfidfWV)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model:")
tfidf_feature_names = tfidf_vectorizerWV.get_feature_names()
print_top_words(nmfWV, tfidf_feature_names, n_top_words)
#http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/sklearn.ipynb#topic=0&lambda=1&term=
nmf_vis_data = pyLDAvis.sklearn.prepare(nmfWV, tfidfWV, tfidf_vectorizerWV)
pyLDAvis.display(nmf_vis_data)
Out[20]:
In [9]:
n_samples = len(tweetList)
n_features = len(dictionary)
# Use tf (raw count) features for KL-BNMF.
print("Extracting tf features for NMF...")
tf_vectorizer = CountVectorizer(max_features=n_features)
t0 = time()
tf = tf_vectorizer.fit_transform(tweetList)
#tfidf_vectorizer = TfidfVectorizer(max_features=n_features)
#t0 = time()
#tfidf = tfidf_vectorizer.fit_transform(tweetList)
print("done in %0.3fs." % (time() - t0))
In [11]:
tfDense = tf.todense()
tfDense2 = tfDense
#idx = np.where(tfidfDense2>100)
#print(len(idx[0]))
W = tf.shape[0]
K = tf.shape[1]
I = n_topics
a_tm = 10 * np.ones([W,I])
b_tm = np.ones([W,I])
a_ve = np.ones([I,K])
b_ve = 10 * np.ones([I,K])
#T = np.random.gamma(a_tm,b_tm)
#V = np.random.gamma(a_ve,b_ve)
#x = np.random.poisson(T.dot(V))
#idx = np.where(x>100)
#print(len(idx[0]))
t0 = time()
klbnmf = gnmf_vb_poisson_mult_fast(np.asarray(tfDense2),a_tm,b_tm,a_ve,b_ve,
EPOCH=500,
Update =10,
tie_a_ve='tie_all',
tie_b_ve='tie_all',
tie_a_tm='tie_all',
tie_b_tm='tie_all')
print("done in %0.3fs." % (time() - t0))
We can directly obtain the word distributions of the topics from the factorized matrices produced by the KL-BNMF algorithm; the output is shown below. However, to visualize the output we wanted to use the LDAvis library. To do so, we reconstructed our corpus matrix by multiplying the KL-BNMF outputs (klbnmf['E_T'] and klbnmf['E_V']) and fed it into the regular scikit-learn NMF, then passed the resulting NMF model to LDAvis to visualize the word-topic distribution. Inspecting both results, there is almost no difference, and with LDAvis we obtain a visualization. After this example we follow the same visualization process for all the outputs, but if you prefer you can use the function below to obtain the word-topic distribution directly.
In [12]:
print_top_words2(klbnmf['E_V'], tf_vectorizer.get_feature_names(), n_top_words)
In [14]:
tfKL = np.dot(klbnmf['E_T'],klbnmf['E_V'])
# Fit the NMF model
print("Fitting the KLBNMF model with tf-idf(klbnmf['E_T']*klbnmf['E_V']) features, " "n_samples=%d and n_features=%d..." % (n_samples, n_features))
t0 = time()
klbnmf2tf = NMF(n_components=n_topics, random_state=1, alpha=.1, l1_ratio=.5).fit(tfKL)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model:")
tfidf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(klbnmf2tf, tfidf_feature_names, n_top_words)
tfKLsparse = sparse.csr_matrix(tfKL)
#http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/sklearn.ipynb#topic=0&lambda=1&term=
nmf_vis_data = pyLDAvis.sklearn.prepare(klbnmf2tf, tfKLsparse, tf_vectorizer)
pyLDAvis.display(nmf_vis_data)
Out[14]:
Here, in the tf-idf approach, we needed to multiply the corpus values by 10,000 to get results: the model assumes Poisson-distributed counts, so it expects integer-scale values, while tf-idf weights are small fractions. You can compare the results with the term-frequency approach, which has integer values inherently, so there we did not need to scale the values at all.
In [9]:
tfidfDense = tfidf.todense()
tfidfDense2 = tfidfDense*10000
#idx = np.where(tfidfDense2>100)
#print(len(idx[0]))
W = tfidf.shape[0]
K = tfidf.shape[1]
I = n_topics
a_tm = 1 * np.ones([W,I])
b_tm = np.ones([W,I])
a_ve = np.ones([I,K])
b_ve = 8 * np.ones([I,K])
#T = np.random.gamma(a_tm,b_tm)
#V = np.random.gamma(a_ve,b_ve)
#x = np.random.poisson(T.dot(V))
#idx = np.where(x>100)
#print(len(idx[0]))
t0 = time()
klbnmf = gnmf_vb_poisson_mult_fast(np.asarray(tfidfDense2),a_tm,b_tm,a_ve,b_ve,
EPOCH=500,
Update =10,
tie_a_ve='tie_all',
tie_b_ve='tie_all',
tie_a_tm='tie_all',
tie_b_tm='tie_all')
print("done in %0.3fs." % (time() - t0))
In [11]:
tfidfKL = np.dot(klbnmf['E_T'],klbnmf['E_V'])
# Fit the NMF model
print("Fitting the KLBNMF model with tf-idf(klbnmf['E_T']*klbnmf['E_V']) features, " "n_samples=%d and n_features=%d..." % (n_samples, n_features))
t0 = time()
klbnmf2 = NMF(n_components=n_topics, random_state=1, alpha=.1, l1_ratio=.5).fit(tfidfKL)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model:")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(klbnmf2, tfidf_feature_names, n_top_words)
tfidfKLsparse = sparse.csr_matrix(tfidfKL)
#http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/sklearn.ipynb#topic=0&lambda=1&term=
nmf_vis_data = pyLDAvis.sklearn.prepare(klbnmf2, tfidfKLsparse, tfidf_vectorizer)
pyLDAvis.display(nmf_vis_data)
Out[11]:
In [21]:
tfidfDenseWV = tfidfWV.todense()
tfidfDense2WV = tfidfDenseWV*10000
#idx = np.where(tfidfDense2>100)
#print(len(idx[0]))
W = tfidfWV.shape[0]
K = tfidfWV.shape[1]
I = n_topics
a_tm = 1 * np.ones([W,I])
b_tm = np.ones([W,I])
a_ve = np.ones([I,K])
b_ve = 8 * np.ones([I,K])
#T = np.random.gamma(a_tm,b_tm)
#V = np.random.gamma(a_ve,b_ve)
#x = np.random.poisson(T.dot(V))
#idx = np.where(x>100)
#print(len(idx[0]))
klbnmfWV = gnmf_vb_poisson_mult_fast(np.asarray(tfidfDense2WV),a_tm,b_tm,a_ve,b_ve,
EPOCH=1000,
Update =10,
tie_a_ve='tie_all',
tie_b_ve='tie_all',
tie_a_tm='tie_all',
tie_b_tm='tie_all')
In [22]:
tfidfKLWV = np.dot(klbnmfWV['E_T'],klbnmfWV['E_V'])
# Fit the NMF model
print("Fitting the KLBNMF model with tf-idf(klbnmf['E_T']*klbnmf['E_V']) features, " "n_samples=%d and n_features=%d..." % (n_samples, n_features))
t0 = time()
klbnmfWV2 = NMF(n_components=n_topics, random_state=1, alpha=.1, l1_ratio=.5).fit(tfidfKLWV)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model:")
tfidf_feature_names = tfidf_vectorizerWV.get_feature_names()
print_top_words(klbnmfWV2, tfidf_feature_names, n_top_words)
tfidfKLsparseWV = sparse.csr_matrix(tfidfKLWV)
#http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/sklearn.ipynb#topic=0&lambda=1&term=
nmf_vis_data = pyLDAvis.sklearn.prepare(klbnmfWV2, tfidfKLsparseWV, tfidf_vectorizerWV)
pyLDAvis.display(nmf_vis_data)
Out[22]:
In [13]:
ids = []
chosenNames = ['andrewyng', 'radbuzzz', 'RoboticsEU', 'karpathy', 'polar3d', 'thearduinoguy']
# @andrewyng => 216939636
ids.append(userList2.index('216939636'))
# @radbuzzz => 28953366
ids.append(userList2.index('28953366'))
# @RoboticsEU => 335419621
ids.append(userList2.index('335419621'))
# @karpathy => 33836629
ids.append(userList2.index('33836629'))
# @polar3d => 2875670213
ids.append(userList2.index('2875670213'))
# @thearduinoguy => 15392736
ids.append(userList2.index('15392736'))
In [23]:
from operator import itemgetter
klbnmf2Topics = klbnmf2.transform(tfidfKLsparse)
klbnmfWV2Topics = klbnmfWV2.transform(tfidfKLsparseWV)
In [26]:
for i, id in enumerate(ids):
    print(chosenNames[i])
    print("KLBNMF")
    topicAndWords(klbnmf2, klbnmf2Topics, id, tfidf_vectorizer.get_feature_names())
    print()
    print("KLBNMF (w2v)")
    topicAndWords(klbnmfWV2, klbnmfWV2Topics, id, tfidf_vectorizerWV.get_feature_names())
    print()