Experimenting with Gensim/Word2Vec on tweets collected by the folks at the discursive project, and also making use of the word2vec model built from ~400 million Twitter posts by Frédéric Godin (available at http://www.fredericgodin.com/software/).
In [1]:
import gensim
import pymongo
import json
import numpy as np
import pandas as pd
from pymongo import MongoClient
In [8]:
import requests
In [55]:
from gensim import corpora, models, similarities
In [2]:
mongoClient = MongoClient()
db = mongoClient.data4democracy
tweets_collection = db.tweets
In [19]:
from gensim.models.word2vec import Word2Vec
from gensim.parsing.preprocessing import STOPWORDS
from gensim.utils import smart_open, simple_preprocess
def tokenize(text):
    return [token for token in simple_preprocess(text) if token not in STOPWORDS]
In [5]:
tweets_model = Word2Vec.load_word2vec_format('../../../../Volumes/SDExternal2/word2vec_twitter_model/word2vec_twitter_model.bin', binary=True, unicode_errors='ignore')
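(Aside: in gensim 1.0+ this loader moved to KeyedVectors; a minimal sketch of the equivalent call, assuming the same local path, would be:)
#equivalent load in newer gensim versions (sketch only, not run here)
from gensim.models import KeyedVectors
tweets_model = KeyedVectors.load_word2vec_format(
    '../../../../Volumes/SDExternal2/word2vec_twitter_model/word2vec_twitter_model.bin',
    binary=True, unicode_errors='ignore')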
In [14]:
#now calculate word similarities on the Twitter data, e.g.
tweets_model.most_similar('jewish')
Out[14]:
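most_similar returns a list of (word, cosine similarity) pairs. A couple of hedged examples of further queries against the pretrained model (the example terms are assumed to be in its vocabulary):
#topn controls how many neighbours are returned
tweets_model.most_similar('jewish', topn=5)
#analogy-style queries combine positive and negative terms
tweets_model.most_similar(positive=['paris', 'germany'], negative=['france'], topn=3)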
In [9]:
#to remind myself what a tweet is like:
r = requests.get('https://s3-us-west-2.amazonaws.com/discursive/2017/1/10/18/tweets-25.json')
In [10]:
tweets_collection = r.json()
print(tweets_collection[0])
#for text analysis, the 'text' field is the one of interest
In [13]:
#the tweets text are in the 'text' field
print(tweets_collection[0]['text'])
The following is a bit of experimentation/learning with gensim, following along with some tutorials on the gensim site to vectorize text, compute TF-IDF, etc.
In [15]:
tweets_text_documents = [x['text'] for x in tweets_collection]
In [16]:
#quick check that the mapping was done correctly
tweets_text_documents[0]
Out[16]:
In [20]:
#quick check of the tokenize function -- stopword removal included
tokenize(tweets_text_documents[0])
Out[20]:
In [36]:
tokenized_tweets = [[word for word in tokenize(x) if word != 'rt'] for x in tweets_text_documents]
In [37]:
tokenized_tweets[0]
Out[37]:
In [38]:
#construct a dictionary of the words in the tweets using gensim
# the dictionary is a mapping between words and their ids
tweets_dictionary = corpora.Dictionary(tokenized_tweets)
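A quick sketch of optional dictionary housekeeping (gensim's Dictionary can prune very rare or very common tokens before building the corpus; not applied here since the collection is tiny):
#number of unique tokens in this small collection
print(len(tweets_dictionary))
#optional pruning of very rare / very common tokens (skipped for this tiny corpus):
#tweets_dictionary.filter_extremes(no_below=2, no_above=0.5)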
In [44]:
#save the dictionary for future reference
tweets_dictionary.save('temp/tweets_dictionary.dict')
In [49]:
#just a quick view of words and ids
dict(list(tweets_dictionary.token2id.items())[0:20])
Out[49]:
In [50]:
#convert tokenized documents to vectors
# compile the corpus (bag-of-words vectors: counts of how many times each word appears)
tweet_corpus = [tweets_dictionary.doc2bow(x) for x in tokenized_tweets]
corpora.MmCorpus.serialize('temp/tweets_corpus.mm', tweet_corpus) # save for future ref
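The saved dictionary and corpus can be reloaded later without re-tokenizing; a quick sketch, assuming the temp/ paths above:
#reload the saved artefacts in a later session
loaded_dictionary = corpora.Dictionary.load('temp/tweets_dictionary.dict')
loaded_corpus = corpora.MmCorpus('temp/tweets_corpus.mm')
print(loaded_dictionary, loaded_corpus)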
In [51]:
tweets_tfidf_model = gensim.models.TfidfModel(tweet_corpus, id2word = tweets_dictionary)
In [53]:
tweets_tfidf_model[tweet_corpus]
Out[53]:
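To see what the TF-IDF transform actually produces, the weights for a single tweet can be mapped back to tokens; a small sketch:
#tf-idf weights for the first tweet, highest-weighted tokens first
first_tweet_tfidf = tweets_tfidf_model[tweet_corpus[0]]
sorted(((tweets_dictionary[token_id], round(weight, 3)) for token_id, weight in first_tweet_tfidf),
       key=lambda pair: pair[1], reverse=True)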
In [56]:
#Create similarity matrix of all tweets
'''note from gensim docs: The class similarities.MatrixSimilarity is only appropriate when
the whole set of vectors fits into memory. For example, a corpus of one million documents
would require 2GB of RAM in a 256-dimensional LSI space, when used with this class.
Without 2GB of free RAM, you would need to use the similarities.Similarity class.
This class operates in fixed memory, by splitting the index across multiple files on disk,
called shards. It uses similarities.MatrixSimilarity and similarities.SparseMatrixSimilarity internally,
so it is still fast, although slightly more complex.'''
index = similarities.MatrixSimilarity(tweets_tfidf_model[tweet_corpus])
index.save('temp/tweetsSimilarity.index')
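For a much larger corpus, per the gensim note quoted above, the disk-backed index would look roughly like this (sketch, assuming a temp/ shard prefix):
#disk-backed alternative: index shards are written to files under the given prefix
disk_index = similarities.Similarity('temp/tweets_shards', tweets_tfidf_model[tweet_corpus],
                                     num_features=len(tweets_dictionary))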
In [62]:
#get similarity matrix between docs: https://groups.google.com/forum/#!topic/gensim/itYEaOYnlEA
#and check that the similarity matrix is what you expect
tweets_similarity_matrix = np.array(index)
print(tweets_similarity_matrix.shape)
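As a sanity check, the pair of distinct tweets with the highest cosine similarity can be pulled out of the matrix:
#most similar pair of distinct tweets (zero out the diagonal of self-similarities first)
pairwise = tweets_similarity_matrix.copy()
np.fill_diagonal(pairwise, 0)
i, j = np.unravel_index(np.argmax(pairwise), pairwise.shape)
print(pairwise[i, j])
print(tweets_text_documents[i])
print(tweets_text_documents[j])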
In [70]:
#save the similarity matrix and associated tweets to json
#work in progress -- use t-SNE to visualise the tweets to see if there's any clustering
outputDict = {'tweets' : [{'text': x['text'], 'id': x['id_str'], 'user': x['original_name']} for x in tweets_collection], 'matrix': tweets_similarity_matrix.tolist()}
with open('temp/tweetSimilarity.json', 'w') as f:
    json.dump(outputDict, f)
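A rough sketch of the planned t-SNE step (using scikit-learn, which isn't imported above, and treating 1 - similarity as a distance):
from sklearn.manifold import TSNE

#cosine similarities -> approximate distances; clip away small negative float errors
distances = np.clip(1.0 - tweets_similarity_matrix, 0.0, None)
tsne_coords = TSNE(n_components=2, metric='precomputed', init='random').fit_transform(distances)
print(tsne_coords.shape)  #(number of tweets, 2)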
In [77]:
#back to the word2vec idea, use min_count=1 since corpus is tiny
tweets_collected_model = gensim.models.Word2Vec(tokenized_tweets, min_count=1)
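(For reference, the notebook uses the pre-4.0 gensim API; in gensim 4.x the parameter is vector_size instead of size and similarity queries go through the .wv attribute. Sketch only:)
#gensim 4.x equivalent (sketch, not run here)
#tweets_collected_model = gensim.models.Word2Vec(tokenized_tweets, vector_size=100, window=5, min_count=1)
#tweets_collected_model.wv.most_similar('jewish')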
In [79]:
#looking again at the term jewish in our small tweet collection...
tweets_collected_model.most_similar('jewish')
Out[79]:
The next step is to loop through the data on S3 and build up a bigger corpus of tweets from the discursive collection.
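A rough sketch of what that loop might look like, assuming (hypothetically) that the other files follow the same tweets-<n>.json naming pattern under the same S3 prefix:
#hypothetical loop over the S3 files -- the file-name pattern and range are assumptions
base_url = 'https://s3-us-west-2.amazonaws.com/discursive/2017/1/10/18/tweets-{}.json'
bigger_tokenized_corpus = []
for n in range(1, 26):
    resp = requests.get(base_url.format(n))
    if resp.status_code != 200:
        continue
    bigger_tokenized_corpus.extend(
        [word for word in tokenize(tweet['text']) if word != 'rt'] for tweet in resp.json())
bigger_model = gensim.models.Word2Vec(bigger_tokenized_corpus, min_count=1)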