Sebastian Raschka, Nov 2014
Last week, I posted some visualizations in the context of a "Happy Rock Song" data-mining project, and some people were curious about how I created the word clouds. I thought it might be interesting to use a different dataset for this tutorial: your personal Twitter timeline.
Before we get started, I want to list the required packages to make this work!
Below, you find a list of the basic packages, which can be installed via
pip install <package_name>
And the Python (2.7) wordcloud package by Andreas Mueller can be installed via
pip install git+git://github.com/amueller/word_cloud.git
Note that wordcloud requires Python's imaging library PIL. Depending on the operating system, the installation and setup of PIL can be quite challenging; however, when I tried to install it on different Mac OS X and Linux systems via conda, it always worked seamlessly:
conda install pil
Let me use my handy watermark extension to summarize the different packages and version numbers that were used in my case to download the Twitter timeline and create the word cloud:
In [1]:
%load_ext watermark
%watermark -d -v -m -p twitter,pyprind,wordcloud,pandas,scipy,matplotlib
In order to download our Twitter timeline, we are going to use a simple command-line tool, twitter_timeline.py. The usage is quite simple, and I have detailed the setup procedure in the README.md file in the respective GitHub repository. After you have provided the necessary authentication information, you can run it from your terminal via
python ./twitter_timeline.py --out 'output.csv'
in order to save your timeline in CSV format.
Alternatively, you can import the TimelineMiner class from twitter_timeline.py to run the code directly in this IPython notebook, as shown below.
In [1]:
import sys
sys.path.append('../../twitter_timeline/')
import twitter_timeline
import oauth_info as auth
In [2]:
tm = twitter_timeline.TimelineMiner(auth.ACCESS_TOKEN,
                                    auth.ACCESS_TOKEN_SECRET,
                                    auth.CONSUMER_KEY,
                                    auth.CONSUMER_SECRET,
                                    auth.USER_NAME)
print('Authentication successful: %s' % tm.authenticate())
tm.get_timeline(max=2000, keywords=[])
In [3]:
tm.df.head()
Out[3]:
If the twitter_timeline.py script was executed from the terminal, you can read the tweets from the CSV file via
import pandas as pd
df = pd.read_csv('path/to/CSV')
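If you want to sanity-check the CSV before moving on, you can peek at the tweet column. Below is a minimal sketch that uses an inline CSV string as a stand-in for the real file (I'm assuming the file has a `tweet` column, matching `tm.df` above):

```python
import io
import pandas as pd

# inline stand-in for 'output.csv'; the real file is read the same way
csv_data = io.StringIO(
    "tweet\n"
    "Just pushed a new release! http://example.com\n"
    "RT @someone: Python tip of the day\n"
)
df = pd.read_csv(csv_data)

print(df.shape[0])     # number of tweets
print(df['tweet'][0])  # first tweet
```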
Now that we have collected the tweets from our Twitter timeline, the creation of the word cloud is pretty simple and straightforward thanks to the nice wordcloud module.
In [4]:
%matplotlib inline
In [10]:
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
# join tweets to a single string
words = ' '.join(tm.df['tweet'])
# remove URLs, RTs, and twitter handles
no_urls_no_tags = " ".join([word for word in words.split()
                            if 'http' not in word
                            and not word.startswith('@')
                            and word != 'RT'])
wordcloud = WordCloud(
    font_path='/Users/sebastian/Library/Fonts/CabinSketch-Bold.ttf',
    stopwords=STOPWORDS,
    background_color='black',
    width=1800,
    height=1400
).generate(no_urls_no_tags)
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('./my_twitter_wordcloud_1.png', dpi=300)
plt.show()
Surprise, surprise: the most common term in my tweets is obviously "Python"!
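If you'd rather verify the top terms numerically than eyeball the cloud, `collections.Counter` does the job. A minimal sketch on made-up sample tweets, reusing the same URL/RT/handle filter as above (in the real notebook, `tm.df['tweet']` would replace the sample list):

```python
from collections import Counter

# stand-in for tm.df['tweet']
tweets = ['RT @friend: Python rocks http://example.com',
          'Writing more Python today',
          '@someone thanks! Python it is']

words = ' '.join(tweets)
filtered = [word for word in words.split()
            if 'http' not in word
            and not word.startswith('@')
            and word != 'RT']

# top three terms with their raw counts
print(Counter(filtered).most_common(3))
```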
To make the word cloud even more visually appealing, let us add a custom shape in the form of the Twitter logo:
In [16]:
from scipy.misc import imread
twitter_mask = imread('./twitter_mask.png', flatten=True)
wordcloud = WordCloud(
    font_path='/Users/sebastian/Library/Fonts/CabinSketch-Bold.ttf',
    stopwords=STOPWORDS,
    background_color='white',
    width=1800,
    height=1400,
    mask=twitter_mask
).generate(no_urls_no_tags)
plt.imshow(wordcloud)
plt.axis("off")
plt.savefig('./my_twitter_wordcloud_2.png', dpi=300)
plt.show()
(You can find the twitter_mask.png here)
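If you don't have a mask image handy, you can also build one as a plain NumPy array instead of loading a PNG. Below is a minimal sketch of a circular mask; note that the exact mask convention (which value marks the masked-out region) has varied between wordcloud versions, so check the one you have installed:

```python
import numpy as np

# same dimensions as the word clouds above
h, w = 1400, 1800
y, x = np.ogrid[:h, :w]

# True inside a centered circle that touches the shorter edge
inside = (x - w / 2.0) ** 2 + (y - h / 2.0) ** 2 <= (min(h, w) / 2.0) ** 2

# white (255) outside the circle, black (0) inside
circle_mask = 255 * np.ones((h, w), dtype=np.uint8)
circle_mask[inside] = 0

# circle_mask could then be passed as WordCloud(..., mask=circle_mask)
print(circle_mask.shape)
```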