For the final two days we'll move to measuring the prevalence of themes in a corpus. We'll cover three ways of doing this: the dictionary method, supervised classification, and unsupervised machine learning. Today: the dictionary method.
This is the simplest way to measure the prevalence of a theme in a corpus, and it is used for many purposes, including sentiment analysis. It is one of the longest-standing and most ubiquitous methods in automated text analysis, so it's important both to understand the method and to be able to implement it.
The method is simple: group words into categories or themes, then count the number of words from each theme in your corpus. We will use this method to do sentiment analysis, a popular text analysis task, on our Music Reviews corpus, using a standard sentiment analysis dictionary.
Jockers, Matt. “A Novel Method for Detecting Plot.”
Enns, Peter, Nathan Kelly, Jana Morgan, and Christopher Witko. 2015. “Money and the Supply of Political Rhetoric: Understanding the Congressional (Non-)Response to Economic Inequality.” Paper presented at the APSA Annual Meetings, San Francisco.
Neal Caren has a tutorial using MPQA that implements the dictionary method in Python in a rather different way.
The dictionary method is based on the assumption that themes or categories consist of a group of words, and that texts covering a theme will contain a higher proportion of those words than other texts. Dictionary methods are used for many purposes. A few possibilities:
There are two forms of dictionaries: standard or general dictionaries, and custom dictionaries.
There are a number of standard dictionaries that have been created by field experts. The benefit of standardized dictionaries is that they're developed by experts and have been thoroughly validated. Others have likely published using these dictionaries, so reviewers are more likely to accept them as valid. Because of this, they are good options if they fit your research question.
Here are a few:
Many research questions or data are domain specific, however, and will thus require you to create your own dictionary based on your knowledge of the domain and question. Creating your own dictionary requires a lot of thought, and it must be validated. These dictionaries are typically created in an iterative fashion and are modified as they are validated. See Enns et al. (2015) for an example of how the authors constructed their own dictionary.
Today we will use the free and standard sentiment dictionary from MPQA to measure positive and negative sentiment in the music reviews.
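Before we turn to the full corpus, here is a minimal sketch of the idea on a made-up example (the text and word lists below are invented for illustration; they are not the MPQA lists we load later):
In [ ]:
#a toy illustration of the dictionary method (made-up text and made-up word lists)
toy_text = "what a great great show though the sound was terrible"
toy_tokens = toy_text.split()
toy_positive = ['great', 'good', 'wonderful']
toy_negative = ['terrible', 'bad', 'awful']
#count how many tokens fall in each theme
pos_count = len([word for word in toy_tokens if word in toy_positive])
neg_count = len([word for word in toy_tokens if word in toy_negative])
#normalize by the total number of tokens so texts of different lengths are comparable
print(pos_count / len(toy_tokens), neg_count / len(toy_tokens))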
Our first step, as with any technique, is the pre-processing step, to get the data ready for analysis.
First, read in our Music Reviews corpus as a Pandas dataframe.
In [ ]:
#import the necessary packages
import pandas
import nltk
from nltk import word_tokenize
import string
#read the Music Reviews corpus into a Pandas dataframe
df = pandas.read_csv("BDHSI2016_music_reviews.csv", sep = '\t')
#view the dataframe
df
The next step is to create a new column in our dataset that contains tokenized words with all the pre-processing steps.
The code here will look slightly different than in lesson 1, because we're applying these functions to every row in our dataframe.
In [ ]:
#first create a new column called "body_tokens" and transform to lowercase by applying the string function str.lower()
df['body_tokens'] = df['body'].str.lower()
#make sure it worked
print(df[['body','body_tokens']])
Next we tokenize the text. To do this on a Pandas dataframe we need the apply function. This simply tells the computer to take the function in the parentheses, apply it to each row in the dataframe, and assign the output to a new column.
There are two ways to do this. If you're applying a built-in or library function to the entire field, such as nltk.word_tokenize, you can simply put that function in the parentheses. In other cases, you need to write your own function, called a lambda function. This is the case if you're applying something to a list (Pandas does not deal with list objects well; hopefully someone smart will fix that). We'll get to that case below.
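Here is a quick illustration of the difference on a tiny made-up Series, separate from our reviews dataframe (the variable names are just for the example):
In [ ]:
#a made-up illustration of the two ways to use apply
import pandas
import nltk
import string
toy = pandas.Series(["A short review .", "Another , longer review !"])
#a library function like nltk.word_tokenize can be passed to apply directly
toy_tokens = toy.apply(nltk.word_tokenize)
#a lambda function is needed when we operate on the list inside each cell
punct = list(string.punctuation)
toy_nopunct = toy_tokens.apply(lambda x: [word for word in x if word not in punct])
print(toy_tokens)
print(toy_nopunct)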
In [ ]:
#tokenize
df['body_tokens'] = df['body_tokens'].apply(nltk.word_tokenize)
#view output
print(df['body_tokens'])
In [ ]:
punctuations = list(string.punctuation)
#remove punctuation. Let's talk about that lambda x.
df['body_tokens'] = df['body_tokens'].apply(lambda x: [word for word in x if word not in punctuations])
#view output
print(df['body_tokens'])
Pre-processing is done. What other pre-processing steps might we use?
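One common candidate is stopword removal (stemming is another, though stemming can push words out of a dictionary's vocabulary). Here is an optional sketch using NLTK's English stopword list; the new column name is just for illustration, and you may need to run nltk.download('stopwords') once.
In [ ]:
#optional: remove stopwords (a sketch; not required for the rest of this lesson)
#you may need to run nltk.download('stopwords') first
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
df['body_tokens_nostop'] = df['body_tokens'].apply(lambda x: [word for word in x if word not in stop_words])
print(df['body_tokens_nostop'])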
One more step before getting to the dictionary method. We want a total token count for each row, so we can normalize the dictionary counts. To do this we simply create a new column that contains the length of the token list in each row.
In [ ]:
df['token_count'] = df['body_tokens'].apply(len)
print(df[['body_tokens','token_count']])
I created two text files, one is a list of positive words from the MPQA dictionary, the other is a list of negative words. One word per line. Our goal here is to count the number of positive and negative words in each row of our dataframe, and add two columns to our dataset with the count of positive and negative words.
First, read in the positive and negative words and create list variables for each.
In [ ]:
pos_sent = open("positive_words.txt").read()
neg_sent = open("negative_words.txt").read()
#view part of the pos_sent variable, to see how it's formatted.
print(pos_sent[:101])
In [ ]:
#remember the split function? We'll split on the newline character (\n) to create a list
positive_words=pos_sent.split('\n')
negative_words=neg_sent.split('\n')
#view the first elements in the lists
print(positive_words[:10])
print(negative_words[:10])
In [ ]:
#count number of words in each list
print(len(positive_words))
print(len(negative_words))
Great! Now we can create two more columns that contain the number of positive and negative words in the review tokens. I'm going to get creative with this, as we need to do this step in one line of code each for the positive and the negative words. Your challenges:
In [ ]:
#create columns with the number of positive and negative words in each review
df['positive_tokens'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in positive_words]))
df['negative_tokens'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in negative_words]))
print(df[['token_count', 'positive_tokens', 'negative_tokens']])
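One aside on this code: word in positive_words scans the whole list for every token, which can be slow on a large corpus. If speed becomes an issue, converting the word lists to sets gives the same counts much faster (an optional sketch):
In [ ]:
#optional speed-up: membership tests on a set are much faster than on a list
positive_set = set(positive_words)
negative_set = set(negative_words)
df['positive_tokens'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in positive_set]))
df['negative_tokens'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in negative_set]))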
In [ ]:
#group the reviews by genre using the groupby function
df_genres = df.groupby('genre')
In [ ]:
##EX: Calculate the proportion of words that are positive for each genre.
###Hint: Use the sum() function. Proportion is just the total number of positive words divided by the total number of words.
###How do you calculate this using Pandas?
In [ ]:
##EX: Do the same for negative words. Which genre has the highest proportion of positive and negative words?
Compare these proportions to the average score by genre.
In [ ]:
print(df_genres['score'].mean().sort_values(ascending=False))
Not bad. But this also illustrates potential problems with sentiment analysis, and the dictionary method in general.
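For example, because the method only counts individual words, negation and context are invisible to it. A made-up illustration (the exact counts depend on which words are in the MPQA lists):
In [ ]:
#a toy example of a known weakness: negation is invisible to word counts
example = nltk.word_tokenize("this album is not good and not fun")
#words like 'good' and 'fun' will likely be counted as positive, even though the sentence is negative
print(len([word for word in example if word in positive_words]))
print(len([word for word in example if word in negative_words]))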
Questions: