In this notebook, you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
In [69]:
import os
import zipfile
import string
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
The dataset consists of baby product reviews from Amazon.com.
In [70]:
# Put files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
In [71]:
# Filename of unzipped file
unzipped_file = 'amazon_baby.csv'
In [72]:
# If unzipped file not in files_list, unzip the file
if unzipped_file not in files_list:
zip_file = unzipped_file + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close()
In [73]:
products = pd.read_csv("amazon_baby.csv")
Now, let us see a preview of what the dataset looks like.
In [74]:
products.head()
Out[74]:
Let us explore a specific example of a baby product.
In [75]:
products.ix[1]
Out[75]:
Now, we will perform two simple data transformations: removing punctuation from the reviews, and dropping all reviews with a rating of 3, since they are neither clearly positive nor clearly negative.
Aside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach would preserve contractions such as "I'd", "would've", and "hadn't". See this page for an example of smart handling of punctuation.
Before removing the punctuation from the strings in the review column, we will fill all NA values with an empty string.
In [76]:
products["review"] = products["review"].fillna("")
Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.
In [77]:
products["review_clean"] = products["review"].str.translate(None, string.punctuation)
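Note that the two-argument form of `str.translate` above works only on Python 2. A minimal sketch of the Python 3 equivalent, which builds a deletion table with `str.maketrans` (the toy reviews below are illustrative, not from the assignment data):

```python
import string
import pandas as pd

# Python 3 equivalent of translate(None, string.punctuation):
# map every punctuation character to None (i.e., delete it).
table = str.maketrans('', '', string.punctuation)

reviews = pd.Series(["Great product!!!", "Broke after a week... returned it."])
cleaned = reviews.str.translate(table)
print(cleaned.tolist())  # punctuation stripped, words and spaces unchanged
```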
In [78]:
products = products[products['rating'] != 3]
len(products)
Out[78]:
Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Below, we create a function that we will apply to the "rating" column of the dataframe to determine whether each review is positive or negative.
In [79]:
def sent_func(x):
# If rating is >=4, return a positive sentiment (+1)
if x>=4:
return 1
# Else, return a negative sentiment (-1)
else:
return -1
Creating a "sentiment" column by applying the sent_func to the "rating" column in the dataframe.
In [80]:
products['sentiment'] = products['rating'].apply(sent_func)
In [81]:
products.ix[20:22]
Out[81]:
Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set.
Loading the indices for the train and test data and putting them in a list
In [82]:
with open('module-2-assignment-train-idx.txt', 'r') as train_file:
ind_list_train = map(int,train_file.read().split(','))
In [83]:
with open('module-2-assignment-test-idx.txt', 'r') as test_file:
ind_list_test = map(int,test_file.read().split(','))
Using the indices of the train and test data to create the train and test datasets.
In [84]:
train_data = products.iloc[ind_list_train,:]
test_data = products.iloc[ind_list_test,:]
In [85]:
print len(train_data)
print len(test_data)
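The assignment fixes the split via index files so everyone grades against the same partition. For reference, a minimal sketch of an 80/20 split using scikit-learn's `train_test_split` (assuming a recent scikit-learn; the toy frame below is illustrative only):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy frame standing in for `products`
df = pd.DataFrame({'review_clean': ['good', 'bad', 'great', 'poor', 'fine'],
                   'sentiment': [1, -1, 1, -1, 1]})

# 80% train / 20% test; random_state makes the split reproducible
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)
print(len(train_df))  # 4
print(len(test_df))   # 1
```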
We will now compute the word count for each word that appears in the reviews. A vector consisting of word counts is often referred to as bag-of-words features. Since most words occur in only a few reviews, word count vectors are sparse. For this reason, scikit-learn and many other tools use sparse matrices to store a collection of word count vectors. General steps for extracting word count vectors are as follows:
- Learn a vocabulary (set of all words) from the training data. Only words that show up in the training data will be considered for feature extraction.
- Compute the occurrences of the words in each review and collect them into a row vector.
- Build a sparse matrix where each row is the word count vector for the corresponding review.
- Using the same mapping between words and columns, convert the test data into a sparse matrix.
The following cell uses CountVectorizer in scikit-learn. Notice the token_pattern argument in the constructor.
In [86]:
# Use this token pattern to keep single-letter words
vectorizer = CountVectorizer(token_pattern=r'\b\w+\b')
# First, learn vocabulary from the training data and assign columns to words
# Then convert the training data into a sparse matrix
train_matrix = vectorizer.fit_transform(train_data['review_clean'])
# Second, convert the test data into a sparse matrix, using the same word-column mapping
test_matrix = vectorizer.transform(test_data['review_clean'])
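To see why the same fitted vectorizer must be reused on the test data, here is a tiny illustration on toy texts (not the assignment data): `fit_transform` learns the word-to-column mapping from the training texts, and `transform` reuses that exact mapping, silently ignoring words it has never seen.

```python
from sklearn.feature_extraction.text import CountVectorizer

# token_pattern keeps single-letter words like 'a', as in the assignment
vec = CountVectorizer(token_pattern=r'\b\w+\b')

# Learn the vocabulary from the "training" texts
train_X = vec.fit_transform(['a good product', 'a bad product'])

# Reuse the same word-column mapping; 'brand' and 'new' are unseen, so dropped
test_X = vec.transform(['a brand new product'])

print(sorted(vec.vocabulary_))  # ['a', 'bad', 'good', 'product']
print(train_X.shape)            # (2, 4)
print(test_X.shape)             # (1, 4) -- same columns as training
```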
Creating an instance of the LogisticRegression class
In [87]:
logreg = linear_model.LogisticRegression()
Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model sentiment_model.
In [88]:
sentiment_model = logreg.fit(train_matrix, train_data["sentiment"])
Putting all the weights from the model into a numpy array.
In [89]:
weights_list = list(sentiment_model.intercept_) + list(sentiment_model.coef_.flatten())
weights_sent_model = np.array(weights_list, dtype = np.double)
print len(weights_sent_model)
There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Quiz question: How many weights are >= 0?
In [90]:
num_positive_weights = len(weights_sent_model[weights_sent_model >= 0.0])
num_negative_weights = len(weights_sent_model[weights_sent_model < 0.0])
print "Number of positive weights: %i" % num_positive_weights
print "Number of negative weights: %i" % num_negative_weights
In [91]:
sample_test_data = test_data.ix[[59,71,91]]
print sample_test_data['rating']
sample_test_data
Out[91]:
Let's dig deeper into the first row of the sample_test_data. Here's the full review:
In [92]:
sample_test_data['review'].ix[59]
Out[92]:
That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
In [93]:
sample_test_data['review'].ix[71]
Out[93]:
We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:
$$ \mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i) $$
where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores. For each row, the score (or margin) is a number in the range (-inf, inf).
In [94]:
sample_test_matrix = vectorizer.transform(sample_test_data['review_clean'])
scores = sentiment_model.decision_function(sample_test_matrix)
print scores
These scores can be used to make class predictions as follows:
$$ \hat{y} = \left\{ \begin{array}{ll} +1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\ -1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \\ \end{array} \right. $$Using scores, write code to calculate $\hat{y}$, the class predictions:
In [95]:
pred_sent_test_data = []
for val in scores:
if val>0:
pred_sent_test_data.append(1)
else:
pred_sent_test_data.append(-1)
print pred_sent_test_data
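The loop above can also be written as a single vectorized expression; a sketch on toy scores (the threshold logic is the same: +1 when the score is strictly positive, -1 otherwise):

```python
import numpy as np

scores = np.array([2.5, -1.3, 0.0, 4.1])  # example margins, not the real ones
preds = np.where(scores > 0, 1, -1)       # note: a score of exactly 0 maps to -1
print(preds.tolist())                     # [1, -1, -1, 1]
```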
Checkpoint: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from Scikit-Learn.
In [96]:
print "Class predictions according to Scikit-Learn:"
print sentiment_model.predict(sample_test_matrix)
Recall from the lectures that we can also calculate the probability predictions from the scores using: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}. $$
Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].
In [97]:
prob_pos_score = 1.0/(1.0 + np.exp(-scores))
prob_pos_score
Out[97]:
Checkpoint: Make sure your probability predictions match the ones obtained from Scikit-Learn.
In [98]:
print "Class predictions according to Scikit-Learn:"
print sentiment_model.predict_proba(sample_test_matrix)[:,1]
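One practical note: for large negative scores, `np.exp(-scores)` can overflow. `scipy.special.expit` computes the same sigmoid in a numerically stable way (assuming SciPy is installed); a quick sketch:

```python
import numpy as np
from scipy.special import expit

scores = np.array([0.0, 5.0, -5.0])
manual = 1.0 / (1.0 + np.exp(-scores))  # the formula used above
stable = expit(scores)                  # numerically stable equivalent

print(np.allclose(manual, stable))  # True
print(stable[0])                    # 0.5, since sigmoid(0) = 1/2
```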
Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?
The 3rd data point has the lowest probability of being positive
We now turn to examining the full test dataset, test_data.
Using the sentiment_model, find the 40 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."
To calculate these top-40 reviews, use the following steps:
- Make probability predictions on the test_data using the sentiment_model.
- Sort the data according to those predictions and pick the top 40.
Computing the scores with the sentiment_model decision function and then calculating the probability that y = +1:
In [99]:
scores_test_data = sentiment_model.decision_function(test_matrix)
prob_test_data = 1.0/(1.0 + np.exp(-scores_test_data))
To find the 40 most positive and the 40 most negative values, we will create a list of tuples with the entries (probability, index). We will then sort the list and extract the indices corresponding to each entry.
In [100]:
# List of indices in the test data
ind_vals_test_data = test_data.index.values
# Empty list that will be filled with the tuples (probability, index)
score_label_lst_test = len(scores_test_data)*[-1]
Filling the list of tuples with the (probability, index) values
In [101]:
for i in range(len(scores_test_data)):
score_label_lst_test[i] = (prob_test_data[i],ind_vals_test_data[i])
Sorting the list with the entries (probability, index)
In [102]:
score_label_lst_test.sort()
Extracting the top 40 positive reviews and the top 40 negative reviews
In [103]:
top_40_pos_test_rev = score_label_lst_test[-40:]
top_40_neg_test_rev = score_label_lst_test[0:40]
Getting the indices of the top 40 positive reviews.
In [104]:
ind_top_40_pos_test = 40*[-1]
for i,val in enumerate(top_40_pos_test_rev):
ind_top_40_pos_test[i] = val[1]
Getting the indices of the top 40 negative reviews.
In [105]:
ind_top_40_neg_test = 40*[-1]
for i,val in enumerate(top_40_neg_test_rev):
ind_top_40_neg_test[i] = val[1]
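The tuple-sorting approach above works; an equivalent and shorter route is `np.argsort` on the probability array. A sketch on toy probabilities (top 2 instead of top 40):

```python
import numpy as np

probs = np.array([0.91, 0.12, 0.99, 0.45, 0.03])  # toy probabilities
order = np.argsort(probs)                          # indices, ascending by value

top2_pos = order[-2:]  # positions of the 2 highest probabilities
top2_neg = order[:2]   # positions of the 2 lowest probabilities
print(top2_pos.tolist())  # [0, 2]
print(top2_neg.tolist())  # [4, 1]
```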
Quiz Question: Which of the following products are represented in the 40 most positive reviews? [multiple choice]
In [106]:
test_data.ix[ind_top_40_pos_test]["name"]
Out[106]:
Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]
In [107]:
test_data.ix[ind_top_40_neg_test]["name"]
Out[107]:
We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by
$$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$
This can be computed as follows:
- Step 1: Use the trained model to compute class predictions.
- Step 2: Count the number of data points where the predicted class labels match the true labels (called true_labels below).
- Step 3: Divide the number of correct predictions by the total number of data points in the dataset.
Complete the function below to compute the classification accuracy:
In [108]:
def get_classification_accuracy(model, data, true_labels):
# Constructing the wordcount vector
data_matrix = vectorizer.transform(data['review_clean'])
# Getting the predictions
preds_data = model.predict(data_matrix)
# Computing the number of correctly classified examples and the total examples
n_correct = float(np.sum(preds_data == true_labels.values))
n_total = float(len(preds_data))
# Computing the accuracy by dividing number of
#correctly classified examples by total number of examples
accuracy = n_correct/n_total
return accuracy
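As a sanity check, the hand-rolled accuracy computation should agree with `sklearn.metrics.accuracy_score`; a toy sketch (example labels, not the assignment data):

```python
import numpy as np
from sklearn.metrics import accuracy_score

preds = np.array([1, -1, 1, 1])   # hypothetical predictions
truth = np.array([1, -1, -1, 1])  # hypothetical true labels

manual = float(np.sum(preds == truth)) / len(preds)
print(manual)                        # 0.75
print(accuracy_score(truth, preds))  # 0.75
```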
Now, let's compute the classification accuracy of the sentiment_model on the test_data.
In [109]:
acc_sent_mod_test = get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
print acc_sent_mod_test
Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).
In [110]:
print "Accuracy on Test Data: %.2f" %(acc_sent_mod_test)
Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?
No, you may be overfitting.
Now, computing the accuracy of the sentiment model on the training data for a future quiz question.
In [111]:
acc_sent_mod_train = get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
print acc_sent_mod_train
In this section, we will find the weights of significant words for the sentiment_model.
Creating a vocab list. The vocab list contains all the words used by the sentiment_model
In [112]:
vocab = vectorizer.get_feature_names()
print len(vocab)
Creating a list of the significant words as unicode strings
In [113]:
un_sig_words = [u'love', u'great', u'easy', u'old', u'little', u'perfect', u'loves',
u'well', u'able', u'car', u'broke', u'less', u'even', u'waste', u'disappointed',
u'work', u'product', u'money', u'would', u'return']
Creating a list that will store all the indices where the significant words appear in the vocab list.
In [114]:
ind_vocab_sig_words = []
Finding the index where each significant word appears.
In [115]:
for word in un_sig_words:
ind_vocab_sig_words.append(vocab.index(word))
Creating an empty list that will store the weights of the significant words. Then, using the index to find the weight for each significant word.
In [116]:
ws_sent_mod_sig_words = []
for ind in ind_vocab_sig_words:
ws_sent_mod_sig_words.append(sentiment_model.coef_.flatten()[ind])
Creating a series that will store the weights of the significant words and displaying this Series.
In [117]:
ws_sent_mod_ser = pd.Series(data=ws_sent_mod_sig_words, index=un_sig_words)
ws_sent_mod_ser
Out[117]:
In [118]:
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
In [119]:
len(significant_words)
Out[119]:
Compute a new set of word count vectors using only these words. The CountVectorizer class has a parameter that lets you limit the choice of words when building word count vectors:
In [120]:
vectorizer_word_subset = CountVectorizer(vocabulary=significant_words) # limit to 20 words
train_matrix_word_subset = vectorizer_word_subset.fit_transform(train_data['review_clean'])
test_matrix_word_subset = vectorizer_word_subset.transform(test_data['review_clean'])
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
Creating an instance of the LogisticRegression class. Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model simple_model.
In [121]:
log_reg = linear_model.LogisticRegression()
simple_model = log_reg.fit(train_matrix_word_subset, train_data["sentiment"])
Getting the weights for the 20 significant words from the simple_model
In [122]:
ws_simp_model = list(simple_model.coef_.flatten())
Putting the weights in a Series with the words corresponding to the weights as the index.
In [123]:
ws_simp_mod_ser = pd.Series(data=ws_simp_model, index=significant_words)
ws_simp_mod_ser
Out[123]:
Quiz Question: Consider the coefficients of simple_model. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
In [124]:
print len(simple_model.coef_[simple_model.coef_>0])
Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
Yes, see weights below for the significant words for the sentiment model
In [125]:
ws_sent_mod_ser
Out[125]:
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data:
In [126]:
acc_sent_mod_train
Out[126]:
Now, compute the classification accuracy of the simple_model on the train_data:
In [127]:
preds_simp_mod_train = simple_model.predict(train_matrix_word_subset)
n_cor_preds_simp_mod_train = float(np.sum(preds_simp_mod_train == train_data['sentiment'].values))
n_tol_preds_simp_mod_train = float(len(preds_simp_mod_train))
acc_simp_mod_train = n_cor_preds_simp_mod_train/n_tol_preds_simp_mod_train
print acc_simp_mod_train
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?
In [128]:
if acc_sent_mod_train>acc_simp_mod_train:
print "sentiment_model"
else:
print "simple_model"
Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:
In [129]:
acc_sent_mod_test
Out[129]:
Next, we will compute the classification accuracy of the simple_model on the test_data:
In [130]:
preds_simp_mod_test = simple_model.predict(test_matrix_word_subset)
n_cor_preds_simp_mod_test = float(np.sum(preds_simp_mod_test == test_data['sentiment'].values))
n_tol_preds_simp_mod_test = float(len(preds_simp_mod_test))
acc_simp_mod_test = n_cor_preds_simp_mod_test/n_tol_preds_simp_mod_test
print acc_simp_mod_test
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?
In [131]:
if acc_sent_mod_test>acc_simp_mod_test:
print "sentiment_model"
else:
print "simple_model"
It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier predicts the majority class for all data points. At the very least, you should comfortably beat the majority class classifier; otherwise, the model is (usually) pointless.
What is the majority class in the train_data?
In [132]:
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
acc_pos_train = float(num_positive)/float(len(train_data['sentiment']))
acc_neg_train = float(num_negative)/float(len(train_data['sentiment']))
if acc_pos_train>acc_neg_train:
print "Positive Sentiment is Majority Classifier for Training Data"
else:
print "Negative Sentiment is Majority Classifier for Training Data"
Now compute the accuracy of the majority class classifier on test_data.
Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).
In [133]:
num_pos_test = (test_data['sentiment'] == +1).sum()
acc_pos_test = float(num_pos_test)/float(len(test_data['sentiment']))
print "Accuracy of Majority Class Classifier on Test Data: %.2f" %(acc_pos_test)
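scikit-learn also packages this baseline as `DummyClassifier` in `sklearn.dummy` (assuming a reasonably recent version); with `strategy='most_frequent'` it ignores the features and always predicts the majority class. A toy sketch, not on the assignment data:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X = np.zeros((6, 1))                # features are ignored by this strategy
y = np.array([1, 1, 1, 1, -1, -1])  # majority class is +1

baseline = DummyClassifier(strategy='most_frequent').fit(X, y)
print(baseline.predict(X))  # every prediction is the majority class, +1
print(baseline.score(X, y)) # accuracy = 4/6, about 0.667
```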
Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?
In [134]:
if acc_sent_mod_test>acc_pos_test:
print "Yes, the sentiment_model is better than majority class classifier"
else:
print "No, the majority class classifier is better than sentiment_model"
In [ ]: