The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.
In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Let's get started!
Make sure you have the latest version of GraphLab Create.
In [1]:
from __future__ import division
import graphlab
import math
import string
import numpy
In [2]:
products = graphlab.SFrame('amazon_baby.gl/')
Now, let us see a preview of what the dataset looks like.
In [3]:
products
Out[3]:
Let us explore a specific example of a baby product.
In [4]:
products[269]
Out[4]:
Now, we will perform 2 simple data transformations:
1. Remove punctuation using Python's built-in string functionality.
2. Transform the reviews into word-counts.
Aside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuation.
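For illustration only (a sketch of one possible approach, not the assignment's method), a tokenizer that keeps word-internal apostrophes could look like this:
import re
def tokenize_keeping_contractions(text):
    # Keep apostrophes inside words ("hadn't", "would've"); drop other punctuation.
    return re.findall(r"[a-zA-Z]+(?:'[a-zA-Z]+)*", text.lower())
print tokenize_keeping_contractions("I'd say it hadn't broken!")  # ["i'd", 'say', 'it', "hadn't", 'broken']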
In [5]:
def remove_punctuation(text):
    import string
    return text.translate(None, string.punctuation)
review_without_punctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)
Now, let us explore what the example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is the number of times the word occurs.
In [6]:
products[269]['word_count']
Out[6]:
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
In [7]:
products = products[products['rating'] != 3]
len(products)
Out[7]:
Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while those with a rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
In [8]:
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
Out[8]:
Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
In [9]:
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain the same results as everyone else.
Note: This line may take 1-2 minutes.
In [10]:
sentiment_model = graphlab.logistic_classifier.create(train_data,
                                                      target='sentiment',
                                                      features=['word_count'],
                                                      validation_set=None)
In [11]:
sentiment_model
Out[11]:
Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
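As an optional preview of Module 4 (not needed for this assignment), GraphLab Create exposes regularization through the l2_penalty argument of logistic_classifier.create; the value 10 below is an arbitrary illustrative choice:
regularized_model = graphlab.logistic_classifier.create(train_data,
                                                        target='sentiment',
                                                        features=['word_count'],
                                                        l2_penalty=10,  # arbitrary strength, for illustration only
                                                        validation_set=None)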
Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:
In [12]:
weights = sentiment_model.coefficients
weights.column_names()
Out[12]:
In [13]:
weights[weights['value'] > 0]['value']
Out[13]:
There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Fill in the following block of code to calculate how many weights are non-negative (>= 0). (Hint: Select the rows of the weights SFrame whose 'value' column is >= 0.)
In [14]:
num_positive_weights = weights[weights['value'] >= 0]['value'].size()
num_negative_weights = weights[weights['value'] < 0]['value'].size()
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
Quiz question: How many weights are >= 0?
Now that a model is trained, we can make predictions on the test data. Let us explore this using 3 examples from the test dataset, which we refer to as sample_test_data.
In [15]:
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data
Out[15]:
Let's dig deeper into the first row of the sample_test_data. Here's the full review:
In [16]:
sample_test_data[0]['review']
Out[16]:
That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
In [17]:
sample_test_data[1]['review']
Out[17]:
We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:
$$ \mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i), $$
where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range (-inf, inf).
In [18]:
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores
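As an optional cross-check (a sketch that assumes the coefficients SFrame layout shown earlier, with each word stored in the 'index' column and the intercept named '(intercept)'), you can recompute the first margin by hand as $\mathbf{w}^T h(\mathbf{x}_i)$:
coefs = sentiment_model.coefficients
intercept = coefs[coefs['name'] == '(intercept)']['value'][0]
word_weight = dict(zip(coefs['index'], coefs['value']))
# Margin = intercept + sum over words of (count * coefficient);
# words unseen during training contribute 0.
manual_score = intercept + sum(count * word_weight.get(word, 0.0)
                               for word, count in sample_test_data[0]['word_count'].items())
print manual_score  # should closely match scores[0]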
These scores can be used to make class predictions as follows:
$$ \hat{y} = \left\{ \begin{array}{ll} +1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\ -1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \\ \end{array} \right. $$
Using scores, write code to calculate $\hat{y}$, the class predictions:
In [19]:
def margin_based_classifier(score):
    # Predict +1 for a positive margin, -1 otherwise.
    return +1 if score > 0 else -1
sample_test_data['predictions'] = scores.apply(margin_based_classifier)
sample_test_data['predictions']
Out[19]:
Run the following code to verify that the class predictions obtained by your calculations are the same as those obtained from GraphLab Create.
In [20]:
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data)
Checkpoint: Make sure your class predictions match the ones obtained from GraphLab Create.
Recall from the lectures that we can also calculate the probability predictions from the scores using: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}. $$
Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probability should be a number in the range [0, 1].
In [21]:
def logistic_classifier_prob(score):
    # Sigmoid of the margin: P(y_i = +1 | x_i, w) = 1 / (1 + exp(-score)).
    return 1.0 / (1.0 + math.exp(-score))
probabilities = scores.apply(logistic_classifier_prob)
probabilities
Out[21]:
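An optional aside: for very negative scores, math.exp(-score) can overflow. A numerically stable sketch of the same sigmoid (not required for this assignment):
def stable_logistic(score):
    # Branch so that exp() is only ever called on a non-positive argument.
    if score >= 0:
        return 1.0 / (1.0 + math.exp(-score))
    exp_s = math.exp(score)
    return exp_s / (1.0 + exp_s)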
Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.
In [22]:
print "Probability predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data, output_type='probability')
Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?
In [23]:
print "Third"
We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.
Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."
To calculate these top-20 reviews, use the following steps:
1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)
2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)
In [24]:
# Aside: element-wise comparison of two SArrays yields 1s and 0s,
# so summing counts the matching entries -- we use this for accuracy below.
a = graphlab.SArray([1,2,3])
b = graphlab.SArray([1,2,1])
print a == b
print (a == b).sum()
In [56]:
test_data['predicted_prob'] = sentiment_model.predict(test_data, output_type='probability')
test_data.topk('predicted_prob', 20).print_rows(20)
Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]
Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.
In [57]:
test_data.topk('predicted_prob', 20, reverse=True).print_rows(20)
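Equivalently (a small sketch of an alternative to topk with reverse=True), you could sort by the predicted probability in ascending order and slice off the first 20 rows:
test_data.sort('predicted_prob', ascending=True)[0:20].print_rows(20)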
Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]
We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by
$$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$
This can be computed as follows:
1. Use the trained model to compute class predictions. (Hint: Use the predict method.)
2. Count the number of data points where the predicted class labels match the ground truth labels (called true_labels below).
3. Divide the total number of correct predictions by the total number of data points in the dataset.
Complete the function below to compute the classification accuracy:
In [27]:
def get_classification_accuracy(model, data, true_labels):
    # First get the predictions
    prediction = model.predict(data)
    # Compute the number of correctly classified examples
    correctly_classified = prediction == true_labels
    # Then compute accuracy by dividing num_correct by total number of examples
    accuracy = float(correctly_classified.sum()) / true_labels.size()
    return accuracy
Now, let's compute the classification accuracy of the sentiment_model on the test_data.
In [28]:
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
Out[28]:
Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).
Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?
There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of 20 words that occur in the reviews.
In [29]:
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
                     'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
                     'work', 'product', 'money', 'would', 'return']
In [30]:
len(significant_words)
Out[30]:
For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
In [31]:
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
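To see what dict_trim_by_keys does, here is a toy illustration (not part of the assignment):
toy = graphlab.SArray([{'love': 2, 'the': 5, 'broke': 1}])
print toy.dict_trim_by_keys(['love', 'broke'], exclude=False)
# [{'love': 2, 'broke': 1}] -- only the listed keys survive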
Let's see what the first example of the dataset looks like:
In [32]:
train_data[0]['review']
Out[32]:
The word_count column we have been working with looks like the following:
In [33]:
print train_data[0]['word_count']
Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
In [34]:
print train_data[0]['word_count_subset']
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
In [35]:
simple_model = graphlab.logistic_classifier.create(train_data,
                                                   target='sentiment',
                                                   features=['word_count_subset'],
                                                   validation_set=None)
simple_model
Out[35]:
We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
In [36]:
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
Out[36]:
Now, we will inspect the weights (coefficients) of the simple_model:
In [37]:
simple_model.coefficients
Out[37]:
Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
In [38]:
simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)
Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
In [39]:
# Count the positive coefficients, then subtract 1 to exclude the intercept
# (which is itself positive for this model).
simple_model.coefficients[simple_model.coefficients['value'] > 0]['value'].size() - 1
Out[39]:
Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
In [44]:
positive_significant_words = simple_model.coefficients[simple_model.coefficients['value'] > 0]
positive_significant_words
for w in positive_significant_words['index']:
    # Look up this word's coefficient in the full sentiment_model.
    print sentiment_model.coefficients[sentiment_model.coefficients['index'] == w]
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy function you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data:
In [46]:
get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
Out[46]:
Now, compute the classification accuracy of the simple_model on the train_data:
In [47]:
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])
Out[47]:
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?
Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:
In [58]:
round(get_classification_accuracy(sentiment_model, test_data, test_data['sentiment']), 2)
Out[58]:
Next, we will compute the classification accuracy of the simple_model on the test_data:
In [49]:
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
Out[49]:
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?
It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier predicts the majority class for all data points. At the very least, you should comfortably beat the majority class classifier; otherwise, the model is (usually) pointless.
What is the majority class in the train_data?
In [50]:
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative
Now compute the accuracy of the majority class classifier on test_data.
Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).
In [59]:
num_positive_test = (test_data['sentiment'] == +1).sum()
num_negative_test = (test_data['sentiment'] == -1).sum()
print num_positive_test
print num_negative_test
# Positive reviews form the majority class, so the baseline predicts +1 for every data point.
majority_accuracy = float(num_positive_test) / test_data['sentiment'].size()
print round(majority_accuracy, 2)
Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?
In [54]:
print "Yes"
In [55]:
graphlab.version
Out[55]: