Predicting sentiment from product reviews

The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.

In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.

  • Use SFrames to do some feature engineering.
  • Train a logistic regression model to predict the sentiment of product reviews.
  • Inspect the weights (coefficients) of a trained logistic regression model.
  • Make a prediction (both class and probability) of sentiment for a new product review.
  • Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
  • Inspect the coefficients of the logistic regression model and interpret their meanings.
  • Compare multiple logistic regression models.

Let's get started!

Fire up GraphLab Create

Make sure you have the latest version of GraphLab Create.


In [1]:
from __future__ import division
import graphlab
import math
import string
import numpy

Data preparation

We will use a dataset consisting of baby product reviews on Amazon.com.


In [2]:
products = graphlab.SFrame('amazon_baby.gl/')


[INFO] graphlab.cython.cy_server: GraphLab Create v2.1 started. Logging: /tmp/graphlab_server_1474869134.log
This non-commercial license of GraphLab Create for academic use is assigned to sudhanshu.shekhar.iitd@gmail.com and will expire on September 18, 2017.

Now, let us see a preview of what the dataset looks like.


In [3]:
products


Out[3]:
+-------------------------------+-------------------------------+--------+
|              name             |             review            | rating |
+-------------------------------+-------------------------------+--------+
|    Planetwise Flannel Wipes   | These flannel wipes are OK... |  3.0   |
|     Planetwise Wipe Pouch     | it came early and was not ... |  5.0   |
| Annas Dream Full Quilt wit... | Very soft and comfortable ... |  5.0   |
| Stop Pacifier Sucking with... | This is a product well wor... |  5.0   |
| Stop Pacifier Sucking with... | All of my kids have cried ... |  5.0   |
| Stop Pacifier Sucking with... | When the Binky Fairy came ... |  5.0   |
| A Tale of Baby's Days with... | Lovely book, it's bound ti... |  4.0   |
| Baby Tracker® - Daily Chil... | Perfect for new parents. W... |  5.0   |
| Baby Tracker® - Daily Chil... | A friend of mine pinned th... |  5.0   |
| Baby Tracker® - Daily Chil... | This has been an easy way ... |  4.0   |
+-------------------------------+-------------------------------+--------+
[183531 rows x 3 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.

Build the word count vector for each review

Let us explore a specific example of a baby product.


In [4]:
products[269]


Out[4]:
{'name': 'The First Years Massaging Action Teether',
 'rating': 5.0,
 'review': 'A favorite in our house!'}

Now, we will perform 2 simple data transformations:

  1. Remove punctuation using Python's built-in string functionality.
  2. Transform the reviews into word-counts.

Aside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuation.
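For reference, a minimal sketch of such a smarter tokenizer is shown below; it keeps apostrophes inside words while stripping other punctuation. The regular expression and the function name are illustrative only and are not part of the assignment.

import re

def remove_punctuation_keep_contractions(text):
    # Keep letters, digits, whitespace and apostrophes; drop everything else.
    # This preserves tokens such as "I'd" and "hadn't" (a sketch, not the course's tokenizer).
    return re.sub(r"[^\w\s']", '', text)

print remove_punctuation_keep_contractions("I'd buy it again, wouldn't you?")
# I'd buy it again wouldn't you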


In [5]:
def remove_punctuation(text):
    # string was imported above; str.translate strips every punctuation character (Python 2)
    return text.translate(None, string.punctuation)

review_without_punctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)

Now, let us explore what the example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.


In [6]:
products[269]['word_count']


Out[6]:
{'a': 1, 'favorite': 1, 'house': 1, 'in': 1, 'our': 1}

Extract sentiments

We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.


In [7]:
products = products[products['rating'] != 3]
len(products)


Out[7]:
166752

Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while those with a rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.


In [8]:
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products


Out[8]:
+-------------------------------+-------------------------------+--------+
|              name             |             review            | rating |
+-------------------------------+-------------------------------+--------+
|     Planetwise Wipe Pouch     | it came early and was not ... |  5.0   |
| Annas Dream Full Quilt wit... | Very soft and comfortable ... |  5.0   |
| Stop Pacifier Sucking with... | This is a product well wor... |  5.0   |
| Stop Pacifier Sucking with... | All of my kids have cried ... |  5.0   |
| Stop Pacifier Sucking with... | When the Binky Fairy came ... |  5.0   |
| A Tale of Baby's Days with... | Lovely book, it's bound ti... |  4.0   |
| Baby Tracker® - Daily Chil... | Perfect for new parents. W... |  5.0   |
| Baby Tracker® - Daily Chil... | A friend of mine pinned th... |  5.0   |
| Baby Tracker® - Daily Chil... | This has been an easy way ... |  4.0   |
| Baby Tracker® - Daily Chil... | I love this journal and ou... |  4.0   |
+-------------------------------+-------------------------------+--------+
+-------------------------------+-----------+
|           word_count          | sentiment |
+-------------------------------+-----------+
| {'and': 3, 'love': 1, 'it'... |     1     |
| {'and': 2, 'quilt': 1, 'it... |     1     |
| {'and': 3, 'ingenious': 1,... |     1     |
| {'and': 2, 'all': 2, 'help... |     1     |
| {'and': 2, 'this': 2, 'her... |     1     |
| {'shop': 1, 'noble': 1, 'i... |     1     |
| {'and': 2, 'all': 1, 'righ... |     1     |
| {'and': 1, 'fantastic': 1,... |     1     |
| {'all': 1, 'standarad': 1,... |     1     |
| {'all': 2, 'nannys': 1, 'j... |     1     |
+-------------------------------+-----------+
[166752 rows x 5 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.

Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).

Split data into training and test sets

Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.


In [9]:
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)


133416
33336

Train a sentiment classifier with logistic regression

We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain the same results as everyone else.

Note: This line may take 1-2 minutes.


In [10]:
sentiment_model = graphlab.logistic_classifier.create(train_data,
                                                      target = 'sentiment',
                                                      features=['word_count'],
                                                      validation_set=None)


Logistic regression:
--------------------------------------------------------
Number of examples          : 133416
Number of classes           : 2
Number of feature columns   : 1
Number of unpacked features : 121712
Number of coefficients    : 121713
Starting L-BFGS
--------------------------------------------------------
+-----------+----------+-----------+--------------+-------------------+
| Iteration | Passes   | Step size | Elapsed Time | Training-accuracy |
+-----------+----------+-----------+--------------+-------------------+
| 1         | 5        | 0.000002  | 2.370445     | 0.840754          |
| 2         | 9        | 3.000000  | 3.680410     | 0.931350          |
| 3         | 10       | 3.000000  | 4.042031     | 0.882046          |
| 4         | 11       | 3.000000  | 4.639828     | 0.954076          |
| 5         | 12       | 3.000000  | 5.286855     | 0.960964          |
| 6         | 13       | 3.000000  | 5.660884     | 0.975033          |
+-----------+----------+-----------+--------------+-------------------+
TERMINATED: Terminated due to numerical difficulties.
This model may not be ideal. To improve it, consider doing one of the following:
(a) Increasing the regularization.
(b) Standardizing the input data.
(c) Removing highly correlated features.
(d) Removing `inf` and `NaN` values in the training data.

In [11]:
sentiment_model


Out[11]:
Class                          : LogisticClassifier

Schema
------
Number of coefficients         : 121713
Number of examples             : 133416
Number of classes              : 2
Number of feature columns      : 1
Number of unpacked features    : 121712

Hyperparameters
---------------
L1 penalty                     : 0.0
L2 penalty                     : 0.01

Training Summary
----------------
Solver                         : lbfgs
Solver iterations              : 6
Solver status                  : TERMINATED: Terminated due to numerical difficulties.
Training time (sec)            : 6.0248

Settings
--------
Log-likelihood                 : inf

Highest Positive Coefficients
-----------------------------
word_count[mobileupdate]       : 41.9847
word_count[placeid]            : 41.7354
word_count[labelbox]           : 41.151
word_count[httpwwwamazoncomreviewrhgg6qp7tdnhbrefcmcrprcmtieutf8asinb00318cla0nodeid] : 40.0454
word_count[knobskeeping]       : 36.2091

Lowest Negative Coefficients
----------------------------
word_count[probelm]            : -44.9283
word_count[impulsejeep]        : -43.081
word_count[infantsyoung]       : -39.5945
word_count[cutereditafter]     : -35.6875
word_count[avacado]            : -35.0542

Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
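If you want to experiment beyond the assignment, graphlab.logistic_classifier.create accepts an l2_penalty argument (its default of 0.01 appears in the hyperparameters above). A sketch of a more heavily regularized fit, with an arbitrary penalty value, might look like this:

# Not required for the assignment: an illustrative, more strongly regularized fit.
# The l2_penalty value of 10.0 is arbitrary; larger penalties shrink the huge
# weights the model places on extremely rare words.
regularized_model = graphlab.logistic_classifier.create(train_data,
                                                        target='sentiment',
                                                        features=['word_count'],
                                                        l2_penalty=10.0,
                                                        validation_set=None)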

Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:


In [12]:
weights = sentiment_model.coefficients
weights.column_names()


Out[12]:
['name', 'index', 'class', 'value', 'stderr']

In [13]:
weights[weights['value'] > 0]['value']


Out[13]:
dtype: float
Rows: ?
[1.3033708054360478, 0.30381560001530183, 0.671556821414296, 0.4263265257017871, 7.3963370872039285, 0.03837878820789226, 0.16550664933686843, 0.13547267841250457, 0.048449573171953336, 0.009866464903065059, 1.4330168543924917, 0.48841347880790975, 1.4918301527571352, 0.16554143661471354, 0.1313784807648474, 0.018252811627941436, 0.9743446387845417, 0.0529359292782591, 0.0928734488916785, 0.1300905285253491, 0.010833655305222418, 2.3923403419346734, 0.121909799417747, 0.2460870633175975, 0.6432151257478322, 0.11969744113002541, 0.24942645220695273, 0.07785834042872607, 0.09671185203612317, 2.3923403419346734, 0.04049284227108606, 0.908512104907688, 0.5677491254714825, 0.8837698300028248, 0.6741624574993088, 1.566485175695319, 0.03352345090563641, 0.05586436840445408, 1.0599062809563713, 0.19807099842587092, 0.018153023049408412, 0.3230036173780626, 1.0346740253453142, 0.21981398744468203, 0.21439228275083452, 0.10192949442043275, 0.0571886244910515, 0.11263071292501947, 0.10561387598679911, 0.6279648775667184, 0.210281105956478, 0.8913640207592242, 0.1146745176375589, 0.19747269141739715, 0.2254013725181854, 0.09087974030047645, 0.20904873325334128, 2.1005257489511426, 0.2785251109126205, 0.16536863502228735, 0.9852520107565834, 0.6387441856299578, 0.07261939690130445, 1.059694626903007, 0.222489019918546, 0.12173053880926218, 1.3145924503857376, 0.2956069656231225, 1.0459372544068128, 0.0011624448006253753, 0.09960431188429193, 0.18410607658121975, 0.33371072435897203, 1.21346937821582, 0.22514573312906902, 0.022083562315692692, 0.12450121086002347, 1.268121383129462, 1.059694626903007, 0.21311891000176614, 0.10978524895341278, 1.99491334730161, 0.7062081615279217, 0.49649683897901475, 0.01841943858163569, 0.03162142059838427, 0.15704492817878676, 0.02741331659152883, 0.14164505291793297, 1.3052129063790394, 1.3981107755862627, 0.11611267498504396, 1.211540993753675, 0.10857357727604089, 0.03062168223199016, 1.725227378055564, 0.09996929435520308, 0.1487018356833375, 0.299506378240784, 0.4726661516111792, ... ]

There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to words associated with positive sentiment, while negative weights correspond to words associated with negative sentiment.

Fill in the following block of code to calculate how many weights are non-negative (>= 0). (Hint: Count the entries of the 'value' column in the weights SFrame that are >= 0.)


In [14]:
num_positive_weights = weights[weights['value'] >= 0]['value'].size()
num_negative_weights = weights[weights['value'] < 0]['value'].size()

print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights


Number of positive weights: 68419 
Number of negative weights: 53294 

Quiz question: How many weights are >= 0?

Making predictions with logistic regression

Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.


In [15]:
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data


[5.0, 2.0, 1.0]
Out[15]:
+-------------------------------+-------------------------------+--------+
|              name             |             review            | rating |
+-------------------------------+-------------------------------+--------+
|   Our Baby Girl Memory Book   | Absolutely love it and all... |  5.0   |
| Wall Decor Removable Decal... | Would not purchase again o... |  2.0   |
| New Style Trailing Cherry ... | Was so excited to get this... |  1.0   |
+-------------------------------+-------------------------------+--------+
+-------------------------------+-----------+
|           word_count          | sentiment |
+-------------------------------+-----------+
| {'and': 2, 'all': 1, 'love... |     1     |
| {'and': 1, 'would': 2, 'al... |     -1    |
| {'all': 1, 'money': 1, 'in... |     -1    |
+-------------------------------+-----------+
[3 rows x 5 columns]

Let's dig deeper into the first row of the sample_test_data. Here's the full review:


In [16]:
sample_test_data[0]['review']


Out[16]:
'Absolutely love it and all of the Scripture in it.  I purchased the Baby Boy version for my grandson when he was born and my daughter-in-law was thrilled to receive the same book again.'

That review seems pretty positive.

Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.


In [17]:
sample_test_data[1]['review']


Out[17]:
'Would not purchase again or recommend. The decals were thick almost plastic like and were coming off the wall as I was applying them! The would NOT stick! Literally stayed stuck for about 5 minutes then started peeling off.'

We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:

$$ \mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i) $$

where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) can be any real number in $(-\infty, \infty)$.


In [18]:
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores


[6.734619727059159, -5.734130996760232, -14.668460404468538]
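As a sanity check, the first score can also be reproduced directly from the model's coefficients. The sketch below is not required for the assignment; it only assumes the coefficient schema shown earlier by weights.column_names(), and small floating-point differences from scores[0] are expected.

# Recompute the margin for the first sample review by hand.
coefs = sentiment_model.coefficients
intercept = coefs[coefs['name'] == '(intercept)']['value'][0]
word_weight = dict(zip(coefs[coefs['name'] == 'word_count']['index'],
                       coefs[coefs['name'] == 'word_count']['value']))

def manual_score(word_count):
    # score = w^T h(x) = intercept + sum over words of (coefficient * count)
    return intercept + sum(word_weight.get(word, 0.0) * count
                           for word, count in word_count.items())

print manual_score(sample_test_data[0]['word_count'])  # should be close to scores[0]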

Predicting sentiment

These scores can be used to make class predictions as follows:

$$ \hat{y} = \left\{ \begin{array}{ll} +1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\ -1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \\ \end{array} \right. $$

Using scores, write code to calculate $\hat{y}$, the class predictions:


In [19]:
def margin_based_classifier(score):
    return 1 if score > 0 else -1

sample_test_data['predictions'] = scores.apply(margin_based_classifier)
sample_test_data['predictions']


Out[19]:
dtype: int
Rows: 3
[1, -1, -1]

Run the following code to verify that the class predictions obtained by your calculations are the same as those obtained from GraphLab Create.


In [20]:
print "Class predictions according to GraphLab Create:" 
print sentiment_model.predict(sample_test_data)


Class predictions according to GraphLab Create:
[1, -1, -1]

Checkpoint: Make sure your class predictions match those obtained from GraphLab Create.

Probability predictions

Recall from the lectures that we can also calculate the probability predictions from the scores using: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}. $$

Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].


In [21]:
def logistic_classifier_prob(score):
    # P(y = +1 | x, w) = 1 / (1 + exp(-score)), where score = w^T h(x)
    return 1.0 / (1.0 + math.exp(-score))

probabilities = scores.apply(logistic_classifier_prob)
probabilities


Out[21]:
dtype: float
Rows: 3
[0.9988123848377196, 0.003223268181800763, 4.261557996655337e-07]

Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.


In [22]:
print "Class predictions according to GraphLab Create:" 
print sentiment_model.predict(sample_test_data, output_type='probability')


Probability predictions according to GraphLab Create:
[0.9988123848377196, 0.003223268181800763, 4.2615579966553343e-07]

Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?


In [23]:
print "Third"


Third

Find the most positive (and negative) review

We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.

Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."

To calculate these top-20 reviews, use the following steps:

  1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)
  2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)

In [24]:
a = graphlab.SArray([1,2,3])
b = graphlab.SArray([1,2,1])

print a == b
print (a == b).sum()


[1, 1, 0]
2

In [56]:
test_data['predicted_prob'] = sentiment_model.predict(test_data, output_type='probability')
test_data.topk('predicted_prob', 20).print_rows(20)


+-------------------------------+-------------------------------+--------+
|              name             |             review            | rating |
+-------------------------------+-------------------------------+--------+
|   Munchkin Mozart Magic Cube  | My wife and I have been li... |  4.0   |
|  BABYBJORN Potty Chair - Red  | Our family is just startin... |  5.0   |
| Safety 1st Tot-Lok Four Lo... | I have a wooden desk that ... |  5.0   |
| Summer Infant Complete Nur... | This Nursery and Bath Care... |  4.0   |
| Leachco Snoogle Total Body... | I have had my Snoogle for ... |  5.0   |
| HALO SleepSack Micro-Fleec... | I love the Sleepsack weara... |  5.0   |
| Peg Perego Primo Viaggio C... | We have been using this se... |  5.0   |
|   Capri Stroller - Red Tech   | First let me say that I wa... |  4.0   |
| Wizard Convertible Car Sea... | My son was born big and re... |  5.0   |
| Britax Marathon Convertibl... | My son began using the Mar... |  5.0   |
| Britax Decathlon Convertib... | I researched a few differe... |  4.0   |
|  Fisher-Price Deluxe Jumperoo | I had already decided that... |  5.0   |
| Lilly Gold Sit 'n' Stroll ... | I just completed a two-mon... |  5.0   |
| JP Lizzy Chocolate Ice Cla... | I got this bag as a presen... |  4.0   |
| Cloud b Sound Machine Soot... | First off, I love plush sh... |  5.0   |
| Traveling Toddler Car Seat... | I am sure this product wor... |  2.0   |
| Ameda Purely Yours Breast ... | As with many new moms, I c... |  4.0   |
| Moby Wrap Original 100% Co... | My Moby is an absolute nec... |  5.0   |
| Moby Wrap Original 100% Co... | Let me just say that I DO ... |  4.0   |
| bumGenius One-Size Cloth D... | I love, love, love these d... |  5.0   |
+-------------------------------+-------------------------------+--------+
+-------------------------------+-----------+----------------+
|           word_count          | sentiment | predicted_prob |
+-------------------------------+-----------+----------------+
| {'rating': 1, 'all': 2, 'b... |     1     |      1.0       |
| {'managed': 1, 'just': 3, ... |     1     |      1.0       |
| {'saying': 1, 'all': 1, 'o... |     1     |      1.0       |
| {'son': 1, 'duty': 1, 'hel... |     1     |      1.0       |
| {'saying': 1, 'help': 1, '... |     1     |      1.0       |
| {'all': 1, 'just': 2, 'foo... |     1     |      1.0       |
| {'all': 1, 'just': 2, 'mov... |     1     |      1.0       |
| {'son': 3, 'infant': 1, 'a... |     1     |      1.0       |
| {'son': 4, 'infant': 1, 'd... |     1     |      1.0       |
| {'son': 3, 'infant': 1, 'o... |     1     |      1.0       |
| {'all': 1, 'toted': 1, 'un... |     1     |      1.0       |
| {'all': 1, 'just': 1, 'adj... |     1     |      1.0       |
| {'son': 2, 'infant': 1, 'j... |     1     |      1.0       |
| {'lighter': 1, 'just': 4, ... |     1     |      1.0       |
| {'cute': 1, 'all': 2, 'jus... |     1     |      1.0       |
| {'all': 1, 'just': 1, 'pho... |     -1    |      1.0       |
| {'lansinoh': 1, 'since': 4... |     1     |      1.0       |
| {'summer': 1, 'all': 4, 't... |     1     |      1.0       |
| {'son': 2, 'infant': 1, 'j... |     1     |      1.0       |
| {'summer': 1, 'remember': ... |     1     |      1.0       |
+-------------------------------+-----------+----------------+
+-------------------------------+
|       word_count_subset       |
+-------------------------------+
| {'old': 1, 'little': 1, 'w... |
| {'great': 2, 'love': 1, 'w... |
|   {'little': 2, 'would': 1}   |
| {'little': 1, 'well': 2, '... |
| {'even': 1, 'perfect': 1, ... |
| {'even': 1, 'product': 2, ... |
| {'even': 1, 'perfect': 1, ... |
| {'even': 1, 'little': 3, '... |
| {'even': 1, 'great': 2, 'l... |
| {'little': 2, 'old': 2, 'c... |
| {'even': 2, 'little': 2, '... |
| {'love': 1, 'little': 1, '... |
| {'product': 1, 'old': 2, '... |
| {'perfect': 1, 'even': 1, ... |
| {'love': 1, 'great': 4, 'o... |
| {'car': 6, 'product': 4, '... |
| {'perfect': 1, 'little': 1... |
| {'even': 4, 'product': 1, ... |
| {'even': 1, 'little': 5, '... |
| {'even': 4, 'great': 1, 'l... |
+-------------------------------+
[20 rows x 7 columns]

Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]

Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.


In [57]:
test_data.topk('predicted_prob', 20, reverse=True).print_rows(20)


+-------------------------------+-------------------------------+--------+
|              name             |             review            | rating |
+-------------------------------+-------------------------------+--------+
| Jolly Jumper Arctic Sneak ... | I am a "research-aholic" i... |  5.0   |
| Levana Safe N'See Digital ... | This is the first review I... |  1.0   |
| Snuza Portable Baby Moveme... | I would have given the pro... |  1.0   |
| Fisher-Price Ocean Wonders... | We have not had ANY luck w... |  2.0   |
| VTech Communications Safe ... | This is my second video mo... |  1.0   |
| Safety 1st High-Def Digita... | We bought this baby monito... |  1.0   |
| Chicco Cortina KeyFit 30 T... | My wife and I have used th... |  1.0   |
| Prince Lionheart Warmies W... | *****IMPORTANT UPDATE*****... |  1.0   |
| Valco Baby Tri-mode Twin S... | I give one star to the dim... |  1.0   |
| Adiri BPA Free Natural Nur... | I will try to write an obj... |  2.0   |
| Munchkin Nursery Projector... | Updated January 3, 2014.  ... |  1.0   |
| The First Years True Choic... | Note: we never installed b... |  1.0   |
| Nuby Natural Touch Silicon... | I'm honestly confused by s... |  1.0   |
| Peg-Perego Tatamia High Ch... | I ordered this high chair ... |  1.0   |
|    Fisher-Price Royal Potty   | This was the worst potty e... |  1.0   |
| Safety 1st Exchangeable Ti... | I thought it sounded great... |  1.0   |
| Safety 1st Lift Lock and S... | Don't buy this product. If... |  1.0   |
| Evenflo Take Me Too Premie... | I am absolutely disgusted ... |  1.0   |
| Cloth Diaper Sprayer--styl... | I bought this sprayer out ... |  1.0   |
| The First Years 3 Pack Bre... | I purchased several of the... |  1.0   |
+-------------------------------+-------------------------------+--------+
+-------------------------------+-----------+--------------------+
|           word_count          | sentiment |   predicted_prob   |
+-------------------------------+-----------+--------------------+
| {'raining': 1, 'all': 8, '... |     1     | 7.80415068211e-100 |
| {'all': 2, 'they': 4, 'jus... |     -1    | 6.83650885514e-25  |
| {'contacted': 1, 'being': ... |     -1    | 2.12654510823e-24  |
| {'fishstarfish': 1, 'infan... |     -1    | 2.24582080779e-23  |
| {'all': 4, 'reviewers': 1,... |     -1    | 1.32962966148e-22  |
| {'all': 3, 'being': 1, 'ov... |     -1    | 2.06872097469e-20  |
| {'all': 4, 'wrestle': 1, '... |     -1    | 5.93881994672e-20  |
| {'less': 1, 'move': 1, 'no... |     -1    | 6.28510016539e-20  |
| {'limited': 2, 'forget': 1... |     -1    | 8.05528712689e-20  |
| {'all': 2, 'forget': 1, 'g... |     -1    | 8.46521724941e-20  |
| {'all': 2, 'just': 1, 'mon... |     -1    |  1.5285394517e-19  |
| {'all': 3, 'go': 1, 'worke... |     -1    | 1.77901889381e-19  |
| {'all': 2, 'just': 3, 'bei... |     -1    | 1.15227353848e-18  |
| {'just': 2, 'food': 2, 'mo... |     -1    | 1.26175666136e-18  |
| {'son': 1, 'old': 1, 'is':... |     -1    | 1.60282966315e-18  |
| {'headsplittinglyannoying'... |     -1    |  7.0488741171e-18  |
| {'all': 2, 'money': 1, 'ov... |     -1    | 9.84839237568e-18  |
| {'chore': 1, 'managed': 1,... |     -1    | 1.00120730395e-17  |
| {'all': 1, 'just': 1, 'rev... |     -1    |  1.169063556e-17   |
| {'all': 1, 'just': 2, 'mon... |     -1    | 1.22003532002e-17  |
+-------------------------------+-----------+--------------------+
+-------------------------------+
|       word_count_subset       |
+-------------------------------+
| {'even': 4, 'great': 1, 'w... |
| {'even': 2, 'product': 1, ... |
| {'money': 1, 'product': 2,... |
| {'even': 2, 'product': 5, ... |
| {'even': 2, 'product': 3, ... |
| {'even': 2, 'little': 1, '... |
| {'even': 2, 'product': 1, ... |
| {'product': 3, 'well': 1, ... |
| {'even': 3, 'easy': 1, 'wo... |
| {'even': 1, 'product': 1, ... |
| {'even': 1, 'money': 1, 'g... |
| {'even': 3, 'great': 1, 'o... |
| {'disappointed': 1, 'money... |
| {'money': 1, 'product': 1,... |
|     {'old': 1, 'would': 1}    |
|    {'great': 1, 'well': 1}    |
| {'even': 1, 'money': 1, 'p... |
| {'even': 1, 'product': 2, ... |
| {'disappointed': 1, 'money... |
| {'perfect': 1, 'money': 1,... |
+-------------------------------+
[20 rows x 7 columns]

Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]

Compute accuracy of the classifier

We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by

$$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$

This can be computed as follows:

  • Step 1: Use the trained model to compute class predictions. (Hint: Use the predict method.)
  • Step 2: Count the number of data points for which the predicted class labels match the ground truth labels (called true_labels below).
  • Step 3: Divide the number of correct predictions by the total number of data points in the dataset.

Complete the function below to compute the classification accuracy:


In [27]:
def get_classification_accuracy(model, data, true_labels):
    # First get the predictions
    prediction = model.predict(data)
    
    # Compute the number of correctly classified examples
    correctly_classified = prediction == true_labels

    # Then compute accuracy by dividing num_correct by total number of examples
    accuracy = float(correctly_classified.sum()) / true_labels.size()
    
    return accuracy

Now, let's compute the classification accuracy of the sentiment_model on the test_data.


In [28]:
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])


Out[28]:
0.9145368370530358

Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).

Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?

Learn another classifier with fewer words

There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. These are:


In [29]:
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves', 
      'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed', 
      'work', 'product', 'money', 'would', 'return']

In [30]:
len(significant_words)


Out[30]:
20

For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dict_trim_by_keys functionality. Note that we are performing this on both the training and test set.
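Before applying this to the full dataset, a tiny made-up example may help illustrate what dict_trim_by_keys with exclude=False does: it keeps only the listed keys and drops all other entries.

# Toy illustration (made-up counts, not part of the assignment data):
toy = graphlab.SArray([{'love': 2, 'the': 3, 'car': 1}])
print toy.dict_trim_by_keys(['love', 'car'], exclude=False)
# [{'love': 2, 'car': 1}]  (key order may vary)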


In [31]:
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)

Let's see what the first example of the dataset looks like:


In [32]:
train_data[0]['review']


Out[32]:
'it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.'

The word_count column we had been working with before looks like the following:


In [33]:
print train_data[0]['word_count']


{'and': 3, 'love': 1, 'it': 3, 'highly': 1, 'osocozy': 1, 'bags': 1, 'holder': 1, 'leak': 1, 'moist': 1, 'does': 1, 'recommend': 1, 'was': 1, 'wipes': 1, 'early': 1, 'not': 2, 'now': 1, 'disappointed': 1, 'wipe': 1, 'keps': 1, 'wise': 1, 'i': 1, 'planet': 1, 'my': 2, 'came': 1}

Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.


In [34]:
print train_data[0]['word_count_subset']


{'love': 1, 'disappointed': 1}

Train a logistic regression model on a subset of data

We will now build a classifier with word_count_subset as the feature and sentiment as the target.


In [35]:
simple_model = graphlab.logistic_classifier.create(train_data,
                                                   target = 'sentiment',
                                                   features=['word_count_subset'],
                                                   validation_set=None)
simple_model


Logistic regression:
--------------------------------------------------------
Number of examples          : 133416
Number of classes           : 2
Number of feature columns   : 1
Number of unpacked features : 20
Number of coefficients    : 21
Starting Newton Method
--------------------------------------------------------
+-----------+----------+--------------+-------------------+
| Iteration | Passes   | Elapsed Time | Training-accuracy |
+-----------+----------+--------------+-------------------+
| 1         | 2        | 0.213005     | 0.862917          |
| 2         | 3        | 0.272496     | 0.865713          |
| 3         | 4        | 0.333577     | 0.866478          |
| 4         | 5        | 0.396712     | 0.866748          |
| 5         | 6        | 0.456604     | 0.866815          |
| 6         | 7        | 0.515612     | 0.866815          |
+-----------+----------+--------------+-------------------+
SUCCESS: Optimal solution found.

Out[35]:
Class                          : LogisticClassifier

Schema
------
Number of coefficients         : 21
Number of examples             : 133416
Number of classes              : 2
Number of feature columns      : 1
Number of unpacked features    : 20

Hyperparameters
---------------
L1 penalty                     : 0.0
L2 penalty                     : 0.01

Training Summary
----------------
Solver                         : newton
Solver iterations              : 6
Solver status                  : SUCCESS: Optimal solution found.
Training time (sec)            : 0.5254

Settings
--------
Log-likelihood                 : 44323.7254

Highest Positive Coefficients
-----------------------------
word_count_subset[loves]       : 1.6773
word_count_subset[perfect]     : 1.5145
word_count_subset[love]        : 1.3654
(intercept)                    : 1.2995
word_count_subset[easy]        : 1.1937

Lowest Negative Coefficients
----------------------------
word_count_subset[disappointed] : -2.3551
word_count_subset[return]      : -2.1173
word_count_subset[waste]       : -2.0428
word_count_subset[broke]       : -1.658
word_count_subset[money]       : -0.8979

We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.


In [36]:
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])


Out[36]:
0.8693004559635229

Now, we will inspect the weights (coefficients) of the simple_model:


In [37]:
simple_model.coefficients


Out[37]:
+-------------------+--------------+-------+-----------------+-----------------+
|        name       |    index     | class |      value      |      stderr     |
+-------------------+--------------+-------+-----------------+-----------------+
|    (intercept)    |     None     |   1   |   1.2995449552  | 0.0120888541331 |
| word_count_subset | disappointed |   1   |  -2.35509250061 | 0.0504149888557 |
| word_count_subset |     love     |   1   |  1.36543549368  | 0.0303546295109 |
| word_count_subset |    little    |   1   |  0.520628636025 | 0.0214691475665 |
| word_count_subset |    loves     |   1   |  1.67727145556  | 0.0482328275384 |
| word_count_subset |   product    |   1   | -0.320555492996 | 0.0154311321362 |
| word_count_subset |     well     |   1   |  0.504256746398 |  0.021381300631 |
| word_count_subset |    great     |   1   |  0.94469126948  | 0.0209509926591 |
| word_count_subset |     easy     |   1   |  1.19366189833  |  0.029288869202 |
| word_count_subset |     work     |   1   | -0.621700012425 | 0.0230330597946 |
+-------------------+--------------+-------+-----------------+-----------------+
[21 rows x 5 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.

Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.


In [38]:
simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)


+-------------------+--------------+-------+-----------------+-----------------+
|        name       |    index     | class |      value      |      stderr     |
+-------------------+--------------+-------+-----------------+-----------------+
| word_count_subset |    loves     |   1   |  1.67727145556  | 0.0482328275384 |
| word_count_subset |   perfect    |   1   |  1.51448626703  |  0.049861952294 |
| word_count_subset |     love     |   1   |  1.36543549368  | 0.0303546295109 |
|    (intercept)    |     None     |   1   |   1.2995449552  | 0.0120888541331 |
| word_count_subset |     easy     |   1   |  1.19366189833  |  0.029288869202 |
| word_count_subset |    great     |   1   |  0.94469126948  | 0.0209509926591 |
| word_count_subset |    little    |   1   |  0.520628636025 | 0.0214691475665 |
| word_count_subset |     well     |   1   |  0.504256746398 |  0.021381300631 |
| word_count_subset |     able     |   1   |  0.191438302295 | 0.0337581955697 |
| word_count_subset |     old      |   1   | 0.0853961886678 | 0.0200863423025 |
| word_count_subset |     car      |   1   |  0.058834990068 | 0.0168291532091 |
| word_count_subset |     less     |   1   | -0.209709815216 |  0.040505735954 |
| word_count_subset |   product    |   1   | -0.320555492996 | 0.0154311321362 |
| word_count_subset |    would     |   1   | -0.362308947711 | 0.0127544751985 |
| word_count_subset |     even     |   1   |  -0.51173855127 | 0.0199612760261 |
| word_count_subset |     work     |   1   | -0.621700012425 | 0.0230330597946 |
| word_count_subset |    money     |   1   | -0.897884155776 | 0.0339936732836 |
| word_count_subset |    broke     |   1   |  -1.65796447838 | 0.0580878907166 |
| word_count_subset |    waste     |   1   |   -2.042773611  | 0.0644702932444 |
| word_count_subset |    return    |   1   |  -2.11729659718 | 0.0578650807241 |
| word_count_subset | disappointed |   1   |  -2.35509250061 | 0.0504149888557 |
+-------------------+--------------+-------+-----------------+-----------------+
[21 rows x 5 columns]

Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?


In [39]:
simple_model.coefficients[simple_model.coefficients['value'] > 0]['value'].size() - 1


Out[39]:
10

Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?


In [44]:
positive_significant_words = simple_model.coefficients[simple_model.coefficients['value'] > 0]
positive_significant_words

for w in positive_significant_words['index']:
    print sentiment_model.coefficients[sentiment_model.coefficients['index'] == w]


+-------------+-------+-------+---------------+--------+
|     name    | index | class |     value     | stderr |
+-------------+-------+-------+---------------+--------+
| (intercept) |  None |   1   | 1.30337080544 |  None  |
+-------------+-------+-------+---------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+---------------+--------+
|    name    | index | class |     value     | stderr |
+------------+-------+-------+---------------+--------+
| word_count |  love |   1   | 1.43301685439 |  None  |
+------------+-------+-------+---------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+--------+-------+----------------+--------+
|    name    | index  | class |     value      | stderr |
+------------+--------+-------+----------------+--------+
| word_count | little |   1   | 0.674162457499 |  None  |
+------------+--------+-------+----------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+--------------+--------+
|    name    | index | class |    value     | stderr |
+------------+-------+-------+--------------+--------+
| word_count | loves |   1   | 1.5664851757 |  None  |
+------------+-------+-------+--------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+----------------+--------+
|    name    | index | class |     value      | stderr |
+------------+-------+-------+----------------+--------+
| word_count |  well |   1   | 0.627964877567 |  None  |
+------------+-------+-------+----------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+---------------+--------+
|    name    | index | class |     value     | stderr |
+------------+-------+-------+---------------+--------+
| word_count | great |   1   | 1.31459245039 |  None  |
+------------+-------+-------+---------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+---------------+--------+
|    name    | index | class |     value     | stderr |
+------------+-------+-------+---------------+--------+
| word_count |  easy |   1   | 1.21346937822 |  None  |
+------------+-------+-------+---------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+----------------+--------+
|    name    | index | class |     value      | stderr |
+------------+-------+-------+----------------+--------+
| word_count |  able |   1   | 0.174331272552 |  None  |
+------------+-------+-------+----------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+---------+-------+---------------+--------+
|    name    |  index  | class |     value     | stderr |
+------------+---------+-------+---------------+--------+
| word_count | perfect |   1   | 1.75190114392 |  None  |
+------------+---------+-------+---------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+------------------+--------+
|    name    | index | class |      value       | stderr |
+------------+-------+-------+------------------+--------+
| word_count |  old  |   1   | 0.00912230113671 |  None  |
+------------+-------+-------+------------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
+------------+-------+-------+----------------+--------+
|    name    | index | class |     value      | stderr |
+------------+-------+-------+----------------+--------+
| word_count |  car  |   1   | 0.195263670618 |  None  |
+------------+-------+-------+----------------+--------+
[? rows x 5 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
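The same check can be written more compactly. Here is a sketch using SFrame.filter_by, dropping the intercept row first since its index is None:

pos_words = [w for w in positive_significant_words['index'] if w is not None]
overlap = sentiment_model.coefficients.filter_by(pos_words, 'index')
print overlap[['index', 'value']].sort('value', ascending=False)
print (overlap['value'] > 0).sum(), "of", len(pos_words), "words are also positive in the sentiment_model"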

Comparing models

We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.

First, compute the classification accuracy of the sentiment_model on the train_data:


In [46]:
get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])


Out[46]:
0.979440247046831

Now, compute the classification accuracy of the simple_model on the train_data:


In [47]:
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])


Out[47]:
0.8668150746537147

Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?

Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:


In [58]:
round(get_classification_accuracy(sentiment_model, test_data, test_data['sentiment']), 2)


Out[58]:
0.91

Next, we will compute the classification accuracy of the simple_model on the test_data:


In [49]:
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])


Out[49]:
0.8693004559635229

Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?

Baseline: Majority class prediction

It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier. The majority class classifier simply predicts the majority class for all data points. At the very least, your model should comfortably beat the majority class classifier; otherwise, it is (usually) pointless.
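For reference, here is a minimal sketch of such a baseline (the helper name is ours, not a GraphLab function): find the majority label in the training set and score that constant prediction against the test labels.

def majority_class_accuracy(train_labels, test_labels):
    # Pick whichever label (+1 or -1) is more common in the training data ...
    majority_label = +1 if (train_labels == +1).sum() >= (train_labels == -1).sum() else -1
    # ... and measure how often that constant prediction matches the test labels.
    return float((test_labels == majority_label).sum()) / test_labels.size()

# majority_class_accuracy(train_data['sentiment'], test_data['sentiment'])
# should agree with the value computed below (about 0.84).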

What is the majority class in the train_data?


In [50]:
num_positive  = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative


112164
21252

Now compute the accuracy of the majority class classifier on test_data.

Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).


In [59]:
num_positive_test  = (test_data['sentiment'] == +1).sum()
num_negative_test = (test_data['sentiment'] == -1).sum()
print num_positive_test
print num_negative_test

majority_accuracy = float(num_positive_test) / test_data['sentiment'].size()
print round(majority_accuracy, 2)


28095
5241
0.84

Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?


In [54]:
print "Yes"


Yes

In [55]:
graphlab.version


Out[55]:
'2.1'
