Logistic Regression with L2 regularization

In this notebook, you will implement your own logistic regression classifier with L2 regularization. You will do the following:

  • Extract features from Amazon product reviews.
  • Write a function to compute the derivative of the log likelihood function with an L2 penalty with respect to a single coefficient.
  • Implement gradient ascent with an L2 penalty.
  • Empirically explore how the L2 penalty can ameliorate overfitting.

Importing Libraries


In [5]:
import os
import zipfile
import string
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib.pyplot as plt
%matplotlib inline

Unzipping files with Amazon Baby Products Reviews

For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.


In [6]:
# Put the files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]

In [7]:
# Filename of unzipped file
unzipped_file = 'amazon_baby_subset.csv'

In [8]:
# If the unzipped file is not in files_list, unzip the file
if unzipped_file not in files_list:
    zip_file = unzipped_file + '.zip'
    unzipping = zipfile.ZipFile(zip_file)
    unzipping.extractall()
    unzipping.close()
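
As a side note, since ZipFile supports the with statement (Python 2.7 and later), a context-manager version would close the archive automatically; a minimal sketch of the same step:

# Minimal sketch (same effect as the cell above): the context manager closes the
# archive even if extractall() raises an exception.
if unzipped_file not in files_list:
    with zipfile.ZipFile(unzipped_file + '.zip') as zip_archive:
        zip_archive.extractall()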

Loading the products data

We will use a dataset consisting of baby product reviews on Amazon.com.


In [9]:
products = pd.read_csv("amazon_baby_subset.csv")

Now, let us see a preview of what the dataset looks like.


In [10]:
products.head()


Out[10]:
name review rating sentiment
0 Stop Pacifier Sucking without tears with Thumb... All of my kids have cried non-stop when I trie... 5 1
1 Nature's Lullabies Second Year Sticker Calendar We wanted to get something to keep track of ou... 5 1
2 Nature's Lullabies Second Year Sticker Calendar My daughter had her 1st baby over a year ago. ... 5 1
3 Lamaze Peekaboo, I Love You One of baby's first and favorite books, and it... 4 1
4 SoftPlay Peek-A-Boo Where's Elmo A Children's ... Very cute interactive book! My son loves this ... 5 1

One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.


In [11]:
products['sentiment']


Out[11]:
0        1
1        1
2        1
3        1
4        1
5        1
6        1
7        1
8        1
9        1
10       1
11       1
12       1
13       1
14       1
15       1
16       1
17       1
18       1
19       1
20       1
21       1
22       1
23       1
24       1
25       1
26       1
27       1
28       1
29       1
        ..
53042   -1
53043   -1
53044   -1
53045   -1
53046   -1
53047   -1
53048   -1
53049   -1
53050   -1
53051   -1
53052   -1
53053   -1
53054   -1
53055   -1
53056   -1
53057   -1
53058   -1
53059   -1
53060   -1
53061   -1
53062   -1
53063   -1
53064   -1
53065   -1
53066   -1
53067   -1
53068   -1
53069   -1
53070   -1
53071   -1
Name: sentiment, dtype: int64

Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.


In [12]:
products.head(10)['name']


Out[12]:
0    Stop Pacifier Sucking without tears with Thumb...
1      Nature's Lullabies Second Year Sticker Calendar
2      Nature's Lullabies Second Year Sticker Calendar
3                          Lamaze Peekaboo, I Love You
4    SoftPlay Peek-A-Boo Where's Elmo A Children's ...
5                            Our Baby Girl Memory Book
6    Hunnt® Falling Flowers and Birds Kids Nurs...
7    Blessed By Pope Benedict XVI Divine Mercy Full...
8    Cloth Diaper Pins Stainless Steel Traditional ...
9    Cloth Diaper Pins Stainless Steel Traditional ...
Name: name, dtype: object

In [13]:
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])


# of positive reviews = 26579
# of negative reviews = 26493

Apply text cleaning on the review data

In this section, we will perform some simple feature cleaning. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). The 193 most frequent words have been compiled into a JSON file.

Now, we will load these words from this JSON file:


In [14]:
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
    important_words = json.load(f)
important_words = [str(s) for s in important_words]

In [15]:
print important_words


['baby', 'one', 'great', 'love', 'use', 'would', 'like', 'easy', 'little', 'seat', 'old', 'well', 'get', 'also', 'really', 'son', 'time', 'bought', 'product', 'good', 'daughter', 'much', 'loves', 'stroller', 'put', 'months', 'car', 'still', 'back', 'used', 'recommend', 'first', 'even', 'perfect', 'nice', 'bag', 'two', 'using', 'got', 'fit', 'around', 'diaper', 'enough', 'month', 'price', 'go', 'could', 'soft', 'since', 'buy', 'room', 'works', 'made', 'child', 'keep', 'size', 'small', 'need', 'year', 'big', 'make', 'take', 'easily', 'think', 'crib', 'clean', 'way', 'quality', 'thing', 'better', 'without', 'set', 'new', 'every', 'cute', 'best', 'bottles', 'work', 'purchased', 'right', 'lot', 'side', 'happy', 'comfortable', 'toy', 'able', 'kids', 'bit', 'night', 'long', 'fits', 'see', 'us', 'another', 'play', 'day', 'money', 'monitor', 'tried', 'thought', 'never', 'item', 'hard', 'plastic', 'however', 'disappointed', 'reviews', 'something', 'going', 'pump', 'bottle', 'cup', 'waste', 'return', 'amazon', 'different', 'top', 'want', 'problem', 'know', 'water', 'try', 'received', 'sure', 'times', 'chair', 'find', 'hold', 'gate', 'open', 'bottom', 'away', 'actually', 'cheap', 'worked', 'getting', 'ordered', 'came', 'milk', 'bad', 'part', 'worth', 'found', 'cover', 'many', 'design', 'looking', 'weeks', 'say', 'wanted', 'look', 'place', 'purchase', 'looks', 'second', 'piece', 'box', 'pretty', 'trying', 'difficult', 'together', 'though', 'give', 'started', 'anything', 'last', 'company', 'come', 'returned', 'maybe', 'took', 'broke', 'makes', 'stay', 'instead', 'idea', 'head', 'said', 'less', 'went', 'working', 'high', 'unit', 'seems', 'picture', 'completely', 'wish', 'buying', 'babies', 'won', 'tub', 'almost', 'either']

Now, we will perform 2 simple data transformations:

  1. Remove punctuation using Python's built-in string functionality.
  2. Compute word counts (only for important_words)

We start with Step 1. Before removing the punctuation from the strings in the review column, we will fill all NA values with an empty string.


In [16]:
products["review"] = products["review"].fillna("")

Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.


In [17]:
products["review_clean"] = products["review"].str.translate(None, string.punctuation)

Now we proceed with Step 2. For each word in important_words, we compute a count of the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words, which holds a count of the number of times the respective word occurs in the review text. Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words, and the number of occurrences of a given word is counted.


In [18]:
for word in important_words:
    products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
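
As a quick sanity check (purely illustrative; the row index 3 and the word 'love' are arbitrary choices), we can count a word in one cleaned review by hand and compare it against the generated column. The two printed numbers should match by construction.

# Count 'love' in a single cleaned review and compare with the corresponding column.
example_review = products['review_clean'].iloc[3]
print example_review.split().count('love'), products['love'].iloc[3]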

The products DataFrame now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.


In [19]:
products['perfect']


Out[19]:
0        0
1        0
2        0
3        1
4        0
5        0
6        0
7        0
8        0
9        0
10       0
11       1
12       0
13       0
14       0
15       0
16       0
17       0
18       0
19       0
20       0
21       0
22       1
23       0
24       1
25       0
26       0
27       1
28       0
29       0
        ..
53042    0
53043    0
53044    0
53045    0
53046    0
53047    0
53048    0
53049    0
53050    0
53051    1
53052    0
53053    0
53054    1
53055    0
53056    0
53057    0
53058    0
53059    0
53060    0
53061    0
53062    0
53063    0
53064    0
53065    0
53066    0
53067    0
53068    0
53069    0
53070    0
53071    0
Name: perfect, dtype: int64

Train-Validation split

We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set.

Note: In previous assignments, we called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining the performance of various candidate models (i.e., models with different parameters) should be done on a validation set, while the evaluation of the selected model should always be done on a test set.

Loading the JSON files with the indices of the training data and the validation data into lists.


In [20]:
with open('module-4-assignment-train-idx.json', 'r') as f:
    train_idx_lst = json.load(f)
train_idx_lst = [int(entry) for entry in train_idx_lst]

In [21]:
with open('module-4-assignment-validation-idx.json', 'r') as f:
    validation_idx_lst = json.load(f)
validation_idx_lst = [int(entry) for entry in validation_idx_lst]

Using the lists of training data indices and validation data indices to get a DataFrame with the training data and a DataFrame with the validation data.


In [22]:
train_data = products.ix[train_idx_lst]
validation_data = products.ix[validation_idx_lst]

In [23]:
print 'Training set   : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)


Training set   : 42361 data points
Validation set : 10711 data points

Convert DataFrame to NumPy array

Just like in the second assignment of the previous module, we provide you with a function that extracts columns from a DataFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.

Note: The feature matrix includes an additional column 'intercept' filled with 1's to account for the intercept term.


In [24]:
def get_numpy_data(data_frame, features, label):
    # Add a constant column of 1's for the intercept term
    data_frame['intercept'] = 1
    features = ['intercept'] + features
    # Extract the selected feature columns as a 2D NumPy array
    feature_matrix = data_frame.as_matrix(columns=features)
    # Extract the label column as a 1D NumPy array
    label_array = data_frame[label].values
    return(feature_matrix, label_array)
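
If your pandas version no longer provides as_matrix (it was removed in later releases; this is an environment assumption, not something required by the assignment), the .values attribute returns the same NumPy array. A sketch with a hypothetical helper name:

# Sketch of an as_matrix-free variant of the helper above (assumption: newer pandas).
def get_numpy_data_values(data_frame, features, label):
    data_frame['intercept'] = 1
    features = ['intercept'] + features
    feature_matrix = data_frame[features].values   # same array as as_matrix(columns=features)
    label_array = data_frame[label].values
    return (feature_matrix, label_array)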

We convert both the training and validation sets into NumPy arrays.

Warning: This may take a few minutes.


In [25]:
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')

Building on the logistic regression assignment with no L2 penalty

Let us now build on the Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:

$$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$

where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.

We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)


In [26]:
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
    
    # Take the dot product of feature_matrix and coefficients
    score = np.dot(feature_matrix, coefficients)
    
    # Compute P(y_i = +1 | x_i, w) using the link function
    predictions = 1.0/(1.0 + np.exp(-score))
    
    # return predictions
    return predictions
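
A tiny check with made-up numbers (not from the dataset) helps confirm the link function behaves as expected; the first column of 1's plays the role of the intercept:

# Toy example: two data points, intercept plus two word-count features.
toy_features = np.array([[1., 2., 0.],
                         [1., 0., 3.]])
toy_coefficients = np.array([0.5, -1., 0.25])
print predict_probability(toy_features, toy_coefficients)
# The scores are [0.5 - 2.0, 0.5 + 0.75] = [-1.5, 1.25], so the output is roughly
# [0.18, 0.78]: the first point leans negative, the second positive.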

Adding L2 penalty

Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.

Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$

Adding L2 penalty to the derivative

It takes only a small modification to add an L2 penalty. All terms indicated in red refer to terms that were added due to the L2 penalty.

  • Recall from the lecture that the link function is still the sigmoid: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$
  • We add the L2 penalty term to the per-coefficient derivative of log likelihood: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j } $$

The per-coefficient derivative for logistic regression with an L2 penalty is as follows: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j } $$ and for the intercept term, we have $$ \frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$

Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.

Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:

  • errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
  • feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
  • coefficient containing the current value of coefficient $w_j$.
  • l2_penalty representing the L2 penalty constant $\lambda$
  • feature_is_constant telling whether the $j$-th feature is constant or not.

In [27]:
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant): 
    
    # Compute the dot product of errors and feature
    derivative = np.dot(feature.transpose(), errors)

    # add L2 penalty term for any feature that isn't the intercept.
    if not feature_is_constant: 
        derivative = derivative - 2.0*l2_penalty*coefficient
        
    return derivative
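
Before answering the quiz question below, a toy check (made-up numbers) makes the behavior of the flag explicit: for a constant (intercept) feature the penalty term is skipped, otherwise $-2\lambda w_j$ is subtracted.

# Toy check: same errors/feature/coefficient, with and without the intercept flag.
toy_errors = np.array([0.2, -0.1, 0.4])
toy_feature = np.array([1., 1., 1.])
print feature_derivative_with_L2(toy_errors, toy_feature, 3.0, 10.0, True)   # 0.5 (no penalty)
print feature_derivative_with_L2(toy_errors, toy_feature, 3.0, 10.0, False)  # 0.5 - 2*10*3 = -59.5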

Quiz question: In the code above, was the intercept term regularized?

No. The penalty term $-2\lambda w_j$ is subtracted only when feature_is_constant is False; the intercept is passed in with feature_is_constant=True, so it is left unregularized.

To verify the correctness of the gradient ascent algorithm, we provide a function for computing the log likelihood (covered in an advanced optional video in the last assignment; we use its numerically stable form here).

$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$

In [28]:
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
    indicator = (sentiment==+1)
    scores = np.dot(feature_matrix, coefficients)
    
    lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
    
    return lp
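
A quick illustration with toy numbers (chosen arbitrarily) of how the penalty term affects the value: with a nonzero non-intercept coefficient, a larger l2_penalty can only lower the result.

# Toy check: the same coefficients evaluated with l2_penalty = 0 and l2_penalty = 10.
toy_features = np.array([[1., 2.],
                         [1., -1.]])
toy_sentiment = np.array([+1, -1])
toy_coefficients = np.array([0.0, 0.5])
print compute_log_likelihood_with_L2(toy_features, toy_sentiment, toy_coefficients, 0.0)
print compute_log_likelihood_with_L2(toy_features, toy_sentiment, toy_coefficients, 10.0)
# The second value is smaller by exactly l2_penalty * 0.5**2 = 2.5.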

Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?

Decreases. Since $\lambda \geq 0$ and $\|\mathbf{w}\|_2^2 \geq 0$, the added term $-\lambda\|\mathbf{w}\|_2^2$ is never positive, so it can only lower $\ell\ell(\mathbf{w})$.

The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.


In [29]:
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
    coefficients = np.array(initial_coefficients) # make sure it's a numpy array
    for itr in xrange(max_iter):
        # Predict P(y_i = +1|x_i,w) using your predict_probability() function
        predictions = predict_probability(feature_matrix, coefficients)
        
        # Compute indicator value for (y_i = +1)
        indicator = (sentiment==+1)
        
        # Compute the errors as indicator - predictions
        errors = indicator - predictions
        for j in xrange(len(coefficients)): # loop over each coefficient
            is_intercept = (j == 0)
            # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
            # Compute the derivative for coefficients[j]. Save it in a variable called derivative
            derivative = feature_derivative_with_L2(errors, feature_matrix[:,j],coefficients[j],l2_penalty, is_intercept)
            
            # add the step size times the derivative to the current coefficient
            coefficients[j] = coefficients[j] + step_size*derivative
        
        # Checking whether log likelihood is increasing
        if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
        or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
            lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
            print 'iteration %*d: log likelihood of observed labels = %.8f' % \
                (int(np.ceil(np.log10(max_iter))), itr, lp)
    return coefficients

Explore effects of L2 regularization

Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.

Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.


In [30]:
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
                                                     initial_coefficients=np.zeros(194),
                                                     step_size=5e-6, l2_penalty=0, max_iter=501)


iteration   0: log likelihood of observed labels = -29179.39138303
iteration   1: log likelihood of observed labels = -29003.71259047
iteration   2: log likelihood of observed labels = -28834.66187288
iteration   3: log likelihood of observed labels = -28671.70781507
iteration   4: log likelihood of observed labels = -28514.43078198
iteration   5: log likelihood of observed labels = -28362.48344665
iteration   6: log likelihood of observed labels = -28215.56713122
iteration   7: log likelihood of observed labels = -28073.41743783
iteration   8: log likelihood of observed labels = -27935.79536396
iteration   9: log likelihood of observed labels = -27802.48168669
iteration  10: log likelihood of observed labels = -27673.27331484
iteration  11: log likelihood of observed labels = -27547.98083656
iteration  12: log likelihood of observed labels = -27426.42679977
iteration  13: log likelihood of observed labels = -27308.44444728
iteration  14: log likelihood of observed labels = -27193.87673876
iteration  15: log likelihood of observed labels = -27082.57555831
iteration  20: log likelihood of observed labels = -26570.43059938
iteration  30: log likelihood of observed labels = -25725.48742389
iteration  40: log likelihood of observed labels = -25055.53326910
iteration  50: log likelihood of observed labels = -24509.63590026
iteration  60: log likelihood of observed labels = -24054.97906083
iteration  70: log likelihood of observed labels = -23669.51640848
iteration  80: log likelihood of observed labels = -23337.89167628
iteration  90: log likelihood of observed labels = -23049.07066021
iteration 100: log likelihood of observed labels = -22794.90974921
iteration 200: log likelihood of observed labels = -21283.29527353
iteration 300: log likelihood of observed labels = -20570.97485473
iteration 400: log likelihood of observed labels = -20152.21466944
iteration 500: log likelihood of observed labels = -19876.62333410

In [31]:
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
                                                      initial_coefficients=np.zeros(194),
                                                      step_size=5e-6, l2_penalty=4, max_iter=501)


iteration   0: log likelihood of observed labels = -29179.39508175
iteration   1: log likelihood of observed labels = -29003.73417180
iteration   2: log likelihood of observed labels = -28834.71441858
iteration   3: log likelihood of observed labels = -28671.80345068
iteration   4: log likelihood of observed labels = -28514.58077957
iteration   5: log likelihood of observed labels = -28362.69830317
iteration   6: log likelihood of observed labels = -28215.85663259
iteration   7: log likelihood of observed labels = -28073.79071393
iteration   8: log likelihood of observed labels = -27936.26093762
iteration   9: log likelihood of observed labels = -27803.04751805
iteration  10: log likelihood of observed labels = -27673.94684207
iteration  11: log likelihood of observed labels = -27548.76901327
iteration  12: log likelihood of observed labels = -27427.33612958
iteration  13: log likelihood of observed labels = -27309.48101569
iteration  14: log likelihood of observed labels = -27195.04624253
iteration  15: log likelihood of observed labels = -27083.88333261
iteration  20: log likelihood of observed labels = -26572.49874392
iteration  30: log likelihood of observed labels = -25729.32604153
iteration  40: log likelihood of observed labels = -25061.34245801
iteration  50: log likelihood of observed labels = -24517.52091982
iteration  60: log likelihood of observed labels = -24064.99093939
iteration  70: log likelihood of observed labels = -23681.67373669
iteration  80: log likelihood of observed labels = -23352.19298741
iteration  90: log likelihood of observed labels = -23065.50180166
iteration 100: log likelihood of observed labels = -22813.44844580
iteration 200: log likelihood of observed labels = -21321.14164794
iteration 300: log likelihood of observed labels = -20624.98634439
iteration 400: log likelihood of observed labels = -20219.92048845
iteration 500: log likelihood of observed labels = -19956.11341777

In [32]:
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
                                                      initial_coefficients=np.zeros(194),
                                                      step_size=5e-6, l2_penalty=10, max_iter=501)


iteration   0: log likelihood of observed labels = -29179.40062984
iteration   1: log likelihood of observed labels = -29003.76654163
iteration   2: log likelihood of observed labels = -28834.79322654
iteration   3: log likelihood of observed labels = -28671.94687528
iteration   4: log likelihood of observed labels = -28514.80571589
iteration   5: log likelihood of observed labels = -28363.02048079
iteration   6: log likelihood of observed labels = -28216.29071186
iteration   7: log likelihood of observed labels = -28074.35036891
iteration   8: log likelihood of observed labels = -27936.95892966
iteration   9: log likelihood of observed labels = -27803.89576265
iteration  10: log likelihood of observed labels = -27674.95647005
iteration  11: log likelihood of observed labels = -27549.95042714
iteration  12: log likelihood of observed labels = -27428.69905549
iteration  13: log likelihood of observed labels = -27311.03455140
iteration  14: log likelihood of observed labels = -27196.79890162
iteration  15: log likelihood of observed labels = -27085.84308528
iteration  20: log likelihood of observed labels = -26575.59697506
iteration  30: log likelihood of observed labels = -25735.07304608
iteration  40: log likelihood of observed labels = -25070.03447306
iteration  50: log likelihood of observed labels = -24529.31188025
iteration  60: log likelihood of observed labels = -24079.95349572
iteration  70: log likelihood of observed labels = -23699.83199186
iteration  80: log likelihood of observed labels = -23373.54108747
iteration  90: log likelihood of observed labels = -23090.01500055
iteration 100: log likelihood of observed labels = -22841.08995135
iteration 200: log likelihood of observed labels = -21377.25595328
iteration 300: log likelihood of observed labels = -20704.63995428
iteration 400: log likelihood of observed labels = -20319.25685307
iteration 500: log likelihood of observed labels = -20072.16321721

In [33]:
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
                                                       initial_coefficients=np.zeros(194),
                                                       step_size=5e-6, l2_penalty=1e2, max_iter=501)


iteration   0: log likelihood of observed labels = -29179.48385120
iteration   1: log likelihood of observed labels = -29004.25177457
iteration   2: log likelihood of observed labels = -28835.97382190
iteration   3: log likelihood of observed labels = -28674.09410083
iteration   4: log likelihood of observed labels = -28518.17112932
iteration   5: log likelihood of observed labels = -28367.83774654
iteration   6: log likelihood of observed labels = -28222.77708939
iteration   7: log likelihood of observed labels = -28082.70799392
iteration   8: log likelihood of observed labels = -27947.37595368
iteration   9: log likelihood of observed labels = -27816.54738615
iteration  10: log likelihood of observed labels = -27690.00588850
iteration  11: log likelihood of observed labels = -27567.54970126
iteration  12: log likelihood of observed labels = -27448.98991327
iteration  13: log likelihood of observed labels = -27334.14912742
iteration  14: log likelihood of observed labels = -27222.86041863
iteration  15: log likelihood of observed labels = -27114.96648229
iteration  20: log likelihood of observed labels = -26621.50201299
iteration  30: log likelihood of observed labels = -25819.72803950
iteration  40: log likelihood of observed labels = -25197.34035501
iteration  50: log likelihood of observed labels = -24701.03698195
iteration  60: log likelihood of observed labels = -24296.66378580
iteration  70: log likelihood of observed labels = -23961.38842316
iteration  80: log likelihood of observed labels = -23679.38088853
iteration  90: log likelihood of observed labels = -23439.31824267
iteration 100: log likelihood of observed labels = -23232.88192018
iteration 200: log likelihood of observed labels = -22133.50726528
iteration 300: log likelihood of observed labels = -21730.03957488
iteration 400: log likelihood of observed labels = -21545.87572145
iteration 500: log likelihood of observed labels = -21451.95551390

In [34]:
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
                                                       initial_coefficients=np.zeros(194),
                                                       step_size=5e-6, l2_penalty=1e3, max_iter=501)


iteration   0: log likelihood of observed labels = -29180.31606471
iteration   1: log likelihood of observed labels = -29009.07176112
iteration   2: log likelihood of observed labels = -28847.62378912
iteration   3: log likelihood of observed labels = -28695.14439397
iteration   4: log likelihood of observed labels = -28550.95060743
iteration   5: log likelihood of observed labels = -28414.45771129
iteration   6: log likelihood of observed labels = -28285.15124375
iteration   7: log likelihood of observed labels = -28162.56976044
iteration   8: log likelihood of observed labels = -28046.29387744
iteration   9: log likelihood of observed labels = -27935.93902900
iteration  10: log likelihood of observed labels = -27831.15045502
iteration  11: log likelihood of observed labels = -27731.59955260
iteration  12: log likelihood of observed labels = -27636.98108219
iteration  13: log likelihood of observed labels = -27547.01092670
iteration  14: log likelihood of observed labels = -27461.42422295
iteration  15: log likelihood of observed labels = -27379.97375625
iteration  20: log likelihood of observed labels = -27027.18208317
iteration  30: log likelihood of observed labels = -26527.22737267
iteration  40: log likelihood of observed labels = -26206.59048765
iteration  50: log likelihood of observed labels = -25995.96903148
iteration  60: log likelihood of observed labels = -25854.95710284
iteration  70: log likelihood of observed labels = -25759.08109950
iteration  80: log likelihood of observed labels = -25693.05688014
iteration  90: log likelihood of observed labels = -25647.09929349
iteration 100: log likelihood of observed labels = -25614.81468705
iteration 200: log likelihood of observed labels = -25536.20998919
iteration 300: log likelihood of observed labels = -25532.57691220
iteration 400: log likelihood of observed labels = -25532.35543765
iteration 500: log likelihood of observed labels = -25532.33970049

In [35]:
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
                                                       initial_coefficients=np.zeros(194),
                                                       step_size=5e-6, l2_penalty=1e5, max_iter=501)


iteration   0: log likelihood of observed labels = -29271.85955115
iteration   1: log likelihood of observed labels = -29271.71006589
iteration   2: log likelihood of observed labels = -29271.65738833
iteration   3: log likelihood of observed labels = -29271.61189923
iteration   4: log likelihood of observed labels = -29271.57079975
iteration   5: log likelihood of observed labels = -29271.53358505
iteration   6: log likelihood of observed labels = -29271.49988440
iteration   7: log likelihood of observed labels = -29271.46936584
iteration   8: log likelihood of observed labels = -29271.44172890
iteration   9: log likelihood of observed labels = -29271.41670149
iteration  10: log likelihood of observed labels = -29271.39403722
iteration  11: log likelihood of observed labels = -29271.37351294
iteration  12: log likelihood of observed labels = -29271.35492661
iteration  13: log likelihood of observed labels = -29271.33809523
iteration  14: log likelihood of observed labels = -29271.32285309
iteration  15: log likelihood of observed labels = -29271.30905015
iteration  20: log likelihood of observed labels = -29271.25729150
iteration  30: log likelihood of observed labels = -29271.20657205
iteration  40: log likelihood of observed labels = -29271.18775997
iteration  50: log likelihood of observed labels = -29271.18078247
iteration  60: log likelihood of observed labels = -29271.17819447
iteration  70: log likelihood of observed labels = -29271.17723457
iteration  80: log likelihood of observed labels = -29271.17687853
iteration  90: log likelihood of observed labels = -29271.17674648
iteration 100: log likelihood of observed labels = -29271.17669750
iteration 200: log likelihood of observed labels = -29271.17666862
iteration 300: log likelihood of observed labels = -29271.17666862
iteration 400: log likelihood of observed labels = -29271.17666862
iteration 500: log likelihood of observed labels = -29271.17666862

Compare coefficients

We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.

Below is a simple helper function that returns a pandas Series of the learned coefficients indexed by feature name; we will use it to build this table.


In [36]:
def add_coefficients_to_table(coefficients, feature_names):
    return pd.Series(coefficients, index=feature_names)

Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.


In [37]:
coeff_L2_0_table = add_coefficients_to_table(coefficients_0_penalty, ['intercept'] + important_words)
coeff_L2_4_table = add_coefficients_to_table(coefficients_4_penalty, ['intercept'] + important_words)
coeff_L2_10_table = add_coefficients_to_table(coefficients_10_penalty, ['intercept'] + important_words)
coeff_L2_1e2_table = add_coefficients_to_table(coefficients_1e2_penalty, ['intercept'] + important_words)
coeff_L2_1e3_table = add_coefficients_to_table(coefficients_1e3_penalty, ['intercept'] + important_words)
coeff_L2_1e5_table = add_coefficients_to_table(coefficients_1e5_penalty, ['intercept'] + important_words)

Using the coefficients trained with L2 penalty 0, find the 5 most positive words (the words with the largest positive coefficients) and save them to positive_words. Similarly, find the 5 most negative words (the words with the most negative coefficients) and save them to negative_words.


In [38]:
positive_words = coeff_L2_0_table.sort_values(ascending=False)[0:5].index.tolist()
negative_words = coeff_L2_0_table.sort_values(ascending=True)[0:5].index.tolist()

Quiz Question. Which of the following is not listed in either positive_words or negative_words?


In [39]:
print "positive_words: ", positive_words
print "negative_words: ", negative_words


positive_words:  ['love', 'loves', 'easy', 'perfect', 'great']
negative_words:  ['disappointed', 'money', 'return', 'waste', 'returned']

Plotting the Coefficient Path with Increase in L2 Penalty

Let us observe the effect of increasing the L2 penalty on the 10 words just selected.

First, let's put the 6 L2 penalty values we considered in a list.


In [40]:
l2_pen_vals = [0.0, 4.0, 10.0, 1.0e2, 1.0e3, 1.0e5]

Next, let's build a list of all the words we used as features for the classification model, plus the intercept.


In [41]:
feature_words_lst = ['intercept'] + important_words

Now, we will fill in 2 dictionaries, one keyed by the 5 positive words and the other keyed by the 5 negative words. For each key (word), we store a list of that word's coefficient values for the 6 different L2 penalties we considered.


In [42]:
pos_word_coeff_dict = {}

for curr_word in positive_words:
    # Finding the index of the word we are considering in the feature_words_lst
    word_index = feature_words_lst.index(curr_word)
    # Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.
    pos_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],
                                      coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],
                                      coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]

In [43]:
neg_word_coeff_dict = {}

for curr_word in negative_words:
    # Finding the index of the word we are considering in the feature_words_lst
    word_index = feature_words_lst.index(curr_word)
    # Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.
    neg_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],
                                      coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],
                                      coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]
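
The two cells above do the same bookkeeping for both word lists; as a design note, a dictionary comprehension over a list of the six coefficient vectors is an equivalent, more compact way to build them (a sketch, not required for the assignment):

# Equivalent construction of both dictionaries using a comprehension.
all_coefficients = [coefficients_0_penalty, coefficients_4_penalty, coefficients_10_penalty,
                    coefficients_1e2_penalty, coefficients_1e3_penalty, coefficients_1e5_penalty]
pos_word_coeff_dict = {word: [c[feature_words_lst.index(word)] for c in all_coefficients]
                       for word in positive_words}
neg_word_coeff_dict = {word: [c[feature_words_lst.index(word)] for c in all_coefficients]
                       for word in negative_words}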

Plotting coefficient path for positive words


In [44]:
plt.figure(figsize=(10,6))
for pos_word in positive_words:
    plt.semilogx(l2_pen_vals, pos_word_coeff_dict[pos_word], linewidth =2, label = pos_word  )
plt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = "black")
plt.axis([4.0, 1.0e5, -0.5, 1.5])
plt.title("Positive Words Coefficient Path", fontsize=18)
plt.xlabel("L2 Penalty ($\lambda$)", fontsize=18)
plt.ylabel("Coefficient Value", fontsize=18)
plt.legend(loc = "upper right", fontsize=18)


Out[44]:
<matplotlib.legend.Legend at 0x10bb36bd0>

Plotting coefficient path for negative words


In [45]:
plt.figure(figsize=(10,6))
for neg_word in negative_words:
    plt.semilogx(l2_pen_vals, neg_word_coeff_dict[neg_word], linewidth=2, label=neg_word)
plt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = "black")
plt.axis([4.0, 1.0e5, -1.5, 0.5])
plt.title("Negative Words Coefficient Path", fontsize=18)
plt.xlabel("L2 Penalty ($\lambda$)", fontsize=18)
plt.ylabel("Coefficient Value", fontsize=18)
plt.legend(loc = "lower right", fontsize=18)


Out[45]:
<matplotlib.legend.Legend at 0x10d354f50>

The following 2 questions relate to the 2 figures above.

Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.

True

Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)

False

Measuring accuracy

Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by

$$ \mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}} $$

Recall from lecture that the class prediction is calculated using $$ \hat{y}_i = \left\{ \begin{array}{ll} +1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\ -1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\ \end{array} \right. $$

Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.

Step 1: First, compute the scores as the dot product between feature_matrix and coefficients. Do this for both the training data and the validation data.


In [46]:
# Compute the scores as a dot product between feature_matrix and coefficients.
scores_l2_pen_0_train = np.dot(feature_matrix_train, coefficients_0_penalty)
scores_l2_pen_4_train = np.dot(feature_matrix_train, coefficients_4_penalty)
scores_l2_pen_10_train = np.dot(feature_matrix_train, coefficients_10_penalty)
scores_l2_pen_1e2_train = np.dot(feature_matrix_train, coefficients_1e2_penalty)
scores_l2_pen_1e3_train = np.dot(feature_matrix_train, coefficients_1e3_penalty)
scores_l2_pen_1e5_train = np.dot(feature_matrix_train, coefficients_1e5_penalty)

In [47]:
scores_l2_pen_0_valid = np.dot(feature_matrix_valid, coefficients_0_penalty)
scores_l2_pen_4_valid = np.dot(feature_matrix_valid, coefficients_4_penalty)
scores_l2_pen_10_valid = np.dot(feature_matrix_valid, coefficients_10_penalty)
scores_l2_pen_1e2_valid = np.dot(feature_matrix_valid, coefficients_1e2_penalty)
scores_l2_pen_1e3_valid = np.dot(feature_matrix_valid, coefficients_1e3_penalty)
scores_l2_pen_1e5_valid = np.dot(feature_matrix_valid, coefficients_1e5_penalty)

Step 2: Using the formula above, compute the class predictions from the scores.

First, we write a helper function that returns an array with the predictions.


In [48]:
def get_pred_from_score(scores_array):
    # Copy the scores so the input array is not modified in place
    predictions = np.copy(scores_array)
    # Replace <= 0 scores with the negative review classification (-1)
    predictions[predictions<=0] = -1
    # Replace > 0 scores with the positive review classification (+1)
    predictions[predictions>0] = 1
    
    return predictions
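
As a side note, np.where gives a one-line equivalent that also leaves its input untouched:

predictions = np.where(scores_array > 0, +1, -1)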

Now, getting the predictions for the training data and the validation data for the 6 L2 penalties we considered.


In [49]:
pred_l2_pen_0_train = get_pred_from_score(scores_l2_pen_0_train)
pred_l2_pen_4_train = get_pred_from_score(scores_l2_pen_4_train)
pred_l2_pen_10_train = get_pred_from_score(scores_l2_pen_10_train)
pred_l2_pen_1e2_train = get_pred_from_score(scores_l2_pen_1e2_train)
pred_l2_pen_1e3_train = get_pred_from_score(scores_l2_pen_1e3_train)
pred_l2_pen_1e5_train = get_pred_from_score(scores_l2_pen_1e5_train)

In [50]:
pred_l2_pen_0_valid = get_pred_from_score(scores_l2_pen_0_valid)
pred_l2_pen_4_valid = get_pred_from_score(scores_l2_pen_4_valid)
pred_l2_pen_10_valid = get_pred_from_score(scores_l2_pen_10_valid)
pred_l2_pen_1e2_valid = get_pred_from_score(scores_l2_pen_1e2_valid)
pred_l2_pen_1e3_valid = get_pred_from_score(scores_l2_pen_1e3_valid)
pred_l2_pen_1e5_valid = get_pred_from_score(scores_l2_pen_1e5_valid)

Step 3: Getting the accuracy for the training set and the validation set


In [51]:
train_accuracy = {}
train_accuracy[0]   = np.sum(pred_l2_pen_0_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[4]   = np.sum(pred_l2_pen_4_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[10]  = np.sum(pred_l2_pen_10_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e2] = np.sum(pred_l2_pen_1e2_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e3] = np.sum(pred_l2_pen_1e3_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e5] = np.sum(pred_l2_pen_1e5_train==sentiment_train)/float(len(sentiment_train))

validation_accuracy = {}
validation_accuracy[0]   = np.sum(pred_l2_pen_0_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[4]   = np.sum(pred_l2_pen_4_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[10]  = np.sum(pred_l2_pen_10_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e2] = np.sum(pred_l2_pen_1e2_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e3] = np.sum(pred_l2_pen_1e3_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e5] = np.sum(pred_l2_pen_1e5_valid==sentiment_valid)/float(len(sentiment_valid))
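
As an optional cross-check (a sketch; it assumes sklearn.metrics is available alongside the sklearn modules imported at the top), accuracy_score should agree with the manual computation above:

# Compare scikit-learn's accuracy against the manually computed value for L2 = 10.
from sklearn.metrics import accuracy_score
print accuracy_score(sentiment_valid, pred_l2_pen_10_valid), validation_accuracy[10]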

In [52]:
# Build a simple report
for key in sorted(validation_accuracy.keys()):
    print "L2 penalty = %g" % key
    print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
    print "--------------------------------------------------------------------------------"


L2 penalty = 0
train accuracy = 0.785156157787, validation_accuracy = 0.78143964149
--------------------------------------------------------------------------------
L2 penalty = 4
train accuracy = 0.785108944548, validation_accuracy = 0.781533003454
--------------------------------------------------------------------------------
L2 penalty = 10
train accuracy = 0.784990911452, validation_accuracy = 0.781719727383
--------------------------------------------------------------------------------
L2 penalty = 100
train accuracy = 0.783975826822, validation_accuracy = 0.781066193633
--------------------------------------------------------------------------------
L2 penalty = 1000
train accuracy = 0.775855149784, validation_accuracy = 0.771356549342
--------------------------------------------------------------------------------
L2 penalty = 100000
train accuracy = 0.680366374731, validation_accuracy = 0.667818130893
--------------------------------------------------------------------------------

Creating a list of tuples with entries of the form (accuracy, l2_penalty) for the training set and the validation set.


In [53]:
accuracy_training_data = [(train_accuracy[0], 0), (train_accuracy[4], 4), (train_accuracy[10], 10),
                          (train_accuracy[1e2], 1e2), (train_accuracy[1e3], 1e3), (train_accuracy[1e5], 1e5)]
accuracy_validation_data = [(validation_accuracy[0], 0), (validation_accuracy[4], 4), (validation_accuracy[10], 10),
                          (validation_accuracy[1e2], 1e2), (validation_accuracy[1e3], 1e3), (validation_accuracy[1e5], 1e5)]

Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the training data?


In [54]:
max(accuracy_training_data)[1]


Out[54]:
0

Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the validation data?


In [55]:
max(accuracy_validation_data)[1]


Out[55]:
10

Quiz question: Does the highest accuracy on the training data imply that the model is the best one?

No; the model with the highest accuracy on the training data probably suffers from overfitting. Accuracy on the validation set is a better guide for choosing among these models.

