Regression Week 5: LASSO (coordinate descent)

In this notebook, you will implement your very own LASSO solver via coordinate descent. You will:

  • Write a function to normalize features
  • Implement coordinate descent for LASSO
  • Explore effects of L1 penalty

Importing Libraries


In [47]:
import os
import zipfile
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline

Unzipping files with house sales data

The dataset is from house sales in King County, the region where the city of Seattle, WA is located.


In [48]:
# Put files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]

In [49]:
# Filenames of unzipped files
unzip_files = ['kc_house_data.csv','kc_house_train_data.csv', 'kc_house_test_data.csv']

In [50]:
# If an unzipped file is not in files_list, unzip it
for filename in unzip_files:
    if filename not in files_list:
        zip_file = filename + '.zip'
        unzipping = zipfile.ZipFile(zip_file)
        unzipping.extractall()
        unzipping.close()

Load in house sales data


In [51]:
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 
              'sqft_living15':float, 'grade':int, 'yr_renovated':int, 
              'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 
              'sqft_lot15':float, 'sqft_living':float, 'floors':str, 
              'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 
              'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}

In [52]:
sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
# In the dataset, 'floors' was defined with type string, 
# so we'll convert it to float before using it below
sales['floors'] = sales[ 'floors' ].astype(float)

If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the DataFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.

Import useful functions from previous notebook

As in Week 2, we convert the DataFrame into a 2D Numpy array. Copy and paste get_numpy_data() from the second notebook of Week 2.


In [53]:
def get_numpy_data(input_df, features, output):
    
    input_df['constant'] = 1.0 # Adding column 'constant' to input DataFrame with all values = 1.0
    features = ['constant'] + features # Adding 'constant' to the list of features

    feature_matrix = input_df.as_matrix(columns=features) # Convert the DataFrame columns in the features list into an np.ndarray
    output_array = input_df[output].values # Convert column with output feature into np.array
    
    return(feature_matrix, output_array)

Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights:


In [54]:
def predict_output(feature_matrix, weights):
    predictions = np.dot(feature_matrix, weights)
    return predictions

Normalize features

In the house dataset, features vary wildly in their relative magnitude: sqft_living is very large overall compared to bedrooms, for instance. As a result, the weight for sqft_living would be much smaller than the weight for bedrooms. This is problematic because "small" weights are dropped first as l1_penalty goes up.

To give equal consideration to all features, we need to normalize features as discussed in the lectures: we divide each feature by its 2-norm so that the transformed feature has norm 1.

Let's see how we can do this normalization easily with Numpy: let us first consider a small matrix.


In [55]:
X = np.array([[3.,5.,8.],[4.,12.,15.]])
print X


[[  3.   5.   8.]
 [  4.  12.  15.]]

Numpy provides a shorthand for computing 2-norms of each column:


In [56]:
norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])]
print norms


[  5.  13.  17.]

To normalize, apply element-wise division:


In [57]:
print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])]


[[ 0.6         0.38461538  0.47058824]
 [ 0.8         0.92307692  0.88235294]]

Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.


In [58]:
def normalize_features(feature_matrix):
    norms = np.linalg.norm(feature_matrix, axis=0)
    normalized_features = feature_matrix/norms
    return (normalized_features, norms)

To test the function, run the following:


In [59]:
features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]]))
print features
# should print
# [[ 0.6  0.6  0.6]
#  [ 0.8  0.8  0.8]]
print norms
# should print
# [5.  10.  15.]


[[ 0.6  0.6  0.6]
 [ 0.8  0.8  0.8]]
[  5.  10.  15.]

Implementing Coordinate Descent with normalized features

We seek to obtain a sparse set of weights by minimizing the LASSO cost function

SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|).

(By convention, we do not include w[0] in the L1 penalty term. We never want to push the intercept to zero.)

The absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). Instead, we will use coordinate descent: at each iteration, we will fix all weights but weight i and find the value of weight i that minimizes the objective. That is, we look for

argmin_{w[i]} [ SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|) ]

where all weights other than w[i] are held constant. We will optimize one w[i] at a time, circling through the weights multiple times.

  1. Pick a coordinate i
  2. Compute w[i] that minimizes the cost function SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|)
  3. Repeat Steps 1 and 2 for all coordinates, multiple times

For this notebook, we use cyclical coordinate descent with normalized features, where we cycle through coordinates 0 to (d-1) in order, and assume the features were normalized as discussed above. The formula for optimizing each coordinate is as follows:

       ┌ (ro[i] + lambda/2)     if ro[i] < -lambda/2
w[i] = ├ 0                      if -lambda/2 <= ro[i] <= lambda/2
       └ (ro[i] - lambda/2)     if ro[i] > lambda/2

where

ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ].

Note that we do not regularize the weight of the constant feature (intercept) w[0], so, for this weight, the update is simply:

w[0] = ro[0]
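
For intuition, the three-case update above is just soft-thresholding applied to ro[i]. The snippet below is a minimal standalone sketch of that rule (the name soft_threshold is illustrative only, not part of the assignment); the lasso_coordinate_descent_step() function implemented later in this notebook applies the same logic, plus the special case for the intercept.

def soft_threshold(ro_i, l1_penalty):
    # Shrink ro_i toward zero by lambda/2, and zero it out entirely when |ro_i| <= lambda/2
    if ro_i < -l1_penalty/2.0:
        return ro_i + l1_penalty/2.0
    elif ro_i > l1_penalty/2.0:
        return ro_i - l1_penalty/2.0
    else:
        return 0.0

print(soft_threshold( 3.0, 2.0))   # 2.0  (shrunk by lambda/2 = 1.0)
print(soft_threshold( 0.5, 2.0))   # 0.0  (|ro_i| <= lambda/2, so the weight is zeroed)
print(soft_threshold(-3.0, 2.0))   # -2.0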

Effect of L1 penalty

Let us consider a simple model with 2 features:


In [60]:
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)

Don't forget to normalize features:


In [61]:
simple_feature_matrix, norms = normalize_features(simple_feature_matrix)

We assign an arbitrary set of initial weights and inspect the values of ro[i]:


In [62]:
weights = np.array([1., 4., 1.])

Use predict_output() to make predictions on this data.


In [63]:
predictions = predict_output(simple_feature_matrix, weights)
outputs = sales['price'].values
errors = outputs - predictions

Compute the values of ro[i] for each feature in this simple model, using the formula given above:

ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ]

Hint: You can get a Numpy vector for feature_i using:

simple_feature_matrix[:,i]

In [64]:
ro = np.zeros(len(simple_features) + 1 , dtype = float)
for i in range(len(ro)):
    ro[i] = sum( simple_feature_matrix[:,i]*( errors + weights[i]*simple_feature_matrix[:,i] ) )
    print ro[i]


79400300.0145
87939470.8233
80966698.6662

QUIZ QUESTION

Recall that, whenever ro[i] falls between -l1_penalty/2 and l1_penalty/2, the corresponding weight w[i] is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of l1_penalty would not set w[1] to zero, but would set w[2] to zero, if we were to take a step in that coordinate?

Setting w[2] to zero requires $\lambda/2 \ge \rho_2$, while keeping w[1] non-zero requires $\lambda/2 < \rho_1$. This corresponds to L1 penalty values (or $\lambda$) in the interval $2\rho_2 \le \lambda < 2\rho_1$.


In [65]:
print 'From the printed values above, ro[2] < ro[1]. Thus, the condition is:'
print '[', '{:.2e}'.format(2.0*ro[2]) , ',' , '{:.2e}'.format(2.0*ro[1]) , ']'


From the printed values above, ro[2] < ro[1]. Thus, the condition is:
[ 1.62e+08 , 1.76e+08 ]

QUIZ QUESTION

What range of values of l1_penalty would set both w[1] and w[2] to zero, if we were to take a step in that coordinate?


In [66]:
max_w1_w2 = max( [2.0*ro[1], 2.0*ro[2]] )
print 'For all values larger than:', '{:.2e}'.format(max_w1_w2)


For all values larger than: 1.76e+08

So we can say that ro[i] quantifies the significance of the i-th feature: the larger the magnitude of ro[i], the more likely the i-th feature is to be retained.

Single Coordinate Descent Step

Using the formula above, implement coordinate descent that minimizes the cost function over a single feature i. Note that the intercept (weight 0) is not regularized. The function should accept a feature matrix, output, current weights, l1 penalty, and the index of the feature to optimize over, and should return the new weight for feature i.


In [67]:
def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty):
    
    # compute prediction
    predict_vals = predict_output(feature_matrix, weights)
    # compute error between output and predicted  values
    error_vals = output - predict_vals
    
    # compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ]
    ro_i = sum( feature_matrix[:,i]*( error_vals + weights[i]*feature_matrix[:,i] ) )

    if i == 0: # intercept -- do not regularize
        new_weight_i = ro_i 
    elif ro_i < -l1_penalty/2.0:
        new_weight_i = ro_i + l1_penalty/2.0
    elif ro_i > l1_penalty/2.0:
        new_weight_i = ro_i - l1_penalty/2.0
    else:
        new_weight_i = 0.0
    
    return new_weight_i

To test the function, run the following cell:


In [68]:
# should print 0.425558846691
print lasso_coordinate_descent_step(1, np.array([[3./np.sqrt(13),1./np.sqrt(10)],[2./np.sqrt(13),3./np.sqrt(10)]]), 
                                   np.array([1., 1.]), np.array([1., 4.]), 0.1)


0.425558846691

Cyclical coordinate descent

Now that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat.

How do we know when to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop.

For each iteration:

  1. As you loop over features in order and perform coordinate descent, measure how much each coordinate changes.
  2. After the loop, if the maximum change across all coordinates falls below the tolerance, stop. Otherwise, go back to step 1.

Return weights

IMPORTANT: when computing a new weight for coordinate i, make sure to incorporate the new weights for coordinates 0, 1, ..., i-1. One good way is to update your weights variable in-place. See the following pseudocode for an illustration.

for i in range(len(weights)):
    old_weights_i = weights[i] # remember old value of weight[i], as it will be overwritten
    # the following line uses new values for weight[0], weight[1], ..., weight[i-1]
    #     and old values for weight[i], ..., weight[d-1]
    weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)

    # use old_weights_i to compute change in coordinate
    ...

In [69]:
def lasso_cyclical_coordinate_descent(feat_matrix, output, initial_weights, l1_penalty, tolerance):
    
    # Initialize weights from a copy of initial_weights, so the caller's array is not modified in-place
    weights = np.array(initial_weights)
    
    # Set convergence flag to initially be False
    converged = False
    
    # Keep sweeping over the coordinates until the algorithm converges
    while not converged:
        
        # Track the largest weight change (over all coordinates i) in this sweep, for the convergence check
        max_delta_w = 0.0
    
        # For each i, find the new weight with the LASSO coordinate descent step and measure the change in that weight
        for i in range(len(weights)):

            # Store the old value of weight i
            prev_w_i = weights[i]
            # Find the new weight with the LASSO coordinate descent step
            weights[i] = lasso_coordinate_descent_step(i, feat_matrix, output, weights, l1_penalty)
            # Compute the change in the value of weight i
            delta_w_i = abs(prev_w_i - weights[i])
            
            # If the change in the current weight is larger than max_delta_w,
            # update max_delta_w to that change
            if delta_w_i > max_delta_w:
                max_delta_w = delta_w_i
            
        # If the largest weight change in this sweep is below the tolerance, the algorithm has converged
        if max_delta_w < tolerance:
            converged = True

    return weights

Using the following parameters, learn the weights on the sales dataset.


In [70]:
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
initial_weights = np.zeros(3)
l1_penalty = 1e7
tolerance = 1.0

First, create a normalized version of the feature matrix, normalized_simple_feature_matrix:


In [71]:
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
(normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features

Then, run your implementation of LASSO coordinate descent:


In [72]:
weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output,
                                            initial_weights, l1_penalty, tolerance)

QUIZ QUESTIONS

Q1. What is the RSS of the learned model on the normalized dataset?


In [73]:
# Rescaling the weights by simple_norms and applying them to the original features
# is equivalent to applying the learned weights to the normalized features
pred_simp_feat = predict_output(simple_feature_matrix, weights/simple_norms)
RSS = sum( (pred_simp_feat - sales['price'].values)**2 )
print '%.2e' % RSS


1.63e+15

Q2. Which features had weight zero at convergence?


In [74]:
# Adding 'intercept' to the list of features
total_feats = ['intercept'] + simple_features 
for feat, w_val in zip(total_feats, weights):
    if w_val==0.0:
        print "'" + feat + "'" + ' feature had weight 0.0 at convergence'


'bedrooms' feature had weight 0.0 at convergence

Evaluating LASSO fit with more features

Let us split the sales dataset into training and test sets.


In [75]:
train_data = pd.read_csv('kc_house_train_data.csv', dtype=dtype_dict)
test_data = pd.read_csv('kc_house_test_data.csv', dtype=dtype_dict)
# In the dataset, 'floors' was defined with type string, 
# so we'll convert it to float before using it below
train_data['floors'] = train_data[ 'floors' ].astype(float) 
test_data['floors'] = test_data[ 'floors' ].astype(float)

Let us consider the following set of features.


In [76]:
all_features = ['bedrooms',
                'bathrooms',
                'sqft_living',
                'sqft_lot',
                'floors',
                'waterfront', 
                'view', 
                'condition', 
                'grade',
                'sqft_above',
                'sqft_basement',
                'yr_built', 
                'yr_renovated']
my_output = 'price'

First, create a normalized feature matrix from the TRAINING data with these features. (Make sure you store the norms from the normalization, since we'll use them later.)


In [77]:
# First, creating feature matrix and output vector
(feat_matrix_train, output) = get_numpy_data(train_data, all_features, my_output)
(norm_feat_matrix_train, norms_train) = normalize_features(feat_matrix_train)

First, learn the weights with l1_penalty=1e7 on the training data. Initialize weights to all zeros, and set tolerance=1. Call the resulting weights weights1e7; you will need them later.


In [78]:
initial_weights = np.zeros( len(all_features) + 1 )
l1_penalty = 1e7
tolerance = 1.0

In [79]:
weights1e7 = lasso_cyclical_coordinate_descent(norm_feat_matrix_train, output,
                                               initial_weights, l1_penalty, tolerance)

QUIZ QUESTION

What features had non-zero weight in this case?


In [80]:
# Adding 'intercept' to the list of features
total_feats = ['intercept'] + all_features 
for feat, w_val in zip(total_feats, weights1e7):
    if w_val!=0.0:
        print "'" + feat + "'" + ' feature had weight not equal to 0.0 at convergence'


'intercept' feature had weight not equal to 0.0 at convergence
'sqft_living' feature had weight not equal to 0.0 at convergence
'waterfront' feature had weight not equal to 0.0 at convergence
'view' feature had weight not equal to 0.0 at convergence

Next, learn the weights with l1_penalty=1e8 on the training data. Initialize weights to all zeros, and set tolerance=1. Call the resulting weights weights1e8; you will need them later.


In [81]:
initial_weights = np.zeros( len(all_features) + 1 )
l1_penalty = 1e8
tolerance = 1.0

In [82]:
weights1e8 = lasso_cyclical_coordinate_descent(norm_feat_matrix_train, output,
                                               initial_weights, l1_penalty, tolerance)

QUIZ QUESTION

What features had non-zero weight in this case?


In [83]:
# total_feats defined above
for feat, w_val in zip(total_feats, weights1e8):
    if w_val!=0.0:
        print "'" + feat + "'" + ' feature had weight not equal to 0.0 at convergence'


'intercept' feature had weight not equal to 0.0 at convergence

Finally, learn the weights with l1_penalty=1e4 on the training data. Initialize weights to all zeros, and set tolerance=5e5. Call the resulting weights weights1e4; you will need them later. (This case will take quite a bit longer to converge than the others above.)


In [84]:
initial_weights = np.zeros( len(all_features) + 1 )
l1_penalty = 1e4
tolerance = 5e5

In [85]:
weights1e4 = lasso_cyclical_coordinate_descent(norm_feat_matrix_train, output,
                                               initial_weights, l1_penalty, tolerance)

QUIZ QUESTION

What features had non-zero weight in this case?


In [86]:
# total_feats defined above
for feat, w_val in zip(total_feats, weights1e4):
    if w_val!=0.0:
        print "'" + feat + "'" + ' feature had weight not equal to 0.0 at convergence'


'intercept' feature had weight not equal to 0.0 at convergence
'bedrooms' feature had weight not equal to 0.0 at convergence
'bathrooms' feature had weight not equal to 0.0 at convergence
'sqft_living' feature had weight not equal to 0.0 at convergence
'sqft_lot' feature had weight not equal to 0.0 at convergence
'floors' feature had weight not equal to 0.0 at convergence
'waterfront' feature had weight not equal to 0.0 at convergence
'view' feature had weight not equal to 0.0 at convergence
'condition' feature had weight not equal to 0.0 at convergence
'grade' feature had weight not equal to 0.0 at convergence
'sqft_above' feature had weight not equal to 0.0 at convergence
'sqft_basement' feature had weight not equal to 0.0 at convergence
'yr_built' feature had weight not equal to 0.0 at convergence
'yr_renovated' feature had weight not equal to 0.0 at convergence

Rescaling learned weights

Recall that we normalized our feature matrix before learning the weights. To use these weights on a test set, we must normalize the test data in the same way.

Alternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data:

In this case, we must rescale the resulting weights so that we can make predictions with the original features (a quick sanity check of this equivalence follows the steps below):

  1. Store the norms of the original features to a vector called norms:
    features, norms = normalize_features(features)
  2. Run Lasso on the normalized features and obtain a weights vector
  3. Compute the weights for the original features by performing element-wise division, i.e.
    weights_normalized = weights / norms
    Now, we can apply weights_normalized to the test data, without normalizing it!
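
As a quick sanity check of step 3 (a minimal sketch using the toy matrix from the normalization example above, with arbitrary weights chosen purely for illustration), applying the rescaled weights to the original features gives the same predictions as applying the learned weights to the normalized features:

X = np.array([[3., 5., 8.], [4., 12., 15.]])  # toy matrix from the normalization example
w = np.array([1., 2., 3.])                    # arbitrary weights, for illustration only
X_normalized, X_norms = normalize_features(X)
print(np.allclose(predict_output(X_normalized, w), predict_output(X, w / X_norms)))  # True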

Create a normalized version of each of the weights learned above (weights1e4, weights1e7, weights1e8).


In [87]:
normalized_weights1e4 = weights1e4/norms_train
normalized_weights1e7 = weights1e7/norms_train
normalized_weights1e8 = weights1e8/norms_train

In [88]:
print normalized_weights1e7[3]


161.317457646

To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then:

print normalized_weights1e7[3]

should return 161.31745624837794.

Evaluating each of the learned models on the test data

Let's now evaluate the three models on the test data:


In [89]:
(test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price')

Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix:


In [90]:
pred_1e4 = predict_output(test_feature_matrix, normalized_weights1e4)
RSS_1e4 = sum( (pred_1e4 - test_data['price'].values)**2 )
print '%.2e' % RSS_1e4


2.28e+14

In [91]:
pred_1e7 = predict_output(test_feature_matrix, normalized_weights1e7)
RSS_1e7 = sum( (pred_1e7 - test_data['price'].values)**2 )
print '%.2e' % RSS_1e7


2.76e+14

In [92]:
pred_1e8 = predict_output(test_feature_matrix, normalized_weights1e8)
RSS_1e8 = sum( (pred_1e8 - test_data['price'].values)**2 )
print '%.2e' % RSS_1e8


5.37e+14

QUIZ QUESTION

Which model performed best on the test data?

The model with l1_penalty = 1e4 and tolerance = 5e5 performed best on the test data, achieving the lowest RSS (2.28e+14).


In [ ]: