Regression Week 5: Feature Selection and LASSO (Interpretation)

In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:

  • Run LASSO with different L1 penalties.
  • Choose the best L1 penalty using a validation set.
  • Choose the best L1 penalty using a validation set, with an additional constraint on the size of the subset.

In the second notebook, you will implement your own LASSO solver, using coordinate descent.

Fire up GraphLab Create


In [1]:
import graphlab

Load in house sales data

The dataset is from house sales in King County, the region where the city of Seattle, WA is located.


In [2]:
sales = graphlab.SFrame('../Data/kc_house_data.gl/')


Create new features

As in Week 2, we consider features that are transformations of the inputs.


In [3]:
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']

# In the dataset, 'floors' was defined with type string,
# so we'll convert it to float before creating the new feature.
sales['floors'] = sales['floors'].astype(float) 
sales['floors_square'] = sales['floors']*sales['floors']
  • Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
  • On the other hand, taking the square root of sqft_living will decrease the separation between a big house and a small house. The owner may not be exactly twice as happy about getting a house that is twice as big (see the quick numeric check below).
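
To make these two effects concrete, here is a quick numeric check (a minimal sketch in plain Python, separate from the assignment code):

from math import sqrt

# Squaring widens the gap at the high end: 1 bedroom vs. 4 bedrooms.
print 1 ** 2, 4 ** 2          # 1 vs. 16 -- the gap grows from 3 to 15

# The square root compresses the high end: on the transformed scale a
# 2000 sqft house is only ~1.41x a 1000 sqft house instead of 2x.
print sqrt(1000), sqrt(2000)  # ~31.62 vs. ~44.72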

Learn regression weights with L1 penalty

Let us fit a model with all the features available, plus the features we just created above.


In [4]:
all_features = ['bedrooms', 'bedrooms_square',
            'bathrooms',
            'sqft_living', 'sqft_living_sqrt',
            'sqft_lot', 'sqft_lot_sqrt',
            'floors', 'floors_square',
            'waterfront', 'view', 'condition', 'grade',
            'sqft_above',
            'sqft_basement',
            'yr_built', 'yr_renovated']

Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.


In [5]:
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0., l1_penalty=1e10)


PROGRESS: Linear regression:
PROGRESS: --------------------------------------------------------
PROGRESS: Number of examples          : 21613
PROGRESS: Number of features          : 17
PROGRESS: Number of unpacked features : 17
PROGRESS: Number of coefficients    : 18
PROGRESS: Starting Accelerated Gradient (FISTA)
PROGRESS: --------------------------------------------------------
PROGRESS: +-----------+----------+-----------+--------------+--------------------+---------------+
PROGRESS: | Iteration | Passes   | Step size | Elapsed Time | Training-max_error | Training-rmse |
PROGRESS: +-----------+----------+-----------+--------------+--------------------+---------------+
PROGRESS: Tuning step size. First iteration could take longer than subsequent iterations.
PROGRESS: | 1         | 2        | 0.000002  | 2.353153     | 6962915.603493     | 426631.749026 |
PROGRESS: | 2         | 3        | 0.000002  | 2.456793     | 6843144.200219     | 392488.929838 |
PROGRESS: | 3         | 4        | 0.000002  | 2.568695     | 6831900.032123     | 385340.166783 |
PROGRESS: | 4         | 5        | 0.000002  | 2.672958     | 6847166.848958     | 384842.383767 |
PROGRESS: | 5         | 6        | 0.000002  | 2.777240     | 6869667.895833     | 385998.458623 |
PROGRESS: | 6         | 7        | 0.000002  | 2.882314     | 6847177.773672     | 380824.455891 |
PROGRESS: | 10        | 11       | 0.000002  | 3.322628     | 6842123.232651     | 364204.576180 |
PROGRESS: +-----------+----------+-----------+--------------+--------------------+---------------+
PROGRESS: TERMINATED: Iteration limit reached.
PROGRESS: This model may not be optimal. To improve it, consider increasing `max_iterations`.

Find which features had non-zero weight.


In [6]:
model_all_mask = model_all["coefficients"]["value"] > 0.0

In [39]:
model_all["coefficients"][model_all_mask].print_rows(num_rows=20)


+------------------+-------+---------------+
|       name       | index |     value     |
+------------------+-------+---------------+
|   (intercept)    |  None |  274873.05595 |
|    bathrooms     |  None | 8468.53108691 |
|   sqft_living    |  None | 24.4207209824 |
| sqft_living_sqrt |  None | 350.060553386 |
|      grade       |  None | 842.068034898 |
|    sqft_above    |  None | 20.0247224171 |
+------------------+-------+---------------+
[6 rows x 3 columns]

Note that the majority of the weights have been set to zero. So by setting a large enough L1 penalty, we are effectively performing subset selection.
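
As an aside, the mask used above keeps only strictly positive weights. Since LASSO can also drive weights negative, filtering on "nonzero" rather than "greater than zero" is the safer habit. A minimal sketch, assuming the SArray comparison operators behave like the > used above:

# Keep every coefficient whose learned value is nonzero (positive or negative)
model_all_nonzero_mask = model_all["coefficients"]["value"] != 0.0
model_all["coefficients"][model_all_nonzero_mask].print_rows(num_rows=20)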

QUIZ QUESTION: According to this list of weights, which of the features have been chosen?

  • (intercept)
  • bathrooms
  • sqft_living
  • sqft_living_sqrt
  • grade
  • sqft_above

Selecting an L1 penalty

To find a good L1 penalty, we will explore multiple values using a validation set. Let us do a three-way split into train, validation, and test sets:

  • Split our sales data into 2 sets: training and test
  • Further split our training data into two sets: train, validation

Be very careful that you use seed = 1 to ensure you get the same answer!


In [40]:
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate

Next, we write a loop that does the following:

  • For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
    • Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
  • Report which l1_penalty produced the lowest RSS on validation data.

When you call linear_regression.create() make sure you set validation_set = None.

Note: you can turn off the printed output of linear_regression.create() with verbose = False


In [41]:
import numpy as np

In [42]:
def get_rss(model, data, outcome):
    # Predict the target for the given data, then form the residuals
    predictions = model.predict(data)
    residuals = predictions - outcome
    # Residual sum of squares: sum of squared residuals
    rss = (residuals * residuals).sum()
    return rss

In [43]:
l1_rss = {}
# For each candidate L1 penalty, fit on the TRAIN set and record the RSS on the VALIDATION set
for l1 in np.logspace(1, 7, num=13):
    l1_rss[l1] = get_rss(graphlab.linear_regression.create(training, target='price', features=all_features, validation_set=None,
                                                    l2_penalty=0., l1_penalty=l1, verbose=False),
                        validation,
                        validation["price"])
# Report the penalty with the smallest validation RSS
min_value = min(l1_rss.values())
min_key = [key for key, value in l1_rss.iteritems() if value == min_value]
print "l1 value " + str(min_key) + " yielded rss of " + str(min_value)


l1 value [10.0] yielded rss of 6.25766285142e+14

In [44]:
model_best_l1 = graphlab.linear_regression.create(training, target='price', features=all_features, validation_set=None, 
                                                  l2_penalty=0., l1_penalty=10, verbose=False)
rss_best_l1 = get_rss(model_best_l1,testing,testing["price"])
print rss_best_l1


1.56983602382e+14

QUIZ QUESTIONS

  1. What was the best value for the l1_penalty? 10
  2. What is the RSS on TEST data of the model with the best l1_penalty? 1.56983602382e+14

In [45]:
model_best_l1_mask = model_best_l1["coefficients"]["value"] > 0.0
model_best_l1["coefficients"][model_best_l1_mask].print_rows(num_rows=20)


+------------------+-------+------------------+
|       name       | index |      value       |
+------------------+-------+------------------+
|   (intercept)    |  None |  18993.4272128   |
|     bedrooms     |  None |  7936.96767903   |
| bedrooms_square  |  None |  936.993368193   |
|    bathrooms     |  None |  25409.5889341   |
|   sqft_living    |  None |  39.1151363797   |
| sqft_living_sqrt |  None |  1124.65021281   |
|     sqft_lot     |  None | 0.00348361822299 |
|  sqft_lot_sqrt   |  None |  148.258391011   |
|      floors      |  None |   21204.335467   |
|  floors_square   |  None |  12915.5243361   |
|    waterfront    |  None |  601905.594545   |
|       view       |  None |  93312.8573119   |
|    condition     |  None |  6609.03571245   |
|      grade       |  None |  6206.93999188   |
|    sqft_above    |  None |  43.2870534193   |
|  sqft_basement   |  None |  122.367827534   |
|     yr_built     |  None |  9.43363539372   |
|   yr_renovated   |  None |  56.0720034488   |
+------------------+-------+------------------+
[18 rows x 3 columns]


In [49]:
model_best_l1["coefficients"]["value"].nnz()


Out[49]:
18

QUIZ QUESTION: Also, using this value of L1 penalty, how many nonzero weights do you have?

18

Limit the number of nonzero weights

What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive a "rule of thumb", an interpretable model that has only a few features in it.

In this section, you are going to implement a simple, two-phase procedure to achieve this goal:

  1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
  2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.

In [46]:
max_nonzeros = 7

Exploring the larger range of values to find a narrow range with the desired sparsity

Let's define a wide range of possible l1_penalty_values:


In [68]:
l1_penalty_values = np.logspace(8, 10, num=20)

In [69]:
l1_penalty_values


Out[69]:
array([  1.00000000e+08,   1.27427499e+08,   1.62377674e+08,
         2.06913808e+08,   2.63665090e+08,   3.35981829e+08,
         4.28133240e+08,   5.45559478e+08,   6.95192796e+08,
         8.85866790e+08,   1.12883789e+09,   1.43844989e+09,
         1.83298071e+09,   2.33572147e+09,   2.97635144e+09,
         3.79269019e+09,   4.83293024e+09,   6.15848211e+09,
         7.84759970e+09,   1.00000000e+10])

Now, implement a loop that searches through this space of possible l1_penalty values:

  • For l1_penalty in np.logspace(8, 10, num=20):
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
    • Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
      • Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!

In [70]:
l1_penalty_nnz = {}
# For each penalty in the wide grid, fit on the TRAIN set and count the non-zero weights
for l1 in l1_penalty_values:
    model_l1_penalty = graphlab.linear_regression.create(training, target='price', features=all_features, 
                                                         validation_set=None, l2_penalty=0., l1_penalty=l1, verbose=False)
    l1_penalty_nnz[l1] = model_l1_penalty["coefficients"]["value"].nnz()
print l1_penalty_nnz


{100000000.0: 18, 206913808.11147901: 18, 1128837891.6846883: 15, 6158482110.6602545: 3, 1832980710.8324375: 13, 3792690190.7322536: 6, 2976351441.6313133: 10, 127427498.57031322: 18, 545559478.11685145: 17, 1438449888.2876658: 15, 263665089.87303555: 17, 162377673.91887242: 18, 885866790.41008317: 16, 10000000000.0: 1, 335981828.62837881: 17, 7847599703.5146227: 1, 695192796.17755914: 17, 4832930238.5717525: 5, 2335721469.0901213: 12, 428133239.8719396: 17}

In [76]:
from collections import OrderedDict

sorted_l1_penalty_nnz = OrderedDict(sorted(l1_penalty_nnz.items(), key=lambda t: t[0]))

print sorted_l1_penalty_nnz


OrderedDict([(100000000.0, 18), (127427498.57031322, 18), (162377673.91887242, 18), (206913808.11147901, 18), (263665089.87303555, 17), (335981828.62837881, 17), (428133239.8719396, 17), (545559478.11685145, 17), (695192796.17755914, 17), (885866790.41008317, 16), (1128837891.6846883, 15), (1438449888.2876658, 15), (1832980710.8324375, 13), (2335721469.0901213, 12), (2976351441.6313133, 10), (3792690190.7322536, 6), (4832930238.5717525, 5), (6158482110.6602545, 3), (7847599703.5146227, 1), (10000000000.0, 1)])

In [77]:
l1_penalty_min = float('NaN')
l1_penalty_max = float('NaN')

# Walk the penalties in increasing order and find the first point where the
# non-zero count crosses max_nonzeros: the penalty just before the crossing
# becomes l1_penalty_min, the one at the crossing becomes l1_penalty_max.
for i in xrange(1,len(sorted_l1_penalty_nnz)):
    if sorted_l1_penalty_nnz.values()[i-1] >= max_nonzeros and sorted_l1_penalty_nnz.values()[i] <= max_nonzeros:
        l1_penalty_min = sorted_l1_penalty_nnz.keys()[i-1]
        l1_penalty_max = sorted_l1_penalty_nnz.keys()[i]
        break

Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.

More formally, find:

  • The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
    • Store this value in the variable l1_penalty_min (we will use it later)
  • The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
    • Store this value in the variable l1_penalty_max (we will use it later)

Hint: there are many ways to do this, e.g.:

  • Programmatically within the loop above
  • Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries (as in the sketch below).
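
Equivalently, the two endpoints can be read straight off the l1_penalty_nnz dictionary built above; a minimal sketch that follows the definitions just given (for this grid it yields the same values as the loop in In [77]):

# Largest penalty whose model still has MORE than max_nonzeros non-zero weights
l1_penalty_min = max(l1 for l1, nnz in l1_penalty_nnz.iteritems() if nnz > max_nonzeros)
# Smallest penalty whose model has FEWER than max_nonzeros non-zero weights
l1_penalty_max = min(l1 for l1, nnz in l1_penalty_nnz.iteritems() if nnz < max_nonzeros)
print l1_penalty_min, l1_penalty_max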

In [78]:
print l1_penalty_min 
print l1_penalty_max


2976351441.63
3792690190.73

QUIZ QUESTIONS

What values did you find for l1_penalty_min and l1_penalty_max?

2976351441.63
3792690190.73

Exploring the narrow range of values to find the solution with the right number of non-zeros that has the lowest RSS on the validation set

We will now explore the narrow region of l1_penalty values we found:


In [79]:
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
print l1_penalty_values


[  2.97635144e+09   3.01931664e+09   3.06228184e+09   3.10524703e+09
   3.14821223e+09   3.19117743e+09   3.23414263e+09   3.27710782e+09
   3.32007302e+09   3.36303822e+09   3.40600341e+09   3.44896861e+09
   3.49193381e+09   3.53489901e+09   3.57786420e+09   3.62082940e+09
   3.66379460e+09   3.70675980e+09   3.74972499e+09   3.79269019e+09]

  • For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
    • Measure the RSS of the learned model on the VALIDATION set

Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.


In [105]:
l1_penalty_rss = {}
for l1 in l1_penalty_values:
    l1_penalty_model = graphlab.linear_regression.create(training, target='price', features=all_features, validation_set=None,
                                                          l2_penalty=0., l1_penalty=l1, verbose=False)
    # Store a (validation RSS, coefficients) pair for each penalty
    l1_penalty_rss[l1] = (get_rss(l1_penalty_model, validation, validation["price"]), l1_penalty_model["coefficients"])

# Order the penalties by validation RSS, lowest first
sorted_l1_penalty_rss = OrderedDict(sorted(l1_penalty_rss.items(), key=lambda t: t[1][0]))

In [106]:
# Walk the penalties in order of increasing validation RSS and report the first
# one whose model has exactly max_nonzeros non-zero weights.
for item in sorted_l1_penalty_rss.items():
    if item[1][1]["value"].nnz() == max_nonzeros:
        print ("l1", item[0])
        print ("rss", item[1][0])
        l1_penalty_model_mask = item[1][1]["value"] > 0.0
        item[1][1][l1_penalty_model_mask].print_rows(num_rows=20)
        break


('l1', 3448968612.1634369)
('rss', 1046937488751713.4)
+------------------+-------+---------------+
|       name       | index |     value     |
+------------------+-------+---------------+
|   (intercept)    |  None | 222253.192544 |
|     bedrooms     |  None | 661.722717782 |
|    bathrooms     |  None | 15873.9572593 |
|   sqft_living    |  None | 32.4102214513 |
| sqft_living_sqrt |  None | 690.114773313 |
|      grade       |  None | 2899.42026975 |
|    sqft_above    |  None | 30.0115753022 |
+------------------+-------+---------------+
[7 rows x 3 columns]

QUIZ QUESTIONS

  1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
3448968612.1634369
  2. What features in this model have non-zero coefficients?
(intercept)
bedrooms
bathrooms
sqft_living
sqft_living_sqrt
grade
sqft_above
