Regression Week 5: Feature Selection and LASSO (Interpretation)

In this notebook, you will use LASSO to select features, building on a pre-implemented LASSO solver (GraphLab Create here, though you can use other solvers). You will:

  • Run LASSO with different L1 penalties.
  • Choose the best L1 penalty using a validation set.
  • Choose the best L1 penalty using a validation set, with an additional constraint on the size of the subset.

In the second notebook, you will implement your own LASSO solver, using coordinate descent.

Fire up GraphLab Create


In [1]:
import graphlab

Load in house sales data

The dataset is from house sales in King County, the region where the city of Seattle, WA is located.


In [2]:
sales = graphlab.SFrame('kc_house_data.gl/')



Create new features

As in Week 2, we consider features that are transformations of the inputs.


In [3]:
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']

# In the dataset, 'floors' was defined as type string,
# so we convert it to int before creating the new feature.
sales['floors'] = sales['floors'].astype(int) 
sales['floors_square'] = sales['floors']*sales['floors']
  • Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4), since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
  • On the other hand, taking the square root of sqft_living will decrease the separation between a big house and a small house. The owner may not be exactly twice as happy about getting a house that is twice as big.
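
As a quick numeric check of both claims (a standalone sketch in plain Python, independent of the notebook state):

from math import sqrt

# Squaring stretches the high end: the 1-to-4 bedroom gap becomes 1-to-16.
print [x ** 2 for x in [1, 2, 3, 4]]   # [1, 4, 9, 16]

# The square root compresses it: doubling sqft_living from 1000 to 2000
# grows the transformed feature by only ~41%.
print sqrt(1000), sqrt(2000)           # ~31.62, ~44.72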

Learn regression weights with L1 penalty

Let us fit a model with all the features available, plus the features we just created above.


In [112]:
all_features = ['bedrooms', 'bedrooms_square',
            'bathrooms',
            'sqft_living', 'sqft_living_sqrt',
            'sqft_lot', 'sqft_lot_sqrt',
            'floors', 'floors_square',
            'waterfront', 'view', 'condition', 'grade',
            'sqft_above',
            'sqft_basement',
            'yr_built', 'yr_renovated']
len(all_features)


Out[112]:
17

Applying an L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.


In [7]:
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0., l1_penalty=1e10, verbose=False)

Find which features had non-zero weight.


In [10]:
model_all.get('coefficients').print_rows(num_rows=len(all_features) + 1)


+------------------+-------+---------------+
|       name       | index |     value     |
+------------------+-------+---------------+
|   (intercept)    |  None |  274952.62044 |
|     bedrooms     |  None |      0.0      |
| bedrooms_square  |  None |      0.0      |
|    bathrooms     |  None | 8483.95148798 |
|   sqft_living    |  None | 24.4238022551 |
| sqft_living_sqrt |  None | 351.097833343 |
|     sqft_lot     |  None |      0.0      |
|  sqft_lot_sqrt   |  None |      0.0      |
|      floors      |  None |      0.0      |
|  floors_square   |  None |      0.0      |
|    waterfront    |  None |      0.0      |
|       view       |  None |      0.0      |
|    condition     |  None |      0.0      |
|      grade       |  None | 850.427363977 |
|    sqft_above    |  None | 20.0777654516 |
|  sqft_basement   |  None |      0.0      |
|     yr_built     |  None |      0.0      |
|   yr_renovated   |  None |      0.0      |
+------------------+-------+---------------+
[18 rows x 3 columns]

Note that a majority of the weights have been set to zero. So by setting an L1 penalty that is large enough, we are performing subset selection.

QUIZ QUESTION: According to this list of weights, which of the features have been chosen?
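
To read the chosen features off programmatically, you can filter the coefficients SFrame for non-zero weights. A minimal sketch (the helper name print_nonzero_features is ours, not part of the assignment):

def print_nonzero_features(model):
    # Keep only the rows of the coefficients SFrame with a non-zero weight.
    coefficients = model.get('coefficients')
    print coefficients[coefficients['value'] != 0.0]['name']

print_nonzero_features(model_all)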

Selecting an L1 penalty

To find a good L1 penalty, we will explore multiple values using a validation set. Let us do a three-way split into train, validation, and test sets:

  • Split our sales data into 2 sets: training and test
  • Further split our training data into two sets: train, validation

Be very careful that you use seed = 1 to ensure you get the same answer!


In [89]:
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate

Next, we write a loop that does the following:

  • For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
    • Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
  • Report which l1_penalty produced the lowest RSS on validation data.

When you call linear_regression.create() make sure you set validation_set = None.

Note: you can turn off the print out of linear_regression.create() with verbose = False
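
Computing the RSS row by row in a Python loop (as done in the cell below) works, but is slow on SArrays. A vectorized sketch using elementwise SArray arithmetic (the helper name get_rss is ours):

def get_rss(model, data, target='price'):
    # Elementwise SArray subtraction, squaring, and summing; no Python loop needed.
    errors = model.predict(data) - data[target]
    return (errors * errors).sum()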


In [114]:
import numpy as np
l1_penalty_list = np.logspace(1, 7, num=13)
sort_table = []

for l1_penalty in l1_penalty_list:
    model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0, l1_penalty=l1_penalty, verbose=False)
    predictions = model.predict(validation)
    RSS = sum([(predictions[i] - validation[i]['price']) ** 2 for i in range(len(predictions))])
    print l1_penalty, RSS
    sort_table.append((RSS, l1_penalty))
    
print sorted(sort_table)[0]


10.0 6.28412158085e+14
31.6227766017 6.28412158283e+14
100.0 6.28412158907e+14
316.227766017 6.28412160882e+14
1000.0 6.28412167128e+14
3162.27766017 6.2841218688e+14
10000.0 6.28412249343e+14
31622.7766017 6.28412446908e+14
100000.0 6.2841307204e+14
316227.766017 6.28415052663e+14
1000000.0 6.2842135376e+14
3162277.66017 6.28441657748e+14
10000000.0 6.28509646259e+14
(628412158085185.4, 10.0)

QUIZ QUESTIONS

  1. What was the best value for the l1_penalty?
  2. What is the RSS on TEST data of the model with the best l1_penalty?

In [129]:
best_model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0, l1_penalty=10, verbose=False)
best_model.get('coefficients').print_rows(num_rows=len(all_features) + 1)


+------------------+-------+------------------+
|       name       | index |      value       |
+------------------+-------+------------------+
|   (intercept)    |  None |  19772.5826717   |
|     bedrooms     |  None |  8163.51995199   |
| bedrooms_square  |  None |  963.286207169   |
|    bathrooms     |  None |  25578.4297787   |
|   sqft_living    |  None |  39.3451522891   |
| sqft_living_sqrt |  None |  1139.31384706   |
|     sqft_lot     |  None | 0.00596774968784 |
|  sqft_lot_sqrt   |  None |  151.205354426   |
|      floors      |  None |  20205.4712003   |
|  floors_square   |  None |  11652.1081691   |
|    waterfront    |  None |  602998.684348   |
|       view       |  None |  93445.2338191   |
|    condition     |  None |   6883.581066    |
|      grade       |  None |  6290.22186917   |
|    sqft_above    |  None |  43.6198492433   |
|  sqft_basement   |  None |  122.001332807   |
|     yr_built     |  None |  9.80274156192   |
|   yr_renovated   |  None |  57.3884930807   |
+------------------+-------+------------------+
[18 rows x 3 columns]


In [130]:
predictions = best_model.predict(testing)
RSS = sum([(predictions[i] - testing[i]['price']) ** 2 for i in range(len(predictions))])
print RSS


1.57678098725e+14

In [132]:
best_model['coefficients']['value'].nnz()


Out[132]:
18

QUIZ QUESTION: Using this value of L1 penalty, how many nonzero weights do you have?

Limit the number of nonzero weights

What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb": an interpretable model that has only a few features in it.

In this section, you are going to implement a simple, two-phase procedure to achieve this goal:

  1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
  2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.

In [115]:
max_nonzeros = 7

Exploring the larger range of values to find a narrow range with the desired sparsity

Let's define a wide range of possible l1_penalty_values:


In [116]:
l1_penalty_values = np.logspace(8, 10, num=20)

Now, implement a loop that searches through this space of possible l1_penalty values:

  • For l1_penalty in np.logspace(8, 10, num=20):
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
    • Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
      • Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!

In [117]:
info = []
for l1_penalty in l1_penalty_values:
    model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0, l1_penalty=l1_penalty, verbose=False)
    nnz = model['coefficients']['value'].nnz()
    info.append((l1_penalty, nnz))

In [120]:
for x in enumerate(info):
    print x


(0, (100000000.0, 18))
(1, (127427498.57031322, 18))
(2, (162377673.91887242, 18))
(3, (206913808.11147901, 18))
(4, (263665089.87303555, 17))
(5, (335981828.62837881, 17))
(6, (428133239.8719396, 17))
(7, (545559478.11685145, 17))
(8, (695192796.17755914, 17))
(9, (885866790.41008317, 16))
(10, (1128837891.6846883, 15))
(11, (1438449888.2876658, 15))
(12, (1832980710.8324375, 13))
(13, (2335721469.0901213, 11))
(14, (2976351441.6313128, 10))
(15, (3792690190.7322536, 6))
(16, (4832930238.5717525, 5))
(17, (6158482110.6602545, 3))
(18, (7847599703.5146227, 1))
(19, (10000000000.0, 1))

Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At the low end we will have l1_penalty values that give too many non-zeros, and at the high end, l1_penalty values that give too few.

More formally, find:

  • The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
    • Store this value in the variable l1_penalty_min (we will use it later)
  • The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
    • Store this value in the variable l1_penalty_max (we will use it later)

Hint: there are many ways to do this, e.g.:

  • Programmatically within the loop above (see the sketch after this list)
  • Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
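
Taking the programmatic route, a sketch that derives both boundaries from the (l1_penalty, nnz) pairs collected in info above:

# Largest penalty that still leaves MORE than max_nonzeros non-zero weights...
l1_penalty_min = max(p for (p, nnz) in info if nnz > max_nonzeros)
# ...and smallest penalty that already leaves FEWER than max_nonzeros.
l1_penalty_max = min(p for (p, nnz) in info if nnz < max_nonzeros)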

In [122]:
l1_penalty_min = l1_penalty_values[14]
l1_penalty_max = l1_penalty_values[15]
print l1_penalty_min, l1_penalty_max


2976351441.63 3792690190.73

QUIZ QUESTIONS

What values did you find for l1_penalty_min and l1_penalty_max?

Exploring the narrow range of values to find the solution with the right number of non-zeros that has the lowest RSS on the validation set

We will now explore the narrow region of l1_penalty values we found:


In [124]:
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
print l1_penalty_values


[  2.97635144e+09   3.01931664e+09   3.06228184e+09   3.10524703e+09
   3.14821223e+09   3.19117743e+09   3.23414263e+09   3.27710782e+09
   3.32007302e+09   3.36303822e+09   3.40600341e+09   3.44896861e+09
   3.49193381e+09   3.53489901e+09   3.57786420e+09   3.62082940e+09
   3.66379460e+09   3.70675980e+09   3.74972499e+09   3.79269019e+09]

  • For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
    • Fit a regression model with the given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None.
    • Measure the RSS of the learned model on the VALIDATION set.

Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.


In [126]:
sort_table = []

for l1_penalty in l1_penalty_values:
    model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0, l1_penalty=l1_penalty, verbose=False)
    nnz = model['coefficients']['value'].nnz()
    if nnz != max_nonzeros:  # keep only models with exactly the target sparsity
        continue
    predictions = model.predict(validation)
    RSS = sum([(predictions[i] - validation[i]['price']) ** 2 for i in range(len(predictions))])
    print l1_penalty, RSS
    sort_table.append((RSS, l1_penalty))
    
print sorted(sort_table)[0]


3320073020.2 1.0299794723e+15
3363038217.52 1.03293901937e+15
3406003414.84 1.03646285326e+15
3448968612.16 1.0401993584e+15
3491933809.48 1.04511992951e+15
3534899006.81 1.05027158412e+15
3577864204.13 1.05541689503e+15
3620829401.45 1.06058634457e+15
(1029979472296640.8, 3320073020.20013)

QUIZ QUESTIONS

  1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
  2. What features in this model have non-zero coefficients?

In [128]:
best_model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0, l1_penalty=3320073020.20013, verbose=False)
best_model.get('coefficients').print_rows(num_rows=len(all_features) + 1)


+------------------+-------+---------------+
|       name       | index |     value     |
+------------------+-------+---------------+
|   (intercept)    |  None | 215675.924417 |
|     bedrooms     |  None | 1228.59582293 |
| bedrooms_square  |  None |      0.0      |
|    bathrooms     |  None | 16669.4449433 |
|   sqft_living    |  None | 33.0891011485 |
| sqft_living_sqrt |  None |  728.74471267 |
|     sqft_lot     |  None |      0.0      |
|  sqft_lot_sqrt   |  None |      0.0      |
|      floors      |  None |      0.0      |
|  floors_square   |  None |      0.0      |
|    waterfront    |  None |      0.0      |
|       view       |  None |      0.0      |
|    condition     |  None |      0.0      |
|      grade       |  None | 3155.81538376 |
|    sqft_above    |  None |  31.080485027 |
|  sqft_basement   |  None |      0.0      |
|     yr_built     |  None |      0.0      |
|   yr_renovated   |  None |      0.0      |
+------------------+-------+---------------+
[18 rows x 3 columns]
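
To answer the second question, the non-zero filter from the earlier sketch applies directly (assuming print_nonzero_features defined above):

print_nonzero_features(best_model)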