Regression Week 5: Feature Selection and LASSO (Interpretation)

In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:

  • Run LASSO with different L1 penalties.
  • Choose the best L1 penalty using a validation set.
  • Choose the best L1 penalty using a validation set, with an additional constraint on the size of the subset.

In the second notebook, you will implement your own LASSO solver, using coordinate descent.

Fire up graphlab create


In [1]:
import graphlab

Load in house sales data

The dataset contains house sales in King County, the region where the city of Seattle, WA is located.


In [2]:
sales = graphlab.SFrame('kc_house_data.gl/')



Create new features

As in Week 2, we consider features that are transformations of the raw inputs.


In [3]:
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']

# In the dataset, 'floors' was defined with type string,
# so we convert it to float before creating the new feature.
sales['floors'] = sales['floors'].astype(float) 
sales['floors_square'] = sales['floors']*sales['floors']
  • Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4), since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
  • On the other hand, taking the square root of sqft_living will decrease the separation between a big house and a small house. The owner may not be exactly twice as happy for getting a house that is twice as big. (The quick check below makes this concrete.)
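
To make the effect of these two transformations concrete, here is a quick numeric check (the square-footage values are illustrative, not taken from the dataset):


In [ ]:
from math import sqrt
print 1 ** 2, 4 ** 2              # bedrooms: 1 vs 16, squaring widens the gap
print sqrt(1180.0), sqrt(2360.0)  # ~34.4 vs ~48.6, doubling sqft does not double the feature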

Learn regression weights with L1 penalty

Let us fit a model with all the features available, plus the features we just created above.


In [4]:
all_features = ['bedrooms', 'bedrooms_square',
            'bathrooms',
            'sqft_living', 'sqft_living_sqrt',
            'sqft_lot', 'sqft_lot_sqrt',
            'floors', 'floors_square',
            'waterfront', 'view', 'condition', 'grade',
            'sqft_above',
            'sqft_basement',
            'yr_built', 'yr_renovated']

Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.


In [5]:
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0., l1_penalty=1e10)


PROGRESS: Linear regression:
PROGRESS: --------------------------------------------------------
PROGRESS: Number of examples          : 21613
PROGRESS: Number of features          : 17
PROGRESS: Number of unpacked features : 17
PROGRESS: Number of coefficients    : 18
PROGRESS: Starting Accelerated Gradient (FISTA)
PROGRESS: --------------------------------------------------------
PROGRESS: +-----------+----------+-----------+--------------+--------------------+---------------+
PROGRESS: | Iteration | Passes   | Step size | Elapsed Time | Training-max_error | Training-rmse |
PROGRESS: +-----------+----------+-----------+--------------+--------------------+---------------+
PROGRESS: Tuning step size. First iteration could take longer than subsequent iterations.
PROGRESS: | 1         | 2        | 0.000002  | 1.347939     | 6962915.603493     | 426631.749026 |
PROGRESS: | 2         | 3        | 0.000002  | 1.379063     | 6843144.200219     | 392488.929838 |
PROGRESS: | 3         | 4        | 0.000002  | 1.406895     | 6831900.032123     | 385340.166783 |
PROGRESS: | 4         | 5        | 0.000002  | 1.436134     | 6847166.848958     | 384842.383767 |
PROGRESS: | 5         | 6        | 0.000002  | 1.464988     | 6869667.895833     | 385998.458623 |
PROGRESS: | 6         | 7        | 0.000002  | 1.493188     | 6847177.773672     | 380824.455891 |
PROGRESS: +-----------+----------+-----------+--------------+--------------------+---------------+
PROGRESS: TERMINATED: Iteration limit reached.
PROGRESS: This model may not be optimal. To improve it, consider increasing `max_iterations`.

Find which features had non-zero weight.


In [20]:
model_all.coefficients.sort('value', ascending = False).print_rows(num_rows = 50)


+------------------+-------+---------------+
|       name       | index |     value     |
+------------------+-------+---------------+
|   (intercept)    |  None |  274873.05595 |
|    bathrooms     |  None | 8468.53108691 |
|      grade       |  None | 842.068034898 |
| sqft_living_sqrt |  None | 350.060553386 |
|   sqft_living    |  None | 24.4207209824 |
|    sqft_above    |  None | 20.0247224171 |
|     sqft_lot     |  None |      0.0      |
|  sqft_lot_sqrt   |  None |      0.0      |
|      floors      |  None |      0.0      |
|  floors_square   |  None |      0.0      |
|    waterfront    |  None |      0.0      |
|       view       |  None |      0.0      |
|    condition     |  None |      0.0      |
| bedrooms_square  |  None |      0.0      |
|     bedrooms     |  None |      0.0      |
|  sqft_basement   |  None |      0.0      |
|     yr_built     |  None |      0.0      |
|   yr_renovated   |  None |      0.0      |
+------------------+-------+---------------+
[18 rows x 3 columns]

Note that a majority of the weights have been set to zero. So by setting a large enough L1 penalty, we are performing subset selection.

QUIZ QUESTION: According to this list of weights, which of the features have been chosen?
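
Rather than reading the table by eye, the selected features can also be pulled out programmatically. A minimal sketch, assuming SFrame boolean filtering as in GraphLab Create 1.x:


In [ ]:
# Keep only the rows of the coefficient table whose learned weight is nonzero
coef = model_all['coefficients']
coef[coef['value'] != 0.0].print_rows(num_rows=20)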

Selecting an L1 penalty

To find a good L1 penalty, we will explore multiple values using a validation set. Let us do a three-way split into train, validation, and test sets:

  • Split our sales data into 2 sets: training and test
  • Further split our training data into two sets: train, validation

Be very careful that you use seed = 1 to ensure you get the same answer!


In [21]:
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate

Next, we write a loop that does the following:

  • For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
    • Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
  • Report which l1_penalty produced the lowest RSS on validation data.

When you call linear_regression.create() make sure you set validation_set = None.

Note: you can turn off the printout of linear_regression.create() with verbose=False.
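
A small helper can keep the RSS computation readable inside the loop. This is a sketch of our own convenience function (the name get_rss is ours, not part of the assignment):


In [ ]:
def get_rss(model, data, outcome):
    predictions = model.predict(data)     # SArray of predicted prices
    residuals = outcome - predictions     # per-house errors
    return (residuals * residuals).sum()  # residual sum of squares

# e.g. get_rss(model, validation, validation['price'])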


In [43]:
import numpy as np
import sys

def choose_l1(l1s, nnz_threshold = -1):
    # For each L1 penalty: fit on TRAIN, measure RSS on VALIDATION, and keep
    # the model with the lowest validation RSS. If nnz_threshold > 0, only
    # accept models whose number of nonzero coefficients equals the threshold.
    best_model = None
    best_l1 = None
    best_rss = sys.maxint
    min_nnz = sys.maxint
    for l1 in l1s:
        model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0., l1_penalty=l1, verbose = False)
        validation['pred'] = model.predict(validation)
        testing['pred'] = model.predict(testing)
        this_rss = validation.apply(lambda x : (x['price'] - x['pred']) ** 2).sum()
        test_rss = testing.apply(lambda x : (x['price'] - x['pred']) ** 2).sum()
        nnz_num = model['coefficients']['value'].nnz()
        if this_rss < best_rss and (nnz_threshold <= 0 or nnz_num == nnz_threshold):
            best_rss = this_rss
            best_model = model
            best_l1 = l1
            print best_rss, test_rss, l1
            model.coefficients.sort('value', ascending = False).print_rows(num_rows = 50)

        if nnz_num < min_nnz:
            min_nnz = nnz_num
    return best_model, best_l1, min_nnz


6.25766285142e+14 1.56983602382e+14 10.0
+------------------+-------+------------------+
|       name       | index |      value       |
+------------------+-------+------------------+
|    waterfront    |  None |  601905.594545   |
|       view       |  None |  93312.8573119   |
|    bathrooms     |  None |  25409.5889341   |
|      floors      |  None |   21204.335467   |
|   (intercept)    |  None |  18993.4272128   |
|  floors_square   |  None |  12915.5243361   |
|     bedrooms     |  None |  7936.96767903   |
|    condition     |  None |  6609.03571245   |
|      grade       |  None |  6206.93999188   |
| sqft_living_sqrt |  None |  1124.65021281   |
| bedrooms_square  |  None |  936.993368193   |
|  sqft_lot_sqrt   |  None |  148.258391011   |
|  sqft_basement   |  None |  122.367827534   |
|   yr_renovated   |  None |  56.0720034488   |
|    sqft_above    |  None |  43.2870534193   |
|   sqft_living    |  None |  39.1151363797   |
|     yr_built     |  None |  9.43363539372   |
|     sqft_lot     |  None | 0.00348361822299 |
+------------------+-------+------------------+
[18 rows x 3 columns]

Out[43]:
18

In [ ]:
model, l1, mnn = choose_l1(np.logspace(1, 7, num=13))
mnn

QUIZ QUESTIONS

  1. What was the best value for the l1_penalty?
  2. What is the RSS on TEST data of the model with the best l1_penalty?

In [ ]:

QUIZ QUESTION: Also, using this value of L1 penalty, how many nonzero weights do you have?


In [ ]:

Limit the number of nonzero weights

What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive a "rule of thumb": an interpretable model that has only a few features in it.

In this section, you are going to implement a simple, two-phase procedure to achieve this goal:

  1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
  2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.

In [31]:
max_nonzeros = 7

Exploring the larger range of values to find a narrow range with the desired sparsity

Let's define a wide range of possible l1_penalty_values:


In [32]:
l1_penalty_values = np.logspace(8, 10, num=20)

Now, implement a loop that searches through this space of possible l1_penalty values (a direct sketch follows this list):

  • For l1_penalty in np.logspace(8, 10, num=20):
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
    • Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
      • Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
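
A direct version of that loop might look like the sketch below; it only records the nonzero counts, while the cells that follow reuse the choose_l1 helper defined earlier:


In [ ]:
nnz_counts = []
for l1_penalty in np.logspace(8, 10, num=20):
    m = graphlab.linear_regression.create(training, target='price',
                                          features=all_features,
                                          validation_set=None,
                                          l2_penalty=0., l1_penalty=l1_penalty,
                                          verbose=False)
    # .nnz() counts the nonzero entries of the coefficient SArray
    nnz_counts.append((l1_penalty, m['coefficients']['value'].nnz()))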

In [37]:
this_model, this_l1, this_mnn = choose_l1(l1_penalty_values)
this_mnn


6.27492659875e+14 1.56998193279e+14 100000000.0
+------------------+-------+------------------+
|       name       | index |      value       |
+------------------+-------+------------------+
|    waterfront    |  None |  568204.644584   |
|       view       |  None |  91066.9428088   |
|    bathrooms     |  None |  25234.2091945   |
|   (intercept)    |  None |  25090.9173672   |
|      floors      |  None |  20695.3592396   |
|  floors_square   |  None |  12466.6906503   |
|     bedrooms     |  None |   7789.1770611   |
|    condition     |  None |  6360.78092625   |
|      grade       |  None |  6139.21280565   |
| sqft_living_sqrt |  None |  1117.31189557   |
| bedrooms_square  |  None |  847.559686943   |
|  sqft_lot_sqrt   |  None |   143.98899197   |
|  sqft_basement   |  None |  118.945874954   |
|   yr_renovated   |  None |  48.6154673093   |
|    sqft_above    |  None |  43.0358299246   |
|   sqft_living    |  None |  39.0394459636   |
|     yr_built     |  None |  9.04040165402   |
|     sqft_lot     |  None | -0.0256861182399 |
+------------------+-------+------------------+
[18 rows x 3 columns]

Out[37]:
1

In [35]:
this_model['coefficients']['value'].nnz()


Out[35]:
18

Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.

More formally, find:

  • The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
    • Store this value in the variable l1_penalty_min (we will use it later)
  • The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
    • Store this value in the variable l1_penalty_max (we will use it later)

Hint: there are many ways to do this, e.g.:

  • Programmatically within the loop above
  • Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries (as in the sketch after this list).
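
For example, given the nnz_counts list from the sketch above, both boundaries fall out of two one-liners (choose_largest_l1 below computes the same thing inside its loop):


In [ ]:
l1_penalty_min = max(l1 for l1, nnz in nnz_counts if nnz > max_nonzeros)
l1_penalty_max = min(l1 for l1, nnz in nnz_counts if nnz < max_nonzeros)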

In [40]:
def choose_largest_l1(l1s):
    # Scan the penalties to bracket the target sparsity:
    #   l1_penalty_min = largest penalty with MORE than max_nonzeros nonzeros
    #   l1_penalty_max = smallest penalty with FEWER than max_nonzeros nonzeros
    best_model = None
    best_rss = sys.maxint
    l1_penalty_min = -sys.maxint
    l1_penalty_max = sys.maxint
    for l1 in l1s:
        model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0., l1_penalty=l1, verbose = False)
        validation['pred'] = model.predict(validation)
        testing['pred'] = model.predict(testing)
        this_rss = validation.apply(lambda x : (x['price'] - x['pred']) ** 2).sum()
        test_rss = testing.apply(lambda x : (x['price'] - x['pred']) ** 2).sum()
        nnz_num = model['coefficients']['value'].nnz()
        if this_rss < best_rss:
            best_rss = this_rss
            best_model = model
            print best_rss, test_rss, l1
            model.coefficients.sort('value', ascending = False).print_rows(num_rows = 50)

        if nnz_num < max_nonzeros and l1 < l1_penalty_max:
            l1_penalty_max = l1

        if nnz_num > max_nonzeros and l1 > l1_penalty_min:
            l1_penalty_min = l1

    return l1_penalty_min, l1_penalty_max

l1_penalty_min, l1_penalty_max = choose_largest_l1(l1_penalty_values)


6.27492659875e+14 1.56998193279e+14 100000000.0
+------------------+-------+------------------+
|       name       | index |      value       |
+------------------+-------+------------------+
|    waterfront    |  None |  568204.644584   |
|       view       |  None |  91066.9428088   |
|    bathrooms     |  None |  25234.2091945   |
|   (intercept)    |  None |  25090.9173672   |
|      floors      |  None |  20695.3592396   |
|  floors_square   |  None |  12466.6906503   |
|     bedrooms     |  None |   7789.1770611   |
|    condition     |  None |  6360.78092625   |
|      grade       |  None |  6139.21280565   |
| sqft_living_sqrt |  None |  1117.31189557   |
| bedrooms_square  |  None |  847.559686943   |
|  sqft_lot_sqrt   |  None |   143.98899197   |
|  sqft_basement   |  None |  118.945874954   |
|   yr_renovated   |  None |  48.6154673093   |
|    sqft_above    |  None |  43.0358299246   |
|   sqft_living    |  None |  39.0394459636   |
|     yr_built     |  None |  9.04040165402   |
|     sqft_lot     |  None | -0.0256861182399 |
+------------------+-------+------------------+
[18 rows x 3 columns]


In [41]:
l1_penalty_min, l1_penalty_max


Out[41]:
(2976351441.6313133, 3792690190.7322536)

QUIZ QUESTIONS

What values did you find for l1_penalty_min and l1_penalty_max?

Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set

We will now explore the narrow region of l1_penalty values we found:


In [45]:
l1_penalty_values = np.linspace(l1_penalty_min, l1_penalty_max, 20)

  • For l1_penalty in np.linspace(l1_penalty_min, l1_penalty_max, 20):
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
    • Measure the RSS of the learned model on the VALIDATION set

Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.


In [46]:
model_new, l1_new, nnz_new = choose_l1(l1_penalty_values, max_nonzeros)
l1_new, nnz_new


1.04693748875e+15 2.4304885073e+14 3448968612.16
+------------------+-------+---------------+
|       name       | index |     value     |
+------------------+-------+---------------+
|   (intercept)    |  None | 222253.192544 |
|    bathrooms     |  None | 15873.9572593 |
|      grade       |  None | 2899.42026975 |
| sqft_living_sqrt |  None | 690.114773313 |
|     bedrooms     |  None | 661.722717782 |
|   sqft_living    |  None | 32.4102214513 |
|    sqft_above    |  None | 30.0115753022 |
|  sqft_lot_sqrt   |  None |      0.0      |
|      floors      |  None |      0.0      |
|  floors_square   |  None |      0.0      |
|    waterfront    |  None |      0.0      |
|       view       |  None |      0.0      |
|    condition     |  None |      0.0      |
|     sqft_lot     |  None |      0.0      |
| bedrooms_square  |  None |      0.0      |
|  sqft_basement   |  None |      0.0      |
|     yr_built     |  None |      0.0      |
|   yr_renovated   |  None |      0.0      |
+------------------+-------+---------------+
[18 rows x 3 columns]

Out[46]:
(3448968612.16, 6)

QUIZ QUESTIONS

  1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
  2. What features in this model have non-zero coefficients?

In [47]:
l1_penalty_values


Out[47]:
array([  2.97635144e+09,   3.01931664e+09,   3.06228184e+09,
         3.10524703e+09,   3.14821223e+09,   3.19117743e+09,
         3.23414263e+09,   3.27710782e+09,   3.32007302e+09,
         3.36303822e+09,   3.40600341e+09,   3.44896861e+09,
         3.49193381e+09,   3.53489901e+09,   3.57786420e+09,
         3.62082940e+09,   3.66379460e+09,   3.70675980e+09,
         3.74972499e+09,   3.79269019e+09])