Regression Week 5: Feature Selection and LASSO (Interpretation)

In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:

  • Run LASSO with different L1 penalties.
  • Choose the best L1 penalty using a validation set.
  • Choose the best L1 penalty using a validation set, with an additional constraint on the size of the subset.

In the second notebook, you will implement your own LASSO solver, using coordinate descent.

Fire up graphlab create


In [1]:
import graphlab

In [2]:
import numpy as np

Load in house sales data

The dataset is from house sales in King County, the region where the city of Seattle, WA is located.


In [3]:
sales = graphlab.SFrame('kc_house_data.gl/')



Create new features

As in Week 2, we consider features that are some transformations of inputs.


In [4]:
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']

# In the dataset, 'floors' was defined with type string,
# so we'll convert it to float before creating a new feature.
sales['floors'] = sales['floors'].astype(float) 
sales['floors_square'] = sales['floors']*sales['floors']
  • Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
  • On the other hand, taking the square root of sqft_living will decrease the separation between a big house and a small house. The owner may not be exactly twice as happy for getting a house that is twice as big.

Learn regression weights with L1 penalty

Let us fit a model with all the features available, plus the features we just created above.


In [5]:
all_features = ['bedrooms', 'bedrooms_square',
                'bathrooms',
                'sqft_living', 'sqft_living_sqrt',
                'sqft_lot', 'sqft_lot_sqrt',
                'floors', 'floors_square',
                'waterfront', 'view', 'condition', 'grade',
                'sqft_above',
                'sqft_basement',
                'yr_built', 'yr_renovated']

Applying an L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.


In [6]:
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
                                              validation_set=None, 
                                              l2_penalty=0., l1_penalty=1e10)


Linear regression:
--------------------------------------------------------
Number of examples          : 21613
Number of features          : 17
Number of unpacked features : 17
Number of coefficients    : 18
Starting Accelerated Gradient (FISTA)
--------------------------------------------------------
+-----------+----------+-----------+--------------+--------------------+---------------+
| Iteration | Passes   | Step size | Elapsed Time | Training-max_error | Training-rmse |
+-----------+----------+-----------+--------------+--------------------+---------------+
Tuning step size. First iteration could take longer than subsequent iterations.
| 1         | 2        | 0.000002  | 1.338025     | 6962915.603493     | 426631.749026 |
| 2         | 3        | 0.000002  | 1.365842     | 6843144.200219     | 392488.929838 |
| 3         | 4        | 0.000002  | 1.397417     | 6831900.032123     | 385340.166783 |
| 4         | 5        | 0.000002  | 1.429134     | 6847166.848958     | 384842.383767 |
| 5         | 6        | 0.000002  | 1.460555     | 6869667.895833     | 385998.458623 |
| 6         | 7        | 0.000002  | 1.487085     | 6847177.773672     | 380824.455891 |
+-----------+----------+-----------+--------------+--------------------+---------------+
TERMINATED: Iteration limit reached.
This model may not be optimal. To improve it, consider increasing `max_iterations`.

Find what features had non-zero weight.


In [7]:
model_all.get('coefficients').print_rows(20)


+------------------+-------+---------------+--------+
|       name       | index |     value     | stderr |
+------------------+-------+---------------+--------+
|   (intercept)    |  None |  274873.05595 |  None  |
|     bedrooms     |  None |      0.0      |  None  |
| bedrooms_square  |  None |      0.0      |  None  |
|    bathrooms     |  None | 8468.53108691 |  None  |
|   sqft_living    |  None | 24.4207209824 |  None  |
| sqft_living_sqrt |  None | 350.060553386 |  None  |
|     sqft_lot     |  None |      0.0      |  None  |
|  sqft_lot_sqrt   |  None |      0.0      |  None  |
|      floors      |  None |      0.0      |  None  |
|  floors_square   |  None |      0.0      |  None  |
|    waterfront    |  None |      0.0      |  None  |
|       view       |  None |      0.0      |  None  |
|    condition     |  None |      0.0      |  None  |
|      grade       |  None | 842.068034898 |  None  |
|    sqft_above    |  None | 20.0247224171 |  None  |
|  sqft_basement   |  None |      0.0      |  None  |
|     yr_built     |  None |      0.0      |  None  |
|   yr_renovated   |  None |      0.0      |  None  |
+------------------+-------+---------------+--------+
[18 rows x 4 columns]

Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing subset selection.

QUIZ QUESTION: According to this list of weights, which of the features have been chosen?
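
One way to read the selected features off programmatically, rather than from the table above, is sketched below; it uses the SFrame logical-filter indexing and is not part of the original assignment:

In [ ]:
# Sketch: list the features whose learned weight is non-zero.
coefficients = model_all.get('coefficients')
nonzero_rows = coefficients[coefficients['value'] != 0.0]  # logical filter on the SFrame
print nonzero_rows['name']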

Selecting an L1 penalty

To find a good L1 penalty, we will explore multiple values using a validation set. Let us do a three-way split into train, validation, and test sets:

  • Split our sales data into 2 sets: training and test
  • Further split our training data into two sets: train, validation

Be very careful that you use seed = 1 to ensure you get the same answer!


In [8]:
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate

Next, we write a loop that does the following:

  • For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
    • Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
  • Report which l1_penalty produced the lowest RSS on validation data.

When you call linear_regression.create() make sure you set validation_set = None.

Note: you can turn off the printout of linear_regression.create() with verbose = False.


In [9]:
l1_all = np.logspace(1,7,num=13)

In [10]:
def find_best_lasso_0(l1_penalties, training, validation, features, target):
    # Fit a LASSO model for each candidate penalty and keep the one with
    # the lowest RSS on the validation set.
    the_model = None
    the_RSS = float('inf')
    for l1 in l1_penalties:
        model = graphlab.linear_regression.create(training, l1_penalty=l1, l2_penalty=0.,
                                                  verbose=False, validation_set=None,
                                                  target=target,
                                                  features=features)
        predictions = model.predict(validation)
        errors = predictions - validation[target]
        RSS = np.dot(errors, errors)
        if RSS < the_RSS:
            the_RSS = RSS
            the_model = model
    return the_model

In [11]:
best_model = find_best_lasso_0(l1_all, training, validation, all_features, 'price')

QUIZ QUESTIONS

  1. What was the best value for the l1_penalty?
  2. What is the RSS on TEST data of the model with the best l1_penalty?
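
find_best_lasso_0 returns only the winning model, not the penalty that produced it. A minimal sketch (not part of the original solution) that re-runs the same search while also tracking the best l1_penalty, which answers quiz question 1, could look like this:

In [ ]:
# Sketch: repeat the validation search, keeping the winning penalty as well.
best_l1 = None
best_RSS = float('inf')
for l1 in l1_all:
    model = graphlab.linear_regression.create(training, l1_penalty=l1, l2_penalty=0.,
                                              verbose=False, validation_set=None,
                                              target='price', features=all_features)
    errors = model.predict(validation) - validation['price']
    RSS = np.dot(errors, errors)
    if RSS < best_RSS:
        best_RSS = RSS
        best_l1 = l1
print best_l1, best_RSS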

In [12]:
predictions = best_model.predict(testing)
errors = predictions - testing['price']
RSS = np.dot(errors, errors)
print RSS


1.56983602382e+14

QUIZ QUESTION: Also, using this value of L1 penalty, how many nonzero weights do you have?


In [13]:
best_model.get('coefficients')


Out[13]:
+------------------+-------+------------------+--------+
|       name       | index |      value       | stderr |
+------------------+-------+------------------+--------+
|   (intercept)    |  None |  18993.4272128   |  None  |
|     bedrooms     |  None |  7936.96767903   |  None  |
| bedrooms_square  |  None |  936.993368193   |  None  |
|    bathrooms     |  None |  25409.5889341   |  None  |
|   sqft_living    |  None |  39.1151363797   |  None  |
| sqft_living_sqrt |  None |  1124.65021281   |  None  |
|     sqft_lot     |  None | 0.00348361822299 |  None  |
|  sqft_lot_sqrt   |  None |  148.258391011   |  None  |
|      floors      |  None |   21204.335467   |  None  |
|  floors_square   |  None |  12915.5243361   |  None  |
+------------------+-------+------------------+--------+
[18 rows x 4 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.

In [14]:
best_model['coefficients']['value'].nnz()


Out[14]:
18

Limit the number of nonzero weights

What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive a "rule of thumb" --- an interpretable model that has only a few features in it.

In this section, you are going to implement a simple, two-phase procedure to achieve this goal:

  1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
  2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.

In [15]:
max_nonzeros = 7

Exploring the larger range of values to find a narrow range with the desired sparsity

Let's define a wide range of possible l1_penalty_values:


In [16]:
l1_penalty_values = np.logspace(8, 10, num=20)

Now, implement a loop that searches through this space of possible l1_penalty values:

  • For l1_penalty in np.logspace(8, 10, num=20):
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
    • Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
      • Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!

In [17]:
nnz = []
for l1 in l1_penalty_values:
    model = graphlab.linear_regression.create(training, l1_penalty=l1, l2_penalty=0.,
                                              verbose=False,
                                              validation_set=None,
                                              target='price',
                                              features=all_features)
    # count the non-zero coefficients learned with this penalty
    nnz.append(model['coefficients']['value'].nnz())

In [18]:
nnz


Out[18]:
[18, 18, 18, 18, 17, 17, 17, 17, 17, 16, 15, 15, 13, 12, 10, 6, 5, 3, 1, 1]

In [19]:
nnz[-6]


Out[19]:
10

Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At the small end of the range, l1_penalty values leave too many non-zeros; at the large end, they leave too few.

More formally, find:

  • The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
    • Store this value in the variable l1_penalty_min (we will use it later)
  • The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
    • Store this value in the variable l1_penalty_max (we will use it later)

Hint: there are many ways to do this, e.g.:

  • Programmatically within the loop above
  • Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
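
As a concrete illustration of the programmatic route, the sketch below derives the two boundaries directly from the (l1_penalty, non-zero count) pairs collected above. The names l1_penalty_min_prog and l1_penalty_max_prog are just illustrative; the cells below pick the same values by inspecting the nnz list.

In [ ]:
# Sketch: largest penalty that still leaves more than max_nonzeros non-zeros,
# and smallest penalty that already leaves fewer than max_nonzeros non-zeros.
l1_penalty_min_prog = max(l1 for l1, n in zip(l1_penalty_values, nnz) if n > max_nonzeros)
l1_penalty_max_prog = min(l1 for l1, n in zip(l1_penalty_values, nnz) if n < max_nonzeros)
print l1_penalty_min_prog, l1_penalty_max_prog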

In [20]:
l1_penalty_min = l1_penalty_values[-6]
l1_penalty_max = l1_penalty_values[-5]

In [21]:
print l1_penalty_min
print l1_penalty_max


2976351441.63
3792690190.73

QUIZ QUESTIONS

What values did you find for l1_penalty_min and l1_penalty_max?

Exploring the narrow range of values to find the solution with the right number of non-zeros that has the lowest RSS on the validation set

We will now explore the narrow region of l1_penalty values we found:


In [22]:
l1_penalty_values = np.linspace(l1_penalty_min, l1_penalty_max, 20)

  • For l1_penalty in np.linspace(l1_penalty_min, l1_penalty_max, 20):
    • Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
    • Measure the RSS of the learned model on the VALIDATION set

Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.


In [23]:
def find_best_lasso(l1_penalties, nonzero, training, validation, features, target):
    # Fit a LASSO model for each candidate penalty and keep the one with the
    # lowest RSS on the validation set among models with exactly `nonzero`
    # non-zero weights.
    the_model = None
    the_RSS = float('inf')
    for l1 in l1_penalties:
        model = graphlab.linear_regression.create(training, l1_penalty=l1, l2_penalty=0.,
                                                  verbose=False, validation_set=None,
                                                  target=target,
                                                  features=features)
        predictions = model.predict(validation)
        errors = predictions - validation[target]
        RSS = np.dot(errors, errors)
        print RSS, the_RSS
        print model['coefficients']['value'].nnz(), nonzero
        if (RSS < the_RSS) and (model['coefficients']['value'].nnz() == nonzero):
            the_RSS = RSS
            the_model = model
    return the_model

QUIZ QUESTIONS

  1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
  2. What features in this model have non-zero coefficients?

In [24]:
the_model = find_best_lasso(l1_penalty_values, max_nonzeros, training, validation, all_features, 'price')


9.66925692362e+14 inf
10 7
9.74019450085e+14 inf
10 7
9.81188367942e+14 inf
10 7
9.89328342459e+14 inf
10 7
9.98783211266e+14 inf
10 7
1.00847716702e+15 inf
10 7
1.01829878055e+15 inf
10 7
1.02824799221e+15 inf
10 7
1.03461690923e+15 inf
8 7
1.03855473594e+15 inf
8 7
1.04323723787e+15 inf
8 7
1.04693748875e+15 inf
7 7
1.05114762561e+15 1.04693748875e+15
7 7
1.05599273534e+15 1.04693748875e+15
7 7
1.06079953176e+15 1.04693748875e+15
7 7
1.0657076895e+15 1.04693748875e+15
6 7
1.06946433543e+15 1.04693748875e+15
6 7
1.07350454959e+15 1.04693748875e+15
6 7
1.07763277558e+15 1.04693748875e+15
6 7
1.08186759232e+15 1.04693748875e+15
6 7

In [25]:
the_model['coefficients'].print_rows(18)


+------------------+-------+---------------+--------+
|       name       | index |     value     | stderr |
+------------------+-------+---------------+--------+
|   (intercept)    |  None | 222253.192544 |  None  |
|     bedrooms     |  None | 661.722717782 |  None  |
| bedrooms_square  |  None |      0.0      |  None  |
|    bathrooms     |  None | 15873.9572593 |  None  |
|   sqft_living    |  None | 32.4102214513 |  None  |
| sqft_living_sqrt |  None | 690.114773313 |  None  |
|     sqft_lot     |  None |      0.0      |  None  |
|  sqft_lot_sqrt   |  None |      0.0      |  None  |
|      floors      |  None |      0.0      |  None  |
|  floors_square   |  None |      0.0      |  None  |
|    waterfront    |  None |      0.0      |  None  |
|       view       |  None |      0.0      |  None  |
|    condition     |  None |      0.0      |  None  |
|      grade       |  None | 2899.42026975 |  None  |
|    sqft_above    |  None | 30.0115753022 |  None  |
|  sqft_basement   |  None |      0.0      |  None  |
|     yr_built     |  None |      0.0      |  None  |
|   yr_renovated   |  None |      0.0      |  None  |
+------------------+-------+---------------+--------+
[18 rows x 4 columns]
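
To answer the second quiz question without scanning the table by eye, a short sketch (again using the SFrame logical filter; not part of the original notebook) can list just the non-zero features of the chosen model:

In [ ]:
# Sketch: names of the features with non-zero weights in the final model.
coefs = the_model['coefficients']
print coefs[coefs['value'] != 0.0]['name']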


In [26]:
the_model


Out[26]:
Class                          : LinearRegression

Schema
------
Number of coefficients         : 18
Number of examples             : 9761
Number of feature columns      : 17
Number of unpacked features    : 17

Hyperparameters
---------------
L1 penalty                     : 3448968612.16
L2 penalty                     : 0.0

Training Summary
----------------
Solver                         : fista
Solver iterations              : 10
Solver status                  : TERMINATED: Iteration limit reached.
Training time (sec)            : 0.7268

Settings
--------
Residual sum of squares        : 1.18886168871e+15
Training RMSE                  : 348994.4413

Highest Positive Coefficients
-----------------------------
(intercept)                    : 222253.1925
bathrooms                      : 15873.9573
grade                          : 2899.4203
sqft_living_sqrt               : 690.1148
bedrooms                       : 661.7227

Lowest Negative Coefficients
----------------------------
No Negative Coefficients       : 

In [ ]: