In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:
- Run LASSO with different L1 penalties.
- Choose the best L1 penalty using a validation set.
- Choose the best L1 penalty using a validation set, with an additional constraint on the size of the subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
In [1]:
import graphlab
In [2]:
sales = graphlab.SFrame('kc_house_data.gl/')
In [3]:
sales.head()
Out[3]:
As in Week 2, we consider features that are transformations of the inputs.
In [4]:
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert it to float before creating the new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
Let us fit a model with all the features available, plus the features we just created above.
In [5]:
all_features = ['bedrooms', 'bedrooms_square',
                'bathrooms',
                'sqft_living', 'sqft_living_sqrt',
                'sqft_lot', 'sqft_lot_sqrt',
                'floors', 'floors_square',
                'waterfront', 'view', 'condition', 'grade',
                'sqft_above',
                'sqft_basement',
                'yr_built', 'yr_renovated']
Applying an L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
In [6]:
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
                                              validation_set=None,
                                              l2_penalty=0., l1_penalty=1e10)
Find what features had non-zero weight.
In [9]:
model_all.get('coefficients')[model_all.get('coefficients')['value'] != 0.0]
Out[9]:
Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing subset selection.
QUIZ QUESTION: According to this list of weights, which of the features have been chosen?
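One way to read off the chosen features is to filter the coefficients for non-zero values and keep the feature names. This is a sketch, assuming (as GraphLab Create's linear regression does) that the coefficients SFrame has 'name' and 'value' columns:

coefficients = model_all.get('coefficients')
# names of features whose learned weight is non-zero
print coefficients[coefficients['value'] != 0.0]['name']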
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do a three-way split into train, validation, and test sets:
Be very careful that you use seed = 1 to ensure you get the same answer!
In [10]:
(training_and_validation, testing) = sales.random_split(.9, seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training_and_validation into train and validation
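As a quick sanity check, you can inspect the sizes of the three sets (a sketch; the exact counts depend on the random split):

print len(training), len(validation), len(testing)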
Next, we write a loop that does the following:
- For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13)):
  - Fit a regression model with the given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
  - Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty.
- Report which l1_penalty produced the lowest RSS on validation data.

When you call linear_regression.create() make sure you set validation_set = None.
Note: you can turn off the printout of linear_regression.create() with verbose = False.
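The RSS computation used inside the loop below can also be packaged as a small helper. This is just a sketch; get_rss is a hypothetical name, not part of GraphLab Create:

def get_rss(model, data, outcome):
    # predictions minus true values, squared and summed
    predictions = model.predict(data)
    residuals = predictions - outcome
    return (residuals * residuals).sum()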
In [15]:
import numpy as np

validation_rss_avg_list = []
best_l1_penalty = 1
min_rss = float("inf")
for l1_penalty in np.logspace(1, 7, num=13):
    model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None,
                                              l2_penalty=0., l1_penalty=l1_penalty, verbose=False)
    # compute RSS on the validation set
    prediction = model.predict(validation[all_features])
    error = prediction - validation['price']
    error_squared = error * error
    rss = error_squared.sum()
    print "L1 penalty " + str(l1_penalty) + " validation rss = " + str(rss)
    # keep track of the penalty with the lowest validation RSS
    if rss < min_rss:
        min_rss = rss
        best_l1_penalty = l1_penalty
    validation_rss_avg_list.append(rss)
print "Best L1 penalty " + str(best_l1_penalty) + " validation rss = " + str(min_rss)
validation_rss_avg_list
Out[15]:
In [16]:
np.logspace(1, 7, num=13)
Out[16]:
QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
In [17]:
best_l1_penalty
Out[17]:
In [18]:
model_best = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=best_l1_penalty, verbose=False)
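To answer the second quiz question, compute the RSS of this model on the TEST set. A sketch, reusing the same RSS computation as in the loop above:

test_prediction = model_best.predict(testing[all_features])
test_error = test_prediction - testing['price']
test_rss = (test_error * test_error).sum()
print "Test RSS = " + str(test_rss)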
QUIZ QUESTION: Also, using this value of L1 penalty, how many nonzero weights do you have?
In [20]:
len(model_best.get('coefficients')[model_best.get('coefficients')['value'] != 0.0])
Out[20]:
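Equivalently, since model_best['coefficients']['value'] is an SArray, you could count the non-zero weights with its .nnz() method (used in the next section):

model_best['coefficients']['value'].nnz()   # number of non-zero weights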
What if we want to limit ourselves to, say, 7 features? In this section, you are going to implement a simple, two-phase procedure to achieve this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
In [21]:
max_nonzeros = 7
In [22]:
l1_penalty_values = np.logspace(8, 10, num=20)
Now, implement a loop that searches through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
- Fit a regression model with the given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None.
- Extract the weights of the model and count the number of nonzeros. Note: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
In [23]:
nnz_list = []
for l1_penalty in np.logspace(8, 10, num=20):
    model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None,
                                              l2_penalty=0., l1_penalty=l1_penalty, verbose=False)
    # count the number of non-zero coefficients
    nnz = model['coefficients']['value'].nnz()
    print "L1 penalty " + str(l1_penalty) + " : # nnz = " + str(nnz)
    nnz_list.append(nnz)
nnz_list
Out[23]:
Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
- The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights). Store this value in the variable l1_penalty_min (we will use it later).
- The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights). Store this value in the variable l1_penalty_max (we will use it later).

Hint: there are many ways to do this, e.g.:
- Programmatically within the loop above.
- Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries; see the sketch after the next code cell.
In [25]:
l1_penalty_min = 2976351441.63
l1_penalty_max = 3792690190.73
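These boundaries were read off from the printout above, but they can also be computed programmatically. A sketch, assuming nnz_list holds the non-zero counts from the previous loop in the same order as the penalties:

penalties = np.logspace(8, 10, num=20)
# largest penalty that still leaves too many non-zeros
l1_penalty_min = max(p for p, nnz in zip(penalties, nnz_list) if nnz > max_nonzeros)
# smallest penalty that already leaves too few non-zeros
l1_penalty_max = min(p for p, nnz in zip(penalties, nnz_list) if nnz < max_nonzeros)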
QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
In [26]:
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
For l1_penalty in np.linspace(l1_penalty_min, l1_penalty_max, 20):
- Fit a regression model with the given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None.
- Measure the RSS of the learned model on the VALIDATION set.

Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.
In [29]:
nnz_list = []
validation_rss_avg_list = []
best_l1_penalty = 1
min_rss = float("inf")
for l1_penalty in np.linspace(l1_penalty_min, l1_penalty_max, 20):
    model = graphlab.linear_regression.create(training, target='price', features=all_features,
                                              validation_set=None,
                                              l2_penalty=0., l1_penalty=l1_penalty, verbose=False)
    # compute RSS on the validation set
    prediction = model.predict(validation[all_features])
    error = prediction - validation['price']
    error_squared = error * error
    rss = error_squared.sum()
    print "L1 penalty " + str(l1_penalty) + " validation rss = " + str(rss)
    # count the number of non-zero coefficients
    nnz = model['coefficients']['value'].nnz()
    print "L1 penalty " + str(l1_penalty) + " : # nnz = " + str(nnz)
    nnz_list.append(nnz)
    print "----------------------------------------------------------"
    # track the best penalty among models with exactly max_nonzeros non-zero weights
    if nnz == max_nonzeros and rss < min_rss:
        min_rss = rss
        best_l1_penalty = l1_penalty
    validation_rss_avg_list.append(rss)
print "Best L1 penalty " + str(best_l1_penalty) + " validation rss = " + str(min_rss)
QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients?
In [31]:
model_best = graphlab.linear_regression.create(training, target='price', features=all_features,
                                               validation_set=None,
                                               l2_penalty=0., l1_penalty=best_l1_penalty, verbose=False)
model_best.get('coefficients')[model_best.get('coefficients')['value'] != 0.0]
Out[31]: