In this notebook, you will implement your very own LASSO solver via coordinate descent. You will write a function to normalize features, implement cyclical coordinate descent for LASSO, and explore the effect of the L1 penalty.
Make sure you have the latest version of graphlab (>= 1.7)
In [1]:
import graphlab
In [2]:
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before using it below
sales['floors'] = sales['floors'].astype(int)
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_numpy_data() from the second notebook of Week 2.
In [3]:
import numpy as np # note this allows us to refer to numpy as np instead
In [4]:
def get_numpy_data(data_sframe, features, output):
    data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
    # add the column 'constant' to the front of the features list so that we can extract it along with the others:
    features = ['constant'] + features # this is how you combine two lists
    # select the columns of data_sframe given by the features list into the SFrame features_sframe (now including constant):
    features_sframe = data_sframe[features]
    # the following line will convert features_sframe into a numpy matrix:
    feature_matrix = features_sframe.to_numpy()
    # assign the column of data_sframe associated with the output to the SArray output_sarray
    output_sarray = data_sframe[output]
    # the following will convert the SArray into a numpy array:
    output_array = output_sarray.to_numpy()
    return (feature_matrix, output_array)
Also, copy and paste the predict_output()
function to compute the predictions for an entire matrix of features given the matrix and the weights:
In [5]:
def predict_output(feature_matrix, weights):
    # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
    # create the predictions vector by using np.dot()
    predictions = np.dot(feature_matrix, weights)
    return predictions
In the house dataset, features vary wildly in their relative magnitude: sqft_living is very large compared to bedrooms, for instance. As a result, the weight for sqft_living would be much smaller than the weight for bedrooms. This is problematic because "small" weights are dropped first as l1_penalty goes up.
To give equal consideration to all features, we need to normalize features as discussed in the lectures: we divide each feature by its 2-norm so that the transformed feature has norm 1.
Let's see how we can do this normalization easily with Numpy: let us first consider a small matrix.
In [6]:
X = np.array([[3.,5.,8.],[4.,12.,15.]])
print X
Numpy provides a shorthand for computing 2-norms of each column:
In [7]:
norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])]
print norms
To normalize, apply element-wise division:
In [8]:
print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])]
Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes the columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of the original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.
In [9]:
def normalize_features(feature_matrix):
    norms = np.linalg.norm(feature_matrix, axis=0)
    normalized_features = feature_matrix / norms
    return (normalized_features, norms)
To test the function, run the following:
In [10]:
features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]]))
print features
# should print
# [[ 0.6 0.6 0.6]
# [ 0.8 0.8 0.8]]
print norms
# should print
# [5. 10. 15.]
We seek to obtain a sparse set of weights by minimizing the LASSO cost function
SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|).
(By convention, we do not include w[0] in the L1 penalty term. We never want to push the intercept to zero.)
The absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). Instead, we will use coordinate descent: at each iteration, we fix all weights but weight i and find the value of weight i that minimizes the objective. That is, we look for
argmin_{w[i]} [ SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|) ]
where all weights other than w[i] are held constant. We will optimize one w[i] at a time, circling through the weights multiple times: pick a coordinate i, compute the value of w[i] that minimizes the cost function SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|), then move on to the next coordinate.
For this notebook, we use cyclical coordinate descent with normalized features, where we cycle through coordinates 0 to (d-1) in order, and assume the features were normalized as discussed above. The formula for optimizing each coordinate is as follows:
┌ (ro[i] + lambda/2) if ro[i] < -lambda/2
w[i] = ├ 0 if -lambda/2 <= ro[i] <= lambda/2
└ (ro[i] - lambda/2) if ro[i] > lambda/2
where
ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ].
Note that we do not regularize the weight of the constant feature (intercept) w[0], so, for this weight, the update is simply:
w[0] = ro[0]
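Before wiring this rule into the full solver, it may help to see it in isolation. Below is a minimal sketch of the soft-thresholding update (the helper name soft_threshold is hypothetical, not part of the original assignment):
In [ ]:
def soft_threshold(ro_i, l1_penalty):
    # piecewise rule from above: shrink ro_i toward zero by l1_penalty/2,
    # and set it exactly to zero when it lies inside [-l1_penalty/2, l1_penalty/2]
    if ro_i < -l1_penalty/2.:
        return ro_i + l1_penalty/2.
    elif ro_i > l1_penalty/2.:
        return ro_i - l1_penalty/2.
    else:
        return 0.

# e.g. with l1_penalty = 2.: soft_threshold(3., 2.) -> 2.0, soft_threshold(0.5, 2.) -> 0.0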
Let us consider a simple model with 2 features:
In [11]:
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
Don't forget to normalize features:
In [12]:
simple_feature_matrix, norms = normalize_features(simple_feature_matrix)
We assign an arbitrary set of initial weights and inspect the values of ro[i]:
In [13]:
weights = np.array([1., 4., 1.])
Use predict_output() to make predictions on this data.
In [14]:
prediction = predict_output(simple_feature_matrix, weights)
Compute the values of ro[i] for each feature in this simple model, using the formula given above:
ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ]
Hint: You can get a Numpy vector for feature_i using simple_feature_matrix[:,i]
In [44]:
def get_ro_values(feature_matrix, output, prediction, weights):
    ro_values = np.empty(feature_matrix.shape[1], dtype=float)
    for i in range(feature_matrix.shape[1]):
        ro_values[i] = np.sum(feature_matrix[:,i] * (output - prediction + weights[i] * feature_matrix[:,i]))
    return ro_values
In [45]:
ros = get_ro_values(simple_feature_matrix, output, prediction, weights)
In [209]:
ros
Out[209]:
QUIZ QUESTION
Recall that, whenever ro[i] falls between -l1_penalty/2 and l1_penalty/2, the corresponding weight w[i] is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of l1_penalty would not set w[1] to zero, but would set w[2] to zero, if we were to take a step in that coordinate?
In [212]:
print -ros * 2
print ros * 2
In [47]:
print (-ros[1] * 2, -ros[2] * 2), (ros[2] * 2, ros[1] * 2)
So we can say that ro[i] quantifies the significance of the i-th feature: the larger ro[i] is, the more likely it is for the i-th feature to be retained.
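To make the reasoning behind the quiz question concrete: a single coordinate step sets w[i] to zero exactly when l1_penalty >= 2*|ro[i]|. A minimal sketch of that check (the helper zeroed_by is hypothetical, not part of the original assignment):
In [ ]:
def zeroed_by(ro_i, l1_penalty):
    # coordinate descent sets w[i] to zero when ro[i] lies in [-l1_penalty/2, l1_penalty/2],
    # i.e. when l1_penalty >= 2*abs(ro[i])
    return l1_penalty >= 2 * abs(ro_i)

# a given l1_penalty zeroes w[2] but keeps w[1] non-zero exactly when
# 2*abs(ros[2]) <= l1_penalty < 2*abs(ros[1]) -- the bounds printed in the cells above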
Using the formula above, implement coordinate descent that minimizes the cost function over a single feature i. Note that the intercept (weight 0) is not regularized. The function should accept feature matrix, output, current weights, l1 penalty, and index of feature to optimize over. The function should return new weight for feature i.
In [142]:
def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty):
    # compute prediction
    prediction = predict_output(feature_matrix, weights)
    # compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ]
    ro_i = np.sum(feature_matrix[:,i] * (output - prediction + weights[i] * feature_matrix[:,i]))
    if i == 0: # intercept -- do not regularize
        new_weight_i = ro_i
    elif ro_i < -l1_penalty/2.:
        new_weight_i = ro_i + l1_penalty/2.
    elif ro_i > l1_penalty/2.:
        new_weight_i = ro_i - l1_penalty/2.
    else:
        new_weight_i = 0.
    return new_weight_i
To test the function, run the following cell:
In [143]:
# should print 0.425558846691
import math
print lasso_coordinate_descent_step(1, np.array([[3./math.sqrt(13),1./math.sqrt(10)],[2./math.sqrt(13),3./math.sqrt(10)]]),
np.array([1., 1.]), np.array([1., 4.]), 0.1)
Now that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat.
When do we know to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop.
For each iteration:
1. As you loop over the features in order and perform coordinate descent, measure how much each coordinate changes.
2. After the loop, if the maximum change across all coordinates is below the tolerance, stop; otherwise, repeat the loop.
Return the weights.
IMPORTANT: when computing a new weight for coordinate i, make sure to incorporate the new weights for coordinates 0, 1, ..., i-1. One good way is to update your weights variable in-place. See the following pseudocode for illustration.
for i in range(len(weights)):
    old_weights_i = weights[i] # remember old value of weight[i], as it will be overwritten
    # the following line uses new values for weight[0], weight[1], ..., weight[i-1]
    # and old values for weight[i], ..., weight[d-1]
    weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)
    # use old_weights_i to compute change in coordinate
    ...
In [155]:
def lasso_cyclical_coordinate_descent(feature_matrix, output, initial_weights, l1_penalty, tolerance):
    weights = np.copy(initial_weights)
    weight_diffs = np.zeros(len(weights))
    converged = False
    iteration = 0
    abs_tolerance = abs(tolerance)
    while not converged and iteration < 10000:
        for i in range(weights.size):
            old_weights_i = weights[i]
            weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)
            weight_diffs[i] = abs(old_weights_i - weights[i])
        converged = max(weight_diffs) < abs_tolerance
        iteration = iteration + 1
    print converged
    print iteration
    return weights
Using the following parameters, learn the weights on the sales dataset.
In [156]:
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
initial_weights = np.zeros(3)
l1_penalty = 1e7
tolerance = 1.0
First create a normalized version of the feature matrix, normalized_simple_feature_matrix
In [157]:
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
(normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features
Then, run your implementation of LASSO coordinate descent:
In [158]:
weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
In [159]:
predictions = predict_output(normalized_simple_feature_matrix, weights)
residuals = predictions - output # use the numpy output array returned by get_numpy_data
rss = sum(pow(residuals,2))
print rss
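Since the same RSS pattern recurs for the models evaluated below, a small helper like this (hypothetical, not required by the assignment) can reduce repetition:
In [ ]:
def compute_rss(feature_matrix, weights, true_output):
    # residual sum of squares: SUM[ (prediction - output)^2 ]
    residuals = predict_output(feature_matrix, weights) - true_output
    return (residuals ** 2).sum()

# e.g. compute_rss(normalized_simple_feature_matrix, weights, output) should reproduce the rss printed above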
In [189]:
weights_mask = np.array(weights[1::] == 0.)
simple_features_np = np.array(simple_features)
simple_features_np[weights_mask]
Out[189]:
QUIZ QUESTIONS
1. What is the RSS of the learned model on the normalized dataset?
2. Which features had weight zero at convergence?
Let us split the sales dataset into training and test sets.
In [161]:
train_data,test_data = sales.random_split(.8,seed=0)
Let us consider the following set of features.
In [162]:
all_features = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated']
First, create a normalized feature matrix from the TRAINING data with these features. (Make sure you store the norms for the normalization, since we'll use them later.)
In [163]:
(all_features_matrix, all_output_array) = get_numpy_data(train_data, all_features, my_output)
(all_normalized_features, all_norms) = normalize_features(all_features_matrix)
In [164]:
all_initial_weights = np.zeros(all_features_matrix.shape[1])
Next, learn the weights with l1_penalty=1e7 on the training data. Initialize weights to all zeros, and set tolerance=1. Call the resulting weights weights1e7; you will need them later.
In [165]:
weights1e7 = lasso_cyclical_coordinate_descent(all_normalized_features, all_output_array,
all_initial_weights, 1e7, 1)
In [204]:
weights1e7
Out[204]:
In [206]:
weights1e7_mask = np.array(weights1e7[1:] != 0.)
all_features_np = np.array(all_features)
all_features_np[weights1e7_mask]
Out[206]:
Next, learn the weights with l1_penalty=1e8 on the training data. Initialize weights to all zeros, and set tolerance=1. Call the resulting weights weights1e8; you will need them later.
In [190]:
all_initial_weights
Out[190]:
In [191]:
weights1e8 = lasso_cyclical_coordinate_descent(all_normalized_features, all_output_array,
all_initial_weights, 1e8, 1)
In [207]:
weights1e8_mask = np.array(weights1e8[1:] != 0.)
all_features_np[weights1e8_mask]
Out[207]:
In [193]:
weights1e8
Out[193]:
Finally, learn the weights with l1_penalty=1e4 on the training data. Initialize weights to all zeros, and set tolerance=5e5. Call the resulting weights weights1e4; you will need them later. (This case will take quite a bit longer to converge than the others above.)
In [194]:
weights1e4 = lasso_cyclical_coordinate_descent(all_normalized_features, all_output_array,
all_initial_weights, 1e4, 5e5)
In [205]:
weights1e4
Out[205]:
In [208]:
weights1e4_mask = np.array(weights1e4[1:] != 0.)
all_features_np[weights1e4_mask]
Out[208]:
Recall that we normalized our feature matrix, before learning the weights. To use these weights on a test set, we must normalize the test data in the same way.
Alternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data:
In this case, we must scale the resulting weights so that we can make predictions with the original features:
1. Store the norms of the original features in a vector called norms:
features, norms = normalize_features(features)
2. Run LASSO on the normalized features and obtain a weights vector.
3. Divide the learned weights by norms to get weights that work on the original features:
weights_normalized = weights / norms
Now, we can apply weights_normalized to the test data, without normalizing it!
Create a normalized version of each of the weights learned above (weights1e4, weights1e7, weights1e8).
In [196]:
weights1e4_normalized = weights1e4 / all_norms
weights1e7_normalized = weights1e7 / all_norms
weights1e8_normalized = weights1e8 / all_norms
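As an optional sanity check (not part of the original assignment, relying on the column-wise rescaling identity described above), predictions made with the rescaled weights on the original features should match predictions made with the raw weights on the normalized features:
In [ ]:
# feature_matrix.dot(weights / norms) equals (feature_matrix / norms).dot(weights), up to floating point
print(np.allclose(predict_output(all_features_matrix, weights1e7_normalized),
                  predict_output(all_normalized_features, weights1e7)))  # expect True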
In [198]:
print weights1e7_normalized[3]
To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then:
print normalized_weights1e7[3]
should return 161.31745624837794.
Let's now evaluate the three models on the test data:
In [199]:
(test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price')
Compute the RSS of each of the three normalized weight vectors on the (unnormalized) test_feature_matrix:
In [201]:
predictions1e4 = predict_output(test_feature_matrix, weights1e4_normalized)
residuals1e4 = predictions1e4 - test_output # use the numpy output array returned by get_numpy_data
rss1e4 = sum(pow(residuals1e4,2))
print rss1e4
In [202]:
predictions1e7 = predict_output(test_feature_matrix, weights1e7_normalized)
residuals1e7 = predictions1e7 - test_output
rss1e7 = sum(pow(residuals1e7,2))
print rss1e7
In [203]:
predictions1e8 = predict_output(test_feature_matrix, weights1e8_normalized)
residuals1e8 = predictions1e8 - test_output
rss1e8 = sum(pow(residuals1e8,2))
print rss1e8
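To wrap up, here is a quick side-by-side print of the three test-set RSS values (a convenience sketch, not part of the original assignment):
In [ ]:
# compare the three models' RSS on the test data; the smallest value indicates the best fit
print('l1_penalty=1e4: %e' % rss1e4)
print('l1_penalty=1e7: %e' % rss1e7)
print('l1_penalty=1e8: %e' % rss1e8)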
In [ ]: