The goal of this first notebook is to explore multiple regression and feature engineering with existing GraphLab Create functions.
In this notebook you will use data on house sales in King County to predict prices using multiple regression: you will engineer new features (transformations and interactions of existing columns), fit several models with graphlab.linear_regression.create(), and compare the models by computing the residual sum of squares (RSS) on test data.
In [1]:
import graphlab
In [3]:
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
In [5]:
train_data,test_data = sales.random_split(.8,seed=0)
Recall that we can learn a multiple regression model predicting 'price' from the features example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on the training data with the following code:
(Aside: We set validation_set = None to ensure that the results are always the same)
In [28]:
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
In [29]:
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing GraphLab Create functions to build and analyze multiple regression models.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
In [30]:
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
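Recall that for predictions $\hat{y}_i$ and observed outcomes $y_i$ over $N$ observations, the residual sum of squares is

$$\text{RSS} = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$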
In [32]:
def get_residual_sum_of_squares(model, data, outcome):
    # First get the predictions
    predictions = model.predict(data)
    # Then compute the residuals/errors
    residuals = predictions - outcome
    # Then square and add them up
    RSS = (residuals * residuals).sum()
    return RSS
Test your function by computing the RSS on TEST data for the example model:
In [33]:
rss_example_test = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_test # should be 2.7376153833e+14
Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, square feet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the square feet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library (the cell below uses numpy's log instead, which produces the same values).
In [38]:
import numpy as np
train_data['bedrooms_squared'] = np.power(train_data['bedrooms'],2)
train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms']
train_data['log_sqft_living'] = np.log(train_data['sqft_living'])
train_data['lat_plus_long'] = train_data['lat']+train_data['long']
train_data.head()
test_data['bedrooms_squared'] = np.power(test_data['bedrooms'],2)
test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms']
test_data['log_sqft_living'] = np.log(test_data['sqft_living'])
test_data['lat_plus_long'] = test_data['lat']+test_data['long']
test_data.head()
In [39]:
#from math import log
Next create the following 4 new features as columns in both TEST and TRAIN data: bedrooms_squared (bedrooms squared), bed_bath_rooms (bedrooms times bathrooms), log_sqft_living (the log of sqft_living), and lat_plus_long (lat plus long). These columns were already created with numpy in the cell above; the commented cells below sketch an equivalent approach using math.log and .apply().
In [40]:
#train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
#test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
In [41]:
# create the remaining 3 features in both TEST and TRAIN data
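# The remaining 3 features, built with SArray arithmetic and .apply() as an
# alternative sketch to the numpy version already run in the cell above.
# Re-running this simply recreates columns that already exist with the same values.
from math import log
train_data['bed_bath_rooms'] = train_data['bedrooms'] * train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms'] * test_data['bathrooms']
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x))
train_data['lat_plus_long'] = train_data['lat'] + train_data['long']
test_data['lat_plus_long'] = test_data['lat'] + test_data['long']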
Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)
In [42]:
print 'mean bedrooms_squared == ' + str(np.round(test_data['bedrooms_squared'].mean(),2))
print 'mean bed_bath_rooms == ' + str(np.round(test_data['bed_bath_rooms'].mean(),2))
print 'mean log_sqft_living == ' + str(np.round(test_data['log_sqft_living'].mean(),2))
print 'mean lat_plus_long == ' + str(np.round(test_data['lat_plus_long'].mean(),2))
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more:
In [43]:
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
In [44]:
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target='price', features=model_1_features, validation_set=None)
model_2 = graphlab.linear_regression.create(train_data, target='price', features=model_2_features, validation_set=None)
model_3 = graphlab.linear_regression.create(train_data, target='price', features=model_3_features, validation_set=None)
In [45]:
# Examine/extract each model's coefficients:
print model_1.get("coefficients")
print model_2.get("coefficients")
print model_3.get("coefficients")
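If you want to compare how a single weight changes between models (for example, the 'bathrooms' coefficient in model_1 versus model_2), you can filter the coefficients SFrame by feature name. This is a minimal sketch: it assumes the coefficients SFrame has 'name' and 'value' columns, as in the printouts above, and get_coefficient is a helper defined here rather than a graphlab function.
In [ ]:
# Hypothetical helper (not part of graphlab): look up one feature's weight by name.
def get_coefficient(model, feature_name):
    coefficients = model.get("coefficients")
    # Filter the coefficients SFrame to the row whose 'name' matches the feature
    return coefficients[coefficients['name'] == feature_name]['value'][0]

print 'bathrooms weight in model_1 == ' + str(get_coefficient(model_1, 'bathrooms'))
print 'bathrooms weight in model_2 == ' + str(get_coefficient(model_2, 'bathrooms'))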
First use your function from earlier to compute the RSS on TRAINING data for each of the three models.
In [48]:
# Compute the RSS on TRAINING data for each of the three models and record the values:
print 'rss train model_1 == ' + str(get_residual_sum_of_squares(model_1, train_data, train_data['price']))
print 'rss train model_2 == ' + str(get_residual_sum_of_squares(model_2, train_data, train_data['price']))
print 'rss train model_3 == ' + str(get_residual_sum_of_squares(model_3, train_data, train_data['price']))
Now compute the RSS on TEST data for each of the three models.
In [49]:
# Compute the RSS on TESTING data for each of the three models and record the values:
print 'rss test model_1 == ' + str(get_residual_sum_of_squares(model_1, test_data, test_data['price']))
print 'rss test model_2 == ' + str(get_residual_sum_of_squares(model_2, test_data, test_data['price']))
print 'rss test model_3 == ' + str(get_residual_sum_of_squares(model_3, test_data, test_data['price']))
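To record which model does best on held-out data, you can compare the three test-set RSS values directly. This is a small sketch that simply reuses get_residual_sum_of_squares and the models defined above; smaller RSS on TEST data is better.
In [ ]:
# Compare test-set RSS across the three models; the smallest value generalizes best.
rss_by_model = [
    ('model_1', get_residual_sum_of_squares(model_1, test_data, test_data['price'])),
    ('model_2', get_residual_sum_of_squares(model_2, test_data, test_data['price'])),
    ('model_3', get_residual_sum_of_squares(model_3, test_data, test_data['price'])),
]
best_name, best_rss = min(rss_by_model, key=lambda pair: pair[1])
print 'lowest test RSS: ' + best_name + ' (' + str(best_rss) + ')'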