Regression Week 2: Multiple Regression (Interpretation)

The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.

In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:

  • Use SFrames to do some feature engineering
  • Use built-in graphlab functions to compute the regression weights (coefficients/parameters)
  • Given the regression weights, predictors, and outcome, write a function to compute the Residual Sum of Squares (RSS)
  • Look at coefficients and interpret their meanings
  • Evaluate multiple models via RSS

Fire up graphlab create


In [1]:
import graphlab

Load in house sales data

Dataset is from house sales in King County, the region where the city of Seattle, WA is located.


In [16]:
sales = graphlab.SFrame('kc_house_data.gl/')
sales.head()


Out[16]:
id date price bedrooms bathrooms sqft_living sqft_lot floors waterfront
7129300520 2014-10-13 00:00:00+00:00 221900.0 3.0 1.0 1180.0 5650 1 0
6414100192 2014-12-09 00:00:00+00:00 538000.0 3.0 2.25 2570.0 7242 2 0
5631500400 2015-02-25 00:00:00+00:00 180000.0 2.0 1.0 770.0 10000 1 0
2487200875 2014-12-09 00:00:00+00:00 604000.0 4.0 3.0 1960.0 5000 1 0
1954400510 2015-02-18 00:00:00+00:00 510000.0 3.0 2.0 1680.0 8080 1 0
7237550310 2014-05-12 00:00:00+00:00 1225000.0 4.0 4.5 5420.0 101930 1 0
1321400060 2014-06-27 00:00:00+00:00 257500.0 3.0 2.25 1715.0 6819 2 0
2008000270 2015-01-15 00:00:00+00:00 291850.0 3.0 1.5 1060.0 9711 1 0
2414600126 2015-04-15 00:00:00+00:00 229500.0 3.0 1.0 1780.0 7470 1 0
3793500160 2015-03-12 00:00:00+00:00 323000.0 3.0 2.5 1890.0 6560 2 0
view condition grade sqft_above sqft_basement yr_built yr_renovated zipcode lat
0 3 7 1180 0 1955 0 98178 47.51123398
0 3 7 2170 400 1951 1991 98125 47.72102274
0 3 6 770 0 1933 0 98028 47.73792661
0 5 7 1050 910 1965 0 98136 47.52082
0 3 8 1680 0 1987 0 98074 47.61681228
0 3 11 3890 1530 2001 0 98053 47.65611835
0 3 7 1715 0 1995 0 98003 47.30972002
0 3 7 1060 0 1963 0 98198 47.40949984
0 3 7 1050 730 1960 0 98146 47.51229381
0 3 7 1890 0 2003 0 98038 47.36840673
long sqft_living15 sqft_lot15
-122.25677536 1340.0 5650.0
-122.3188624 1690.0 7639.0
-122.23319601 2720.0 8062.0
-122.39318505 1360.0 5000.0
-122.04490059 1800.0 7503.0
-122.00528655 4760.0 101930.0
-122.32704857 2238.0 6819.0
-122.31457273 1650.0 9711.0
-122.33659507 1780.0 8113.0
-122.0308176 2390.0 7570.0
[10 rows x 21 columns]

Split data into training and testing.

We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).


In [17]:
train_data, test_data = sales.random_split(.8, seed=0)

Learning a multiple regression model

Recall that we can learn a multiple regression model predicting 'price' from the features example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on the training data with the following code:

(Aside: We set validation_set = None to ensure that the results are always the same)


In [18]:
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features, 
                                                  validation_set = None)


Linear regression:
--------------------------------------------------------
Number of examples          : 17384
Number of features          : 3
Number of unpacked features : 3
Number of coefficients    : 4
Starting Newton Method
--------------------------------------------------------
+-----------+----------+--------------+--------------------+---------------+
| Iteration | Passes   | Elapsed Time | Training-max_error | Training-rmse |
+-----------+----------+--------------+--------------------+---------------+
| 1         | 2        | 0.033914     | 4146407.600631     | 258679.804477 |
+-----------+----------+--------------+--------------------+---------------+
SUCCESS: Optimal solution found.

Now that we have fitted the model, we can extract the regression weights (coefficients) as an SFrame as follows:


In [19]:
example_weight_summary = example_model.get("coefficients")
print example_weight_summary


+-------------+-------+----------------+---------------+
|     name    | index |     value      |     stderr    |
+-------------+-------+----------------+---------------+
| (intercept) |  None | 87910.0724924  |  7873.3381434 |
| sqft_living |  None | 315.403440552  | 3.45570032585 |
|   bedrooms  |  None | -65080.2155528 | 2717.45685442 |
|  bathrooms  |  None | 6944.02019265  | 3923.11493144 |
+-------------+-------+----------------+---------------+
[4 rows x 4 columns]
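As an aside, if you want to pull a single weight out of this SFrame programmatically (handy for the sign questions later), standard SFrame boolean filtering works. A minimal sketch, assuming the coefficients SFrame has the 'name' and 'value' columns shown above:

coefs = example_model.get("coefficients")
# Keep only the row for 'sqft_living', then read its fitted value.
sqft_weight = coefs[coefs['name'] == 'sqft_living']['value'][0]
print sqft_weight  # ~315.4, matching the table above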

Making Predictions

In the gradient descent notebook we used numpy to do our regression. In this notebook we will use existing GraphLab Create functions to analyze multiple regressions.

Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:


In [20]:
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878


271789.505878

Compute RSS

Now that we can make predictions given the model, let's write a function to compute the RSS of the model: the sum of squared residuals, RSS = sum over i of (outcome_i - prediction_i)^2. Complete the function below to calculate RSS given the model, the data, and the outcome.


In [21]:
def get_residual_sum_of_squares(model, data, outcome):
    # First get the predictions
    predictions = model.predict(data)
    # Then compute the residuals/errors
    errors = outcome - predictions
    # Then square and add them up
    RSS = (errors * errors).sum()
    return(RSS)
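As a sanity check (not part of the assignment), here is a minimal numpy sketch of the same computation on hand-made numbers; numpy is assumed to be installed:

import numpy as np

predictions = np.array([1.0, 2.0, 3.0])
outcome = np.array([1.5, 2.0, 2.0])
errors = outcome - predictions        # residuals: [0.5, 0.0, -1.0]
print (errors ** 2).sum()             # 0.25 + 0.0 + 1.0 = 1.25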

Test your function by computing the RSS on TEST data for the example model:


In [22]:
rss_example_test = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_test # should be 2.7376153833e+14


2.7376153833e+14

Create some new features

We often think of multiple regression as including multiple different features (e.g. # of bedrooms, square feet, and # of bathrooms), but we can also consider transformations of existing features, e.g. the log of the square feet, or even "interaction" features such as the product of bedrooms and bathrooms.

You will use the logarithm function to create a new feature, so first you should import it from the math library.


In [23]:
from math import log

Next, create the following 4 new features as columns in both TEST and TRAIN data:

  • bedrooms_squared = bedrooms*bedrooms
  • bed_bath_rooms = bedrooms*bathrooms
  • log_sqft_living = log(sqft_living)
  • lat_plus_long = lat + long

As an example, here's the first one:

In [32]:
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)

In [33]:
# create the remaining 3 features in both TEST and TRAIN data
train_data['bed_bath_rooms'] = train_data['bedrooms'] * train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms'] * test_data['bathrooms']

train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x))

train_data['lat_plus_long'] = train_data['lat'] + train_data['long']
test_data['lat_plus_long'] = test_data['lat'] + test_data['long']

  • Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4), since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
  • Bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
  • Taking the log of square feet has the effect of bringing large values closer together and spreading out small values (see the small numeric sketch after this list).
  • Adding latitude to longitude is completely nonsensical, but we will do it anyway (you'll see why).
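A quick numeric illustration of that compressing effect (an aside using only the log already imported above):

# Equal 1000-sqft gaps shrink on the log scale as the base value grows:
print log(2000) - log(1000)   # log(2)    ~ 0.693
print log(5000) - log(4000)   # log(1.25) ~ 0.223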

Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)


In [37]:
print 'bedrooms_squared:', test_data['bedrooms_squared'].mean()
print 'bed_bath_rooms:', test_data['bed_bath_rooms'].mean()
print 'lat_plus_long:', test_data['lat_plus_long'].mean()
print 'log_sqft_living:', test_data['log_sqft_living'].mean()


bedrooms_squared: 12.4466777016
bed_bath_rooms: 7.50390163159
lat_plus_long: -74.6533349722
log_sqft_living: 7.55027467965
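Since the quiz asks for each mean rounded to 2 digits, a minimal follow-up using Python's built-in round:

for col in ['bedrooms_squared', 'bed_bath_rooms', 'log_sqft_living', 'lat_plus_long']:
    print col + ':', round(test_data[col].mean(), 2)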

Learning Multiple Models

Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more:

  • Model 1: square feet, # bedrooms, # bathrooms, latitude & longitude
  • Model 2: add bedrooms*bathrooms
  • Model 3: add log square feet, bedrooms squared, and the (nonsensical) latitude + longitude

In [40]:
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']

Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:


In [41]:
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target = 'price', 
                                            features = model_1_features, validation_set = None)
model_2 = graphlab.linear_regression.create(train_data, target = 'price', 
                                            features = model_2_features, validation_set = None)
model_3 = graphlab.linear_regression.create(train_data, target = 'price', 
                                            features = model_3_features, validation_set = None)


Linear regression:
--------------------------------------------------------
Number of examples          : 17384
Number of features          : 5
Number of unpacked features : 5
Number of coefficients    : 6
Starting Newton Method
--------------------------------------------------------
+-----------+----------+--------------+--------------------+---------------+
| Iteration | Passes   | Elapsed Time | Training-max_error | Training-rmse |
+-----------+----------+--------------+--------------------+---------------+
| 1         | 2        | 0.072490     | 4074878.213096     | 236378.596455 |
+-----------+----------+--------------+--------------------+---------------+
SUCCESS: Optimal solution found.

Linear regression:
--------------------------------------------------------
Number of examples          : 17384
Number of features          : 6
Number of unpacked features : 6
Number of coefficients    : 7
Starting Newton Method
--------------------------------------------------------
+-----------+----------+--------------+--------------------+---------------+
| Iteration | Passes   | Elapsed Time | Training-max_error | Training-rmse |
+-----------+----------+--------------+--------------------+---------------+
| 1         | 2        | 0.078051     | 4014170.932928     | 235190.935428 |
+-----------+----------+--------------+--------------------+---------------+
SUCCESS: Optimal solution found.

Linear regression:
--------------------------------------------------------
Number of examples          : 17384
Number of features          : 9
Number of unpacked features : 9
Number of coefficients    : 10
Starting Newton Method
--------------------------------------------------------
+-----------+----------+--------------+--------------------+---------------+
| Iteration | Passes   | Elapsed Time | Training-max_error | Training-rmse |
+-----------+----------+--------------+--------------------+---------------+
| 1         | 2        | 0.045337     | 3193229.177908     | 228200.043155 |
+-----------+----------+--------------+--------------------+---------------+
SUCCESS: Optimal solution found.


In [42]:
# Examine/extract each model's coefficients:
model_1_summary = model_1.get("coefficients")
model_2_summary = model_2.get("coefficients")
model_3_summary = model_3.get("coefficients")
print model_1_summary
print model_2_summary
print model_3_summary


+-------------+-------+----------------+---------------+
|     name    | index |     value      |     stderr    |
+-------------+-------+----------------+---------------+
| (intercept) |  None | -56140675.7444 | 1649985.42028 |
| sqft_living |  None | 310.263325778  | 3.18882960408 |
|   bedrooms  |  None | -59577.1160683 | 2487.27977322 |
|  bathrooms  |  None | 13811.8405419  | 3593.54213297 |
|     lat     |  None | 629865.789485  | 13120.7100323 |
|     long    |  None | -214790.285186 | 13284.2851607 |
+-------------+-------+----------------+---------------+
[6 rows x 4 columns]

+----------------+-------+----------------+---------------+
|      name      | index |     value      |     stderr    |
+----------------+-------+----------------+---------------+
|  (intercept)   |  None | -54410676.1152 | 1650405.16541 |
|  sqft_living   |  None | 304.449298056  | 3.20217535637 |
|    bedrooms    |  None | -116366.04323  | 4805.54966546 |
|   bathrooms    |  None | -77972.3305131 | 7565.05991091 |
|      lat       |  None | 625433.834953  | 13058.3530972 |
|      long      |  None | -203958.602959 | 13268.1283711 |
| bed_bath_rooms |  None | 26961.6249091  | 1956.36561555 |
+----------------+-------+----------------+---------------+
[7 rows x 4 columns]

+------------------+-------+----------------+---------------+
|       name       | index |     value      |     stderr    |
+------------------+-------+----------------+---------------+
|   (intercept)    |  None | -52974974.0598 | 1615194.94383 |
|   sqft_living    |  None | 529.196420561  | 7.69913498509 |
|     bedrooms     |  None | 28948.5277291  | 9395.72889104 |
|    bathrooms     |  None | 65661.2072295  | 10795.3380703 |
|       lat        |  None | 704762.148378  |      nan      |
|       long       |  None | -137780.019966 |      nan      |
|  bed_bath_rooms  |  None | -8478.36410481 | 2858.95391257 |
| bedrooms_squared |  None | -6072.38466052 | 1494.97042777 |
| log_sqft_living  |  None | -563467.78426  | 17567.8230813 |
|  lat_plus_long   |  None | -83217.197896  |      nan      |
+------------------+-------+----------------+---------------+
[10 rows x 4 columns]
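To read off the 'bathrooms' signs asked about below without scanning the tables by eye, the same coefficient-filtering pattern from earlier can be reused (a sketch, not required for the quiz):

for name, model in [('model_1', model_1), ('model_2', model_2)]:
    coefs = model.get("coefficients")
    # Extract the fitted weight for 'bathrooms' in this model.
    print name, 'bathrooms weight:', coefs[coefs['name'] == 'bathrooms']['value'][0]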

Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?

Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?

Think about what this means. (Also notice the nan standard errors in model 3 for 'lat', 'long', and 'lat_plus_long': because lat_plus_long is an exact linear combination of lat and long, those three features are perfectly collinear, and their standard errors cannot be computed.)

Comparing multiple models

Now that you've learned three models and extracted the model weights, we want to evaluate which model is best.

First, use your function from earlier to compute the RSS on TRAINING data for each of the three models.


In [44]:
# Compute the RSS on TRAINING data for each of the three models and record the values:
rss_model_1 = get_residual_sum_of_squares(model_1, train_data, train_data['price'])
rss_model_2 = get_residual_sum_of_squares(model_2, train_data, train_data['price'])
rss_model_3 = get_residual_sum_of_squares(model_3, train_data, train_data['price'])
print 'rss_model_1: ', rss_model_1
print 'rss_model_2: ', rss_model_2
print 'rss_model_3: ', rss_model_3


rss_model_1:  9.71328233544e+14
rss_model_2:  9.61592067856e+14
rss_model_3:  9.05276314556e+14

Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?

Now compute the RSS on TEST data for each of the three models.


In [45]:
# Compute the RSS on TESTING data for each of the three models and record the values:
rss_model_1 = get_residual_sum_of_squares(model_1, test_data, test_data['price'])
rss_model_2 = get_residual_sum_of_squares(model_2, test_data, test_data['price'])
rss_model_3 = get_residual_sum_of_squares(model_3, test_data, test_data['price'])
print 'rss_model_1: ', rss_model_1
print 'rss_model_2: ', rss_model_2
print 'rss_model_3: ', rss_model_3


rss_model_1:  2.26568089093e+14
rss_model_2:  2.24368799994e+14
rss_model_3:  2.5182931895e+14

Quiz Question: Which model (1, 2 or 3) has lowest RSS on TESTING Data? Is this what you expected? Think about the features that were added to each model relative to the previous one.
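For reference, a one-line sketch (assuming the test-RSS variables above are still in scope) that names the model with the lowest RSS on TEST data:

rss = {'model_1': rss_model_1, 'model_2': rss_model_2, 'model_3': rss_model_3}
print min(rss, key=rss.get)   # dict key with the smallest test RSS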