In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
- Write a function that takes an SArray and a degree and returns an SFrame where each column is the SArray raised to a power up to that degree
- Use matplotlib to visualize polynomial regressions
- Use matplotlib to visualize the same polynomial degree on different subsets of the data
- Use a validation set to select a polynomial degree
- Assess the final fit using test data
We will continue to use the House data from previous notebooks.
In [ ]:
import graphlab
Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use .apply() with a lambda function. For example, to take an example array and compute the third power we can do as follows (note: running this cell the first time may take longer than expected since it loads graphlab):
In [ ]:
tmp = graphlab.SArray([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x**3)
print tmp
print tmp_cubed
We can create an empty SFrame using graphlab.SFrame() and then add columns to it with ex_sframe['column_name'] = value. For example, here we create an empty SFrame and set the column 'power_1' to the first power of tmp (i.e. tmp itself).
In [ ]:
ex_sframe = graphlab.SFrame()
ex_sframe['power_1'] = tmp
print ex_sframe
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:
In [ ]:
def polynomial_sframe(feature, degree):
    # assume that degree >= 1
    # initialize the SFrame:
    poly_sframe = graphlab.SFrame()
    # and set poly_sframe['power_1'] equal to the passed feature
    poly_sframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
        for power in range(2, degree + 1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # then assign poly_sframe[name] to the appropriate power of feature
            poly_sframe[name] = feature.apply(lambda x: x ** power)
    return poly_sframe
To test your function, consider the small tmp variable defined above and what you would expect the outcome of the following call to be:
In [ ]:
print polynomial_sframe(tmp, 3)
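If your function is correct you should see three columns: power_1 = [1.0, 2.0, 3.0], power_2 = [1.0, 4.0, 9.0], and power_3 = [1.0, 8.0, 27.0].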
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
In [ ]:
sales = graphlab.SFrame('kc_house_data.gl/')
For the rest of the notebook we'll use the sqft_living variable. For plotting purposes (connecting the dots) you'll need to sort by the values of sqft_living first:
In [ ]:
sales = sales.sort('sqft_living')
Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
In [ ]:
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
In [ ]:
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
In [ ]:
#let's take a look at the weights before we plot
model1.get("coefficients")
In [ ]:
import matplotlib.pyplot as plt
%matplotlib inline
In [ ]:
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
Let's unpack that plt.plot() command. The first pair of SArrays we passed is the 1st power of sqft and the actual price; we ask for these to be plotted as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model; we ask for these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
In [ ]:
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the names of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
In [ ]:
model2.get("coefficients")
In [ ]:
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:
In [ ]:
In [ ]:
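One possible sketch, following the same pattern as the degree-2 model above (poly3_data and model3 are illustrative names, not required ones):

poly3_data = polynomial_sframe(sales['sqft_living'], 3)
my_features = poly3_data.column_names() # get the names of the features
poly3_data['price'] = sales['price'] # add price to the data since it's the target
model3 = graphlab.linear_regression.create(poly3_data, target = 'price', features = my_features, validation_set = None)
plt.plot(poly3_data['power_1'], poly3_data['price'], '.',
         poly3_data['power_1'], model3.predict(poly3_data), '-')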
Now try a 15th degree polynomial:
In [ ]:
In [ ]:
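The same pattern works with degree 15 (again, the names are illustrative):

poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly15_data.column_names() # get the names of the features
poly15_data['price'] = sales['price'] # add price to the data since it's the target
model15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)
plt.plot(poly15_data['power_1'], poly15_data['price'], '.',
         poly15_data['power_1'], model15.predict(poly15_data), '-')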
What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
We're going to split the sales data into four subsets of roughly equal size using .random_split(): first split sales into two halves, then split each of those halves in half again. Do this as follows:
In [ ]:
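A minimal sketch using SFrame.random_split; the seed value of 0 here is an assumption, so use whatever seed your assignment specifies:

first_half, second_half = sales.random_split(0.5, seed = 0) # split sales into two halves
set_1, set_2 = first_half.random_split(0.5, seed = 0) # then split each half again
set_3, set_4 = second_half.random_split(0.5, seed = 0)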
Next you will fit a 15th degree polynomial on set_1, set_2, set_3 and set_4, using 'sqft_living' to predict prices. For each model, print the coefficients and plot the resulting fit (you'll need both for the quiz questions below).
In [ ]:
In [ ]:
In [ ]:
In [ ]:
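A sketch of one way to fill the four cells above; fit_and_show is a hypothetical helper, not a graphlab function:

def fit_and_show(data_subset, degree):
    # build the polynomial features and attach the target
    poly_data = polynomial_sframe(data_subset['sqft_living'], degree)
    my_features = poly_data.column_names()
    poly_data['price'] = data_subset['price']
    # fit the model (verbose = False suppresses the training printout)
    model = graphlab.linear_regression.create(poly_data, target = 'price',
                                              features = my_features,
                                              validation_set = None, verbose = False)
    # print all coefficients (intercept + one per power) and plot the fit
    model.get("coefficients").print_rows(num_rows = degree + 1)
    plt.plot(poly_data['power_1'], poly_data['price'], '.',
             poly_data['power_1'], model.predict(poly_data), '-')

fit_and_show(set_1, 15) # repeat in separate cells for set_2, set_3 and set_4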
Some questions you will be asked on your quiz:
Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?
Quiz Question: True/False: the plotted fitted lines look the same in all four plots.
Whenever we have a "magic" parameter like the degree of the polynomial, there is one very well-known way to select it: choose the value that performs best on a validation set.
Now you're going to split the sales data again, this time into 3 subsets: training, validation and testing.
Be very careful that you use seed = 1 to ensure you get the same answer!
In [ ]:
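A sketch of one such split; the fractions 0.9 and 0.5 are assumptions, so check your assignment for the exact values (seed = 1 is required, as noted above):

training_and_validation, testing = sales.random_split(0.9, seed = 1)
training, validation = training_and_validation.random_split(0.5, seed = 1)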
Next you should write a loop that does the following:
- for each degree in [1, 2, ..., 15]:
  - build an SFrame of polynomial data from training['sqft_living'] at the current degree
  - add training['price'] to the polynomial SFrame as the target
  - learn a polynomial regression model of that degree on the TRAINING data
  - compute the RSS of that model's predictions on the VALIDATION data
- report which degree had the lowest RSS on the VALIDATION data
(Note you can turn off the print out of linear_regression.create() with verbose = False)
In [ ]:
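A sketch of such a loop; get_rss is a hypothetical helper for the residual sum of squares, not a graphlab function:

def get_rss(model, data, outcome):
    # residual sum of squares of the model's predictions on data
    residuals = outcome - model.predict(data)
    return (residuals * residuals).sum()

for degree in range(1, 16):
    # build polynomial features on TRAINING data and fit the model
    poly_train = polynomial_sframe(training['sqft_living'], degree)
    my_features = poly_train.column_names()
    poly_train['price'] = training['price']
    model = graphlab.linear_regression.create(poly_train, target = 'price',
                                              features = my_features,
                                              validation_set = None, verbose = False)
    # compute the RSS on VALIDATION data at the same degree
    poly_valid = polynomial_sframe(validation['sqft_living'], degree)
    print degree, get_rss(model, poly_valid, validation['price'])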
Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?
Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.
In [ ]:
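A sketch reusing the get_rss helper from above; best_degree is a placeholder for whatever degree you selected on validation data:

best_degree = 6 # placeholder only -- substitute the degree you found in the previous question
poly_train = polynomial_sframe(training['sqft_living'], best_degree)
my_features = poly_train.column_names()
poly_train['price'] = training['price']
best_model = graphlab.linear_regression.create(poly_train, target = 'price',
                                               features = my_features,
                                               validation_set = None, verbose = False)
poly_test = polynomial_sframe(testing['sqft_living'], best_degree)
print get_rss(best_model, poly_test, testing['price'])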
Quiz Question: What is the RSS on TEST data for the model with the degree selected from Validation data? (Make sure you got the correct degree from the previous question.)