In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:

- Use scikit-learn to run polynomial regression
- Use matplotlib to visualize polynomial regressions
- Use scikit-learn to run polynomial regression with an L2 penalty
- Use matplotlib to visualize polynomial regressions under L2 regularization
- Choose the best L2 penalty using cross-validation
- Assess the final fit using test data

In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.

In [50]:

```
import os
import zipfile
import numpy as np
import pandas as pd
from sklearn import linear_model
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
```

The dataset is from house sales in King County, the region where the city of Seattle, WA is located.

In [51]:

```
# Put files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
```

In [52]:

```
# Filenames of unzipped files
unzip_files = ['kc_house_data.csv', 'wk3_kc_house_set_1_data.csv', 'wk3_kc_house_set_2_data.csv',
               'wk3_kc_house_set_3_data.csv', 'wk3_kc_house_set_4_data.csv', 'wk3_kc_house_test_data.csv',
               'wk3_kc_house_train_data.csv', 'wk3_kc_house_valid_data.csv', 'wk3_kc_house_train_valid_shuffled.csv']
```

In [53]:

```
# If an unzipped file is not in files_list, unzip the corresponding archive
for filename in unzip_files:
    if filename not in files_list:
        zip_file = filename + '.zip'
        unzipping = zipfile.ZipFile(zip_file)
        unzipping.extractall()
        unzipping.close()
```

Next, recreate the `polynomial_sframe` function from Week 3, adapted here to pandas as `polynomial_dataframe`:

In [54]:

```
def polynomial_dataframe(feature, degree): # feature is pandas.Series type
    # assume that degree >= 1
    # initialize the dataframe:
    poly_dataframe = pd.DataFrame()
    # and set poly_dataframe['power_1'] equal to the passed feature
    poly_dataframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        for power in range(2, degree+1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # assign poly_dataframe[name] to be feature^power; use apply()
            poly_dataframe[name] = poly_dataframe['power_1'].apply(lambda x: x**power)
    return poly_dataframe
```
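As a quick sanity check, the function expands a small Series into columns `power_1` through `power_k`. This mini-example uses made-up numbers, not the assignment data:

```
# Hypothetical sanity check on a tiny Series (not the assignment data)
tiny = pd.Series([1., 2., 3.])
print polynomial_dataframe(tiny, 3)
#    power_1  power_2  power_3
# 0      1.0      1.0      1.0
# 1      2.0      4.0      8.0
# 2      3.0      9.0     27.0
```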

Let's use matplotlib to visualize what a polynomial regression looks like on the house data.

In [55]:

```
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float,
              'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float,
              'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float,
              'floors':str, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int,
              'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
```

In [56]:

```
sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
```

In [57]:

```
sales = sales.sort_values(['sqft_living', 'price'])
sales[['sqft_living', 'price']].head()
```

Plotting the data we are working with

In [58]:

```
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'], '.')
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('King County, Seattle House Price Data', fontsize=18)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
```

Next, generate polynomial features up to degree 15 using `polynomial_dataframe()` and fit a model with these features. When fitting the model, use an L2 penalty of `1e-5`:

In [59]:

```
# Building a dataframe with 15 polynomial features
poly15_data = polynomial_dataframe(sales['sqft_living'], 15)
```

In [60]:

```
l2_small_penalty = 1e-5
```

In [61]:

```
model = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model.fit(poly15_data, sales['price'])
```

Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (`l2_penalty=1e-5`) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
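For reference, ridge regression minimizes the usual residual sum of squares plus an L2 penalty on the coefficient magnitudes, with the trade-off controlled by the tuning parameter lambda (writing $h(\mathbf{x}_i)^T\mathbf{w}$ for the model's prediction on house $i$):

$$\text{cost}(\mathbf{w}) \;=\; \sum_{i=1}^{N}\bigl(y_i - h(\mathbf{x}_i)^T\mathbf{w}\bigr)^2 \;+\; \lambda\,\lVert\mathbf{w}\rVert_2^2$$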

With the L2 penalty specified above, fit the model and print out the learned weights.

*QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?*

In [62]:

```
print 'Weight for power_1 feature is: %.2f' % (model.coef_[0])
```

Recall from Week 3 that the 15th-degree polynomial fit changed wildly when the data changed even slightly; such a model has *high variance*. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.

First, split the sales data into four subsets of roughly equal size and call them `set_1`, `set_2`, `set_3`, and `set_4`. (The original assignment does this with the `.random_split` function with `seed=0`; here we simply load the pre-split CSV files.)

In [63]:

```
set_1 = pd.read_csv('wk3_kc_house_set_1_data.csv', dtype=dtype_dict)
set_2 = pd.read_csv('wk3_kc_house_set_2_data.csv', dtype=dtype_dict)
set_3 = pd.read_csv('wk3_kc_house_set_3_data.csv', dtype=dtype_dict)
set_4 = pd.read_csv('wk3_kc_house_set_4_data.csv', dtype=dtype_dict)
```

Next, fit a 15th-degree polynomial on `set_1`, `set_2`, `set_3`, and `set_4`, using `sqft_living` to predict prices. Print the weights and make a plot of the resulting models.

In [64]:

```
# Putting data and keys in lists for looping
data_list = [set_1, set_2, set_3, set_4]
key_list = ['set_1', 'set_2', 'set_3', 'set_4']
```

In [65]:

```
# Dicts holding the 15th-degree polynomial feature dataframe and the fitted model for each subset
poly_15_dframe_dict = {}
models_poly_15_dict = {}
```

In [66]:

```
# First, learn models with a really small L2 penalty
l2_small_penalty = 1e-9
```

In [67]:

```
# Looping over the four data subsets
for key, dframe in zip(key_list, data_list):
    # Build the 15th-degree polynomial feature dataframe for this subset
    poly_15_dframe_dict[key] = polynomial_dataframe(dframe['sqft_living'], 15)
    # Fit a ridge model with the small L2 penalty and store it
    models_poly_15_dict[key] = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
    models_poly_15_dict[key].fit(poly_15_dframe_dict[key], dframe['price'])
```

Plotting the data and the four different 15th-degree polynomial fits learned from the subsets:

In [68]:

```
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'], '.', label='House Price Data')
plt.hold(True)
#
for i, key in enumerate(key_list):
    leg_label = 'Model ' + str(i+1)
    plt.plot(poly_15_dframe_dict[key]['power_1'], models_poly_15_dict[key].predict(poly_15_dframe_dict[key]), '-', label=leg_label)
#
plt.hold(False)
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('4 Diff. 15th Deg. Polynomial Regr. Models, Small L2 Penalty', fontsize=16)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
```

The four curves should differ from one another a lot, as should the coefficients you learned.
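Before answering the quiz, you can inspect the full set of learned weights for any one subset directly from the fitted model (purely for inspection; the quiz only needs the `power_1` coefficient):

```
# Optional: inspect the intercept and all 15 learned weights for set_1
print models_poly_15_dict['set_1'].intercept_
print models_poly_15_dict['set_1'].coef_
```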

**QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature `power_1`?** (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5, and so forth.)

In [69]:

```
power_l_coeff_list = []
for key in key_list:
    power_l_coeff_list.append( models_poly_15_dict[key].coef_[0] )
```

In [70]:

```
print 'Smallest power_1 weight with small L2 penalty is: %.2f' % (min(power_l_coeff_list))
print 'Largest power_1 weight with small L2 penalty is: %.2f' % (max(power_l_coeff_list))
```

Generally, whenever we see weights change so much in response to changes in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (The weights of the 15th-degree models above looked quite small, but they are not that small because the `sqft_living` input is on the order of thousands.)

With a large L2 penalty, fit a 15th-order polynomial model on `set_1`, `set_2`, `set_3`, and `set_4`. (The original assignment specifies `l2_penalty=1e5`; the code below uses `alpha=1.23e2` as its large penalty.) Other than the change in the `l2_penalty` parameter, the code should be the same as the experiment above.

In [71]:

```
# Re-initialize the dicts holding the polynomial feature dataframes and fitted models for each subset
poly_15_dframe_dict = {}
models_poly_15_dict = {}
```

In [72]:

```
# Re-learn models with a large L2 penalty
l2_large_penalty = 1.23e2
```

In [73]:

```
# Looping over the four data subsets
for key, dframe in zip(key_list, data_list):
    # Build the 15th-degree polynomial feature dataframe for this subset
    poly_15_dframe_dict[key] = polynomial_dataframe(dframe['sqft_living'], 15)
    # Fit a ridge model with the large L2 penalty and store it
    models_poly_15_dict[key] = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
    models_poly_15_dict[key].fit(poly_15_dframe_dict[key], dframe['price'])
```

In [74]:

```
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'], '.', label='House Price Data')
plt.hold(True)
#
for i, key in enumerate(key_list):
    leg_label = 'Model ' + str(i+1)
    plt.plot(poly_15_dframe_dict[key]['power_1'], models_poly_15_dict[key].predict(poly_15_dframe_dict[key]), '-', label=leg_label)
#
plt.hold(False)
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('4 Diff. 15th Deg. Polynomial Regr. Models, Large L2 Penalty', fontsize=16)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
```

These curves should vary a lot less, now that you applied a high degree of regularization.

**QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature `power_1`?** (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5, and so forth.)

In [75]:

```
power_l_coeff_list = []
for key in key_list:
    power_l_coeff_list.append( models_poly_15_dict[key].coef_[0] )
```

In [76]:

```
print 'Smallest power_1 weight with large L2 penalty is: %.2f' % (min(power_l_coeff_list))
print 'Largest power_1 weight with large L2 penalty is: %.2f' % (max(power_l_coeff_list))
```

Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. **Cross-validation** seeks to overcome this issue by using all of the training set in a smart way.

We will implement a kind of cross-validation called **k-fold cross-validation**. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:

- Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
- Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
- ...
- Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set

After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
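In symbols, the cross-validation estimate of the generalization error is simply the mean of the k per-fold validation errors:

$$\text{CV error} \;=\; \frac{1}{k}\sum_{i=0}^{k-1}\text{validation error}_i$$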

To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. We reserve 10% of the data as the test set and shuffle the remainder.
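The shuffled training/validation data and the test split are provided as CSV files and loaded in the next cell. If you had to produce such a split yourself, a minimal sketch (using pandas on the full `sales` dataframe, with an arbitrary random seed) could look like this:

```
# Illustrative only -- the assignment ships pre-made CSVs, loaded in the next cell
shuffled = sales.sample(frac=1, random_state=0).reset_index(drop=True)  # shuffle all rows
n_test = len(shuffled) // 10                 # reserve roughly 10% as the test set
test_sketch = shuffled[:n_test]
train_valid_shuffled_sketch = shuffled[n_test:]
```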

In [77]:

```
train_valid_shuffled = pd.read_csv('wk3_kc_house_train_valid_shuffled.csv', dtype=dtype_dict)
test = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
```

After the shuffled data is divided into k segments of roughly equal size, each segment should receive `n/k` elements, where `n` is the number of observations in the training set and `k` is the number of segments. Since segment 0 starts at index 0 and contains `n/k` elements, it ends at index `(n/k)-1`. Segment 1 starts where segment 0 left off, at index `(n/k)`. With `n/k` elements, segment 1 ends at index `(n*2/k)-1`. Continuing in this fashion, we deduce that segment `i` starts at index `(n*i/k)` and ends at `(n*(i+1)/k)-1`.

In [78]:

```
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
    start = (n*i)/k
    end = (n*(i+1))/k-1
    print i, (start, end)
```

To extract a continuous slice from a dataframe, use the colon inside square brackets. For instance, the cell below extracts rows 0 to 9 of `train_valid_shuffled`. Notice that the first index (0) is included in the slice but the last index (10) is omitted.

In [79]:

```
train_valid_shuffled[0:10] # rows 0 to 9
```

Now consider dividing the `train_valid_shuffled` dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above. Extract the fourth segment (segment 3) and assign it to a variable called `validation4`.

In [80]:

```
i = 3
start_ind = (n*i)/k
end_ind = (n*(i+1))/k-1
validation4 = train_valid_shuffled[start_ind : end_ind + 1]
```

To verify the extraction, print the average house price in `validation4`:

In [81]:

```
print int(round(validation4['price'].mean(), 0))
```

After designating one of the k segments as the validation set, we train a model using the rest of the data. To form the training set, we slice (0:start) and (end+1:n) of the data and paste them together. A pandas dataframe has an `append()` method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the `train_valid_shuffled` dataframe.

In [82]:

```
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
```

Extract the remainder of the data after *excluding* the fourth segment (segment 3) and assign the subset to `train4`.

In [83]:

```
i = 3
k = 10
n = len(train_valid_shuffled)
start_ind = (n*i)/k
end_ind = (n*(i+1))/k-1
train4 = train_valid_shuffled[0:start_ind].append(train_valid_shuffled[end_ind+1:n])
```

To verify, print the average house price in `train4`:

In [84]:

```
print int(round(train4['price'].mean(), 0))
```

Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) `k`, (ii) `l2_penalty`, (iii) a dataframe, (iv) the name of the output column (e.g. `price`), and (v) a list of feature names. The function returns the average validation error using the k segments as validation sets. (The implementation below deviates slightly from this signature: it takes the polynomial feature dataframe and the output values directly.)

- For each i in [0, 1, ..., k-1]:
  - Compute the starting and ending indices of segment i; call them 'start' and 'end'
  - Form the validation set by taking a slice (start:end+1) from the data
  - Form the training set by appending slice (end+1:n) to the end of slice (0:start)
  - Train a linear model on the training set just formed, with the given l2_penalty
  - Compute the validation error using the validation set just formed

In [85]:

```
def k_fold_cross_validation(k, l2_penalty, data, output_vals):
    # n is the number of observations; cv_error_list stores the k validation errors
    n = len(data)
    cv_error_list = []
    # Loop over the k segments
    for i in range(k):
        # Starting and ending index of the i-th segment
        start = (n*i)/k
        end = (n*(i+1))/k-1
        # Split the features and the outputs into validation and training pieces
        cv_set = data[start:end+1]
        cv_output = output_vals[start:end+1]
        training_set = data[0:start].append(data[end+1:n])
        training_output = output_vals[0:start].append(output_vals[end+1:n])
        # Fit a ridge regression model on the training piece only
        model_train_data = linear_model.Ridge(alpha=l2_penalty, normalize=True)
        model_train_data.fit(training_set, training_output)
        # Predict on the held-out validation piece
        predictions = model_train_data.predict(cv_set)
        # Compute the RSS on the validation piece
        RSS_cv_set = sum( (predictions - cv_output)**2 )
        cv_error_list.append(RSS_cv_set)
    # Return the average validation error
    return sum(cv_error_list)/float(len(cv_error_list))
```

Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:

- We will again be aiming to fit a 15th-order polynomial model using the `sqft_living` input
- For `l2_penalty` in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use the NumPy function `np.logspace(1, 7, num=13)`), run 10-fold cross-validation with that `l2_penalty`
- Report which L2 penalty produced the lowest average validation error

Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate the polynomial features in advance and re-use them throughout the loop. Make sure to use `train_valid_shuffled` when generating polynomial features! (The code below searches a wider, denser grid of penalties than suggested above: `np.logspace(3, 9, num=26)`.)

In [86]:

```
l2_penalty_list = np.logspace(3, 9, num=26)
```

In [87]:

```
poly_15_dframe = polynomial_dataframe(train_valid_shuffled['sqft_living'], 15)
output_values = train_valid_shuffled['price']
```

In [88]:

```
l2_RSS_list = []
for l2_pen in l2_penalty_list:
    RSS_error = k_fold_cross_validation(10, l2_pen, poly_15_dframe, output_values)
    l2_RSS_list.append( (RSS_error, l2_pen) )
```

*QUIZ QUESTION: What is the best value for the L2 penalty according to 10-fold cross-validation?*

In [89]:

```
print 'Minimum value for RSS error is:   %.2e' % min(l2_RSS_list)[0]
print 'L2 penalty for this RSS error is: %.2e' % min(l2_RSS_list)[1]
```

In [90]:

```
# Putting all L2 penalties and RSS errors in lists for plotting
L2_plot_list = []
RSS_plot_list = []
for entry in l2_RSS_list:
    L2_plot_list.append(entry[1])
    RSS_plot_list.append(entry[0])
```

In [91]:

```
# Plot the l2_penalty values on the x axis and the cross-validation error on the y axis.
# Using plt.xscale('log') will make the plot more intuitive.
plt.figure(figsize=(8,6))
plt.plot(L2_plot_list, RSS_plot_list, '-')
plt.xscale('log')
#
plt.xlabel('L2 penalty ' + r'$(\lambda)$', fontsize=16)
plt.ylabel('Average Cross-Validation RSS', fontsize=16)
plt.title('Cross-Validation RSS vs. L2 Penalty', fontsize=16)
#
plt.show()
```

Once you have found the best value for the L2 penalty using cross-validation, retrain a final model on all of the training data using this value of `l2_penalty`. This way, your final model will be trained on the entire dataset.

In [92]:

```
min_L2_pen = min(l2_RSS_list)[1]
```

In [93]:

```
# Loading the training set data and defining the dataframe w/ 15 polynomial features
train_data = pd.read_csv('wk3_kc_house_train_data.csv', dtype=dtype_dict)
train_data = train_data.sort_values(['sqft_living', 'price'])
poly_15_train_data = polynomial_dataframe(train_data['sqft_living'], 15)
```

In [94]:

```
# Training a ridge regression model with the L2 penalty that gave the smallest cross-validation RSS
model_train_data = linear_model.Ridge(alpha=min_L2_pen, normalize=True)
model_train_data.fit( poly_15_train_data, train_data['price'] )
```

In [95]:

```
# Now, loading the test data and defining the dataframe w/ 15 polynomial features
test_data = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
poly_15_test_data = polynomial_dataframe(test_data['sqft_living'], 15)
```

In [96]:

```
# Using the weights learned from the training data to calculate predictions on the test data
predictions = model_train_data.predict(poly_15_test_data)
```

In [97]:

```
# Computing the RSS on the test data
RSS_test_set = sum( (predictions - test_data['price'])**2 )
```

In [98]:

```
print 'RSS on test data with min_L2_pen: %.2e' % (RSS_test_set)
```
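As an optional visual check (not part of the original assignment), you can also plot the final model's predictions against the test data; every variable used below was defined in the preceding cells:

```
# Optional: visualize the final ridge model's predictions on the test data
plt.figure(figsize=(8,6))
plt.plot(test_data['sqft_living'], test_data['price'], '.', label='Test data')
plt.plot(test_data['sqft_living'], predictions, '.', label='Model predictions')
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('Final Ridge Model vs. Test Data', fontsize=16)
plt.show()
```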
