This is mostly just code for reference. Please watch the video lecture for a full explanation of the reasoning behind this code.
Your neighbor is a real estate agent and wants some help predicting housing prices for regions in the USA. It would be great if you could somehow create a model for her that allows her to put in a few features of a house and get back an estimate of what the house would sell for.
She has asked you if you could help her out with your new data science skills. You say yes, and decide that Linear Regression might be a good path to solve this problem!
Your neighbor then gives you some information about a bunch of houses in regions of the United States; it is all in the data set USA_Housing.csv.
The data contains the following columns:
- 'Avg. Area Income': average income of residents of the city the house is located in.
- 'Avg. Area House Age': average age of houses in the same city.
- 'Avg. Area Number of Rooms': average number of rooms for houses in the same city.
- 'Avg. Area Number of Bedrooms': average number of bedrooms for houses in the same city.
- 'Area Population': population of the city the house is located in.
- 'Price': price the house sold at.
- 'Address': address of the house.
In [255]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
In [256]:
USAhousing = pd.read_csv('USA_Housing.csv')
In [257]:
USAhousing.head()
Out[257]:
In [258]:
USAhousing.info()
In [259]:
USAhousing.describe()
Out[259]:
In [260]:
USAhousing.columns
Out[260]:
In [261]:
sns.pairplot(USAhousing)
Out[261]:
In [262]:
# sns.distplot is deprecated (and removed in recent seaborn); histplot with kde=True is the modern equivalent
sns.histplot(USAhousing['Price'], kde=True)
Out[262]:
In [263]:
# numeric_only=True skips the text Address column, which would otherwise raise an error in modern pandas
sns.heatmap(USAhousing.corr(numeric_only=True))
Out[263]:
Let's now begin to train our regression model! We will first need to split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only contains text information that the linear regression model can't use.
In [264]:
X = USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
'Avg. Area Number of Bedrooms', 'Area Population']]
y = USAhousing['Price']
In [265]:
from sklearn.model_selection import train_test_split
In [266]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
In [267]:
from sklearn.linear_model import LinearRegression
In [268]:
lm = LinearRegression()
In [269]:
lm.fit(X_train,y_train)
Out[269]:
In [270]:
# print the intercept
print(lm.intercept_)
In [277]:
coeff_df = pd.DataFrame(lm.coef_, index=X.columns, columns=['Coefficient'])
coeff_df
Out[277]:
Interpreting the coefficients: holding all other features fixed, a one-unit increase in a given feature is associated with a change in Price equal to that feature's coefficient. For example, a one-dollar increase in Avg. Area Income corresponds to a Price change equal to the 'Avg. Area Income' coefficient, in dollars.
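As a quick sanity check on this interpretation, here is a minimal sketch: because the model is linear, bumping a single feature by one unit (here 'Avg. Area Income', chosen arbitrarily) should shift the prediction by exactly that feature's coefficient.
In [ ]:
# Bump one feature of a single test row by one unit and confirm the
# prediction changes by that feature's coefficient.
sample = X_test.iloc[[0]].copy()
bumped = sample.copy()
bumped['Avg. Area Income'] += 1
print(lm.predict(bumped) - lm.predict(sample))  # should equal the coefficient below
print(coeff_df.loc['Avg. Area Income'])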
Does this make sense? Probably not, because this data is made up! If you want real data to repeat this sort of analysis, note that the Boston housing dataset (load_boston) has been removed from modern scikit-learn; the California housing dataset is a good substitute:
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
print(housing.DESCR)
housing_df = pd.DataFrame(housing.data, columns=housing.feature_names)
In [279]:
predictions = lm.predict(X_test)
In [282]:
# scatter of actual vs. predicted prices; points hugging a straight diagonal indicate a good fit
plt.scatter(y_test, predictions)
Out[282]:
Residual Histogram
In [281]:
# histplot replaces the deprecated distplot; kde=True overlays a density curve as before
sns.histplot((y_test - predictions), bins=50, kde=True);
Here are three common evaluation metrics for regression problems:
Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac{1}{n}\sum_{i=1}^n |y_i-\hat{y}_i|$$
Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac{1}{n}\sum_{i=1}^n (y_i-\hat{y}_i)^2$$
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac{1}{n}\sum_{i=1}^n (y_i-\hat{y}_i)^2}$$
Comparing these metrics:
- MAE is the easiest to understand, because it is just the average error.
- MSE is more popular than MAE, because it punishes larger errors, which tends to be useful in the real world.
- RMSE is even more popular than MSE, because it is interpretable in the units of the target variable (here, dollars of Price).
All of these are loss functions, because we want to minimize them.
In [275]:
from sklearn import metrics
In [276]:
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
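As a cross-check, the same three metrics can be computed directly from the formulas above with NumPy; this is just a sketch to confirm they match the sklearn values.
In [ ]:
# Compute MAE, MSE, and RMSE by hand from the error vector.
errors = y_test - predictions
print('MAE:', np.mean(np.abs(errors)))
print('MSE:', np.mean(errors**2))
print('RMSE:', np.sqrt(np.mean(errors**2)))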