Adapted from Chapter 3 of An Introduction to Statistical Learning
Regression problems are supervised learning problems in which the response is continuous. Classification problems are supervised learning problems in which the response is categorical. Linear regression is a technique that is useful for regression problems.
So, why are we learning linear regression?
We'll be using scikit-learn rather than Statsmodels, since it provides significantly more useful functionality for machine learning in general.
In [1]:
# imports
import pandas as pd
import seaborn as sns
#import statsmodels.formula.api as smf
from sklearn.linear_model import LinearRegression
from sklearn import metrics
import numpy as np
# allow plots to appear directly in the notebook
%matplotlib inline
In [2]:
# read data into a DataFrame
data = pd.read_csv('data/Advertising.csv', index_col=0)
data.head()
Out[2]:
What are the features? TV, Radio, and Newspaper: the advertising budget spent on each medium in a given market (in thousands of dollars).
What is the response? Sales: sales of the product in that market (in thousands of widgets).
In [3]:
# print the shape of the DataFrame
data.shape
Out[3]:
There are 200 observations, and thus 200 markets in the dataset.
In [4]:
# visualize the relationship between the features and the response using scatterplots
sns.pairplot(data, x_vars=['TV', 'Radio', 'Newspaper'], y_vars='Sales', height=7, aspect=0.7)
Out[4]:
Let's pretend you work for the company that manufactures and markets this widget. The company might ask you the following: On the basis of this data, how should we spend our advertising money in the future?
This general question might lead you to more specific questions, which we will explore below!
Simple linear regression is an approach for predicting a quantitative response using a single feature (or "predictor" or "input variable"). It takes the following form:
$y = \beta_0 + \beta_1x$
What does each term represent? $y$ is the response, $x$ is the feature, $\beta_0$ is the intercept, and $\beta_1$ is the coefficient for $x$.
Together, $\beta_0$ and $\beta_1$ are called the model coefficients. To create your model, you must "learn" the values of these coefficients. And once we've learned these coefficients, we can use the model to predict Sales!
How are the coefficients estimated? They are chosen by the least squares criterion: the least squares line is the one that minimizes the sum of squared residuals, where a residual is the vertical distance between an observed point and the line. $\beta_0$ is the intercept of the least squares line and $\beta_1$ is its slope.
Let's estimate the model coefficients for the advertising data:
In [5]:
### SCIKIT-LEARN ###
# create X and y
feature_cols = ['TV']
X = data[feature_cols]
y = data.Sales
# instantiate and fit
lm = LinearRegression()
lm.fit(X, y)
# print the coefficients
print(lm.intercept_)
print(lm.coef_)
How do we interpret the TV coefficient ($\beta_1$)?
Note that if an increase in TV ad spending was associated with a decrease in sales, $\beta_1$ would be negative.
In [6]:
# manually calculate the prediction
7.032594 + 0.047537*50
Out[6]:
In [7]:
### SCIKIT-LEARN ###
# predict for a new observation (scikit-learn expects a 2D input: one row per observation, one column per feature)
lm.predict([[50]])
Out[7]:
Thus, we would predict Sales of 9,409 widgets in that market.
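As an aside, here is a minimal sketch of the same prediction made with a one-row DataFrame (X_new is a name introduced just for this example), which keeps the feature name attached to the value:
# same prediction, passed as a one-row DataFrame whose column matches the training data
X_new = pd.DataFrame({'TV': [50]})
lm.predict(X_new)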
In [8]:
sns.pairplot(data, x_vars=['TV', 'Radio', 'Newspaper'], y_vars='Sales', height=7, aspect=0.7, kind='reg')
Out[8]:
Simple linear regression can easily be extended to include multiple features. This is called multiple linear regression:
$y = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$
Each $x$ represents a different feature, and each feature has its own coefficient. In this case:
$y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$
Let's estimate these coefficients:
In [9]:
### SCIKIT-LEARN ###
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper']
X = data[feature_cols]
y = data.Sales
# instantiate and fit
lm = LinearRegression()
lm.fit(X, y)
# print the coefficients
print(lm.intercept_)
print(lm.coef_)
In [10]:
# pair the feature names with the coefficients
list(zip(feature_cols, lm.coef_))
Out[10]:
How do we interpret these coefficients? For a given amount of Radio and Newspaper ad spending, an increase of $1000 in TV ad spending is associated with an increase in Sales of 45.765 widgets.
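For readability, the fitted coefficients can also be viewed as a labeled pandas Series; this is just a small presentation alternative to the zip above:
# label each coefficient with its feature name
pd.Series(lm.coef_, index=feature_cols)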
So far, all of our features have been numerical. What if one of the features were categorical? Let's create a categorical feature called Size to find out:
In [11]:
# set a seed for reproducibility
np.random.seed(12345)
# create a Series of booleans in which roughly half are True
nums = np.random.rand(len(data))
mask_large = nums > 0.5
# initially set Size to small, then change roughly half to be large
data['Size'] = 'small'
data.loc[mask_large, 'Size'] = 'large'
data.head()
Out[11]:
For scikit-learn, we need to represent all data numerically. If the feature only has two categories, we can simply create a dummy variable that represents the categories as a binary value:
In [12]:
# create a new Series called Size_large
data['Size_large'] = data.Size.map({'small':0, 'large':1})
data.head()
Out[12]:
Let's redo the multiple linear regression and include the Size_large feature:
In [13]:
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper', 'Size_large']
X = data[feature_cols]
y = data.Sales
# instantiate, fit
lm = LinearRegression()
lm.fit(X, y)
# print coefficients
list(zip(feature_cols, lm.coef_))
Out[13]:
How do we interpret the Size_large coefficient? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average increase in Sales of 57.42 widgets (as compared to a small market, which is called the baseline level).
What if we had reversed the 0/1 coding and created the feature 'Size_small' instead? The coefficient would have the same magnitude, but it would be negative instead of positive. As such, your choice of baseline category does not matter; all that changes is your interpretation of the coefficient.
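Here is a minimal sketch that checks this claim by flipping the coding on a copy of X from the cell above (X_rev and lm_rev are names introduced just for this check):
# flip the 0/1 coding on a copy so the original data is untouched
X_rev = X.copy().rename(columns={'Size_large': 'Size_small'})
X_rev['Size_small'] = 1 - X_rev['Size_small']
lm_rev = LinearRegression()
lm_rev.fit(X_rev, y)
# the Size_small coefficient is the negative of the Size_large coefficient;
# the other coefficients are unchanged (only the intercept shifts)
list(zip(X_rev.columns, lm_rev.coef_))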
In [14]:
# set a seed for reproducibility
np.random.seed(123456)
# assign roughly one third of observations to each group
nums = np.random.rand(len(data))
mask_suburban = (nums > 0.33) & (nums < 0.66)
mask_urban = nums > 0.66
data['Area'] = 'rural'
data.loc[mask_suburban, 'Area'] = 'suburban'
data.loc[mask_urban, 'Area'] = 'urban'
data.head()
Out[14]:
We have to represent Area numerically, but we can't simply code it as 0=rural, 1=suburban, 2=urban, because that would imply an ordered relationship between the categories (and, for instance, that urban is somehow "twice" suburban). Note that if you do have ordered categories (e.g., strongly disagree, disagree, neutral, agree, strongly agree), you can represent them numerically in a single column (such as 1, 2, 3, 4, 5).
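For instance, a small hypothetical survey column (not part of the advertising data; the names here are made up purely for illustration) could be encoded like this:
# ordinal encoding for an ordered categorical feature (illustrative data only)
survey = pd.DataFrame({'opinion': ['strongly disagree', 'disagree', 'neutral', 'agree', 'strongly agree']})
survey['opinion_num'] = survey.opinion.map({'strongly disagree': 1, 'disagree': 2, 'neutral': 3, 'agree': 4, 'strongly agree': 5})
survey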
Anyway, our Area feature is unordered, so we have to create additional dummy variables. Let's explore how to do this using pandas:
In [15]:
# create three dummy variables using get_dummies
x = pd.get_dummies(data.Area, prefix='Area')
In [16]:
x.tail()
Out[16]:
In [17]:
# preview what the DataFrame looks like with all three dummy columns (stored separately so data itself is unchanged)
data_with_dummies = pd.concat([data, x], axis=1)
In [18]:
data_with_dummies.tail()
Out[18]:
However, we actually only need two dummy variables, not three. Why? Because two dummies capture all of the "information" about the Area feature, and implicitly define rural as the "baseline level".
Let's see what that looks like:
In [19]:
# create three dummy variables using get_dummies, then exclude the first dummy column
area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:]
area_dummies.head()
Out[19]:
Here is how we interpret the coding: rural is the baseline (Area_suburban=0 and Area_urban=0), suburban is coded as Area_suburban=1 and Area_urban=0, and urban is coded as Area_suburban=0 and Area_urban=1.
If this is confusing, think about why we only needed one dummy variable for Size (Size_large), not two dummy variables (Size_small and Size_large). In general, if you have a categorical feature with k "levels", you create k-1 dummy variables.
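As an aside, recent versions of pandas can drop the baseline level for you via the drop_first parameter of get_dummies; this is a minimal equivalent to the iloc approach above:
# equivalent to pd.get_dummies(...).iloc[:, 1:]: drop the first (baseline) level directly
pd.get_dummies(data.Area, prefix='Area', drop_first=True).head()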
Anyway, let's add these two new dummy variables onto the original DataFrame, and then include them in the linear regression model:
In [20]:
# concatenate the dummy variable columns onto the DataFrame (axis=0 means rows, axis=1 means columns)
data = pd.concat([data, area_dummies], axis=1)
data.head()
Out[20]:
In [21]:
data.tail()
Out[21]:
In [22]:
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper', 'Size_large', 'Area_suburban', 'Area_urban']
X = data[feature_cols]
y = data.Sales
# instantiate and fit
lm = LinearRegression()
lm.fit(X, y)
# print the coefficients
list(zip(feature_cols, lm.coef_))
Out[22]:
How do we interpret the coefficients? Holding ad spending and Size fixed, the Area_suburban and Area_urban coefficients give the average difference in Sales for suburban and urban markets, respectively, compared to the rural baseline.
In [23]:
# load the R 'trees' dataset via pydataset (aliased so it doesn't shadow our advertising DataFrame named data)
from pydataset import data as load_data
trees = load_data('trees')
# can use the line below to examine the detailed data description
# load_data('trees', show_doc=True)
trees.head()
Out[23]:
The trees dataset has two features, Girth and Height. We want to use them to predict the Volume of the trees.
In [24]:
# set up the features and the target
feature_cols = ["Girth", "Height"]
X = trees[feature_cols].copy()  # copy so we can add new columns later without modifying trees
Y = trees.Volume
# fit with LinearRegression
lm = LinearRegression()
lm.fit(X, Y)
# print out the result
list(zip(feature_cols, lm.coef_))
Out[24]:
Let's examine the result of the fitting.
In [25]:
Ypredict = lm.predict(X)
print("RMSE", np.sqrt(metrics.mean_squared_error(Y, Ypredict)))
# plot the fitted values and the observed data against Girth
from matplotlib import pyplot
pyplot.plot(X["Girth"], Ypredict)
pyplot.scatter(X["Girth"], Y)
Out[25]:
Can we do better than this? Let's add some nonlinear features.
In [26]:
# since we are interested in the Volume of the trees,
# it's natural to add the square of Girth to our features
X["GirthSquare"] = trees["Girth"]**2
feature_cols = ["Girth", "Height", "GirthSquare"]
# fit with LinearRegression
lm = LinearRegression()
lm.fit(X, Y)
# print out the result
list(zip(feature_cols, lm.coef_))
Out[26]:
In [27]:
Ypredict = lm.predict(X)
# print("RMSE", np.sqrt(metrics.mean_squared_error(Y, Ypredict)))
from matplotlib import pyplot
pyplot.plot(X["Girth"], Ypredict)
pyplot.scatter(X["Girth"], Y)
Out[27]:
In [29]:
# we can keep trying even higher-order nonlinear features
X["GirthCube"] = trees["Girth"]**3
X["GirthFourth"] = trees["Girth"]**4
print(X.shape)
feature_cols = ["Girth", "Height", "GirthSquare", "GirthCube", "GirthFourth"]
# fit with LinearRegression
lm = LinearRegression()
lm.fit(X, Y)
# print out the result
list(zip(feature_cols, lm.coef_))
Out[29]:
In [30]:
Ypredict = lm.predict(X)
# print("RMSE", np.sqrt(metrics.mean_squared_error(Y, Ypredict)))
from matplotlib import pyplot
pyplot.plot(X["Girth"], Ypredict)
pyplot.scatter(X["Girth"], Y)
Out[30]:
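As an aside, scikit-learn can generate these powers of Girth automatically with PolynomialFeatures from sklearn.preprocessing. This is a minimal sketch under the same setup as above (girth_powers, X_poly, and lm_poly are names introduced just for this example), not a change to the analysis:
from sklearn.preprocessing import PolynomialFeatures
# build Girth, Girth^2, Girth^3, Girth^4 without typing each power by hand
poly = PolynomialFeatures(degree=4, include_bias=False)
girth_powers = poly.fit_transform(trees[['Girth']])        # one column per power of Girth
X_poly = np.column_stack([girth_powers, trees['Height']])  # add Height back as a plain column
lm_poly = LinearRegression()
lm_poly.fit(X_poly, Y)
print("RMSE", np.sqrt(metrics.mean_squared_error(Y, lm_poly.predict(X_poly))))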
You could certainly go very deep into linear regression, and learn how to apply it really, really well. It's an excellent way to start your modeling process when working on a regression problem. However, it is limited by the fact that it can only make good predictions if there is a linear relationship between the features and the response, which is why more complex methods (with higher variance and lower bias) will often outperform linear regression.
Therefore, we want you to understand linear regression conceptually, understand its strengths and weaknesses, be familiar with the terminology, and know how to apply it. However, we also want to spend time on many other machine learning models, which is why we aren't going deeper here.