We'll be working with a dataset from Capital Bikeshare that was used in a Kaggle competition (see the competition's data dictionary for field descriptions).
In [223]:
# read the weather and bike trip data, and build datetime columns
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['font.size'] = 14
import pandas as pd
urls = ['../data/KDCA-201601.csv', '../data/KDCA-201602.csv', '../data/KDCA-201603.csv']
frames = [pd.read_csv(url) for url in urls]
weather = pd.concat(frames)
cols = 'WBAN Date Time StationType SkyCondition Visibility WeatherType DryBulbFarenheit DryBulbCelsius WetBulbFarenheit WetBulbCelsius DewPointFarenheit DewPointCelsius RelativeHumidity WindSpeed WindDirection ValueForWindCharacter StationPressure PressureTendency PressureChange SeaLevelPressure RecordType HourlyPrecip Altimeter'
cols = cols.split()
weather = weather[cols]
weather.rename(columns={'DryBulbFarenheit': 'temp',
                        'RelativeHumidity': 'humidity'}, inplace=True)
# weather['humidity'] = pd.to_numeric(weather.humidity, errors='coerce')
weather['datetime'] = pd.to_datetime(weather.Date.astype(str) + weather.Time.apply('{0:0>4}'.format))
weather['datetime_hour'] = weather.datetime.dt.floor(freq='h')
weather['month'] = weather.datetime.dt.month
bikes = pd.read_csv('../data/2016-Q1-Trips-History-Data.csv')
bikes['start'] = pd.to_datetime(bikes['Start date'], infer_datetime_format=True)
bikes['end'] = pd.to_datetime(bikes['End date'], infer_datetime_format=True)
bikes['datetime_hour'] = bikes.start.dt.floor(freq='h')
# quick look at the temperature distribution
weather['temp'].hist(bins=30)
print(weather.columns)
weather.head()
Out[223]:
In [224]:
bikes.merge(weather[['temp', 'datetime_hour', 'datetime']], on='datetime_hour')
Out[224]:
In [225]:
hours = bikes.groupby('datetime_hour').agg('count')
hours['datetime_hour'] = hours.index
hours.head()
hours['total'] = hours.start
hours = hours[['total', 'datetime_hour']]
hours.total.plot()
hours_weather = hours.merge(weather, on='datetime_hour')
hours_weather.plot(kind='scatter', x='temp', y='total')
Out[225]:
In [226]:
weekday = hours_weather[(hours_weather.datetime.dt.hour==11) & (hours_weather.datetime.dt.dayofweek<5) ]
weekday.plot(kind='scatter', x='temp', y='total')
Out[226]:
In [227]:
# import seaborn as sns
sns.lmplot(x='temp', y='total', data=weekday, aspect=1.5, scatter_kws={'alpha':0.8})
Out[227]:
Questions:
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$
The $\beta$ values are called the model coefficients. They are estimated ("learned") during the model fitting step using the least squares criterion, and the fitted model can then be used to make predictions.
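As a rough illustration of what that fitting step computes, here is a minimal sketch of ordinary least squares with NumPy; the toy x and y arrays below are invented purely for illustration.
import numpy as np
# toy data, invented purely for illustration
x = np.array([50., 60., 70., 80., 90.])       # e.g. temperature
y = np.array([120., 180., 260., 310., 400.])  # e.g. hourly rentals
# design matrix: a column of ones (for the intercept) next to x
X_design = np.column_stack([np.ones_like(x), x])
# ordinary least squares solves for [beta_0, beta_1]
betas, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print(betas)  # intercept and slope of the best-fit line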
In [228]:
# create X and y
feature_cols = ['temp']
X = hours_weather[feature_cols]
y = hours_weather.total
In [229]:
# import, instantiate, fit
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X, y)
Out[229]:
In [230]:
# print the coefficients
print(linreg.intercept_)
print(linreg.coef_)
Interpreting the intercept ($\beta_0$): it is the predicted number of rentals when temp is 0 degrees Fahrenheit.
Interpreting the "temp" coefficient ($\beta_1$): it is the change in the predicted number of rentals for each one-degree increase in temperature.
In [231]:
# manually calculate the prediction
linreg.intercept_ + linreg.coef_ * 77
Out[231]:
In [232]:
# use the predict method
linreg.predict([[77]])
Out[232]:
In [233]:
# create a new column for Celsius temperature
hours_weather['temp_C'] = (hours_weather.temp - 32) * 5/9
hours_weather.head()
Out[233]:
In [234]:
# Seaborn scatter plot with regression line
sns.lmplot(x='temp_C', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.2})
sns.lmplot(x='temp', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.2})
Out[234]:
In [235]:
# create X and y
feature_cols = ['temp_C']
X = hours_weather[feature_cols]
y = hours_weather.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_, linreg.coef_)
In [236]:
# convert 77 degrees Fahrenheit to Celsius
(77 - 32)* 5/9
Out[236]:
In [237]:
# predict rentals for 25 degrees Celsius
linreg.predict([[25]])
Out[237]:
Conclusion: The scale of the features is irrelevant for linear regression models. When changing the scale, we simply change our interpretation of the coefficients.
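To double-check that claim, here is a minimal sketch that fits both versions side by side; the names linreg_f and linreg_c are introduced here for illustration and are not used elsewhere in the notebook.
# fit one model on Fahrenheit and one on Celsius, then compare predictions
linreg_f = LinearRegression().fit(hours_weather[['temp']], hours_weather.total)
linreg_c = LinearRegression().fit(hours_weather[['temp_C']], hours_weather.total)
print(linreg_f.predict([[77.0]]))                 # prediction at 77 degrees F
print(linreg_c.predict([[(77.0 - 32) * 5 / 9]]))  # same prediction at 25 degrees C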
In [239]:
# remove the temp_C column (left commented out so the column remains available below)
# hours_weather.drop('temp_C', axis=1, inplace=True)
In [240]:
# explore more features
feature_cols = ['temp', 'month', 'humidity']
In [241]:
# multiple scatter plots in Seaborn
# humidity is read as text because missing observations are coded as 'M';
# convert it to numeric and fill the missing values with -1
hours_weather.humidity = pd.to_numeric(hours_weather.humidity, errors='coerce').fillna(-1)
sns.pairplot(hours_weather, x_vars=feature_cols, y_vars='total', kind='reg')
Out[241]:
In [242]:
# multiple scatter plots in Pandas
fig, axs = plt.subplots(1, len(feature_cols), sharey=True)
for index, feature in enumerate(feature_cols):
    hours_weather.plot(kind='scatter', x=feature, y='total', ax=axs[index], figsize=(16, 3))
Are you seeing anything that you did not expect?
In [244]:
# cross-tabulation of month and day of week
pd.crosstab(hours_weather.month, hours_weather.datetime.dt.dayofweek)
Out[244]:
In [245]:
# box plot of rentals, grouped by month
hours_weather.boxplot(column='total', by='month')
Out[245]:
In [246]:
# line plot of rentals
hours_weather.total.plot()
Out[246]:
What does this tell us?
There are more rentals in the winter than the spring, but only because the system is experiencing overall growth and the winter months happen to come after the spring months.
In [247]:
# correlation matrix (values range from -1 to 1)
hours_weather.corr()
Out[247]:
In [248]:
# visualize correlation matrix in Seaborn using a heatmap
sns.heatmap(hours_weather.corr())
Out[248]:
What relationships do you notice?
In [251]:
# create a list of features
feature_cols = ['temp', 'month', 'humidity']
In [252]:
# create X and y
X = hours_weather[feature_cols]
y = hours_weather.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_, linreg.coef_)
In [253]:
# pair the feature names with the coefficients
list(zip(feature_cols, linreg.coef_))
Out[253]:
Interpreting the coefficients:
Does anything look incorrect?
How do we choose which features to include in the model? We're going to use train/test split (and eventually cross-validation).
Why not use p-values or R-squared for feature selection?
More generally:
Evaluation metrics for classification problems, such as accuracy, are not useful for regression problems. We need evaluation metrics designed for comparing continuous values.
Here are three common evaluation metrics for regression problems:
Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$

Mean Squared Error (MSE) is the mean of the squared errors:

$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$

Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:

$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
In [254]:
# example true and predicted response values
true = [10, 7, 5, 5]
pred = [8, 6, 5, 10]
In [256]:
# calculate these metrics using scikit-learn
from sklearn import metrics
import numpy as np
print('MAE:', metrics.mean_absolute_error(true, pred))
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))
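For reference, here are the same metrics computed directly from the formulas with NumPy, using the true and pred lists above:
# the same metrics, computed by hand from the formulas
errors = np.array(true) - np.array(pred)
print('MAE:', np.mean(np.abs(errors)))
print('MSE:', np.mean(errors ** 2))
print('RMSE:', np.sqrt(np.mean(errors ** 2)))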
Comparing these metrics: MAE is the easiest to understand, because it is simply the average error; MSE "punishes" larger errors more than MAE does; and RMSE does the same while staying interpretable in the units of the response variable.
All of these are loss functions, because we want to minimize them.
Here's an additional example, to demonstrate how MSE/RMSE punish larger errors:
In [258]:
# same true values as above
true = [10, 7, 5, 5]
# new set of predicted values
pred = [10, 7, 5, 13]
# MAE is the same as before
print('MAE:', metrics.mean_absolute_error(true, pred))
# MSE and RMSE are larger than before
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))
In [259]:
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
import numpy as np
# define a function that accepts a list of features and returns testing RMSE
def train_test_rmse(feature_cols, data):
    X = data[feature_cols]
    y = data.total
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
    linreg = LinearRegression()
    linreg.fit(X_train, y_train)
    y_pred = linreg.predict(X_test)
    return np.sqrt(metrics.mean_squared_error(y_test, y_pred))
In [260]:
# compare different sets of features
print(train_test_rmse(['temp', 'month', 'humidity'], hours_weather))
print(train_test_rmse(['temp', 'month'], hours_weather))
print(train_test_rmse(['temp', 'humidity'], hours_weather))
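As mentioned earlier, cross-validation gives a more stable estimate than a single train/test split. Here is a sketch of the same comparison with cross_val_score; the cv_rmse helper is introduced here purely for illustration.
from sklearn.model_selection import cross_val_score

def cv_rmse(feature_cols, data):
    # 5-fold cross-validated RMSE (scikit-learn reports MSE as a negative score)
    X = data[feature_cols]
    y = data.total
    mse_scores = -cross_val_score(LinearRegression(), X, y, cv=5,
                                  scoring='neg_mean_squared_error')
    return np.sqrt(mse_scores).mean()

print(cv_rmse(['temp', 'month', 'humidity'], hours_weather))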
In [261]:
# split X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
# create a NumPy array with the same shape as y_test
y_null = np.zeros_like(y_test, dtype=float)
# fill the array with the mean value of y_test
y_null.fill(y_test.mean())
y_null
Out[261]:
In [262]:
# compute null RMSE
np.sqrt(metrics.mean_squared_error(y_test, y_null))
Out[262]:
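scikit-learn also ships a baseline estimator that does essentially the same thing; here is a sketch with DummyRegressor, which always predicts the training-set mean and should give a very similar null RMSE:
from sklearn.dummy import DummyRegressor
# baseline model that always predicts the mean of the training response
dummy = DummyRegressor(strategy='mean')
dummy.fit(X_train, y_train)
y_dummy = dummy.predict(X_test)
print(np.sqrt(metrics.mean_squared_error(y_test, y_dummy)))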
scikit-learn expects all features to be numeric. So how do we include a categorical feature in our model?
What are the categorical features in our dataset?
For month, we can't simply leave the encoding as 1 = January, 2 = February, and 3 = March, because that would force a single linear relationship between the month code and rentals. Instead, we create multiple dummy variables:
In [263]:
# create dummy variables for month
month_dummies = pd.get_dummies(hours_weather.month, prefix='month')
# print 5 random rows
month_dummies.sample(n=5, random_state=1)
Out[263]:
In general, if you have a categorical feature with k possible values, you create k-1 dummy variables.
If that's confusing, think about why we only need one dummy variable for holiday, not two dummy variables (holiday_yes and holiday_no).
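pandas can drop the first level for you; a quick sketch using the drop_first argument of get_dummies, so that k categories yield k-1 dummy columns:
# with drop_first=True, the first month becomes the baseline (all-zero) category
pd.get_dummies(hours_weather.month, prefix='month', drop_first=True).sample(n=5, random_state=1)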
In [264]:
# concatenate the original DataFrame and the dummy DataFrame (axis=0 means rows, axis=1 means columns)
hw_dum = pd.concat([hours_weather, month_dummies], axis=1)
# print 5 random rows
hw_dum.sample(n=5, random_state=1)
Out[264]:
In [265]:
# include dummy variables for month in the model (month_1 is the baseline level)
feature_cols = ['temp', 'month_2', 'month_3', 'humidity']
X = hw_dum[feature_cols]
y = hw_dum.total
linreg = LinearRegression()
linreg.fit(X, y)
list(zip(feature_cols, linreg.coef_))
Out[265]:
In [266]:
# compare the original month variable with the dummy variables
print(train_test_rmse(['temp', 'month', 'humidity'], hw_dum))
print(train_test_rmse(['temp', 'month_2', 'month_3', 'humidity'], hw_dum))
See if you can create the following features: hour as a numeric feature, hour as a categorical feature, and daytime as a binary feature (1 during daytime hours, 0 otherwise).
Then, try using each of the three features (on its own) with train_test_rmse to see which one performs the best!
In [267]:
# hour as a numeric feature
hw_dum['hour'] = hw_dum.datetime.dt.hour
In [268]:
# hour as a categorical feature
hour_dummies = pd.get_dummies(hw_dum.hour, prefix='hour')
# hour_dummies.drop(hour_dummies.columns[0], axis=1, inplace=True)
hw_dum = pd.concat([hw_dum, hour_dummies], axis=1)
In [269]:
# daytime as a categorical feature
hw_dum['daytime'] = ((hw_dum.hour > 6) & (hw_dum.hour < 21)).astype(int)
In [270]:
print(train_test_rmse(['hour'], hw_dum),
      train_test_rmse(hw_dum.columns[hw_dum.columns.str.startswith('hour_')], hw_dum),
      train_test_rmse(['daytime'], hw_dum))
Advantages of linear regression: it is simple to explain, highly interpretable, fast to train and to predict with, and requires no parameter tuning.
Disadvantages of linear regression: it presumes a linear relationship between the features and the response, its predictive performance is often not competitive with the best supervised learning methods, and it is sensitive to irrelevant features and outliers.