Let's grab a small data set of Blue Book car values:
In [1]:
import pandas as pd
df = pd.read_excel('http://cdn.sundog-soft.com/Udemy/DataScience/cars.xls')
In [2]:
df.head()
Out[2]:
We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict.
Note how we are leaving out make and model; regression doesn't handle categorical values like these well, unless you can convert them into some numerical representation that makes sense.
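One common workaround, not used in this notebook, is one-hot encoding the categorical columns with pandas' get_dummies. A minimal sketch on a hypothetical mini-frame (the column values here are made up):

```python
import pandas as pd

# Hypothetical mini-frame standing in for the cars data
cars = pd.DataFrame({'Make': ['Buick', 'Saab', 'Buick'],
                     'Mileage': [8221, 9135, 13196]})

# get_dummies replaces the categorical column with one 0/1 column per level,
# so a regression can consume it without implying any false ordering
encoded = pd.get_dummies(cars, columns=['Make'])
print(encoded.columns.tolist())  # ['Mileage', 'Make_Buick', 'Make_Saab']
```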
Let's scale our feature data into the same range so we can easily compare the coefficients we end up with.
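StandardScaler isn't doing anything exotic: with its defaults it subtracts each column's mean and divides by its standard deviation. A quick numpy sketch of the same transform on some made-up mileage values:

```python
import numpy as np

mileage = np.array([8221.0, 9135.0, 13196.0, 16342.0, 19832.0])

# z-score: subtract the mean, divide by the (population) standard deviation,
# which is what StandardScaler computes by default
scaled = (mileage - mileage.mean()) / mileage.std()
print(scaled.mean(), scaled.std())  # ~0.0 and 1.0 after scaling
```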
In [4]:
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
X = df[['Mileage', 'Cylinder', 'Doors']].copy()  # copy to avoid SettingWithCopyWarning below
y = df['Price']
X[['Mileage', 'Cylinder', 'Doors']] = scale.fit_transform(X[['Mileage', 'Cylinder', 'Doors']])
print(X)
est = sm.OLS(y, X).fit()
est.summary()
Out[4]:
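One subtlety worth knowing: sm.OLS as called above fits a model with no intercept; statsmodels only includes one if you pass the features through sm.add_constant first. A numpy sketch of why that column of ones matters, on hypothetical data (not the car set):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 + 2.0 * x + rng.normal(0, 0.1, 100)  # true intercept 3, slope 2

# Without a constant column the fit is forced through the origin,
# so the slope absorbs part of the intercept and comes out biased
slope_only, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

# With a column of ones (what sm.add_constant prepends) both terms are recovered
X1 = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(slope_only, coef)  # coef lands near [3.0, 2.0]
```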
In [5]:
y.groupby(df.Doors).mean()
Out[5]:
Surprisingly, more doors does not mean a higher price! (Maybe it implies a sports car in some cases?) So it's not surprising that doors turn out to be a pretty useless predictor here. This is a very small data set, however, so we can't read too much meaning into it.
Mess around with the fake input data, and see if you can create a measurable influence of number of doors on price. Have some fun with it - why stop at 4 doors?
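As a starting point for that exercise, here is one way you might fabricate data where doors genuinely drive price (all numbers below are made up) and confirm an ordinary least squares fit picks the effect up:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
mileage = rng.uniform(5000, 50000, n)
doors = rng.integers(2, 7, n)  # 2 through 6 doors - why stop at 4?

# Bake a deliberate $1500-per-door effect into the fake prices
price = 30000 - 0.2 * mileage + 1500 * doors + rng.normal(0, 500, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), mileage, doors])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(coef[2])  # the doors coefficient should land near 1500
```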
In [ ]: