(See Getting Started with SFrames for setup instructions)
In [1]:
import os
from urllib import urlretrieve
import graphlab
In [2]:
# Limit number of worker processes. This preserves system memory, which prevents hosted notebooks from crashing.
graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)
In [3]:
URL = 'https://d396qusza40orc.cloudfront.net/phoenixassets/home_data.csv'
In [4]:
def get_data(filename='home_data.csv', url=URL, force_download=False):
    """Download and cache the home sales data.

    Parameters
    ----------
    filename : string (optional)
        location to save the data
    url : string (optional)
        web location of the data
    force_download : bool (optional)
        if True, force redownload of the data

    Returns
    -------
    data : graphlab SFrame. Similar to a pandas DataFrame,
        but with capacity for faster analysis of larger data sets
    """
    if force_download or not os.path.exists(filename):
        urlretrieve(url, filename)
    sf = graphlab.SFrame(filename)
    return sf
In [5]:
# Uncomment to download and load the CSV version of the data:
#sales = get_data()
#sales.head()
In [6]:
sales = graphlab.SFrame('home_data.gl')
In [7]:
sales
Out[7]:
The house price is correlated with the number of square feet of living space.
In [8]:
graphlab.canvas.set_target('ipynb')
sales.show(view='Scatter Plot', x='sqft_living', y='price')
Split the data into training and testing sets.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
In [9]:
train_data, test_data = sales.random_split(.8, seed=0)
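The effect of fixing the seed can be sketched without GraphLab Create. Below is a minimal illustration using numpy (the rows `random_split` actually picks will differ, but the reproducibility principle is the same):

```python
import numpy as np

def random_split(n_rows, fraction, seed):
    """Return boolean masks selecting roughly `fraction` of rows for training."""
    rng = np.random.RandomState(seed)   # fixed seed -> identical draws every run
    mask = rng.rand(n_rows) < fraction  # True for rows assigned to training
    return mask, ~mask

train_mask, test_mask = random_split(100, 0.8, seed=0)
again, _ = random_split(100, 0.8, seed=0)
assert (train_mask == again).all()      # same seed, same split
```

With a different (or omitted) seed, each run would select a different subset, and downstream numbers like RMSE would vary slightly between runs.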
In [10]:
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'])
In [11]:
print test_data['price'].mean()
In [12]:
print sqft_model.evaluate(test_data)
RMSE of about \$255,170!
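For reference, the RMSE that `evaluate` reports is just the square root of the mean squared prediction error. A small sketch with numpy and toy numbers (not the actual model output):

```python
import numpy as np

actual    = np.array([450000., 600000., 320000.])   # true sale prices (toy data)
predicted = np.array([430000., 650000., 300000.])   # model predictions (toy data)

# root mean squared error: penalizes large misses quadratically
rmse = np.sqrt(np.mean((predicted - actual) ** 2))
```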
In [13]:
import matplotlib.pyplot as plt
%matplotlib inline
In [14]:
plt.plot(test_data['sqft_living'], test_data['price'], '.',
         test_data['sqft_living'], sqft_model.predict(test_data), '-')
Out[14]:
Above: blue dots are original data, green line is the prediction from the simple regression.
Below: we can view the learned regression coefficients.
In [15]:
sqft_model.coefficients
Out[15]:
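Under the hood, the coefficients of a one-feature linear regression are the intercept and slope that minimize squared error. A hedged sketch of the same idea using numpy's least-squares solver (synthetic data, not the notebook's fitted model):

```python
import numpy as np

sqft  = np.array([1000., 2000., 3000.])
price = 50000. + 250. * sqft            # exact linear relationship for the demo

# design matrix: a column of ones (intercept) plus the feature column
X = np.column_stack([np.ones_like(sqft), sqft])
coeffs, _, _, _ = np.linalg.lstsq(X, price, rcond=None)
intercept, slope = coeffs               # recovers 50000 and 250
```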
In [16]:
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
In [17]:
sales[my_features].show()
In [18]:
sales.show(view='BoxWhisker Plot', x='bathrooms', y='price')
Pull the bar at the bottom to view more of the data.
98039 is the most expensive zip code.
In [19]:
my_features_model = graphlab.linear_regression.create(train_data, target='price', features=my_features, validation_set=None)
In [20]:
print my_features
In [21]:
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
The RMSE goes down from \$255,170 to \$179,508 with more features.
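Why does adding features reduce RMSE? With more columns, least squares can explain variation that the single feature cannot. A hedged synthetic sketch (made-up data, not the house sales):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200
sqft = rng.uniform(500, 4000, n)
beds = rng.randint(1, 6, n).astype(float)
# price depends on both features plus noise
price = 30000. + 200. * sqft + 15000. * beds + rng.normal(0., 10000., n)

def fit_rmse(X, y):
    """Fit least squares and return the in-sample RMSE of the fit."""
    coeffs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X.dot(coeffs)
    return np.sqrt(np.mean(resid ** 2))

one_feature  = np.column_stack([np.ones(n), sqft])
two_features = np.column_stack([np.ones(n), sqft, beds])
rmse_one = fit_rmse(one_feature, price)
rmse_two = fit_rmse(two_features, price)   # strictly smaller: beds carries signal
```

Note this comparison is in-sample; on held-out data, extra features help only insofar as they generalize, which is why the notebook evaluates on `test_data`.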
The first house we will use is considered an "average" house in Seattle.
In [22]:
house1 = sales[sales['id'] == '5309101200']
In [23]:
house1
Out[23]:
In [24]:
print house1['price']
In [25]:
print sqft_model.predict(house1)
In [26]:
print my_features_model.predict(house1)
In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
In [27]:
house2 = sales[sales['id'] == '1925069082']
In [28]:
house2
Out[28]:
In [29]:
print sqft_model.predict(house2)
In [30]:
print my_features_model.predict(house2)
In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
In [31]:
bill_gates = {'bedrooms': [8],
              'bathrooms': [25],
              'sqft_living': [50000],
              'sqft_lot': [225000],
              'floors': [4],
              'zipcode': ['98039'],
              'condition': [10],
              'grade': [10],
              'waterfront': [1],
              'view': [4],
              'sqft_above': [37500],
              'sqft_basement': [12500],
              'yr_built': [1994],
              'yr_renovated': [2010],
              'lat': [47.627606],
              'long': [-122.242054],
              'sqft_living15': [5000],
              'sqft_lot15': [40000]}
In [32]:
print my_features_model.predict(graphlab.SFrame(bill_gates))
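A linear model's prediction for a single row is just the learned coefficients dotted with the feature values, plus the intercept. A toy sketch of that computation (the weights below are hypothetical, not the fitted model's):

```python
import numpy as np

# hypothetical learned weights for two features plus an intercept
intercept = 50000.
weights   = np.array([250., -5000.])    # order: [sqft_living, bedrooms]
house     = np.array([50000., 8.])      # feature values for one large house

# prediction = intercept + sum over features of (weight * value)
prediction = intercept + weights.dot(house)
```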
The model predicts a price of over \$13M for this house! But we expect the house to cost much more. (There are very few houses this fancy in the dataset, so we don't expect the model to predict its price accurately.)
In [33]:
fancy_zip = sales[sales['zipcode'] == '98039']
In [34]:
fancy_zip['price'].mean()
Out[34]:
In [35]:
sqftover2000 = sales[sales['sqft_living'] > 2000]
sqftover2000under4000 = sqftover2000[sqftover2000['sqft_living'] < 4000]
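This range filter can also be expressed in one step with a combined boolean mask. A minimal numpy sketch of the idiom (SFrames generally support the same element-wise `&` combination):

```python
import numpy as np

sqft = np.array([1500., 2500., 3500., 4500.])
mask = (sqft > 2000) & (sqft < 4000)    # element-wise AND of the two conditions
selected = sqft[mask]                   # keeps only 2500 and 3500
```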
In [36]:
import numpy as np
In [37]:
advanced_features = [
    'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
    'condition',       # condition of house
    'grade',           # measure of quality of construction
    'waterfront',      # waterfront property
    'view',            # type of view
    'sqft_above',      # square feet above ground
    'sqft_basement',   # square feet in basement
    'yr_built',        # the year built
    'yr_renovated',    # the year renovated
    'lat', 'long',     # the lat-long of the parcel
    'sqft_living15',   # average sq. ft. of 15 nearest neighbors
    'sqft_lot15',      # average lot size of 15 nearest neighbors
]
In [38]:
advanced_features_model = graphlab.linear_regression.create(train_data, target='price', features=advanced_features, validation_set=None)
In [39]:
print my_features_model.evaluate(test_data)['rmse'] - advanced_features_model.evaluate(test_data)['rmse']
In [40]:
print advanced_features_model.predict(house1)
In [ ]: