In this notebook, we will use BigQuery ML to build more sophisticated models for taxifare prediction.
This notebook continues the models we built earlier with BigQuery ML, but now with more feature engineering.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
In [ ]:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
In [31]:
import os
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
In [32]:
%%bash
## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
    echo "BigQuery dataset already exists, let's not recreate it."
else
    echo "Creating BigQuery dataset titled: serverlessml"
    bq --location=US mk --dataset \
        --description 'Taxi Fare' \
        $PROJECT:serverlessml
    echo -e "\nHere are your current datasets:"
    bq ls
fi

## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}/)
if [ -n "$exists" ]; then
    echo "Bucket exists, let's not recreate it."
else
    echo "Creating a new GCS bucket."
    gsutil mb -l ${REGION} gs://${PROJECT}
    echo -e "\nHere are your current buckets:"
    gsutil ls
fi
In [33]:
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS
SELECT
    (tolls_amount + fare_amount) AS fare_amount,
    pickup_datetime,
    pickup_longitude AS pickuplon,
    pickup_latitude AS pickuplat,
    dropoff_longitude AS dropofflon,
    dropoff_latitude AS dropofflat,
    passenger_count * 1.0 AS passengers
FROM `nyc-tlc.yellow.trips`
-- The full dataset has 1+ billion rows; take only 1 in 1,000 (about 1 million rows total)
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
    AND trip_distance > 0
    AND fare_amount >= 2.5
    AND pickup_longitude > -78
    AND pickup_longitude < -70
    AND dropoff_longitude > -78
    AND dropoff_longitude < -70
    AND pickup_latitude > 37
    AND pickup_latitude < 45
    AND dropoff_latitude > 37
    AND dropoff_latitude < 45
    AND passenger_count > 0
Out[33]:
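As a quick sanity check, we can confirm that the sample is about the size we expect -- roughly 1 million rows:
In [ ]:
%%bigquery
SELECT COUNT(*) AS num_rows FROM serverlessml.feateng_training_data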
In [34]:
%%bigquery
# Tip: You can CREATE MODEL IF NOT EXISTS as well
CREATE OR REPLACE MODEL serverlessml.model4_feateng
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek
, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg')
AS
SELECT * FROM serverlessml.feateng_training_data
Out[34]:
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Note that BigQuery automatically split the data we gave it: it trained on only part of the data and held out the rest for evaluation. We can look at the evaluation statistics on that held-out data:
In [35]:
%%bigquery
SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model4_feateng)
Out[35]:
In [36]:
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model4_feateng)
Out[36]:
Yippee! We're now below our target of 6 dollars in RMSE -- and we're beating our goal with just a linear model.
Here is how the prediction query we saw earlier would look for a trip heading 1.3 miles uptown in New York City:
In [37]:
%%bigquery
SELECT * FROM ML.PREDICT(MODEL serverlessml.model4_feateng, (
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime
))
Out[37]:
Let's do a feature cross of the day-hour combination instead of using the two features raw.
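To see what a feature cross produces, here is a small illustrative query; the 'Mon' and '14' literals are made-up example values, and we are assuming ML.FEATURE_CROSS can be called as a standalone preprocessing function in a SELECT (not only inside a TRANSFORM clause):
In [ ]:
%%bigquery
-- Illustrative only: cross two example STRING features to see the output format
SELECT ML.FEATURE_CROSS(STRUCT('Mon' AS dayofweek, '14' AS hourofday)) AS day_hr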
In [38]:
%%bigquery
CREATE OR REPLACE MODEL serverlessml.model5_featcross
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, ML.FEATURE_CROSS(
STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)
) AS day_hr
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg')
AS
SELECT * FROM serverlessml.feateng_training_data
Out[38]:
In [39]:
%%bigquery
SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model5_featcross)
Out[39]:
In [40]:
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model5_featcross)
Out[40]:
Sometimes (not the case above), the training RMSE is quite reasonable, but the evaluation RMSE is terrible. This is an indication of overfitting. Feature crosses increase the risk of overfitting because some crossed categories are sparse (for example, a particular day-hour combination may not have enough taxi rides).
Let's add L2 regularization to help reduce overfitting, setting it to 0.1:
In [41]:
%%bigquery
CREATE OR REPLACE MODEL serverlessml.model6_featcross_l2
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)) AS day_hr
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg', l2_reg=0.1)
AS
SELECT * FROM serverlessml.feateng_training_data
Out[41]:
In [42]:
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model6_featcross_l2)
Out[42]:
These sorts of experiments would have taken days to do otherwise; thanks to BigQuery ML, we did them in minutes! The advantage of doing all of this inside the TRANSFORM clause is that the client code calling ML.PREDICT doesn't change -- our model improvements are transparent to client code.
In [43]:
%%bigquery
SELECT * FROM ML.PREDICT(MODEL serverlessml.model6_featcross_l2, (
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime
))
Out[43]:
Latitude and longitude don't have much meaning by themselves, only in conjunction as a location, so it may be useful to treat each coordinate pair as a single feature rather than as two independent numeric values. However, latitude and longitude are continuous numbers, so we have to discretize them first. That's what ML.BUCKETIZE does.
ML.BUCKETIZE and ML.FEATURE_CROSS are two of the preprocessing functions available in BigQuery ML; others include ML.POLYNOMIAL_EXPAND, ML.QUANTILE_BUCKETIZE, ML.MIN_MAX_SCALER, and ML.STANDARD_SCALER.
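To get a feel for ML.BUCKETIZE, here is a small illustrative query using a longitude from the prediction example above; it should return a bucket label string (of the form bin_<N>) identifying which 0.01-degree bin the value falls into:
In [ ]:
%%bigquery
-- Illustrative only: discretize a single longitude into one of the 0.01-degree bins
SELECT ML.BUCKETIZE(-73.982683, GENERATE_ARRAY(-78, -70, 0.01)) AS lon_bucket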
In [44]:
%%bigquery
-- BQML chooses the wrong gradient descent strategy here. It will get fixed in (b/141429990)
-- But for now, as a workaround, explicitly specify optimize_strategy='BATCH_GRADIENT_DESCENT'
CREATE OR REPLACE MODEL serverlessml.model7_geo
TRANSFORM(
fare_amount
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday), 2) AS day_hr
, CONCAT(
ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)),
ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01))
) AS pickup_and_dropoff
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg', l2_reg=0.1, optimize_strategy='BATCH_GRADIENT_DESCENT')
AS
SELECT * FROM serverlessml.feateng_training_data
Out[44]:
In [45]:
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model7_geo)
Out[45]:
Yippee! We're below our target of 6 dollars in RMSE once again. As a final experiment, let's try a deep neural network (DNN) regressor on the same features.
In [48]:
%%bigquery
-- This is alpha and may not work for you.
CREATE OR REPLACE MODEL serverlessml.model8_dnn
TRANSFORM(
fare_amount
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS day_hr
, CONCAT(
ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickuplon, pickuplat), 0.01)),
ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon, dropofflat), 0.01))
) AS pickup_and_dropoff
)
-- at the time of writing, l2_reg wasn't supported yet.
OPTIONS(input_label_cols=['fare_amount'], model_type='dnn_regressor', hidden_units=[32, 8])
AS
SELECT * FROM serverlessml.feateng_training_data
Out[48]:
In [49]:
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model8_dnn)
Out[49]:
We really need the L2 regularization (recall that we got an RMSE of 4.77 without the feature engineering). It's time to do Feature Engineering in Keras.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.