First BigQuery ML models for Taxifare Prediction

In this notebook, we will use BigQuery ML to build our first models for taxifare prediction.

BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets.

Learning objectives

  1. Choose the correct BigQuery ML model type and specify options
  2. Evaluate the performance of your ML model
  3. Improve model performance through data quality cleanup
  4. Create a Deep Neural Network (DNN) using SQL

Each learning objective corresponds to a #TODO in the notebook, where you will complete the cell's code before running it. Refer to the solution for reference.

We'll start by creating a dataset to hold all the models we create in BigQuery.

Import libraries


In [ ]:
import os

Set environment variables


In [ ]:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT

In [ ]:
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1

# Do not change these
os.environ["PROJECT"] = PROJECT # NEEDED BY THE BASH CELLS BELOW
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION

if PROJECT == "your-gcp-project-here":
    print("Don't forget to update your PROJECT name! Currently:", PROJECT)

Create a BigQuery Dataset and Google Cloud Storage Bucket

A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll also create a Google Cloud Storage (GCS) bucket for our project.


In [ ]:
%%bash

## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)

if [ -n "$datasetexists" ]; then
    echo -e "BigQuery dataset already exists, let's not recreate it."
else
    echo "Creating BigQuery dataset titled: serverlessml"

    bq --location=US mk --dataset \
        --description 'Taxi Fare' \
        $PROJECT:serverlessml
    echo "\nHere are your current datasets:"
    bq ls
fi    

## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)

if [ -n "$exists" ]; then
    echo -e "Bucket exists, let's not recreate it."
else
    echo "Creating a new GCS bucket."
    gsutil mb -l ${REGION} gs://${BUCKET}
    echo "\nHere are your current buckets:"
    gsutil ls
fi

Model 1: Raw data

Let's build a model using just the raw data. It's not going to be very good, but it's worth experiencing this baseline firsthand.

The model will take a minute or so to train. When it comes to ML, this is blazing fast.


In [ ]:
%%bigquery
CREATE OR REPLACE MODEL
    serverlessml.model1_rawdata
# TODO 1: Choose the correct model_type for predicting a continuous fare amount:
# i.e. Linear Regression (linear_reg) or Logistic Regression (logistic_reg)
# Enter the appropriate OPTIONS(...) AS clause on the line below:

SELECT
    (tolls_amount + fare_amount) AS fare_amount,
    pickup_longitude AS pickuplon,
    pickup_latitude AS pickuplat,
    dropoff_longitude AS dropofflon,
    dropoff_latitude AS dropofflat,
    passenger_count * 1.0 AS passengers
FROM
    `nyc-tlc.yellow.trips`
WHERE
    ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
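
For reference, one way to complete TODO 1 (a sketch mirroring the OPTIONS used for model2 later in this notebook):

OPTIONS(input_label_cols=['fare_amount'],
        model_type='linear_reg') AS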

Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.

Note that BigQuery automatically split the data we gave it, training on one part and holding out the rest for evaluation. We can look at the evaluation statistics on that held-out data:


In [ ]:
%%bigquery
# TODO 2: Specify the command to evaluate your newly trained model
SELECT * FROM
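
One possible completion of TODO 2 (a sketch; ML.EVALUATE is the same function used in the cells that follow):

SELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)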

Let's report just the error we care about, the Root Mean Squared Error (RMSE):


In [ ]:
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL serverlessml.model1_rawdata)

We told you it was not going to be good! Recall that our heuristic achieved an RMSE of $8.13, and our target is $6.

Note that the error depends on the dataset we evaluate on. We can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this: we want to reserve the benchmark dataset for the final evaluation, not make decisions with it all along the way, or it won't be truly independent.


In [ ]:
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL serverlessml.model1_rawdata, (
    SELECT
        (tolls_amount + fare_amount) AS fare_amount,
        pickup_longitude AS pickuplon,
        pickup_latitude AS pickuplat,
        dropoff_longitude AS dropofflon,
        dropoff_latitude AS dropofflat,
        passenger_count * 1.0 AS passengers # treat as decimal
    FROM
        `nyc-tlc.yellow.trips`
    WHERE
        ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2
        # Placeholder for additional filters as part of TODO 3 later
    ))

What was the RMSE from the above?

TODO 3: Now apply the filters below to the previous query, inside the WHERE clause (a sketch of the combined clause follows the list). Does the performance improve? Why or why not?

AND trip_distance > 0
    AND fare_amount >= 2.5
    AND pickup_longitude > -78
    AND pickup_longitude < -70
    AND dropoff_longitude > -78
    AND dropoff_longitude < -70
    AND pickup_latitude > 37
    AND pickup_latitude < 45
    AND dropoff_latitude > 37
    AND dropoff_latitude < 45
    AND passenger_count > 0
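
Combined with the sampling predicate, the full WHERE clause of the benchmark query would read (a sketch of what TODO 3 asks you to assemble):

WHERE
    ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2
    AND trip_distance > 0
    AND fare_amount >= 2.5
    AND pickup_longitude > -78
    AND pickup_longitude < -70
    AND dropoff_longitude > -78
    AND dropoff_longitude < -70
    AND pickup_latitude > 37
    AND pickup_latitude < 45
    AND dropoff_latitude > 37
    AND dropoff_latitude < 45
    AND passenger_count > 0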

Model 2: Apply data cleanup

Recall that we did some data cleanup in the previous lab. Let's apply those cleanup steps before training.

We will need this cleaned dataset several times in this notebook, so let's materialize it as a table first.


In [ ]:
%%bigquery
CREATE OR REPLACE TABLE
    serverlessml.cleaned_training_data AS

SELECT
    (tolls_amount + fare_amount) AS fare_amount,
    pickup_longitude AS pickuplon,
    pickup_latitude AS pickuplat,
    dropoff_longitude AS dropofflon,
    dropoff_latitude AS dropofflat,
    passenger_count * 1.0 AS passengers
FROM
    `nyc-tlc.yellow.trips`
WHERE
    ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
    AND trip_distance > 0
    AND fare_amount >= 2.5
    AND pickup_longitude > -78
    AND pickup_longitude < -70
    AND dropoff_longitude > -78
    AND dropoff_longitude < -70
    AND pickup_latitude > 37
    AND pickup_latitude < 45
    AND dropoff_latitude > 37
    AND dropoff_latitude < 45
    AND passenger_count > 0

In [ ]:
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM serverlessml.cleaned_training_data
LIMIT 0
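
If you also want to see how many rows survived the filters, a quick count works too (a hypothetical sanity check, not required by the lab):

In [ ]:
%%bigquery
SELECT COUNT(*) AS num_rows
FROM serverlessml.cleaned_training_data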

In [ ]:
%%bigquery
CREATE OR REPLACE MODEL
    serverlessml.model2_cleanup

OPTIONS(input_label_cols=['fare_amount'],
        model_type='linear_reg') AS

SELECT
    *
FROM
    serverlessml.cleaned_training_data

In [ ]:
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL serverlessml.model2_cleanup)

Model 3: More sophisticated models

What if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery:

DNN

To create a DNN, simply specify dnn_regressor for the model_type and define the hidden layers with the hidden_units option.


In [ ]:
%%bigquery
-- This model type is in alpha, so it may not work for you yet.
-- This training takes on the order of 15 minutes.
CREATE OR REPLACE MODEL
    serverlessml.model3b_dnn
# TODO 4a: Choose the correct BigQuery ML model_type for a DNN and specify the label column
# Options: dnn_regressor, linear_reg, logistic_reg
OPTIONS() AS

SELECT
    *
FROM
    serverlessml.cleaned_training_data
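
For reference, one way to complete TODO 4a (a sketch; the hidden_units sizes here are an illustrative assumption, not the official solution):

OPTIONS(input_label_cols=['fare_amount'],
        model_type='dnn_regressor',
        hidden_units=[32, 8]) AS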

In [ ]:
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL serverlessml.model3b_dnn)

Nice!

Evaluate DNN on benchmark dataset

Let's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You cannot compare two models unless you evaluate them on the same withheld data.


In [ ]:
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse 
# TODO 4b: What is the command to see how well an
# ML model performed? ML.What?
FROM
    ML.WHATCOMMAND(MODEL serverlessml.model3b_dnn, (
    SELECT
        (tolls_amount + fare_amount) AS fare_amount,
        pickup_longitude AS pickuplon,
        pickup_latitude AS pickuplat,
        dropoff_longitude AS dropofflon,
        dropoff_latitude AS dropofflat,
        passenger_count * 1.0 AS passengers
    FROM
        `nyc-tlc.yellow.trips`
    WHERE
        ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2
        AND trip_distance > 0
        AND fare_amount >= 2.5
        AND pickup_longitude > -78
        AND pickup_longitude < -70
        AND dropoff_longitude > -78
        AND dropoff_longitude < -70
        AND pickup_latitude > 37
        AND pickup_latitude < 45
        AND dropoff_latitude > 37
        AND dropoff_latitude < 45
        AND passenger_count > 0
    ))
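
To complete TODO 4b, the evaluation uses ML.EVALUATE, the same function used earlier in this notebook (a sketch of just the changed line):

    ML.EVALUATE(MODEL serverlessml.model3b_dnn, (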

Wow! Later in this sequence of notebooks, we will get below $4, but this is already quite good for very little work.

In this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. The speed of BigQuery ML is very attractive for development.

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.