Learning Objectives
BigQuery is more than just a data warehouse; it also has some ML capabilities baked into it.
As of January 2019 it is limited to linear models, but what it gives up in complexity, it gains in ease of use.
BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat, but for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow.
In this notebook, we will build a naive model using BQML. This notebook is intended to inspire usage of BQML; we will not focus on model performance.
In [ ]:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
REGION = "us-central1" # Choose an available region for Cloud MLE
In [ ]:
import os
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
In [ ]:
!pip freeze | grep google-cloud-bigquery==1.21.0 || pip install google-cloud-bigquery==1.21.0
In [ ]:
%load_ext google.cloud.bigquery
Prior to now we've just been reading an existing BigQuery table. Now we're going to create our own, so we need some place to put it. In BigQuery parlance, a Dataset is a folder for tables.
We will take advantage of BigQuery's Python Client to create the dataset.
In [ ]:
from google.cloud import bigquery
from google.api_core.exceptions import Conflict

bq = bigquery.Client(project=PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_taxifare"))
try:
    bq.create_dataset(dataset)  # raises Conflict if dataset already exists
    print("Dataset created")
except Conflict:
    print("Dataset already exists")
To create a model (documentation) we use the CREATE MODEL statement and provide a destination table for the resulting model. Alternatively we can use CREATE OR REPLACE MODEL, which allows overwriting an existing model. We use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete.
In [ ]:
%%bigquery --project $PROJECT
CREATE or REPLACE MODEL bqml_taxifare.taxifare_model
OPTIONS(model_type = "linear_reg",
input_label_cols = ["label"]) AS
-- query to fetch training data
SELECT
(tolls_amount + fare_amount) AS label,
pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude
FROM
`nyc-tlc.yellow.trips`
WHERE
-- Clean Data
trip_distance > 0
AND passenger_count > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
-- repeatable 1/5000th sample
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1
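The last condition in the WHERE clause deserves a note: hashing pickup_datetime and taking a modulus yields a repeatable ~1/5000th sample, meaning the same rows are selected on every run. Below is a minimal Python sketch of the same idea; hashlib.md5 stands in for BigQuery's FARM_FINGERPRINT (the hash functions differ, so the specific rows selected would differ, but the repeatability property is the same):

```python
import hashlib

def in_sample(key: str, buckets: int = 5000, target: int = 1) -> bool:
    """Deterministically assign a key to a hash bucket; keep ~1/buckets rows.

    hashlib.md5 is a stand-in for BigQuery's FARM_FINGERPRINT: the same
    key always lands in the same bucket, so the sample is repeatable.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets == target

# The same timestamp always yields the same decision -- a repeatable sample.
print(in_sample("2014-01-03 10:00:00") == in_sample("2014-01-03 10:00:00"))
```

Because the decision depends only on the key, the train/evaluation data stays stable across reruns, unlike RAND()-based sampling.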
Because the query uses a CREATE MODEL statement to create a model, you do not see query results. The output is an empty string.
To get the training results we use the ML.TRAINING_INFO
function.
Have a look at Step Three and Four of this tutorial to see a similar example.
In [ ]:
%%bigquery --project $PROJECT
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_taxifare.taxifare_model`)
'eval_loss' is reported as mean squared error; taking its square root gives an RMSE of about 8.29. Your results may vary.
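To make that conversion explicit, here's a one-line sketch. The eval_loss value below is illustrative, not a guaranteed result of your training run:

```python
import math

# ML.TRAINING_INFO reports eval_loss as mean squared error (MSE) for
# linear regression. Taking the square root expresses the error in the
# label's own units (dollars of fare).
eval_loss = 68.7241  # hypothetical MSE from ML.TRAINING_INFO

rmse = math.sqrt(eval_loss)
print(f"RMSE: {rmse:.2f}")  # RMSE: 8.29
```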
To use our model to make predictions, we use ML.PREDICT
. Let's use the taxifare_model
you trained above to infer the cost of a taxi ride that occurs at 10:00 am on January 3rd, 2014, going from the Google Office in New York (latitude: 40.7434, longitude: -74.0080) to the JFK airport (latitude: 40.6413, longitude: -73.7781).
Have a look at Step Five of this tutorial to see another example.
In [ ]:
%%bigquery --project $PROJECT
#standardSQL
SELECT
predicted_label
FROM
ML.PREDICT(MODEL `bqml_taxifare.taxifare_model`,
(
SELECT
TIMESTAMP "2014-01-03 10:00:00" as pickup_datetime,
-74.0080 as pickup_longitude,
40.7434 as pickup_latitude,
-73.7781 as dropoff_longitude,
40.6413 as dropoff_latitude
))
Our model predicts the cost would be $22.12.
The value of BQML is its ease of use:
There's lots of work going on behind the scenes to make this look easy. For example, BQML automatically creates a training/evaluation split, tunes our learning rate, and one-hot encodes features if necessary. When we move to TensorFlow these are all things we'll need to do ourselves.
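To make "one-hot encoding" concrete, here's a minimal Python sketch of that transformation. BQML performs an equivalent step internally for non-numeric features; this is an illustration, not BQML's actual code, and the payment_types feature is a hypothetical example, not a column in our training query:

```python
def one_hot_encode(values):
    """Map each categorical value to a binary indicator vector."""
    categories = sorted(set(values))
    index = {cat: i for i, cat in enumerate(categories)}
    encoded = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1  # set the indicator for this value's category
        encoded.append(row)
    return categories, encoded

payment_types = ["cash", "card", "cash", "dispute"]  # hypothetical feature
cats, rows = one_hot_encode(payment_types)
print(cats)  # ['card', 'cash', 'dispute']
print(rows)  # [[0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```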
This notebook was just to inspire usage of BQML; the current model is actually very poor. We'll prove this in the next lesson by beating it with a simple heuristic.
We could improve our model considerably with some feature engineering, but we'll save that for a future lesson. There are also additional BQML functions, such as ML.WEIGHTS
and ML.EVALUATE
, that we haven't explored. If you're interested in learning more about BQML, I encourage you to read the official docs.
From here on out we'll focus on pulling data out of BigQuery and building models using TensorFlow, which is more effort but also offers much more flexibility.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.