Data Preprocessing for Machine Learning

Learning Objectives

  • Understand the different approaches for data preprocessing in developing ML models
  • Use Dataflow to perform data preprocessing steps

Introduction

In the previous notebook we achieved an RMSE of 3.85. Let's see if we can improve upon that by creating a data preprocessing pipeline in Cloud Dataflow.

Preprocessing data for a machine learning model involves both data engineering and feature engineering. During data engineering, we convert raw data into the prepared data the model needs. Feature engineering then takes that prepared data and creates the features expected by the model. We have already seen various ways we can engineer new features for a machine learning model and where those steps take place. We also have flexibility as to where data preprocessing steps can take place; for example, in BigQuery, Cloud Dataflow or TensorFlow. In this lab, we'll explore different data preprocessing strategies and see how they can be accomplished with Cloud Dataflow.

One way to categorize data preprocessing operations is by the granularity of the operation. Here, we will consider the following three types of operations:

  1. Instance-level transformations
  2. Full-pass transformations
  3. Time-windowed aggregations

Cloud Dataflow can perform each of these types of operations and is particularly useful when performing computationally expensive operations as it is an autoscaling service for batch and streaming data processing pipelines. We'll say a few words about each of these below. For more information, have a look at this article about data preprocessing for machine learning from Google Cloud.

1. Instance-level transformations These are transformations which take place during both training and prediction, looking only at values from a single data point. For example, they might include clipping the value of a feature, polynomially expanding a feature, multiplying two features, or comparing two features to create a Boolean flag.

It is necessary to apply the same transformations at training time and at prediction time. Failure to do this results in training/serving skew and will negatively affect the performance of the model.
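
For instance, here is a minimal sketch (plain Python, with illustrative feature names and thresholds that are not part of this lab's model) of the kind of instance-level logic that must be applied identically at training and at serving time:


In [ ]:
def add_instance_level_features(row):
    """Instance-level transforms: each output depends only on this single row."""
    features = dict(row)
    # Clip an outlier-prone feature to a fixed range.
    features["trip_distance"] = min(max(row["trip_distance"], 0.0), 100.0)
    # Multiply two features together (a simple feature cross).
    features["lat_x_lon"] = row["pickuplat"] * row["pickuplon"]
    # Compare two features to create a Boolean flag.
    features["dropoff_north_of_pickup"] = 1 if row["dropofflat"] > row["pickuplat"] else 0
    return features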

2. Full-pass transformations These transformations occur during training, but occur as instance-level operations during prediction. That is, during training you must analyze the entirety of the training data to compute quantities such as maximum, minimum, mean or variance while at prediction time you need only use those values to rescale or normalize a single data point.

A good example to keep in mind is standard scaling (z-score normalization) of features for training. You need to compute the mean and standard deviation of that feature across the whole training data set, thus it is called a full-pass transformation. At prediction time you use those previously computed values to appropriately normalize the new data point. Failure to do so results in training/serving skew.
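
As a minimal sketch (using NumPy and made-up values), the mean and standard deviation come from a full pass over the training data, while at prediction time the stored statistics are simply reused on a single instance:


In [ ]:
import numpy as np

# Full pass over the training data: compute the statistics once, during training.
train_fares = np.array([4.5, 8.0, 12.5, 6.0, 9.5])  # stand-in for the full training column
fare_mean, fare_std = train_fares.mean(), train_fares.std()

# Instance-level application at prediction time: reuse the stored statistics.
def zscore(x):
    return (x - fare_mean) / fare_std

print(zscore(7.0))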

3. Time-windowed aggregations These types of transformations occur during training and at prediction time. They involve creating a feature by summarizing real-time values, aggregating over some temporal window clause. For example, if we wanted our model to estimate the taxi trip time based on the traffic metrics for the route over the last 5 minutes, the last 10 minutes or the last 30 minutes, we would create a time window to aggregate these values.

At prediction time these aggregations have to be computed in real-time from a data stream.
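
As a rough sketch only (not part of this lab's pipeline), a time-windowed aggregation can be expressed in Apache Beam with a windowing transform; here we compute a per-route mean speed over a 5-minute sliding window that advances every minute. The (route_id, speed) element structure is an assumption for illustration.


In [ ]:
import apache_beam as beam
from apache_beam import window

def mean_speed_last_5_min(trips):
    """trips: a PCollection of (route_id, speed) tuples carrying event timestamps."""
    return (
        trips
        | "window" >> beam.WindowInto(window.SlidingWindows(size=5 * 60, period=60))
        | "mean_per_route" >> beam.combiners.Mean.PerKey()
    )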

Set environment variables and load necessary libraries

Apache Beam only works in Python 2 at the moment, so switch to the Python 2 kernel using the kernel selector in the upper right-hand corner. Then execute the following cells to install the necessary libraries if they have not been installed already.


In [ ]:
#Ensure that we have the correct version of Apache Beam installed
!pip freeze | grep apache-beam || sudo pip install "apache-beam[gcp]==2.12.0"

In [ ]:
import tensorflow as tf
import apache_beam as beam
import shutil
import os
print(tf.__version__)

Next, set the environment variables related to your GCP Project.


In [ ]:
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
BUCKET = "cloud-training-bucket"  # Replace with your BUCKET
REGION = "us-central1"            # Choose an available region for Cloud MLE
TFVERSION = "1.13"                # TF version for CMLE to use

In [ ]:
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION

In [ ]:
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION

## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`

Create data preprocessing job with Cloud Dataflow

The following code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could also do additional preprocessing and cleanup inside Dataflow. Note that, in this case, we'd have to remember to repeat that preprocessing at prediction time to avoid training/serving skew. In general, it is better to use tf.transform, which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at how tf.transform works in another notebook. For now, we are simply moving data from BigQuery to CSV using Dataflow.

It's worth noting that while we could read from BigQuery directly in TensorFlow, it is quite convenient to export to CSV and do the training off CSV, and we can do that at scale with Cloud Dataflow. Because we are running this job on the cloud, you should go to the GCP Console to view its status. It will take several minutes for the preprocessing job to launch.

Define our query and pipeline functions

To start we'll copy over the create_query function we created in the 01_bigquery/c_extract_and_benchmark notebook.


In [ ]:
def create_query(phase, sample_size):
    basequery = """
    SELECT
        (tolls_amount + fare_amount) AS fare_amount,
        EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,
        EXTRACT(HOUR from pickup_datetime) AS hourofday,
        pickup_longitude AS pickuplon,
        pickup_latitude AS pickuplat,
        dropoff_longitude AS dropofflon,
        dropoff_latitude AS dropofflat
    FROM
        `nyc-tlc.yellow.trips`
    WHERE
        trip_distance > 0
        AND fare_amount >= 2.5
        AND pickup_longitude > -78
        AND pickup_longitude < -70
        AND dropoff_longitude > -78
        AND dropoff_longitude < -70
        AND pickup_latitude > 37
        AND pickup_latitude < 45
        AND dropoff_latitude > 37
        AND dropoff_latitude < 45
        AND passenger_count > 0
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1
    """

    if phase == 'TRAIN':
        subsample = """
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) <  (EVERY_N * 70)
        """
    elif phase == 'VALID':
        subsample = """
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 70)
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) <  (EVERY_N * 85)
        """
    elif phase == 'TEST':
        subsample = """
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 85)
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) <  (EVERY_N * 100)
        """

    query = basequery + subsample
    return query.replace("EVERY_N", sample_size)
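
As a quick sanity check, we can generate one of the queries and confirm that the EVERY_N placeholder has been fully substituted (using the same sampling expression we pass later in this notebook):


In [ ]:
sample_query = create_query("TRAIN", "50*10000")
assert "EVERY_N" not in sample_query
print(sample_query)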

Then, we'll write the CSV files we create to a Cloud Storage bucket. First, we check whether the output location is empty and, if it is not, clear out its contents.


In [ ]:
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
    gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi

Next, we'll create a function and pipeline for preprocessing the data. First, we'll define a to_csv function which takes a row dictionary (a dictionary created from the BigQuery reader, representing each row of the dataset) and returns a comma-separated string for each record.


In [ ]:
def to_csv(rowdict):
    """
    Arguments:
        -rowdict: Dictionary. The beam bigquery reader returns a PCollection in
        which each row is represented as a python dictionary
    Returns:
        -rowstring: a comma separated string representation of the record
    """
    CSV_COLUMNS = "fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat".split(',')
    rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
    return rowstring
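
As a quick check, we can call to_csv on a hand-built dictionary shaped like a BigQuery row (the values below are made up):


In [ ]:
example_row = {"fare_amount": 12.0, "dayofweek": 3, "hourofday": 17,
               "pickuplon": -73.99, "pickuplat": 40.75,
               "dropofflon": -73.97, "dropofflat": 40.76}
print(to_csv(example_row))  # 12.0,3,17,-73.99,40.75,-73.97,40.76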

Next, we define our primary preprocessing function. Reading through the code, we see that it creates a pipeline which reads data from BigQuery, uses our to_csv function above to make a comma-separated string for each record, and then writes the results to files in Google Cloud Storage.


In [ ]:
import datetime

def preprocess(EVERY_N, RUNNER):
    """
    Arguments:
        -EVERY_N: String. A sampling expression (e.g. "50*10000") substituted into
        the query to sample one out of every N rows of the full dataset.
        Larger values yield a smaller sample.
        -RUNNER: "DirectRunner" or "DataflowRunner". Specify whether to run the
        pipeline locally or on Google Cloud respectively.
    Side-effects:
        -Creates and executes a Dataflow pipeline.
        See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
    """
    job_name = "preprocess-taxifeatures" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
    print("Launching Dataflow job {} ... hang on".format(job_name))
    OUTPUT_DIR = "gs://{0}/taxifare/ch4/taxi_preproc/".format(BUCKET)

    # dictionary of pipeline options
    options = {
        "staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
        "temp_location": os.path.join(OUTPUT_DIR, "tmp"),
        "job_name": job_name,
        "project": PROJECT,
        "runner": RUNNER
    }

    # instantiate PipelineOptions object using options dictionary
    opts = beam.pipeline.PipelineOptions(flags = [], **options)

    # instantiate Pipeline object using PipelineOptions
    with beam.Pipeline(options=opts) as p:
        for phase in ["TRAIN", "VALID", "TEST"]:
            query = create_query(phase, EVERY_N)
            outfile = os.path.join(OUTPUT_DIR, "{}.csv".format(phase))
            (
                p | "read_{}".format(phase) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
                  | "tocsv_{}".format(phase) >> beam.Map(to_csv)
                  | "write_{}".format(phase) >> beam.io.WriteToText(outfile)
            )
    print("Done")

Now that we have the preprocessing pipeline function, we can execute the pipeline locally or on the cloud. To run our pipeline locally, we specify the RUNNER variable as DirectRunner. To run our pipeline in the cloud, we set RUNNER to be DataflowRunner. In either case, this variable is passed to the options dictionary that we use to instantiate the pipeline.

As with training a model, it is good practice to test your preprocessing pipeline locally with a subset of your data before running it against your entire dataset.

Run Beam pipeline locally

We'll start by testing our pipeline locally. This takes up to 5 minutes. You will see the message "Done" when it has finished.


In [ ]:
preprocess("50*10000", "DirectRunner")

Run Beam pipeline on Cloud Dataflow

Again, we'll clear out the output location in our GCS bucket to ensure a fresh run.


In [ ]:
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
    gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi

The following step will take 15-20 minutes. Monitor job progress on the Dataflow section of the GCP Console. Note, you can change the first argument to None to process the full dataset.


In [ ]:
preprocess("50*100", "DataflowRunner")

Once the job finishes, we can look at the files that have been created and see what they contain. You will notice that the output has been sharded into many CSV files.


In [ ]:
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/

In [ ]:
%%bash
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" | head

Develop a model with new inputs

We can now develop a model with these inputs. Download the first shard of the preprocessed data to a subfolder called sample so we can develop locally first.


In [ ]:
%%bash
if [ -d sample ]; then
    rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/VALID.csv-00000-of-*" > sample/valid.csv

To begin, let's copy the model.py and task.py we developed in the previous notebooks into this directory.


In [ ]:
%%bash

MODELDIR=./taxifaremodel

test -d $MODELDIR || mkdir $MODELDIR
cp -r ../03_model_performance/taxifaremodel/* $MODELDIR

Let's have a look at the files contained within the taxifaremodel folder. Within model.py we see that feature_cols has three engineered features.


In [ ]:
%%bash
grep -A 15 "feature_cols =" taxifaremodel/model.py

We can also see the engineered features that are created by the add_engineered_features function here.


In [ ]:
%%bash
grep -A 5 "add_engineered_features" taxifaremodel/model.py

We can try out this model on the local sample we've created to make sure everything works as expected. Note, this takes about 5 minutes to complete.


In [ ]:
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}
python -m taxifaremodel.task \
    --train_data_path=${PWD}/sample/train.csv \
    --eval_data_path=${PWD}/sample/valid.csv  \
    --output_dir=${PWD}/taxi_trained \
    --train_steps=10 \
    --job-dir=/tmp

We've only done 10 training steps, so we don't expect the model to have good performance. Let's have a look at the exported files from our training job.


In [ ]:
%%bash
ls -R taxi_trained/export

You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff and euclidean from the provided inputs, thanks to the add_engineered_features call in the serving_input_fn.


In [ ]:
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all

To test out prediction with our model, we create a temporary JSON file containing the expected feature values.


In [ ]:
%%writefile /tmp/test.json
{"dayofweek": 0, "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403}

In [ ]:
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ml-engine local predict \
    --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
    --json-instances=/tmp/test.json

Train on the Cloud

This will take 10-15 minutes even though the prompt immediately returns after the job is submitted. Monitor job progress on the ML Engine section of Cloud Console and wait for the training job to complete.


In [ ]:
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
    --region=$REGION \
    --module-name=taxifaremodel.task \
    --package-path=${PWD}/taxifaremodel \
    --job-dir=$OUTDIR \
    --staging-bucket=gs://$BUCKET \
    --scale-tier=BASIC \
    --runtime-version=$TFVERSION \
    -- \
    --train_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*" \
    --eval_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*"  \
    --train_steps=5000 \
    --output_dir=$OUTDIR

Once the model has finished training on the cloud, we can check the export folder to see that a model has been correctly saved.


In [ ]:
%%bash
gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1

As before, we can use the saved_model_cli to examine the exported signature.


In [ ]:
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${model_dir} --all

And check out our model's predictions with a local predict job on our test file.


In [ ]:
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
gcloud ml-engine local predict \
    --model-dir=${model_dir} \
    --json-instances=/tmp/test.json

Hyperparameter tuning

Recall the hyperparameter tuning notebook. We can repeat the process there to decide the best parameters to use for our model. Based on that run, we ended up choosing:

  • train_batch_size: 512
  • hidden_units: "64 64 64 8"

Let's now try a training job over a larger dataset.

(Optional) Run Cloud training on 2 million row dataset

This run uses 2 million rows as input and takes ~20 minutes with 10 workers (STANDARD_1 pricing tier). The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). Because the Dataflow preprocessing of the full dataset takes about 15 minutes, run the preprocessing cells below and wait for them to complete before submitting the training job.

When doing distributed training, use train_steps instead of num_epochs. The distributed workers don't know how many rows there are, but we can calculate train_steps = num_rows * num_epochs / train_batch_size. In this case, we have 2141023 * 100 / 512 = 418168 train steps.
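
As a quick check of that arithmetic:


In [ ]:
num_rows, num_epochs, train_batch_size = 2141023, 100, 512
print(num_rows * num_epochs // train_batch_size)  # 418168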


In [ ]:
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
    gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi

In [ ]:
# Preprocess the entire dataset 
preprocess(None, "DataflowRunner")

In [ ]:
%%bash

WARNING -- this uses significant resources and is optional. Remove this line to run the block.

OUTDIR=gs://${BUCKET}/taxifare/feateng2m
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
TIER=STANDARD_1 
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
    --region=$REGION \
    --module-name=taxifaremodel.task \
    --package-path=${PWD}/taxifaremodel \
    --job-dir=$OUTDIR \
    --staging-bucket=gs://$BUCKET \
    --scale-tier=$TIER \
    --runtime-version=$TFVERSION \
    -- \
    --train_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*" \
    --eval_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*"  \
    --output_dir=$OUTDIR \
    --train_steps=418168 \
    --hidden_units="64,64,64,8"

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.