Feature Engineering

In this notebook, you will learn how to incorporate feature engineering into your pipeline.

  • Working with feature columns
  • Adding feature crosses in TensorFlow
  • Reading data from BigQuery
  • Creating datasets using Dataflow
  • Using a wide-and-deep model

Apache Beam works better with Python 2 at the moment, so we're going to work within the Python 2 kernel.


In [ ]:
%%bash
source activate py2env
conda install -y pytz
pip uninstall -y google-cloud-dataflow
pip install --upgrade apache-beam[gcp]==2.9.0

After doing a pip install, you have to reset the session so that the new packages are picked up. Please click the Reset Session button in the menu above.
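
To confirm the new packages were picked up after the reset, you can print the installed Beam version (a quick, optional check):


In [ ]:
import apache_beam as beam
print(beam.__version__)  # expect 2.9.0 after the session reset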


In [ ]:
import tensorflow as tf
import apache_beam as beam
import shutil
print(tf.__version__)

1. Environment variables for project and bucket

  • Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: Project ID: cloud-training-demos
  • Cloud training often involves saving and restoring model files, so we should create a single-region bucket. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available); a gsutil alternative is sketched just below this list.
Change the cell below to reflect your Project ID and bucket name.
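
If you still need to create a bucket, one way to do it from the notebook is with gsutil; the bucket name below is only a placeholder (bucket names must be globally unique), and -l pins the bucket to a single region:


In [ ]:
%%bash
gsutil mb -l us-central1 gs://my-unique-bucket-name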

    
    
    In [ ]:
    import os
    REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.
    BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
    PROJECT = 'cloud-training-demos'    # CHANGE THIS
    
    
    
    In [ ]:
    # for bash
    os.environ['PROJECT'] = PROJECT
    os.environ['BUCKET'] = BUCKET
    os.environ['REGION'] = REGION
    os.environ['TFVERSION'] = '1.8' 
    
    ## ensure we're using python2 env
    os.environ['CLOUDSDK_PYTHON'] = 'python2'
    
    
    
    In [ ]:
    %%bash
    ## ensure gcloud is up to date
    gcloud components update
    
    gcloud config set project $PROJECT
    gcloud config set compute/region $REGION
    
    ## ensure we predict locally with our current Python environment
    gcloud config set ml_engine/local_python `which python`
    

    2. Specifying the query to pull the data

    Let's pull out a few extra columns from the timestamp.

    
    
    In [ ]:
    def create_query(phase, EVERY_N):
      if EVERY_N is None:
        EVERY_N = 4  # use full dataset (the train/valid/test splits together cover all hash buckets)
        
      #select and pre-process fields
      base_query = """
    SELECT
      (tolls_amount + fare_amount) AS fare_amount,
      DAYOFWEEK(pickup_datetime) AS dayofweek,
      HOUR(pickup_datetime) AS hourofday,
      pickup_longitude AS pickuplon,
      pickup_latitude AS pickuplat,
      dropoff_longitude AS dropofflon,
      dropoff_latitude AS dropofflat,
      passenger_count*1.0 AS passengers,
      CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key
    FROM
      [nyc-tlc:yellow.trips]
    WHERE
      trip_distance > 0
      AND fare_amount >= 2.5
      AND pickup_longitude > -78
      AND pickup_longitude < -70
      AND dropoff_longitude > -78
      AND dropoff_longitude < -70
      AND pickup_latitude > 37
      AND pickup_latitude < 45
      AND dropoff_latitude > 37
      AND dropoff_latitude < 45
      AND passenger_count > 0
      """
      
      #add subsampling criteria by modding with hashkey
      if phase == 'train': 
        query = "{} AND ABS(HASH(pickup_datetime)) % {} < 2".format(base_query,EVERY_N)
      elif phase == 'valid': 
        query = "{} AND ABS(HASH(pickup_datetime)) % {} == 2".format(base_query,EVERY_N)
      elif phase == 'test':
        query = "{} AND ABS(HASH(pickup_datetime)) % {} == 3".format(base_query,EVERY_N)
      return query
        
    print create_query('valid', 100) #example query using 1% of data
    

    Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!)
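
    If you would rather preview the results without leaving the notebook, the sketch below uses pandas' read_gbq; this assumes the pandas BigQuery connector (pandas-gbq) is available in this environment and that PROJECT has been set above:


    In [ ]:
    from pandas.io import gbq
    preview_sql = create_query('valid', 100) + ' LIMIT 10'  # 1% sample, capped at 10 rows
    df = gbq.read_gbq(preview_sql, project_id=PROJECT, dialect='legacy')  # the query above is legacy SQL
    df.head()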

    3. Preprocessing Dataflow job from BigQuery

    This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could do additional preprocessing and cleanup inside Dataflow, but then we would have to remember to repeat that preprocessing during inference. It is better to use tf.transform, which does this bookkeeping for you, or to do preprocessing within your TensorFlow model. We will look at this in future notebooks. For now, we are simply moving data from BigQuery to CSV using Dataflow.

    While we could read from BQ directly from TensorFlow (See: https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. Let's use Dataflow to do this at scale.
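
    To make the CSV route concrete, here is a rough sketch (an illustration only, not the code in taxifare/trainer) of an input function that reads these CSV files with the TF 1.x tf.data API; the column names and defaults mirror the CSV_COLUMNS produced by the to_csv function below:


    In [ ]:
    # Sketch only: a tf.data-based input function for the exported CSVs (TF 1.x)
    CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
    DEFAULTS = [[0.0], ['Sun'], [0], [-74.0], [40.7], [-74.0], [40.7], [1.0], ['nokey']]

    def make_csv_input_fn(filename_pattern, batch_size=512):
      def _input_fn():
        def decode_csv(line):
          columns = tf.decode_csv(line, record_defaults=DEFAULTS)
          features = dict(zip(CSV_COLUMNS, columns))
          label = features.pop('fare_amount')  # fare_amount is the label
          return features, label
        files = tf.data.Dataset.list_files(filename_pattern)
        dataset = files.flat_map(tf.data.TextLineDataset)  # read every matching shard
        return dataset.map(decode_csv).batch(batch_size)
      return _input_fn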

    Because we are running this on the Cloud, you should go to the GCP Console (https://console.cloud.google.com/dataflow) to look at the status of the job. It will take several minutes for the preprocessing job to launch.

    
    
    In [ ]:
    %%bash
    gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
    
    
    
    In [ ]:
    import datetime
    
    ####
    # Arguments:
    #   -rowdict: Dictionary. The beam bigquery reader returns a PCollection in
    #     which each row is represented as a python dictionary
    # Returns:
    #   -rowstring: a comma separated string representation of the record with dayofweek
    #     converted from int to string (e.g. 3 --> Tue)
    ####
    def to_csv(rowdict):
      days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
      CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
      rowdict['dayofweek'] = days[rowdict['dayofweek']]
      rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
      return rowstring
    
    
    ####
    # Arguments:
    #   -EVERY_N: Integer. Sample one out of every N rows from the full dataset.
    #     Larger values will yield smaller sample
    #   -RUNNER: 'DirectRunner' or 'DataflowRunner'. Specify whether to run the pipeline
    #     locally or on Google Cloud, respectively.
    # Side-effects:
    #   -Creates and executes dataflow pipeline. 
    #     See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
    ####
    def preprocess(EVERY_N, RUNNER):
      job_name = 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
      print 'Launching Dataflow job {} ... hang on'.format(job_name)
      OUTPUT_DIR = 'gs://{0}/taxifare/ch4/taxi_preproc/'.format(BUCKET)
    
      #dictionary of pipeline options
      options = {
        'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
        'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
        'job_name': job_name,  # reuse the job_name computed above
        'project': PROJECT,
        'runner': RUNNER
      }
      #instantiate PipelineOptions object using options dictionary
      opts = beam.pipeline.PipelineOptions(flags=[], **options)
      #instantiate Pipeline object using PipelineOptions
      with beam.Pipeline(options=opts) as p:
          for phase in ['train', 'valid']:
            query = create_query(phase, EVERY_N) 
            outfile = os.path.join(OUTPUT_DIR, '{}.csv'.format(phase))
            (
              p | 'read_{}'.format(phase) >> beam.io.Read(beam.io.BigQuerySource(query=query))  # read from BigQuery
                | 'tocsv_{}'.format(phase) >> beam.Map(to_csv)  # apply the to_csv function to every row
                | 'write_{}'.format(phase) >> beam.io.WriteToText(outfile)  # write to outfile
            )
      print("Done")
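
    Before launching a pipeline, you can sanity-check to_csv on a hand-built row dictionary (the values below are made up purely for illustration; note how dayofweek 3 becomes Tue):


    In [ ]:
    sample_row = {'fare_amount': 9.5, 'dayofweek': 3, 'hourofday': 17,
                  'pickuplon': -73.99, 'pickuplat': 40.75,
                  'dropofflon': -73.98, 'dropofflat': 40.73,
                  'passengers': 1.0, 'key': 'somekey'}
    print(to_csv(sample_row))  # 9.5,Tue,17,-73.99,40.75,-73.98,40.73,1.0,somekey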
    

    Run pipeline locally

    
    
    In [ ]:
    preprocess(50*10000, 'DirectRunner')
    

    Run the pipeline on the cloud with a larger sample size.

    
    
    In [ ]:
    preprocess(50*100, 'DataflowRunner') 
    #change first arg to None to preprocess full dataset
    

    Once the job completes, observe the files created in Google Cloud Storage

    
    
    In [ ]:
    %%bash
    gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
    
    
    
    In [ ]:
    %%bash
    #print first 10 lines of first shard of train.csv
    gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" | head
    

    4. Develop model with new inputs

    Download the first shard of the preprocessed data to enable local development.

    
    
    In [ ]:
    %%bash
    mkdir sample
    gsutil cp "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" sample/train.csv
    gsutil cp "gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*" sample/valid.csv
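
    As a quick sanity check (assuming pandas is available in this environment), you can peek at the downloaded sample; the column order matches the CSV_COLUMNS used in to_csv above:


    In [ ]:
    import pandas as pd
    CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
    df = pd.read_csv('sample/train.csv', header=None, names=CSV_COLUMNS)
    df.head()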
    

    Complete the TODOs in taxifare/trainer/model.py so that the code below works.
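
    The actual TODOs live in taxifare/trainer/model.py, which is not shown in this notebook. As a rough sketch of the kind of feature engineering involved -- bucketizing the coordinates, crossing the buckets, and feeding a wide-and-deep estimator -- here is one possible shape using the TF 1.x feature-column API (bucket counts, embedding size, and hidden units are illustrative, not necessarily the lab's exact values):


    In [ ]:
    # Sketch only -- complete the real TODOs in taxifare/trainer/model.py
    import numpy as np

    NBUCKETS = 16

    # Raw inputs matching the CSV columns produced above
    dayofweek = tf.feature_column.categorical_column_with_vocabulary_list(
        'dayofweek', ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'])
    hourofday = tf.feature_column.categorical_column_with_identity('hourofday', num_buckets=24)
    plon = tf.feature_column.numeric_column('pickuplon')
    plat = tf.feature_column.numeric_column('pickuplat')
    dlon = tf.feature_column.numeric_column('dropofflon')
    dlat = tf.feature_column.numeric_column('dropofflat')
    passengers = tf.feature_column.numeric_column('passengers')

    # Bucketize lat/lon over the NYC bounding box used in the query
    latbuckets = np.linspace(37.0, 45.0, NBUCKETS).tolist()
    lonbuckets = np.linspace(-78.0, -70.0, NBUCKETS).tolist()
    b_plat = tf.feature_column.bucketized_column(plat, latbuckets)
    b_dlat = tf.feature_column.bucketized_column(dlat, latbuckets)
    b_plon = tf.feature_column.bucketized_column(plon, lonbuckets)
    b_dlon = tf.feature_column.bucketized_column(dlon, lonbuckets)

    # Feature crosses: pickup cell, dropoff cell, and the pickup-dropoff pair
    ploc = tf.feature_column.crossed_column([b_plat, b_plon], NBUCKETS * NBUCKETS)
    dloc = tf.feature_column.crossed_column([b_dlat, b_dlon], NBUCKETS * NBUCKETS)
    pd_pair = tf.feature_column.crossed_column([ploc, dloc], NBUCKETS ** 4)

    # Wide columns get the sparse/crossed features; deep columns get dense and embedded ones
    wide_columns = [dayofweek, hourofday, ploc, dloc, pd_pair]
    deep_columns = [tf.feature_column.embedding_column(pd_pair, 10),
                    plon, plat, dlon, dlat, passengers]

    estimator = tf.estimator.DNNLinearCombinedRegressor(
        model_dir='taxi_trained_sketch',
        linear_feature_columns=wide_columns,
        dnn_feature_columns=deep_columns,
        dnn_hidden_units=[64, 32])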

    
    
    In [ ]:
    %%bash
    rm -rf taxifare.tar.gz taxi_trained
    export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
    python -m trainer.task \
      --train_data_paths=${PWD}/sample/train.csv \
      --eval_data_paths=${PWD}/sample/valid.csv  \
      --output_dir=${PWD}/taxi_trained \
      --train_steps=1000 \
      --job-dir=/tmp
    
    
    
    In [ ]:
    !ls taxi_trained/export/exporter/
    
    
    
    In [ ]:
    %%writefile /tmp/test.json
    {"dayofweek": "Sun", "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2}
    
    
    
    In [ ]:
    %%bash
    model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
    gcloud ai-platform local predict \
      --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
      --json-instances=/tmp/test.json
    
    
    
    In [ ]:
    #if gcloud ai-platform local predict fails, you might need to update gcloud
    #!gcloud --quiet components update
    

    5. Train on cloud

    
    
    In [ ]:
    %%bash
    OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
    JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
    echo $OUTDIR $REGION $JOBNAME
    gsutil -m rm -rf $OUTDIR
    gcloud ai-platform jobs submit training $JOBNAME \
      --region=$REGION \
      --module-name=trainer.task \
      --package-path=${PWD}/taxifare/trainer \
      --job-dir=$OUTDIR \
      --staging-bucket=gs://$BUCKET \
      --scale-tier=BASIC \
      --runtime-version=$TFVERSION \
      -- \
      --train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
      --eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*"  \
      --train_steps=5000 \
      --output_dir=$OUTDIR
    

    Copyright 2016 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License