Introduction to Feature Engineering

Learning Objectives

  • Improve the accuracy of a model by using feature engineering
  • Understand that there are two places to do feature engineering in TensorFlow
    1. Using the tf.feature_column module
    2. In the input functions

Introduction

Up until now we've been focusing on TensorFlow mechanics to make sure our code works, and we have neglected model performance, which at this point is an RMSE of 9.26.

In this notebook we'll attempt to improve on that using feature engineering.


In [ ]:
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)

Load raw data

These are the same files created in the create_datasets.ipynb notebook.


In [ ]:
!gsutil cp gs://cloud-training-demos/taxifare/small/*.csv .
!ls -l *.csv

Train and Evaluate input functions

These are the same as before with one additional line of code: a call to add_engineered_features() from within the _parse_row() function.


In [ ]:
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]]

def read_dataset(csv_path):
    def _parse_row(row):
        # Decode the CSV row into list of TF tensors
        fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)

        # Pack the result into a dictionary
        features = dict(zip(CSV_COLUMN_NAMES, fields))
        
        # NEW: Add engineered features
        features = add_engineered_features(features)
        
        # Separate the label from the features
        label = features.pop("fare_amount") # remove label from features and store

        return features, label
    
    # Create a dataset containing the text lines.
    dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
    dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1))

    # Parse each CSV row into correct (features,label) format for Estimator API
    dataset = dataset.map(map_func = _parse_row)
    
    return dataset

def train_input_fn(csv_path, batch_size = 128):
    #1. Convert CSV into tf.data.Dataset with (features,label) format
    dataset = read_dataset(csv_path)
      
    #2. Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
   
    return dataset

def eval_input_fn(csv_path, batch_size = 128):
    #1. Convert CSV into tf.data.Dataset with (features,label) format
    dataset = read_dataset(csv_path)

    #2.Batch the examples.
    dataset = dataset.batch(batch_size = batch_size)
   
    return dataset

Feature Engineering: feature columns

There are two places in TensorFlow where we can do feature engineering. The first is using the tf.feature_column module. This allows us to easily:

  • bucketize continuous features
  • one-hot encode categorical features
  • create feature crosses

For details on the possible tf.feature_column transformations and when to use each, see the official guide.

Let's use tf.feature_column to create a feature that captures the combination of day of week and hour of day. This will allow our model to easily learn the difference between, say, Wednesday at 5pm (rush hour, expect higher fares) and Sunday at 5pm (light traffic, expect lower fares).

Let's also use it to bucketize our latitudes and longitudes, because treating them as raw continuous numbers implies a linear relationship with fare that doesn't exist.

Exercise 1

In the code cell below you are asked to create some additional crossed feature columns for the model. For fc_crossed_dloc, create a crossed column using the bucketized dropoff latitude and dropoff longitude. For fc_crossed_ploc, create a crossed column using the bucketized pickup latitude and pickup longitude. For fc_crossed_pd_pair, create a crossed feature column using the pickup location and the dropoff location. A sketch of one possible solution follows the code cell.


In [ ]:
# 1. One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)

# 2. Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)

# 3. Cross features to get combination of day and hour
fc_crossed_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
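
One possible solution to Exercise 1 is sketched below. It reuses the bucketized location columns defined above; the hash_bucket_size values are judgment calls sized to roughly the number of distinct crossed values, not requirements.

fc_crossed_dloc = tf.feature_column.crossed_column(keys = [fc_bucketized_dlat, fc_bucketized_dlon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_ploc = tf.feature_column.crossed_column(keys = [fc_bucketized_plat, fc_bucketized_plon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_pd_pair = tf.feature_column.crossed_column(keys = [fc_crossed_ploc, fc_crossed_dloc], hash_bucket_size = NBUCKETS ** 4)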

Feature Engineering: input functions

While feature columns are very powerful, what happens when we want to do something for which there isn't a feature column?

Recall that the input functions receive CSV data, format it, and then pass it batch by batch to the model. We can also use input functions to inject arbitrary TensorFlow code that manipulates the data.

However, we need to be careful: any transformation we do in one input function we must do in all of them, otherwise we'll have training-serving skew.

To guard against this we encapsulate all input function feature engineering in a single function, add_engineered_features(), and call this function from every input function.

Let's calculate the Euclidean distance between the pickup and dropoff points and feed that as a new feature to our model.

It may also be useful to know the cardinal direction of that distance. I suspect distance is cheaper to travel north/south, because in Manhattan the streets that run north/south have fewer stops than the streets that run east/west.


In [ ]:
def add_engineered_features(features):
    features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
    
    features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
    features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
    features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)

    return features
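
As a quick sanity check (a hedged sketch, assuming the CSVs downloaded earlier are present locally), we can pull one batch through an input function and confirm the engineered keys appear:

with tf.Session() as sess:
    features, _ = eval_input_fn("./taxi-valid.csv", batch_size = 2).make_one_shot_iterator().get_next()
    print(sorted(sess.run(features).keys())) # should include 'euclidean_dist', 'latdiff', 'londiff'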

Gather list of feature columns

Ultimately our estimator expects a list of feature columns, so let's gather all our engineered features into a single list.

We cannot pass categorical or crossed feature columns directly into a DNN; TensorFlow will give us an error. We must first wrap them using either indicator_column() or embedding_column(). The former passes through the one-hot encoded representation as is; the latter embeds the feature into a dense representation of the specified dimensionality (the fourth root of the number of categories is a good starting point for the number of dimensions). Read more about indicator and embedding columns here.
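
For illustration, a minimal sketch of the embedding alternative (we stick with the indicator column in the cell below): the day-hour cross has 24 * 7 = 168 hash buckets, and the fourth-root heuristic gives 168 ** 0.25 ≈ 3.6, so a dimension of around 4.

fc_embedded_day_hr = tf.feature_column.embedding_column(categorical_column = fc_crossed_day_hr, dimension = 4) # 4 ≈ fourth root of 168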


In [ ]:
feature_cols = [
    #1. Engineered using tf.feature_column module
    tf.feature_column.indicator_column(categorical_column = fc_crossed_day_hr),
    fc_bucketized_plat,
    fc_bucketized_plon,
    fc_bucketized_dlat,
    fc_bucketized_dlon,
    #2. Engineered in input functions
    tf.feature_column.numeric_column(key = "latdiff"),
    tf.feature_column.numeric_column(key = "londiff"),
    tf.feature_column.numeric_column(key = "euclidean_dist") 
]

Serving Input Receiver function

Same as before, except the received tensors are passed through add_engineered_features(), so serving-time requests get exactly the same transformations as training data.


In [ ]:
def serving_input_receiver_fn():
    receiver_tensors = {
        'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests
        'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]),
        'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]), 
        'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]),
        'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]),
        'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]),
    }
    
    features = add_engineered_features(receiver_tensors) # 'features' is what is passed on to the model
    
    return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
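
For reference, once the model is exported, a prediction request supplies one value per receiver tensor. A hypothetical JSON instance (the coordinates are made up) might look like:

{"dayofweek": 4, "hourofday": 17, "pickuplon": -73.99, "pickuplat": 40.75, "dropofflon": -73.97, "dropofflat": 40.78}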

Train and Evaluate (500 train steps)

Just as before, we'll train the model for 500 steps (sidenote: how many epochs do 500 train steps represent?). Let's see how the engineered features we've added affect performance.
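
To answer the sidenote (a back-of-the-envelope sketch; the row count is read from the file rather than assumed), note that each step consumes one batch of 128 examples:

num_train_examples = sum(1 for _ in open("./taxi-train.csv")) - 1 # subtract the header row
print("epochs =", 500 * 128 / num_train_examples) # steps * batch_size / dataset size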


In [ ]:
%%time
OUTDIR = "taxi_trained_dnn/500"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training

model = tf.estimator.DNNRegressor(
    hidden_units = [10,10], # specify neural architecture
    feature_columns = feature_cols, 
    model_dir = OUTDIR,
    config = tf.estimator.RunConfig(
        tf_random_seed = 1, # for reproducibility
        save_checkpoints_steps = 100 # checkpoint every N steps
    ) 
)

# Add custom evaluation metric
def my_rmse(labels, predictions):
    pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
    return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}

model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse) 
    
train_spec = tf.estimator.TrainSpec(
    input_fn = lambda: train_input_fn("./taxi-train.csv"),
    max_steps = 500)

exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint

eval_spec = tf.estimator.EvalSpec(
    input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
    steps = None,
    start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
    throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
    exporters = exporter) # export SavedModel once at the end of training

tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)

Results

Our RMSE is now 5.94, our first significant improvement! If we look at the RMSE trend in TensorBoard, it appears the model is still learning, so training past 500 steps would likely lower the RMSE even more. Let's run again, this time for 10x as many steps.
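
To inspect that trend yourself, run TensorBoard from a terminal and point it at the output directory:

tensorboard --logdir taxi_trained_dnn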

Train and Evaluate (5,000 train steps)

Now, just as above, we'll execute a longer training job with 5,000 train steps using our engineered features and assess the performance.


In [ ]:
%%time
OUTDIR = "taxi_trained_dnn/5000"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training

model = tf.estimator.DNNRegressor(
    hidden_units = [10,10], # specify neural architecture
    feature_columns = feature_cols, 
    model_dir = OUTDIR,
    config = tf.estimator.RunConfig(
        tf_random_seed = 1, # for reproducibility
        save_checkpoints_steps = 100 # checkpoint every N steps
    ) 
)

# Add custom evaluation metric
def my_rmse(labels, predictions):
    pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
    return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}

model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse) 
    
train_spec = tf.estimator.TrainSpec(
    input_fn = lambda: train_input_fn("./taxi-train.csv"),
    max_steps = 5000)

exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint

eval_spec = tf.estimator.EvalSpec(
    input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
    steps = None,
    start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
    throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
    exporters = exporter) # export SavedModel once at the end of training

tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)

Results

Our RMSE is now 4.13! It looks like the RMSE may still be decreasing, but training is getting slow, so we should move to the cloud if we want to train longer.

Also we haven't explored our hyperparameters much. Is our neural architecture of two layers with 10 nodes each optimal?

In the next notebook we'll explore this.

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License