ML with TensorFlow Extended (TFX) -- Part 1

The purpose of this tutorial is to show how to do end-to-end ML with the TFX libraries on Google Cloud Platform. This tutorial covers:

  1. Data analysis and schema generation with TF Data Validation.
  2. Data preprocessing with TF Transform.
  3. Model training with TF Estimator.
  4. Model evaluation with TF Model Analysis.

This notebook has been tested in Jupyter on the Deep Learning VM.

Set up the Cloud environment


In [ ]:
import tensorflow as tf
import tensorflow_data_validation as tfdv

print('TF version: {}'.format(tf.__version__))
print('TFDV version: {}'.format(tfdv.__version__))

In [ ]:
PROJECT = 'cloud-training-demos'    # Replace with your PROJECT
BUCKET = 'cloud-training-demos-ml'  # Replace with your BUCKET
REGION = 'us-central1'              # Choose an available region for Cloud MLE

import os

os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION

In [ ]:
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION

## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`

UCI Adult Dataset: https://archive.ics.uci.edu/ml/datasets/adult

Predict whether income exceeds $50K/yr based on census data. Also known as the "Census Income" dataset.


In [ ]:
DATA_DIR = 'gs://cloud-samples-data/ml-engine/census/data'

In [ ]:
import os

TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
!gsutil ls -l $TRAIN_DATA_FILE
!gsutil ls -l $EVAL_DATA_FILE

1. Data Analysis

For data analysis, visualization, and schema generation, we use TensorFlow Data Validation to perform the following:

  1. Analyze the training data and produce statistics.
  2. Generate data schema from the produced statistics.
  3. Configure the schema.
  4. Validate the evaluation data against the schema.
  5. Save the schema for later use.

1.1 Compute and visualize statistics


In [ ]:
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
               'marital_status', 'occupation', 'relationship', 'race', 'gender',
               'capital_gain', 'capital_loss', 'hours_per_week',
               'native_country', 'income_bracket']

TARGET_FEATURE_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
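The CSV files have no header row, which is why `column_names=HEADER` is passed to TFDV in the cells below. A quick sketch of what one record looks like when mapped to these column names (the sample line is illustrative; note the leading spaces on categorical values, which is why `TARGET_LABELS` contains `' <=50K'` and `' >50K'` rather than trimmed strings):

```python
import csv
import io

HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
          'marital_status', 'occupation', 'relationship', 'race', 'gender',
          'capital_gain', 'capital_loss', 'hours_per_week',
          'native_country', 'income_bracket']

# Illustrative record in the dataset's format (headerless, comma-separated).
sample_line = ('39, State-gov, 77516, Bachelors, 13, Never-married, '
               'Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, '
               'United-States, <=50K')

row = dict(zip(HEADER, next(csv.reader(io.StringIO(sample_line)))))

# Values keep their leading space and arrive as strings; numeric parsing
# happens downstream.
assert row['income_bracket'] == ' <=50K'
assert row['age'] == '39'
```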
WEIGHT_COLUMN_NAME = 'fnlwgt'

In [ ]:
# This is a convenience function for CSV. We can write a Beam pipeline for other formats.
# https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_csv
train_stats = tfdv.generate_statistics_from_csv(
    data_location=TRAIN_DATA_FILE, 
    column_names=HEADER,
    stats_options=tfdv.StatsOptions(
        weight_feature=WEIGHT_COLUMN_NAME,
        sample_rate=1.0
    )
)
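Passing `weight_feature=WEIGHT_COLUMN_NAME` makes TFDV compute statistics weighted by `fnlwgt` (the census sampling weight) instead of treating every row equally. A minimal pure-Python sketch of the difference, with hypothetical values:

```python
# Hypothetical ages and sampling weights, for illustration only.
ages = [25, 40, 60]
fnlwgt = [100, 300, 100]

unweighted_mean = sum(ages) / len(ages)
weighted_mean = sum(a * w for a, w in zip(ages, fnlwgt)) / sum(fnlwgt)

# The heavily weighted middle row pulls the weighted mean toward 40.
assert round(unweighted_mean, 2) == 41.67
assert weighted_mean == 41.0
```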

In [ ]:
tfdv.visualize_statistics(train_stats)

1.2 Infer Schema


In [ ]:
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
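Conceptually, schema inference derives a type for each feature from the statistics, and for string features also a value domain. A toy pure-Python sketch of that idea (hypothetical mini-dataset; the real inferred schema is a `Schema` protobuf with many more fields):

```python
# Toy illustration of schema inference: derive a type per feature and,
# for string features, a domain of observed values.
rows = [{'age': 39, 'workclass': 'State-gov'},
        {'age': 50, 'workclass': 'Self-emp-not-inc'},
        {'age': 38, 'workclass': 'Private'}]

toy_schema = {}
for name in rows[0]:
    values = [r[name] for r in rows]
    if all(isinstance(v, int) for v in values):
        toy_schema[name] = {'type': 'INT'}
    else:
        toy_schema[name] = {'type': 'BYTES', 'domain': sorted(set(values))}

assert toy_schema['age'] == {'type': 'INT'}
assert toy_schema['workclass']['domain'] == [
    'Private', 'Self-emp-not-inc', 'State-gov']
```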

In [ ]:
print(tfdv.get_feature(schema, 'age'))

1.3 Configure Schema


In [ ]:
# Relax the minimum fraction of values that must come from the domain for feature occupation.
occupation = tfdv.get_feature(schema, 'occupation')
occupation.distribution_constraints.min_domain_mass = 0.9

# Add a new value to the domain of the native_country feature, assuming we
# expect to start receiving it. Predictions for this value will be poor, of
# course, because it does not appear in the training data.
native_country_domain = tfdv.get_domain(schema, 'native_country')
native_country_domain.value.append('Egypt')

# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('EVALUATION')
schema.default_environment.append('SERVING')

# Specify that the class feature is not in SERVING environment.
tfdv.get_feature(schema, TARGET_FEATURE_NAME).not_in_environment.append('SERVING')
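To make `min_domain_mass` concrete: it is the minimum fraction of values that must fall inside the feature's schema domain for validation to pass. A pure-Python sketch with hypothetical values:

```python
# Hypothetical domain and observed values, for illustration only.
domain = {'Tech-support', 'Craft-repair', 'Sales'}
values = ['Sales', 'Sales', 'Tech-support', 'Craft-repair',
          'Armed-Forces', 'Sales', 'Tech-support', 'Sales',
          'Craft-repair', 'Sales']  # one out-of-domain value

in_domain = sum(v in domain for v in values)
domain_mass = in_domain / len(values)

# 9 of 10 values are in the domain, so the mass is 0.9: this passes with
# min_domain_mass = 0.9, while a stricter threshold of 1.0 would flag the
# single out-of-domain value ('Armed-Forces') as an anomaly.
assert domain_mass == 0.9
```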

In [ ]:
tfdv.display_schema(schema=schema)

1.4 Validate evaluation data


In [ ]:
eval_stats = tfdv.generate_statistics_from_csv(
    EVAL_DATA_FILE, 
    column_names=HEADER,
    stats_options=tfdv.StatsOptions(
        weight_feature=WEIGHT_COLUMN_NAME)
)

eval_anomalies = tfdv.validate_statistics(eval_stats, schema, environment='EVALUATION')
tfdv.display_anomalies(eval_anomalies)
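Passing `environment='EVALUATION'` restricts validation to the features expected in that environment: a feature is expected everywhere in `default_environment` unless the environment appears in its `not_in_environment` list. A simplified pure-Python sketch of that filtering logic (hypothetical dict representation, not the actual `Schema` proto):

```python
# Simplified stand-in for the schema's environment fields.
features = {
    'age': {'not_in_environment': []},
    'income_bracket': {'not_in_environment': ['SERVING']},  # the label
}

def expected_features(env):
    """Features validated in `env`: those not excluded from it."""
    return sorted(name for name, f in features.items()
                  if env not in f['not_in_environment'])

# The label is expected during training/evaluation, but its absence in
# serving data is not reported as an anomaly.
assert expected_features('EVALUATION') == ['age', 'income_bracket']
assert expected_features('SERVING') == ['age']
```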

1.5 Freeze the schema


In [ ]:
RAW_SCHEMA_LOCATION = 'raw_schema.pbtxt'

In [ ]:
tfdv.write_schema_text(schema, RAW_SCHEMA_LOCATION)

In [ ]:
!cat {RAW_SCHEMA_LOCATION}

License

Copyright 2019 Google LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


Disclaimer: This is not an official Google product. The sample code is provided for educational purposes only.