Create TensorFlow Deep Neural Network Model

Learning Objective

  • Create a DNN model using the high-level Estimator API

Introduction

We'll begin by modeling our data using a Deep Neural Network. To achieve this, we will use the high-level Estimator API in TensorFlow. Have a look at the various canned models available through the Estimator API in the TensorFlow documentation.

Start by setting the environment variables related to your project.


In [ ]:
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
BUCKET = "cloud-training-bucket"  # Replace with your BUCKET
REGION = "us-central1"            # Choose an available region for Cloud MLE
TFVERSION = "1.14"                # TF version for CMLE to use

In [ ]:
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION

In [ ]:
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
    gsutil mb -l ${REGION} gs://${BUCKET}
fi

In [ ]:
%%bash
ls *.csv

Create a TensorFlow model using the Estimator API

We'll begin by writing an input function to read the data, and by defining the CSV column names and the label column. We'll also set the default CSV column values and the number of training steps.


In [ ]:
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)

Exercise 1

To begin creating our TensorFlow model, we need to set up variables that determine the CSV column values, the label column, and the key column. Fill in the TODOs below to set these variables. Note that CSV_COLUMNS should be a list and LABEL_COLUMN should be a string. It is important to get the column names in the correct order, exactly as they appear in the CSV train/eval/test sets. If necessary, look back at the previous notebooks at how these CSV files were created to ensure you have the correct ordering.

We also need to set DEFAULTS for each of the CSV columns. This will likewise be a list, whose entries vary depending on the data type of the corresponding CSV column. Have a look back at the previous examples to ensure you have the proper formatting.


In [ ]:
# Determine CSV, label, and key columns
CSV_COLUMNS = # TODO: Your code goes here
LABEL_COLUMN = # TODO: Your code goes here

# Set default values for each CSV column
DEFAULTS = # TODO: Your code goes here
TRAIN_STEPS = 1000
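
For reference, here is one possible way to fill in these TODOs. It assumes the CSVs produced in the earlier notebooks list the columns weight_pounds, is_male, mother_age, plurality, gestation_weeks in that order, with weight_pounds as the label; verify this against your own files before copying.


In [ ]:
# One possible completion (sketch). The column order is an assumption;
# check it against your train/eval/test CSVs.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
LABEL_COLUMN = "weight_pounds"

# Numeric columns default to 0.0, categorical string columns to "null"
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]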

Create the input function

Now we are ready to create an input function using the Dataset API.

Exercise 2

In the code below you are asked to complete the TODOs to create the input function for our model. Look back at the previous examples we have completed if you need a hint as to how to complete the missing fields below.

In the first block of TODOs, your decode_csv function should return a dictionary called features and a value label.

In the next TODO, use tf.gfile.Glob to create a list of files that match the given filename_pattern. Have a look at the documentation for tf.gfile.Glob if you get stuck.

In the next TODO, use tf.data.TextLineDataset to read the text files, and apply the decode_csv function you created above to parse each row.

In the next TODO you are asked to set up the dataset depending on whether you are in TRAIN mode or not. (Hint: Use tf.estimator.ModeKeys.TRAIN). When in TRAIN mode, set the appropriate number of epochs and shuffle the data accordingly. When not in TRAIN mode, you will use a different number of epochs and there is no need to shuffle the data.

Finally, in the last TODO, collect the operations you set up above to produce the final dataset we'll use to feed data into our model.

Have a look at the examples we did in the previous notebooks if you need inspiration.


In [ ]:
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename_pattern, mode, batch_size = 512):
    def _input_fn():
        def decode_csv(line_of_text):
            columns = # TODO: Your code goes here
            features = # TODO: Your code goes here
            label = # TODO: Your code goes here
            return features, label
    
        # Create list of files that match pattern
        file_list = # TODO: Your code goes here

        # Create dataset from file list
        dataset = # TODO: Your code goes here

        # In training mode, shuffle the dataset and repeat indefinitely
        # TODO: Your code goes here

        dataset = # TODO: Your code goes here

        # This will now return batches of features, label
        return dataset
    return _input_fn
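
If you get stuck, the sketch below shows one possible completed input function, assuming the CSV_COLUMNS, LABEL_COLUMN, and DEFAULTS variables from Exercise 1.


In [ ]:
# One possible completion (sketch)
def read_dataset(filename_pattern, mode, batch_size = 512):
    def _input_fn():
        def decode_csv(line_of_text):
            # Parse the CSV line into a list of tensors, one per column
            columns = tf.decode_csv(line_of_text, record_defaults = DEFAULTS)
            features = dict(zip(CSV_COLUMNS, columns))
            label = features.pop(LABEL_COLUMN)
            return features, label

        # Create list of files that match pattern
        file_list = tf.gfile.Glob(filename_pattern)

        # Create dataset from file list and parse each row
        dataset = tf.data.TextLineDataset(filenames = file_list).map(decode_csv)

        # In training mode, shuffle the dataset and repeat indefinitely
        if mode == tf.estimator.ModeKeys.TRAIN:
            num_epochs = None  # repeat indefinitely
            dataset = dataset.shuffle(buffer_size = 10 * batch_size)
        else:
            num_epochs = 1  # read the data exactly once

        dataset = dataset.repeat(count = num_epochs).batch(batch_size = batch_size)

        # This will now return batches of features, label
        return dataset
    return _input_fn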

Create the feature columns

Next, we define the feature columns.

Exercise 3

There are different ways to set up the feature columns for our model.

In the first TODO below, you are asked to create a function get_categorical which takes a feature name and its potential values and returns an indicator column (via tf.feature_column.indicator_column) wrapping a categorical column with a vocabulary list. Look back at the documentation for tf.feature_column.indicator_column to ensure you call the arguments correctly.

In the next TODO, you are asked to complete the code to create a function called get_cols. It has no arguments but should return a list of all the feature columns you intend to use for your model. Hint: use the get_categorical function you created above to make your code easier to read.


In [ ]:
def get_categorical(name, values):
    return # TODO: Your code goes here

def get_cols():
    # Define column types
    return # TODO: Your code goes here
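
As a reference, here is one possible completion. The vocabulary lists for is_male and plurality are assumptions based on the string values used in the earlier notebooks; adjust them to match your data.


In [ ]:
# One possible completion (sketch; vocabularies are assumptions)
def get_categorical(name, values):
    # Wrap a vocabulary-list categorical column in an indicator (one-hot) column
    return tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            key = name, vocabulary_list = values))

def get_cols():
    # Define column types
    return [
        get_categorical("is_male", ["True", "False", "Unknown"]),
        tf.feature_column.numeric_column(key = "mother_age"),
        get_categorical("plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
                                      "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]),
        tf.feature_column.numeric_column(key = "gestation_weeks")
    ]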

Create the serving input function

To predict with the TensorFlow model, we also need a serving input function. This will allow us to serve predictions later using the predetermined inputs. We will want all the inputs from our user.

Exercise 4

In the TODOs below, create the feature_placeholders dictionary by setting up the placeholders for each of the features we will use in our model. Look at the documentation for tf.placeholder to make sure you provide all the necessary arguments. You'll need to create placeholders for the following features:

  • is_male
  • mother_age
  • plurality
  • gestation_weeks
  • key

You'll also need to create the features dictionary to pass to the tf.estimator.export.ServingInputReceiver function. The features dictionary will reference the feature_placeholders dict you created above. Remember to expand the dimensions of the tensors you'll include in the features dictionary to accommodate the batched data we'll send to the model for predictions later.


In [ ]:
def serving_input_fn():
    feature_placeholders = # TODO: Your code goes here
    
    features = # TODO: Your code goes here
    
    return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders)
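
Here is one possible sketch of the completed serving input function. The placeholder dtypes assume is_male and plurality are strings while mother_age and gestation_weeks are floats, matching the DEFAULTS from Exercise 1.


In [ ]:
# One possible completion (sketch)
def serving_input_fn():
    feature_placeholders = {
        "is_male": tf.placeholder(dtype = tf.string, shape = [None]),
        "mother_age": tf.placeholder(dtype = tf.float32, shape = [None]),
        "plurality": tf.placeholder(dtype = tf.string, shape = [None]),
        "gestation_weeks": tf.placeholder(dtype = tf.float32, shape = [None]),
        "key": tf.placeholder(dtype = tf.string, shape = [None])
    }

    # Expand each tensor from shape [batch] to [batch, 1] to accommodate batched inputs
    features = {
        key: tf.expand_dims(input = tensor, axis = -1)
        for key, tensor in feature_placeholders.items()
    }

    return tf.estimator.export.ServingInputReceiver(
        features = features, receiver_tensors = feature_placeholders)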

Create the model and run training and evaluation

Lastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a DNNRegressor estimator and the train and evaluation operations.

Exercise 5

In the cell below, complete the TODOs to create our model for training.

  • First you must create your estimator using tf.estimator.DNNRegressor.
  • Next, complete the code to set up your tf.estimator.TrainSpec, selecting the appropriate input function and dataset to feed data to your model during training.
  • Next, set up your exporter and tf.estimator.EvalSpec.
  • Finally, pass the variables you created above to tf.estimator.train_and_evaluate.

Be sure to check the documentation for these TensorFlow operations to make sure you set things up correctly.


In [ ]:
def train_and_evaluate(output_dir):
    EVAL_INTERVAL = 300
    run_config = tf.estimator.RunConfig(
        save_checkpoints_secs = EVAL_INTERVAL,
        keep_checkpoint_max = 3)

    estimator = # TODO: Your code goes here
    train_spec = # TODO: Your code goes here
    exporter = # TODO: Your code goes here
    eval_spec = # TODO: Your code goes here

    tf.estimator.train_and_evaluate(# TODO: Your code goes here)
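
For reference, here is one possible completion. The hidden-layer sizes are an arbitrary choice, and the train.csv and eval.csv file names are assumed from the earlier notebooks; adapt both to your own setup.


In [ ]:
# One possible completion (sketch; hidden_units and file names are assumptions)
def train_and_evaluate(output_dir):
    EVAL_INTERVAL = 300
    run_config = tf.estimator.RunConfig(
        save_checkpoints_secs = EVAL_INTERVAL,
        keep_checkpoint_max = 3)

    estimator = tf.estimator.DNNRegressor(
        model_dir = output_dir,
        feature_columns = get_cols(),
        hidden_units = [64, 32],
        config = run_config)

    train_spec = tf.estimator.TrainSpec(
        input_fn = read_dataset("train.csv", mode = tf.estimator.ModeKeys.TRAIN),
        max_steps = TRAIN_STEPS)

    # Export the serving graph alongside the latest checkpoint
    exporter = tf.estimator.LatestExporter(
        name = "exporter", serving_input_receiver_fn = serving_input_fn)

    eval_spec = tf.estimator.EvalSpec(
        input_fn = read_dataset("eval.csv", mode = tf.estimator.ModeKeys.EVAL),
        steps = None,                   # evaluate on the full eval set
        start_delay_secs = 60,          # wait before the first evaluation
        throttle_secs = EVAL_INTERVAL,  # evaluate at most this often
        exporters = exporter)

    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)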

Finally, we train the model!


In [ ]:
# Run the model
shutil.rmtree(path = "babyweight_trained_dnn", ignore_errors = True) # start fresh each time
train_and_evaluate("babyweight_trained_dnn")

Look at the results of your training job above. What RMSE did you get for the final eval step? (The average_loss reported by DNNRegressor is the mean squared error per example, so take its square root to get the RMSE.)
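
For example, a minimal snippet to convert a final average_loss value into an RMSE (the 1.15 below is just a placeholder; substitute the value from your own run):


In [ ]:
# average_loss is the mean squared error per example, so RMSE = sqrt(average_loss)
print("RMSE: {:.2f}".format(np.sqrt(1.15)))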

Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License