Passing through instance keys and features when using a Keras model

This notebook shows how to modify a Keras model to perform keyed predictions or to forward input features along with the prediction. It is the companion code for this blog post.

Sometimes you'll have a unique instance key associated with each row, and you want that key to be output along with the prediction so you know which row the prediction belongs to. You'll need to add keys when executing distributed batch predictions with a service like Cloud AI Platform batch prediction, and they're also useful if you're performing continuous evaluation on your model and want to log prediction metadata for later analysis. There are also use cases for forwarding a particular feature out with the prediction, for example when performing evaluation on certain slices of data.

Topics Covered

  • Modify serving signature of existing model to accept and forward keys
  • Multiple serving signatures on one model
  • Online and batch predictions with Google Cloud AI Platform
  • Forward features in model definition
  • Forward features with serving signature

In [1]:
import numpy as np

import tensorflow as tf
from tensorflow import keras

In [2]:
tf.__version__


Out[2]:
'2.1.0-dlenv_tfe'

In [3]:
# Set GCP configs if using Cloud AI Platform

import os

PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
BUCKET = "your-gcp-bucket-here"

# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID

if PROJECT == "your-gcp-project-here":
  print("Don't forget to update your PROJECT name! Currently:", PROJECT)

Build and Train a Fashion MNIST model

We will use a straightforward Keras example with the Fashion MNIST dataset to demonstrate building a model and then adding support for keyed predictions. More on the use case here: https://colab.sandbox.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/classification.ipynb


In [4]:
fashion_mnist = keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

In [5]:
# Scale pixel values down to the [0, 1] range
train_images = train_images / 255.0
test_images = test_images / 255.0

In [6]:
# Build and train the model

from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense, Flatten

model = Sequential([
  Input(shape=(28,28), name="image"),
  Flatten(input_shape=(28, 28), name="flatten"),
  Dense(64, activation='relu', name="dense"),
  Dense(10, activation='softmax', name="preds"),
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

# Only training for 1 epoch; we are not worried about model performance
model.fit(train_images, train_labels, epochs=1, batch_size=32)


Train on 60000 samples
60000/60000 [==============================] - 9s 152us/sample - loss: 0.5210 - accuracy: 0.8182
Out[6]:
<tensorflow.python.keras.callbacks.History at 0x7f44387647d0>

In [7]:
# Create test_image with a shape and dtype that will be accepted as a Tensor
test_image = np.expand_dims(test_images[0],0).astype('float32')
model.predict(test_image)


Out[7]:
array([[1.9574834e-05, 1.7343391e-06, 1.5372832e-05, 6.3454769e-05,
        4.5845241e-05, 4.0783577e-02, 1.1227881e-04, 4.5515549e-01,
        1.0713221e-02, 4.9308944e-01]], dtype=float32)

SavedModel and serving signature

Now save the model using tf.saved_model.save() into the SavedModel format, not the older Keras H5 format. This adds a serving signature which we can then inspect. The serving signature indicates exactly which input names and types are expected, and what the model will output.


In [8]:
MODEL_EXPORT_PATH = './model/'
tf.saved_model.save(model, MODEL_EXPORT_PATH)


WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: ./model/assets

In [9]:
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {MODEL_EXPORT_PATH}


The given SavedModel SignatureDef contains the following input(s):
  inputs['image'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28)
      name: serving_default_image:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['preds'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict

In [10]:
# Load the model from storage and inspect the object types
loaded_model = tf.keras.models.load_model(MODEL_EXPORT_PATH)
loaded_model.signatures


Out[10]:
_SignatureMap({'serving_default': <tensorflow.python.saved_model.load._WrapperFunction object at 0x7f441473b5d0>})

In [11]:
loaded_model


Out[11]:
<tensorflow.python.keras.saving.saved_model.load.Sequential at 0x7f441471c3d0>

It's worth noting that the original model did not have a serving signature until we saved it, and it is a slightly different object type from the loaded model:


In [12]:
model


Out[12]:
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f443bab6fd0>

In [13]:
# Uncomment and expect an error, since this is a different object type
# model.signatures

Standard serving function

We can actually get access to the inference function of the loaded model and use it directly to perform predictions, similar to a Keras Model.predict() call. Note that the name of the output Tensor matches the serving signature.


In [14]:
inference_function = loaded_model.signatures['serving_default']

print(inference_function)


<tensorflow.python.saved_model.load._WrapperFunction object at 0x7f441473b5d0>

In [15]:
result = inference_function(tf.convert_to_tensor(test_image))

print(result)


{'preds': <tf.Tensor: shape=(1, 10), dtype=float32, numpy=
array([[1.9574834e-05, 1.7343391e-06, 1.5372832e-05, 6.3454769e-05,
        4.5845241e-05, 4.0783577e-02, 1.1227881e-04, 4.5515549e-01,
        1.0713221e-02, 4.9308944e-01]], dtype=float32)>}

In [16]:
# Matches serving signature
result['preds']


Out[16]:
<tf.Tensor: shape=(1, 10), dtype=float32, numpy=
array([[1.9574834e-05, 1.7343391e-06, 1.5372832e-05, 6.3454769e-05,
        4.5845241e-05, 4.0783577e-02, 1.1227881e-04, 4.5515549e-01,
        1.0713221e-02, 4.9308944e-01]], dtype=float32)>

Keyed Serving Function

Now we'll create a new serving function that accepts and outputs a unique instance key. We use the fact that a Keras Model(x) call actually runs a prediction. The training=False parameter is included only for clarity. Then we save the model as before but provide this function as our new serving signature.


In [17]:
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string),tf.TensorSpec([None, 28, 28], dtype=tf.float32)])
def keyed_prediction(key, image):
    pred = loaded_model(image, training=False)
    return {
        'preds': pred,
        'key': key
    }

In [18]:
# Resave model, but specify new serving signature
KEYED_EXPORT_PATH = './keyed_model/'
loaded_model.save(KEYED_EXPORT_PATH, signatures={'serving_default': keyed_prediction})


INFO:tensorflow:Assets written to: ./keyed_model/assets

In [19]:
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {KEYED_EXPORT_PATH}


The given SavedModel SignatureDef contains the following input(s):
  inputs['image'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28)
      name: serving_default_image:0
  inputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: serving_default_key:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: StatefulPartitionedCall:0
  outputs['preds'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict

In [20]:
keyed_model = tf.keras.models.load_model(KEYED_EXPORT_PATH)

In [21]:
# Change 'flatten_input' to 'image' after b/159022434
keyed_model.predict({
    'flatten_input': test_image,
    'key': tf.constant("unique_key")
})
# keyed_model.predict(test_image)


Out[21]:
array([[1.9574834e-05, 1.7343391e-06, 1.5372832e-05, 6.3454769e-05,
        4.5845241e-05, 4.0783577e-02, 1.1227881e-04, 4.5515549e-01,
        1.0713221e-02, 4.9308944e-01]], dtype=float32)

Multiple Signature Model

Sometimes it is useful to leave both signatures in the model definition so the user can indicate whether or not they are performing a keyed prediction. This can easily be done with the model.save() method as before.

In general, your serving infrastructure will default to 'serving_default' unless a different signature is specified in the prediction call. Google Cloud AI Platform online and batch prediction support multiple signatures, as does TF Serving.


In [22]:
# Using inference_function from earlier
DUAL_SIGNATURE_EXPORT_PATH = './dual_signature_model/'
loaded_model.save(DUAL_SIGNATURE_EXPORT_PATH, signatures={'serving_default': keyed_prediction,
                                                  'unkeyed_signature': inference_function})


INFO:tensorflow:Assets written to: ./dual_signature_model/assets

In [23]:
# Examine the multiple signatures
!saved_model_cli show --tag_set serve --dir {DUAL_SIGNATURE_EXPORT_PATH}


The given SavedModel MetaGraphDef contains SignatureDefs with the following keys:
SignatureDef key: "__saved_model_init_op"
SignatureDef key: "serving_default"
SignatureDef key: "unkeyed_signature"

In [24]:
# Default signature
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {DUAL_SIGNATURE_EXPORT_PATH}


The given SavedModel SignatureDef contains the following input(s):
  inputs['image'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28)
      name: serving_default_image:0
  inputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: serving_default_key:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: StatefulPartitionedCall:0
  outputs['preds'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict

In [25]:
# Alternative unkeyed signature
!saved_model_cli show --tag_set serve --signature_def unkeyed_signature --dir {DUAL_SIGNATURE_EXPORT_PATH}


The given SavedModel SignatureDef contains the following input(s):
  inputs['image'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28)
      name: unkeyed_signature_image:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['preds'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: StatefulPartitionedCall_1:0
Method name is: tensorflow/serving/predict
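
As a quick local sanity check, we can load the dual-signature model and invoke each signature by name. This is a sketch that assumes the DUAL_SIGNATURE_EXPORT_PATH and test_image defined earlier in the notebook.

In [ ]:
# Load the dual-signature model and call each signature by name
dual_model = tf.keras.models.load_model(DUAL_SIGNATURE_EXPORT_PATH)

keyed_fn = dual_model.signatures['serving_default']
unkeyed_fn = dual_model.signatures['unkeyed_signature']

# Keyword arguments match the input names shown in the SignatureDefs above
keyed_result = keyed_fn(key=tf.constant(["image_id_1234"]),
                        image=tf.convert_to_tensor(test_image))
unkeyed_result = unkeyed_fn(image=tf.convert_to_tensor(test_image))

print(keyed_result['key'])      # the forwarded key
print(unkeyed_result['preds'])  # predictions only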

Deploy the model and perform predictions

Now we'll deploy the model to AI Platform serving and perform both online and batch keyed predictions. Deployment will take 2-3 minutes.


In [ ]:
os.environ["MODEL_LOCATION"] = DUAL_SIGNATURE_EXPORT_PATH

In [26]:
%%bash

MODEL_NAME=fashion_mnist
MODEL_VERSION=v1

TFVERSION=2.1
# REGION and BUCKET and MODEL_LOCATION set earlier

# create the model if it doesn't already exist
modelname=$(gcloud ai-platform models list | grep -w "$MODEL_NAME")
echo $modelname
if [ -z "$modelname" ]; then
   echo "Creating model $MODEL_NAME"
   gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
else
   echo "Model $MODEL_NAME already exists"
fi

# delete the model version if it already exists
modelver=$(gcloud ai-platform versions list --model "$MODEL_NAME" | grep -w "$MODEL_VERSION")
echo $modelver
if [ "$modelver" ]; then
   echo "Deleting version $MODEL_VERSION"
   yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
   sleep 10
fi


echo "Creating version $MODEL_VERSION from $MODEL_LOCATION"
gcloud ai-platform versions create ${MODEL_VERSION} \
       --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --staging-bucket gs://${BUCKET} \
       --runtime-version $TFVERSION


fashion_mnist v1
Model fashion_mnist already exists
v1 gs://dhodun1/d42f33c1391fa46ba5837061282b1c8567bfa204fe3db9f72c2a0da394f0ed4f/ READY
Deleting version v1
Creating version v1 from ./dual_signature_model/
WARNING: Using endpoint [https://ml.googleapis.com/]
WARNING: Using endpoint [https://ml.googleapis.com/]
WARNING: Using endpoint [https://ml.googleapis.com/]
This will delete version [v1]...

Do you want to continue (Y/n)?  
Deleting version [v1]......
..............................................................................................................................................................done.
WARNING: Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......
.....................................................................................................................................................................................................................................................................................................................................................................................................................................done.

In [27]:
# Create keyed test_image file

with open("keyed_input.json", "w") as file:
    print(f'{{"image": {test_image.tolist()}, "key": "image_id_1234"}}', file=file)

In [28]:
# Single online keyed prediction; --signature-name is not required since we're hitting the default, but it's shown for clarity

!gcloud ai-platform predict --model fashion_mnist --json-instances keyed_input.json --version v1 --signature-name serving_default


WARNING: Using endpoint [https://ml.googleapis.com/]
KEY            PREDS
image_id_1234  [1.9574799807742238e-05, 1.7343394347335561e-06, 1.537282150820829e-05, 6.345478323055431e-05, 4.584524504025467e-05, 0.0407835878431797, 0.00011227882350794971, 0.4551553726196289, 0.010713225230574608, 0.493089497089386]

In [29]:
# Create unkeyed test_image file

with open("unkeyed_input.json", "w") as file:
    print(f'{{"image": {test_image.tolist()}}}', file=file)

In [30]:
# Single online unkeyed prediction using alternative serving signature

!gcloud ai-platform predict --model fashion_mnist --json-instances unkeyed_input.json --version v1 --signature-name unkeyed_signature


WARNING: Using endpoint [https://ml.googleapis.com/]
PREDS
[1.9574799807742238e-05, 1.7343394347335561e-06, 1.537282150820829e-05, 6.345478323055431e-05, 4.584524504025467e-05, 0.0407835878431797, 0.00011227882350794971, 0.4551553726196289, 0.010713225230574608, 0.493089497089386]

Batch Predictions

Now we'll create multiple keyed prediction files and submit a job to perform these predictions in a scalable, distributed manner. The keys will be retained so the results can be stored and associated with the initial inputs.


In [31]:
# Create Data files:
import shutil

DATA_DIR = './batch_data'
shutil.rmtree(DATA_DIR, ignore_errors=True)
os.makedirs(DATA_DIR)

# Create 10 files with 10 images each
for i in range(10):
    with open(f'{DATA_DIR}/keyed_batch_{i}.json', "w") as file:
        for z in range(10):
            key = f'key_{i}_{z}'
            print(f'{{"image": {test_images[z].tolist()}, "key": "{key}"}}', file=file)

In [32]:
%%bash
gsutil -m cp -r ./batch_data gs://$BUCKET/


Copying file://./batch_data/keyed_batch_1.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_9.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_8.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_3.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_5.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_0.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_7.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_2.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_4.json [Content-Type=application/json]...
Copying file://./batch_data/keyed_batch_6.json [Content-Type=application/json]...
/ [10/10 files][870.0 KiB/870.0 KiB] 100% Done                                  
Operation completed over 10 objects/870.0 KiB.                                   

The following batch prediction job took me 8-10 minutes, with most of the time spent on infrastructure spin-up.


In [33]:
%%bash

DATA_FORMAT="text" # JSON data format
INPUT_PATHS="gs://${BUCKET}/batch_data/*"
OUTPUT_PATH="gs://${BUCKET}/batch_predictions"
MODEL_NAME='fashion_mnist'
VERSION_NAME='v1'
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="fashion_mnist_batch_predict_$now"
LABELS="team=engineering,phase=test,owner=drew"
SIGNATURE_NAME="serving_default"

gcloud ai-platform jobs submit prediction $JOB_NAME \
    --model $MODEL_NAME \
    --version $VERSION_NAME \
    --input-paths $INPUT_PATHS \
    --output-path $OUTPUT_PATH \
    --region $REGION \
    --data-format $DATA_FORMAT \
    --labels $LABELS \
    --signature-name $SIGNATURE_NAME


jobId: fashion_mnist_batch_predict_20200615_153958
state: QUEUED
Job [fashion_mnist_batch_predict_20200615_153958] submitted successfully.
Your job is still active. You may view the status of your job with the command

  $ gcloud ai-platform jobs describe fashion_mnist_batch_predict_20200615_153958

or continue streaming the logs with the command

  $ gcloud ai-platform jobs stream-logs fashion_mnist_batch_predict_20200615_153958

In [34]:
# You can stream the logs; this cell will block until the job completes.
# Copy and paste your job name from the previous cell's output

# gcloud ai-platform jobs stream-logs fashion_mnist_batch_predict_20200611_151356

In [35]:
!gsutil ls gs://$BUCKET/batch_predictions


gs://dhodun1/temp/batch_predictions/prediction.errors_stats-00000-of-00001
gs://dhodun1/temp/batch_predictions/prediction.results-00000-of-00001
gs://dhodun1/temp/batch_predictions/prediction.results-00000-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00001-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00002-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00003-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00004-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00005-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00006-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00007-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00008-of-00010
gs://dhodun1/temp/batch_predictions/prediction.results-00009-of-00010

In [36]:
# View predictions with keys
!gsutil cat gs://$BUCKET/batch_predictions/prediction.results-00000-of-00010


{"preds": [9.211135329678655e-05, 1.0087987902807072e-05, 4.698108750744723e-05, 2.715539994824212e-05, 0.00019967503612861037, 0.174049511551857, 0.0005865175626240671, 0.2705112099647522, 0.002678860677406192, 0.5517978668212891], "key": "key_7_0"}
{"preds": [0.002726481296122074, 7.680333510506898e-06, 0.7650008797645569, 0.0001010186315397732, 0.038841612637043, 3.7204465286322375e-08, 0.19325067102909088, 1.9686152707976134e-09, 7.169554010033607e-05, 1.4934888395434776e-11], "key": "key_7_1"}
{"preds": [0.00010154004121432081, 0.9996353387832642, 7.832989467715379e-06, 0.0002033840719377622, 4.973643081029877e-05, 2.2773709584811286e-09, 1.5637542674085125e-06, 1.9033164377901812e-08, 1.7880319092000718e-07, 4.1070393308473285e-07], "key": "key_7_2"}
{"preds": [3.554671638994478e-05, 0.9984777569770813, 1.888977931230329e-05, 0.0013873657444491982, 6.41861479380168e-05, 2.0880989382021653e-08, 2.1890855350648053e-06, 1.9535197282039007e-07, 4.799605335392698e-07, 1.3420240065897815e-05], "key": "key_7_3"}
{"preds": [0.08514387905597687, 0.00020077414228580892, 0.0692749172449112, 0.06048263981938362, 0.06608160585165024, 7.142244430724531e-05, 0.714330792427063, 2.0437615603441373e-05, 0.004391715861856937, 1.7476610310040996e-06], "key": "key_7_4"}
{"preds": [0.004108963534235954, 0.9902229309082031, 0.0002155099791707471, 0.0025740088894963264, 0.0025837370194494724, 8.710831167491051e-08, 0.00028743583243340254, 8.160312603422426e-08, 5.302461886458332e-06, 2.0205186501698336e-06], "key": "key_7_5"}
{"preds": [0.014112361706793308, 0.0008088137838058174, 0.01675746962428093, 0.002164569217711687, 0.9211376309394836, 0.00014193437527865171, 0.041439879685640335, 7.985366210050415e-06, 0.0034278519451618195, 1.5230604049065732e-06], "key": "key_7_6"}
{"preds": [0.0018075327388942242, 0.0005742131033912301, 0.013851719908416271, 0.0030816274229437113, 0.1138634979724884, 0.000244705326622352, 0.8649102449417114, 5.37119649379747e-06, 0.0016507108230143785, 1.044568489305675e-05], "key": "key_7_7"}
{"preds": [0.006706225220113993, 0.003986412659287453, 0.007010308559983969, 0.006693772505968809, 0.005332822911441326, 0.8451583981513977, 0.018238505348563194, 0.08709842711687088, 0.014139154925942421, 0.00563598470762372], "key": "key_7_8"}
{"preds": [2.508779289200902e-05, 2.406256135145668e-05, 3.5246364859631285e-05, 3.418887354200706e-05, 2.4260680220322683e-05, 0.0027981558814644814, 9.72664711298421e-05, 0.9939435124397278, 0.0019495101878419518, 0.0010686515597626567], "key": "key_7_9"}

Feature Forward Models

There are also times when it's desirable to forward some or all of the input features along with the output. This can be achieved in much the same way as adding keyed outputs to our model.

Note that grabbing a subset of features is a little trickier if you feed all of your input features through a single Input() layer in the Keras model; this example uses multiple Inputs. A sketch of the single-Input case is shown below.
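
Here is a minimal hypothetical sketch of that single-Input case. The 13-column input shape, the layer names, and the column index are illustrative assumptions, not part of this example.

In [ ]:
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Lambda

# Hypothetical single-Input model: all features arrive as one tensor
features = Input(shape=(13,), name="features")
hidden = Dense(64, activation='relu')(features)
price_out = Dense(1, name="price")(hidden)

# Slice one column out of the input tensor so it is forwarded with the prediction
forwarded = Lambda(lambda t: t[:, 10:11], name="forwarded_feature")(features)

single_input_forward_model = Model(inputs=features, outputs=[price_out, forwarded])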

Build and train Boston Housing model


In [37]:
# Build a toy model using the Boston Housing dataset
# https://www.kaggle.com/c/boston-housing
# Prediction target is median value of homes in $1000's

(train_data, train_targets), (test_data, test_targets) = keras.datasets.boston_housing.load_data()

# Extract just two of the features for simplicity's sake
train_tax_rate = train_data[:,10]
train_rooms = train_data[:,5]

In [38]:
# Build a toy model with multiple inputs
# This time using the Keras functional API

from tensorflow.keras.layers import Input
from tensorflow.keras import Model


tax_rate = Input(shape=(1,), dtype=tf.float32, name="tax_rate")
rooms = Input(shape=(1,), dtype=tf.float32, name="rooms")

x = tf.keras.layers.Concatenate()([tax_rate, rooms])
x = tf.keras.layers.Dense(64, activation='relu')(x)
price = tf.keras.layers.Dense(1, activation=None, name="price")(x)

# Functional API model instead of Sequential
model = Model(inputs=[tax_rate, rooms], outputs=[price])

In [39]:
model.compile(
        optimizer='adam',
        loss='mean_squared_error',
        metrics=['accuracy']
    )
# Again, we're not concerned with model performance
model.fit([train_tax_rate, train_rooms], train_targets, epochs=10)


Train on 404 samples
Epoch 1/10
404/404 [==============================] - 0s 1ms/sample - loss: 540.8722 - accuracy: 0.0000e+00
Epoch 2/10
404/404 [==============================] - 0s 89us/sample - loss: 423.0400 - accuracy: 0.0000e+00
Epoch 3/10
404/404 [==============================] - 0s 94us/sample - loss: 328.7296 - accuracy: 0.0000e+00
Epoch 4/10
404/404 [==============================] - 0s 107us/sample - loss: 255.8777 - accuracy: 0.0000e+00
Epoch 5/10
404/404 [==============================] - 0s 98us/sample - loss: 202.7813 - accuracy: 0.0000e+00
Epoch 6/10
404/404 [==============================] - 0s 99us/sample - loss: 162.3239 - accuracy: 0.0000e+00
Epoch 7/10
404/404 [==============================] - 0s 102us/sample - loss: 136.0128 - accuracy: 0.0000e+00
Epoch 8/10
404/404 [==============================] - 0s 104us/sample - loss: 119.6358 - accuracy: 0.0000e+00
Epoch 9/10
404/404 [==============================] - 0s 110us/sample - loss: 110.2340 - accuracy: 0.0000e+00
Epoch 10/10
404/404 [==============================] - 0s 101us/sample - loss: 105.8428 - accuracy: 0.0000e+00
Out[39]:
<tensorflow.python.keras.callbacks.History at 0x7f4438703490>

Feature forward and non feature forward predictions

Using the Keras functional API, we create another model with slightly different inputs and outputs while retaining the weights of the existing model. Notice the predictions with and without feature forwarding.


In [40]:
model.predict({
    'tax_rate': tf.convert_to_tensor([20.2]),
    'rooms': tf.convert_to_tensor([6.2])
})


Out[40]:
array([[22.294939]], dtype=float32)

In [41]:
BOSTON_EXPORT_PATH = './boston_model/'
model.save(BOSTON_EXPORT_PATH)


INFO:tensorflow:Assets written to: ./boston_model/assets

In [42]:
# Will retain weights from trained model but also forward out a feature
forward_model = Model(inputs=[tax_rate, rooms], outputs=[price, tax_rate])

In [43]:
# Notice we get both outputs now
forward_model.predict({
    'tax_rate': tf.convert_to_tensor([5.0]),
    'rooms': tf.convert_to_tensor([6.2])
})


Out[43]:
[array([[7.927329]], dtype=float32), array([[5.]], dtype=float32)]

In [44]:
FORWARD_EXPORT_PATH = './forward_model/'
forward_model.save(FORWARD_EXPORT_PATH)


INFO:tensorflow:Assets written to: ./forward_model/assets

In [45]:
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {FORWARD_EXPORT_PATH}


The given SavedModel SignatureDef contains the following input(s):
  inputs['rooms'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: serving_default_rooms:0
  inputs['tax_rate'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: serving_default_tax_rate:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['price'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: StatefulPartitionedCall:0
  outputs['tax_rate'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict

Forwarding by changing serving signature

We could have employed the same method as before to also modify the serving signature and save out the model to achieve the same result.


In [46]:
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {BOSTON_EXPORT_PATH}


The given SavedModel SignatureDef contains the following input(s):
  inputs['rooms'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: serving_default_rooms:0
  inputs['tax_rate'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: serving_default_tax_rate:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['price'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict

In [47]:
# In our previous example, we leveraged an inference function pulled off of a loaded model.
# In this case we need to create one ourselves, since we haven't saved the model out yet.
@tf.function(input_signature=[tf.TensorSpec([None, 1], dtype=tf.float32), tf.TensorSpec([None, 1], dtype=tf.float32)])
def standard_forward_prediction(tax_rate, rooms):
    pred = model([tax_rate, rooms], training=False)
    return {
        'price': pred,
    }

In [48]:
# Return the feature of interest as well as the prediction
@tf.function(input_signature=[tf.TensorSpec([None, 1], dtype=tf.float32), tf.TensorSpec([None, 1], dtype=tf.float32)])
def feature_forward_prediction(tax_rate, rooms):
    pred = model([tax_rate, rooms], training=False)
    return {
        'price': pred,
        'tax_rate': tax_rate
    }

In [49]:
# Save out the model with both signatures
DUAL_SIGNATURE_FORWARD_PATH = './dual_signature_forward_model/'
model.save(DUAL_SIGNATURE_FORWARD_PATH, signatures={'serving_default': standard_forward_prediction,
                                   'feature_forward': feature_forward_prediction})


INFO:tensorflow:Assets written to: ./dual_signature_forward_model/assets

In [50]:
# Inspect just the feature_forward signature; the standard serving_default is also present
!saved_model_cli show --tag_set serve --signature_def feature_forward --dir {DUAL_SIGNATURE_FORWARD_PATH}


The given SavedModel SignatureDef contains the following input(s):
  inputs['rooms'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: feature_forward_rooms:0
  inputs['tax_rate'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: feature_forward_tax_rate:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['price'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: StatefulPartitionedCall:0
  outputs['tax_rate'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict
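
As before, we can load this model locally and call the feature_forward signature directly. A quick sketch, assuming the DUAL_SIGNATURE_FORWARD_PATH and example values used above:

In [ ]:
# Invoke the feature_forward signature on the locally loaded model
forward_loaded = tf.keras.models.load_model(DUAL_SIGNATURE_FORWARD_PATH)
forward_fn = forward_loaded.signatures['feature_forward']

result = forward_fn(tax_rate=tf.constant([[20.2]]), rooms=tf.constant([[6.2]]))
print(result['price'])     # the model prediction
print(result['tax_rate'])  # the forwarded feature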

In [ ]:

Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.