In [0]:
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Beta
This is a beta release of custom prediction routines. This feature might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.
This tutorial shows how to deploy a trained Keras model to AI Platform and serve predictions using a custom prediction routine. This lets you customize how AI Platform responds to each prediction request.
In this example, you will use a custom prediction routine to preprocess prediction input by scaling it, and to postprocess prediction output by converting softmax probability outputs to label strings.
The tutorial walks through several steps: training a Keras model locally with a custom preprocessing module, writing and packaging a custom Predictor class, deploying the model and custom code to AI Platform as a custom prediction routine, and serving online prediction requests from that deployment.
This tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that marks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica.
This tutorial uses the copy of the Iris dataset included in the scikit-learn library.
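If you want a quick look at the data before training, the following optional cell (a small sketch, not part of the original tutorial flow) loads the scikit-learn copy of the dataset and prints its shape and class names:
In [0]:
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)     # (150, 4): four measurements per flower
print(iris.feature_names)  # sepal and petal length and width, in cm
print(iris.target_names)   # ['setosa' 'versicolor' 'virginica']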
This tutorial uses billable components of Google Cloud Platform (GCP): AI Platform and Cloud Storage.
Learn about AI Platform pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
You must do several things before you can train and deploy a model in AI Platform: set up your local development environment, set up a GCP project with billing and the necessary APIs enabled, and create a Cloud Storage bucket to store your model artifacts and custom code.
If you are using Colab or AI Platform Notebooks, your environment already meets the requirements to run this notebook. Otherwise, make sure your environment meets this notebook's requirements. You need the Google Cloud SDK, Python 3, virtualenv, and Jupyter running in a virtual environment with Python 3.
The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
The following steps are required, regardless of your notebook environment.
Enable the AI Platform ("Cloud Machine Learning Engine") and Compute Engine APIs.
Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
In [0]:
PROJECT_ID = "<your-project-id>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select Machine Learning Engine > AI Platform Admin and Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
In [0]:
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
  from google.colab import auth as google_auth
  google_auth.authenticate_user()

# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
  %env GOOGLE_APPLICATION_CREDENTIALS '<path-to-your-service-account-key.json>'
The following steps are required, regardless of your notebook environment.
To deploy a custom prediction routine, you must upload your trained model artifacts and your custom code to Cloud Storage.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. You may not use a Multi-Regional Storage bucket for training with AI Platform.
In [0]:
BUCKET_NAME = "<your-bucket-name>" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
In [0]:
! gsutil mb -l $REGION gs://$BUCKET_NAME
Finally, validate access to your Cloud Storage bucket by examining its contents:
In [0]:
! gsutil ls -al gs://$BUCKET_NAME
Often, you can't use your data in its raw form to train a machine learning model. Even when you can, preprocessing the data before using it for training can sometimes improve your model.
Assuming that you expect the input for prediction to have the same format as your training data, you must apply identical preprocessing during training and prediction to ensure that your model makes consistent predictions.
In this section, create a preprocessing module and use it as part of training. Then export a preprocessor with characteristics learned during training to use later in your custom prediction routine.
Start by installing the libraries this tutorial uses for training:
In [0]:
! pip3 install numpy scikit-learn 'tensorflow>=1.13,<2'
Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
Create preprocess.py, which contains a class to do this scaling:
In [0]:
%%writefile preprocess.py
import numpy as np


class MySimpleScaler(object):
  def __init__(self):
    self._means = None
    self._stds = None

  def preprocess(self, data):
    if self._means is None:  # during training only
      self._means = np.mean(data, axis=0)

    if self._stds is None:  # during training only
      self._stds = np.std(data, axis=0)
      if not self._stds.all():
        raise ValueError('At least one column has standard deviation of 0.')

    return (data - self._means) / self._stds
Notice that an instance of MySimpleScaler saves the means and standard deviations of each feature column on first use. Then it uses these summary statistics to scale data it encounters afterward. This lets you store characteristics of the training distribution and use them for identical preprocessing at prediction time.
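As a small illustration (with made-up numbers, not part of the tutorial's training flow), the first call to preprocess computes and stores the column statistics, and later calls reuse them rather than recomputing from the new data:
In [0]:
import numpy as np

from preprocess import MySimpleScaler

demo_scaler = MySimpleScaler()
# First call: computes and stores means [2., 20.] and stds [1., 10.]
print(demo_scaler.preprocess(np.array([[1.0, 10.0], [3.0, 30.0]])))
# Later call (as at prediction time): reuses the stored statistics, so a row
# equal to the training means scales to zeros.
print(demo_scaler.preprocess(np.array([[2.0, 20.0]])))  # [[0. 0.]]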
In [0]:
import pickle
from sklearn.datasets import load_iris
import tensorflow as tf
from preprocess import MySimpleScaler
iris = load_iris()
scaler = MySimpleScaler()
num_classes = len(iris.target_names)
X = scaler.preprocess(iris.data)
y = tf.keras.utils.to_categorical(iris.target, num_classes=num_classes)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(25, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(25, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(num_classes, activation=tf.nn.softmax))
model.compile(
    optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=1)

model.save('model.h5')
with open('preprocessor.pkl', 'wb') as f:
  pickle.dump(scaler, f)
Note: When deploying a TensorFlow model to AI Platform without a custom prediction routine, you must export the trained model in the SavedModel format. When you deploy a custom prediction routine, you can export to the HDF5 format instead, or to any other format that suits your needs.
To deploy a custom prediction routine to serve predictions from your trained model, do the following: create a custom Predictor to handle prediction requests, package the Predictor and your preprocessing module, upload your model artifacts and your custom code to Cloud Storage, and deploy the custom prediction routine to AI Platform.
To deploy a custom prediction routine, you must create a class that implements the Predictor interface. This tells AI Platform how to load your model and how to handle prediction requests.
Write the following code to predictor.py:
In [0]:
%%writefile predictor.py
import os
import pickle

import numpy as np
from sklearn.datasets import load_iris
import tensorflow as tf


class MyPredictor(object):
  def __init__(self, model, preprocessor):
    self._model = model
    self._preprocessor = preprocessor
    self._class_names = load_iris().target_names

  def predict(self, instances, **kwargs):
    inputs = np.asarray(instances)
    preprocessed_inputs = self._preprocessor.preprocess(inputs)
    outputs = self._model.predict(preprocessed_inputs)
    if kwargs.get('probabilities'):
      return outputs.tolist()
    else:
      return [self._class_names[index] for index in np.argmax(outputs, axis=1)]

  @classmethod
  def from_path(cls, model_dir):
    model_path = os.path.join(model_dir, 'model.h5')
    model = tf.keras.models.load_model(model_path)

    preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
    with open(preprocessor_path, 'rb') as f:
      preprocessor = pickle.load(f)

    return cls(model, preprocessor)
Notice that, in addition to using the preprocessor that you defined during training, this predictor performs a postprocessing step that converts the neural network's softmax output (an array denoting the probability of each label being the correct one) into the label with the highest probability.
However, if the predictor receives a probabilities keyword argument with the value True, it returns the probability array instead. The last part of this tutorial shows how to provide this keyword argument.
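Before packaging the predictor, you can optionally exercise it locally. The cell below is a minimal smoke test (not part of the original tutorial) and assumes that model.h5 and preprocessor.pkl from the training step are still in the current working directory:
In [0]:
from predictor import MyPredictor

# Load the model artifacts from the current directory, the same way AI Platform
# loads them from the model directory in Cloud Storage.
predictor = MyPredictor.from_path('.')

test_instances = [[6.7, 3.1, 4.7, 1.5]]
print(predictor.predict(test_instances))                      # list with one predicted label
print(predictor.predict(test_instances, probabilities=True))  # nested list of class probabilities

Next, package predictor.py and preprocess.py as a source distribution so that AI Platform can install your custom code when it deploys the model. Write the following setup.py: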
In [0]:
%%writefile setup.py
from setuptools import setup
setup(
name='my_custom_code',
version='0.1',
scripts=['predictor.py', 'preprocess.py'])
Then run the following command to create dist/my_custom_code-0.1.tar.gz:
In [0]:
! python setup.py sdist --formats=gztar
Before you can deploy your model for serving, AI Platform needs access to the following files in Cloud Storage:
model.h5 (model artifact)
preprocessor.pkl (model artifact)
my_custom_code-0.1.tar.gz (custom code)
Model artifacts must be stored together in a model directory, which your Predictor can access as the model_dir argument in its from_path class method. The custom code does not need to be in the same directory. Run the following commands to upload your files:
In [0]:
! gsutil cp ./dist/my_custom_code-0.1.tar.gz gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz
! gsutil cp model.h5 preprocessor.pkl gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/
Create a model resource and a version resource to deploy your custom prediction routine. First define variables with your resource names:
In [0]:
MODEL_NAME = 'IrisPredictor'
VERSION_NAME = 'v1'
Then create your model:
In [0]:
! gcloud ai-platform models create $MODEL_NAME \
  --regions $REGION
Next, create a version. In this step, provide paths to the artifacts and custom code you uploaded to Cloud Storage:
In [0]:
# --quiet automatically installs the beta component if it isn't already installed
! gcloud --quiet beta ai-platform versions create $VERSION_NAME \
  --model $MODEL_NAME \
  --runtime-version 1.13 \
  --python-version 3.5 \
  --origin gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/ \
  --package-uris gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
  --prediction-class predictor.MyPredictor
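Creating the version can take a few minutes. If you want to confirm that the deployment succeeded, one option (not shown in the original tutorial) is to describe the version and check that its state is READY:
In [0]:
! gcloud ai-platform versions describe $VERSION_NAME --model $MODEL_NAME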
Learn more about the options you must specify when you deploy a custom prediction routine.
To send online prediction requests to your deployed version, this notebook uses the Google API Client Library for Python. First install (or upgrade) the library:
In [0]:
! pip install --upgrade google-api-python-client
Then send two instances of iris data to your deployed version:
In [0]:
import googleapiclient.discovery
instances = [
    [6.7, 3.1, 4.7, 1.5],
    [4.6, 3.1, 1.5, 0.2],
]

service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)

response = service.projects().predict(
    name=name,
    body={'instances': instances}
).execute()

if 'error' in response:
  raise RuntimeError(response['error'])
else:
  print(response['predictions'])
Note: This code uses the credentials you set up during the authentication step to make the online prediction request.
When you send a prediction request to a custom prediction routine, you can provide additional fields in your request body. The Predictor's predict method receives these as fields of the **kwargs dictionary.
The following code sends the same request as before, but this time it adds a probabilities field to the request body:
In [0]:
response = service.projects().predict(
    name=name,
    body={'instances': instances, 'probabilities': True}
).execute()

if 'error' in response:
  raise RuntimeError(response['error'])
else:
  print(response['predictions'])
To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following commands:
In [0]:
# Delete version resource
! gcloud ai-platform versions delete $VERSION_NAME --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME/custom_prediction_routine_tutorial