In [ ]:
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

AI Explanations: Deploying an image model


Overview

This tutorial shows how to train a Keras classification model on image data and deploy it to the AI Platform Explanations service to get feature attributions on your deployed model.

If you've already got a trained model and want to deploy it to AI Explanations, skip to the Export the model as a TF 1 SavedModel section.

Dataset

The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets.

Objective

The goal of this tutorial is to train a model on a simple image dataset (flower classification) to understand how you can use AI Explanations with image models. For image models, AI Explanations returns an image with the pixels highlighted that signaled your model's prediction the most.

This tutorial focuses more on deploying the model to AI Platform with Explanations than on the design of the model itself.

Costs

This tutorial uses billable components of Google Cloud Platform (GCP):

  • AI Platform for:
    • Prediction
    • Explanation: AI Explanations is offered at no extra charge on top of prediction pricing. However, explanation requests take longer to process than normal predictions, so heavy use of Explanations along with auto-scaling may result in more nodes being started and, thus, more charges
  • Cloud Storage for:
    • Storing model files for deploying to Cloud AI Platform

Learn about AI Platform pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.

Before you begin

Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type and set the hardware accelerator to GPU.

This tutorial assumes you are running the notebook either in Colab or Cloud AI Platform Notebooks.

Set up your GCP project

The following steps are required, regardless of your notebook environment.

  1. Select or create a GCP project.

  2. Make sure that billing is enabled for your project.

  3. Enable the AI Platform Training & Prediction and Compute Engine APIs.

  4. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.


In [ ]:
PROJECT_ID = "your-project-id"
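
Optionally, you can point the Cloud SDK at this project right away. This is also a small example of the ! and $ interpolation described above; the gcloud commands are standard, and running this cell is optional in either environment.


In [ ]:
# Optional: set and verify the active project for the Cloud SDK.
!gcloud config set project $PROJECT_ID
!gcloud config get-value project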

Authenticate your GCP account

If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step.

If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.


In [ ]:
import sys, os
import warnings
import googleapiclient.discovery

warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 
# If you are running this notebook in Colab, follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

if 'google.colab' in sys.modules:
  from google.colab import auth as google_auth
  google_auth.authenticate_user()
  !gcloud config set project $PROJECT_ID
  try:
    %tensorflow_version 1.x
  except Exception:
    pass
  import tensorflow as tf

Create a Cloud Storage bucket

The following steps are required, regardless of your notebook environment.

In this tutorial, you train the model in this notebook and then upload the exported SavedModel (along with its explanation metadata) to a Cloud Storage bucket. You then create an AI Platform model version based on this output in order to serve online predictions and explanations.

Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.

You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. You may not use a Multi-Regional Storage bucket for training with AI Platform.


In [ ]:
BUCKET_NAME = PROJECT_ID + "_flowers_model"
REGION = "us-central1"

Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.


In [ ]:
! gsutil mb -l $REGION gs://$BUCKET_NAME
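
Optionally, confirm that the bucket exists and was created in the region you expect; gsutil ls -L -b prints the bucket's metadata, including its location.


In [ ]:
# Optional: confirm the bucket exists and check its location.
!gsutil ls -L -b gs://$BUCKET_NAME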

Import libraries

Import the libraries we'll be using in this tutorial. This tutorial has been tested with TensorFlow versions 1.14 and 1.15.


In [ ]:
import math, json, random
import numpy as np
import PIL
import tensorflow as tf

from matplotlib import pyplot as plt
from base64 import b64encode


print("Tensorflow version " + tf.__version__)
AUTO = tf.data.experimental.AUTOTUNE

Downloading and preprocessing the flowers dataset

In this section you'll download the flower images (provided as TFRecords), use the tf.data API to create a data input pipeline, and split the data into training and validation sets.


In [ ]:
GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec'
IMAGE_SIZE = [192, 192]

BATCH_SIZE = 32 

VALIDATION_SPLIT = 0.19
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names)

# Split data files between training and validation
filenames = tf.gfile.Glob(GCS_PATTERN)
random.shuffle(filenames)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print("Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(len(filenames), len(training_filenames), len(validation_filenames)))
validation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
steps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps))

The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model.

If you're running this in Colab, the cell is hidden as a form. To see the code, right-click the cell and select "Form" --> "Show form".


In [ ]:
#@title display utilities [RUN ME]

def dataset_to_numpy_util(dataset, N):
  dataset = dataset.batch(N)
  
  if tf.executing_eagerly():
    # In eager mode, iterate over the Dataset directly.
    for images, labels in dataset:
      numpy_images = images.numpy()
      numpy_labels = labels.numpy()
      break

  else: # In non-eager (graph) mode, get the TF node that
        # yields the next item and run it in a tf.Session.
    get_next_item = dataset.make_one_shot_iterator().get_next()
    with tf.Session() as ses:
      numpy_images, numpy_labels = ses.run(get_next_item)

  return numpy_images, numpy_labels

def title_from_label_and_target(label, correct_label):
  label = np.argmax(label, axis=-1)  # one-hot to class number
  correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number
  correct = (label == correct_label)
  return "{} [{}{}{}]".format(CLASSES[label], str(correct), ', shoud be ' if not correct else '',
                              CLASSES[correct_label] if not correct else ''), correct

def display_one_flower(image, title, subplot, red=False):
    plt.subplot(subplot)
    plt.axis('off')
    plt.imshow(image)
    plt.title(title, fontsize=16, color='red' if red else 'black')
    return subplot+1
  
def display_9_images_from_dataset(dataset):
  subplot=331
  plt.figure(figsize=(13,13))
  images, labels = dataset_to_numpy_util(dataset, 9)
  for i, image in enumerate(images):
    title = CLASSES[np.argmax(labels[i], axis=-1)]
    subplot = display_one_flower(image, title, subplot)
    if i >= 8:
      break
              
  plt.tight_layout()
  plt.subplots_adjust(wspace=0.1, hspace=0.1)
  plt.show()
  
def display_9_images_with_predictions(images, predictions, labels):
  subplot=331
  plt.figure(figsize=(13,13))
  for i, image in enumerate(images):
    title, correct = title_from_label_and_target(predictions[i], labels[i])
    subplot = display_one_flower(image, title, subplot, not correct)
    if i >= 8:
      break
              
  plt.tight_layout()
  plt.subplots_adjust(wspace=0.1, hspace=0.1)
  plt.show()
  
def display_training_curves(training, validation, title, subplot):
  if subplot%10==1: # set up the subplots on the first call
    plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
    plt.tight_layout()
  ax = plt.subplot(subplot)
  ax.set_facecolor('#F8F8F8')
  ax.plot(training)
  ax.plot(validation)
  ax.set_title('model '+ title)
  ax.set_ylabel(title)
  ax.set_xlabel('epoch')
  ax.legend(['train', 'valid.'])

Read images and labels from TFRecords


In [ ]:
def read_tfrecord(example):
    features = {
        "image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
        "class": tf.io.FixedLenFeature([], tf.int64),  # shape [] means scalar
        "one_hot_class": tf.io.VarLenFeature(tf.float32),
    }
    example = tf.io.parse_single_example(example, features)
    image = tf.image.decode_jpeg(example['image'], channels=3)
    image = tf.cast(image, tf.float32) / 255.0  # convert image to floats in [0, 1] range
    image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size will be needed for TPU
    class_label = tf.cast(example['class'], tf.int32)
    one_hot_class = tf.sparse.to_dense(example['one_hot_class'])
    one_hot_class = tf.reshape(one_hot_class, [5])
    return image, one_hot_class

def load_dataset(filenames):
  # Read data from TFRecords

  dataset = tf.data.Dataset.from_tensor_slices(filenames)
  dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=16, num_parallel_calls=AUTO) # faster
  dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)
  return dataset

In the following cell, we'll use a visualization utility function we defined above to preview some flower images with their associated labels.


In [ ]:
display_9_images_from_dataset(load_dataset(training_filenames))

Create training and validation datasets


In [ ]:
def get_batched_dataset(filenames):
  dataset = load_dataset(filenames)
  dataset = dataset.cache() # This dataset fits in RAM
  dataset = dataset.repeat()
  dataset = dataset.shuffle(2048)
  dataset = dataset.batch(BATCH_SIZE)
  dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)
  # For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets
  return dataset

def get_training_dataset():
  return get_batched_dataset(training_filenames)

def get_validation_dataset():
  return get_batched_dataset(validation_filenames)

some_flowers, some_labels = dataset_to_numpy_util(load_dataset(validation_filenames), 8*20)

Build, train, and evaluate the model

In this section we'll define the layers of our model using the Keras Sequential model API. Then we'll run training and evaluation, and finally run some test predictions on the local model.


In [ ]:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu', input_shape=[*IMAGE_SIZE, 3]),
    tf.keras.layers.Conv2D(kernel_size=3, filters=30, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(kernel_size=3, filters=60, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(kernel_size=3, filters=90, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(kernel_size=3, filters=110, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(kernel_size=3, filters=130, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(kernel_size=1, filters=40, padding='same', activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax')
])

model.compile(
  optimizer='adam',
  loss= 'categorical_crossentropy',
  metrics=['accuracy'])

model.summary()

Train the model

Train this on a GPU if you have access (in Colab, from the menu select Runtime --> Change runtime type). On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes.


In [ ]:
EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75%

history = model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
                    validation_data=get_validation_dataset(), validation_steps=validation_steps)
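
Optionally, you can plot the training curves with the display_training_curves helper defined earlier. This is a minimal sketch; note that the accuracy key in history.history is 'acc' in TF 1.x and 'accuracy' in newer versions.


In [ ]:
# Optional: visualize loss and accuracy curves using the helper defined above.
acc_key = 'acc' if 'acc' in history.history else 'accuracy'
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 211)
display_training_curves(history.history[acc_key], history.history['val_' + acc_key], 'accuracy', 212)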

Get predictions on local model and visualize them


In [ ]:
# Randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(8*20)
some_flowers, some_labels = (some_flowers[permutation], some_labels[permutation])

predictions = model.predict(some_flowers, batch_size=16)
evaluations = model.evaluate(some_flowers, some_labels, batch_size=16)
  
print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())
print('[val_loss, val_acc]', evaluations)

In [ ]:
display_9_images_with_predictions(some_flowers, predictions, some_labels)

Export the model as a TF 1 SavedModel

AI Explanations currently supports TensorFlow 1.x. In order to deploy our model in a format compatible with AI Explanations, we'll follow the steps below to convert our Keras model to a TF Estimator, and then use the export_saved_model method to generate the SavedModel and save it in GCS.


In [ ]:
## Convert our Keras model to an estimator and then export to SavedModel
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='savedmodel_export')

The decode_img_bytes function below handles converting image bytes (the format our served model will expect) into floats: the [192, 192, 3] matrix our model expects. For image explanation models, we recommend this approach rather than sending an image as a float array from the client. In the serving function below, the feature key conv2d_input matches the input name that model_to_estimator derives from the first layer of our Keras model; it's the same key we'll use later when building prediction requests.


In [ ]:
def decode_img_bytes(img_bytes, height, width, color_depth):
  features = tf.squeeze(img_bytes, axis=1, name='input_squeeze')
  float_pixels = tf.map_fn(
    lambda img_string: tf.io.decode_image(
        img_string, 
        channels=color_depth,
        dtype=tf.float32
    ),
    features,
    dtype=tf.float32,
    name='input_convert'
  )

  float_pixels.set_shape((None, height, width, color_depth))
  float_pixels = tf.identity(float_pixels, name='input_pixels')

  return float_pixels

def serving_input_receiver_fn():
  img_bytes = tf.placeholder(shape=(None,1), dtype=tf.string)
  img_float = decode_img_bytes(img_bytes, 192,192, 3)
  return tf.estimator.export.ServingInputReceiver({'conv2d_input': img_float}, {'conv2d_input': img_bytes})

In [ ]:
export_path = keras_estimator.export_saved_model(
  'gs://' + BUCKET_NAME + '/explanations',
  serving_input_receiver_fn
).decode('utf-8')
print("Model exported to: ", export_path)

In [ ]:
!saved_model_cli show --dir $export_path --all
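
Optionally, you can load the exported SavedModel's serving signature in Python to confirm tensor names programmatically. This is a minimal sketch; note that the input tensor we reference in the explanation metadata below (input_pixels:0) is the identity op we named inside decode_img_bytes, not the signature's bytes placeholder.


In [ ]:
# Optional sketch: print the serving signature's tensor names.
inspect_graph = tf.Graph()
with tf.Session(graph=inspect_graph) as sess:
  meta_graph = tf.saved_model.loader.load(sess, ['serve'], export_path)
  signature = meta_graph.signature_def['serving_default']
  print('Signature inputs: ', {k: v.name for k, v in signature.inputs.items()})
  print('Signature outputs:', {k: v.name for k, v in signature.outputs.items()})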

Generate the metadata for AI Explanations

In order to deploy this model to AI Explanations, we need to create an explanation_metadata.json file with information about our model's inputs, outputs, and baseline.

For image models, using [0,1] as your input baseline represents black and white baseline images. In this case we're using np.random to generate the baseline, because our training images contain a lot of black and white pixels (for example, white daisy petals).


In [ ]:
random_baseline = np.random.rand(192,192,3)

In [ ]:
explanation_metadata = {
    "inputs": {
      "data": {
        "input_tensor_name": "input_pixels:0",
        "modality": "image",
        "input_baselines": [random_baseline.tolist()]
      }
    },
    "outputs": {
      "probability": {
        "output_tensor_name": "dense/Softmax:0"
      }
    },
  "framework": "tensorflow"
  }
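
For reference, if you'd rather use solid black and white baseline images than the random baseline, only the input_baselines value changes. The following is a sketch and isn't used in the rest of this tutorial.


In [ ]:
# Alternative metadata (sketch, not used below): scalar baselines 0 and 1
# represent an all-black and an all-white baseline image.
explanation_metadata_bw = {
    "inputs": {
      "data": {
        "input_tensor_name": "input_pixels:0",
        "modality": "image",
        "input_baselines": [0, 1]
      }
    },
    "outputs": {
      "probability": {
        "output_tensor_name": "dense/Softmax:0"
      }
    },
    "framework": "tensorflow"
}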

In [ ]:
# Write the json to a local file
with open('explanation_metadata.json', 'w') as output_file:
  json.dump(explanation_metadata, output_file)

In [ ]:
# Copy this file into the GCS location with our SavedModel assets
!gsutil cp explanation_metadata.json $export_path
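
Optionally, list the export directory to confirm that explanation_metadata.json now sits alongside saved_model.pb; AI Explanations expects the metadata file in the same directory as the SavedModel.


In [ ]:
# Optional: the metadata file must live next to the SavedModel assets.
!gsutil ls $export_path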

Deploy model to AI Explanations

In this step we'll use the gcloud CLI to deploy our model to AI Explanations.

Create the model


In [ ]:
MODEL = 'flowers'

In [ ]:
# Create the model if it doesn't exist yet (you only need to run this once)
!gcloud ai-platform models create $MODEL --enable-logging --regions $REGION

Create explainable model versions

For image models, we offer two choices for explanation methods:

  • Integrated Gradients (IG)
  • XRAI

You can find more info on each method in the documentation. Below, we'll show you how to deploy a version with each method so that you can compare results. If you already know which explanation method you'd like to use, you can deploy one version and skip the code blocks for the other method.

Creating the version will take ~5-10 minutes. Note that your first deploy may take longer.

Deploy an explainable model with Integrated Gradients


In [ ]:
# Each time you create a version the name should be unique
IG_VERSION = 'v_ig'

In [ ]:
# Create the version with gcloud
!gcloud beta ai-platform versions create $IG_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 25

In [ ]:
# Make sure the IG model deployed correctly. State should be `READY` in the following log
!gcloud ai-platform versions describe $IG_VERSION --model $MODEL

Deploy an explainable model with XRAI


In [ ]:
# Each time you create a version the name should be unique
XRAI_VERSION = 'v_xrai'

In [ ]:
# Create the XRAI version with gcloud
!gcloud beta ai-platform versions create $XRAI_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method xrai \
--num-integral-steps 25

In [ ]:
# Make sure the XRAI model deployed correctly. State should be `READY` in the following log
!gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL

Get predictions and explanations on deployed model

Here we'll prepare some test images to send to our model. Then we'll use the AI Platform Prediction API to get the model's predicted class along with the explanation for each image.


In [ ]:
# Download test flowers from public bucket
!mkdir flowers
!gsutil -m cp gs://flowers_model/test_flowers/* ./flowers

In [ ]:
# Resize the images to what our model is expecting (192,192)
test_filenames = []

for i in os.listdir('flowers'):
  img_path = 'flowers/' + i
  with PIL.Image.open(img_path) as ex_img:
    resize_img = ex_img.resize([192,192])
    resize_img.save(img_path)
    test_filenames.append(img_path)

In [ ]:
# Prepare our prediction JSON to send to our Cloud model
instances = []

for i in test_filenames:
  with open(i, 'rb') as example_img:
    b64str = b64encode(example_img.read()).decode('utf-8')
    instances.append({'conv2d_input': [{'b64': b64str}]})
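
Each instance wraps the base64-encoded image bytes under the conv2d_input key, matching the serving signature we exported above. Optionally, you can preview the (truncated) structure of one instance.


In [ ]:
# Optional: preview one instance's structure (truncated; the b64 string is long).
print(json.dumps(instances[0])[:120] + '...')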

The predict_json method below calls our deployed model with the specified image data, model name, and version.


In [ ]:
# This is adapted from a sample in the docs
# Find it here: https://cloud.google.com/ai-platform/prediction/docs/online-predict#python

def predict_json(project, model, instances, version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the AI Platform Model is deployed.
        model (str): model name.
        instances ([Mapping[str: Any]]): Keys should be the names of Tensors
            your deployed model expects as inputs. Values should be datatypes
            convertible to Tensors, or (potentially nested) lists of datatypes
            convertible to tensors.
        version: str, version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """

    service = googleapiclient.discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)

    if version is not None:
        name += '/versions/{}'.format(version)

    response = service.projects().explain(
        name=name,
        body={'instances': instances}
    ).execute()

    if 'error' in response:
        raise RuntimeError(response['error'])

    return response

Make an AI Explanations request with gcloud

First we'll look at the explanations results for IG, then we'll compare with XRAI.

If you only deployed one model above, run only the cell for that explanation method.


In [ ]:
# IG EXPLANATIONS
ig_response = predict_json(PROJECT_ID, MODEL, instances, IG_VERSION)

In [ ]:
# XRAI EXPLANATIONS
xrai_response = predict_json(PROJECT_ID, MODEL, instances, XRAI_VERSION)

See our model's predicted classes without explanations

First, let's preview the images and see what our model predicted for them. Why did the model predict these classes? We'll see explanations in the next section.


In [ ]:
from io import BytesIO
import matplotlib.image as mpimg
import base64

# Note: change the `ig_response` variable below if you didn't deploy an IG model
for i,val in enumerate(ig_response['explanations']):
    class_name = CLASSES[val['attributions_by_label'][0]['label_index']]
    confidence_score = str(round(val['attributions_by_label'][0]['example_score'] * 100, 3)) + '%'
    print('Predicted class: ' + class_name + '\n' + 'Confidence score: ' + confidence_score)
    
    img_b64 = instances[i]['conv2d_input'][0]['b64']
    img = mpimg.imread(BytesIO(base64.b64decode(img_b64)), format='JPG')
    plt.imshow(img, interpolation='nearest')
    plt.show()

Visualize the images with AI Explanations

Now let's look at the explanations.

The images returned show the explanations for only the top class predicted by the model. This means that if one of our model's predictions is incorrect, the pixels you see highlighted are for the incorrect class. For example, if the model predicted rose when it should have predicted tulip, you'll see explanations for why the model thought this image was a rose.

First, we'll visualize the attributions for our Integrated Gradients version. Currently, the image returned by AI Explanations highlights the top 60% of pixels that contributed to the model's prediction; these are the pixels that signaled the model's prediction most strongly.


In [ ]:
import io

for idx, flower in enumerate(ig_response['explanations']):
  predicted_flower = CLASSES[flower['attributions_by_label'][0]['label_index']]
  confidence = flower['attributions_by_label'][0]['example_score']
  print('Predicted flower: {} ({:.1%} confidence)'.format(predicted_flower, confidence))
  b64str = flower['attributions_by_label'][0]['attributions']['data']['b64_jpeg']
  i = base64.b64decode(b64str)
  i = io.BytesIO(i)
  i = mpimg.imread(i, format='JPG')

  plt.imshow(i, interpolation='nearest')
  plt.show()

Let's compare this with the image explanations we get from our XRAI version.


In [ ]:
for idx, flower in enumerate(xrai_response['explanations']):
  predicted_flower = CLASSES[flower['attributions_by_label'][0]['label_index']]
  confidence = flower['attributions_by_label'][0]['example_score']
  print('Predicted flower: {} ({:.1%} confidence)'.format(predicted_flower, confidence))
  b64str = flower['attributions_by_label'][0]['attributions']['data']['b64_jpeg']
  i = base64.b64decode(b64str)
  i = io.BytesIO(i)
  i = mpimg.imread(i, format='JPG')

  plt.imshow(i, interpolation='nearest')
  plt.show()

Sanity check our explanations

To better make sense of the feature attributions we're getting, we should compare them with our model's baseline. For image models, the baseline_score returned by AI Explanations is the score our model would give an input consisting of the baseline image we specified. The baseline score is different for each class in our model, but constant within a class: every time your model predicts tulip as the top class, you'll see the same baseline score.

In this case, we used a baseline image of np.random randomly generated values. If you'd like the baseline for your model to be solid black and white images instead, pass [0,1] as the value to input_baselines in your explanation_metadata.json file above.

If the baseline_score is very close to the value of example_score, the highlighted pixels may not be meaningful.

Below we'll calculate the difference between baseline_score and example_score for the 3 test images above.

Note that the score values for classification models are probabilities: the confidence your model has in its predicted class. A score of 0.90 for tulip means your model has classified the image as a tulip with 90% confidence.

We're running sanity checks below on our IG model, but if you'd like to inspect your XRAI model just swap out the ig_response and IG_VERSION variables below.


In [ ]:
for i,val in enumerate(ig_response['explanations']):
  baseline_score = val['attributions_by_label'][0]['baseline_score']
  predicted_score = val['attributions_by_label'][0]['example_score']
  print('Baseline score: ', baseline_score) 
  print('Predicted score: ', predicted_score)
  print('Predicted - Baseline: ', predicted_score - baseline_score, '\n')

As another sanity check, we'll also look at the explanations for this model's baseline image: an image array of randomly generated values using np.random. First, we'll convert the same np.random baseline array we generated above to a base64 string and preview it.


In [ ]:
# Convert our baseline from above to a base64 string
rand_test_img = PIL.Image.fromarray((random_baseline * 255).astype('uint8'))
buffer = BytesIO()
rand_test_img.save(buffer, format="BMP")
new_image_string = base64.b64encode(buffer.getvalue()).decode("utf-8")

# Preview it
plt.imshow(rand_test_img)

In [ ]:
# Save the image to a variable in the format our model is expecting
sanity_check_img = {'conv2d_input': [{'b64': new_image_string}]}

In [ ]:
# Make the prediction request
sanity_check_resp = predict_json(PROJECT_ID, MODEL, [sanity_check_img], IG_VERSION)

In [ ]:
# View explanations on the baseline random image
sanity_check_img = base64.b64decode(sanity_check_resp['explanations'][0]['attributions_by_label'][0]['attributions']['data']['b64_jpeg'])
sanity_check_img = io.BytesIO(sanity_check_img)
sanity_check_img = mpimg.imread(sanity_check_img, format='JPG')

plt.imshow(sanity_check_img, interpolation='nearest')
plt.show()

The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a significant difference between these two values, you may need to increase the number of integral steps used when you deploy your model.


In [ ]:
baseline_score = sanity_check_resp['explanations'][0]['attributions_by_label'][0]['baseline_score']
example_score = sanity_check_resp['explanations'][0]['attributions_by_label'][0]['example_score']

print(abs(baseline_score - example_score))
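
If the difference is large, one option is to create an additional version with more integral steps and re-run the sanity check. The following is a sketch; the version name and step count are example values only.


In [ ]:
# Sketch: redeploy with more integral steps (example values; adjust as needed).
IG_V2_VERSION = 'v_ig_50steps'
!gcloud beta ai-platform versions create $IG_V2_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 50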

Cleaning up

To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.

Alternatively, you can clean up individual resources by running the following commands:


In [ ]:
# Delete model version resource
!gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL
!gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL

# Delete model resource
!gcloud ai-platform models delete $MODEL --quiet

# Delete Cloud Storage objects that were created
!gsutil -m rm -r gs://$BUCKET_NAME/explanations

If your Cloud Storage bucket doesn't contain any other objects and you would like to delete it, run gsutil rm -r gs://$BUCKET_NAME.

What's next?

To learn more about AI Explanations, check out the resources here.