Inspecting TFX metadata

Learning Objectives

  1. Use a gRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.

In this lab, you will explore TFX pipeline metadata, including pipeline and run artifacts. An AI Platform Pipelines instance includes the ML Metadata service. In AI Platform Pipelines, ML Metadata uses MySQL as its database backend and can be accessed through a gRPC server.

Setup


In [ ]:
import os

import ml_metadata
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma


from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

from tfx.orchestration import metadata
from tfx.types import standard_artifacts

In [ ]:
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"

Option 1: Explore metadata from existing TFX pipeline runs on the AI Platform Pipelines instance created in lab-02 or lab-03.

1.1 Configure Kubernetes port forwarding

To enable access to the ML Metadata gRPC server, configure Kubernetes port forwarding.

From a JupyterLab terminal, execute the following commands:

gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE]
kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080

Proceed to the next step, "Connecting to ML Metadata".

Option 2: Create a new AI Platform Pipelines instance and inspect metadata from newly triggered pipeline runs.

Hosted AI Platform Pipelines incurs costs for as long as your Kubernetes cluster is running. If you deleted your previous lab instance, proceed through the six steps below to deploy a new TFX pipeline and trigger runs whose metadata you can inspect.


In [ ]:
import yaml

# Set `PATH` to include the directory containing TFX CLI.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}

The pipeline source can be found in the pipeline folder. Switch to the pipeline folder and compile the pipeline.


In [ ]:
%cd pipeline

2.1 Create AI Platform Pipelines cluster

Navigate to the AI Platform Pipelines page in the Google Cloud Console.

Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform Pipelines. Make sure to select "Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform" so that the Kubeflow Pipelines SDK can access your pipeline programmatically during the rest of the lab. Also, provide an app instance name such as "TFX-lab-04".
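
To confirm the cluster from this notebook (an optional check; it assumes the gcloud SDK in your JupyterLab environment is authenticated against the lab project), list the GKE clusters in the project:

In [ ]:
!gcloud container clusters list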

2.2 Configure environment settings

Update the below constants with the settings reflecting your lab environment.

  • GCP_REGION - the compute region for AI Platform Training and Prediction
  • ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the kubeflowpipelines- prefix. Alternatively, you can create a new storage bucket to write pipeline artifacts to.

In [ ]:
!gsutil ls
  • ENDPOINT - set the ENDPOINT constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the AI Platform Pipelines page in the Google Cloud Console:
  1. Open the SETTINGS for your instance.
  2. Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.

In [ ]:
GCP_REGION = 'us-central1'
ENDPOINT = '7ce34659d9b02e54-dot-us-central2.pipelines.googleusercontent.com'
ARTIFACT_STORE_URI = 'gs://asl-ml-immersion-kfp'

PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
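
As an optional sanity check, you can confirm that ENDPOINT is reachable by listing the pipelines registered on the instance with the Kubeflow Pipelines SDK (a minimal sketch; it assumes the kfp package installed in this environment is compatible with your instance):

In [ ]:
import kfp

# Connect to the AI Platform Pipelines instance and list registered pipelines.
client = kfp.Client(host=ENDPOINT)
client.list_pipelines()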

2.3 Compile pipeline


In [ ]:
PIPELINE_NAME = 'tfx_covertype_lab_04'
MODEL_NAME = 'tfx_covertype_classifier'

USE_KFP_SA=False
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.1'
PYTHON_VERSION = '3.7'

In [ ]:
%env PROJECT_ID={PROJECT_ID}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env GCP_REGION={GCP_REGION}
%env MODEL_NAME={MODEL_NAME}
%env PIPELINE_NAME={PIPELINE_NAME}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}

In [ ]:
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py

2.4 Deploy pipeline to AI Platform


In [ ]:
!tfx pipeline create  \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
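
To verify the deployment, you can list the pipelines registered on your instance (an optional check using the same TFX CLI as above):

In [ ]:
!tfx pipeline list --engine kubeflow --endpoint {ENDPOINT}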

(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:


In [ ]:
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}

2.5 Create and monitor pipeline run


In [ ]:
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
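
You can monitor the run from this notebook by listing the runs for the pipeline (alternatively, open the Kubeflow Pipelines UI at your ENDPOINT and watch the run graph):

In [ ]:
!tfx run list --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}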

2.6 Configure Kubernetes port forwarding

To enable access to the ML Metadata gRPC server, configure Kubernetes port forwarding.

From a JupyterLab terminal, execute the following commands:

gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE]
kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080

Connecting to ML Metadata

Configure the ML Metadata gRPC client.


In [ ]:
grpc_host = 'localhost'
grpc_port = 7000
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = grpc_host
connection_config.port = grpc_port
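
Before connecting, you can optionally verify that the port forward configured earlier is active. This is a minimal sketch: it only checks that something is listening on the forwarded port.

In [ ]:
import socket

# Check that the forwarded gRPC port accepts TCP connections.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    reachable = sock.connect_ex((grpc_host, grpc_port)) == 0
print('Port {} is {}.'.format(
    grpc_port, 'open' if reachable else 'not reachable - check your kubectl port-forward'))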

Connect to ML Metadata service.


In [ ]:
store = metadata_store.MetadataStore(connection_config)
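
As a quick smoke test (it assumes at least one pipeline run has completed), confirm that the store responds by counting the artifacts it has recorded:

In [ ]:
# A healthy store with completed runs returns a non-empty artifact list.
print('Number of artifacts: {}'.format(len(store.get_artifacts())))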

Exploring ML Metadata

The Metadata Store uses the following data model:

  • ArtifactType describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded in the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.
  • Artifact describes a specific instance of an ArtifactType and its properties that are written to the Metadata Store.
  • ExecutionType describes a type of component or step in a workflow, and its runtime parameters.
  • Execution is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Every time a developer runs an ML pipeline or step, executions are recorded for each step.
  • Event is a record of the relationship between Artifacts and Executions. When an Execution happens, Events record every Artifact that was used by the Execution and every Artifact that was produced. These records allow for provenance tracking throughout a workflow: by looking at all Events, MLMD knows what Executions happened and what Artifacts were created as a result, and it can recurse back from any Artifact to all of its upstream inputs (see the provenance sketch after this list).
  • ContextType describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.
  • Context is an instance of a ContextType. It captures the shared information within the group, for example: project name, changelist commit id, experiment annotations. It has a user-defined unique name within its ContextType.
  • Attribution is a record of the relationship between Artifacts and Contexts.
  • Association is a record of the relationship between Executions and Contexts.
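
Below is a minimal provenance sketch built from the concepts above (it assumes the store already contains at least one artifact): starting from the most recent Artifact, its Events lead back to the Executions that produced or consumed it.

In [ ]:
# Walk from the most recent artifact back to its related executions via Events.
artifact = store.get_artifacts()[-1]
events = store.get_events_by_artifact_ids([artifact.id])
execution_ids = [event.execution_id for event in events]
for execution in store.get_executions_by_id(execution_ids):
    print('Execution {} (type id: {})'.format(execution.id, execution.type_id))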

List the registered artifact types.


In [ ]:
for artifact_type in store.get_artifact_types():
    print(artifact_type.name)

Display the registered execution types.


In [ ]:
for execution_type in store.get_execution_types():
    print(execution_type.name)

List the registered context types.


In [ ]:
for context_type in store.get_context_types():
    print(context_type.name)

Visualizing TFX artifacts

Retrieve data analysis and validation artifacts


In [ ]:
with metadata.Metadata(connection_config) as store:
    stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)
    schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
    anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)

In [ ]:
stats_path = stats_artifacts[-1].uri
train_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')
eval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')
print("Train stats file:{}, Eval stats file:{}".format(
    train_stats_file, eval_stats_file))

schema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')
print("Generated schame file:{}".format(schema_file))
anomalies_file = os.path.join(anomalies_artifacts[-1].uri, 'anomalies.pbtxt')
print("Generated anomalies file:{}".format(anomalies_file))

Visualize statistics


In [ ]:
train_stats = tfdv.load_statistics(train_stats_file)
eval_stats = tfdv.load_statistics(eval_stats_file)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
                          lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')

Visualize schema


In [ ]:
schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=schema)

Visualize anomalies


In [ ]:
anomalies = tfdv.load_anomalies_text(anomalies_file)
tfdv.display_anomalies(anomalies)

Retrieve model evaluations


In [ ]:
with metadata.Metadata(connection_config) as store:
    model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
    
model_eval_path = model_eval_artifacts[-1].uri
print("Generated model evaluation result:{}".format(model_eval_path))

Visualize model evaluations


In [ ]:
eval_result = tfma.load_eval_result(model_eval_path)
tfma.view.render_slicing_metrics(
    eval_result, slicing_column='Wilderness_Area')

Debugging tip: If the TFMA visualization of the Evaluator results does not render, try switching to the Classic Jupyter Notebook view: click Help > Launch Classic Notebook, re-open this notebook, and re-run the cell above to see the interactive TFMA results.

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.