In [ ]:
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

Composing a pipeline from reusable, pre-built, and lightweight components

This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component:

  • Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.
  • Containerize the program.
  • Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
  • Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline.

Then, we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. The pipeline will perform the following steps:

  • Train an MNIST model and export it to Google Cloud Storage.
  • Deploy the exported TensorFlow model on AI Platform Prediction service.
  • Test the deployment by calling the endpoint with test data.

Note: If you want to build the image locally, ensure that Docker is installed by running the following command:

which docker

The result should be something like:

/usr/bin/docker


In [ ]:
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime

import kubernetes as k8s

In [ ]:
# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'

Create client

If you run this notebook outside of a Kubeflow cluster, create the client with the following parameters:

  • host: The URL of your Kubeflow Pipelines instance, for example "https://<your-deployment>.endpoints.<your-project>.cloud.goog/pipeline"
  • client_id: The client ID used by Identity-Aware Proxy
  • other_client_id: The client ID used to obtain the auth codes and refresh tokens.
  • other_client_secret: The client secret used to obtain the auth codes and refresh tokens.
client = kfp.Client(host, client_id, other_client_id, other_client_secret)

If you run this notebook within a Kubeflow cluster, run the following command:

client = kfp.Client()

You'll need to create OAuth client ID credentials of type Other to get other_client_id and other_client_secret. Learn more about creating OAuth credentials


In [ ]:
# Optional parameters, required only when running outside the Kubeflow cluster

# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'

# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following 
# will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'

In [ ]:
# This is to ensure that the proper access token is present to reach the endpoint for 'AI Platform Pipelines'.
# If you are not working with 'AI Platform Pipelines', this step is not necessary.
! gcloud auth print-access-token

In [ ]:
# Create the kfp client
# Detect whether this notebook is running inside the Kubeflow cluster.
in_cluster = True
try:
    k8s.config.load_incluster_config()
except:
    # Loading the in-cluster config failed, so we are running outside the cluster.
    in_cluster = False

if in_cluster:
    client = kfp.Client()
else:
    if HOST.endswith('googleusercontent.com'):
        CLIENT_ID = None
        OTHER_CLIENT_ID = None
        OTHER_CLIENT_SECRET = None

    client = kfp.Client(host=HOST, 
                        client_id=CLIENT_ID,
                        other_client_id=OTHER_CLIENT_ID, 
                        other_client_secret=OTHER_CLIENT_SECRET)
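
Optionally, you can check that the client can actually reach the Kubeflow Pipelines endpoint. A minimal sanity check (assuming the client above was created successfully) is to list a few existing pipelines:

client.list_pipelines(page_size=5)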

Build reusable components

Writing the program code

The following cell creates a file app.py that contains a Python script. The script downloads the MNIST dataset, trains a neural-network-based classification model, writes the training logs, and exports the trained model to Google Cloud Storage.

Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt.
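
For example, a minimal (hypothetical) component program would end by writing its single output value to a local text file, which is how the trained-model path is passed downstream here:

# Hypothetical sketch of the output-file convention described above.
gcs_path = 'gs://my-bucket/mnist_model'   # value produced by the component's logic
with open('/output.txt', 'w') as f:
    f.write(gcs_path)                     # Kubeflow Pipelines reads this file as the component output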


In [ ]:
%%bash

# Create folders if they don't exist.
mkdir -p tmp/reuse_components_pipeline/mnist_training

# Create the Python file that trains the model and exports it to GCS.
cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument(
    '--model_path', type=str, required=True, help='Name of the model file.')
parser.add_argument(
    '--bucket', type=str, required=True, help='GCS bucket name.')
args = parser.parse_args()

bucket=args.bucket
model_path=args.model_path

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.summary()

mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

callbacks = [
  tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()),
  # Interrupt training if val_loss stops improving for over 2 epochs
  tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
]

model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
          validation_data=(x_test, y_test))

from tensorflow import gfile

gcs_path = bucket + "/" + model_path
# The export requires that the target path does not already exist.
if gfile.Exists(gcs_path):
    gfile.DeleteRecursively(gcs_path)
tf.keras.experimental.export_saved_model(model, gcs_path)

with open('/output.txt', 'w') as f:
  f.write(gcs_path)
HERE

Create a Docker container

Create your own container image that includes your program.

Creating a Dockerfile

Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the base image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) into the file system of the container. RUN executes a command (for example, installing dependencies) and commits the result.


In [ ]:
%%bash

# Create the Dockerfile.
# AI Platform only supports TensorFlow 1.14.
cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.14.0-py3
WORKDIR /app
COPY . /app
EOF

Build the Docker image

Now that we have created the Dockerfile, we need to build the image and push it to a registry that will host it. There are three options:

  • Use kfp.containers.build_image_from_working_dir to build the image and push it to the Container Registry (GCR). This requires kaniko, which is installed automatically with a 'full Kubeflow deployment' but not with 'AI Platform Pipelines'.
  • Use Cloud Build, which requires a GCP project with the corresponding API enabled. If you are working with 'AI Platform Pipelines', Cloud Build is the recommended option.
  • Use a local Docker installation and push the image to a registry such as GCR.

Note: If you run this notebook within a Kubeflow cluster (Kubeflow version >= 0.7) and want to explore the kaniko option, you need to ensure that valid credentials are created within your notebook's namespace, for example:

%%bash

NAMESPACE=<your notebook namespace>
SOURCE=kubeflow
NAME=user-gcp-sa
SECRET=$(kubectl get secrets ${NAME} -n ${SOURCE} -o jsonpath="{.data.${NAME}\.json}" | base64 --decode)
kubectl create -n ${NAMESPACE} secret generic ${NAME} --from-literal="${NAME}.json=${SECRET}"

In [ ]:
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
    PROJECT_ID=PROJECT_ID,
    IMAGE_NAME=IMAGE_NAME,
    TAG=TAG
)

APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/'

In [ ]:
# For the purpose of demonstration:
# Cloud Build is chosen for 'AI Platform Pipelines'.
# kaniko is chosen for the 'full Kubeflow deployment'.

if HOST.endswith('googleusercontent.com'):
    # kaniko is not pre-installed with 'AI Platform Pipelines'
    import subprocess
    # ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER}
    cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER]
    build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
    print(build_log)
    
else:
    if kfp.__version__ <= '0.1.36':
        # kfp 0.1.36+ introduced a breaking change that prevents the following code from working.
        import subprocess
        
        builder = kfp.containers._container_builder.ContainerBuilder(
            gcs_staging=GCS_BUCKET + "/kfp_container_build_staging"
        )

        kfp.containers.build_image_from_working_dir(
            image_name=GCR_IMAGE,
            working_dir=APP_FOLDER,
            builder=builder
        )
    else:
        raise("Please build the docker image use either [Docker] or [Cloud Build]")

If you want to use Docker to build the image

Run the following in a cell:

%%bash -s "{PROJECT_ID}"

IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

# Create script to build docker image and push it.
cat > ./tmp/reuse_components_pipeline/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE

cd tmp/reuse_components_pipeline/mnist_training
bash build_image.sh

In [ ]:
image_name = GCR_IMAGE

Writing your component definition file

To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.

For the complete definition of a Kubeflow Pipelines component, see the component specification. However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.

Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section:


In [ ]:
%%bash -s "{image_name}"

GCR_IMAGE="${1}"
echo ${GCR_IMAGE}

# Create the component YAML.
# The image URI must match the Docker image pushed above.

cat > mnist_pipeline_component.yaml <<HERE
name: Mnist training
description: Train a mnist model and save to GCS
inputs:
  - name: model_path
    description: 'Path of the tf model.'
    type: String
  - name: bucket
    description: 'GCS bucket name.'
    type: String
outputs:
  - name: gcs_model_path
    description: 'Trained model path.'
    type: GCSPath
implementation:
  container:
    image: ${GCR_IMAGE}
    command: [
      python, /app/app.py,
      --model_path, {inputValue: model_path},
      --bucket,     {inputValue: bucket},
    ]
    fileOutputs:
      gcs_model_path: /output.txt
HERE

In [ ]:
import os
mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_pipeline_component.yaml'))

In [ ]:
mnist_train_op.component_spec
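
The loaded component exposes the inputs and outputs declared in the YAML above. As a quick sketch (assuming the component loaded without errors), you can print them before wiring the op into a pipeline:

# Inspect the declared inputs and outputs of the loaded component.
for spec_input in mnist_train_op.component_spec.inputs:
    print('input: ', spec_input.name, spec_input.type)
for spec_output in mnist_train_op.component_spec.outputs:
    print('output:', spec_output.name, spec_output.type)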

Define deployment operation on AI Platform


In [ ]:
mlengine_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')

def deploy(
    project_id,
    model_uri,
    model_id,
    runtime_version,
    python_version):
    
    return mlengine_deploy_op(
        model_uri=model_uri,
        project_id=project_id, 
        model_id=model_id, 
        runtime_version=runtime_version, 
        python_version=python_version,
        replace_existing_version=True, 
        set_default=True)

The Kubeflow serving deployment component is an alternative option. Note that the deployed endpoint URI is not available as an output of this component.

kubeflow_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')

def deploy_kubeflow(
    model_dir,
    tf_server_name):
    return kubeflow_deploy_op(
        model_dir=model_dir,
        server_name=tf_server_name,
        cluster_name='kubeflow', 
        namespace='kubeflow',
        pvc_name='', 
        service_type='ClusterIP')

Create a lightweight component for testing the deployment


In [ ]:
def deployment_test(project_id: str, model_name: str, version: str) -> str:

    model_name = model_name.split("/")[-1]
    version = version.split("/")[-1]
    
    import googleapiclient.discovery
    
    def predict(project, model, data, version=None):
      """Run predictions on a list of instances.

      Args:
        project: (str), project where the Cloud ML Engine Model is deployed.
        model: (str), model name.
        data: ([[any]]), list of input instances, where each input instance is a
          list of attributes.
        version: str, version of the model to target.

      Returns:
        Mapping[str: any]: dictionary of prediction results defined by the model.
      """

      service = googleapiclient.discovery.build('ml', 'v1')
      name = 'projects/{}/models/{}'.format(project, model)

      if version is not None:
        name += '/versions/{}'.format(version)

      response = service.projects().predict(
          name=name, body={
              'instances': data
          }).execute()

      if 'error' in response:
        raise RuntimeError(response['error'])

      return response['predictions']

    import tensorflow as tf
    import json
    
    mnist = tf.keras.datasets.mnist
    (x_train, y_train),(x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    result = predict(
        project=project_id,
        model=model_name,
        data=x_test[0:2].tolist(),
        version=version)
    print(result)
    
    return json.dumps(result)

In [ ]:
# # Test the function with already deployed version
# deployment_test(
#     project_id=PROJECT_ID,
#     model_name="mnist",
#     version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6' # previous deployed version for testing
# )

In [ ]:
deployment_test_op = comp.func_to_container_op(
    func=deployment_test, 
    base_image="tensorflow/tensorflow:1.15.0-py3",
    packages_to_install=["google-api-python-client==1.7.8"])

Create your workflow as a Python function

Define your pipeline as a Python function. The @kfp.dsl.pipeline decorator is required and must include name and description properties. You can then either submit the pipeline function directly for execution, or compile it into a package; after compilation completes, a pipeline file is created.


In [ ]:
# Define the pipeline
@dsl.pipeline(
   name='Mnist pipeline',
   description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_deploy_pipeline(
    project_id: str = PROJECT_ID,
    model_path: str = 'mnist_model', 
    bucket: str = GCS_BUCKET
):
    train_task = mnist_train_op(
        model_path=model_path, 
        bucket=bucket
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))
    
    deploy_task = deploy(
        project_id=project_id,
        model_uri=train_task.outputs['gcs_model_path'],
        model_id="mnist", 
        runtime_version="1.14",
        python_version="3.5"
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))  
    
    deploy_test_task = deployment_test_op(
        project_id=project_id,
        model_name=deploy_task.outputs["model_name"], 
        version=deploy_task.outputs["version_name"],
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))
    
    return True

Submit a pipeline run


In [ ]:
pipeline_func = mnist_reuse_component_deploy_pipeline

In [ ]:
experiment_name = 'mnist_kubeflow'

arguments = {"model_path":"mnist_model",
             "bucket":GCS_BUCKET}

run_name = pipeline_func.__name__ + ' run'

# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func, 
                                                  experiment_name=experiment_name, 
                                                  run_name=run_name, 
                                                  arguments=arguments)
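
If you want the notebook to block until the run finishes, a minimal sketch (the timeout of 3600 seconds is an arbitrary choice) is:

# Wait for the submitted run to complete and print its final status.
completed_run = client.wait_for_run_completion(run_result.run_id, timeout=3600)
print(completed_run.run.status)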

As an alternative, you can compile the pipeline into a package. The compiled package can easily be shared with others, who can reuse it to run the pipeline.

pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)

experiment = client.create_experiment('python-functions-mnist')

run_result = client.run_pipeline(
    experiment_id=experiment.id, 
    job_name=run_name, 
    pipeline_package_path=pipeline_filename, 
    params=arguments)
