Train and deploy on Kubeflow from Notebooks

This notebook shows you how to use Kubeflow to build, train, and deploy models on Kubernetes. It walks you through the following steps:

  • Building an XGBoost model inside a notebook
  • Training the model inside the notebook
  • Performing inference using the model inside the notebook
  • Using Kubeflow Fairing to launch training jobs on Kubernetes
  • Using Kubeflow Fairing to build and deploy a model using Seldon Core
  • Using Kubeflow metadata to record metadata about your models
  • Using Kubeflow Pipelines to build a pipeline to train your model

Prerequisites

  • This notebook assumes you are running inside a Kubeflow 0.6 deployment on GKE, created by following the GKE instructions
  • If you are running somewhere other than GKE, you will need to modify the notebook to use a different Docker registry, or else configure Kubeflow to work with GCR.

Verify we have a GCP account

  • The cell below checks that this notebook was spawned with credentials to access GCP

In [1]:
import os
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()

Install Required Libraries

Install the libraries required to train this model.


In [2]:
import notebook_setup
notebook_setup.notebook_setup()


pip installing requirements.txt
pip installing KFP https://storage.googleapis.com/ml-pipeline/release/0.1.32/kfp.tar.gz
pip installing fairing git+git://github.com/kubeflow/fairing.git@9b0d4ed4796ba349ac6067bbd802ff1d6454d015
Configure docker credentials
  • Import the Python libraries we will use
  • We add the comment "fairing:include-cell" to tell the Kubeflow Fairing preprocessor to keep this cell when converting the notebook to Python code later

In [3]:
# fairing:include-cell
import fire
import joblib
import logging
import nbconvert
import os
import pathlib
import sys
from pathlib import Path
import pandas as pd
import pprint
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor
from importlib import reload
from sklearn.datasets import make_regression
from kubeflow.metadata import metadata
from datetime import datetime
import retrying
import urllib3

In [4]:
# Imports not to be included in the built docker image
import util
import kfp
import kfp.components as comp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
from kubernetes import client as k8s_client
from kubeflow import fairing   
from kubeflow.fairing.builders import append
from kubeflow.fairing.deployers import job
from kubeflow.fairing.preprocessors.converted_notebook import ConvertNotebookPreprocessorWithFire

Code to train and predict

  • In the cells below we define some functions to generate data and train a model
  • These functions could just as easily be defined in a separate python module

In [5]:
# fairing:include-cell
def read_synthetic_input(test_size=0.25):
    """generate synthetic data and split it into train and test."""
    # generate regression dataset
    X, y = make_regression(n_samples=200, n_features=5, noise=0.1)
    train_X, test_X, train_y, test_y = train_test_split(X,
                                                      y,
                                                      test_size=test_size,
                                                      shuffle=False)

    imputer = SimpleImputer()
    train_X = imputer.fit_transform(train_X)
    test_X = imputer.transform(test_X)

    return (train_X, train_y), (test_X, test_y)
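As a quick sanity check on the sizes this split produces, the partition arithmetic can be sketched by hand (an illustrative helper; the real code relies on scikit-learn's train_test_split, which is assumed here to round the test partition up for fractional sizes):

```python
import math

def split_sizes(n_samples, test_size=0.25):
    # Mirror train_test_split's partition arithmetic: the test set takes
    # ceil(n_samples * test_size) rows and the training set the remainder.
    n_test = math.ceil(n_samples * test_size)
    return n_samples - n_test, n_test

# The 200 synthetic samples above yield 150 training and 50 test rows.
```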

In [6]:
# fairing:include-cell
def train_model(train_X,
                train_y,
                test_X,
                test_y,
                n_estimators,
                learning_rate):
    """Train the model using XGBRegressor."""
    model = XGBRegressor(n_estimators=n_estimators, learning_rate=learning_rate)

    model.fit(train_X,
            train_y,
            early_stopping_rounds=40,
            eval_set=[(test_X, test_y)])

    print("Best RMSE on eval: %.2f with %d rounds",
               model.best_score,
               model.best_iteration+1)
    return model

def eval_model(model, test_X, test_y):
    """Evaluate the model performance."""
    predictions = model.predict(test_X)
    mae = mean_absolute_error(test_y, predictions)
    logging.info("mean_absolute_error=%.2f", mae)
    return mae

def save_model(model, model_file):
    """Save XGBoost model for serving."""
    joblib.dump(model, model_file)
    logging.info("Model export success: %s", model_file)

def create_workspace():
    METADATA_STORE_HOST = "metadata-grpc-service.kubeflow" # default DNS of Kubeflow Metadata gRPC service.
    METADATA_STORE_PORT = 8080
    return metadata.Workspace(
        store=metadata.Store(grpc_host=METADATA_STORE_HOST, grpc_port=METADATA_STORE_PORT),
        name="xgboost-synthetic",
        description="workspace for xgboost-synthetic artifacts and executions")

Wrap Training and Prediction in a class

  • In the cell below we wrap training and prediction in a class
  • A class provides the structure we will need to eventually use kubeflow fairing to launch separate training jobs and/or deploy the model on Kubernetes

In [7]:
# fairing:include-cell
class ModelServe(object):    
    def __init__(self, model_file=None):
        self.n_estimators = 50
        self.learning_rate = 0.1
        if not model_file:
            if "MODEL_FILE" in os.environ:
                print("model_file not supplied; checking environment variable")
                model_file = os.getenv("MODEL_FILE")
            else:
                print("model_file not supplied; using the default")
                model_file = "mockup-model.dat"
        
        self.model_file = model_file
        print("model_file={0}".format(self.model_file))
        
        self.model = None
        self._workspace = None
        self.exec = self.create_execution()

    def train(self):
        (train_X, train_y), (test_X, test_y) = read_synthetic_input()
        
        # Here we use Kubeflow's metadata library to record information
        # about the training run to Kubeflow's metadata store.
        self.exec.log_input(metadata.DataSet(
            description="xgboost synthetic data",
            name="synthetic-data",
            owner="someone@kubeflow.org",
            uri="file://path/to/dataset",
            version="v1.0.0"))
        
        model = train_model(train_X,
                          train_y,
                          test_X,
                          test_y,
                          self.n_estimators,
                          self.learning_rate)

        mae = eval_model(model, test_X, test_y)
        
        # Here we log metrics about the model to Kubeflow's metadata store.
        self.exec.log_output(metadata.Metrics(
            name="xgboost-synthetic-traing-eval",
            owner="someone@kubeflow.org",
            description="training evaluation for xgboost synthetic",
            uri="gcs://path/to/metrics",
            metrics_type=metadata.Metrics.VALIDATION,
            values={"mean_absolute_error": mae}))
        
        save_model(model, self.model_file)
        self.exec.log_output(metadata.Model(
            name="housing-price-model",
            description="housing price prediction model using synthetic data",
            owner="someone@kubeflow.org",
            uri=self.model_file,
            model_type="linear_regression",
            training_framework={
                "name": "xgboost",
                "version": "0.9.0"
            },
            hyperparameters={
                "learning_rate": self.learning_rate,
                "n_estimators": self.n_estimators
            },
            version=datetime.utcnow().isoformat("T")))
        
    def predict(self, X, feature_names):
        """Predict using the model for given ndarray.
        
        The predict signature should match the syntax expected by Seldon Core
        https://github.com/SeldonIO/seldon-core so that we can use
        Seldon h to wrap it a model server and deploy it on Kubernetes
        """
        if not self.model:
            self.model = joblib.load(self.model_file)
        # Do any preprocessing
        prediction = self.model.predict(data=X)
        # Do any postprocessing
        return [[prediction.item(0), prediction.item(1)]]

    @property
    def workspace(self):
        if not self._workspace:
            self._workspace = create_workspace()
        return self._workspace
    
    def create_execution(self):                
        r = metadata.Run(
            workspace=self.workspace,
            name="xgboost-synthetic-faring-run" + datetime.utcnow().isoformat("T"),
            description="a notebook run")

        return metadata.Execution(
            name = "execution" + datetime.utcnow().isoformat("T"),
            workspace=self.workspace,
            run=r,
            description="execution for training xgboost-synthetic")

Train your Model Locally

  • Train your model locally inside your notebook
  • To train locally, we simply instantiate the ModelServe class and then call train

In [8]:
model = ModelServe(model_file="mockup-model.dat")
model.train()


MetadataStore with gRPC connection initialized
model_file=mockup-model.dat
[23:44:47] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[0]	validation_0-rmse:134.005
Will train until validation_0-rmse hasn't improved in 40 rounds.
[1]	validation_0-rmse:129.102
[2]	validation_0-rmse:124.39
[3]	validation_0-rmse:119.218
[4]	validation_0-rmse:114.096
[5]	validation_0-rmse:109.494
[6]	validation_0-rmse:107.101
[7]	validation_0-rmse:103.463
[8]	validation_0-rmse:100.657
[9]	validation_0-rmse:96.576
[10]	validation_0-rmse:94.8884
[11]	validation_0-rmse:91.7095
[12]	validation_0-rmse:90.7389
[13]	validation_0-rmse:88.1934
[14]	validation_0-rmse:86.1535
[15]	validation_0-rmse:84.8222
[16]	validation_0-rmse:83.5818
[17]	validation_0-rmse:81.6697
[18]	validation_0-rmse:80.2789
[19]	validation_0-rmse:79.4583
[20]	validation_0-rmse:78.4213
[21]	validation_0-rmse:77.0478
[22]	validation_0-rmse:75.3792
[23]	validation_0-rmse:73.9913
[24]	validation_0-rmse:73.2026
[25]	validation_0-rmse:72.2079
[26]	validation_0-rmse:70.9489
[27]	validation_0-rmse:70.5206
[28]	validation_0-rmse:69.8641
[29]	validation_0-rmse:69.0409
[30]	validation_0-rmse:68.3776
[31]	validation_0-rmse:67.2776
[32]	validation_0-rmse:66.7612
[33]	validation_0-rmse:65.9548
[34]	validation_0-rmse:65.5048
[35]	validation_0-rmse:64.8582
[36]	validation_0-rmse:64.118
[37]	validation_0-rmse:63.5615
[38]	validation_0-rmse:63.2716
[39]	validation_0-rmse:62.9765
[40]	validation_0-rmse:62.3468
[41]	validation_0-rmse:62.0579
[42]	validation_0-rmse:61.9598
[43]	validation_0-rmse:61.6452
[44]	validation_0-rmse:61.2468
[45]	validation_0-rmse:60.7332
[46]	validation_0-rmse:60.6493
[47]	validation_0-rmse:60.2032
[48]	validation_0-rmse:59.9972
[49]	validation_0-rmse:59.5956
mean_absolute_error=47.22
Model export success: mockup-model.dat
Best RMSE on eval: %.2f with %d rounds 59.595573 50

Predict locally

  • Run prediction inside the notebook using the newly created model
  • To run prediction we just invoke predict

In [9]:
(train_X, train_y), (test_X, test_y) = read_synthetic_input()

ModelServe().predict(test_X, None)


MetadataStore with gRPC connection initialized
model_file not supplied; using the default
model_file=mockup-model.dat
[23:44:47] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Out[9]:
[[-30.6968994140625, 45.884098052978516]]

Use Kubeflow Fairing to Launch a K8s Job to train your model

  • Now that you have trained a model locally you can use Kubeflow Fairing to
    1. Launch a Kubernetes job to train the model
    2. Deploy the model on Kubernetes
  • Launching a separate Kubernetes job to train the model has the following advantages

    • You can leverage Kubernetes to run multiple training jobs in parallel
    • You can run long running jobs without blocking your kernel

Configure The Docker Registry For Kubeflow Fairing

  • In order to build Docker images from your notebook, you need a Docker registry where the images will be stored
  • Below you set some variables specifying a GCR container registry
  • Kubeflow Fairing provides a utility function to guess the name of your GCP project

In [10]:
# Setting up Google Container Registry (GCR) for storing output containers
# You can use any Docker container registry instead of GCR
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job'.format(GCP_PROJECT)

Use Kubeflow fairing to build the docker image

  • First you will use Kubeflow Fairing's Kaniko builder to build a Docker image that includes all your dependencies
    • You use Kaniko because you want to be able to run pip to install dependencies
    • Kaniko gives you the flexibility to build images from Dockerfiles
  • Kaniko, however, can be slow
  • So you will build a base image with Kaniko, and then every time your code changes you will build a new image that starts from your base image and simply adds your code to it
  • You use Kubeflow Fairing's append builder to enable these fast rebuilds

In [11]:
# TODO(https://github.com/kubeflow/fairing/issues/426): We should get rid of this once the default 
# Kaniko image is updated to a newer image than 0.7.0.
from kubeflow.fairing import constants
constants.constants.KANIKO_IMAGE = "gcr.io/kaniko-project/executor:v0.14.0"

In [12]:
from kubeflow.fairing.builders import cluster

# output_map is a map of extra files to add to the docker build context.
# It maps each source location to its destination inside the context.
output_map =  {
    "Dockerfile": "Dockerfile",
    "requirements.txt": "requirements.txt",
}


preprocessor = ConvertNotebookPreprocessorWithFire(class_name='ModelServe', notebook_file='build-train-deploy.ipynb',
                                                   output_map=output_map)

if not preprocessor.input_files:
    preprocessor.input_files = set()
input_files=["xgboost_util.py", "mockup-model.dat"]
preprocessor.input_files =  set([os.path.normpath(f) for f in input_files])
preprocessor.preprocess()


Converting build-train-deploy.ipynb to build-train-deploy.py
Creating entry point for the class name ModelServe
Out[12]:
[PosixPath('build-train-deploy.py'), 'xgboost_util.py', 'mockup-model.dat']

Build the base image

  • You use cluster_builder to build the base image
  • You only need to perform this step again if you change your base image or the dependencies you need to install
  • ClusterBuilder takes as input the DockerImage to use as a base image
  • You should use the same Jupyter image that you are using for your notebook server so that your environment will be the same when you launch Kubernetes jobs

In [13]:
# Use a stock jupyter image as our base image
# TODO(jlewi): Should we try to use the downward API to default to the image we are running in?
base_image = "gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0"
# We use a custom Dockerfile 
cluster_builder = cluster.cluster.ClusterBuilder(registry=DOCKER_REGISTRY,
                                                 base_image=base_image,
                                                 preprocessor=preprocessor,
                                                 dockerfile_path="Dockerfile",
                                                 pod_spec_mutators=[fairing.cloud.gcp.add_gcp_credentials_if_exists],
                                                 context_source=cluster.gcs_context.GCSContextSource())
cluster_builder.build()


Building image using cluster builder.
Creating docker context: /tmp/fairing_context_n34sz0lr
Converting build-train-deploy.ipynb to build-train-deploy.py
Creating entry point for the class name ModelServe
Not able to find gcp credentials secret: user-gcp-sa
Trying workload identity service account: default-editor
Waiting for fairing-builder-dcbz2-lqzjg to start...
Waiting for fairing-builder-dcbz2-lqzjg to start...
Waiting for fairing-builder-dcbz2-lqzjg to start...
Pod started running True
ERROR: logging before flag.Parse: E0226 23:44:52.505936       1 metadata.go:241] Failed to unmarshal scopes: invalid character 'h' looking for beginning of value
INFO[0002] Resolved base name gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0 to gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0
INFO[0002] Resolved base name gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0 to gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0
INFO[0002] Downloading base image gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0
INFO[0002] Error while retrieving image from cache: getting file info: stat /cache/sha256:fe174faf7c477bc3dae796b067d98ac3f0d31e8075007a1146f86d13f2c98e13: no such file or directory
INFO[0002] Downloading base image gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0
INFO[0003] Built cross stage deps: map[]
INFO[0003] Downloading base image gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0
INFO[0003] Error while retrieving image from cache: getting file info: stat /cache/sha256:fe174faf7c477bc3dae796b067d98ac3f0d31e8075007a1146f86d13f2c98e13: no such file or directory
INFO[0003] Downloading base image gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0
INFO[0003] Using files from context: [/kaniko/buildcontext/requirements.txt]
INFO[0003] Checking for cached layer gcr.io/kubeflow-ci/fairing-job/fairing-job/cache:233bc2f24de09b29aa4c12d0f5adcc3098286c3c35eb0b4864fa00f73d8b9d2c...
INFO[0003] Using caching version of cmd: COPY requirements.txt .
INFO[0003] cmd: USER
INFO[0003] Checking for cached layer gcr.io/kubeflow-ci/fairing-job/fairing-job/cache:1acac4c9cb73d1b18003ae6076fd264e37af3983927234122784b04452b9b44e...
INFO[0004] Using caching version of cmd: RUN pip3 --no-cache-dir install -r requirements.txt
INFO[0004] cmd: USER
INFO[0004] Skipping unpacking as no commands require it.
INFO[0004] Taking snapshot of full filesystem...
INFO[0004] COPY requirements.txt .
INFO[0004] Found cached layer, extracting to filesystem
INFO[0004] extractedFiles: [/tf/requirements.txt / /tf]
INFO[0004] Taking snapshot of files...
INFO[0004] USER root
INFO[0004] cmd: USER
INFO[0004] No files changed in this command, skipping snapshotting.
INFO[0004] RUN pip3 --no-cache-dir install -r requirements.txt
INFO[0004] Found cached layer, extracting to filesystem
INFO[0032] Taking snapshot of files...
INFO[0070] USER jovyan
INFO[0070] cmd: USER
INFO[0070] No files changed in this command, skipping snapshotting.

Build the actual image

Here you use the append builder to add your code to the base image

  • Calling preprocessor.preprocess() converts your notebook file to a python file

    • You are using the ConvertNotebookPreprocessorWithFire
    • This preprocessor converts ipynb files to py files by doing the following

      1. Removing all cells which don't have a comment # fairing:include-cell
      2. Using python-fire to add entry points for the class specified in the constructor
    • Calling preprocess() creates the file build-train-deploy.py

  • You use the AppendBuilder to rapidly build a new docker image by quickly adding some files to an existing docker image

    • The AppendBuilder is very fast, so it's convenient for rebuilding your images as you iterate on your code
    • The AppendBuilder will add the converted notebook, build-train-deploy.py, along with any files specified in preprocessor.input_files to /app in the newly created image
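The entry point that python-fire adds can be sketched by hand as follows (an illustrative stand-in for the generated code, not the preprocessor's exact output):

```python
class ModelServe:
    def train(self):
        return "trained"

def dispatch(argv):
    # python-fire style dispatch, sketched by hand: the first CLI argument
    # names the ModelServe method to invoke, so running
    # `python build-train-deploy.py train` calls ModelServe().train().
    return getattr(ModelServe(), argv[0])()
```

This is why extending the pod's command with "train" in the job launch below is enough to run ModelServe.train inside the cluster.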

In [14]:
preprocessor.preprocess()

builder = append.append.AppendBuilder(registry=DOCKER_REGISTRY,
                                      base_image=cluster_builder.image_tag, preprocessor=preprocessor)
builder.build()


Converting build-train-deploy.ipynb to build-train-deploy.py
Creating entry point for the class name ModelServe
Building image using Append builder...
Creating docker context: /tmp/fairing_context_x4g0orab
Converting build-train-deploy.ipynb to build-train-deploy.py
Creating entry point for the class name ModelServe
build-train-deploy.py already exists in Fairing context, skipping...
Loading Docker credentials for repository 'gcr.io/kubeflow-ci/fairing-job/fairing-job:F47EE88D'
Invoking 'docker-credential-gcloud' to obtain Docker credentials.
Successfully obtained Docker credentials.
Image successfully built in 2.249176573008299s.
Pushing image gcr.io/kubeflow-ci/fairing-job/fairing-job:BDE79D77...
Loading Docker credentials for repository 'gcr.io/kubeflow-ci/fairing-job/fairing-job:BDE79D77'
Invoking 'docker-credential-gcloud' to obtain Docker credentials.
Successfully obtained Docker credentials.
Uploading gcr.io/kubeflow-ci/fairing-job/fairing-job:BDE79D77
Layer sha256:8832e37735788665026956430021c6d1919980288c66c4526502965aeb5ac006 exists, skipping
Layer sha256:b4ecb6928817c974946ba93ffc5ce60de886457eb57955dae9d7bc8facfb690a exists, skipping
Layer sha256:9269cef1ab8b202433fe1dfbfbdf4649926d70d7a8b94f0324421bda79b917fa exists, skipping
Layer sha256:d77b634303a107b22366d05d1071bf79e7d2a27b3d6db1fb726fcfd5dd5f9831 exists, skipping
Layer sha256:5bd1cb59702536c10e96bb14e54846922c9b257580d4e2c733076a922525240b exists, skipping
Layer sha256:7babe47a4c402afbe26f10dffceb85be7bfd2072a96b816814503f41ce9c5273 exists, skipping
Layer sha256:21a7832aeb8625dc8228ceb115a28222f87e0fbce61b2588c42a2cce7a3a63d6 exists, skipping
Layer sha256:107cba84ef3d72ed995c76c7a4f60ba5613f58b029ab7e42ac20ece99bec88b1 exists, skipping
Layer sha256:a31c3b1caad473a474d574283741f880e37c708cc06ee620d3e93fa602125ee0 exists, skipping
Layer sha256:92d24c89f5bc70958385728755b042a5a45bddf2f997de80e84d1161f43ba316 exists, skipping
Layer sha256:e590ee7edf442435692956d6fed54190416a217147a50c63e73a6a78d15bec84 exists, skipping
Layer sha256:96685dce34a0d24bf69741972441398cffbed89aed4f40e3c063176c59a3c81c exists, skipping
Layer sha256:daa5c419d33d51d1730ea530f4f7335640f5bb42856f319c63a1a521aee368c1 exists, skipping
Layer sha256:016724bbd2c9643f24eff7c1e86d9202d7c04caddd7fdd4375a77e3998ce8203 exists, skipping
Layer sha256:b5494e32d0131350be270a54399cee65934e90d3c2df87a83757903e627813b2 exists, skipping
Layer sha256:823f4685c03b26a545ca41dcdca1e782ad5e52cf85bac03113edaa6aebdca1b3 exists, skipping
Layer sha256:777cec03b3e23c21f8cf78f07812cc83dd7f352719226f27f361c5b706f6a93f exists, skipping
Layer sha256:dc2840b4417186d66a29d64a039ac164be95929211d808294d36acae9301fc6b exists, skipping
Layer sha256:5e671b828b2af02924968841e5d12084fa78e8722e9510402aaee80dc5d7a6db exists, skipping
Layer sha256:5bac0c144f6e0b7082e3691da95d3f057ee0be0735e9efca76096da59cfd1786 exists, skipping
Layer sha256:5b7339215d1d5f8e68622d584a224f60339f5bef41dbd74330d081e912f0cddd exists, skipping
Layer sha256:35daced67e5901b8de4a92bca9fdc67c8593d400aae483591987442f54c87d0a exists, skipping
Layer sha256:330a9002e0b4aa1e27d3628dd3f02ff9a39d25745b8f2f219b06e3725153ffc0 exists, skipping
Layer sha256:4e8a6b90828e0d339f646d723df8720ffa17c0ffb905f8f009faf1be320ab5d9 exists, skipping
Layer sha256:2b940936f9933b7737cf407f2149dd7393998d7a0bee5acf1c4a57b0487cef79 exists, skipping
Layer sha256:d684674aa1a4d080be26286fd9356f573b80d2448599392e3dcf3c61ce98a0f0 exists, skipping
Layer sha256:68543864d6442a851eaff0500161b92e4a151051cf7ed2649b3790a3f876bada exists, skipping
Layer sha256:21640f54008ccbfc0d100246633f8e6f18f918a0566561f61aebbda785321e56 exists, skipping
Layer sha256:f44c204b040238da05a21af1fd8543ea95f1e9249fac34b3b65217e38815568d exists, skipping
Layer sha256:b054a26005b7f3b032577f811421fab5ec3b42ce45a4012dfa00cf6ed6191b0f exists, skipping
Layer sha256:14ca88e9f6723ce82bc14b241cda8634f6d19677184691d086662641ab96fe68 exists, skipping
Layer sha256:e3ab47ad84d9e11c5fad45791ce00ec5b5f3b7f1ae61a5fab17eb44c399d910f exists, skipping
Layer sha256:01ad04a655b291ed8502f23f5b8c73d94475763e9b3cdbf6d1107f7879aadac6 exists, skipping
Layer sha256:4b9d9f2fa2a2b168f0a49fcd3074c885ab1ca2c507848f7b2e3cee8104f1f7c3 exists, skipping
Layer sha256:5bd2e6f0de430cd3936eec59afb6cf466b052344fe4348ac33a48ac903b661e2 exists, skipping
Layer sha256:15bca5bd6fdc1b1ac156de08ce3b0f57760b345556b64017d1be5cc7c95e5e5b pushed.
Layer sha256:d23f2bdcc84f066126b083288f75d140d58fc252618bda5bf05cb9696a183958 pushed.
Finished upload of: gcr.io/kubeflow-ci/fairing-job/fairing-job:BDE79D77
Pushed image gcr.io/kubeflow-ci/fairing-job/fairing-job:BDE79D77 in 2.74867532402277s.

Launch the K8s Job

  • You can use kubeflow fairing to easily launch a Kubernetes job to invoke code
  • You use Fairing's Kubernetes job library to build a Kubernetes job
    • You use pod mutators to attach GCP credentials to the pod
    • You can also use pod mutators to attach PVCs
  • Since the ConvertNotebookPreprocessorWithFire is using python-fire you can easily invoke any method inside the ModelServe class just by configuring the command invoked by the Kubernetes job
    • In the cell below you extend the command to include train as an argument because you want to invoke the train function

Note: When you invoke train_deployer.deploy, Kubeflow Fairing will stream the logs from the Kubernetes job. The job will initially show some connection errors because it tries to connect to the metadata server. You can ignore these errors; the job will retry until it is able to connect, and then continue.


In [15]:
pod_spec = builder.generate_pod_spec()
train_deployer = job.job.Job(cleanup=False,
                             pod_spec_mutators=[
                             fairing.cloud.gcp.add_gcp_credentials_if_exists])

# Add command line arguments
pod_spec.containers[0].command.extend(["train"])
result = train_deployer.deploy(pod_spec)


Not able to find gcp credentials secret: user-gcp-sa
Trying workload identity service account: default-editor
The job fairing-job-qwdlb launched.
Waiting for fairing-job-qwdlb-67ddb to start...
Waiting for fairing-job-qwdlb-67ddb to start...
Waiting for fairing-job-qwdlb-67ddb to start...
Pod started running True
2020-02-26 23:48:04.056153: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2020-02-26 23:48:04.056318: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2020-02-26 23:48:04.056332: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING: Logging before flag parsing goes to stderr.
I0226 23:48:06.277673 140238089848640 metadata_store.py:80] MetadataStore with gRPC connection initialized
model_file not supplied; using the default
model_file=mockup-model.dat
[23:48:06] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[0]	validation_0-rmse:106.201
Will train until validation_0-rmse hasn't improved in 40 rounds.
[1]	validation_0-rmse:102.289
[2]	validation_0-rmse:99.0904
[3]	validation_0-rmse:95.5223
[4]	validation_0-rmse:92.2357
[5]	validation_0-rmse:90.1649
[6]	validation_0-rmse:87.6004
[7]	validation_0-rmse:85.4127
[8]	validation_0-rmse:82.7163
[9]	validation_0-rmse:81.1641
[10]	validation_0-rmse:79.1006
[11]	validation_0-rmse:77.2564
[12]	validation_0-rmse:75.3755
[13]	validation_0-rmse:74.3393
[14]	validation_0-rmse:72.0505
[15]	validation_0-rmse:70.8315
[16]	validation_0-rmse:69.1124
[17]	validation_0-rmse:67.9681
[18]	validation_0-rmse:66.2094
[19]	validation_0-rmse:64.6999
[20]	validation_0-rmse:63.6925
[21]	validation_0-rmse:62.261
[22]	validation_0-rmse:60.887
[23]	validation_0-rmse:59.5543
[24]	validation_0-rmse:58.3673
[25]	validation_0-rmse:57.0439
[26]	validation_0-rmse:55.7172
[27]	validation_0-rmse:54.7011
[28]	validation_0-rmse:53.8976
[29]	validation_0-rmse:53.3325
[30]	validation_0-rmse:52.81
[31]	validation_0-rmse:51.8806
[32]	validation_0-rmse:50.9026
[33]	validation_0-rmse:50.0451
[34]	validation_0-rmse:49.2711
[35]	validation_0-rmse:48.6533
[36]	validation_0-rmse:47.8613
[37]	validation_0-rmse:47.5519
[38]	validation_0-rmse:46.9383
[39]	validation_0-rmse:46.7275
[40]	validation_0-rmse:46.1317
[41]	validation_0-rmse:45.7704
[42]	validation_0-rmse:45.4888
[43]	validation_0-rmse:44.8847
[44]	validation_0-rmse:44.5583
[45]	validation_0-rmse:43.9202
[46]	validation_0-rmse:43.7332
[47]	validation_0-rmse:43.2122
[48]	validation_0-rmse:43.0383
[49]	validation_0-rmse:42.7427
I0226 23:48:06.457567 140238089848640 build-train-deploy.py:100] mean_absolute_error=33.15
I0226 23:48:06.494030 140238089848640 build-train-deploy.py:106] Model export success: mockup-model.dat
Best RMSE on eval: %.2f with %d rounds 42.742691 50
  • You can use kubectl to inspect the job that fairing created

In [16]:
!kubectl get jobs -l fairing-id={train_deployer.job_id} -o yaml


apiVersion: v1
items:
- apiVersion: batch/v1
  kind: Job
  metadata:
    creationTimestamp: "2020-02-26T23:47:21Z"
    generateName: fairing-job-
    labels:
      fairing-deployer: job
      fairing-id: 54d568cc-58f2-11ea-964d-46fd3ccc57c5
    name: fairing-job-qwdlb
    namespace: zhenghui
    resourceVersion: "11375571"
    selfLink: /apis/batch/v1/namespaces/zhenghui/jobs/fairing-job-qwdlb
    uid: 54d8d81b-58f2-11ea-a99d-42010a8000ac
  spec:
    backoffLimit: 0
    completions: 1
    parallelism: 1
    selector:
      matchLabels:
        controller-uid: 54d8d81b-58f2-11ea-a99d-42010a8000ac
    template:
      metadata:
        annotations:
          sidecar.istio.io/inject: "false"
        creationTimestamp: null
        labels:
          controller-uid: 54d8d81b-58f2-11ea-a99d-42010a8000ac
          fairing-deployer: job
          fairing-id: 54d568cc-58f2-11ea-964d-46fd3ccc57c5
          job-name: fairing-job-qwdlb
        name: fairing-deployer
      spec:
        containers:
        - command:
          - python
          - /app/build-train-deploy.py
          - train
          env:
          - name: FAIRING_RUNTIME
            value: "1"
          image: gcr.io/kubeflow-ci/fairing-job/fairing-job:BDE79D77
          imagePullPolicy: IfNotPresent
          name: fairing-job
          resources: {}
          securityContext:
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          workingDir: /app/
        dnsPolicy: ClusterFirst
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: default-editor
        serviceAccountName: default-editor
        terminationGracePeriodSeconds: 30
  status:
    completionTime: "2020-02-26T23:48:08Z"
    conditions:
    - lastProbeTime: "2020-02-26T23:48:08Z"
      lastTransitionTime: "2020-02-26T23:48:08Z"
      status: "True"
      type: Complete
    startTime: "2020-02-26T23:47:21Z"
    succeeded: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Deploy the trained model to Kubeflow for predictions

  • Now that you have trained a model you can use kubeflow fairing to deploy it on Kubernetes
  • When you call deployer.deploy fairing will create a Kubernetes Deployment to serve your model
  • Kubeflow fairing uses the docker image you created earlier
  • The docker image you created contains your code and Seldon core
  • Kubeflow fairing uses Seldon to wrap your prediction code, ModelServe.predict, in a REST and gRPC server

In [17]:
from kubeflow.fairing.deployers import serving
pod_spec = builder.generate_pod_spec()

module_name = os.path.splitext(preprocessor.executable.name)[0]
deployer = serving.serving.Serving(module_name + ".ModelServe",
                                   service_type="ClusterIP",
                                   labels={"app": "mockup"})
    
url = deployer.deploy(pod_spec)


Cluster endpoint: http://fairing-service-kkbtm.zhenghui.svc.cluster.local:5000/predict
  • You can use kubectl to inspect the deployment that fairing created

In [18]:
!kubectl get deploy -o yaml {deployer.deployment.metadata.name}


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-02-26T23:48:12Z"
  generateName: fairing-deployer-
  generation: 1
  labels:
    app: mockup
    fairing-deployer: serving
    fairing-id: 73532514-58f2-11ea-964d-46fd3ccc57c5
  name: fairing-deployer-p8xc9
  namespace: zhenghui
  resourceVersion: "11375642"
  selfLink: /apis/extensions/v1beta1/namespaces/zhenghui/deployments/fairing-deployer-p8xc9
  uid: 7354b5ec-58f2-11ea-a99d-42010a8000ac
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mockup
      fairing-deployer: serving
      fairing-id: 73532514-58f2-11ea-964d-46fd3ccc57c5
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
      creationTimestamp: null
      labels:
        app: mockup
        fairing-deployer: serving
        fairing-id: 73532514-58f2-11ea-964d-46fd3ccc57c5
      name: fairing-deployer
    spec:
      containers:
      - command:
        - seldon-core-microservice
        - build-train-deploy.ModelServe
        - REST
        - --service-type=MODEL
        - --persistence=0
        env:
        - name: FAIRING_RUNTIME
          value: "1"
        image: gcr.io/kubeflow-ci/fairing-job/fairing-job:BDE79D77
        imagePullPolicy: IfNotPresent
        name: model
        resources: {}
        securityContext:
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        workingDir: /app/
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2020-02-26T23:48:12Z"
    lastUpdateTime: "2020-02-26T23:48:12Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-02-26T23:48:12Z"
    lastUpdateTime: "2020-02-26T23:48:13Z"
    message: ReplicaSet "fairing-deployer-p8xc9-854c699677" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1

Send an inference request to the prediction server

  • Now that you have deployed the model into your Kubernetes cluster, you can send a REST request to perform inference
  • The code below reads some data, sends a prediction request, and then prints out the response

In [19]:
(train_X, train_y), (test_X, test_y) = read_synthetic_input()

In [20]:
result = util.predict_nparray(url, test_X)
pprint.pprint(result.content)


(b'{"data":{"names":["t:0","t:1"],"tensor":{"shape":[1,2],"values":[-49.2782592'
 b'7734375,-54.25324630737305]}},"meta":{}}\n')
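
The response follows Seldon's prediction protocol: the predictions come back under data.tensor as a shape plus a flat list of values. A small sketch of pulling the predicted values out of the bytes shown above (the response is reproduced inline so the snippet is self-contained):

```python
import json

# The raw bytes returned by result.content above.
raw = (b'{"data":{"names":["t:0","t:1"],"tensor":{"shape":[1,2],"values":'
       b'[-49.27825927734375,-54.25324630737305]}},"meta":{}}\n')

response = json.loads(raw)
tensor = response["data"]["tensor"]
shape, values = tensor["shape"], tensor["values"]
print(shape)   # [1, 2]: one input row, two output values
print(values)  # the predicted values for that row
```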

Clean up the prediction endpoint

  • You can use kubectl to delete the Kubernetes resources for your model
  • The service and deployment were created with the app=mockup label, so you can delete them by label
  • If you want to delete the resources, uncomment the following lines and run them

In [21]:
# !kubectl delete service -l app=mockup
# !kubectl delete deploy -l app=mockup

Track Models and Artifacts

  • Using Kubeflow's metadata server you can track models and artifacts
  • The ModelServe code was instrumented to log executions and outputs
  • You can access Kubeflow's metadata UI by selecting Artifact Store from the central dashboard
    • See here for instructions on connecting to Kubeflow's UIs
  • You can also use the python SDK to read and write entries
  • This notebook demonstrates the metadata SDK's functionality in more detail

Create a workspace

  • Kubeflow metadata uses workspaces as a logical grouping for artifacts, executions, and datasets that belong together
  • Earlier in the notebook we defined the function create_workspace to create a workspace for this example
  • You can use that function to return a workspace object and then call list to see all the artifacts in that workspace

In [22]:
ws = create_workspace()
ws.list()


MetadataStore with gRPC connection initialized
Out[22]:
[{'id': 3,
  'workspace': 'xgboost-synthetic',
  'run': 'xgboost-synthetic-faring-run2020-02-26T23:26:36.443396',
  'version': '2020-02-26T23:26:36.660862',
  'owner': 'someone@kubeflow.org',
  'description': 'housing price prediction model using synthetic data',
  'name': 'housing-price-model',
  'model_type': 'linear_regression',
  'create_time': '2020-02-26T23:26:36.660887Z',
  'uri': 'mockup-model.dat',
  'training_framework': {'name': 'xgboost', 'version': '0.9.0'},
  'hyperparameters': {'learning_rate': 0.1, 'n_estimators': 50},
  'labels': None,
  'kwargs': {}},
 {'id': 6,
  'workspace': 'xgboost-synthetic',
  'run': 'xgboost-synthetic-faring-run2020-02-26T23:27:11.144500',
  'create_time': '2020-02-26T23:27:11.458520Z',
  'version': '2020-02-26T23:27:11.458480',
  'owner': 'someone@kubeflow.org',
  'description': 'housing price prediction model using synthetic data',
  'name': 'housing-price-model',
  'model_type': 'linear_regression',
  'uri': 'mockup-model.dat',
  'training_framework': {'name': 'xgboost', 'version': '0.9.0'},
  'hyperparameters': {'learning_rate': 0.1, 'n_estimators': 50},
  'labels': None,
  'kwargs': {}},
 {'id': 9,
  'workspace': 'xgboost-synthetic',
  'run': 'xgboost-synthetic-faring-run2020-02-26T23:30:04.636580',
  'create_time': '2020-02-26T23:30:04.866997Z',
  'version': '2020-02-26T23:30:04.866972',
  'owner': 'someone@kubeflow.org',
  'description': 'housing price prediction model using synthetic data',
  'name': 'housing-price-model',
  'model_type': 'linear_regression',
  'uri': 'mockup-model.dat',
  'training_framework': {'name': 'xgboost', 'version': '0.9.0'},
  'hyperparameters': {'learning_rate': 0.1, 'n_estimators': 50},
  'labels': None,
  'kwargs': {}},
 {'id': 12,
  'workspace': 'xgboost-synthetic',
  'run': 'xgboost-synthetic-faring-run2020-02-26T23:44:47.344352',
  'create_time': '2020-02-26T23:44:47.585805Z',
  'version': '2020-02-26T23:44:47.585782',
  'owner': 'someone@kubeflow.org',
  'description': 'housing price prediction model using synthetic data',
  'name': 'housing-price-model',
  'model_type': 'linear_regression',
  'uri': 'mockup-model.dat',
  'training_framework': {'name': 'xgboost', 'version': '0.9.0'},
  'hyperparameters': {'learning_rate': 0.1, 'n_estimators': 50},
  'labels': None,
  'kwargs': {}},
 {'id': 15,
  'workspace': 'xgboost-synthetic',
  'run': 'xgboost-synthetic-faring-run2020-02-26T23:48:06.287002',
  'version': '2020-02-26T23:48:06.495138',
  'owner': 'someone@kubeflow.org',
  'description': 'housing price prediction model using synthetic data',
  'name': 'housing-price-model',
  'model_type': 'linear_regression',
  'create_time': '2020-02-26T23:48:06.495166Z',
  'uri': 'mockup-model.dat',
  'training_framework': {'name': 'xgboost', 'version': '0.9.0'},
  'hyperparameters': {'learning_rate': 0.1, 'n_estimators': 50},
  'labels': None,
  'kwargs': {}}]
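
Because ws.list() returns plain Python dictionaries like the ones above, you can post-process them with ordinary Python. For example, to find the most recently logged model, you can take the maximum create_time; the sketch below runs on abridged sample entries shaped like the output above rather than on a live metadata server:

```python
# Sample entries shaped like the ws.list() output above (abridged).
artifacts = [
    {"id": 3, "name": "housing-price-model",
     "create_time": "2020-02-26T23:26:36.660887Z",
     "hyperparameters": {"learning_rate": 0.1, "n_estimators": 50}},
    {"id": 15, "name": "housing-price-model",
     "create_time": "2020-02-26T23:48:06.495166Z",
     "hyperparameters": {"learning_rate": 0.1, "n_estimators": 50}},
]

# ISO-8601 timestamps sort lexicographically, so max() over the string works.
latest = max(artifacts, key=lambda a: a["create_time"])
print(latest["id"], latest["create_time"])
```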

Create a pipeline to train your model

  • Kubeflow Pipelines makes it easy to define complex workflows to build and deploy models
  • Below you will define and run a simple one-step pipeline to train your model
  • Kubeflow Pipelines uses experiments to group different runs of a pipeline together
  • So you start by defining a name for your experiment

Define the pipeline

  • To create a pipeline you create a function and decorate it with the @dsl.pipeline decorator
    • You use the decorator to give the pipeline a name and description
  • Inside the function, each step is defined by a ContainerOp that specifies a container to invoke
  • You will use the container image that you built earlier using Kubeflow Fairing
  • Since the Kubeflow Fairing preprocessor added a main function using python-fire, a step in your pipeline can invoke any function in the ModelServe class just by setting the command for the container op
  • See the pipelines SDK reference for more information

In [23]:
@dsl.pipeline(
   name='Training pipeline',
   description='A pipeline that trains an xgboost model on the synthetic dataset.'
)
def train_pipeline():
    command = ["python", preprocessor.executable.name, "train"]
    train_op = dsl.ContainerOp(
        name="train",
        image=builder.image_tag,
        command=command,
    ).apply(
        gcp.use_gcp_secret('user-gcp-sa'),
    )
    train_op.container.working_dir = "/app"

Compile the pipeline

  • Before you can submit the pipeline for execution, you need to compile it into a package file

In [24]:
pipeline_func = train_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)
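
For this release of the pipelines SDK, the compiler writes a .zip package containing the Argo workflow spec for the pipeline. If you want to peek inside the compiled package, a small sketch (the member name pipeline.yaml is what this KFP release typically emits, but treat that as an assumption):

```python
import zipfile

def list_pipeline_package(path):
    """Return the names of the files inside a compiled pipeline package (.zip)."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# e.g. list_pipeline_package(pipeline_filename)
# -> typically a single Argo workflow spec such as ['pipeline.yaml']
```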

Submit the pipeline for execution

  • Kubeflow Pipelines groups runs using experiments
  • So before you submit a pipeline you need to create a new experiment or pick an existing one
  • Once you have compiled a pipeline, you can use the pipelines SDK to submit it for execution

In [25]:
EXPERIMENT_NAME = 'MockupModel'

#Specify pipeline argument values
arguments = {}

# Get or create an experiment and submit a pipeline run
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)

#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)

#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)


Creating experiment MockupModel.
Experiment link here
Run link here