This notebook assumes that you have already set up a GKE cluster with Kubeflow installed as per this codelab: g.co/codelabs/kubecon18. Currently, this notebook must be run from the Kubeflow JupyterHub installation, as described in the codelab.
In this notebook, we will show how to use the Kubeflow Pipelines SDK to interactively define a pipeline, compile and run it, and then add a new step and run the updated version.
This example pipeline trains a Tensor2Tensor model on GitHub issue data, learning to predict issue titles from issue bodies. It then exports the trained model and deploys the exported model to TensorFlow Serving. The final step in the pipeline launches a web app, which interacts with the TF-Serving instance in order to get model predictions.
Do some installations and imports, and set some variables. Set the WORKING_DIR
to a path under the Cloud Storage bucket you created earlier. The Pipelines SDK is bundled with the notebook server image, but we'll make sure that we're using the most current version for this example. You may need to restart your kernel after the SDK update.
In [ ]:
!pip install -U kfp
In [ ]:
import kfp # the Pipelines SDK.
from kfp import compiler
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.components as comp
from kfp.dsl.types import Integer, GCSPath, String
import kfp.notebook
In [ ]:
# Define some pipeline input variables.
WORKING_DIR = 'gs://YOUR_GCS_BUCKET/t2t/notebooks' # Such as gs://bucket/object/path
PROJECT_NAME = 'YOUR_PROJECT'
GITHUB_TOKEN = 'YOUR_GITHUB_TOKEN' # needed for prediction, to grab issue data from GH
DEPLOY_WEBAPP = 'false'
In [ ]:
# Note that this notebook should be running in JupyterHub in the same cluster as the pipeline system.
# Otherwise, additional config would be required to connect.
client = kfp.Client()
client.list_experiments()
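This notebook runs in-cluster, so kfp.Client() with no arguments finds the Pipelines endpoint automatically. If you were instead connecting from outside the cluster, you could (depending on your setup) point the client at the Pipelines API endpoint explicitly. The host value below is just a placeholder, not a real endpoint:

# Illustrative only -- not needed when running in-cluster as in this codelab.
# client = kfp.Client(host='http://<your-pipelines-api-endpoint>')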
In [ ]:
exp = client.create_experiment(name='t2t_notebook')
Authoring a pipeline is like authoring a normal Python function: the pipeline function describes the topology of the pipeline. The pipeline components (steps) are container-based. For this pipeline, we're using a mix of predefined components loaded from their component definition files and components defined via the dsl.ContainerOp
constructor. For this codelab, we've prebuilt all the components' containers.
While not shown here, there are other ways to build Kubeflow Pipelines components as well, including converting stand-alone Python functions to containers via kfp.components.func_to_container_op(func)
, as sketched below. You can read more here.
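For illustration only (our pipeline doesn't use this), building a component from a standalone Python function might look roughly like the following sketch; the add function here is made up for the example:

# Sketch only: wrap a plain Python function as a pipeline component.
# (kfp.components is imported above as `comp`.)
def add(a: float, b: float) -> float:
  """Return the sum of two numbers."""
  return a + b

# The resulting factory function could then be called inside a pipeline
# definition just like the components loaded from YAML below.
add_op = comp.func_to_container_op(add)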
This pipeline has several steps, which you'll see in the definition below.
We'll first define some constants and load some components from their definition files.
In [ ]:
COPY_ACTION = 'copy_data'
TRAIN_ACTION = 'train'
WORKSPACE_NAME = 'ws_gh_summ'
DATASET = 'dataset'
MODEL = 'model'
copydata_op = comp.load_component_from_url(
  'https://raw.githubusercontent.com/kubeflow/examples/master/github_issue_summarization/pipelines/components/t2t/datacopy_component.yaml'  # pylint: disable=line-too-long
  )
train_op = comp.load_component_from_url(
  'https://raw.githubusercontent.com/kubeflow/examples/master/github_issue_summarization/pipelines/components/t2t/train_component.yaml'  # pylint: disable=line-too-long
  )
metadata_log_op = comp.load_component_from_url(
  'https://raw.githubusercontent.com/kubeflow/examples/master/github_issue_summarization/pipelines/components/t2t/metadata_log_component.yaml'  # pylint: disable=line-too-long
  )
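Each loaded component is exposed as a Python factory function, so if you want to see which inputs a component expects, you can inspect it, for example:

help(copydata_op)  # prints the component's description and expected inputs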
Next, we'll define the pipeline itself.
In [ ]:
@dsl.pipeline(
  name='Github issue summarization',
  description='Demonstrate Tensor2Tensor-based training and TF-Serving'
)
def gh_summ( #pylint: disable=unused-argument
  train_steps: 'Integer' = 2019300,
  project: str = 'YOUR_PROJECT_HERE',
  github_token: str = 'YOUR_GITHUB_TOKEN_HERE',
  working_dir: 'GCSPath' = 'YOUR_GCS_DIR_HERE',
  checkpoint_dir: 'GCSPath' = 'gs://aju-dev-demos-codelabs/kubecon/model_output_tbase.bak2019000/',
  deploy_webapp: str = 'true',
  data_dir: 'GCSPath' = 'gs://aju-dev-demos-codelabs/kubecon/t2t_data_gh_all/'
  ):

  # The data/checkpoint copy step.
  copydata = copydata_op(
    data_dir=data_dir,
    checkpoint_dir=checkpoint_dir,
    model_dir='%s/%s/model_output' % (working_dir, dsl.RUN_ID_PLACEHOLDER),
    action=COPY_ACTION,
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

  # Log dataset metadata for this run.
  log_dataset = metadata_log_op(
    log_type=DATASET,
    workspace_name=WORKSPACE_NAME,
    run_name=dsl.RUN_ID_PLACEHOLDER,
    data_uri=data_dir
    )

  # The Tensor2Tensor training step.
  train = train_op(
    data_dir=data_dir,
    model_dir=copydata.outputs['copy_output_path'],
    action=TRAIN_ACTION, train_steps=train_steps,
    deploy_webapp=deploy_webapp
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

  # Log model metadata for this run.
  log_model = metadata_log_op(
    log_type=MODEL,
    workspace_name=WORKSPACE_NAME,
    run_name=dsl.RUN_ID_PLACEHOLDER,
    model_uri=train.outputs['train_output_path']
    )

  # Deploy the trained model to TF-Serving.
  serve = dsl.ContainerOp(
    name='serve',
    image='gcr.io/google-samples/ml-pipeline-kubeflow-tfserve',
    arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
        "--model_path", train.outputs['train_output_path']
        ]
    )
  log_dataset.after(copydata)
  log_model.after(train)

  train.set_gpu_limit(1)
  train.set_memory_limit('48G')

  # Launch the web app only if the training step says to do so.
  with dsl.Condition(train.outputs['launch_server'] == 'true'):
    webapp = dsl.ContainerOp(
      name='webapp',
      image='gcr.io/google-samples/ml-pipeline-webapp-launcher:v2ap',
      arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
          "--github_token", github_token]
      )
    webapp.after(serve)
In [ ]:
compiler.Compiler().compile(gh_summ, 'ghsumm.tar.gz')
The call below would run the compiled pipeline. We won't actually do that now; instead, we'll first add a new step to the pipeline, and then run the updated version.
In [ ]:
# You'd uncomment this call to actually run the pipeline.
# run = client.run_pipeline(exp.id, 'ghsumm', 'ghsumm.tar.gz',
# params={'working_dir': WORKING_DIR,
# 'github_token': GITHUB_TOKEN,
# 'project': PROJECT_NAME})
Next, let's add a new step to the pipeline above. As currently defined, the pipeline accesses a directory of pre-processed data as input to training. Let's see how we could include the pre-processing as part of the pipeline.
We're going to cheat a bit: processing the full dataset would take too long for this workshop, so we'll use a smaller sample. For that reason, we won't actually make use of the data generated by this step (we'll stick to using the full pre-processed dataset for training), but it shows how we could do so if we had more time.
First, we'll define the new pipeline step. Note the last line of this new function, which gives this step's pod the credentials to access GCP.
In [ ]:
# defining the new data preprocessing pipeline step.
# Note the last line, which gives this step's pod the credentials to access GCP
def preproc_op(data_dir, project):
  return dsl.ContainerOp(
    name='datagen',
    image='gcr.io/google-samples/ml-pipeline-t2tproc',
    arguments=["--data-dir", data_dir, "--project", project]
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))
In [ ]:
# Then define a new Pipeline. It's almost the same as the original one,
# but with the addition of the data processing step.
@dsl.pipeline(
  name='Github issue summarization',
  description='Demonstrate Tensor2Tensor-based training and TF-Serving'
)
def gh_summ2(
  train_steps: 'Integer' = 2019300,
  project: str = 'YOUR_PROJECT_HERE',
  github_token: str = 'YOUR_GITHUB_TOKEN_HERE',
  working_dir: 'GCSPath' = 'YOUR_GCS_DIR_HERE',
  checkpoint_dir: 'GCSPath' = 'gs://aju-dev-demos-codelabs/kubecon/model_output_tbase.bak2019000/',
  deploy_webapp: str = 'true',
  data_dir: 'GCSPath' = 'gs://aju-dev-demos-codelabs/kubecon/t2t_data_gh_all/'
  ):

  # The new pre-processing op.
  preproc = preproc_op(project=project,
    data_dir=('%s/%s/gh_data' % (working_dir, dsl.RUN_ID_PLACEHOLDER)))

  copydata = copydata_op(
    data_dir=data_dir,
    checkpoint_dir=checkpoint_dir,
    model_dir='%s/%s/model_output' % (working_dir, dsl.RUN_ID_PLACEHOLDER),
    action=COPY_ACTION,
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

  log_dataset = metadata_log_op(
    log_type=DATASET,
    workspace_name=WORKSPACE_NAME,
    run_name=dsl.RUN_ID_PLACEHOLDER,
    data_uri=data_dir
    )

  train = train_op(
    data_dir=data_dir,
    model_dir=copydata.outputs['copy_output_path'],
    action=TRAIN_ACTION, train_steps=train_steps,
    deploy_webapp=deploy_webapp
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

  log_dataset.after(copydata)
  # Training now waits on the new pre-processing step.
  train.after(preproc)

  log_model = metadata_log_op(
    log_type=MODEL,
    workspace_name=WORKSPACE_NAME,
    run_name=dsl.RUN_ID_PLACEHOLDER,
    model_uri=train.outputs['train_output_path']
    )

  serve = dsl.ContainerOp(
    name='serve',
    image='gcr.io/google-samples/ml-pipeline-kubeflow-tfserve',
    arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
        "--model_path", train.outputs['train_output_path']
        ]
    )
  log_model.after(train)

  train.set_gpu_limit(1)
  train.set_memory_limit('48G')

  with dsl.Condition(train.outputs['launch_server'] == 'true'):
    webapp = dsl.ContainerOp(
      name='webapp',
      image='gcr.io/google-samples/ml-pipeline-webapp-launcher:v2ap',
      arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
          "--github_token", github_token]
      )
    webapp.after(serve)
In [ ]:
compiler.Compiler().compile(gh_summ2, 'ghsumm2.tar.gz')
In [ ]:
run = client.run_pipeline(exp.id, 'ghsumm2', 'ghsumm2.tar.gz',
                          params={'working_dir': WORKING_DIR,
                                  'github_token': GITHUB_TOKEN,
                                  'deploy_webapp': DEPLOY_WEBAPP,
                                  'project': PROJECT_NAME})
You should be able to see your newly defined pipeline run in the Pipelines dashboard, where you can view the new pipeline's graph, including the added datagen step, and watch the run's progress.
When this new pipeline finishes running, you'll be able to see the generated processed data files in GCS under the path WORKING_DIR/<run_id>/gh_data
. There isn't time in this workshop to pre-process the full dataset, but if there were, we could define the pipeline to read from that generated directory for its training input, as sketched below.
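For reference, pointing the training step at that generated directory would be a small change inside gh_summ2. The following is just a sketch of the idea, not something we run in this workshop (we trained on the full pre-processed dataset):

# Sketch only: inside gh_summ2, the training step could consume the data
# written by the 'preproc' step instead of the default pre-processed dataset.
preproc_data_dir = '%s/%s/gh_data' % (working_dir, dsl.RUN_ID_PLACEHOLDER)
train = train_op(
  data_dir=preproc_data_dir,  # the newly generated (sample) data
  model_dir=copydata.outputs['copy_output_path'],
  action=TRAIN_ACTION, train_steps=train_steps,
  deploy_webapp=deploy_webapp
  ).apply(gcp.use_gcp_secret('user-gcp-sa'))
train.after(preproc)  # still wait for pre-processing to finish first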
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.