XGBoost Training on AI Platform

This notebook uses the Census Income Data Set to demonstrate how to train a model on AI Platform.

How to bring your model to AI Platform

Getting your model ready for training can be done in 3 steps:

  1. Create your Python model file
    1. Add code to download your data from Google Cloud Storage so that AI Platform can use it
    2. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model
  2. Prepare a package
  3. Submit the training job

Prerequisites

Before you jump in, let’s cover some of the different tools you’ll be using to get your training job up and running on AI Platform.

Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.

AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.

Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.

Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
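
A quick optional sanity check that the Cloud SDK is installed and authenticated in this environment (output will vary by SDK version and account):

In [ ]:
! gcloud --version
! gcloud auth list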

Part 0: Setup

These variables will be needed for the following steps.

  • TRAINER_PACKAGE_PATH <./census_training> - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.
  • MAIN_TRAINER_MODULE <census_training.train> - Tells AI Platform which module to execute. This is formatted as <folder_name.python_file_name>
  • JOB_DIR <gs://$BUCKET_ID/xgb_job_dir> - The path to a Google Cloud Storage location to use for job output.
  • RUNTIME_VERSION <1.9> - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
  • PYTHON_VERSION <3.5> - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.

Replace:

  • PROJECT_ID <YOUR_PROJECT_ID> - with your project ID. Use the PROJECT_ID that matches your Google Cloud Platform project.
  • BUCKET_ID <YOUR_BUCKET_ID> - with the bucket ID you created above.
  • JOB_DIR <gs://YOUR_BUCKET_ID/xgb_job_dir> - with gs:// followed by the bucket ID you created above, plus a folder for job output.
  • REGION <REGION> - select a region from here or use the default 'us-central1'. The region is where the training job runs and where the model will later be deployed.

In [1]:
%env PROJECT_ID <YOUR_PROJECT_ID>
%env BUCKET_ID <YOUR_BUCKET_ID>
%env REGION <REGION>
%env TRAINER_PACKAGE_PATH ./census_training
%env MAIN_TRAINER_MODULE census_training.train
%env JOB_DIR <gs://YOUR_BUCKET_ID/xgb_job_dir>
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
! mkdir census_training


env: PROJECT_ID=<YOUR_PROJECT_ID>
env: BUCKET_ID=<YOUR_BUCKET_ID>
env: REGION=<REGION>
env: TRAINER_PACKAGE_PATH=./census_training
env: MAIN_TRAINER_MODULE=census_training.train
env: JOB_DIR=<gs://YOUR_BUCKET_ID/xgb_job_dir>
env: RUNTIME_VERSION=1.9
env: PYTHON_VERSION=3.5

The data

The Census Income Data Set that this sample uses for training is provided by the UC Irvine Machine Learning Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/census/data/.

  • Training file is adult.data.csv
  • Evaluation file is adult.test.csv (not used in this notebook)

Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.
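
To confirm that the files are there, or to see what uploading your own data would look like, here is a minimal sketch (the gsutil cp line assumes a hypothetical local file named my_data.csv and is commented out):

In [ ]:
! gsutil ls gs://cloud-samples-data/ml-engine/census/data/
# Uploading your own data would look like this (hypothetical local file):
# ! gsutil cp my_data.csv gs://$BUCKET_ID/data/my_data.csv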

Disclaimer

This dataset is provided by a third party. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset.

Part 1: Create your python model file

First, we'll create the Python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating an XGBoost model. However, there are two key differences:

  1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.
  2. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.

The code in this file loads the data into a pandas DataFrame and pre-processes the data with scikit-learn. This data is then loaded into a DMatrix and used to train a model. Lastly, the model is saved to a file that can be uploaded to AI Platform's prediction service.

REPLACE the line BUCKET_ID = 'torryyang-xgb-models' in the cell below with your GCS BUCKET_ID

Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works before using it with your larger dataset on AI Platform, which avoids wasted time and cost. A sketch of a local run is shown after the cell below.


In [2]:
%%writefile ./census_training/train.py
# [START setup]
import datetime
import os
import subprocess

from sklearn.preprocessing import LabelEncoder
import pandas as pd
from google.cloud import storage
import xgboost as xgb


# TODO: REPLACE 'torryyang-xgb-models' below with your GCS BUCKET_ID
BUCKET_ID = 'torryyang-xgb-models'
# [END setup]

# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
# AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
census_data_filename = 'adult.data.csv'

# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')

# Path to the data inside the public bucket
data_dir = 'ml-engine/census/data/'

# Download the data
blob = bucket.blob(''.join([data_dir, census_data_filename]))
blob.download_to_filename(census_data_filename)
# [END download-data]

# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------

# [START define-and-load-data]

# these are the column labels from the census data files
COLUMNS = (
    'age',
    'workclass',
    'fnlwgt',
    'education',
    'education-num',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'capital-gain',
    'capital-loss',
    'hours-per-week',
    'native-country',
    'income-level'
)
# categorical columns contain data that need to be turned into numerical values before being used by XGBoost
CATEGORICAL_COLUMNS = (
    'workclass',
    'education',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native-country'
)

# Load the training census dataset
with open(census_data_filename, 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# remove column we are trying to predict ('income-level') from features list
train_features = raw_training_data.drop('income-level', axis=1)
# create training labels list
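# note: string values in the raw census CSV carry a leading space,
# which is why the positive label below is ' >50K' rather than '>50K'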
train_labels = (raw_training_data['income-level'] == ' >50K')

# [END define-and-load-data]

# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values before XGBoost can use them.
encoders = {col: LabelEncoder() for col in CATEGORICAL_COLUMNS}
for col in CATEGORICAL_COLUMNS:
    train_features[col] = encoders[col].fit_transform(train_features[col])
# [END categorical-feature-conversion]

# [START load-into-dmatrix-and-train]
# load data into DMatrix object
dtrain = xgb.DMatrix(train_features, train_labels)
# train model
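# an empty params dict means XGBoost's default hyperparameters are used;
# a dict such as {'max_depth': 6, 'eta': 0.3} (illustrative values) could be
# passed instead, and 20 is the number of boosting rounds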
bst = xgb.train({}, dtrain, 20)
# [END load-into-dmatrix-and-train]

# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.bst'
bst.save_model(model)

# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_ID)
blob = bucket.blob('{}/{}'.format(
    datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
    model))
blob.upload_from_filename(model)
# [END export-to-gcs]


Writing ./census_training/train.py
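
As noted above, it can save time and cost to test the trainer locally before submitting it to AI Platform. One way to do that is gcloud's local training runner, which simply executes the trainer module in your current environment; this sketch assumes xgboost, pandas, scikit-learn, and google-cloud-storage are installed locally, and the script as written still downloads from and uploads to GCS:

In [ ]:
! gcloud ml-engine local train \
  --package-path $TRAINER_PACKAGE_PATH \
  --module-name $MAIN_TRAINER_MODULE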

Part 2: Create Trainer Package

Before you can run your trainer application with AI Platform, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. You can find more info here.
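
After the cell below runs, the trainer package on disk should look like this (train.py from Part 1 plus the __init__.py that marks the directory as a Python package):

census_training/
    __init__.py
    train.py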


In [3]:
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.


Writing ./census_training/__init__.py

Part 3: Submit Training Job

Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job, which takes the following flags:

  • job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +"%Y%m%d_%H%M%S")
  • job-dir - The path to a Google Cloud Storage location to use for job output.
  • package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.
  • module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.
  • region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.
  • runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
  • python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
  • scale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.

Note: Check to make sure gcloud is set to use your PROJECT_ID


In [4]:
! gcloud config set project $PROJECT_ID


Updated property [core/project].

Submit the training job.


In [5]:
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \
  --job-dir $JOB_DIR \
  --package-path $TRAINER_PACKAGE_PATH \
  --module-name $MAIN_TRAINER_MODULE \
  --region $REGION \
  --runtime-version=$RUNTIME_VERSION \
  --python-version=$PYTHON_VERSION \
  --scale-tier BASIC


Job [census_training_20180719_224106] submitted successfully.
Your job is still active. You may view the status of your job with the command

  $ gcloud ml-engine jobs describe census_training_20180719_224106

or continue streaming the logs with the command

  $ gcloud ml-engine jobs stream-logs census_training_20180719_224106
jobId: census_training_20180719_224106
state: QUEUED

[Optional] StackDriver Logging

You can view the logs for your training job:

  1. Go to https://console.cloud.google.com/
  2. Select "Logging" in left-hand pane
  3. Select "Cloud ML Job" resource from the drop-down
  4. In filter by prefix, use the value of $JOB_NAME to view the logs
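
Alternatively, the same information is available from the command line using the job id printed by the submission above (the id shown here is from this run and will differ for yours):

In [ ]:
! gcloud ml-engine jobs describe census_training_20180719_224106
! gcloud ml-engine jobs stream-logs census_training_20180719_224106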

[Optional] Verify Model File in GCS

View the contents of the destination model folder to verify that the model file has indeed been uploaded to GCS.

Note: The model can take a few minutes to train and show up in GCS.


In [ ]:
! gsutil ls gs://$BUCKET_ID/census_*

Next Steps:

The AI Platform online prediction service manages computing resources in the cloud to run your models. Check out the documentation pages that describe the process to get online predictions from these exported models using AI Platform.
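
As a rough sketch, serving the exported model.bst for online prediction typically involves creating a model resource and a version that points at the GCS folder containing the file. The model and version names below are placeholders, census_<TIMESTAMP> must be replaced with the folder actually created by your training job, and the --framework flag may require a recent (or beta) gcloud release:

In [ ]:
! gcloud ml-engine models create census --regions $REGION
! gcloud ml-engine versions create v1 \
  --model census \
  --origin gs://$BUCKET_ID/census_<TIMESTAMP>/ \
  --runtime-version $RUNTIME_VERSION \
  --python-version $PYTHON_VERSION \
  --framework xgboost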