In [ ]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

scikit-learn Training on AI Platform

This notebook uses the Census Income Data Set to demonstrate how to train a model on AI Platform.

How to bring your model to AI Platform

Getting your model ready for training can be done in 3 steps:

  1. Create your Python model file
    1. Add code to download your data from Google Cloud Storage so that AI Platform can use it
    2. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model
  2. Prepare a package
  3. Submit the training job

Prerequisites

Before you jump in, let’s cover some of the different tools you’ll be using to get your training job up and running on AI Platform.

Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.

AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.

Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.

Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
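
To confirm that the SDK tools are available from this kernel, you can run a quick optional check from the notebook's shell:

In [ ]:
# Optional: verify that the Cloud SDK tools are on this kernel's PATH.
! gcloud --version
! gsutil version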

Part 0: Setup

These variables will be needed for the following steps.

  • TRAINER_PACKAGE_PATH <./census_training> - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.
  • MAIN_TRAINER_MODULE <census_training.train> - Tells AI Platform which module to execute, formatted as <folder_name.python_file_name>
  • JOB_DIR <gs://$BUCKET_NAME/scikit_learn_job_dir> - The path to a Google Cloud Storage location to use for job output.
  • RUNTIME_VERSION <1.9> - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
  • PYTHON_VERSION <3.5> - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.

Replace:

  • PROJECT_ID <YOUR_PROJECT_ID> - with the ID of your Google Cloud Platform project.
  • BUCKET_NAME <YOUR_BUCKET_NAME> - with the name of the Cloud Storage bucket you want to use for this project.
  • JOB_DIR <gs://YOUR_BUCKET_NAME/scikit_learn_job_dir> - with a gs:// path in that bucket to use for job output.
  • REGION <REGION> - with the region where the training job should run, or keep the default 'us-central1'. Use the same region as the Cloud Storage bucket that stores your training data.

In [1]:
%env PROJECT_ID <PROJECT_ID>
%env BUCKET_NAME <BUCKET_NAME>
%env REGION us-central1
%env TRAINER_PACKAGE_PATH ./census_training
%env MAIN_TRAINER_MODULE census_training.train
%env JOB_DIR gs://<BUCKET_NAME>/scikit_learn_job_dir
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
! mkdir census_training


env: PROJECT_ID=<PROJECT_ID>
env: BUCKET_NAME=<BUCKET_NAME>
env: REGION=us-central1
env: TRAINER_PACKAGE_PATH=./census_training
env: MAIN_TRAINER_MODULE=census_training.train
env: JOB_DIR=gs://<BUCKET_NAME>/scikit_learn_job_dir
env: RUNTIME_VERSION=1.9
env: PYTHON_VERSION=3.5

The data

The Census Income Data Set that this sample uses for training is provided by the UC Irvine Machine Learning Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/sklearn/census_data/.

  • Training file is adult.data
  • Evaluation file is adult.test (not used in this notebook)

Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.
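
If you want to confirm that the public data is reachable from your environment, an optional check (run from the notebook's shell) is to list the files and preview the first few rows of the training file:

In [ ]:
# Optional: list the public census data files and preview the training file.
! gsutil ls gs://cloud-samples-data/ml-engine/sklearn/census_data/
! gsutil cat gs://cloud-samples-data/ml-engine/sklearn/census_data/adult.data | head -n 5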

Disclaimer

This dataset is provided by a third party. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset.

Part 1: Create your Python model file

First, we'll create the Python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a scikit-learn model. However, there are two key differences:

  1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.
  2. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.

The code in this file loads the data into a pandas DataFrame that can be used by scikit-learn. Then the model is fit against the training data. Lastly, scikit-learn's built-in version of joblib is used to save the model to a file that can be uploaded to AI Platform's prediction service.

REPLACE BUCKET_NAME = '<BUCKET_NAME>' in the cell below with your GCS BUCKET_NAME

Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.
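
One lightweight way to do such a local check is sketched below; it is not part of the training package and assumes that pandas, scikit-learn, and gsutil are available in this environment. It fits a small classifier on a few hundred rows of the numerical columns only, just to confirm the environment works end to end (the full preprocessing lives in train.py below):

In [ ]:
# Optional local smoke test (a sketch, not part of the training package).
! gsutil cp gs://cloud-samples-data/ml-engine/sklearn/census_data/adult.data .

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Load a small sample of the raw CSV (no header row in the file).
sample = pd.read_csv('adult.data', header=None, nrows=500)
# Columns 0, 2, 4, 10, 11, 12 are age, fnlwgt, education-num,
# capital-gain, capital-loss and hours-per-week.
features = sample[[0, 2, 4, 10, 11, 12]]
labels = (sample[14] == ' >50K')
clf = RandomForestClassifier(n_estimators=10).fit(features, labels)
print('Training accuracy on the sample:', clf.score(features, labels))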


In [2]:
%%writefile ./census_training/train.py
# [START setup]
import datetime
import pandas as pd

from google.cloud import storage

from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer


# TODO: REPLACE '<BUCKET_NAME>' with your GCS BUCKET_NAME
BUCKET_NAME = '<BUCKET_NAME>'
# [END setup]


# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
# AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')

# Path to the data inside the public bucket
blob = bucket.blob('ml-engine/sklearn/census_data/adult.data')
# Download the data
blob.download_to_filename('adult.data')
# [END download-data]


# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------
# [START define-and-load-data]
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
    'age',
    'workclass',
    'fnlwgt',
    'education',
    'education-num',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'capital-gain',
    'capital-loss',
    'hours-per-week',
    'native-country',
    'income-level'
)

# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
    'workclass',
    'education',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native-country'
)


# Load the training census dataset
with open('./adult.data', 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)

# Remove the column we are trying to predict ('income-level') from our features list
# and convert the DataFrame to a list of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list: True when income is above $50K, False otherwise
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# [END define-and-load-data]


# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []

# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
    if col in CATEGORICAL_COLUMNS:
        # Create a scores array to get the individual categorical column.
        # Example:
        #  data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical', 
        #         'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
        #  scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #
        # Returns: [['State-gov']]
        # Build the scores array
        scores = [0] * len(COLUMNS[:-1])
        # This column is the categorical column we want to extract.
        scores[i] = 1
        skb = SelectKBest(k=1)
        skb.scores_ = scores
        # Convert the categorical column to a numerical value
        lbn = LabelBinarizer()
        r = skb.transform(train_features)
        lbn.fit(r)
        # Create the pipeline to extract the categorical feature
        categorical_pipelines.append(
            ('categorical-{}'.format(i), Pipeline([
                ('SKB-{}'.format(i), skb),
                ('LBN-{}'.format(i), lbn)])))
# [END categorical-feature-conversion]

# [START create-pipeline]
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))

# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)

# Create the classifier
classifier = RandomForestClassifier()

# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)

# Create the overall model as a single pipeline
pipeline = Pipeline([
    ('union', preprocess),
    ('classifier', classifier)
])
# [END create-pipeline]


# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.joblib'
joblib.dump(pipeline, model)

# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_NAME)
blob = bucket.blob('{}/{}'.format(
    datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
    model))
blob.upload_from_filename(model)
# [END export-to-gcs]


Writing ./census_training/train.py

Part 2: Create Trainer Package

Before you can run your trainer application with AI Platform, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. When you submit the job with the gcloud command-line tool, this packaging and staging step is handled for you; see the AI Platform documentation on packaging a training application for more details. A quick check of the resulting package layout follows the __init__.py cell below.


In [3]:
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.


Writing ./census_training/__init__.py
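
At this point the census_training directory contains everything gcloud needs to stage the package. An optional check of the layout from the notebook (a sketch assuming a Unix-like shell with find available):

In [ ]:
# Optional: confirm the trainer package layout that gcloud will stage.
# Expected files: census_training/__init__.py and census_training/train.py
! find census_training -type f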

Part 3: Submit Training Job

Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job, which takes the following flags:

  • job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +"%Y%m%d_%H%M%S")
  • job-dir - The path to a Google Cloud Storage location to use for job output.
  • package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.
  • module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.
  • region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.
  • runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
  • python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
  • scale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.

Note: Check that gcloud is configured to use your PROJECT_ID before submitting.


In [4]:
! gcloud config set project $PROJECT_ID


Updated property [core/project].

Submit the training job.


In [5]:
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \
  --job-dir $JOB_DIR \
  --package-path $TRAINER_PACKAGE_PATH \
  --module-name $MAIN_TRAINER_MODULE \
  --region $REGION \
  --runtime-version=$RUNTIME_VERSION \
  --python-version=$PYTHON_VERSION \
  --scale-tier BASIC


Job [census_training_20180718_083452] submitted successfully.
Your job is still active. You may view the status of your job with the command

  $ gcloud ml-engine jobs describe census_training_20180718_083452

or continue streaming the logs with the command

  $ gcloud ml-engine jobs stream-logs census_training_20180718_083452
jobId: census_training_20180718_083452
state: QUEUED

[Optional] Stackdriver Logging

You can view the logs for your training job:

  1. Go to https://console.cloud.google.com/
  2. Select "Logging" in the left-hand pane
  3. Select the "Cloud ML Job" resource from the drop-down
  4. In the filter-by-prefix box, enter the job name (the jobId printed when you submitted the job) to view its logs
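
Alternatively, you can stream the logs directly from the notebook with the gcloud command shown in the submit output (replace <JOB_NAME> with your own jobId):

In [ ]:
# Replace <JOB_NAME> with the jobId printed by the submit command above,
# e.g. census_training_20180718_083452.
! gcloud ml-engine jobs stream-logs <JOB_NAME>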

[Optional] Verify Model File in GCS

View the contents of the destination model folder to verify that the model file has indeed been uploaded to GCS.

Note: The model can take a few minutes to train and show up in GCS.


In [ ]:
! gsutil ls gs://$BUCKET_NAME/census_*

Next Steps:

The AI Platform online prediction service manages computing resources in the cloud to run your models. Check out the documentation pages that describe the process to get online predictions from these exported models using AI Platform.
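
If you would like a quick local sanity check of the exported model before moving on, a minimal sketch is shown below. It assumes you replace the gs:// path with the model.joblib path listed by the gsutil ls cell above, and that your local scikit-learn version matches the training runtime:

In [ ]:
# Optional local sanity check of the exported model (a sketch).
# Replace census_YYYYMMDD_HHMMSS with the folder shown by `gsutil ls` above.
! gsutil cp gs://$BUCKET_NAME/census_YYYYMMDD_HHMMSS/model.joblib .

from sklearn.externals import joblib

pipeline = joblib.load('model.joblib')

# Predict on one raw row in the same 14-column format used for training
# (note the leading spaces in the string values, matching the raw CSV).
row = [[39, ' State-gov', 77516, ' Bachelors', 13, ' Never-married', ' Adm-clerical',
        ' Not-in-family', ' White', ' Male', 2174, 0, 40, ' United-States']]
print(pipeline.predict(row))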