In [ ]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
This notebook uses the Census Income Data Set to create a simple model, train the model, upload the model to AI Platform, and finally use the model to make predictions.
Getting your model ready for predictions can be done in 5 steps:
1. Create and train the model locally, then export it to a file with joblib.
2. Upload the exported model file to Google Cloud Storage.
3. Create a model resource on AI Platform.
4. Create a model version that points to the uploaded model file.
5. Make online predictions against the deployed version.
Before you jump in, let’s cover some of the different tools you’ll be using to get online prediction up and running on AI Platform.
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.
Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
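As a quick check (not part of the original steps), you can confirm the Cloud SDK is visible from this kernel before going further:
In [ ]:
# Verify the gcloud CLI is available in the kernel's environment.
! gcloud --version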
These variables will be needed for the following steps.
Replace:
- PROJECT_ID <YOUR_PROJECT_ID> - with your project's ID. Use the PROJECT_ID that matches your Google Cloud Platform project.
- BUCKET_NAME <YOUR_BUCKET_NAME> - with the ID of the bucket you created above.
- MODEL_NAME <YOUR_MODEL_NAME> - with your model name, such as 'census'.
- VERSION_NAME <YOUR_VERSION> - with your version name, such as 'v1'.
- REGION <REGION> - with a region of your choice, or use the default 'us-central1'. The region is where the model will be deployed.
In [1]:
%env PROJECT_ID <YOUR_PROJECT_ID>
%env BUCKET_NAME <YOUR_BUCKET_NAME>
%env MODEL_NAME census
%env VERSION_NAME v1
%env REGION us-central1
The Census Income Data Set that this sample uses for training is hosted by the UC Irvine Machine Learning Repository. It is split into two files:
- adult.data (used here for training)
- adult.test (used here for evaluation)
In [2]:
# Create a directory to hold the data
! mkdir census_data
# Download the data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data --output census_data/adult.data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test --output census_data/adult.test
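Optionally, preview a few rows to confirm the download worked; each line is one comma-separated record (a quick check, not part of the original steps):
In [ ]:
# Peek at the first few training records.
! head -n 3 census_data/adult.data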
In [3]:
import googleapiclient.discovery
import json
import numpy as np
import os
import pandas as pd
import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open('./census_data/adult.data', 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)

# Remove the column we are trying to predict ('income-level') from our features list
# Convert the DataFrame to a list of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the DataFrame to a list of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# Load the test census dataset
with open('./census_data/adult.test', 'r') as test_data:
    raw_testing_data = pd.read_csv(test_data, names=COLUMNS, skiprows=1)

# Remove the column we are trying to predict ('income-level') from our features list
# Convert the DataFrame to a list of lists
test_features = raw_testing_data.drop('income-level', axis=1).values.tolist()
# Create our testing labels list, convert the DataFrame to a list of lists
test_labels = (raw_testing_data['income-level'] == ' >50K.').values.tolist()
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
    if col in CATEGORICAL_COLUMNS:
        # Create a scores array to get the individual categorical column.
        # Example:
        #  data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
        #          'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
        #  scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #
        #  Returns: [['State-gov']]

        # Build the scores array.
        scores = [0] * len(COLUMNS[:-1])
        # This column is the categorical column we want to extract.
        scores[i] = 1

        skb = SelectKBest(k=1)
        skb.scores_ = scores
        # Convert the categorical column to a numerical value
        lbn = LabelBinarizer()
        r = skb.transform(train_features)
        lbn.fit(r)

        # Create the pipeline to extract the categorical feature
        categorical_pipelines.append(
            ('categorical-{}'.format(i), Pipeline([
                ('SKB-{}'.format(i), skb),
                ('LBN-{}'.format(i), lbn)])))
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([
('union', preprocess),
('classifier', classifier)
])
# Export the model to a file
joblib.dump(pipeline, 'model.joblib')
print('Model trained and saved')
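Before uploading, you can optionally reload the exported file and run a quick local sanity check. This is a minimal sketch that assumes the training cell above has already run:
In [ ]:
# Reload the saved pipeline and predict on a single record from the test set.
local_pipeline = joblib.load('model.joblib')
print(local_pipeline.predict(test_features[:1]))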
Next, you'll upload the model to your project's storage bucket in GCS: to use your model with AI Platform, it must be stored in Google Cloud Storage. This step takes your local 'model.joblib' file and uploads it to GCS via the Cloud SDK using gsutil.
Before continuing, make sure you're properly authenticated and have access to the bucket. This next command sets your project to the one specified above.
Note: If you get an error below, make sure the Cloud SDK is installed in the kernel's environment.
In [4]:
! gcloud config set project $PROJECT_ID
Note: The exact file name of the exported model you upload to GCS is important! Your model must be named 'model.joblib', 'model.pkl', or 'model.bst', depending on the library you used to export it. This restriction ensures that the model will be safely reconstructed later by using the same technique for import as was used during export.
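For reference, here is a minimal sketch of the equivalent pickle export (this notebook uses joblib, so this cell is optional); the exported file would then have to be named model.pkl:
In [ ]:
# Alternative export with pickle; AI Platform expects this file to be named 'model.pkl'.
with open('model.pkl', 'wb') as model_file:
    pickle.dump(pipeline, model_file)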
In [5]:
! gsutil cp ./model.joblib gs://$BUCKET_NAME/model.joblib
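Optionally, confirm the upload by listing the bucket (a quick check, not part of the original steps):
In [ ]:
# List the bucket contents to confirm model.joblib was uploaded.
! gsutil ls gs://$BUCKET_NAME/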
AI Platform organizes your trained models using model and version resources. An AI Platform model is a container for the versions of your machine learning model. For more information, see the AI Platform documentation on model resources and model versions.
In this step, you create the container (the model resource) that will hold the different versions of your actual model.
In [6]:
! gcloud ml-engine models create $MODEL_NAME --regions $REGION
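You can optionally verify that the model resource now exists in your project:
In [ ]:
# List the model resources in the current project.
! gcloud ml-engine models list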
Now it's time to get your model online and ready for predictions. The model version requires a few components, listed below.
- name - the version name; this is the VERSION_NAME variable you declared at the beginning.
- model - the model to attach the version to; this is the MODEL_NAME variable you declared at the beginning.
- deploymentUri - the GCS path to the directory containing your exported model, which includes your BUCKET_NAME.
- runtimeVersion - the AI Platform runtime version to use; this notebook uses '1.4' (see the config file below).
- framework - one of TENSORFLOW, SCIKIT_LEARN, or XGBOOST. This is set to SCIKIT_LEARN.
- pythonVersion - the Python version, "2.7" by default; if you are using Python 3.5, set the value to "3.5".
Note: If you require a feature of scikit-learn that isn't available in the publicly released version yet, you can specify runtimeVersion: "HEAD" instead, which pulls the latest version of scikit-learn from the GitHub repo. Otherwise, the scikit-learn version bundled with the runtime version you specify is used.
First, we need to create a YAML file to configure our model version.
Replace BUCKET_NAME in the deploymentUri below with the bucket name you specified at the beginning.
In [7]:
%%writefile ./config.yaml
deploymentUri: "gs://BUCKET_NAME/"
runtimeVersion: '1.4'
framework: "SCIKIT_LEARN"
pythonVersion: "3.5"
Use the created YAML file to create a model version.
Note: It can take several minutes for your model version to become available.
In [8]:
! gcloud ml-engine versions create $VERSION_NAME \
--model $MODEL_NAME \
--config config.yaml
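Once the command returns, you can optionally check the version's state; it should report READY before you request predictions:
In [ ]:
# Describe the new version; the 'state' field shows whether it is ready.
! gcloud ml-engine versions describe $VERSION_NAME --model $MODEL_NAME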
In [9]:
# Get one person that makes <=50K and one that makes >50K to test our model.
print('Show a person that makes <=50K:')
print('\tFeatures: {0} --> Label: {1}\n'.format(test_features[0], test_labels[0]))
with open('less_than_50K.json', 'w') as outfile:
    json.dump(test_features[0], outfile)

print('Show a person that makes >50K:')
print('\tFeatures: {0} --> Label: {1}'.format(test_features[3], test_labels[3]))

with open('more_than_50K.json', 'w') as outfile:
    json.dump(test_features[3], outfile)
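Optionally, inspect one of the request files; each file holds a single JSON list of feature values (a quick check, not part of the original steps):
In [ ]:
# Show the JSON payload that will be sent for online prediction.
! cat less_than_50K.json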
Use the two people (as seen in the table) gathered in the previous step for the gcloud predictions.
Person | age | workclass | fnlwgt | education | education-num | marital-status | occupation
---|---|---|---|---|---|---|---
1 | 25 | Private | 226802 | 11th | 7 | Never-married | Machine-op-inspct
2 | 44 | Private | 160323 | Some-college | 10 | Married-civ-spouse | Machine-op-inspct

Person | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | (Label) income-level
---|---|---|---|---|---|---|---|---
1 | Own-child | Black | Male | 0 | 0 | 40 | United-States | False (<=50K)
2 | Husband | Black | Male | 7688 | 0 | 40 | United-States | True (>50K)
Test the model with an online prediction using the data of a person who makes <=50K.
Note: If you see an error, the model version created above may not be ready yet; it takes several minutes for a new model version to become available.
In [10]:
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances less_than_50K.json
Test the model with an online prediction using the data of a person who makes >50K.
In [11]:
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances more_than_50K.json
Test the model with the entire test set and print out some of the results.
Note: If you are running the notebook server on Compute Engine, make sure the instance has been granted "full access to all Cloud APIs".
In [12]:
import googleapiclient.discovery
import os
import pandas as pd
PROJECT_ID = os.environ['PROJECT_ID']
VERSION_NAME = os.environ['VERSION_NAME']
MODEL_NAME = os.environ['MODEL_NAME']
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
name += '/versions/{}'.format(VERSION_NAME)
# Due to the size of the data, it needs to be split in 2
first_half = test_features[:int(len(test_features)/2)]
second_half = test_features[int(len(test_features)/2):]
complete_results = []
for data in [first_half, second_half]:
    responses = service.projects().predict(
        name=name,
        body={'instances': data}
    ).execute()

    if 'error' in responses:
        print(responses['error'])
    else:
        complete_results.extend(responses['predictions'])
# Print the first 10 responses
for i, response in enumerate(complete_results[:10]):
    print('Prediction: {}\tLabel: {}'.format(response, test_labels[i]))
In [13]:
actual = pd.Series(test_labels, name='actual')
online = pd.Series(complete_results, name='online')
pd.crosstab(actual,online)
Out[13]:
Use a confusion matrix to create a visualization of the predicted results from the local model. These results should be identical to the results above.
In [14]:
local_results = pipeline.predict(test_features)
local = pd.Series(local_results, name='local')
pd.crosstab(actual,local)
Out[14]:
Directly compare the two sets of results.
In [15]:
identical = 0
different = 0
for i in range(len(complete_results)):
    if complete_results[i] == local_results[i]:
        identical += 1
    else:
        different += 1

print('identical: {}, different: {}'.format(identical, different))
If all results are identical, it means you've successfully uploaded your local model to AI Platform and performed online predictions correctly.
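When you are done, you can optionally clean up the resources created in this notebook; this is a sketch that assumes you no longer need the deployed version, the model resource, or the uploaded file:
In [ ]:
# Delete the model version, then the model resource, then the uploaded model file.
! gcloud ml-engine versions delete $VERSION_NAME --model $MODEL_NAME --quiet
! gcloud ml-engine models delete $MODEL_NAME --quiet
! gsutil rm gs://$BUCKET_NAME/model.joblib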