Training and Serving CARET models using AI Platform Custom Containers and Cloud Run

Overview

This notebook illustrates how to use the CARET R package to build an ML model that estimates a baby's weight given a number of factors, using the BigQuery natality dataset. We use AI Platform Training with custom containers to train the model at scale, and then use Cloud Run to serve the trained model as a Web API for online predictions.

R is one of the most widely used programming languages for statistical modeling, with a large and active community of data scientists and ML professionals. With over 10,000 packages in CRAN, the open-source repository, R caters to all statistical data analysis, ML, and visualization applications.

Dataset

The dataset used in this tutorial is natality data, which describes all United States births registered in the 50 states, the District of Columbia, and New York City from 1969 to 2008, with more than 137 million records. The dataset is available as a BigQuery public dataset. We use the data that was extracted from BigQuery and stored as CSV files in Cloud Storage (GCS) in the Exploratory Data Analysis notebook.

In this notebook, we focus on model training and serving; the goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother.

Objective

The goal of this tutorial is to:

  1. Create a CARET regression model
  2. Train the CARET model on AI Platform Training with a custom R container
  3. Implement a Web API wrapper for the trained model using the plumber R package
  4. Build a Docker container image for the prediction Web API
  5. Deploy the prediction Web API container image on Cloud Run
  6. Invoke the deployed Web API for predictions
  7. Use AI Platform Notebooks to drive the workflow

Costs

This tutorial uses billable components of Google Cloud Platform (GCP):

  1. AI Platform Training
  2. Cloud Run
  3. Cloud Storage
  4. Container Registry

Learn about GCP pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.

0. Setup


In [1]:
version


               _                           
platform       x86_64-pc-linux-gnu         
arch           x86_64                      
os             linux-gnu                   
system         x86_64, linux-gnu           
status                                     
major          3                           
minor          5.1                         
year           2018                        
month          07                          
day            02                          
svn rev        74947                       
language       R                           
version.string R version 3.5.1 (2018-07-02)
nickname       Feather Spray               

Install and import the required libraries.

This may take several minutes if the packages are not already installed...


In [2]:
install.packages(c("caret"))


Updating HTML index of packages in '.Library'
Making 'packages.html' ... done

In [3]:
library(caret) # used to build a regression model


Loading required package: lattice
Loading required package: ggplot2

Set your PROJECT_ID, BUCKET_NAME, and REGION


In [4]:
# Set the project id
PROJECT_ID <- "r-on-gcp"

# Set your GCS bucket
BUCKET_NAME <- "r-on-gcp"

# Set your training and model deployment region
REGION <- 'europe-west1'

1. Building a CARET Regression Model

1.1. Load data

If you ran the Exploratory Data Analysis notebook, you should have the train_data.csv and eval_data.csv files uploaded to GCS. You can download them to train your model locally using the following cell. If you already have the files locally, you can skip this cell.


In [5]:
dir.create(file.path('data'), showWarnings = FALSE)
gcs_data_dir <- paste0("gs://", BUCKET_NAME, "/data/*_data.csv")
command <- paste("gsutil cp -r", gcs_data_dir, "data/")
print(command)
system(command, intern = TRUE)


[1] "gsutil cp -r gs://r-on-gcp/data/*_data.csv data/"

In [6]:
train_file <- "data/train_data.csv"
eval_file <- "data/eval_data.csv"
header <- c(
    "weight_pounds", 
    "is_male", "mother_age", "mother_race", "plurality", "gestation_weeks", 
    "mother_married", "cigarette_use", "alcohol_use", 
    "key")

target <- "weight_pounds"
key <- "key"
features <- setdiff(header, c(target, key))

train_data <- read.table(train_file, col.names = header, sep=",")
eval_data <- read.table(eval_file, col.names = header, sep=",")

1.2. Train the model

In this example, we train an XGBoost tree model for regression.


In [7]:
trainControl <- trainControl(method = 'boot', number = 10)
hyper_parameters <- expand.grid(
    nrounds = 100,
    max_depth = 6,
    eta = 0.3,
    gamma = 0,
    colsample_bytree = 1,
    min_child_weight = 1,
    subsample = 1
)
  
print('Training the model...')

model <- train(
    y=train_data$weight_pounds, 
    x=train_data[, features], 
    preProc = c("center", "scale"),
    method='xgbTree', 
    trControl=trainControl,
    tuneGrid=hyper_parameters
)

print('Model is trained.')


[1] "Training the model..."
[1] "Model is trained."

1.3. Evaluate the model


In [8]:
print(model) # display the model summary and resampling results


eXtreme Gradient Boosting 

7708 samples
   8 predictor

Pre-processing: centered (4), scaled (4), ignore (4) 
Resampling: Bootstrapped (10 reps) 
Summary of sample sizes: 7708, 7708, 7708, 7708, 7708, 7708, ... 
Resampling results:

  RMSE      Rsquared   MAE      
  1.094446  0.2985824  0.8440141

Tuning parameter 'nrounds' was held constant at a value of 100
Tuning parameter 'max_depth' was held constant at a value of 6
Tuning parameter 'eta' was held constant at a value of 0.3
Tuning parameter 'gamma' was held constant at a value of 0
Tuning parameter 'colsample_bytree' was held constant at a value of 1
Tuning parameter 'min_child_weight' was held constant at a value of 1
Tuning parameter 'subsample' was held constant at a value of 1

1.4. Save the trained model


In [9]:
model_dir <- "models"
model_name <- "caret_babyweight_estimator"

In [10]:
# Saving the trained model
dir.create(model_dir, showWarnings = FALSE)
dir.create(file.path(model_dir, model_name), showWarnings = FALSE)
saveRDS(model, file.path(model_dir, model_name, "trained_model.rds"))

1.5. Implement a model prediction function

This is a wrapper function that uses the trained model to perform predictions. The function expects a list of instances in JSON format, and returns a list of predictions (estimated weights). This prediction function will be used when serving the model as a Web API for online predictions.


In [11]:
xgbtree <- readRDS(file.path(model_dir, model_name, "trained_model.rds"))

estimate_babyweights <- function(instances_json){
    library("rjson")
    instances <- jsonlite::fromJSON(instances_json)
    df_instances <- data.frame(instances)
    # fix data types
    boolean_columns <- c("is_male", "mother_married", "cigarette_use", "alcohol_use")
    for(col in boolean_columns){
        df_instances[[col]] <- as.logical(df_instances[[col]])
    }
    
    estimates <- predict(xgbtree, df_instances)
    return(estimates) 
}

instances_json <- '
[
    {
        "is_male": "TRUE",
        "mother_age": 28,
        "mother_race": 8,
        "plurality": 1,
        "gestation_weeks":  28,
        "mother_married": "TRUE",
        "cigarette_use": "FALSE",
        "alcohol_use": "FALSE"
     },
    {
        "is_male": "FALSE",
        "mother_age": 38,
        "mother_race": 18,
        "plurality": 1,
        "gestation_weeks":  28,
        "mother_married": "TRUE",
        "cigarette_use": "TRUE",
        "alcohol_use": "TRUE"
     }
]
'

estimate <- round(estimate_babyweights(instances_json), digits = 2)
print(paste("Estimated weight(s):", estimate))


[1] "Estimated weight(s): 4.5"  "Estimated weight(s): 2.57"

2. Submit a Training Job to AI Platform with Custom Containers

In order to train your CARET model at scale using AI Platform Training, you need to implement your training logic in an R script, package the script in a Docker container image, and submit a training job that runs the image.

The src/caret/training directory includes the following code files:

  1. model_trainer.R - This is the implementation of the CARET model training logic (a sketch is shown after the steps below).
  2. Dockerfile - This is the definition of the Docker container image to run the model_trainer.R script.

To submit the training job with the custom container to AI Platform Training, perform the following steps:

  1. Set your PROJECT_ID and BUCKET_NAME in training/model_trainer.R, and your PROJECT_ID in training/Dockerfile, so that the first line reads "FROM gcr.io/[PROJECT_ID]/caret_base".
  2. Build a Docker container image that runs the model_trainer.R script.
  3. Push the Docker container image to Container Registry.
  4. Submit an AI Platform Training job with the custom container.
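
The model_trainer.R script itself is not listed in this notebook. As a rough guide, here is a minimal sketch of what it needs to do, assuming the local training code from Section 1 and the GCS model path verified after the training job below; treat the exact file layout and names as illustrative:

# model_trainer.R -- minimal sketch; the real script lives in src/caret/training.
library(caret)

# Set your bucket before building the image.
BUCKET_NAME <- "r-on-gcp"
model_name  <- "caret_babyweight_estimator"

# Download the training data from GCS.
dir.create("data", showWarnings = FALSE)
system(paste0("gsutil cp gs://", BUCKET_NAME, "/data/train_data.csv data/"))

header <- c(
    "weight_pounds",
    "is_male", "mother_age", "mother_race", "plurality", "gestation_weeks",
    "mother_married", "cigarette_use", "alcohol_use",
    "key")
train_data <- read.table("data/train_data.csv", col.names = header, sep = ",")
features <- setdiff(header, c("weight_pounds", "key"))

# Train the same XGBoost tree model as in Section 1.2.
trainControl <- trainControl(method = "boot", number = 10)
hyper_parameters <- expand.grid(
    nrounds = 100, max_depth = 6, eta = 0.3, gamma = 0,
    colsample_bytree = 1, min_child_weight = 1, subsample = 1)
model <- train(
    y = train_data$weight_pounds,
    x = train_data[, features],
    preProc = c("center", "scale"),
    method = "xgbTree",
    trControl = trainControl,
    tuneGrid = hyper_parameters)

# Save the trained model and upload it to the GCS location
# that is verified after the training job below.
saveRDS(model, "trained_model.rds")
system(paste0("gsutil cp trained_model.rds gs://", BUCKET_NAME,
    "/models/", model_name, "/trained_model.rds"))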

2.1. Build and push the Docker container images

A - Build base image

This can take several minutes ...


In [13]:
# Create base image
base_image_url <- paste0("gcr.io/", PROJECT_ID, "/caret_base")
print(base_image_url)

setwd("src/caret")
getwd()

print("Building the base Docker container image...")
command <- paste0("docker build -f Dockerfile --tag ", base_image_url, " ./")
print(command)
system(command, intern = TRUE)

print("Pushing the baseDocker container image...")
command <- paste0("gcloud docker -- push ", base_image_url)
print(command)
system(command, intern = TRUE)

setwd("../..")
getwd()


[1] "gcr.io/r-on-gcp/caret_base"
'/home/jupyter/cloudml-samples/notebooks/R/src/caret'
[1] "Building the base Docker container image..."
[1] "docker build -f Dockerfile --tag gcr.io/r-on-gcp/caret_base ./"
  1. 'Sending build context to Docker daemon 11.26kB\r\r'
  2. 'Step 1/2 : FROM gcr.io/deeplearning-platform-release/r-cpu'
  3. ' ---> ecba7177e2c2'
  4. 'Step 2/2 : RUN R -e "install.packages(c(\'readr\', \'caret\', \'xgboost\', \'rjson\', \'plumber\'), repos=\'http://cran.rstudio.com/\')"'
  5. ' ---> Using cache'
  6. ' ---> 23896e0a8a18'
  7. 'Successfully built 23896e0a8a18'
  8. 'Successfully tagged gcr.io/r-on-gcp/caret_base:latest'
[1] "Pushing the baseDocker container image..."
[1] "gcloud docker -- push gcr.io/r-on-gcp/caret_base"
  1. 'The push refers to repository [gcr.io/r-on-gcp/caret_base]'
  2. '74186bfe1b58: Preparing'
  3. '0ce62bef372e: Preparing'
  4. 'ac80f37c61e8: Preparing'
  5. 'd35b581063f9: Preparing'
  6. '2cdc3d03b403: Preparing'
  7. '842c821f54eb: Preparing'
  8. '81c79e512da5: Preparing'
  9. '071617594623: Preparing'
  10. '2eee26189b5e: Preparing'
  11. '473763e23878: Preparing'
  12. '8ff58362bc10: Preparing'
  13. 'c5890df75ecc: Preparing'
  14. '1fb17ae4fc11: Preparing'
  15. '0aaa26768f46: Preparing'
  16. 'fd5276389b8a: Preparing'
  17. '56ac87b2a469: Preparing'
  18. '140ce133886c: Preparing'
  19. 'b20d335c2af7: Preparing'
  20. 'e4dc9a88747b: Preparing'
  21. '027d71a0bd0a: Preparing'
  22. '2bcd744f68d7: Preparing'
  23. '75e70aa52609: Preparing'
  24. 'dda151859818: Preparing'
  25. 'fbd2732ad777: Preparing'
  26. 'ba9de9d8475e: Preparing'
  27. '842c821f54eb: Waiting'
  28. '81c79e512da5: Waiting'
  29. '071617594623: Waiting'
  30. '2eee26189b5e: Waiting'
  31. '473763e23878: Waiting'
  32. '8ff58362bc10: Waiting'
  33. 'c5890df75ecc: Waiting'
  34. '1fb17ae4fc11: Waiting'
  35. '0aaa26768f46: Waiting'
  36. 'fd5276389b8a: Waiting'
  37. '56ac87b2a469: Waiting'
  38. '140ce133886c: Waiting'
  39. 'b20d335c2af7: Waiting'
  40. 'e4dc9a88747b: Waiting'
  41. '027d71a0bd0a: Waiting'
  42. '2bcd744f68d7: Waiting'
  43. '75e70aa52609: Waiting'
  44. 'dda151859818: Waiting'
  45. 'fbd2732ad777: Waiting'
  46. 'ba9de9d8475e: Waiting'
  47. '74186bfe1b58: Layer already exists'
  48. '0ce62bef372e: Layer already exists'
  49. '2cdc3d03b403: Layer already exists'
  50. 'd35b581063f9: Layer already exists'
  51. 'ac80f37c61e8: Layer already exists'
  52. '842c821f54eb: Layer already exists'
  53. '81c79e512da5: Layer already exists'
  54. '071617594623: Layer already exists'
  55. '473763e23878: Layer already exists'
  56. '2eee26189b5e: Layer already exists'
  57. 'c5890df75ecc: Layer already exists'
  58. '8ff58362bc10: Layer already exists'
  59. '1fb17ae4fc11: Layer already exists'
  60. '0aaa26768f46: Layer already exists'
  61. 'fd5276389b8a: Layer already exists'
  62. '56ac87b2a469: Layer already exists'
  63. '140ce133886c: Layer already exists'
  64. 'e4dc9a88747b: Layer already exists'
  65. 'b20d335c2af7: Layer already exists'
  66. '027d71a0bd0a: Layer already exists'
  67. '2bcd744f68d7: Layer already exists'
  68. 'fbd2732ad777: Layer already exists'
  69. '75e70aa52609: Layer already exists'
  70. 'ba9de9d8475e: Layer already exists'
  71. 'dda151859818: Layer already exists'
  72. 'latest: digest: sha256:43f66f157027aced3c583006c2606a6eac66a891edea82422f345f8fcf6c1e4f size: 5552'
'/home/jupyter/cloudml-samples/notebooks/R'

B - Build trainer image


In [14]:
training_image_url <- paste0("gcr.io/", PROJECT_ID, "/", model_name, "_training")
print(training_image_url)

setwd("src/caret/training")
getwd()

print("Building the Docker container image...")
command <- paste0("docker build -f Dockerfile --tag ", training_image_url, " ./")
print(command)
system(command, intern = TRUE)

print("Pushing the Docker container image...")
command <- paste0("gcloud docker -- push ", training_image_url)
print(command)
system(command, intern = TRUE)

setwd("../../..")
getwd()


[1] "gcr.io/r-on-gcp/caret_babyweight_estimator_training"
'/home/jupyter/cloudml-samples/notebooks/R/src/caret/training'
[1] "Building the Docker container image..."
[1] "docker build -f Dockerfile --tag gcr.io/r-on-gcp/caret_babyweight_estimator_training ./"
  1. 'Sending build context to Docker daemon 5.632kB\r\r'
  2. 'Step 1/5 : FROM gcr.io/r-on-gcp/caret_base'
  3. ' ---> 23896e0a8a18'
  4. 'Step 2/5 : RUN mkdir -p /root'
  5. ' ---> Running in dfffccca5b17'
  6. 'Removing intermediate container dfffccca5b17'
  7. ' ---> 8e3b3ba4b7bc'
  8. 'Step 3/5 : COPY model_trainer.R /root'
  9. ' ---> 83f6807f8116'
  10. 'Step 4/5 : WORKDIR /root'
  11. ' ---> Running in 80a3ac026ce4'
  12. 'Removing intermediate container 80a3ac026ce4'
  13. ' ---> 06415782e550'
  14. 'Step 5/5 : CMD ["Rscript", "model_trainer.R"]'
  15. ' ---> Running in a35a45de623a'
  16. 'Removing intermediate container a35a45de623a'
  17. ' ---> dee0b504a5e1'
  18. 'Successfully built dee0b504a5e1'
  19. 'Successfully tagged gcr.io/r-on-gcp/caret_babyweight_estimator_training:latest'
[1] "Pushing the Docker container image..."
[1] "gcloud docker -- push gcr.io/r-on-gcp/caret_babyweight_estimator_training"
  1. 'The push refers to repository [gcr.io/r-on-gcp/caret_babyweight_estimator_training]'
  2. '6dfd17fccb17: Preparing'
  3. '74186bfe1b58: Preparing'
  4. '0ce62bef372e: Preparing'
  5. 'ac80f37c61e8: Preparing'
  6. 'd35b581063f9: Preparing'
  7. '2cdc3d03b403: Preparing'
  8. '842c821f54eb: Preparing'
  9. '81c79e512da5: Preparing'
  10. '071617594623: Preparing'
  11. '2eee26189b5e: Preparing'
  12. '473763e23878: Preparing'
  13. '8ff58362bc10: Preparing'
  14. 'c5890df75ecc: Preparing'
  15. '1fb17ae4fc11: Preparing'
  16. '0aaa26768f46: Preparing'
  17. 'fd5276389b8a: Preparing'
  18. '56ac87b2a469: Preparing'
  19. '140ce133886c: Preparing'
  20. 'b20d335c2af7: Preparing'
  21. 'e4dc9a88747b: Preparing'
  22. '027d71a0bd0a: Preparing'
  23. '2bcd744f68d7: Preparing'
  24. '75e70aa52609: Preparing'
  25. 'dda151859818: Preparing'
  26. 'fbd2732ad777: Preparing'
  27. 'ba9de9d8475e: Preparing'
  28. '2cdc3d03b403: Waiting'
  29. '842c821f54eb: Waiting'
  30. '81c79e512da5: Waiting'
  31. '071617594623: Waiting'
  32. '2eee26189b5e: Waiting'
  33. '473763e23878: Waiting'
  34. '8ff58362bc10: Waiting'
  35. 'c5890df75ecc: Waiting'
  36. '1fb17ae4fc11: Waiting'
  37. '0aaa26768f46: Waiting'
  38. 'fd5276389b8a: Waiting'
  39. '56ac87b2a469: Waiting'
  40. '140ce133886c: Waiting'
  41. 'b20d335c2af7: Waiting'
  42. 'e4dc9a88747b: Waiting'
  43. '027d71a0bd0a: Waiting'
  44. '2bcd744f68d7: Waiting'
  45. '75e70aa52609: Waiting'
  46. 'dda151859818: Waiting'
  47. 'fbd2732ad777: Waiting'
  48. 'ba9de9d8475e: Waiting'
  49. 'ac80f37c61e8: Layer already exists'
  50. '0ce62bef372e: Layer already exists'
  51. 'd35b581063f9: Layer already exists'
  52. '74186bfe1b58: Layer already exists'
  53. '2cdc3d03b403: Layer already exists'
  54. '842c821f54eb: Layer already exists'
  55. '81c79e512da5: Layer already exists'
  56. '071617594623: Layer already exists'
  57. '2eee26189b5e: Layer already exists'
  58. '473763e23878: Layer already exists'
  59. '8ff58362bc10: Layer already exists'
  60. 'c5890df75ecc: Layer already exists'
  61. '1fb17ae4fc11: Layer already exists'
  62. '0aaa26768f46: Layer already exists'
  63. 'fd5276389b8a: Layer already exists'
  64. '56ac87b2a469: Layer already exists'
  65. 'e4dc9a88747b: Layer already exists'
  66. 'b20d335c2af7: Layer already exists'
  67. '140ce133886c: Layer already exists'
  68. '027d71a0bd0a: Layer already exists'
  69. '2bcd744f68d7: Layer already exists'
  70. 'dda151859818: Layer already exists'
  71. '75e70aa52609: Layer already exists'
  72. 'fbd2732ad777: Layer already exists'
  73. 'ba9de9d8475e: Layer already exists'
  74. '6dfd17fccb17: Pushed'
  75. 'latest: digest: sha256:a8c86b3024e96e75e2425832c323130a8e0843779e946921a3073bd4cbe3cb79 size: 5760'
'/home/jupyter/cloudml-samples/notebooks/R'

C - Verify the uploaded images in Container Registry


In [15]:
command <- paste0("gcloud container images list --repository=gcr.io/", PROJECT_ID)
system(command, intern = TRUE)


  1. 'NAME'
  2. 'gcr.io/r-on-gcp/caret_babyweight_estimator_training'
  3. 'gcr.io/r-on-gcp/caret_base'

2.2. Submit an AI Platform Training job with the custom container.


In [16]:
job_name <- paste0("train_caret_contrainer_", format(Sys.time(), "%Y%m%d_%H%M%S"))

command = paste0("gcloud beta ai-platform jobs submit training ", job_name, 
  " --master-image-uri=", training_image_url,
  " --scale-tier=BASIC", 
  " --region=", REGION
)
print(command)

system(command, intern = TRUE)


[1] "gcloud beta ai-platform jobs submit training train_caret_contrainer_20190725_131432 --master-image-uri=gcr.io/r-on-gcp/caret_babyweight_estimator_training --scale-tier=BASIC --region=europe-west1"
  1. 'jobId: train_caret_contrainer_20190725_131432'
  2. 'state: QUEUED'

Verify that the trained model is in GCS after the job finishes


In [18]:
model_name <- 'caret_babyweight_estimator'
gcs_model_dir <- paste0("gs://", BUCKET_NAME, "/models/", model_name)
command <- paste0("gsutil ls ", gcs_model_dir)
system(command, intern = TRUE)


'gs://r-on-gcp/models/caret_babyweight_estimator/trained_model.rds'

3. Deploy the trained model to Cloud Run

In order to serve the trained CARET model as a Web API, you need to wrap it with a prediction function and serve that function as a REST API. You then containerize this Web API and deploy it to Cloud Run.

The src/caret/serving directory includes the following code files:

  1. model_prediction.R - This script downloads the trained model from GCS and loads it (only once). It includes the estimate function, which accepts instances in JSON format and returns a baby weight estimate for each instance.
  2. model_api.R - This is a plumber Web API that runs model_prediction.R (sketches of both files follow the steps below).
  3. Dockerfile - This is the definition of the Docker container image that runs model_api.R.

To deploy the prediction Web API to Cloud Run, perform the following steps:

  1. Set your PROJECT_ID and BUCKET_NAME in serving/model_prediction.R, and your PROJECT_ID in serving/Dockerfile, so that the first line reads "FROM gcr.io/[PROJECT_ID]/caret_base".
  2. Build the Docker container image for the prediction API.
  3. Push the Docker container image to Container Registry.
  4. Enable the Cloud Run API if it is not enabled yet: click "Enable" at https://console.developers.google.com/apis/api/run.googleapis.com/overview.
  5. Deploy the Docker container image to Cloud Run.
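
Neither serving file is listed in this notebook. Here is a minimal sketch of both, assuming the prediction logic from Section 1.5, plumber's annotation-based routing, and the 8080 port set in the Dockerfile (per the build output below); the real files may be structured differently:

# model_prediction.R -- minimal sketch: download and load the trained model
# once at startup, then define the prediction endpoint with a plumber annotation.
library(caret)
library(jsonlite)

BUCKET_NAME <- "r-on-gcp" # set your bucket
model_name <- "caret_babyweight_estimator"
system(paste0("gsutil cp gs://", BUCKET_NAME, "/models/", model_name,
    "/trained_model.rds ."))
xgbtree <- readRDS("trained_model.rds")

#* Estimate baby weights for a JSON list of instances.
#* @post /estimate
function(req) {
    df_instances <- data.frame(fromJSON(req$postBody))
    boolean_columns <- c("is_male", "mother_married", "cigarette_use", "alcohol_use")
    for (col in boolean_columns) {
        df_instances[[col]] <- as.logical(df_instances[[col]])
    }
    predict(xgbtree, df_instances)
}

# model_api.R -- minimal sketch: plumb the annotated file above and listen
# on the port exposed by the container (ENV PORT 8080 in the Dockerfile).
library(plumber)
r <- plumb("model_prediction.R")
r$run(host = "0.0.0.0", port = as.integer(Sys.getenv("PORT", "8080")))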

(Optional) 3.0. Upload the trained model to GCS

If you trained your model using model_trainer.R on AI Platform, the saved model was already uploaded to GCS. However, if you only trained your model locally, you need to upload the saved model to GCS yourself.


In [ ]:
model_name <- 'caret_babyweight_estimator'
gcs_model_dir <- paste0("gs://", BUCKET_NAME, "/models/", model_name, "/")
command <- paste0("gsutil cp -r models/", model_name, "/* ", gcs_model_dir)
print(command)
system(command, intern = TRUE)

3.1. Build and push the prediction Docker container image


In [19]:
serving_image_url <- paste0("gcr.io/", PROJECT_ID, "/", model_name, "_serving")
print(serving_image_url)

setwd("src/caret/serving")
getwd()

print("Building the Docker container image...")
command <- paste0("docker build -f Dockerfile --tag ", serving_image_url, " ./")
print(command)
system(command, intern = TRUE)

print("Pushing the Docker container image...")
command <- paste0("gcloud docker -- push ", serving_image_url)
print(command)
system(command, intern = TRUE)

setwd("../../..")
getwd()


[1] "gcr.io/r-on-gcp/caret_babyweight_estimator_serving"
'/home/jupyter/cloudml-samples/notebooks/R/src/caret/serving'
[1] "Building the Docker container image..."
[1] "docker build -f Dockerfile --tag gcr.io/r-on-gcp/caret_babyweight_estimator_serving ./"
  1. 'Sending build context to Docker daemon 4.608kB\r\r'
  2. 'Step 1/8 : FROM gcr.io/r-on-gcp/caret_base'
  3. ' ---> 23896e0a8a18'
  4. 'Step 2/8 : RUN mkdir -p /root'
  5. ' ---> Using cache'
  6. ' ---> 8e3b3ba4b7bc'
  7. 'Step 3/8 : COPY model_prediction.R /root'
  8. ' ---> f67f31f717e0'
  9. 'Step 4/8 : COPY model_api.R /root'
  10. ' ---> a49939cb55d7'
  11. 'Step 5/8 : WORKDIR /root'
  12. ' ---> Running in 1437d0489563'
  13. 'Removing intermediate container 1437d0489563'
  14. ' ---> 6b86e65e04ed'
  15. 'Step 6/8 : ENV PORT 8080'
  16. ' ---> Running in d4a9076afbea'
  17. 'Removing intermediate container d4a9076afbea'
  18. ' ---> f4289e2b1454'
  19. 'Step 7/8 : EXPOSE 8080'
  20. ' ---> Running in fbd3dba8910a'
  21. 'Removing intermediate container fbd3dba8910a'
  22. ' ---> f8c3268385e9'
  23. 'Step 8/8 : ENTRYPOINT ["Rscript", "model_api.R"]'
  24. ' ---> Running in f15f02eb590b'
  25. 'Removing intermediate container f15f02eb590b'
  26. ' ---> af322a627dbd'
  27. 'Successfully built af322a627dbd'
  28. 'Successfully tagged gcr.io/r-on-gcp/caret_babyweight_estimator_serving:latest'
[1] "Pushing the Docker container image..."
[1] "gcloud docker -- push gcr.io/r-on-gcp/caret_babyweight_estimator_serving"
  1. 'The push refers to repository [gcr.io/r-on-gcp/caret_babyweight_estimator_serving]'
  2. 'ff4c31850eb9: Preparing'
  3. '788543d979fc: Preparing'
  4. '74186bfe1b58: Preparing'
  5. '0ce62bef372e: Preparing'
  6. 'ac80f37c61e8: Preparing'
  7. 'd35b581063f9: Preparing'
  8. '2cdc3d03b403: Preparing'
  9. '842c821f54eb: Preparing'
  10. '81c79e512da5: Preparing'
  11. '071617594623: Preparing'
  12. '2eee26189b5e: Preparing'
  13. '473763e23878: Preparing'
  14. '8ff58362bc10: Preparing'
  15. 'c5890df75ecc: Preparing'
  16. '1fb17ae4fc11: Preparing'
  17. '0aaa26768f46: Preparing'
  18. 'fd5276389b8a: Preparing'
  19. '56ac87b2a469: Preparing'
  20. '140ce133886c: Preparing'
  21. 'b20d335c2af7: Preparing'
  22. 'e4dc9a88747b: Preparing'
  23. '027d71a0bd0a: Preparing'
  24. '2bcd744f68d7: Preparing'
  25. '75e70aa52609: Preparing'
  26. 'dda151859818: Preparing'
  27. 'fbd2732ad777: Preparing'
  28. 'ba9de9d8475e: Preparing'
  29. 'd35b581063f9: Waiting'
  30. '2cdc3d03b403: Waiting'
  31. '842c821f54eb: Waiting'
  32. '81c79e512da5: Waiting'
  33. '071617594623: Waiting'
  34. '2eee26189b5e: Waiting'
  35. '473763e23878: Waiting'
  36. '8ff58362bc10: Waiting'
  37. 'c5890df75ecc: Waiting'
  38. '1fb17ae4fc11: Waiting'
  39. '0aaa26768f46: Waiting'
  40. 'fd5276389b8a: Waiting'
  41. '56ac87b2a469: Waiting'
  42. '140ce133886c: Waiting'
  43. 'b20d335c2af7: Waiting'
  44. 'e4dc9a88747b: Waiting'
  45. '027d71a0bd0a: Waiting'
  46. '2bcd744f68d7: Waiting'
  47. '75e70aa52609: Waiting'
  48. 'dda151859818: Waiting'
  49. 'fbd2732ad777: Waiting'
  50. 'ba9de9d8475e: Waiting'
  51. '0ce62bef372e: Layer already exists'
  52. 'ac80f37c61e8: Layer already exists'
  53. '74186bfe1b58: Layer already exists'
  54. 'd35b581063f9: Layer already exists'
  55. '2cdc3d03b403: Layer already exists'
  56. '842c821f54eb: Layer already exists'
  57. '81c79e512da5: Layer already exists'
  58. '071617594623: Layer already exists'
  59. '2eee26189b5e: Layer already exists'
  60. '473763e23878: Layer already exists'
  61. '8ff58362bc10: Layer already exists'
  62. 'c5890df75ecc: Layer already exists'
  63. '1fb17ae4fc11: Layer already exists'
  64. 'fd5276389b8a: Layer already exists'
  65. '0aaa26768f46: Layer already exists'
  66. '56ac87b2a469: Layer already exists'
  67. '140ce133886c: Layer already exists'
  68. 'b20d335c2af7: Layer already exists'
  69. 'e4dc9a88747b: Layer already exists'
  70. '027d71a0bd0a: Layer already exists'
  71. '2bcd744f68d7: Layer already exists'
  72. '75e70aa52609: Layer already exists'
  73. 'dda151859818: Layer already exists'
  74. 'fbd2732ad777: Layer already exists'
  75. 'ba9de9d8475e: Layer already exists'
  76. 'ff4c31850eb9: Pushed'
  77. '788543d979fc: Pushed'
  78. 'latest: digest: sha256:745024f961a369f01d8b97f4d8b423cae7c884f7eaf7e3c8f840212140587830 size: 5966'
'/home/jupyter/cloudml-samples/notebooks/R'

In [20]:
command <- paste0("gcloud container images list --repository=gcr.io/", PROJECT_ID)
system(command, intern = TRUE)


  1. 'NAME'
  2. 'gcr.io/r-on-gcp/caret_babyweight_estimator_serving'
  3. 'gcr.io/r-on-gcp/caret_babyweight_estimator_training'
  4. 'gcr.io/r-on-gcp/caret_base'

3.2. Deploy the prediction container to Cloud Run


In [ ]:
service_name <- "caret-babyweight-estimator"
command <- paste(
    "gcloud beta run deploy", service_name,
    "--image", serving_image_url,
    "--platform managed",
    "--allow-unauthenticated",
    "--region", REGION
)

print(command)
system(command, intern = TRUE)

4. Invoke the Model API for Predictions

After the caret-babyweight-estimator service is deployed to Cloud Run:

  1. Go to Cloud Run in the Cloud Console.
  2. Select the caret-babyweight-estimator service.
  3. Copy the service URL, and use it to update the url variable in the following cell (or fetch it with gcloud, as sketched below).
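
Alternatively, you can read the service URL programmatically. Here is a sketch using the gcloud run services describe command, assuming the service_name and REGION values used earlier in this notebook:

# Fetch the deployed service URL instead of copying it from the console.
service_name <- "caret-babyweight-estimator"
command <- paste(
    "gcloud beta run services describe", service_name,
    "--platform managed",
    "--region", REGION,
    "--format 'value(status.url)'"
)
url <- paste0(system(command, intern = TRUE), "/")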

In [21]:
# Update to the deployed service URL
url <- "https://caret-babyweight-estimator-lbcii4x34q-uc.a.run.app/"
endpoint <- "estimate"

In [22]:
instances_json <- '
[
    {
        "is_male": "TRUE",
        "mother_age": 28,
        "mother_race": 8,
        "plurality": 1,
        "gestation_weeks":  28,
        "mother_married": "TRUE",
        "cigarette_use": "FALSE",
        "alcohol_use": "FALSE"
     },
    {
        "is_male": "FALSE",
        "mother_age": 38,
        "mother_race": 18,
        "plurality": 1,
        "gestation_weeks":  28,
        "mother_married": "TRUE",
        "cigarette_use": "TRUE",
        "alcohol_use": "TRUE"
     }
]
'

In [23]:
library("httr")
full_url <- paste0(url, endpoint)
response <- POST(full_url, body = instances_json)
estimates <- content(response)
print(paste("Estimated weight(s):", estimate))


Attaching package: ‘httr’

The following object is masked from ‘package:caret’:

    progress

[1] "Estimated weight(s): 4.5"  "Estimated weight(s): 2.57"

License

Authors: Daniel Sparing & Khalid Salama


Disclaimer: This is not an official Google product. The sample code is provided for educational purposes.


Copyright 2019 Google LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

