LAB 2b: Prepare babyweight dataset.

Learning Objectives

  1. Set up the environment
  2. Preprocess the natality dataset
  3. Augment the natality dataset
  4. Create the train and eval tables in BigQuery
  5. Export data from BigQuery to GCS in CSV format

Introduction

In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform the data augmentation and preprocessing; the resulting data will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.

In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.

Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.

Set up environment variables and load necessary libraries

Check that the Google BigQuery library is installed and if not, install it.


In [ ]:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1

Import necessary libraries.


In [ ]:
import os
from google.cloud import bigquery

Lab Task #1: Set environment variables.

Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.


In [ ]:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT

In [ ]:
# TODO: Change environment variables
PROJECT = "cloud-training-demos"  # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET"  # REPLACE WITH YOUR BUCKET NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1"  # REPLACE WITH YOUR BUCKET REGION e.g. us-central1

# Do not change these
os.environ["PROJECT"] = PROJECT  # used by the bash cells below
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION

if PROJECT == "cloud-training-demos":
    print("Don't forget to update your PROJECT name! Currently:", PROJECT)

The source dataset

Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account can access it. The table is publicdata.samples.natality.

The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
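If you want a quick look at the raw data first (optional, and not one of the lab tasks), a small exploratory query such as the sketch below previews the columns we will use. Unlike the LIMIT 0 checks later in this lab, this does scan the selected columns.

%%bigquery
-- Optional: preview a handful of rows from the raw natality table
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks
FROM
    publicdata.samples.natality
LIMIT 10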

Create a BigQuery Dataset and Google Cloud Storage Bucket

A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.


In [ ]:
%%bash

## Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w # TODO: Add dataset name)

if [ -n "$datasetexists" ]; then
    echo -e "BigQuery dataset already exists, let's not recreate it."

else
    echo "Creating BigQuery dataset titled: babyweight"
    
    bq --location=US mk --dataset \
        --description "Babyweight" \
        $PROJECT:# TODO: Add dataset name
    echo "Here are your current datasets:"
    bq ls
fi
    
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)

if [ -n "$exists" ]; then
    echo -e "Bucket exists, let's not recreate it."
    
else
    echo "Creating a new GCS bucket."
    gsutil mb -l ${REGION} gs://${BUCKET}
    echo "Here are your current buckets:"
    gsutil ls
fi
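If you get stuck on the two TODO lines in the cell above, here is one possible completion as a sketch, simply substituting the dataset name babyweight:

## One possible completion of the first TODO line (dataset name: babyweight)
datasetexists=$(bq ls -d | grep -w babyweight)

## ...and, inside the else branch, the second TODO line:
bq --location=US mk --dataset \
    --description "Babyweight" \
    $PROJECT:babyweight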

Create the training and evaluation data tables

Since there is already a publicly available dataset, we can simply create the training and evaluation data tables from this raw input data. First we will create a subset of the data, limiting the columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks, applying some simple filtering, and adding a column to hash on for repeatable splitting.

  • Note: The dataset in the CREATE TABLE code below is the one created previously, i.e. "babyweight".

Lab Task #2: Preprocess and filter dataset

We have some preprocessing and filtering we would like to do to get our data in the right format for training.

Preprocessing:

  • Cast is_male from BOOL to STRING
  • Cast plurality from INTEGER to STRING where [1, 2, 3, 4, 5] becomes ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]
  • Add a hashmonth column by hashing on year and month

Filtering:

  • Only want data for years later than 2000
  • Only want baby weights greater than 0
  • Only want mothers whose age is greater than 0
  • Only want plurality to be greater than 0
  • Only want the number of weeks of gestation to be greater than 0

In [ ]:
%%bigquery
CREATE OR REPLACE TABLE
    babyweight.babyweight_data AS
SELECT
    # TODO: Add selected raw features and preprocessed features
FROM
    publicdata.samples.natality
WHERE
    # TODO: Add filters
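For reference, here is a sketch of one possible completion of the query above. It uses FARM_FINGERPRINT over the concatenated year and month as the repeatable hash column, which is one reasonable choice; the solution notebook may differ in details:

%%bigquery
CREATE OR REPLACE TABLE
    babyweight.babyweight_data AS
SELECT
    weight_pounds,
    CAST(is_male AS STRING) AS is_male,
    mother_age,
    CASE
        WHEN plurality = 1 THEN "Single(1)"
        WHEN plurality = 2 THEN "Twins(2)"
        WHEN plurality = 3 THEN "Triplets(3)"
        WHEN plurality = 4 THEN "Quadruplets(4)"
        WHEN plurality = 5 THEN "Quintuplets(5)"
    END AS plurality,
    gestation_weeks,
    -- One choice of repeatable hash: fingerprint the year and month
    FARM_FINGERPRINT(
        CONCAT(
            CAST(year AS STRING),
            CAST(month AS STRING)
        )
    ) AS hashmonth
FROM
    publicdata.samples.natality
WHERE
    year > 2000
    AND weight_pounds > 0
    AND mother_age > 0
    AND plurality > 0
    AND gestation_weeks > 0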

Lab Task #3: Augment dataset to simulate missing data

Now we want to augment our dataset with simulated rows that mimic missing data: we add a copy of the data with all gender information set to Unknown and the plurality of all non-single births set to Multiple(2+), then union it with the original rows.


In [ ]:
%%bigquery
CREATE OR REPLACE TABLE
    babyweight.babyweight_augmented_data AS
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    hashmonth
FROM
    babyweight.babyweight_data
UNION ALL
SELECT
    # TODO: Replace is_male and plurality as indicated above
FROM
    babyweight.babyweight_data
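A sketch of one possible second SELECT for the UNION ALL above (it assumes plurality was already converted to its string form in the previous step):

SELECT
    weight_pounds,
    -- Simulate missing gender information
    "Unknown" AS is_male,
    mother_age,
    -- Collapse all non-single births into one category
    CASE
        WHEN plurality = "Single(1)" THEN plurality
        ELSE "Multiple(2+)"
    END AS plurality,
    gestation_weeks,
    hashmonth
FROM
    babyweight.babyweight_data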

Lab Task #4: Split augmented dataset into train and eval sets

Using hashmonth, apply a modulo to get approximately a 75/25 train/eval split.
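As a sketch of one possible split condition (assuming hashmonth is an integer hash such as FARM_FINGERPRINT; the exact bucketing may differ in the solution notebook), the WHERE clauses of the two queries below could look like:

-- ~75% of rows go to the training table
WHERE
    ABS(MOD(hashmonth, 4)) < 3

-- ~25% of rows go to the evaluation table
WHERE
    ABS(MOD(hashmonth, 4)) = 3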

Split augmented dataset into train dataset

Exercise: Complete and RUN the query to create the training data table.


In [ ]:
%%bigquery
CREATE OR REPLACE TABLE
    babyweight.babyweight_data_train AS
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks
FROM
    babyweight.babyweight_augmented_data
WHERE
    # TODO: Modulo hashmonth to be approximately 75% of the data

Split augmented dataset into eval dataset

Exercise: Complete and RUN the query to create the evaluation data table.


In [ ]:
%%bigquery
CREATE OR REPLACE TABLE
    babyweight.babyweight_data_eval AS
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks
FROM
    babyweight.babyweight_augmented_data
WHERE
    # TODO: Modulo hashmonth to be approximately 25% of the data

Verify table creation

Verify that you created the dataset and training data table.


In [ ]:
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0

In [ ]:
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0

Lab Task #5: Export from BigQuery to CSVs in GCS

Use the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format for later use in TensorFlow/Keras training. We'll use the babyweight dataset we created above and repeat the export for both the training and evaluation tables.


In [ ]:
# Construct a BigQuery client object.
client = bigquery.Client()

dataset_name = # TODO: Add dataset name

# Create dataset reference object
dataset_ref = client.dataset(
    dataset_id=dataset_name, project=client.project)

# Export both train and eval tables
for step in [# TODO: Loop over train and eval]:
    destination_uri = os.path.join(
        "gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
    table_name = "babyweight_data_{}".format(step)
    table_ref = dataset_ref.table(table_name)
    extract_job = client.extract_table(
        table_ref,
        destination_uri,
        # Location must match that of the source table.
        location="US",
    )  # API request
    extract_job.result()  # Waits for job to complete.

    print("Exported {}:{}.{} to {}".format(
        client.project, dataset_name, table_name, destination_uri))
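If you are unsure about the two TODOs in the cell above, here is a minimal sketch of one possible completion. It assumes the dataset created earlier is named babyweight and reuses the BUCKET variable and imports from Lab Task #1:

# Sketch of one possible completion of the TODOs above.
client = bigquery.Client()

dataset_name = "babyweight"  # assumption: the dataset created earlier in this lab
dataset_ref = client.dataset(dataset_id=dataset_name, project=client.project)

# Export both the train and the eval table to sharded CSVs in GCS.
for step in ["train", "eval"]:
    destination_uri = os.path.join(
        "gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
    table_name = "babyweight_data_{}".format(step)
    table_ref = dataset_ref.table(table_name)
    extract_job = client.extract_table(
        table_ref,
        destination_uri,
        location="US",  # must match the source table's location
    )
    extract_job.result()  # wait for the export to finish
    print("Exported {}:{}.{} to {}".format(
        client.project, dataset_name, table_name, destination_uri))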

Verify CSV creation

Verify that we correctly created the CSV files in our bucket.


In [ ]:
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv

In [ ]:
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5

In [ ]:
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5

Lab Summary:

In this lab, we set up our environment, created a BigQuery dataset, preprocessed and augmented the natality dataset, created train and eval tables in BigQuery, and exported data from BigQuery to GCS in CSV format.

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License

