Creating a Sampled Dataset

Learning Objectives

  • Sample the natality dataset to create train/eval/test sets
  • Preprocess the data in a Pandas DataFrame

Introduction

In this notebook, we'll read data from BigQuery and preprocess it within a Pandas DataFrame.


In [ ]:
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
BUCKET = "cloud-training-bucket"  # Replace with your BUCKET
REGION = "us-central1"            # Choose an available region for Cloud MLE
TFVERSION = "1.14"                # TF version for CMLE to use

In [ ]:
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION

In [ ]:
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
    gsutil mb -l ${REGION} gs://${BUCKET}
fi

Create ML datasets by sampling using BigQuery

We'll begin by sampling the BigQuery data to create smaller datasets.
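
The split will be keyed on a hash of the date and state fields. As a minimal illustration of why hashing gives a repeatable split (this toy example uses Python's built-in hashlib, not BigQuery's FARM_FINGERPRINT), note that the same key always lands in the same bucket, so split membership never changes between runs:


In [ ]:
import hashlib

def bucket(key, num_buckets=100):
    # Deterministically map a string key to one of num_buckets buckets
    hashvalue = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return hashvalue % num_buckets

print(bucket("2005-7-CA"))  # same bucket every time this is run
print(bucket("2005-7-CA"))  # identical to the line above
print(bucket("2006-1-NY"))  # a different key usually lands in a different bucket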


In [ ]:
# Create SQL query using natality data after the year 2000
query_string = """
WITH
  CTE_hash_cols_fixed AS (
  SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    year,
    month,
    CASE
      WHEN day IS NULL AND wday IS NULL THEN 0
      WHEN day IS NULL THEN wday
      ELSE day
    END AS date,
    IFNULL(state,
      "Unknown") AS state,
    IFNULL(mother_birth_state,
      "Unknown") AS mother_birth_state
  FROM
    publicdata.samples.natality
  WHERE
    year > 2000)

SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING))) AS hashvalues
FROM
  CTE_hash_cols_fixed
"""

There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are.

We'll call BigQuery, but group by the hash column and count the number of records in each group. This will enable us to get the correct train/eval/test percentages.


In [ ]:
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)

df = bq.query("SELECT hashvalues, COUNT(weight_pounds) AS num_babies FROM (" 
              + query_string + 
              ") GROUP BY hashvalues").to_dataframe()

print("There are {} unique hashvalues.".format(len(df)))
df.head()

We can run a query to check whether our bucketing values result in the correct sizes for each of our dataset splits, and then adjust accordingly.


In [ ]:
sampling_percentages_query = """
WITH
  -- Get the label, the features, and the columns we will hash to split into buckets
  CTE_hash_cols_fixed AS (
  SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    year,
    month,
    CASE
      WHEN day IS NULL AND wday IS NULL THEN 0
      WHEN day IS NULL THEN wday
      ELSE day
    END AS date,
    IFNULL(state,
      "Unknown") AS state,
    IFNULL(mother_birth_state,
      "Unknown") AS mother_birth_state
  FROM
    publicdata.samples.natality
  WHERE
    year > 2000),
  CTE_data AS (
  SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING))) AS hashvalues
  FROM
    CTE_hash_cols_fixed),
  -- Get the counts of each of the unique hashes of our splitting column
  CTE_first_bucketing AS (
  SELECT
    hashvalues,
    COUNT(*) AS num_records
  FROM
    CTE_data
  GROUP BY
    hashvalues ),
  -- Get the number of records in each of the hash buckets
  CTE_second_bucketing AS (
  SELECT
    ABS(MOD(hashvalues, {0})) AS bucket_index,
    SUM(num_records) AS num_records
  FROM
    CTE_first_bucketing
  GROUP BY
    ABS(MOD(hashvalues, {0}))),
  -- Calculate the overall percentages
  CTE_percentages AS (
  SELECT
    bucket_index,
    num_records,
    CAST(num_records AS FLOAT64) / (
    SELECT
      SUM(num_records)
    FROM
      CTE_second_bucketing) AS percent_records
  FROM
    CTE_second_bucketing ),
  -- Choose which of the hash buckets will be used for training and pull in their statistics
  CTE_train AS (
  SELECT
    *,
    "train" AS dataset_name
  FROM
    CTE_percentages
  WHERE
    bucket_index >= 0
    AND bucket_index < {1}),
  -- Choose which of the hash buckets will be used for validation and pull in their statistics
  CTE_eval AS (
  SELECT
    *,
    "eval" AS dataset_name
  FROM
    CTE_percentages
  WHERE
    bucket_index >= {1}
    AND bucket_index < {2}),
  -- Choose which of the hash buckets will be used for testing and pull in their statistics
  CTE_test AS (
  SELECT
    *,
    "test" AS dataset_name
  FROM
    CTE_percentages
  WHERE
    bucket_index >= {2}
    AND bucket_index < {0}),
  -- Union the training, validation, and testing dataset statistics
  CTE_union AS (
  SELECT
    0 AS dataset_id,
    *
  FROM
    CTE_train
  UNION ALL
  SELECT
    1 AS dataset_id,
    *
  FROM
    CTE_eval
  UNION ALL
  SELECT
    2 AS dataset_id,
    *
  FROM
    CTE_test ),
  -- Show final splitting and associated statistics
  CTE_split AS (
  SELECT
    dataset_id,
    dataset_name,
    SUM(num_records) AS num_records,
    SUM(percent_records) AS percent_records
  FROM
    CTE_union
  GROUP BY
    dataset_id,
    dataset_name )
SELECT
  *
FROM
  CTE_split
ORDER BY
    dataset_id
"""

modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0

train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)

df = bq.query(sampling_percentages_query.format(modulo_divisor, train_buckets, train_buckets + eval_buckets)).to_dataframe()
df.head()
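
Since the query returns one row per split, the DataFrame above already shows all three. As an optional quick check (a small addition, not part of the query itself), we can confirm that the split percentages sum to roughly 1.0:


In [ ]:
# Optional check: the train/eval/test percentages should sum to (approximately) 1.0
print(df[["dataset_name", "num_records", "percent_records"]])
print("Total percent_records: {:.4f}".format(df["percent_records"].sum()))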

Here's a way to get a well-distributed sample of the data such that the train/eval/test sets do not overlap.


In [ ]:
# every_n lets us subsample from each of the hash values to get approximately the record counts we want
every_n = 500

train_query = "SELECT * FROM ({0}) WHERE ABS(MOD(hashvalues, {1} * 100)) < 80".format(query_string, every_n)
eval_query = "SELECT * FROM ({0}) WHERE ABS(MOD(hashvalues, {1} * 100)) >= 80 AND ABS(MOD(hashvalues, {1} * 100)) < 90".format(query_string, every_n)
test_query = "SELECT * FROM ({0}) WHERE ABS(MOD(hashvalues, {1} * 100)) >= 90 AND ABS(MOD(hashvalues, {1} * 100)) < 100".format(query_string, every_n)

train_df = bq.query(train_query).to_dataframe()
eval_df = bq.query(eval_query).to_dataframe()
test_df = bq.query(test_query).to_dataframe()

print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
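
As a quick sanity check (a minimal sketch, not required for the rest of the notebook), we can confirm that the three splits share no hash values, since each hash value maps to exactly one modulo bucket:


In [ ]:
# Each hashvalues entry falls into exactly one modulo bucket, so the
# intersections between the splits should all be empty.
train_hashes = set(train_df["hashvalues"])
eval_hashes = set(eval_df["hashvalues"])
test_hashes = set(test_df["hashvalues"])

print("train/eval overlap: {}".format(len(train_hashes & eval_hashes)))
print("train/test overlap: {}".format(len(train_hashes & test_hashes)))
print("eval/test overlap: {}".format(len(eval_hashes & test_hashes)))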

Preprocess data using Pandas

We'll perform a few preprocessing steps on the data. Let's add extra rows to simulate the lack of an ultrasound: we'll duplicate some rows and set the is_male field to Unknown. Also, if there is more than one child, we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.

Let's start by examining the training dataset as is.


In [ ]:
train_df.head()

Also, notice that some very important numeric fields are missing in some rows (the count row in the Pandas describe() output excludes missing values).


In [ ]:
train_df.describe()
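
To see the missing values directly (an optional check), we can count the nulls per column; the count row of describe() simply excludes them:


In [ ]:
# Count missing values per column; rows with missing values in the columns we
# filter on are dropped by the preprocessing step below.
train_df.isnull().sum()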

It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our prediction service won't work. The features we use for our model were chosen because they are good predictors and because they are easy enough to collect.


In [ ]:
import pandas as pd

def preprocess(df):
    # Clean up data
    # Remove rows we don't want to use for training
    df = df[df.weight_pounds > 0]
    df = df[df.mother_age > 0]
    df = df[df.gestation_weeks > 0]
    df = df[df.plurality > 0]

    # Modify plurality field to be a string
    twins_etc = dict(zip([1,2,3,4,5],
                   ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]))
    df["plurality"].replace(twins_etc, inplace = True)

    # Now create extra rows to simulate lack of ultrasound
    no_ultrasound = df.copy(deep = True)
    no_ultrasound.loc[no_ultrasound["plurality"] != "Single(1)", "plurality"] = "Multiple(2+)"
    no_ultrasound["is_male"] = "Unknown"

    # Concatenate both datasets together and shuffle
    return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
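
To make the effect of preprocess() concrete, here is a tiny, entirely hypothetical DataFrame (the values are made up for illustration). Each input row comes back twice: once as-is with plurality converted to a string, and once as a simulated no-ultrasound row with is_male set to "Unknown" and any multiple birth collapsed to "Multiple(2+)":


In [ ]:
toy_df = pd.DataFrame({
    "weight_pounds":   [7.5, 6.2],
    "is_male":         [True, False],
    "mother_age":      [28, 33],
    "plurality":       [1, 2],
    "gestation_weeks": [39, 37],
})

# Returns 4 rows: the two originals ("Single(1)" and "Twins(2)") plus two
# copies with is_male = "Unknown" and plurality "Single(1)"/"Multiple(2+)"
preprocess(toy_df)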

Let's process the train/eval/test sets and see a small sample of the training data after our preprocessing:


In [ ]:
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)

In [ ]:
train_df.head()

In [ ]:
train_df.tail()

Let's look again at a summary of the dataset. Note that describe() only shows numeric columns, so the string-valued plurality column no longer appears.


In [ ]:
train_df.describe()
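
As an optional check, we can look at the string-valued plurality column's category counts directly:


In [ ]:
# Distribution of the plurality categories after preprocessing
train_df["plurality"].value_counts()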

Write to .csv files

In the final version of our pipeline, we want to read from files, not Pandas DataFrames, so we write the DataFrames out as CSV files. Using CSV files gives us the advantage of shuffling during the read. This is important for distributed training because some workers might be slower than others, and shuffling helps prevent the same data from always being assigned to the slow workers.


In [ ]:
columns = "weight_pounds,is_male,mother_age,plurality,gestation_weeks".split(',')
train_df.to_csv(path_or_buf = "train.csv", columns = columns, header = False, index = False)
eval_df.to_csv(path_or_buf = "eval.csv", columns = columns, header = False, index = False)
test_df.to_csv(path_or_buf = "test.csv", columns = columns, header = False, index = False)

In [ ]:
%%bash
wc -l *.csv

In [ ]:
%%bash
head *.csv

In [ ]:
%%bash
tail *.csv
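
These CSV files will be consumed by the training input pipeline in later labs. As a minimal sketch of what reading them with shuffling might look like (assuming TensorFlow 1.x's tf.data API; this code is illustrative and not part of this notebook's required steps):


In [ ]:
import tensorflow as tf  # TF 1.x, matching TFVERSION above

CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]

def decode_csv(line):
    # Parse one CSV line into a dict of features plus the label (weight_pounds)
    fields = tf.decode_csv(line, record_defaults=DEFAULTS)
    features = dict(zip(CSV_COLUMNS, fields))
    label = features.pop("weight_pounds")
    return features, label

def make_input_dataset(file_pattern, batch_size=512):
    # Read the CSV shards, shuffle during the read, and batch for training
    dataset = tf.data.Dataset.list_files(file_pattern)
    dataset = dataset.flat_map(tf.data.TextLineDataset)
    dataset = dataset.map(decode_csv)
    return dataset.shuffle(buffer_size=10000).repeat().batch(batch_size)

# Example usage: make_input_dataset("train.csv")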

Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License