In [ ]:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
In [ ]:
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
In [ ]:
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
    gsutil mb -l ${REGION} gs://${BUCKET}
fi
In [ ]:
# Create SQL query using natality data after the year 2000
query_string = """
WITH
CTE_hash_cols_fixed AS (
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL AND wday IS NULL THEN 0
WHEN day IS NULL THEN wday
ELSE day
END AS date,
IFNULL(state,
"Unknown") AS state,
IFNULL(mother_birth_state,
"Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000)
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING))) AS hashvalues
FROM
CTE_hash_cols_fixed
"""
There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are.
We'll call BigQuery, group by the hash column, and count the number of records in each group. This will enable us to get the correct train/eval/test percentages.
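To build intuition for this kind of hash-based splitting, here is a minimal sketch in pure Python. MD5 is used here only as a stand-in for BigQuery's FARM_FINGERPRINT (the two produce different values; just the deterministic-bucketing idea carries over), and the 80/10/10 boundaries are the ones we'll use below:
In [ ]:
import hashlib

def hash_bucket(key, num_buckets=100):
    # Deterministically map a key to one of num_buckets buckets, analogous to
    # ABS(MOD(FARM_FINGERPRINT(key), num_buckets)) in BigQuery
    fingerprint = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return fingerprint % num_buckets

# The same key always lands in the same bucket, so the split is repeatable
# across runs, unlike a split based on RAND()
key = "2001" + "7" + "3" + "CA" + "Unknown"  # year, month, date, state, mother_birth_state
bucket = hash_bucket(key)
split = "train" if bucket < 80 else ("eval" if bucket < 90 else "test")
print(bucket, split)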
In [ ]:
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
df = bq.query(
    "SELECT hashvalues, COUNT(weight_pounds) AS num_babies FROM ("
    + query_string
    + ") GROUP BY hashvalues").to_dataframe()
print("There are {} unique hashvalues.".format(len(df)))
df.head()
We can run a query to check whether our bucketing results in the correct sizes for each of our dataset splits, and then adjust accordingly.
In [ ]:
sampling_percentages_query = """
WITH
-- Get the label, the features, and the column that we will split into buckets on
CTE_hash_cols_fixed AS (
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL AND wday IS NULL THEN 0
WHEN day IS NULL THEN wday
ELSE day
END AS date,
IFNULL(state,
"Unknown") AS state,
IFNULL(mother_birth_state,
"Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000),
CTE_data AS (
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING))) AS hashvalues
FROM
CTE_hash_cols_fixed),
-- Get the counts of each of the unique hashes of our splitting column
CTE_first_bucketing AS (
SELECT
hashvalues,
COUNT(*) AS num_records
FROM
CTE_data
GROUP BY
hashvalues ),
-- Get the number of records in each of the hash buckets
CTE_second_bucketing AS (
SELECT
ABS(MOD(hashvalues, {0})) AS bucket_index,
SUM(num_records) AS num_records
FROM
CTE_first_bucketing
GROUP BY
ABS(MOD(hashvalues, {0}))),
-- Calculate the overall percentages
CTE_percentages AS (
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
CTE_second_bucketing) AS percent_records
FROM
CTE_second_bucketing ),
-- Choose which of the hash buckets will be used for training and pull in their statistics
CTE_train AS (
SELECT
*,
"train" AS dataset_name
FROM
CTE_percentages
WHERE
bucket_index >= 0
AND bucket_index < {1}),
-- Choose which of the hash buckets will be used for validation and pull in their statistics
CTE_eval AS (
SELECT
*,
"eval" AS dataset_name
FROM
CTE_percentages
WHERE
bucket_index >= {1}
AND bucket_index < {2}),
-- Choose which of the hash buckets will be used for testing and pull in their statistics
CTE_test AS (
SELECT
*,
"test" AS dataset_name
FROM
CTE_percentages
WHERE
bucket_index >= {2}
AND bucket_index < {0}),
-- Union the training, validation, and testing dataset statistics
CTE_union AS (
SELECT
0 AS dataset_id,
*
FROM
CTE_train
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
CTE_eval
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
CTE_test ),
-- Show final splitting and associated statistics
CTE_split AS (
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
CTE_union
GROUP BY
dataset_id,
dataset_name )
SELECT
*
FROM
CTE_split
ORDER BY
dataset_id
"""
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
df = bq.query(
    sampling_percentages_query.format(
        modulo_divisor,
        train_buckets,
        train_buckets + eval_buckets)).to_dataframe()
df.head()
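As a quick sanity check on how the placeholders resolve: {0} is the number of buckets, {1} the train/eval boundary, and {2} the eval/test boundary, so with the values above the splits map to buckets [0, 80), [80, 90), and [90, 100).
In [ ]:
# Sanity check: with modulo_divisor=100, train_percent=80.0 and eval_percent=10.0,
# train buckets are [0, 80), eval buckets [80, 90), and test buckets [90, 100)
assert (train_buckets, train_buckets + eval_buckets) == (80, 90)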
In [ ]:
# Added every_n so that we can now subsample from each of the hash values to get approximately the record counts we want
every_n = 500
train_query = # TODO: Your code goes here
eval_query = # TODO: Your code goes here
test_query = # TODO: Your code goes here
train_df = # TODO: Your code goes here
eval_df = # TODO: Your code goes here
test_df = # TODO: Your code goes here
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of an ultrasound. That is, we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than one child, we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.
Let's start by examining the training dataset as is.
In [ ]:
train_df.head()
Also, notice that some very important numeric fields are missing in some rows (the count statistic in Pandas does not include missing data).
In [ ]:
train_df.describe()
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a preprocess
function below. Note that the mother's age is an input to our model, so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are good predictors and because they are easy enough to collect.
The code cell below has some TODOs for you to complete.
In the first block of TODOs, we'll clean the data so that:
- weight_pounds is always positive
- mother_age is always positive
- gestation_weeks is always positive
- plurality is always positive

The next block of TODOs will create extra rows to simulate the lack of ultrasound information. That is, we'll make a copy of the dataframe and call it no_ultrasound. Then, use Pandas functionality to make two changes in place to no_ultrasound:
1. Change the plurality value of no_ultrasound to 'Multiple(2+)' whenever the plurality is not 'Single(1)'.
2. Change the is_male value of no_ultrasound to 'Unknown'.
In [ ]:
import pandas as pd

def preprocess(df):
    # Clean up data
    # Remove what we don't want to use for training
    df = # TODO: Your code goes here
    df = # TODO: Your code goes here
    df = # TODO: Your code goes here
    df = # TODO: Your code goes here

    # Modify plurality field to be a string
    twins_etc = dict(zip([1, 2, 3, 4, 5],
                         ["Single(1)", "Twins(2)", "Triplets(3)",
                          "Quadruplets(4)", "Quintuplets(5)"]))
    df["plurality"].replace(twins_etc, inplace=True)

    # Now create extra rows to simulate lack of ultrasound
    no_ultrasound = df.copy(deep=True)
    # TODO: Your code goes here
    # TODO: Your code goes here

    # Concatenate both datasets together and shuffle
    return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
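If you want to check your work, here is one possible completed version of the preprocess function (a sketch consistent with the steps described above; other solutions are equally valid):
In [ ]:
import pandas as pd

def preprocess(df):
    # Keep only rows where the label and the key numeric features are positive
    df = df[df.weight_pounds > 0]
    df = df[df.mother_age > 0]
    df = df[df.gestation_weeks > 0]
    df = df[df.plurality > 0]

    # Modify plurality field to be a string
    twins_etc = dict(zip([1, 2, 3, 4, 5],
                         ["Single(1)", "Twins(2)", "Triplets(3)",
                          "Quadruplets(4)", "Quintuplets(5)"]))
    df["plurality"].replace(twins_etc, inplace=True)

    # Create extra rows to simulate lack of ultrasound: without an ultrasound we
    # would not know the sex and would only know "single" vs. "multiple" births
    no_ultrasound = df.copy(deep=True)
    no_ultrasound.loc[no_ultrasound["plurality"] != "Single(1)", "plurality"] = "Multiple(2+)"
    no_ultrasound["is_male"] = "Unknown"

    # Concatenate both datasets together and shuffle
    return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True)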
Let's process the train/eval/test sets and see a small sample of the training data after our preprocessing:
In [ ]:
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
In [ ]:
train_df.head()
In [ ]:
train_df.tail()
Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up.
In [ ]:
train_df.describe()
In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as CSV files. Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
Complete the code in the cell below to write the three Pandas dataframes you made above to CSV files. Have a look at the documentation for .to_csv
to remind yourself of its usage. Remove hashvalues
from the data since we will not be using it in training, so there is no need to move around extra data.
In [ ]:
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
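One possible completion (a sketch): pass a columns list to .to_csv to drop hashvalues, and write without a header or index. The file names train.csv, eval.csv, and test.csv are our own choice; any names matching the *.csv globs in the checks below will work.
In [ ]:
# One possible completion: keep every column except hashvalues
columns = [col for col in train_df.columns if col != "hashvalues"]

train_df.to_csv(path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(path_or_buf="test.csv", columns=columns, header=False, index=False)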
Check your work above by inspecting the files you made.
In [ ]:
%%bash
wc -l *.csv
In [ ]:
%%bash
head *.csv
In [ ]:
%%bash
tail *.csv
Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License