Learning Objectives
In this notebook, we'll read data from BigQuery into our notebook and preprocess a small, repeatable sample of it in a Pandas dataframe.
We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe.
Check that the Google BigQuery library is installed and if not, install it.
In [ ]:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
In [1]:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
Import necessary libraries.
In [1]:
from google.cloud import bigquery
import pandas as pd
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
In [ ]:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
In [3]:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
In [4]:
bq = bigquery.Client(project = PROJECT)
We need to figure out the right way to divide our hash values to get our desired splits. To do that, we define a modulo divisor and the percentage of buckets to assign to the train and eval splits. Feel free to play around with these values to get the combination you want.
In [5]:
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
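Whatever is left over after the train and eval buckets implicitly becomes the test split. Below is a minimal sketch, using only the variables defined above, that prints the implied bucket ranges (nothing here touches BigQuery; test_buckets is a name introduced just for this check).
In [ ]:
# Sketch: derive the implied test bucket count and print the bucket ranges.
# Uses only the variables defined above; no BigQuery calls are made.
test_buckets = modulo_divisor - train_buckets - eval_buckets
print("train buckets: [0, {})".format(train_buckets))
print("eval buckets: [{}, {})".format(train_buckets, train_buckets + eval_buckets))
print("test buckets: [{}, {})".format(train_buckets + eval_buckets, modulo_divisor))
print("implied split: {}/{}/{}".format(train_buckets, eval_buckets, test_buckets))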
We can make a series of queries to check whether our bucketing values result in the correct sizes for each of our dataset splits, and then adjust accordingly. To keep our code compact and reusable, let's define a function that returns the head of a dataframe produced from a query, up to a certain number of rows.
In [6]:
def display_dataframe_head_from_query(query, count=10):
    """Displays count rows from dataframe head from query.

    Args:
        query: str, query to be run on BigQuery, results stored in dataframe.
        count: int, number of results from head of dataframe to display.
    Returns:
        Dataframe head with count number of results.
    """
    df = bq.query(
        query + " LIMIT {limit}".format(
            limit=count)).to_dataframe()

    return df.head(count)
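Note that the helper appends its own LIMIT clause, so it is meant for queries that don't already end in one. As a quick, optional sanity check, here is a minimal usage sketch with a trivial query (test_value is just an illustrative alias).
In [ ]:
# Sketch: quick check that the helper runs; it appends " LIMIT 1" to the query.
display_dataframe_head_from_query("SELECT 1 AS test_value", count=1)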
For our first query, we're going to use the query below to get our label, features, and the columns to combine into the hash that we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values look like. We will need to include all of these extra columns in the hash to get a fairly uniform spread of the data. Feel free to hash on fewer or more columns and see how it changes your results.
In [7]:
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
Out[7]:
Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null value. To be precise, the CASE WHEN above would become COALESCE(day, wday, 0) AS date. You can read more about COALESCE in the BigQuery documentation.
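If you want to convince yourself of the equivalence, here is an optional sketch of a verification query; coalesce_check_query is a name introduced here for illustration and is not part of the lab flow (note that it scans the natality table).
In [ ]:
# Sketch: count rows where COALESCE(day, wday, 0) disagrees with the nested
# CASE WHEN above. We expect num_mismatches to be 0.
coalesce_check_query = """
SELECT
    COUNTIF(
        COALESCE(day, wday, 0) !=
        CASE
            WHEN day IS NULL THEN
                CASE WHEN wday IS NULL THEN 0 ELSE wday END
            ELSE day
        END) AS num_mismatches
FROM
    publicdata.samples.natality
WHERE
    year > 2000
"""
display_dataframe_head_from_query(coalesce_check_query, count=1)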
The next query will combine our hash columns, leaving us with just our label, features, and hash values.
In [8]:
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
Out[8]:
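FARM_FINGERPRINT itself runs inside BigQuery, but the idea is simply deterministic hashing followed by a modulo: the same row always produces the same hash, so it always lands in the same split. Below is a minimal local illustration of that property using Python's hashlib instead of FarmHash (so the actual values differ from BigQuery's); example_bucket is a hypothetical helper introduced only for this illustration.
In [ ]:
# Illustration only: deterministic hashing + modulo gives a repeatable split.
# hashlib stands in for FARM_FINGERPRINT here, so the hash values themselves
# differ from BigQuery's, but the repeatability property is the same.
import hashlib

def example_bucket(year, month, date, state, mother_birth_state, buckets=100):
    key = "{}{}{}{}{}".format(year, month, date, state, mother_birth_state)
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

# The same inputs always land in the same bucket, no matter when this runs.
print(example_bucket(2005, 7, 14, "CA", "CA"))
print(example_bucket(2005, 7, 14, "CA", "CA"))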
The next query is going to find the counts of each of the unique 657484 hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY.
In [9]:
# Get the counts of each of the unique hashs of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
Out[9]:
The query below performs a second layer of bucketing: for each bucket index, we count the number of records.
In [10]:
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
Out[10]:
Raw record counts make it hard to see the split at a glance, so in the next query we will normalize each count into the percentage of the data that falls in each hash bucket.
In [11]:
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
Out[11]:
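The same normalization can be sanity-checked in pandas once the bucket counts are in a dataframe. Here is a minimal sketch that reuses second_bucketing_query from above (bucket_df is a name introduced just for this check); the percentages should sum to approximately 1.0.
In [ ]:
# Sketch: compute per-bucket percentages in pandas instead of SQL and confirm
# that they sum to (approximately) 1.0.
bucket_df = bq.query(second_bucketing_query).to_dataframe()
bucket_df["percent_records"] = (
    bucket_df["num_records"] / bucket_df["num_records"].sum())
print(bucket_df["percent_records"].sum())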
We'll now select the range of buckets to be used in training.
In [12]:
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
Out[12]:
We'll do the same by selecting the range of buckets to be used for evaluation.
In [13]:
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
Out[13]:
Lastly, we'll select the hash buckets to be used for the test split.
In [14]:
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
Out[14]:
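Before unioning the splits, it can be worth double-checking that the three bucket ranges are disjoint and together cover every bucket index exactly once. A minimal sketch using only the variables defined earlier (train_range, eval_range, and test_range are names introduced here):
In [ ]:
# Sketch: verify that the train/eval/test bucket ranges are disjoint and
# together cover all bucket indices from 0 to modulo_divisor - 1.
train_range = set(range(0, train_buckets))
eval_range = set(range(train_buckets, train_buckets + eval_buckets))
test_range = set(range(train_buckets + eval_buckets, modulo_divisor))
assert not (train_range & eval_range or eval_range & test_range or train_range & test_range)
assert train_range | eval_range | test_range == set(range(modulo_divisor))
print("Bucket ranges are disjoint and cover all {} buckets.".format(modulo_divisor))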
In the query below, we'll UNION ALL all of the datasets together so that all three sets of hash buckets are within one table. We add dataset_id so that we can sort on it in the following query.
In [15]:
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
Out[15]:
Lastly, we'll show the final split between the train, eval, and test sets. We can see both the number of records and the percentage of the total data in each split. It is really close to the 80/10/10 split we were hoping to get.
In [16]:
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
Out[16]:
Now that we know our splitting values produce a good global split of the data, here's a way to take a well-distributed subsample of each global split so that the train/eval/test sets still do not overlap.
In [33]:
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
    """Creates a dataframe with a sample of a data split.

    Args:
        query_string: str, query to run to generate splits.
        splitting_string: str, modulo string to split by.
        lo: float, lower bound for bucket filtering for split.
        up: float, upper bound for bucket filtering for split.
    Returns:
        Dataframe containing data split sample.
    """
    query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
        query_string, splitting_string, int(lo), int(up))

    df = bq.query(query).to_dataframe()

    return df

train_df = create_data_split_sample_df(
    data_query, splitting_string,
    lo=0, up=train_percent)

eval_df = create_data_split_sample_df(
    data_query, splitting_string,
    lo=train_percent, up=train_percent + eval_percent)

test_df = create_data_split_sample_df(
    data_query, splitting_string,
    lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of an ultrasound. That is, we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than one child, we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.
Let's start by examining the training dataset as is.
In [34]:
train_df.head()
Out[34]:
Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't include missing data).
In [35]:
train_df.describe()
Out[35]:
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model, so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are good predictors and because they are easy enough to collect.
In [36]:
def preprocess(df):
    """Preprocess pandas dataframe for augmented babyweight data.

    Args:
        df: Dataframe containing raw babyweight data.
    Returns:
        Pandas dataframe containing preprocessed raw babyweight data as well
        as simulated no ultrasound data masking some of the original data.
    """
    # Clean up raw data
    # Filter out what we don't want to use for training
    df = df[df.weight_pounds > 0]
    df = df[df.mother_age > 0]
    df = df[df.gestation_weeks > 0]
    df = df[df.plurality > 0]

    # Modify plurality field to be a string
    twins_etc = dict(zip([1, 2, 3, 4, 5],
                         ["Single(1)",
                          "Twins(2)",
                          "Triplets(3)",
                          "Quadruplets(4)",
                          "Quintuplets(5)"]))
    df["plurality"].replace(twins_etc, inplace=True)

    # Clone data and mask certain columns to simulate lack of ultrasound
    no_ultrasound = df.copy(deep=True)

    # Modify is_male
    no_ultrasound["is_male"] = "Unknown"

    # Modify plurality
    condition = no_ultrasound["plurality"] != "Single(1)"
    no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"

    # Concatenate both datasets together and shuffle
    return pd.concat(
        [df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
Let's process the train/eval/test sets and look at a small sample of the training data after our preprocessing:
In [37]:
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
In [38]:
train_df.head()
Out[38]:
In [39]:
train_df.tail()
Out[39]:
Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up.
In [40]:
train_df.describe()
Out[40]:
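Since describe() only covers numeric columns, a quick way to confirm the augmentation worked is to look at the value counts of the categorical fields. A minimal sketch; after preprocessing we expect to see Unknown in is_male and Multiple(2+) in plurality.
In [ ]:
# Sketch: check the categorical columns that describe() does not show.
print(train_df["is_male"].value_counts())
print(train_df["plurality"].value_counts())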
In the final versions, we want to read from files, not Pandas dataframes, so we write the Pandas dataframes out as CSV files. Using CSV files gives us the advantage of shuffling during read. This is important for distributed training, because some workers might be slower than others, and shuffling the data helps prevent the same data from always being assigned to the slow workers.
In [41]:
# Define columns
columns = ["weight_pounds",
           "is_male",
           "mother_age",
           "plurality",
           "gestation_weeks"]

# Write out CSV files
train_df.to_csv(
    path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
    path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
    path_or_buf="test.csv", columns=columns, header=False, index=False)
In [42]:
%%bash
wc -l *.csv
In [43]:
%%bash
head *.csv
In [44]:
%%bash
tail *.csv
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
In [ ]: