Exporting data from BigQuery to Google Cloud Storage

In this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.


In [ ]:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst

In [ ]:
!pip install tensorflow==2.1 --user

Please ignore any compatibility warnings and errors. Make sure to restart your kernel so that this change takes effect.
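After restarting, you can optionally confirm that the pinned version is active (a quick sketch; assumes the install above succeeded):

In [ ]:
import tensorflow as tf
print(tf.__version__)  # should print 2.1.x after the kernel restart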


In [1]:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT


Your current GCP Project Name is: qwiklabs-gcp-bdc77450c97b4bf6

In [2]:
import os

PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1

# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT + '-ml' # DEFAULT BUCKET WILL BE PROJECT ID -ml

if PROJECT == "your-gcp-project-here":
  print("Don't forget to update your PROJECT name! Currently:", PROJECT)

Create BigQuery dataset and GCS Bucket

If you haven't already, create the BigQuery dataset and GCS bucket we will need.


In [3]:
%%bash

## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)

if [ -n "$datasetexists" ]; then
    echo -e "BigQuery dataset already exists, let's not recreate it."
else
    echo "Creating BigQuery dataset titled: serverlessml"
    bq --location=US mk --dataset \
        --description 'Taxi Fare' \
        $PROJECT:serverlessml
    echo -e "\nHere are your current datasets:"
    bq ls
fi

## Create new ML GCS bucket if it doesn't exist already...
exists=$(gsutil ls | grep -w gs://${PROJECT}-ml/)

if [ -n "$exists" ]; then
    echo -e "Bucket exists, let's not recreate it."
else
    echo "Creating a new GCS bucket."
    gsutil mb -l ${REGION} gs://${PROJECT}-ml
    echo -e "\nHere are your current buckets:"
    gsutil ls
fi


BigQuery dataset already exists, let's not recreate it.
Bucket exists, let's not recreate it.
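If you prefer to stay in Python, the same idempotent setup can be sketched with the BigQuery and Cloud Storage client libraries (a sketch, assuming both client libraries are installed; it reuses PROJECT, REGION, and BUCKET from above):

In [ ]:
import os
from google.cloud import bigquery, storage
from google.api_core.exceptions import Conflict

bq_client = bigquery.Client(project=PROJECT)
try:
    bq_client.create_dataset(f"{PROJECT}.serverlessml")  # same dataset as the bash cell
    print("Created BigQuery dataset: serverlessml")
except Conflict:
    print("BigQuery dataset already exists, let's not recreate it.")

gcs_client = storage.Client(project=PROJECT)
try:
    gcs_client.create_bucket(os.environ["BUCKET"], location=REGION)
    print("Created GCS bucket:", os.environ["BUCKET"])
except Conflict:
    print("Bucket exists, let's not recreate it.")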

Create BigQuery tables

Let's create a table with approximately 1 million examples, sampled by hashing the pickup time and keeping 1 in every 1,000 rows.

Note that the column order is exactly what was in our CSV files.


In [4]:
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS

SELECT
  (tolls_amount + fare_amount) AS fare_amount,
  pickup_datetime,
  pickup_longitude AS pickuplon,
  pickup_latitude AS pickuplat,
  dropoff_longitude AS dropofflon,
  dropoff_latitude AS dropofflat,
  passenger_count*1.0 AS passengers,
  'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
  trip_distance > 0
  AND fare_amount >= 2.5
  AND pickup_longitude > -78
  AND pickup_longitude < -70
  AND dropoff_longitude > -78
  AND dropoff_longitude < -70
  AND pickup_latitude > 37
  AND pickup_latitude < 45
  AND dropoff_latitude > 37
  AND dropoff_latitude < 45
  AND passenger_count > 0


Out[4]:
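As a sanity check, we can count how many rows the 1-in-1,000 hash sample actually produced (a short sketch using the BigQuery Python client; expect roughly 1 million):

In [ ]:
from google.cloud import bigquery

client = bigquery.Client(project=PROJECT)
sql = "SELECT COUNT(*) AS n FROM serverlessml.feateng_training_data"
row = next(iter(client.query(sql).result()))
print(f"Training examples: {row.n:,}")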

Make the validation dataset 1/10 the size of the training dataset by keeping 1 in every 10,000 rows (modulus 10000 instead of 1000).


In [6]:
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_valid_data AS

SELECT
  (tolls_amount + fare_amount) AS fare_amount,
  pickup_datetime,
  pickup_longitude AS pickuplon,
  pickup_latitude AS pickuplat,
  dropoff_longitude AS dropofflon,
  dropoff_latitude AS dropofflat,
  passenger_count*1.0 AS passengers,
  'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
  trip_distance > 0
  AND fare_amount >= 2.5
  AND pickup_longitude > -78
  AND pickup_longitude < -70
  AND dropoff_longitude > -78
  AND dropoff_longitude < -70
  AND pickup_latitude > 37
  AND pickup_latitude < 45
  AND dropoff_latitude > 37
  AND dropoff_latitude < 45
  AND passenger_count > 0


Out[6]:
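Because both tables key off the same FARM_FINGERPRINT hash, they cannot overlap: any validation row has hash % 10000 == 2, which forces hash % 1000 == 2 and therefore never the 1 that the training sample requires. The arithmetic is easy to verify:

In [ ]:
# Any h with h % 10000 == 2 can be written h = 10000*k + 2, so h % 1000 == 2.
# It therefore never satisfies the training filter h % 1000 == 1.
for h in [2, 10002, 54320002]:
    assert h % 10000 == 2 and h % 1000 == 2  # never 1
print("Training and validation samples are disjoint by construction.")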

Export the tables as CSV files

The cell below writes to the BUCKET variable configured earlier in this notebook; make sure it matches a bucket that you own.


In [8]:
%%bash
OUTDIR=gs://$BUCKET/quests/serverlessml/data
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR

echo "Extracting training data to $OUTDIR"
bq --location=US extract \
   --destination_format CSV  \
   --field_delimiter "," --noprint_header \
   serverlessml.feateng_training_data \
   $OUTDIR/taxi-train-*.csv

echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
   --destination_format CSV  \
   --field_delimiter "," --noprint_header \
   serverlessml.feateng_valid_data \
   $OUTDIR/taxi-valid-*.csv

gsutil ls -l $OUTDIR


Deleting current contents of gs://qwiklabs-gcp-bdc77450c97b4bf6-ml/quests/serverlessml/data
Extracting training data to gs://qwiklabs-gcp-bdc77450c97b4bf6-ml/quests/serverlessml/data

Extracting validation data to gs://qwiklabs-gcp-bdc77450c97b4bf6-ml/quests/serverlessml/data

  88345235  2019-09-23T03:22:05Z  gs://qwiklabs-gcp-bdc77450c97b4bf6-ml/quests/serverlessml/data/taxi-train-000000000000.csv
   8725746  2019-09-23T03:22:15Z  gs://qwiklabs-gcp-bdc77450c97b4bf6-ml/quests/serverlessml/data/taxi-valid-000000000000.csv
TOTAL: 2 objects, 97070981 bytes (92.57 MiB)
CommandException: 1 files/objects could not be removed.
Waiting on bqjob_r44a811fe5f8fd083_0000016d5c241702_1 ... (2s) Current status: DONE    
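If you'd rather run the extract from Python, a rough equivalent of the bash cell above uses the BigQuery client's extract_table (a sketch; it mirrors the same flags and writes to the same output path):

In [ ]:
import os
from google.cloud import bigquery

client = bigquery.Client(project=PROJECT)
job_config = bigquery.ExtractJobConfig(
    destination_format="CSV", field_delimiter=",", print_header=False)
outdir = f"gs://{os.environ['BUCKET']}/quests/serverlessml/data"
client.extract_table("serverlessml.feateng_training_data",
                     f"{outdir}/taxi-train-*.csv",
                     location="US", job_config=job_config).result()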

In [9]:
!gsutil cat gs://$BUCKET/quests/serverlessml/data/taxi-train-000000000000.csv | head -2


52,2015-02-07 23:10:27 UTC,-73.781852722167969,40.644840240478516,-73.967453002929688,40.771881103515625,2,unused
57.33,2015-02-15 12:22:12 UTC,-73.98321533203125,40.738700866699219,-73.78955078125,40.642852783203125,2,unused
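Because --noprint_header suppressed the header row, any code that reads these files must supply the column names itself, in exactly the order of the SELECT statements above (a sketch of the list the Keras input pipeline would use):

In [ ]:
# Column order must match the SELECT that created the tables.
CSV_COLUMNS = ["fare_amount", "pickup_datetime",
               "pickuplon", "pickuplat",
               "dropofflon", "dropofflat",
               "passengers", "key"]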

Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.