Reusable Embeddings

Learning Objectives

  1. Learn how to use pre-trained TF Hub text modules to generate sentence vectors
  2. Learn how to incorporate a pre-trained TF-Hub module into a Keras model
  3. Learn how to deploy and use a text model on Cloud AI Platform (CAIP)

Introduction

In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset.

First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models whose first layer is a pre-trained TF-Hub module. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models: the pre-trained layer takes care of that for us and consumes raw text directly. However, we will still have to one-hot encode each of the 3 classes into a 3-dimensional basis vector (for example, the class with index 1 becomes the vector [0, 1, 0]).

Then we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers.


In [ ]:
import os

from google.cloud import bigquery
import pandas as pd

In [ ]:
%load_ext google.cloud.bigquery

Replace the variable values in the cell below:


In [ ]:
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
BUCKET = PROJECT  # defaults to PROJECT
REGION = "us-central1"  # Replace with your REGION
SEED = 0

Create a Dataset from BigQuery

Hacker News headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.

Here is a sample of the dataset:


In [ ]:
%%bigquery --project $PROJECT

SELECT
    url, title, score
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    LENGTH(title) > 10
    AND score > 10
    AND LENGTH(url) > 0
LIMIT 10

Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the URL is http://mobile.nytimes.com/...., we want to be left with nytimes.


In [ ]:
%%bigquery --project $PROJECT

SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    COUNT(title) AS num_articles
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    AND LENGTH(title) > 10
GROUP BY
    source
ORDER BY num_articles DESC
  LIMIT 100

Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.


In [ ]:
regex = '.*://(.[^/]+)/'


sub_query = """
SELECT
    title,
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
    
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
    AND LENGTH(title) > 10
""".format(regex)


query = """
SELECT 
    LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
    source
FROM
  ({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)

print(query)

For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML, however, figures out on its own how to create these splits, so we won't need to do that here.


In [ ]:
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()

AutoML for text classification requires that

  • the dataset be in CSV form,
  • the first column contain the texts to classify (or a GCS path to the text), and
  • the last column contain the text labels.

The dataset we pulled from BigQuery satisfies these requirements.
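
For illustration, a couple of rows of such a CSV would look like the following (hypothetical titles; note that there is no header row):

    how we scaled our continuous integration pipeline,github
    startup raises $20m to reinvent email,techcrunch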


In [ ]:
print("The full dataset contains {n} titles".format(n=len(title_dataset)))

Let's make sure we have roughly the same number of examples for each of our three labels:


In [ ]:
title_dataset.source.value_counts()

Finally we will save our data, which is currently in-memory, to disk.

We will create a csv file containing the full dataset and another containing only 1000 articles for development.

Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool.


In [ ]:
DATADIR = './data/'

if not os.path.exists(DATADIR):
    os.makedirs(DATADIR)

In [ ]:
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)

# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))

title_dataset.to_csv(
    FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')

Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see the AutoML documentation for further details on how to prepare data for AutoML).


In [ ]:
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()

Let's write the sample dataset to disk.


In [ ]:
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)

sample_title_dataset.to_csv(
    SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')

In [ ]:
import datetime
import os
import shutil

import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical


print(tf.__version__)


2.0.1

In [ ]:
%matplotlib inline

Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:


In [ ]:
MODEL_DIR = "./text_models"
DATA_DIR = "./data"

Loading the dataset

As in the previous labs, our dataset consists of titles of articles along with a label indicating the source these articles were taken from (GitHub, TechCrunch, or The New York Times):


In [ ]:
ls ./data/

In [ ]:
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']

titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()

Let's look again at the number of examples per label to make sure we have a well-balanced dataset:


In [ ]:
titles_df.source.value_counts()

Preparing the labels

In this lab, we will use pre-trained TF-Hub embedding modules for English as the first layer of our models. One immediate advantage of doing so is that the TF-Hub embedding module will take care of processing the raw text for us. This also means that our model will be able to consume text directly instead of sequences of integers representing the words.

However, as before, we still need to preprocess the labels into one-hot-encoded vectors:


In [ ]:
CLASSES = {
    'github': 0,
    'nytimes': 1,
    'techcrunch': 2
}
N_CLASSES = len(CLASSES)

In [ ]:
def encode_labels(sources):
    classes = [CLASSES[source] for source in sources]
    one_hots = to_categorical(classes, num_classes=N_CLASSES)
    return one_hots

In [ ]:
encode_labels(titles_df.source[:4])

Preparing the train/test splits

Let's split our data into train and test splits:


In [ ]:
N_TRAIN = int(len(titles_df) * 0.95)

titles_train, sources_train = (
    titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])

titles_valid, sources_valid = (
    titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])

To be on the safe side, we verify that the train and test splits have roughly the same number of examples per class.

Since that is the case, accuracy will be a good metric to use to measure the performance of our models.


In [ ]:
sources_train.value_counts()

In [ ]:
sources_valid.value_counts()

Now let's create the features and labels we will feed our models with:


In [ ]:
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)

In [ ]:
X_train[:3]

In [ ]:
Y_train[:3]

NNLM Model

We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub hosts a 50-dimensional one, nnlm-en-dim50 (a variant, nnlm-en-dim50-with-normalization, additionally normalizes the vectors it produces).

Lab Task 1a: Import NNLM TF Hub module into KerasLayer

Once loaded from its URL, the TF-Hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding:


In [ ]:
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"

nnlm_module = KerasLayer(# TODO)
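
For reference, a possible completion is sketched below (the exact keyword arguments are an assumption; the important part is trainable=True, as discussed above):

nnlm_module = KerasLayer(
    NNLM,
    input_shape=[],   # each example is a single raw string
    dtype=tf.string,  # the layer consumes raw text directly
    trainable=True    # fine-tune the pre-trained embedding weights
)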

Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence:

Lab Task 1b: Use module to encode a sentence string


In [ ]:
nnlm_module(tf.constant([# TODO]))
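
A possible way to fill in the TODO, using an arbitrary example sentence, is shown below; the module returns a tensor of shape (1, 50), i.e. one 50-dimensional vector for the single sentence passed in:

nnlm_module(tf.constant(["The cat sat quietly on the mat"]))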

Swivel Model

Next, we will try a word embedding obtained using Swivel, an algorithm that essentially factorizes word co-occurrence matrices to create the word embeddings. TF-Hub hosts the pre-trained 20-dimensional gnews-swivel-20dim-with-oov Swivel module.

Lab Task 1c: Import Swivel TF Hub module into KerasLayer


In [ ]:
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"

swivel_module = KerasLayer(# TODO)
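
As with NNLM, a possible completion (again, the keyword arguments are an assumption) loads the module so that it accepts raw strings and fine-tunes its weights; calling it on a sentence then returns a single 20-dimensional vector:

swivel_module = KerasLayer(
    SWIVEL,
    input_shape=[],   # raw string input, one sentence per example
    dtype=tf.string,
    trainable=True    # fine-tune the pre-trained embedding weights
)
swivel_module(tf.constant(["The cat sat quietly on the mat"]))  # -> shape (1, 20)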

Like the previous pre-trained embedding, it outputs a single vector when passed a sentence:

Lab Task 1d: Use module to encode a sentence string


In [ ]:
swivel_module(tf.constant([# TODO]))

Building the models

Let's write a function that

  • takes as input an instance of a KerasLayer (i.e., the swivel_module or the nnlm_module we constructed above) as well as the name of the model (say swivel or nnlm)
  • returns a compiled Keras sequential model starting with this pre-trained TF-Hub layer, adding one or more dense ReLU layers to it, and ending with a softmax layer giving the probability of each of the classes:

Lab Task 2: Incorporate a pre-trained TF Hub module as first layer of Keras Sequential Model


In [ ]:
def build_model(hub_module, name):
    model = Sequential([
        # TODO
        Dense(16, activation='relu'),
        Dense(N_CLASSES, activation='softmax')
    ], name=name)

    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model
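
For reference, one possible way to complete the function is to use the hub module itself as the first layer of the Sequential model, e.g.:

def build_model(hub_module, name):
    # The pre-trained TF-Hub layer consumes raw text and outputs a
    # fixed-size sentence embedding, which the dense layers then classify.
    model = Sequential([
        hub_module,
        Dense(16, activation='relu'),
        Dense(N_CLASSES, activation='softmax')
    ], name=name)

    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model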

Let's also wrap the training code into a train_and_evaluate function that

  • takes as input the training and validation data, as well as the compiled model itself and the batch_size
  • trains the compiled model for at most 100 epochs, stopping early when the validation loss is no longer decreasing
  • returns a history object, which will help us plot the learning curves

In [ ]:
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
    X_train, Y_train = train_data

    tf.random.set_seed(33)

    model_dir = os.path.join(MODEL_DIR, model.name)
    if tf.io.gfile.exists(model_dir):
        tf.io.gfile.rmtree(model_dir)

    history = model.fit(
        X_train, Y_train,
        epochs=100,
        batch_size=batch_size,
        validation_data=val_data,
        callbacks=[EarlyStopping(), TensorBoard(model_dir)],
    )
    return history

Training NNLM


In [ ]:
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)

In [ ]:
nnlm_model = build_model(nnlm_module, 'nnlm')
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)

In [ ]:
history = nnlm_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()

Training Swivel


In [ ]:
swivel_model = build_model(swivel_module, name='swivel')

In [ ]:
swivel_history = train_and_evaluate(data, val_data, swivel_model)

In [ ]:
history = swivel_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()

Swivel trains faster per epoch but requires more epochs to train, and it reaches a lower validation accuracy than NNLM.

Deploying the model

The first step is to serialize one of our trained Keras model as a SavedModel:


In [ ]:
OUTPUT_DIR = "./savedmodels"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)

EXPORT_PATH = os.path.join(OUTPUT_DIR, 'swivel')
os.environ['EXPORT_PATH'] = EXPORT_PATH

shutil.rmtree(EXPORT_PATH, ignore_errors=True)

tf.saved_model.save(swivel_model, EXPORT_PATH)

Then we can deploy the model using the gcloud CLI as before:

Lab Task 3a: Complete the following script to deploy the swivel model


In [ ]:
%%bash

# TODO 5

PROJECT=# TODO: Change this to your PROJECT
BUCKET=${PROJECT}
REGION=us-east1
MODEL_NAME=title_model
VERSION_NAME=swivel
EXPORT_PATH=$EXPORT_PATH

if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
    echo "$MODEL_NAME already exists"
else
    echo "Creating $MODEL_NAME"
    gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi

if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
    echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
    echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
    echo "Please run this cell again if you don't see a Creating message ... "
    sleep 2
fi

echo "Creating $MODEL_NAME:$VERSION_NAME"

gcloud beta ai-platform versions create $VERSION_NAME\
  --model=$MODEL_NAME  \
  --framework=# TODO \
  --python-version=# TODO \
  --runtime-version=1.15 \
  --origin=# TODO \
  --staging-bucket=# TODO\
  --machine-type n1-standard-4
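
For reference, one possible set of values for the TODO flags is sketched below (a sketch, assuming the SavedModel was exported to the local EXPORT_PATH above and that gs://${BUCKET} exists; when --origin is a local directory, gcloud uploads it to the --staging-bucket):

gcloud beta ai-platform versions create $VERSION_NAME \
  --model=$MODEL_NAME \
  --framework=tensorflow \
  --python-version=3.7 \
  --runtime-version=1.15 \
  --origin=$EXPORT_PATH \
  --staging-bucket=gs://${BUCKET} \
  --machine-type n1-standard-4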

Before we try our deployed model, let's inspect its signature to know what to send to the deployed API:


In [ ]:
!saved_model_cli show \
 --tag_set serve \
 --signature_def serving_default \
 --dir {EXPORT_PATH}
!find {EXPORT_PATH}

Let's go ahead and hit our model:

Lab Task 3b: Create the JSON object to send a title to the API you just deployed

(Hint: Look at the 'saved_model_cli show' command output above.)


In [ ]:
%%writefile input.json
{# TODO}
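
For reference, a possible input.json is sketched below, using an arbitrary example title. The key must match the input tensor name reported by saved_model_cli above; for a Sequential model whose first layer is a hub KerasLayer it is often something like keras_layer_input, but this is an assumption — use whatever name the signature actually shows:

{"keras_layer_input": "youtube introduces a new tool for creators"}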

In [ ]:
!gcloud ai-platform predict \
  --model title_model \
  --json-instances input.json \
  --version swivel

Bonus

Try to beat the best model by modifying the model architecture, changing the TF-Hub embedding, and tweaking the training parameters.

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License