Using pre-trained embeddings with TensorFlow Hub

Learning Objectives

  1. How to instantiate a TensorFlow Hub module
  2. How to find pre-trained TensorFlow Hub modules for a variety of purposes
  3. How to use a pre-trained TF-Hub text module to generate sentence vectors
  4. How to incorporate a pre-trained TF-Hub module into a Keras model

Introduction

In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles in our title dataset.

First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models whose first layer is a pre-trained TF-Hub module. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models: the pre-trained layer takes care of that for us and consumes raw text directly. However, we will still have to one-hot encode each of the 3 classes into a 3-dimensional basis vector (e.g., 'nytimes' becomes [0., 1., 0.]).

Then we will build, train, and compare simple models starting with different pre-trained TF-Hub layers.


In [ ]:
# Ensure the right version of TensorFlow is installed.
!pip freeze | grep tensorflow==2.1
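
If the command above prints nothing, TensorFlow 2.1 is not installed; it can be installed with pip (a suggested command; restart the kernel afterwards):


In [ ]:
!pip install --user tensorflow==2.1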

In [ ]:
import os

from google.cloud import bigquery
import pandas as pd

In [ ]:
%load_ext google.cloud.bigquery

Replace the variable values in the cell below:


In [ ]:
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
BUCKET = PROJECT  # defaults to PROJECT
REGION = "us-central1"  # Replace with your REGION
SEED = 0

Create a Dataset from BigQuery

Hacker News headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.

Here is a sample of the dataset:


In [ ]:
%%bigquery --project $PROJECT

SELECT
    url, title, score
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    LENGTH(title) > 10
    AND score > 10
    AND LENGTH(url) > 0
LIMIT 10

Let's do some regular-expression parsing in BigQuery to extract the source of the article from the URL.


In [ ]:
%%bigquery --project $PROJECT

SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    COUNT(title) AS num_articles
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    AND LENGTH(title) > 10
GROUP BY
    source
ORDER BY num_articles DESC
LIMIT 100
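
To see what the SQL above is doing, here is a hypothetical pure-Python equivalent of the extraction for a single URL (illustration only; the URL is made up):


In [ ]:
import re

url = 'http://techcrunch.com/2010/01/01/some-article/'
domain = re.search(r'.*://(.[^/]+)/', url).group(1)  # 'techcrunch.com'
parts = list(reversed(domain.split('.')))            # ['com', 'techcrunch']
print(parts[1])  # 'techcrunch': the second-level domain, even with subdomains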

Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.


In [ ]:
regex = '.*://(.[^/]+)/'


sub_query = """
SELECT
    title,
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
    
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
    AND LENGTH(title) > 10
""".format(regex)


query = """
SELECT 
    LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
    source
FROM
  ({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)

print(query)

For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML, however, figures out on its own how to create these splits, so we won't need to do that here.


In [ ]:
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()

AutoML for text classification requires that

  • the dataset be in CSV form,
  • the first column contain the texts to classify or a GCS path to the text, and
  • the last column contain the text labels.

The dataset we pulled from BigQuery satisfies these requirements.
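
For example, rows of the CSV we will produce look like this, with the title first and the label last (taken from the dataset sample shown later in this notebook):

    code reviews  good idea  bad idea ,github
    brainfuck on rails,github
    abundance without attachment,nytimes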


In [ ]:
print("The full dataset contains {n} titles".format(n=len(title_dataset)))

Let's make sure we have roughly the same number of examples for each of our three labels:


In [ ]:
title_dataset.source.value_counts()

Finally we will save our data, which is currently in-memory, to disk.

We will create a csv file containing the full dataset and another containing only 1000 articles for development.

Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset while learning the tool.


In [ ]:
DATADIR = './data/'

if not os.path.exists(DATADIR):
    os.makedirs(DATADIR)

In [ ]:
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)

# Let's shuffle the data before writing it to disk (seeded for reproducibility).
title_dataset = title_dataset.sample(n=len(title_dataset), random_state=SEED)

title_dataset.to_csv(
    FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')

Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see the AutoML documentation for further details on how to prepare data for AutoML).


In [ ]:
sample_title_dataset = title_dataset.sample(n=1000, random_state=SEED)
sample_title_dataset.source.value_counts()
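
If the random sample turned out to be badly skewed, a stratified sample could be drawn instead. Here is a sketch (not used below) that draws the same number of titles per source so every label is equally represented:


In [ ]:
stratified_sample = (
    title_dataset
    .groupby('source', group_keys=False)
    .apply(lambda df: df.sample(n=333, random_state=SEED))
)
stratified_sample.source.value_counts()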

Let's write the sample dataset to disk.


In [ ]:
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)

sample_title_dataset.to_csv(
    SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')

In [ ]:
sample_title_dataset.head()

In [1]:
import datetime
import os
import shutil

import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical


print(tf.__version__)


2.1.0

In [2]:
%matplotlib inline

Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:


In [3]:
MODEL_DIR = "./text_models"
DATA_DIR = "./data"

Loading the dataset

As in the previous labs, our dataset consists of article titles along with a label indicating the source each article was taken from (GitHub, TechCrunch, or The New York Times):


In [4]:
ls ./data/


titles_full.csv    titles_sample.csv

In [5]:
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']

titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()


Out[5]:
title source
0 code reviews good idea bad idea github
1 brainfuck on rails github
2 abundance without attachment nytimes
3 finally a startup visa that works techcrunch
4 unraveling the key to a cold virus s effective... nytimes

Let's look again at the number of examples per label to make sure we have a well-balanced dataset:


In [6]:
titles_df.source.value_counts()


Out[6]:
github        36525
techcrunch    30891
nytimes       28787
Name: source, dtype: int64

Preparing the labels

In this lab, we will use pre-trained TF-Hub embedding modules for English as the first layer of our models. One immediate advantage of doing so is that the TF-Hub embedding module will take care of processing the raw text for us. This also means that our model will be able to consume text directly, instead of sequences of integers representing the words.

However, as before, we still need to preprocess the labels into one-hot-encoded vectors:


In [7]:
CLASSES = {
    'github': 0,
    'nytimes': 1,
    'techcrunch': 2
}
N_CLASSES = len(CLASSES)

In [8]:
def encode_labels(sources):
    classes = [CLASSES[source] for source in sources]
    one_hots = to_categorical(classes, num_classes=N_CLASSES)
    return one_hots

In [9]:
encode_labels(titles_df.source[:4])


Out[9]:
array([[1., 0., 0.],
       [1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]], dtype=float32)

Preparing the train/test splits

Let's split our data into train and test splits:


In [10]:
N_TRAIN = int(len(titles_df) * 0.95)

titles_train, sources_train = (
    titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])

titles_valid, sources_valid = (
    titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
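
As an aside, an equivalent split can be obtained with scikit-learn's train_test_split, which can additionally stratify by label. A sketch, assuming scikit-learn is installed (the rest of this notebook uses the slicing above):


In [ ]:
from sklearn.model_selection import train_test_split

# Same 95/5 proportions as above, but stratified so each class
# keeps its share in both splits.
titles_train, titles_valid, sources_train, sources_valid = train_test_split(
    titles_df.title, titles_df.source,
    test_size=0.05,
    stratify=titles_df.source,
    random_state=0,
)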

To be on the safe side, let's verify that the train and test splits have roughly the same number of examples per class.

Since that is the case, accuracy will be a good metric to measure the performance of our models.


In [11]:
sources_train.value_counts()


Out[11]:
github        34710
techcrunch    29284
nytimes       27398
Name: source, dtype: int64

In [12]:
sources_valid.value_counts()


Out[12]:
github        1815
techcrunch    1607
nytimes       1389
Name: source, dtype: int64

Now let's create the features and labels we will feed our models with:


In [13]:
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)

In [14]:
X_train[:3]


Out[14]:
array(['code reviews  good idea  bad idea ', 'brainfuck on rails',
       'abundance without attachment'], dtype=object)

In [15]:
Y_train[:3]


Out[15]:
array([[1., 0., 0.],
       [1., 0., 0.],
       [0., 1., 0.]], dtype=float32)

NNLM Model

We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub has a 50-dimensional one called nnlm-en-dim50, which we load below; a sibling module, nnlm-en-dim50-with-normalization, additionally normalizes the vectors it produces.

Once loaded from its URL, the TF-Hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding:


In [16]:
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"

nnlm_module = KerasLayer(
    NNLM, output_shape=[50], input_shape=[], dtype=tf.string, trainable=True)

Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence:


In [17]:
nnlm_module(tf.constant(["The dog is happy to see people in the street."]))


Out[17]:
<tf.Tensor: shape=(1, 50), dtype=float32, numpy=
array([[ 0.19331802,  0.05893906,  0.15330684,  0.2505918 ,  0.19369544,
         0.03578748,  0.07387847, -0.10962156, -0.11377034,  0.07172022,
         0.12458669, -0.02289705, -0.18177685, -0.07084437, -0.00225849,
        -0.36875236,  0.05772953, -0.14222091,  0.08765972, -0.14068899,
        -0.07005888, -0.20634466,  0.07220475,  0.04258814,  0.0955702 ,
         0.19424029, -0.42492998, -0.00706906, -0.02095   , -0.05055764,
        -0.18988201, -0.02841404,  0.13222624, -0.01459922, -0.31255388,
        -0.09577855,  0.05469003, -0.13858607,  0.01141668, -0.12352604,
        -0.07250367, -0.11605677, -0.06976165,  0.14313601, -0.15183711,
        -0.06836402,  0.03054246, -0.13259597, -0.14599673,  0.05094011]],
      dtype=float32)>
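
To build some intuition that these vectors capture meaning, we can compare sentence embeddings with cosine similarity. A quick sanity-check sketch (the example sentences are made up):


In [ ]:
import numpy as np

vecs = nnlm_module(tf.constant([
    "The dog is happy to see people in the street.",
    "A cheerful puppy greets pedestrians.",
    "Quarterly revenue exceeded analyst expectations.",
])).numpy()

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# We would expect the two dog-related sentences to be closer
# to each other than to the finance headline.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))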

Building the models

Let's write a function that

  • takes as input an instance of a KerasLayer (i.e., the nnlm_module we constructed above) as well as the name of the model (say, nnlm)
  • returns a compiled Keras sequential model starting with this pre-trained TF-Hub layer, adding one or more dense ReLU layers to it, and ending with a softmax layer giving the probability of each of the classes:

In [20]:
def build_model(hub_module, name):
    model = Sequential([
        hub_module,  # pre-trained TF-Hub embedding layer, consumes raw text
        Dense(16, activation='relu'),
        Dense(N_CLASSES, activation='softmax')
    ], name=name)

    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model

Let's also wrap the training code into a train_and_evaluate function that

  • takes as input the training and validation data, as well as the compiled model itself and the batch_size
  • trains the compiled model for at most 100 epochs, stopping early when the validation loss is no longer decreasing
  • returns a history object, which will help us plot the learning curves

In [21]:
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
    X_train, Y_train = train_data

    tf.random.set_seed(33)

    # Remove stale logs from a previous run so TensorBoard starts fresh.
    model_dir = os.path.join(MODEL_DIR, model.name)
    if tf.io.gfile.exists(model_dir):
        tf.io.gfile.rmtree(model_dir)

    history = model.fit(
        X_train, Y_train,
        epochs=100,
        batch_size=batch_size,
        validation_data=val_data,
        # EarlyStopping() monitors val_loss with patience=0 by default;
        # TensorBoard(model_dir) writes the training logs to disk.
        callbacks=[EarlyStopping(), TensorBoard(model_dir)],
    )
    return history
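
Since the TensorBoard callback above writes its logs under MODEL_DIR, the learning curves can also be monitored interactively from the notebook (a sketch, assuming the TensorBoard notebook extension is available):


In [ ]:
%load_ext tensorboard
%tensorboard --logdir ./text_models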

Training NNLM


In [22]:
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)

In [23]:
nnlm_model = build_model(nnlm_module, 'nnlm')
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)


Train on 91392 samples, validate on 4811 samples
Epoch 1/100
91392/91392 [==============================] - 13s 138us/sample - loss: 1.0636 - accuracy: 0.4781 - val_loss: 1.0152 - val_accuracy: 0.6096
Epoch 2/100
91392/91392 [==============================] - 12s 129us/sample - loss: 0.9620 - accuracy: 0.6753 - val_loss: 0.8947 - val_accuracy: 0.7246
Epoch 3/100
91392/91392 [==============================] - 12s 132us/sample - loss: 0.8159 - accuracy: 0.7539 - val_loss: 0.7376 - val_accuracy: 0.7703
Epoch 4/100
91392/91392 [==============================] - 12s 134us/sample - loss: 0.6517 - accuracy: 0.7981 - val_loss: 0.5933 - val_accuracy: 0.7996
Epoch 5/100
91392/91392 [==============================] - 12s 134us/sample - loss: 0.5175 - accuracy: 0.8258 - val_loss: 0.4967 - val_accuracy: 0.8158
Epoch 6/100
91392/91392 [==============================] - 12s 127us/sample - loss: 0.4290 - accuracy: 0.8476 - val_loss: 0.4432 - val_accuracy: 0.8271
Epoch 7/100
91392/91392 [==============================] - 12s 131us/sample - loss: 0.3724 - accuracy: 0.8649 - val_loss: 0.4134 - val_accuracy: 0.8354
Epoch 8/100
91392/91392 [==============================] - 12s 133us/sample - loss: 0.3334 - accuracy: 0.8783 - val_loss: 0.3975 - val_accuracy: 0.8397
Epoch 9/100
91392/91392 [==============================] - 12s 126us/sample - loss: 0.3044 - accuracy: 0.8894 - val_loss: 0.3887 - val_accuracy: 0.8427
Epoch 10/100
91392/91392 [==============================] - 12s 128us/sample - loss: 0.2815 - accuracy: 0.8977 - val_loss: 0.3857 - val_accuracy: 0.8406
Epoch 11/100
91392/91392 [==============================] - 12s 134us/sample - loss: 0.2627 - accuracy: 0.9052 - val_loss: 0.3853 - val_accuracy: 0.8439
Epoch 12/100
91392/91392 [==============================] - 12s 134us/sample - loss: 0.2469 - accuracy: 0.9119 - val_loss: 0.3877 - val_accuracy: 0.8447

In [24]:
history = nnlm_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()


Out[24]:
<matplotlib.axes._subplots.AxesSubplot at 0x14a524828>

Bonus

Try to beat the best model by modifying the model architecture, changing the TF-Hub embedding, and tweaking the training parameters.
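
For instance, a larger pre-trained embedding can be swapped in with no other code changes. Here is a sketch using the 128-dimensional NNLM module (treat the exact URL and version as an assumption to verify on tfhub.dev):


In [ ]:
# Hypothetical variant: same architecture, larger pre-trained embedding.
NNLM_128 = "https://tfhub.dev/google/nnlm-en-dim128/2"

nnlm128_module = KerasLayer(
    NNLM_128, output_shape=[128], input_shape=[], dtype=tf.string,
    trainable=True)

nnlm128_model = build_model(nnlm128_module, 'nnlm128')
nnlm128_history = train_and_evaluate(data, val_data, nnlm128_model)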

Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License