Collaborative filtering on the MovieLens dataset

Learning Objectives

  1. Know how to build a BigQuery ML Matrix Factorization Model
  2. Know how to use the model to make recommendations for a user
  3. Know how to use the model to recommend an item to a group of users

This notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani.

MovieLens dataset

To illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation.

Download the data and load it as a BigQuery table using:


In [ ]:
import os
PROJECT = "your-project-here" # REPLACE WITH YOUR PROJECT ID

# Do not change these
os.environ["PROJECT"] = PROJECT

In [ ]:
%%bash
rm -rf bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
    --description 'Movie Recommendations' \
    $PROJECT:movielens
bq --location=US load --source_format=CSV \
    --autodetect movielens.ratings ml-20m/ratings.csv
bq --location=US load --source_format=CSV \
    --autodetect movielens.movies_raw ml-20m/movies.csv

Exploring the data

Two tables, movielens.ratings and movielens.movies_raw, should now be available in BigQuery.

Collaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, movielens.ratings, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don't have to rate all products. Based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. We can then recommend the products with the highest predicted ratings to a user, or target a product at the users with the highest predicted ratings.


In [ ]:
%%bigquery --project $PROJECT
SELECT *
FROM movielens.ratings
LIMIT 10

A quick exploratory query shows that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.


In [ ]:
%%bigquery --project $PROJECT
SELECT 
    COUNT(DISTINCT userId) numUsers,
    COUNT(DISTINCT movieId) numMovies,
    COUNT(*) totalRatings
FROM movielens.ratings

On examining the first few movies using the following query, we can see that the genres column is a pipe-delimited string:


In [ ]:
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies_raw
WHERE movieId < 5

We can parse the genres into an array and rewrite the table as follows:


In [ ]:
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
    SELECT * REPLACE(SPLIT(genres, "|") AS genres)
    FROM movielens.movies_raw

In [ ]:
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies
WHERE movieId < 5

Matrix factorization

Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two lower-rank matrices: the user factors and the item factors. Each row of the user factors is a low-dimensional representation of a userId, and each row of the item factors similarly represents a movieId.
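
Conceptually (a sketch that omits the intercept and bias terms the implementation may also learn), the factorization approximates the sparse ratings matrix as a product of two low-rank matrices, and each predicted rating is the dot product of a user factor with an item factor:

$$R \approx U V^\top, \qquad \hat{r}_{u,i} = \mathbf{u}_u \cdot \mathbf{v}_i$$

where row $\mathbf{u}_u$ of $U$ is the factor vector for user $u$ (a userId) and row $\mathbf{v}_i$ of $V$ is the factor vector for item $i$ (a movieId).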

We can create the recommender model as follows (Optional, takes 30 minutes. Note: we have a model we already trained if you want to skip this step):


In [ ]:
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender
options(model_type='matrix_factorization',
        user_col='userId', item_col='movieId', rating_col='rating')
AS

SELECT 
userId, movieId, rating
FROM movielens.ratings

In [ ]:
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model: 
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`)

Note that we create a model as usual, except that the model_type is matrix_factorization and that we have to identify which columns play what roles in the collaborative filtering setup.

What did you get? Our model took an hour to train, and the training loss starts out extremely high and gets driven down to near-zero over the next four iterations:

Iteration  Training Data Loss    Evaluation Data Loss  Duration (seconds)
4          0.5734                172.4057              180.99
3          0.5826                187.2103              1,040.06
2          0.6531                4,758.2944            219.46
1          1.9776                6,297.2573            1,093.76
0          63,287,833,220.5795   168,995,333.0464      1,091.21

However, the evaluation data loss is quite high, and much higher than the training data loss. This indicates that overfitting is happening, and so we need to add some regularization. Let’s do that next. Note the added l2_reg=0.2 (Optional, takes 30 minutes):


In [ ]:
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_l2
options(model_type='matrix_factorization',
        user_col='userId', item_col='movieId', 
        rating_col='rating', l2_reg=0.2)
AS

SELECT 
userId, movieId, rating
FROM movielens.ratings

In [ ]:
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model: 
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_l2`)

Now, we get faster convergence (three iterations instead of five), and a lot less overfitting. Here are our results:

Iteration  Training Data Loss    Evaluation Data Loss  Duration (seconds)
2          0.6509                1.4596                198.17
1          1.9829                33,814.3017           1,066.06
0          481,434,346,060.7928  2,156,993,687.7928    1,024.59

By default, BigQuery sets the number of factors to be the log2 of the number of rows. In our case, since we have 20 million rows in the table, the number of factors would have been chosen to be 24. As with the number of clusters in K-Means clustering, this is a reasonable default but it is often worth experimenting with a number about 50% higher (36) and a number that is about a third lower (16):

TODO 1: Create a Matrix Factorization model with 16 factors


In [ ]:
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_16
options( #TODO: Insert parameters to make a 16-factor matrix factorization model
) AS

SELECT 
userId, movieId, rating
FROM movielens.ratings
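
If you get stuck, one possible completion is sketched below for reference (not the only valid answer); it keeps the earlier column roles and l2_reg value and adds num_factors=16:


In [ ]:
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_16
options(model_type='matrix_factorization',
        user_col='userId', item_col='movieId',
        rating_col='rating', l2_reg=0.2, num_factors=16)
AS

SELECT 
userId, movieId, rating
FROM movielens.ratings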

In [ ]:
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_16`)

When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns, so let's pick this as the final matrix factorization model and move on.
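
As a quick sanity check before moving on, you can also query ML.EVALUATE on the chosen model (a sketch; for matrix factorization models this reports regression-style metrics such as mean squared error on the held-out evaluation data):


In [ ]:
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.EVALUATE(MODEL `cloud-training-demos.movielens.recommender_16`)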

Making recommendations

With the trained model, we can now provide recommendations. For example, let's find the best comedy movies to recommend to the user whose userId is 903. In the query below, we call ML.PREDICT, passing in the trained recommendation model along with the set of movieId and userId values to predict on. In this case, it's just one userId (903), paired with every movie whose genre includes Comedy.


In [ ]:
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  SELECT 
    movieId, title, 903 AS userId
  FROM movielens.movies, UNNEST(genres) g
  WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5

Filtering out already rated movies

Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.

TODO 2: Make a prediction for user 903 that does not include already seen movies.


In [ ]:
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  WITH seen AS (
    SELECT ARRAY_AGG(movieId) AS movies 
    FROM movielens.ratings
    WHERE userId = 903
  )
  SELECT 
    movieId, title, 903 AS userId
  FROM movielens.movies, UNNEST(genres) g, seen
  WHERE # TODO: Complete this WHERE to remove seen movies.
))
ORDER BY predicted_rating DESC
LIMIT 5
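
If you get stuck, one way to complete the WHERE clause (a reference sketch) is to keep the Comedy filter and exclude any movieId in the seen array:


In [ ]:
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  WITH seen AS (
    SELECT ARRAY_AGG(movieId) AS movies 
    FROM movielens.ratings
    WHERE userId = 903
  )
  SELECT 
    movieId, title, 903 AS userId
  FROM movielens.movies, UNNEST(genres) g, seen
  WHERE g = 'Comedy' AND movieId NOT IN UNNEST(seen.movies)
))
ORDER BY predicted_rating DESC
LIMIT 5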

For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.

Customer targeting

In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and need to find the customers who are most likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 (American Mullet), which has only one rating, and we wish to send coupons to the 5 users who are predicted to rate it the highest.

TODO 3: Find the top five users who will likely enjoy American Mullet (2001)


In [ ]:
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  SELECT 
    96481 AS movieId, 
    (SELECT title FROM movielens.movies WHERE movieId=96481) title,
    userId
  FROM
    # TODO: Select all users
))
ORDER BY predicted_rating DESC
LIMIT 5
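
If you get stuck, one possible completion (a reference sketch) is to pair the movie with every distinct userId from the ratings table:


In [ ]:
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  SELECT 
    96481 AS movieId, 
    (SELECT title FROM movielens.movies WHERE movieId=96481) title,
    userId
  FROM
    (SELECT DISTINCT userId FROM movielens.ratings)
))
ORDER BY predicted_rating DESC
LIMIT 5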

Batch predictions for all users and movies

What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, BigQuery ML provides a convenience function, ML.RECOMMEND, that carries out batch predictions for every movieId and userId encountered during training. A limit is applied here; otherwise, all user-movie predictions would be returned and could crash the notebook.


In [ ]:
%%bigquery --project $PROJECT
SELECT *
FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`)
LIMIT 10

As shown in a section above, it is possible to filter out movies the user has already seen and rated in the past. The reason already-seen movies aren't filtered out by default is that there are situations (think of restaurant recommendations, for example) where it is perfectly reasonable to recommend restaurants the user has liked in the past.
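
If you did want to exclude already-rated movies from the batch output, one approach (a sketch, assuming ML.RECOMMEND returns the userId, movieId, and predicted_rating columns) is an anti-join against the ratings table:


In [ ]:
%%bigquery --project $PROJECT
SELECT r.*
FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`) r
LEFT JOIN movielens.ratings seen
USING (userId, movieId)
-- Keep only user-movie pairs with no existing rating;
-- the LIMIT keeps the result small, but the join still covers every pair
WHERE seen.rating IS NULL
LIMIT 10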

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.