Scalable GP Regression (CUDA) with Fast Predictive Distributions (KISS-GP/LOVE)

Introduction

In this notebook, we give a brief tutorial on how to use Lanczos Variance Estimates (LOVE) to achieve fast predictive distributions, as described in the paper https://arxiv.org/abs/1803.06058. To see LOVE in use with exact GPs, see fast_variances_exact_(LOVE).ipynb. For a tutorial on the fast sampling mechanism described in the paper, see fast_sampling_ski_(LOVE).ipynb.

LOVE is an algorithm for approximating the predictive covariances of a Gaussian process in constant time, after a linear-time precomputation. In this notebook, we will train a deep kernel learning (DKL) model with SKI on one of the UCI datasets used in the paper, and then compare the time required to make predictions with and without LOVE.
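As a quick preview of the API used in full later in this notebook, LOVE is switched on simply by wrapping prediction calls in GPyTorch's fast_pred_var setting. A minimal sketch, assuming a trained model, likelihood, and test inputs test_x:

# Minimal sketch: enable LOVE's fast predictive (co)variances at test time.
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    observed_pred = likelihood(model(test_x))
    variances = observed_pred.variance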

NOTE: The timing results reported in the paper compare the time required to compute (co)variances only. Because excluding the mean computations from the timing results requires hacking the internals of GPyTorch, the timing results presented in this notebook include the time required to compute predictive means, which are not accelerated by LOVE. Nevertheless, as we will see, LOVE achieves impressive speed-ups.


In [1]:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt

# Make plots inline
%matplotlib inline

Loading Data

For this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. We then simply split the data, using the first 80% for training and the last 20% for testing.

Note: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.


In [2]:
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor

if not os.path.isfile('elevators.mat'):
    print('Downloading \'elevators\' UCI dataset...')
    urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', 'elevators.mat')
    
data = torch.Tensor(loadmat('elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]

# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))

train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()

test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()

Defining the DKL Feature Extractor

Next, we define the deep feature extractor we'll be using for DKL. In this case, we use a fully connected network with the architecture d -> 1000 -> 500 -> 50 -> 2, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.


In [3]:
data_dim = train_x.size(-1)

class LargeFeatureExtractor(torch.nn.Sequential):
    def __init__(self):
        super(LargeFeatureExtractor, self).__init__()
        self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
        self.add_module('relu1', torch.nn.ReLU())
        self.add_module('linear2', torch.nn.Linear(1000, 500))
        self.add_module('relu2', torch.nn.ReLU())
        self.add_module('linear3', torch.nn.Linear(500, 50))
        self.add_module('relu3', torch.nn.ReLU())
        self.add_module('linear4', torch.nn.Linear(50, 2))

feature_extractor = LargeFeatureExtractor().cuda()

Defining the GP Model

We now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with an RBF base kernel. The forward method passes the input data x through the neural network feature extractor defined above, scales the resulting features to lie between -1 and 1, and then calls the kernel.


In [4]:
class GPRegressionModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
        
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.GridInterpolationKernel(
            gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()),
            grid_size=100, num_dims=2,
        )
        
        # Also add the deep net
        self.feature_extractor = feature_extractor

    def forward(self, x):
        # We're first putting our data through a deep net (feature extractor)
        # We're also scaling the features so that they're nice values
        projected_x = self.feature_extractor(x)
        projected_x = projected_x - projected_x.min(0)[0]
        projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1
        
        # The rest of this looks like what we've seen
        mean_x = self.mean_module(projected_x)
        covar_x = self.covar_module(projected_x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

    
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPRegressionModel(train_x, train_y, likelihood).cuda()

Training the model

The cell below trains the DKL model above, finding optimal hyperparameters using Type-II MLE. We run 40 iterations of training using the Adam optimizer built into PyTorch. With a decent GPU, this should only take a few seconds.

It's good to add some L2 regularization (weight decay) to the feature extractor part of the model, but NOT to any other part of the model.


In [5]:
# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use the adam optimizer
# Add weight decay to the feature extractor ONLY
optimizer = torch.optim.Adam([
    {'params': model.mean_module.parameters()},
    {'params': model.covar_module.parameters()},
    {'params': model.likelihood.parameters()},
    {'params': model.feature_extractor.parameters(), 'weight_decay': 1e-3}
], lr=0.01)

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
 
def train(training_iterations=40):
    for i in range(training_iterations):
        optimizer.zero_grad()
        output = model(train_x)
        loss = -mll(output, train_y)
        loss.backward()
        print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
        optimizer.step()
        
# Sometimes we get better performance on the GPU when we don't use Toeplitz math
# for SKI. This flag controls that
with gpytorch.settings.use_toeplitz(False):
    %time train()


Iter 1/40 - Loss: 0.950
Iter 2/40 - Loss: 0.936
Iter 3/40 - Loss: 0.928
Iter 4/40 - Loss: 0.921
Iter 5/40 - Loss: 0.915
Iter 6/40 - Loss: 0.910
Iter 7/40 - Loss: 0.905
Iter 8/40 - Loss: 0.899
Iter 9/40 - Loss: 0.894
Iter 10/40 - Loss: 0.889
Iter 11/40 - Loss: 0.884
Iter 12/40 - Loss: 0.879
Iter 13/40 - Loss: 0.874
Iter 14/40 - Loss: 0.869
Iter 15/40 - Loss: 0.864
Iter 16/40 - Loss: 0.859
Iter 17/40 - Loss: 0.853
Iter 18/40 - Loss: 0.848
Iter 19/40 - Loss: 0.843
Iter 20/40 - Loss: 0.840
Iter 21/40 - Loss: 0.834
Iter 22/40 - Loss: 0.829
Iter 23/40 - Loss: 0.820
Iter 24/40 - Loss: 0.815
Iter 25/40 - Loss: 0.810
Iter 26/40 - Loss: 0.806
Iter 27/40 - Loss: 0.801
Iter 28/40 - Loss: 0.796
Iter 29/40 - Loss: 0.790
Iter 30/40 - Loss: 0.785
Iter 31/40 - Loss: 0.780
Iter 32/40 - Loss: 0.775
Iter 33/40 - Loss: 0.769
Iter 34/40 - Loss: 0.764
Iter 35/40 - Loss: 0.759
Iter 36/40 - Loss: 0.754
Iter 37/40 - Loss: 0.754
Iter 38/40 - Loss: 0.748
Iter 39/40 - Loss: 0.739
Iter 40/40 - Loss: 0.735
CPU times: user 6.73 s, sys: 2.3 s, total: 9.04 s
Wall time: 9.02 s

Make Predictions using Standard SKI Code

The next cell gets the predictive covariance matrix for the first 1000 test points (and also, technically, the predictive mean, available as preds.mean) using the standard SKI prediction code, with no acceleration or precomputation.

Note: Full predictive covariance matrices (and the computations needed to get them) can be quite memory intensive. Depending on the memory available on your GPU, you may need to reduce the size of the test set for the code below to run. The cells below already restrict prediction to the first 1000 test points (test_x[:1000]); if you still run out of memory, reduce this number further and restart the notebook.
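If even a reduced test set does not fit in memory, one option (not used in this notebook) is to compute only the diagonal predictive variances in chunks rather than the full covariance matrix. A rough sketch, where predict_variances_in_chunks is a hypothetical helper:

# Hypothetical helper (not part of this notebook): compute predictive
# variances in smaller chunks to keep peak GPU memory down.
def predict_variances_in_chunks(model, x, chunk_size=500):
    variances = []
    with torch.no_grad(), gpytorch.settings.use_toeplitz(False):
        for i in range(0, x.size(0), chunk_size):
            variances.append(model(x[i:i + chunk_size]).variance.cpu())
    return torch.cat(variances)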


In [6]:
import time

# Set into eval mode
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.use_toeplitz(False):
    start_time = time.time()
    preds = model(test_x[:1000])
    exact_covar = preds.covariance_matrix
    exact_covar_time = time.time() - start_time



In [7]:
print('Time to compute exact mean + covariances: {:.2f}s'.format(exact_covar_time))


Time to compute exact mean + covariances: 0.60s

Clear Memory and any Precomputed Values

The next cell clears as much as possible to avoid influencing the timing results of the fast predictive variances code. Strictly speaking, the timing results above and the timing results to follow should be run in entirely separate notebooks. However, this will suffice for this simple example.


In [8]:
# Clear as much 'stuff' as possible
import gc
gc.collect()
torch.cuda.empty_cache()
model.train()
likelihood.train()


Out[8]:
GaussianLikelihood()

Compute Predictions with LOVE, but Before Precomputation

Next we compute predictive covariances (and the predictive means) for LOVE, but starting from scratch. That is, we don't yet have access to the precomputed cache discussed in the paper. This should still be faster than the full covariance computation code above.

In this simple example, we allow a rank 10 root decomposition, although increasing this to rank 20-40 should not affect the timing results substantially.


In [9]:
# Set into eval mode
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(10):
    start_time = time.time()
    preds = model(test_x[:1000])
    fast_time_no_cache = time.time() - start_time



Compute Predictions with LOVE After Precomputation

The above cell additionally computed the caches required to get fast predictions. From this point onwards, unless we put the model back in training mode, predictions should be extremely fast. The cell below re-runs the above code, but takes full advantage of both the mean cache and the LOVE cache for variances.


In [10]:
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(10):
    start_time = time.time()
    preds = model(test_x[:1000])
    fast_covar = preds.covariance_matrix
    fast_time_with_cache = time.time() - start_time

In [11]:
print('Time to compute mean + covariances (no cache): {:.2f}s'.format(fast_time_no_cache))
print('Time to compute mean + covariances (cache): {:.2f}s'.format(fast_time_with_cache))


Time to compute mean + covariances (no cache): 0.37s
Time to compute mean + covariances (cache): 0.04s

Compute Error between Exact and Fast Covariances

Finally, we compute the mean absolute error between the fast covariances computed by LOVE (stored in fast_covar) and the exact covariances computed earlier (stored in exact_covar).

Note that these tests were run with a root decomposition of rank 10, which is about the minimum you would realistically ever run with. Despite this, the fast covariance estimates are quite good. If more accuracy were needed, increasing max_root_decomposition_size to 30 or 40 would provide even better estimates (see the sketch below).


In [12]:
print('MAE between exact covar matrix and fast covar matrix: {}'.format((exact_covar - fast_covar).abs().mean()))


MAE between exact covar matrix and fast covar matrix: 4.553118742478546e-06
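
As noted above, more accurate estimates could be obtained by rebuilding the LOVE cache with a larger root decomposition. The sketch below is not executed in this notebook; switching the model back to train mode and then to eval mode again is one way to make sure the old cache is discarded:

# Sketch: rebuild the LOVE cache with a larger root decomposition
# for more accurate variance estimates.
model.train()  # dropping back to train mode discards the old prediction cache
model.eval()
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), \
        gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(30):
    higher_rank_covar = model(test_x[:1000]).covariance_matrix
print('MAE with rank-30 LOVE: {}'.format((exact_covar - higher_rank_covar).abs().mean()))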
