Heteroscedastic Regression

11th July 2014

by Ricardo Andrade

In this IPython notebook we will look at how to implement GP regression with a different noise term for each observation using GPy.

$\bf N.B.:$ There is currently no implementation to predict the noise for outputs that are not part of the training set.

Usually, a GP regression model assumes that a set of targets $\{ y_1,\ldots,y_n \}$ is related to a set of inputs $\{ {\bf x }_1,\ldots, {\bf x }_n \}$ through the relation: $$ y_i = f({\bf x}_i) + \epsilon_i, $$ where $f \sim \mathcal{GP}$ and $\epsilon_i \sim \mathcal{N}(0,\sigma^2)$ for all $i$. A heteroscedastic model works in the same way, but allows a different variance for the noise term of each observation, i.e., $\epsilon_i \sim \mathcal{N}(0,\sigma_i^2)$. Under this assumption, the model gives a different weight to each observation: it will try to fit the observations with smaller noise more closely, while remaining free to fit the noisier observations less tightly.
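Concretely, for a zero-mean GP with covariance function $k$, marginalizing out $f$ gives $$ {\bf y} \sim \mathcal{N}\left({\bf 0},\; K + \mathrm{diag}(\sigma_1^2,\ldots,\sigma_n^2)\right), $$ where $K_{ij} = k({\bf x}_i,{\bf x}_j)$. The homoscedastic model is the special case $\sigma_1^2 = \cdots = \sigma_n^2 = \sigma^2$, i.e. a noise covariance of $\sigma^2 I$.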

Before using GPy, we need to perform some setup.


In [1]:
import numpy as np
import pylab as pb
import GPy 
%pylab inline


Populating the interactive namespace from numpy and matplotlib

As an example we will use the following function, which has a peak around 0.


In [2]:
def f(X):
    return 10. + .1*X + 2*np.sin(X)/X

fig,ax = pb.subplots()
Xplot = np.linspace(-10,20)
ax.plot(Xplot,f(Xplot),'r-') #Plot f on the same grid it is evaluated on
ax.grid()


We will draw some points $( {\bf x},y)$ from the function above, and add some noise to $y$.


In [3]:
X = np.random.uniform(-10,20, 50)
X = X[~np.logical_and(X>-2,X<3)] #Remove points between -2 and 3 (just for illustration) 
X = np.hstack([np.random.uniform(-1,1,1),X]) #Prepend a point between -1 and 1  (just for illustration)
error = np.random.normal(0,.2,X.size)
Y = f(X) + error
fig,ax = pb.subplots()
ax.plot(Xplot,f(Xplot),'r-')
ax.plot(X,Y,'kx',mew=1.5)
ax.grid()


We will use a combination of MLP and Bias kernels, although other kernels could be used as well.


In [4]:
kern = GPy.kern.MLP(1) + GPy.kern.Bias(1)

For the moment, we will assume that we already know the noise on each observation. To build and fit the model we just need to run the following lines.


In [5]:
m = GPy.models.GPHeteroscedasticRegression(X[:,None],Y[:,None],kern)
m['.*Gaussian_noise'] = abs(error) #Set the noise parameters to the error in Y
m.likelihood['Gaussian_n.*'].unconstrain()
m.likelihood['Gaussian_n.*'].fix() #We can fix the noise term, since we already know it
m.optimize()
fig,ax = pb.subplots(1)
m.plot_f(ax=ax) #Show the predictive values of the GP.
ax.errorbar(X,Y,yerr=np.array(m.likelihood.flattened_parameters).flatten(),fmt=None,ecolor='r',zorder=1)
ax.grid()
ax.plot(X,Y,'kx',mew=1.5)


Out[5]:
[<matplotlib.lines.Line2D at 0x10ebcf3d0>]

In the following example we show how the magnitude of the noise on a specific observation modifies the model fit.


In [6]:
def noise_effect(noise):
    m['.*Gaussian_noise_0'] = noise #Change the noise on the first observation (the point plotted as a circle)
    m.likelihood['Gaussian_n.*'].unconstrain()
    m.likelihood['Gaussian_n.*'].fix()
    m.optimize()
    fig = pb.figure()
    ax = fig.add_subplot(111)
    m.plot_f(ax=ax) 
    ax.errorbar(X.flatten(),Y.flatten(),yerr=np.array(m.likelihood.flattened_parameters).flatten(),fmt=None,ecolor='r',zorder=1)        
    ax.plot(X[1:],Y[1:],'kx',mew=1.5)
    ax.plot(X[:1],Y[:1],'ko',mew=.5)
    ax.grid()

Drag the slider to see how the fitted GP changes.


In [7]:
from IPython.html.widgets import *
interact(noise_effect, noise=(0.1,2.))


Out[7]:
<function __main__.noise_effect>

If we set all the noise terms to be equal, then we have just a homoscedastic GP regression model. The code below shows a comparison between the heteroscedastic and the homoscedastic models when the noise terms are fixed to the same value.


In [8]:
#Heteroscedastic model
m1 = GPy.models.GPHeteroscedasticRegression(X[:,None],Y[:,None],kern)
m1['.*Gaussian_noise'] = .05
m1.likelihood['Gaussian_n.*'].unconstrain()
m1.likelihood['Gaussian_n.*'].fix()
m1.optimize()

# Homoscedastic model
m2 = GPy.models.GPRegression(X[:,None],Y[:,None],kern)
m2['.*Gaussian_noise'] = .05
m2['.*noise'].unconstrain()
m2['.*noise'].fix()
m2.optimize()

fig = pb.figure(figsize=(12,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
m1.plot_f(ax = ax1)
m2.plot_f(ax = ax2)
ax1.set_title('Heteroscedastic model')
ax2.set_title('Homoscedastic model')

print "Kernel parameters (optimized) in the heteroscedastic model"
print m1.kern
print "\nKernel parameters (optimized) in the homoscedastic model"
print m2.kern


Kernel parameters (optimized) in the heteroscedastic model
  add.                 |        Value        |  Constraint  |  Prior  |  Tied to
  mlp.variance         |      47.0854156112  |     +ve      |         |         
  mlp.weight_variance  |      3.14235815572  |     +ve      |         |         
  mlp.bias_variance    |  1.06800469017e-08  |     +ve      |         |         
  bias.variance        |      101.254356297  |     +ve      |         |         

Kernel parameters (optimized) in the homoscedastic model
  add.                 |        Value        |  Constraint  |  Prior  |  Tied to
  mlp.variance         |      47.0854156112  |     +ve      |         |         
  mlp.weight_variance  |      3.14235815572  |     +ve      |         |         
  mlp.bias_variance    |  1.06800469017e-08  |     +ve      |         |         
  bias.variance        |      101.254356297  |     +ve      |         |         
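
As a quick check, when all noise terms are fixed to the same value the two models define the same marginal likelihood, so their log-likelihoods should coincide as well (a sketch; the exact values depend on the random data):

print m1.log_likelihood()
print m2.log_likelihood()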

We can also learn the noise for each observation. In this case it is useful either to set a lower bound of $10^{-6}$ on the noise terms or to add a white noise kernel. Below we use the lower-bound approach; a white-kernel sketch follows the example.


In [9]:
kern = GPy.kern.MLP(1) + GPy.kern.Bias(1)

In [10]:
m = GPy.models.GPHeteroscedasticRegression(X[:,None],Y[:,None],kern)
m['.*Gaussian_noise'].unconstrain()
m['.*Gaussian_noise'].constrain_bounded(1E-6,1000.)
m['.*Gaussian_noise'] = np.random.rand(len(m.likelihood.flattened_parameters)) #Random initialization of the noise terms
m.optimize()
fig,ax = pb.subplots(1)
m.plot_f(ax=ax)
ax.errorbar(X,Y,yerr=np.array(m.likelihood.flattened_parameters).flatten(),fmt=None,ecolor='r',zorder=1)
ax.grid()
ax.plot(X,Y,'kx',mew=1.5)


Out[10]:
[<matplotlib.lines.Line2D at 0x11005c910>]
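
The white-kernel alternative mentioned above would look roughly as follows (a sketch, not run here): GPy.kern.White contributes a shared diagonal term to the covariance, which plays the same regularizing role as the lower bound.

kern_white = GPy.kern.MLP(1) + GPy.kern.Bias(1) + GPy.kern.White(1) #White adds a constant diagonal term
m_white = GPy.models.GPHeteroscedasticRegression(X[:,None],Y[:,None],kern_white)
m_white.optimize()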

Predictions of $y$ at new points $\bf x$ would need an estimate of the noise term; however, we only have those for the training set. At the moment there is no routine to estimate the heteroscedastic noise at new points. Estimates of the latent GP at new points are still available using the following command:


In [11]:
m._raw_predict?
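
For example, a minimal sketch of querying the latent function at new inputs (_raw_predict returns the posterior mean and variance of $f$, without any likelihood noise added):

Xnew = np.linspace(-10,20,100)[:,None]
mu, var = m._raw_predict(Xnew) #Posterior mean and variance of f at Xnew (no noise term)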
