In [1]:
#import necessary modules, set up the plotting
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib
matplotlib.rcParams['figure.figsize'] = (8, 6)
from matplotlib import pyplot as plt
import GPy
The GPy model class has a set of features which are designed to make it simple to explore the parameter space of the model. By default, the scipy optimisers are used to fit GPy models (via model.optimize()), for which we provide mechanisms for ‘free’ optimisation: GPy can ensure that naturally positive parameters (such as variances) remain positive. But these mechanisms are much more powerful than simple reparameterisation, as we shall see.
Throughout this tutorial we'll use a sparse GP regression model as an example. This example can be found in GPy.examples.regression. All of the examples included in GPy return an instance of a model class, and can therefore be called in the following way:
In [2]:
m = GPy.examples.regression.sparse_GP_regression_1D(plot=False, optimize=False)
To see the current state of the model parameters and the model's (marginal) likelihood, just print the model:
print(m)
The first thing displayed is the log-likelihood of the model with its current parameters. Below the log-likelihood, a table with all of the model's parameters is shown. For each parameter, the table contains its name, its current value, and, where defined, any constraints, ties and prior distributions associated with it.
In [3]:
m
Out[3]:
In this case the kernel parameters (rbf.variance, rbf.lengthscale) as well as the likelihood noise parameter (Gaussian_noise.variance) are constrained to be positive, while the inducing inputs have no associated constraints. There are also no ties or priors defined.
You can also print subparts of the model by printing the subcomponents individually; this shows the details of that particular parameter handle:
In [4]:
m.rbf
Out[4]:
When you want a closer look at multi-value parameters, print them directly:
In [5]:
m.inducing_inputs
Out[5]:
In [6]:
m.inducing_inputs[0] = 1
The preferred way of interacting with parameters is to act on the parameter handle itself. Interacting with parameter handles is simple: the names printed by print(m) are accessible interactively and programmatically. For example, try to set the kernel's lengthscale to 0.2 and print the result:
In [7]:
m.rbf.lengthscale = 0.2
print(m)
This will already have updated the model's inner state: note how the log-likelihood has changed. You can immediately plot the model or inspect the changes in its posterior (m.posterior).
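For a quick look, the snippet below peeks at that posterior object; the mean and covariance attributes used here are a sketch based on GPy's Posterior class, so treat the exact names as an assumption:
# sketch: inspect the posterior after changing the lengthscale; for this sparse
# model the posterior is over the outputs at the inducing inputs
print(m.posterior.mean.shape)        # posterior mean (assumed attribute name)
print(m.posterior.covariance.shape)  # posterior covariance (assumed attribute name)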
The model's parameters can also be accessed through regular expressions, by 'indexing' the model with a regular expression that matches the parameter name. Indexing by regular expression only retrieves leaves of the hierarchy, and you can retrieve the matched values by calling values() on the returned object:
In [8]:
print(m['.*var'])
# print("variances as a np.array:", m['.*var'].values())
# print("np.array of rbf matches: ", m['.*rbf'].values())
Parameters can also be set by regular expression. Here are a few examples. Note that each time the values are set, computations are carried out internally to update the log likelihood of the model.
In [9]:
m['.*var'] = 2.
print(m)
m['.*var'] = [2., 3.]
print(m)
A handy trick for seeing all of the parameters of the model at once is to regular-expression match every variable:
In [10]:
print(m[''])
Another way to interact with the model's parameters is through the parameter array. The parameter array holds all of the model's parameters in one place and is editable. It can be accessed by indexing the model; for example, you can set all the parameters through this mechanism:
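As a side note, the flat array itself can be inspected directly; in recent GPy versions the attribute is called param_array (the name is an assumption here, not something shown above):
# sketch: print the flat array holding every parameter value of the model
print(m.param_array)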
In [11]:
new_params = np.r_[[-4,-2,0,2,4], [.1,2], [.7]]
print(new_params)
m[:] = new_params
print(m)
Parameters themselves (leaves of the hierarchy) can be indexed and used in the same way as numpy arrays. First, let us set a slice of the inducing_inputs:
In [12]:
m.inducing_inputs[2:, 0] = [1,3,5]
print(m.inducing_inputs)
Or you can use the parameters like normal numpy arrays in calculations:
In [13]:
precision = 1./m.Gaussian_noise.variance
print(precision)
In [14]:
print "all gradients of the model:\n", m.gradient
print "\n gradients of the rbf kernel:\n", m.rbf.gradient
If we optimize the model, the gradients should be (close to) zero:
In [15]:
m.optimize()
print(m.gradient)
When we initially called the example it was optimized, and hence the log-likelihood gradients were close to zero. However, since we had been changing the parameters, the gradients had moved far from zero before this optimization. Next we are going to show how to optimize the model while placing different restrictions on the parameters.
Once a constraint has been set on a parameter, it is possible to remove it with the command unconstrain(), which can be called on any parameter handle of the model. The methods constrain() and unconstrain() return the indices which were actually (un)constrained, relative to the parameter handle the method was called on. This is particularly handy for reporting which parameters were re-constrained when constraining a parameter that was already constrained:
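As a quick sketch of that return value (behaviour as described above, assumed to hold for the installed GPy version), you can capture the indices reported by unconstrain():
# sketch: unconstrain() hands back the indices it actually freed, relative to
# the handle it was called on (here the noise variance)
idx = m.Gaussian_noise.variance.unconstrain()
print(idx)
m.Gaussian_noise.variance.constrain_positive()  # put the positivity constraint back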
In [16]:
m.rbf.variance.unconstrain()
print(m)
In [17]:
m.unconstrain()
print(m)
If you want to remove only a specific constraint, you can call the respective method, such as unconstrain_fixed() (or unfix()), to unfix only the fixed parameters:
In [18]:
m.inducing_inputs[0].fix()
m.rbf.constrain_positive()
print(m)
m.unfix()
print(m)
In [19]:
m.Gaussian_noise.constrain_positive()
m.rbf.constrain_positive()
m.optimize()
By default, GPy uses the lbfgsb optimizer. Some optional parameters of optimize() are listed below; a short usage sketch follows the list.
- optimizer: which optimizer to use; currently there are lbfgsb, fmin_tnc, scg, simplex, or any identifier uniquely identifying an optimizer. Thus, you can say m.optimize('bfgs') to use the lbfgsb optimizer.
- messages: whether the optimizer is verbose. Each optimizer has its own way of printing, so do not be confused by differing messages from different optimizers.
- max_iters: the maximum number of iterations to take. Some optimizers count iterations as function calls, others as iterations of the algorithm. If the number of iterations matters, please look into scipy.optimize for more details, so that you can pass the right parameters to optimize().
- gtol: only for some optimizers; determines the convergence criterion as the tolerance of the gradient at which to finish the optimization.
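As a short sketch, these options are passed straight to optimize(); the keyword names below follow the list above:
# sketch: choose the optimizer and options explicitly
m.optimize(optimizer='lbfgsb', messages=True, max_iters=1000)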
Many of GPy's models have built-in plot functionality. We distinguish between plotting the posterior of the function (m.plot_f) and plotting the posterior over predicted data values (m.plot). This distinction becomes especially important for non-Gaussian likelihoods. Here we'll plot the sparse GP model we've been working with. For more information on the meaning of the plot, please refer to the accompanying basic_gp_regression and sparse_gp notebooks.
In [20]:
fig = m.plot()
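To contrast with m.plot above, the posterior over the latent function mentioned earlier can be plotted the same way (a sketch, assuming the default matplotlib backend is still active at this point):
# sketch: plot the posterior over the latent function f rather than over predicted data
fig_f = m.plot_f()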
We can even change the plotting backend and plot the model using a different library.
In [21]:
GPy.plotting.change_plotting_library('plotly')
fig = m.plot(plot_density=True)
GPy.plotting.show(fig, filename='gpy_sparse_gp_example')
Out[21]: