Bayesian optimization is particularly useful for expensive optimization problems: problems whose objective (and constraints) are time-consuming to evaluate, such as physical measurements, engineering simulations, or hyperparameter optimization of deep learning models. It can also be beneficial when evaluations are very noisy. If your problem does not satisfy these requirements, other optimization algorithms may be better suited.
To set up a Bayesian optimization scheme with GPflowOpt you have to:
- define your objective and specify the optimization domain,
- set up a GPflow model and choose an acquisition function,
- create a BayesianOptimizer and run it.
In [1]:
import numpy as np
from gpflowopt.domain import ContinuousParameter
def branin(x):
    x = np.atleast_2d(x)
    x1 = x[:, 0]
    x2 = x[:, 1]
    a = 1.
    b = 5.1 / (4. * np.pi ** 2)
    c = 5. / np.pi
    r = 6.
    s = 10.
    t = 1. / (8. * np.pi)
    ret = a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s
    return ret[:, None]
domain = ContinuousParameter('x1', -5, 10) + \
         ContinuousParameter('x2', 0, 15)
domain
Out[1]:
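As a quick sanity check (not part of the original notebook): the Branin function has three global minima, at roughly (-π, 12.275), (π, 2.275), and (9.42478, 2.475), all with value ≈ 0.397887. Re-stating the objective in a standalone snippet, we can verify this before spending any optimization budget:

```python
import numpy as np

def branin(x):
    # Same objective as above, restated so this snippet is self-contained.
    x = np.atleast_2d(x)
    x1, x2 = x[:, 0], x[:, 1]
    a, b, c = 1., 5.1 / (4. * np.pi ** 2), 5. / np.pi
    r, s, t = 6., 10., 1. / (8. * np.pi)
    ret = a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s
    return ret[:, None]

# The three known global minimizers of the Branin function.
minima = np.array([[-np.pi, 12.275], [np.pi, 2.275], [9.42478, 2.475]])
vals = branin(minima)  # each value should be close to 0.397887
```

This also confirms the function accepts an (n, 2) array and returns an (n, 1) column, which is the shape convention GPflowOpt expects.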
In [2]:
import gpflow
from gpflowopt.bo import BayesianOptimizer
from gpflowopt.design import LatinHyperCube
from gpflowopt.acquisition import ExpectedImprovement
from gpflowopt.optim import SciPyOptimizer, StagedOptimizer, MCOptimizer
# Use a standard Gaussian process regression model
lhd = LatinHyperCube(21, domain)
X = lhd.generate()
Y = branin(X)
model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3)
# Now create the Bayesian Optimizer
alpha = ExpectedImprovement(model)
acquisition_opt = StagedOptimizer([MCOptimizer(domain, 200),
                                   SciPyOptimizer(domain)])
optimizer = BayesianOptimizer(domain, alpha, optimizer=acquisition_opt, verbose=True)
# Run the Bayesian optimization
r = optimizer.optimize(branin, n_iter=10)
print(r)
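The ExpectedImprovement acquisition used above has a well-known closed form for minimization: EI = (f_min - μ)·Φ(z) + σ·φ(z) with z = (f_min - μ)/σ, where μ and σ are the GP posterior mean and standard deviation at a candidate point and f_min is the best observed value. A minimal standard-library sketch of that formula (this is the textbook expression, not GPflowOpt's implementation):

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for minimization at a point with posterior mean `mu`
    and standard deviation `sigma`, given incumbent best value `f_min`."""
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_min - mu) * Phi + sigma * phi
```

Note that EI is always non-negative and shrinks as the posterior mean rises above the incumbent, which is why maximizing it trades off exploitation (low μ) against exploration (high σ).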
That's all! Your objective function has now been optimized over 10 iterations.
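For reference, the LatinHyperCube design used for the 21 initial points stratifies each input dimension so every stratum contains exactly one sample. A hypothetical NumPy-only sketch of the idea (GPflowOpt's actual implementation differs in its details):

```python
import numpy as np

def latin_hypercube(n, lower, upper, seed=0):
    """Sketch of a Latin hypercube design: one point per stratum in each
    dimension, placed at stratum midpoints and shuffled per column."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    mid = (np.arange(n) + 0.5) / n                      # stratum midpoints in (0, 1)
    cols = [rng.permutation(mid) for _ in range(lower.size)]
    return lower + np.stack(cols, axis=1) * (upper - lower)

# 21 points over the Branin domain [-5, 10] x [0, 15]
X0 = latin_hypercube(21, [-5.0, 0.0], [10.0, 15.0])
```

Compared to uniform random sampling, this guarantees coverage along each axis, which gives the initial GP model information across the whole domain before any acquisition-driven evaluations are made.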