In [1]:
from bayes_opt import BayesianOptimization
Internally, the `maximize` method is simply a wrapper around the `suggest`, `probe`, and `register` methods. If you need more control over your optimization loop, the Suggest-Evaluate-Register paradigm gives you that extra flexibility.
For an example of running the `BayesianOptimization` in a distributed fashion (where the function being optimized is evaluated concurrently on different cores/machines/servers), check out the `async_optimization.py` script in the examples folder.
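That script is not reproduced here, but the idea can be sketched with plain Python: a coordinator collects candidate points, farms the evaluations out to a worker pool, and registers the results once they come back. The `suggest_batch` helper below is a hypothetical stand-in (uniform random sampling) so the sketch runs without an optimizer object; in a real setup each suggestion would come from `optimizer.suggest` and each result would be passed to `optimizer.register`.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def black_box_function(x, y):
    # Same toy objective used throughout this tour.
    return -x ** 2 - (y - 1) ** 2 + 1

def suggest_batch(n):
    # Hypothetical stand-in for optimizer.suggest(): random points
    # inside the pbounds used in this tour.
    return [{"x": random.uniform(-2, 2), "y": random.uniform(-3, 3)}
            for _ in range(n)]

points = suggest_batch(4)
with ThreadPoolExecutor(max_workers=4) as pool:
    # The evaluations run concurrently, as they would on separate machines.
    targets = list(pool.map(lambda p: black_box_function(**p), points))

# In a real setup, this is where optimizer.register() would be called
# once for each (point, target) pair.
observations = list(zip(points, targets))
```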
In [2]:
# Let's start by defining our function, bounds, and instantiating an optimization object.
def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1
Notice that the evaluation of the black-box function will NOT be carried out by the optimizer object. We are simulating a situation where this function could be being executed on a different machine; maybe it is written in another language, or it could even be the result of a chemistry experiment. Whatever the case may be, you can take charge of it, and as long as you don't invoke the `probe` or `maximize` methods directly, the optimizer object will ignore the black-box function.
In [3]:
optimizer = BayesianOptimization(
    f=None,
    pbounds={'x': (-2, 2), 'y': (-3, 3)},
    verbose=2,
    random_state=1,
)
One extra ingredient we will need is a `UtilityFunction` instance. In case it is not clear why, take a look at the literature to better understand how this method works.
In [4]:
from bayes_opt import UtilityFunction
utility = UtilityFunction(kind="ucb", kappa=2.5, xi=0.0)
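To make the role of this utility concrete, here is a minimal arithmetic sketch of the UCB rule that `kind="ucb"` selects: each candidate point is scored as the GP's predicted mean plus `kappa` times its predictive standard deviation, and the highest score determines the next suggestion. The mean/std arrays below are made-up illustrative numbers, not output of the library.

```python
import numpy as np

kappa = 2.5  # same exploration weight passed to UtilityFunction above

# Made-up GP predictions at three candidate points (illustrative only).
mean = np.array([0.2, 0.5, 0.1])   # predicted target values
std = np.array([0.9, 0.1, 0.4])    # predictive uncertainty

ucb = mean + kappa * std           # upper confidence bound per candidate
best = int(np.argmax(ucb))
print(best)  # 0
```

The point with the highest uncertainty wins here despite having a lower predicted mean; a larger `kappa` tilts the search toward exploration, a smaller one toward exploitation.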
The `suggest` method of our optimizer can be called at any time. What you get back is a suggestion for the next parameter combination the optimizer wants to probe.

Notice that while the optimizer hasn't observed any points, the suggestions will be random. However, they will stop being random and improve in quality the more points are observed.
In [5]:
next_point_to_probe = optimizer.suggest(utility)
print("Next point to probe is:", next_point_to_probe)
You are now free to evaluate your function at the suggested point however/whenever you like.
In [6]:
target = black_box_function(**next_point_to_probe)
print("Found the target value to be:", target)
The last thing left to do is to tell the optimizer what target value was observed.
In [7]:
optimizer.register(
    params=next_point_to_probe,
    target=target,
)
In [8]:
for _ in range(5):
    next_point = optimizer.suggest(utility)
    target = black_box_function(**next_point)
    optimizer.register(params=next_point, target=target)

    print(target, next_point)
print(optimizer.max)
There is no principled way of dealing with discrete parameters using this package.

Ok, now that we got that out of the way, how do you do it? You're bound to be in a situation where some of your function's parameters may only take on discrete values. Unfortunately, the nature of Bayesian optimization with Gaussian processes doesn't allow for an easy/intuitive way of dealing with discrete parameters - but that doesn't mean it is impossible. The example below showcases a simple, yet reasonably adequate, way of dealing with discrete parameters.
In [8]:
def func_with_discrete_params(x, y, d):
    # Simulate the necessity of d being discrete.
    assert type(d) == int
    return ((x + y + d) // (1 + d)) / (1 + (x + y) ** 2)
In [9]:
def function_to_be_optimized(x, y, w):
    d = int(w)
    return func_with_discrete_params(x, y, d)
In [10]:
optimizer = BayesianOptimization(
    f=function_to_be_optimized,
    pbounds={'x': (-10, 10), 'y': (-10, 10), 'w': (0, 5)},
    verbose=2,
    random_state=1,
)
In [11]:
optimizer.maximize(alpha=1e-3)
The Bayesian optimization algorithm works by performing a Gaussian process regression of the observed combinations of parameters and their associated target values. The predicted parameter$\rightarrow$target hyper-surface (and its uncertainty) is then used to guide the next best point to probe.
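As a rough sketch of that mechanism — a bare-bones GP posterior with an RBF kernel written from scratch for illustration, not the library's actual internals (the library delegates this to a GP regressor):

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Toy 1-D observed parameter -> target pairs.
X = np.array([-1.0, 0.0, 1.0])
y = np.array([0.0, 1.0, 0.0])

noise = 1e-6  # plays the role of the GP's alpha parameter
K = rbf(X, X) + noise * np.eye(len(X))
K_inv = np.linalg.inv(K)

def predict(x_new):
    # GP posterior mean and standard deviation at new points.
    k = rbf(x_new, X)
    mean = k @ K_inv @ y
    var = 1.0 - np.sum((k @ K_inv) * k, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

mean, std = predict(np.array([0.0, 2.0]))
# Near an observed point the GP is confident (std ~ 0); far from the data,
# uncertainty grows - and the acquisition function exploits exactly that
# mean/uncertainty trade-off to pick the next probe.
```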
In [12]:
optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds={'x': (-2, 2), 'y': (-3, 3)},
    verbose=2,
    random_state=1,
)
optimizer.maximize(
    init_points=1,
    n_iter=5,
    # What follows are GP regressor parameters
    alpha=1e-3,
    n_restarts_optimizer=5
)
Another alternative, especially useful if you're calling `maximize` multiple times or optimizing outside the `maximize` loop, is to call the `set_gp_params` method.
In [13]:
optimizer.set_gp_params(normalize_y=True)
The `alpha` parameter

When dealing with functions with discrete parameters, or a particularly erratic target space, it might be beneficial to increase the value of the `alpha` parameter. This parameter controls how much noise the GP can handle, so increase it whenever you think extra flexibility is needed.
Observers are objects that subscribe and listen to particular events fired by the `BayesianOptimization` object.

When an event gets fired, a callback function is called with the event and the `BayesianOptimization` instance passed as parameters. The callback can be specified at the time of subscription. If none is given, it will look for an `update` method on the observer.
In [14]:
from bayes_opt.event import DEFAULT_EVENTS, Events
In [15]:
optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds={'x': (-2, 2), 'y': (-3, 3)},
    verbose=2,
    random_state=1,
)
In [16]:
class BasicObserver:
    def update(self, event, instance):
        """Does whatever you want with the event and `BayesianOptimization` instance."""
        print("Event `{}` was observed".format(event))
In [17]:
my_observer = BasicObserver()

optimizer.subscribe(
    event=Events.OPTIMIZATION_STEP,
    subscriber=my_observer,
    callback=None,  # Will use the `update` method as callback
)
Alternatively, you can pass a completely different callback.
In [18]:
def my_callback(event, instance):
    print("Go nuts here!")

optimizer.subscribe(
    event=Events.OPTIMIZATION_START,
    subscriber="Any hashable object",
    callback=my_callback,
)
In [19]:
optimizer.maximize(init_points=1, n_iter=2)
For a list of all default events, you can check out `DEFAULT_EVENTS`.
In [20]:
DEFAULT_EVENTS
Out[20]: