Here is a simple example of using a GA in Python with PyOptSparse and my wrapper, available here.
In [2]:
def rosen(x):
    # Rosenbrock function: return the objective f and an (empty) list of constraints c
    f = (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
    c = []
    return f, c
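The Rosenbrock function has its minimum at (1, 1), where f = 0, so a quick evaluation there (just a sanity check, not part of the optimization itself) should return zero:

print(rosen([1.0, 1.0]))  # expect (0.0, [])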
Here is where we define the problem and choose an optimizer. Many options exist; I've set a few below, and the documentation lists the rest.
In [3]:
from pyoptsparse import NSGA2

# choose optimizer and define options
optimizer = NSGA2()
optimizer.setOption('maxGen', 200)       # maximum number of generations
optimizer.setOption('PopSize', 40)       # population size
optimizer.setOption('pMut_real', 0.01)   # mutation probability for real variables
optimizer.setOption('pCross_real', 1.0)  # crossover probability for real variables
Now we can run the optimizer and parse the results.
In [4]:
from pyoptwrapper import optimize

x0 = [4.0, 4.0]    # starting point
lb = [-5.0, -5.0]  # lower bounds
ub = [5.0, 5.0]    # upper bounds

xopt, fopt, info = optimize(rosen, x0, lb, ub, optimizer)
print('results:', xopt, fopt, info)
NSGA-II, like many genetic algorithms, doesn't have any specific convergence criteria other than the maximum number of generations, which I set to 200 in this case. Notice that the answer is reasonable, but not very accurate.
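One way to tighten the result is simply to give the GA a bigger budget and rerun. This is just a sketch reusing the same calls shown above; the larger values are arbitrary illustrations, and because the algorithm is stochastic the improvement isn't guaranteed:

optimizer.setOption('maxGen', 500)   # more generations (illustrative value)
optimizer.setOption('PopSize', 100)  # larger population (illustrative value)
xopt, fopt, info = optimize(rosen, x0, lb, ub, optimizer)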
Let's also try with SNOPT and start fairly far away (and I won't supply gradients):
In [5]:
from pyoptsparse import SNOPT

optimizer = SNOPT()
xopt, fopt, info = optimize(rosen, x0, lb, ub, optimizer)
print('results:', xopt, fopt, info)
We get the answer to high precision, and it's fast and repeatable. For something that is differentiable, a gradient-based method is preferable, but if the function space is fundamentally noisy, discrete, or highly multimodal, then a GA or other gradient-free method can be effective.
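To exploit smoothness fully, you can supply analytic gradients rather than relying on finite differencing. As a rough sketch (the gradient-passing interface of the wrapper isn't shown here, so only the derivative itself is written out), the analytic gradient of the Rosenbrock objective defined above is:

def rosen_grad(x):
    # analytic gradient of the Rosenbrock objective
    dfdx0 = -2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2)
    dfdx1 = 200*(x[1] - x[0]**2)
    return [dfdx0, dfdx1]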