We consider the simple example from the GloptiPoly manual,
In [1]:
using Polyopt
We define the variables
In [2]:
x1, x2, x3 = variables(["x1","x2","x3"]);
and the objective function
In [3]:
f = -2*x1 + x2 - x3;
and an array with all inequalities in the form $g_i(x) \geq 0$,
In [4]:
g = [24 - 20*x1 + 9*x2 - 13*x3 + 4*x1^2 - 4*x1*x2 + 4*x1*x3 + 2*x2^2 - 2*x2*x3 + 2*x3^2,
4 - (x1 + x2 + x3),
6 - (3*x2 + x3),
x1, 2-x1,
x2,
x3, 3-x3]
Out[4]:
This problem has multiple global minimizers, so we perturb the objective slightly to single out one of them,
In [5]:
prob = momentprob(2, f + 1e-3*(x1+x2+x3), g);
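Adding a small generic linear term is a standard way to break ties among minimizers: the perturbed problem typically has a unique minimizer. The perturbed objective is

$f_\epsilon(x) = f(x) + \epsilon\,(x_1 + x_2 + x_3), \qquad \epsilon = 10^{-3},$

and since the constraints imply $0 \leq x_1 + x_2 + x_3 \leq 4$, the perturbed optimal value differs from the original one by at most $4 \cdot 10^{-3}$.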
We next solve the problem using MOSEK, turning off log-output:
In [6]:
X, Z, t, y, solsta = solve_mosek(prob, showlog=false);
In [7]:
t
Out[7]:
In [8]:
prob = momentprob(3, f + 1e-3*(x1+x2+x3), g);
In [9]:
X, Z, t, y, solsta = solve_mosek(prob, showlog=false);
The lower bound increases going from order two to order three,
In [10]:
t
Out[10]:
In [11]:
prob = momentprob(4, f + 1e-3*(x1+x2+x3), g);
We solve the fourth-order relaxation, this time showing the log-output from the solver,
In [12]:
X, Z, t, y, solsta = solve_mosek(prob, showlog=true);
Again the lower bound increases when we increase the relaxation order,
In [13]:
t
Out[13]:
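This monotone behavior is general for the Lasserre moment hierarchy: the optimal values $t_d$ of the relaxations of increasing order $d$ form a nondecreasing sequence of lower bounds on the true minimum $f^\star$,

$t_2 \leq t_3 \leq t_4 \leq \cdots \leq f^\star,$

and because the linear constraints here already bound all variables (so the Archimedean condition holds), the sequence converges to $f^\star$ as $d$ grows.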
At this point the hierarchy of relaxations has converged; higher-order relaxations give the same bound. Let us inspect the recovered solution,
In [14]:
[prob.basis y]
Out[14]:
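To interpret this display: each entry of $y$ is the computed moment of the corresponding basis monomial. When the relaxation is exact and the moment matrix has rank one, $y$ is the vector of monomial evaluations at a minimizer $x^\star$, i.e., $y_\alpha \approx (x^\star)^\alpha$ for each basis monomial $x^\alpha$; in particular, the moments of the degree-one monomials $x_1$, $x_2$, $x_3$ approximate the coordinates of the minimizer, which is what the extraction via Polyopt.vectorize exploits.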
Let us verify that the recovered solution
In [15]:
X1, X2, X3 = Polyopt.vectorize([x1,x2,x3], 8)*y
Out[15]:
is optimal. It achieves the same objective value as the relaxation,
In [16]:
Polyopt.evalpoly(f, [X1; X2; X3])
Out[16]:
and it is feasible,
In [17]:
[ Polyopt.evalpoly(gi, [X1; X2; X3]) for gi=g ]
Out[17]: