In [1]:
% ignore - internal setup
path('approx', path)
set(0, 'DefaultLineLineWidth', 2)
set(0, 'DefaultAxesFontSize', 14);
Let's consider the following (very simple) optimization problem. We made it simple intentionally so that we could work out the transmitted variance symbolically in class.
\begin{align*} \text{min.} &\quad 4x_1^2 + 2x_2^2 + x_3^2\\ \text{s.t.} &\quad 6x_1 + 2x_2 + 4x_3 \ge 12\\ &\quad x_1 - 4x_2 + 7x_3 \le 10 \end{align*}

We can rewrite the two constraints in standard form as:

\begin{align*} c_1(x) &= -6x_1 - 2x_2 - 4x_3 + 12 \le 0\\ c_2(x) &= x_1 - 4x_2 + 7x_3 - 10 \le 0 \end{align*}
Let's create functions for the objective and the deterministic constraints. Obtaining the derivatives is easy symbolically, so let's do that as well.
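For reference, differentiating the objective and constraints by hand gives the gradients we will hard-code below:
$$\nabla f = \begin{bmatrix} 8x_1 \\ 4x_2 \\ 2x_3 \end{bmatrix}, \qquad \nabla c_1 = \begin{bmatrix} -6 \\ -2 \\ -4 \end{bmatrix}, \qquad \nabla c_2 = \begin{bmatrix} 1 \\ -4 \\ 7 \end{bmatrix}$$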
In [2]:
%load approx/fun.m
In [ ]:
function [f, g] = fun(x)
    % objective value
    f = 4*x(1)^2 + 2*x(2)^2 + x(3)^2;
    % gradient of the objective
    g = [8*x(1); 4*x(2); 2*x(3)];
end
In [3]:
%load approx/dcon.m
In [ ]:
function [c, ceq, gc, gceq] = dcon(x)
    % inequality constraints in standard form (c <= 0)
    c = [-6*x(1) - 2*x(2) - 4*x(3) + 12;
         x(1) - 4*x(2) + 7*x(3) - 10];
    ceq = [];
    % constraint gradients (one column per constraint)
    gc = [-6 1;
          -2 -4;
          -4 7];
    gceq = [];
end
Computing the deterministic optimum:
In [4]:
lb = [-10; -10; -10];
ub = [10; 10; 10];
x0 = [1; 1; 1];
options = optimoptions('fmincon', 'disp', 'iter',...
'SpecifyObjectiveGradient', true, ...
'SpecifyConstraintGradient',true);
[xopt, fopt] = fmincon(@fun, x0, [], [], [], [], lb, ub, @dcon, options)
Now, let's try to compute a reliable optimum. First, we were given that each of the design variables is normally distributed with a standard deviation of 0.1. Let's propagate that uncertainty using the simple linear expansion method discussed in class (with no uncertain parameters in this case):
$$ \sigma_c^2 = \sum_{i=1}^n \left(\frac{\partial c}{\partial x_i} \sigma_{x_i}\right)^2 $$
In [5]:
% standard deviation of each design variable (input)
sigma = [0.1; 0.1; 0.1];
% compute gradient at our deterministic optimum
[~, ~, gc, ~] = dcon(xopt);
% initialize, separate sigma for each constraint
sigmac = zeros(2, 1);
% estimate transmitted variance
for i = 1:2 % loop through each constraint
for j = 1:3 % loop through each design variable
sigmac(i) = sigmac(i) + (gc(j, i)*sigma(j))^2;
end
end
sigmac = sqrt(sigmac)
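As a side note, the same transmitted-standard-deviation calculation can be written without the loops. This is just an equivalent sketch; it assumes a MATLAB release with implicit expansion (R2016b or newer).
In [ ]:
% equivalent vectorized form: square each (dc/dx_j)*sigma_j term,
% sum over the design variables (rows), then take the square root
sigmac_vec = sqrt(sum((gc.*sigma).^2, 1))'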
Now, we were also given that we wanted to satisfy each constraint separately with a probability of 99.865%. For a normal distribution we need to compute how many standard deviations that corresponds to (for a one-sided constraint). We can do that with an inverse CDF calculation.
In [6]:
k = norminv(0.99865)
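As a quick sanity check, 99.865% is the one-sided probability of falling within three standard deviations of the mean for a normal distribution, so k should come out to approximately 3:
In [ ]:
% sanity check: the one-sided 3-sigma probability is 99.865%
normcdf(3)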
We use the same k for both constraints (although in general we could use a different k for each constraint). Now, we want to specify new constraints of the form: $$c(x) + k \sigma_c \le 0$$ Our reliable constraint function is below.
In [7]:
%load approx/rcon.m
In [ ]:
function [c, ceq, gc, gceq] = rcon(x, k, sigmac)
    % deterministic constraints
    c = [-6*x(1) - 2*x(2) - 4*x(3) + 12;
         x(1) - 4*x(2) + 7*x(3) - 10];
    % shift each constraint by k transmitted standard deviations to make it reliable
    c = c + k*sigmac;
    ceq = [];
    % gradients are unchanged since k*sigmac does not depend on x
    gc = [-6 1;
          -2 -4;
          -4 7];
    gceq = [];
end
We need to wrap the rcon function so that it only takes in x before passing to fmincon.
In [8]:
rconwrap = @(x) rcon(x, k, sigmac); % we've already computed k and sigmac
[xoptR, foptR] = fmincon(@fun, xopt, [], [], [], [], lb, ub, rconwrap, options) % note starting at deterministic optimum to speed up convergence.
We've sacrificed some objective value in order to obtain a reliable solution. Let's check the reliability of our deterministic solution using a Monte Carlo simulation.
In [9]:
% number of samples
N = 1e5;
% random variables from normal distribution
rv = randn(3, N);
% initialize
c = zeros(2, N); % constraint values (2 constraints, N samples)
for i = 1:N
% sample random numbers from normal distribution about the deterministic optimum
xr = xopt + rv(:, i).*sigma;
[c(:, i), ~, ~, ~] = dcon(xr);
end
reliability1 = nnz(c(1, :) < 0)/N
reliability2 = nnz(c(2, :) < 0)/N
Notice that it fails about 50% of the time! That makes sense: the constraints are active (exactly zero) at the deterministic optimum, so a symmetric random perturbation violates them roughly half the time. Let's now check the reliability of our new solution.
In [22]:
for i = 1:N
% sample random numbers from normal distribution about the reliable optimum
xr = xoptR + rv(:, i).*sigma;
[c(:, i), ~, ~, ~] = dcon(xr);
end
reliability1 = nnz(c(1, :) < 0)/N
reliability2 = nnz(c(2, :) < 0)/N
The reliability is very close to our target of 99.865% for each constraint individually!
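Because the constraints are linear, their gradients (and hence the transmitted standard deviations) are the same at every point, so under the linear expansion we can also cross-check the reliability analytically as the normal CDF evaluated at $-c(x)/\sigma_c$. A quick sketch of that check, reusing the dcon and sigmac we already computed:
In [ ]:
% analytical reliability estimate under the linear expansion:
% P(c <= 0) is approximately normcdf(-c/sigma_c)
cR = dcon(xoptR);
normcdf(-cR./sigmac)  % should be about 0.99865 for each active constraint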