In [1]:
% ignore - internal setup
path('approx', path)
set(0, 'DefaultLineLineWidth', 2)
set(0, 'DefaultAxesFontSize', 14);

Approximate Reliability Methods

Let's consider the following (very simple) optimization problem. We made it simple intentionally so that we could work out the transmitted variance symbolically in class.

\begin{align*} \text{min.} &\quad 4x_1^2 + 2x_2^2 + x_3^2\\ \text{s.t.} &\quad 6x_1 + 2x_2 + 4x_3 \ge 12\\ &\quad x_1 - 4x_2 + 7x_3 \le 10 \end{align*}

We can rewrite the two constraints in standard form as: \begin{align*} c_1(x) &= -6x_1 - 2x_2 - 4x_3 + 12 \le 0\\ c_2(x) &= x_1 - 4x_2 + 7x_3 - 10 \le 0 \end{align*}

Let's create functions for the objective and the deterministic constraints. Obtaining the derivatives is easy symbolically, so let's do that as well.


In [2]:
%load approx/fun.m

In [ ]:
function [f, g] = fun(x)

f = 4*x(1)^2 + 2*x(2)^2 + x(3)^2;

g = [8*x(1); 4*x(2); 2*x(3)];

end

In [3]:
%load approx/dcon.m

In [ ]:
function [c, ceq, gc, gceq] = dcon(x)

c = [-6*x(1) - 2*x(2) - 4*x(3) + 12;
    x(1) - 4*x(2) + 7*x(3) - 10];

ceq = [];

gc = [-6 1;
     -2 -4;
     -4 7];

gceq = [];

end

Computing the deterministic optimum:


In [4]:
lb = [-10; -10; -10];
ub = [10; 10; 10];
x0 = [1; 1; 1];

options = optimoptions('fmincon', 'disp', 'iter',...
    'SpecifyObjectiveGradient', true, ...
    'SpecifyConstraintGradient',true);

[xopt, fopt] = fmincon(@fun, x0, [], [], [], [], lb, ub, @dcon, options)


                                            First-order      Norm of
 Iter F-count            f(x)  Feasibility   optimality         step
    0       1    7.000000e+00    0.000e+00    1.193e+00
    1       2    5.854459e+00    0.000e+00    3.361e+00    9.920e-01
    2       3    5.746305e+00    0.000e+00    1.688e+00    7.306e-01
    3       4    5.544435e+00    0.000e+00    9.873e-02    3.227e-01
    4       5    5.388740e+00    0.000e+00    1.539e-01    8.590e-02
    5       6    5.387544e+00    0.000e+00    3.961e-04    4.465e-03
    6       7    5.386940e+00    0.000e+00    1.466e-04    6.772e-04
    7       8    5.386939e+00    0.000e+00    2.000e-06    2.693e-06

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.




xopt =

    0.7136
    0.5628
    1.6482


fopt =

    5.3869

Now, let's try to compute a reliable optimum. First, we were given that each of the design variables is normally distributed with a standard deviation of 0.1. Let's propagate that uncertainty using the simple linear expansion method discussed in class (with no uncertain parameters in this case):

$$ \sigma_c^2 = \sum_{i=1}^n \left(\frac{\partial c}{\partial x_i} \sigma_{x_i}\right)^2 $$
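Because both constraints are linear, their gradients are constant and this first-order estimate is actually exact. As an aside, here is the same propagation as a compact NumPy sketch (the Jacobian rows are the constraint gradients from dcon):

```python
import numpy as np

# constraint Jacobian: one row per constraint, one column per design variable
J = np.array([[-6.0, -2.0, -4.0],
              [1.0, -4.0, 7.0]])
sigma_x = np.array([0.1, 0.1, 0.1])  # input standard deviations

# first-order propagation: sigma_c^2 = sum_i (dc/dx_i * sigma_x_i)^2
sigma_c = np.sqrt(((J * sigma_x)**2).sum(axis=1))
print(sigma_c)  # ≈ [0.7483, 0.8124]
```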

In [5]:
% standard deviation of each design variable (input)
sigma = [0.1; 0.1; 0.1];  

% compute gradient at our deterministic optimum
[~, ~, gc, ~] = dcon(xopt);

% initialize, separate sigma for each constraint
sigmac = zeros(2, 1);

% estimate transmitted variance
for i = 1:2  % loop through each constraint
    for j = 1:3  % loop through each design variable
        sigmac(i) = sigmac(i) + (gc(j, i)*sigma(j))^2;
    end
end
sigmac = sqrt(sigmac)


sigmac =

    0.7483
    0.8124

Now, we were also given that we want to satisfy each constraint separately with a probability of 99.865%. For a normal distribution we need to compute how many standard deviations that corresponds to (for a one-sided constraint), which we can do with an inverse CDF calculation.


In [6]:
k = norminv(0.99865)


k =

    3.0000
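
The equivalent inverse CDF in Python is `norm.ppf` (a quick cross-check; scipy assumed available):

```python
from scipy.stats import norm

# number of standard deviations giving a one-sided probability of 99.865%
k = norm.ppf(0.99865)
print(k)  # ≈ 3.0
```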

where we use the same k for both constraints (although in general we could use a different k for each constraint). Now, we want to specify new constraints of the form: $$c(x) + k \sigma_c \le 0$$ Our reliable constraint function is below


In [7]:
%load approx/rcon.m

In [ ]:
function [c, ceq, gc, gceq] = rcon(x, k, sigmac)

% deterministic constraint
c = [-6*x(1) - 2*x(2) - 4*x(3) + 12;
    x(1) - 4*x(2) + 7*x(3) - 10];

% add transmitted variance for a reliable constraint
c = c + k*sigmac;

ceq = [];

gc = [-6 1;
     -2 -4;
     -4 7];

gceq = [];

end

We need to wrap the rcon function so that it takes only x before passing it to fmincon.


In [8]:
rconwrap = @(x) rcon(x, k, sigmac);  % we've already computed k and sigmac

[xoptR, foptR] = fmincon(@fun, xopt, [], [], [], [], lb, ub, rconwrap, options)  % note starting at deterministic optimum to speed up convergence.


                                            First-order      Norm of
 Iter F-count            f(x)  Feasibility   optimality         step
    0       1    5.386939e+00    2.437e+00    9.646e-01
    1       3    6.947600e+00    1.075e+00    1.406e+00    3.415e-01
    2       4    8.906138e+00    0.000e+00    1.238e+00    5.155e-01
    3       5    8.781377e+00    0.000e+00    4.527e-01    3.692e-01
    4       6    8.598708e+00    0.000e+00    2.110e-02    5.385e-02
    5       7    8.562647e+00    0.000e+00    2.863e-03    8.034e-03
    6       8    8.562204e+00    0.000e+00    2.003e-04    4.962e-04
    7       9    8.561807e+00    0.000e+00    2.023e-06    9.439e-05

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.




xoptR =

    0.9986
    1.0506
    1.5381


foptR =

    8.5618
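
As a sanity check (a Python/NumPy sketch, reusing the rounded values printed above), both shifted constraints $c + k\sigma_c$ should be active, i.e. approximately zero, at the reliable optimum:

```python
import numpy as np

xoptR = np.array([0.9986, 1.0506, 1.5381])  # reliable optimum from above
J = np.array([[-6.0, -2.0, -4.0],           # constraint Jacobian
              [1.0, -4.0, 7.0]])
b = np.array([12.0, -10.0])                 # constants in c(x) = J x + b
k = 3.0
sigma_c = np.array([0.7483, 0.8124])        # transmitted standard deviations

shifted = J @ xoptR + b + k * sigma_c       # reliable constraints c + k*sigma_c
print(shifted)                              # both ≈ 0 (to rounding): both are active
```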

We've sacrificed some objective value in order to obtain a reliable solution. Let's check the reliability of our deterministic solution using a Monte Carlo simulation.


In [9]:
% number of samples
N = 1e5;

% random variables from normal distribution
rv = randn(3, N);  

% initialize 
c = zeros(2, N);  % constraint values (2 constraints, N samples)

for i = 1:N

    % sample random numbers from normal distribution about the deterministic optimum
    xr = xopt + rv(:, i).*sigma;
    [c(:, i), ~, ~, ~] = dcon(xr);
    
end

reliability1 = nnz(c(1, :) < 0)/N
reliability2 = nnz(c(2, :) < 0)/N


reliability1 =

    0.4986


reliability2 =

    0.5033

Notice that it fails about 50% of the time! Let's now check the reliability of our new solution.
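This ~50% failure rate is exactly what the linearization predicts: both constraints are active ($c \approx 0$) at the deterministic optimum, so under the normal approximation each is satisfied with probability $\Phi(0) = 0.5$. A quick Python check (scipy assumed available):

```python
import numpy as np
from scipy.stats import norm

xopt = np.array([0.7136, 0.5628, 1.6482])  # deterministic optimum from above
J = np.array([[-6.0, -2.0, -4.0],          # constraint Jacobian
              [1.0, -4.0, 7.0]])
b = np.array([12.0, -10.0])                # constants in c(x) = J x + b
sigma_x = np.array([0.1, 0.1, 0.1])

sigma_c = np.sqrt(((J * sigma_x)**2).sum(axis=1))
c = J @ xopt + b                      # both ≈ 0: active constraints
reliability = norm.cdf(-c / sigma_c)  # P(c <= 0) under the normal approximation
print(reliability)                    # ≈ [0.5, 0.5]
```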


In [22]:
for i = 1:N

    % sample random numbers from normal distribution about the reliable optimum
    xr = xoptR + rv(:, i).*sigma;
    
    [c(:, i), ~, ~, ~] = dcon(xr);
end

reliability1 = nnz(c(1, :) < 0)/N
reliability2 = nnz(c(2, :) < 0)/N


reliability1 =

    0.9986


reliability2 =

    0.9988

The reliability is very close to our target of 99.865% for each constraint individually!
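The Monte Carlo estimate also agrees with the analytic prediction of the linearization: each reliability is $\Phi(-c(x)/\sigma_c)$. A Python cross-check (scipy assumed; values taken from the output above):

```python
import numpy as np
from scipy.stats import norm

xoptR = np.array([0.9986, 1.0506, 1.5381])  # reliable optimum from above
J = np.array([[-6.0, -2.0, -4.0],           # constraint Jacobian
              [1.0, -4.0, 7.0]])
b = np.array([12.0, -10.0])                 # constants in c(x) = J x + b
sigma_x = np.array([0.1, 0.1, 0.1])

sigma_c = np.sqrt(((J * sigma_x)**2).sum(axis=1))
c = J @ xoptR + b
reliability = norm.cdf(-c / sigma_c)  # ≈ 0.99865 for each constraint
print(reliability)
```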

