Introduction

Model of a two-state repairable system: $$ \begin{equation} \frac{dp_0}{dt} = -\lambda_0 p_0(t) + \int_0^1 \mu_1(x)p_1(x,t)dx + \int_0^1u^{*}(x,t)dx \end{equation} $$

$$ \begin{equation} \frac{\partial p_1(x,t)}{\partial t} + \frac{\partial p_1(x,t)}{\partial x} = -\mu_1(x)p_1(x,t) - u^{*}(x,t) \end{equation} $$

where:

$p_0(t)$: Probability that the device is in good mode 0 at time $t$.

$p_1(x,t)$: Probability density (with respect to repair time $x$) that the failed device is in failure mode 1 at time $t$ and has an elapsed repair time of $x$.

$\mu_1(x)$: Nonnegative repair rate, depending on the elapsed repair time $x$, when the device is in the failure state.

We now state the problem precisely. The governing equations are $$ \begin{equation} \frac{dp_0}{dt} = -\lambda_0 p_0(t) + \int_0^1 \mu_1(x)p_1(x,t)dx + \int_0^1u^{*}(x,t)dx \end{equation} $$

$$ \begin{equation} \frac{\partial p_1(x,t)}{\partial t} + \frac{\partial p_1(x,t)}{\partial x} = -\mu_1(x)p_1(x,t) - u^{*}(x,t) \end{equation} $$

Given Initial Conditions

$$ \begin{eqnarray} p_1(x,0) &=& 0\\ p_0(0) &=& 1 \end{eqnarray} $$

Given Boundary Conditions:

$$ \begin{eqnarray} p_1(0,t) &=& \lambda_0 p_0(t)\\ p_1(1,t) &=& 0 \end{eqnarray} $$

The function $u^*$ is given by $$ \begin{eqnarray} u^{*}(x,t) &=& (0.3+0.1\sin(x))b(t) \end{eqnarray} $$

The function $b(t)$ represents the input. It is related to the cost constraint function $c(t)$ as given below.

$$ \begin{eqnarray} b(t)+\int_0^1\mu_1(x)f(x,t)dx -0.3p_0^{*}(t) = c(t)\\ f(x,t) = 0.1\cos(\pi t)\sin^2(1-x) \end{eqnarray} $$

Our objective is to find the input $b(t)$ such that the resulting distribution $p_0(t)$ is closest (as measured by the 2-norm) to the optimal distribution $p_0^*(t)$ given below. $$ \begin{equation} p_0^{*}(t) = 0.85+0.05\cos(2\pi t) \end{equation} $$

Methodology

We make a couple of substitutions, following the notation that $z_j^i$ refers to the value of $z$ evaluated at time point $j$ and spatial node $i$. The repair time is divided into $m$ subintervals, while the system running time is divided into $n$ subintervals. For the purposes of numerical implementation, we chose $m$ and $n$ to be $20$ and $400$ respectively.

$$ \begin{align}p_0(t_j) &= v_j \quad 0 \le j \le n\\ p_1(x_i,t_j) &= w_j^i \quad 0 \le i \le m, \, 0\le j \le n\\ \mu_1(x_i) &= \mu^i \quad 0 \le i \le m\\ \lambda &= \lambda_0 \\ \end{align}$$

Using the new notation, the boundary conditions and initial conditions may be written as follows.

Initial conditions: $$\begin{eqnarray} w_0^{i} &=& 0 \quad \forall 0\le i \le m\\ v_0 &=& 1 \end{eqnarray}$$

Boundary Conditions:

$$\begin{eqnarray} w_j^{20} &=& 0 \quad \forall 0\le j\le n\\ w_j^0 &=& \lambda v_j \end{eqnarray}$$

Condensing notation,

$$\begin{eqnarray} u^{*}(x_i,t_j) &=& g^ib_j \\ I_j^{*} = \int_0^1 u^{*}(x,t_j)dx &=& \alpha b_j \end{eqnarray}$$

where $g^i = 0.3+0.1\sin(x^i)$ and $\alpha = \int_0^1\left(0.3+0.1\sin x\right)dx = 0.3 + 0.1(1-\cos 1)$.
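As a quick cross-check (a Python sketch, not part of the Julia notebook below), the closed form for $\alpha$ can be verified against a simple numerical integration:

```python
import math

# alpha = integral of (0.3 + 0.1 sin x) over [0, 1], closed form 0.3 + 0.1(1 - cos 1)
alpha_exact = 0.3 + 0.1 * (1 - math.cos(1))

# Midpoint-rule cross-check on a fine grid
N = 100_000
dx = 1.0 / N
alpha_num = sum((0.3 + 0.1 * math.sin((k + 0.5) * dx)) * dx for k in range(N))

assert abs(alpha_exact - alpha_num) < 1e-9
print(alpha_exact)  # approximately 0.3460
```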

And,

$$\begin{eqnarray} b(t)+\int_0^1\mu_1(x)f(x,t)dx -0.3p_0^{*}(t) = c(t) \nonumber \\ b_j = c_j - f_j \end{eqnarray}$$ where $f_j = \int_0^1\mu_1(x)f(x,t_j)dx -0.3p_0^{*}(t_j)$.

Discretizing the equation for $p_0$ (forward Euler in time, trapezoidal rule for the integral term),

$$\begin{align} \frac{v_{j+1}-v_j}{\tau} &= -\lambda v_j + I_j + I_j^{*} \\ I_j &= h[\frac{\mu^0 w_j^0}{2} + \sum_{k=1}^{19} \mu^k w_j^k + \frac{\mu^{20} w_j^{20}}{2} ] \\ &= h[\frac{\mu^0 w_j^0}{2} + \sum_{k=1}^{19} \mu^k w_j^k ] \nonumber \\ I_j^{*} &= \alpha b_j \end{align}$$

Discretizing the transport equation for $p_1$ (forward difference in time, centered difference in space), $$ \begin{align} \frac{w_{j+1}^i-w_{j}^i}{\tau} + \frac{w_j^{i+1}-w_j^{i-1}}{2h} = -\mu^iw_j^i - g^ib_j \nonumber\\ w_{j+1}^i=w_{j}^i-\frac{\tau}{2h}(w_j^{i+1}-w_j^{i-1})-\tau\mu^i w_j^i - \tau g^i b_j \end{align} $$

Applying the Lax-Friedrichs scheme, which replaces $w_j^i$ by the average of its neighbours, $$w_j^i \approx \frac{w_j^{i-1} + w_j^{i+1}}{2},$$ we get,

$$ \begin{align*} w^i_{j+1} =& \left( \frac{w_j^{i+1} + w_j^{i-1}}{2} \right) -\frac{\tau}{2h}(w_j^{i+1}-w_j^{i-1}) \\ &-\tau\mu^i\left( \frac{w_j^{i+1} + w_j^{i-1}}{2} \right) \\ &- \tau g^i b_j\\ w^i_{j+1} =& \frac{1}{2}\left( 1-\mu^i\tau + \frac{\tau}{h} \right) w^{i-1}_j + \\ & \frac{1}{2}\left( 1-\mu^i\tau - \frac{\tau}{h} \right) w^{i+1}_j - \\ & \tau g^i b_j \end{align*} $$
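The coefficient collection above is easy to verify numerically. A Python sketch (all values below are made up for illustration) compares the direct substitution against the collected form at one interior node:

```python
# Arbitrary test values (made up for illustration)
tau, h = 0.025, 0.05
mu_i, g_i, b_j = 2.0, 0.33, 0.1
w_left, w_right = 0.7, 0.4   # w_j^{i-1}, w_j^{i+1}

# Direct substitution of the Lax average into the update
avg = 0.5 * (w_left + w_right)
direct = avg - tau / (2 * h) * (w_right - w_left) - tau * mu_i * avg - tau * g_i * b_j

# Coefficient-collected form from the derivation
collected = (0.5 * (1 - mu_i * tau + tau / h) * w_left
             + 0.5 * (1 - mu_i * tau - tau / h) * w_right
             - tau * g_i * b_j)

assert abs(direct - collected) < 1e-12
```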

Under an appropriately defined matrix $A$, we can re-write the above equation to read

$$ \begin{align} \vec{w}_{j+1} =& A\vec{w}_j - b_j\tau\vec{g} + \vec{e_1}v_{j+1} \\ =& (A)^{j+1} \vec{w}_0 - \left[ \sum_{k=0}^j b_k (A)^{j-k} \right]\vec{g}\tau \\ &+ \left[ \sum_{k=0}^j v_{k+1}(A)^{j-k} \right]\vec{e_1} \nonumber \end{align} $$
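The unrolled closed form can be checked against the step-by-step recursion on a small random system. The Python sketch below uses `numpy` and arbitrary dimensions and values, purely as a sanity check of the algebra:

```python
import numpy as np

# Check that unrolling w_{j+1} = A w_j - b_j tau g + e1 v_{j+1} gives
# A^{j+1} w_0 - sum_k b_k A^{j-k} g tau + sum_k v_{k+1} A^{j-k} e1.
rng = np.random.default_rng(1)
m, steps, tau = 5, 6, 0.025
A = rng.random((m, m)) * 0.1
g = rng.random(m)
e1 = np.zeros(m); e1[0] = 0.3
b = rng.random(steps + 1)
v = rng.random(steps + 2)
w0 = rng.random(m)

# Iterative form
w = w0.copy()
for j_ in range(steps + 1):
    w = A @ w - b[j_] * tau * g + e1 * v[j_ + 1]

# Closed form at j = steps
j = steps
closed = np.linalg.matrix_power(A, j + 1) @ w0
for k in range(j + 1):
    Ap = np.linalg.matrix_power(A, j - k)
    closed += -b[k] * tau * (Ap @ g) + v[k + 1] * (Ap @ e1)

assert np.allclose(w, closed)
```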

where $\vec{e_1}$ is an $(m+1)\times1$ vector given by

$$ \begin{align} \vec{e_1} = \left[ \lambda,0,\dots,0 \right]^T \end{align} $$

Matrix A has the form:

$$ \begin{align} \begin{pmatrix} w^0_{j+1} \\ w^1_{j+1} \\ \vdots \\ w^{m-1}_{j+1} \\ w^{m}_{j+1} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & 0 \\ a^1_- & 0 & a^1_+ & \cdots & 0 & 0 \\ 0 & a^2_- & 0 & a^2_+ & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & a^{m-1}_- & 0 & a^{m-1}_+ \\ 0 & 0 & \cdots & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} w^0_j \\ w^1_j \\ \vdots \\ w^{m-1}_j \\ w^{m}_j \end{pmatrix} - b_j\tau \begin{pmatrix} g^0 \\ g^1 \\ \vdots \\ g^{m-1} \\ g^{m} \end{pmatrix} + v_{j+1} \begin{pmatrix} \lambda \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix} \end{align} $$ with $a^i_- = \frac{1}{2}\left(1-\mu^i\tau+\frac{\tau}{h}\right)$ and $a^i_+ = \frac{1}{2}\left(1-\mu^i\tau-\frac{\tau}{h}\right)$.

From $$ \begin{align} v_{j+1} &= (1-\lambda \tau)v_j + \tau I_j + \tau I_j^{*} \\ &= \left(1-\lambda \tau + \frac{h\tau}{2}\right)v_j + h\tau\vec{\mu}^T\vec{w}_j + \alpha b_j\tau \end{align}$$

Substituting the expression for the time evolution of $\vec{w}$ into the above gives

$$\begin{align*} v_{j+1} =& \left(1-\lambda \tau+ \frac{h\tau}{2}\right)v_j \\ & + h\tau\vec{\mu}^T(A)^{j}\vec{w}_0 \\ & - \vec{\mu}^T\left[ \sum_{k=0}^{j-1} b_{k}(A)^{j-1-k} \right]\vec{g}\tau\\ & + \alpha b_j\tau \end{align*}$$

Let us define

$$\begin{align} \beta_{j,k} &= \vec{\mu}^T (A)^{j-1-k}\vec{g} \\ \omega_j &=\vec{\mu}^T(A)^{j}\vec{w}_0 \\ \gamma &= 1-\lambda \tau+ \frac{h\tau}{2} \end{align} $$

$$ \begin{align} \vec{\beta}_0 = \begin{pmatrix} \alpha \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \end{align} $$$$ \begin{align} v_{j+1} &= \gamma v_j + h\tau\omega_j - \tau\sum_{k=0}^{j-1}\beta_{j,k}b_k + \alpha b_j\tau \\ &= \gamma v_j + h\tau\omega_j - \tau \vec{\beta}_j^T \vec{b}\\ &= \gamma^{j+1}v_0 + h\tau\sum_{k=0}^{j} \gamma^{j-k}\omega_k - \tau\left(\sum_{k=0}^{j} \gamma^{j-k} \vec{\beta}_k\right)^T \vec{b} \end{align} $$

Under appropriately defined lower triangular matrices $G$ and $B$, $$ \begin{align} \vec{v} = - B\vec{b}+\vec{d} + h\tau G\vec{\omega} \\ \vec{d}=\begin{pmatrix} 1 \\ \gamma \\ \gamma^2 \\ \gamma^3 \\ \vdots \\ \gamma^{n} \end{pmatrix}v_0 \end{align} $$ where row $j+1$ of matrix $B$ is given by $$\begin{equation} \tau\left(\sum_{k=0}^{j} \gamma^{j-k} \vec{\beta}_k\right)^T \end{equation}$$
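Consecutive rows of $B$ share the same $\gamma$-weighted prefix, so $B$ can also be assembled row by row as $\text{row}_{j+1} = \gamma\,\text{row}_j + \vec{\beta}_j^T$, which costs $O(n^2)$ instead of $O(n^3)$. A Python sketch of this equivalence (toy sizes, random $\beta$ rows, and the overall factor of $\tau$ omitted; `numpy` assumed):

```python
import numpy as np

# Row j of B (0-based) is sum_{k<=j} gamma^{j-k} beta_k.
rng = np.random.default_rng(2)
n, gamma = 8, 0.99
beta = np.tril(rng.random((n, n)))  # beta_j supported on entries 0..j

# Direct double-sum construction
B_direct = np.zeros((n, n))
for j in range(n):
    for k in range(j + 1):
        B_direct[j] += gamma ** (j - k) * beta[k]

# Recursive construction: each row reuses the previous one
B_rec = np.zeros((n, n))
B_rec[0] = beta[0]
for j in range(1, n):
    B_rec[j] = gamma * B_rec[j - 1] + beta[j]

assert np.allclose(B_direct, B_rec)
```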

From the initial conditions, we have $\vec{w}_0 = \vec{0}$ and thus $\vec{\omega} = \vec{0}$. Consequently, $$ \begin{align} \vec{v} &= -B\vec{b} + \vec{d} \end{align} $$ Note that $\vec{d}$ is a known quantity. Hence our optimization problem reduces to finding $\vec{b}$ that best fits the equation $$ \begin{align} B\vec{b} = \vec{d}-\vec{v}^* \end{align} $$ The matrix $B$ is invertible in this case, and hence our estimate of $\vec{b}$ is given by $$ \begin{align} \hat{\vec{b}} &= B^{-1} \left( \vec{d} - \vec{v}^* \right)\\ \vec{v} &= -B\hat{\vec{b}} + \vec{d} \\ &= \vec{v}^* \end{align} $$

We approximate the integral using Gauss-Legendre quadrature with four nodes: $$ \begin{equation} \int_{-1}^{1}f(x,t)dx \approx \sum_{i=1}^{4} \rho_if(x_i,t) \end{equation} $$ Mapping the nodes to $[0,1]$, $$ \begin{align} \int_{0}^{1}f(x,t)dx &= \frac{1}{2}\sum_{i=1}^{4} \rho_if\left(\frac{x_i}{2} + \frac{1}{2},t\right)\\ \therefore c_j &= b_j + f_j \end{align} $$
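The four-point rule and the affine map to $[0,1]$ can be sketched as below (a Python check, not part of the notebook); since the rule is exact for polynomials up to degree $7$, integrating $x^7$ serves as a test:

```python
import math

# 4-point Gauss-Legendre nodes and weights on [-1, 1]
w1 = (18 + math.sqrt(30)) / 36
w2 = (18 - math.sqrt(30)) / 36
x1 = math.sqrt(3 / 7 - 2 / 7 * math.sqrt(6 / 5))   # ~0.33998
x2 = math.sqrt(3 / 7 + 2 / 7 * math.sqrt(6 / 5))   # ~0.86114
nodes = [-x2, -x1, x1, x2]
weights = [w2, w1, w1, w2]

def gl4_01(f):
    """Integral of f over [0, 1] via the affine map x = (xi + 1)/2."""
    return 0.5 * sum(w * f((x + 1) / 2) for w, x in zip(weights, nodes))

# Exact for polynomials up to degree 7: integral of x^7 over [0,1] is 1/8
assert abs(gl4_01(lambda x: x ** 7) - 1 / 8) < 1e-12
```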

Observation and Conclusions

The inversion of matrix $B$ does present some difficulties. It is easily seen from the definition of $B$ that it is lower triangular. In addition, $$ \begin{equation} (B)_{ii} = \alpha\tau \quad \forall\, 0\le i \le n \end{equation} $$ As $B$ is lower triangular, its determinant is the product of the main diagonal terms: $$ \begin{equation} |B| = (\alpha\tau)^{n+1} \end{equation} $$ We have $\alpha \approx 0.346$, $\tau = 0.025$ and $n=400$. Thus, $$ \begin{equation} |B| \approx 10^{-827} \end{equation} $$ which makes $B$ dangerously close to being singular in floating-point arithmetic. In fact, taking a direct inverse in Julia results in multiple entries going to $\texttt{NaN}$. We resolve this issue by computing the Moore-Penrose pseudoinverse $B^+$ of $B$; for an invertible matrix $B$, $B^{-1} = B^+$.
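The underflow is easy to reproduce. In the Python illustration below (the off-diagonal fill value is arbitrary; `numpy` assumed), the determinant of a lower triangular matrix with diagonal $\alpha\tau$ vanishes in double precision even though the triangular system itself solves without trouble:

```python
import numpy as np

# Diagonal of B is alpha*tau ~ 0.346 * 0.025 ~ 0.00865, so det(B) = (alpha*tau)^401
# underflows double precision (smallest positive double ~ 1e-308).
alpha, tau, n = 0.3 + 0.1 * (1 - np.cos(1)), 0.025, 401
assert (alpha * tau) ** n == 0.0          # underflows to exactly zero

B = np.tril(np.full((n, n), 1e-3))        # arbitrary strictly-lower fill
np.fill_diagonal(B, alpha * tau)
assert np.linalg.det(B) == 0.0            # determinant underflows...

rhs = np.ones(n)
x = np.linalg.solve(B, rhs)               # ...but the system still solves fine
assert np.allclose(B @ x, rhs)
```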


In [2]:
m = 21          # The number of 'space' nodes
n = 401         # The number of 'time' nodes
τ = 10/(n-1)    # Time step size
h = 1/(m-1)     # Space step size

λ = 0.3
α = 0.3+0.1*(1-cos(1))         # Integral of g(x) = 0.3 +0.1sin(x) = 0.3+0.1(1-cos1)
x = vec(linspace(0,1,m))
t = vec(linspace(0,10,n))

μ = 1./(1-x).^2; μ[1] = 0; μ[m] = 0;   # Repair rate μ(x) = 1/(1-x)²; endpoints zeroed (μ is singular at x = 1)
γ = (1-λ*τ+h*τ/2)

optv = 0.85 + 0.05cos(2π*t)   # The optimal distribution we are aiming at


## Calculate A
A = zeros(m,m)

for i=2:m-1
    A[i,i-1] = (1 - τ*μ[i] + τ/h)/2
    A[i,i+1] = (1 - τ*μ[i] - τ/h)/2
end



## Calculate g
g = (0.3+0.1*sin(x))

## Quadrature
function mu(x)
    return 1/(1-x)^2
end

function f(x)
    ## Use just the x part
    ## Remaining cos(πt) should be multiplied later
    return 0.1*sin(1-x)*sin(1-x)
end

function fmu(x)
    return f(x)mu(x)
end

weights = [(18+√30)/36,(18-√30)/36]
point1 = √(3/7 - 2/7*√(6/5))   # ≈ 0.33998
point2 = √(3/7 + 2/7*√(6/5))   # ≈ 0.86114

## Integrate wrt x
fx = weights[1]*(fmu((point1+1)/2) + fmu((-point1+1)/2)) +
     weights[2]*(fmu((point2+1)/2) + fmu((-point2+1)/2))
fx = fx/2
## Multiply by time vector
fxt = fx*cos(π*t)

## Calculate β
β = zeros(n,n)
β[1,1] = α
for j=2:n
    for k = 1:j-1
       a = (A^(j-1-k))'*μ
       β[j,k] = dot(a,g)
    end
       β[j,j] = α
end

B = zeros(n,n)
B[1,1] = α
for j=2:n
    for k=1:j
        B[j,:] += γ^(j-k)*β[k,:]
    end
end
B = τ*B
d = zeros(n,1)
for i=1:n
  d[i] = γ^(i-1)*0.9   # v₀ taken as p0*(0) = 0.9 here (the stated initial condition is p0(0) = 1)
end

estb = pinv(B)*vec(d-optv)
estv = -B*estb + d
errv = norm(estv-optv,2);
c = estb + fxt - 0.3optv


w = zeros(m,n)           # Initial condition: w(x,0) = 0
for i=2:n
    w[:,i] = A*w[:,i-1] - τ*estb[i-1]*g
    w[1,i] = λ*estv[i]   # Boundary condition w⁰ = λv (first row of A is zero)
    w[m,i] = 0           # Boundary condition at x = 1
end
M = ones(m)
p1hat = M'*w*h   # Approximate ∫₀¹ p1(x,t) dx at each time step

using Gadfly

In [7]:
plot(x=t, y=estv, Guide.XLabel("Time"), Guide.YLabel("Estimated p0"),Guide.XTicks(ticks=[0:1:10]))


Out[7]:
[Gadfly plot: Estimated p0 vs. Time]

In [4]:
plot(x=t, y=abs(estv-optv), Guide.XLabel("Time"), Guide.YLabel("Absolute Error"),Guide.XTicks(ticks=[0:1:10]))


Out[4]:
[Gadfly plot: Absolute Error |estv − optv| vs. Time]

In [5]:
plot(x=t,y=estb, Guide.XLabel("Time"), Guide.YLabel("b(t)"),Guide.XTicks(ticks=[0:1:10]))


Out[5]:
[Gadfly plot: b(t) vs. Time]

In [6]:
plot(x=t,y=p1hat, Guide.XLabel("Time"), Guide.YLabel("p1hat(t)"), Guide.YTicks(ticks=[0:0.0025:0.0225]), Guide.XTicks(ticks=[0:1:10]))


Out[6]:
[Gadfly plot: p1hat(t) vs. Time]
