In this example, we consider the design of an $M/M/N$ queuing system, and we study the sensitivity of the design with respect to various parameters, such as the maximum total delay and total service rate.
This example was described in the paper *Differentiating through Log-Log Convex Programs*.
We consider the optimization of a (Markovian) queuing system with $N$ queues. A queuing system is a collection of queues in which queued items wait to be served; the queued items might be threads in an operating system, or packets in an input or output buffer of a networking system. A natural goal is to minimize the service load of the system, given constraints on various properties of the queuing system, such as limits on the maximum delay or latency. In this example, we formulate this design problem as a log-log convex program (LLCP), and compute the sensitivity of the design variables with respect to the parameters. The queuing system under consideration here is known as an $M/M/N$ queue.
We assume that items arriving at the $i$th queue are generated by a Poisson process with rate $\lambda_i$, and that the service times for the $i$th queue follow an exponential distribution with parameter $\mu_i$, for $i=1, \ldots, N$. The \emph{service load} of the queuing system is a function $\ell : \mathbf{R}^{N}_{++} \times \mathbf{R}^{N}_{++} \to \mathbf{R}^{N}_{++}$ of the arrival rate vector $\lambda$ and the service rate vector $\mu$, with components
$$ \ell_i(\lambda, \mu) = \frac{\mu_i}{\lambda_i}, \quad i=1, \ldots, N. $$
(This is the reciprocal of the traffic load, which is usually denoted by $\rho$.) Similarly, the queue occupancy, the average delay, and the total delay of the system are (respectively) functions $q$, $w$, and $d$ of $\lambda$ and $\mu$, with components
$$ q_i(\lambda, \mu) = \frac{\ell_i(\lambda, \mu)^{-2}}{1 - \ell_i(\lambda, \mu)^{-1}}, \quad w_i(\lambda, \mu) = \frac{q_i(\lambda, \mu)}{\lambda_i} + \frac{1}{\mu_i}, \quad d_i(\lambda, \mu) = \frac{1}{\mu_i - \lambda_i}, \quad i=1, \ldots, N. $$
These functions have domain $\{(\lambda, \mu) \in \mathbf{R}^{N}_{++} \times \mathbf{R}^{N}_{++} \mid \lambda < \mu \}$, where the inequality is meant elementwise. The queuing system has limits on the queue occupancy, average queuing delay, and total delay, which must satisfy
$$ q(\lambda, \mu) \leq q_{\max}, \quad w(\lambda, \mu) \leq w_{\max}, \quad d(\lambda, \mu) \leq d_{\max}, $$
where $q_{\max}$, $w_{\max}$, and $d_{\max} \in \mathbf{R}^{N}_{++}$ are parameters and the inequalities are meant elementwise. Additionally, the arrival rate vector $\lambda$ must be at least $\lambda_{\mathrm{min}} \in \mathbf{R}^{N}_{++}$, and the sum of the service rates must be no greater than $\mu_{\max} \in \mathbf{R}_{++}$.
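As a quick numerical sanity check, the following sketch evaluates $\ell$, $q$, $w$, and $d$ at a point in the domain; the rates used here are arbitrary illustrative values, not part of the design problem.

import numpy as np

# Arbitrary illustrative rates, with lam < mu elementwise (the domain).
lam = np.array([0.5, 0.8])
mu = np.array([1.5, 1.5])
assert np.all(lam < mu)

ell = mu / lam                       # service load (reciprocal traffic load)
q = ell**(-2) / (1 - ell**(-1))      # queue occupancy
w = q / lam + 1 / mu                 # average delay
d = 1 / (mu - lam)                   # total delay
print(ell, q, w, d)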
Our design problem is to choose the arrival rates and service rates to minimize a weighted sum of the service loads, $\gamma^T \ell(\lambda, \mu)$, where $\gamma \in \mathbf{R}^{N}_{++}$ is the weight vector, while satisfying the constraints. The problem is
$$ \begin{array}{ll} \mbox{minimize} & \gamma^T \ell(\lambda, \mu) \\ \mbox{subject to} & q(\lambda, \mu) \leq q_{\max} \\ & w(\lambda, \mu) \leq w_{\max} \\ & d(\lambda, \mu) \leq d_{\max} \\ & \lambda \geq \lambda_{\mathrm{min}}, \quad \sum_{i=1}^{N} \mu_i \leq \mu_{\max}. \end{array} $$
Here, $\lambda, \mu \in \mathbf{R}^{N}_{++}$ are the variables and $\gamma, q_{\max}, w_{\max}, d_{\max}, \lambda_{\mathrm{min}} \in \mathbf{R}^{N}_{++}$ and $\mu_{\max} \in \mathbf{R}_{++}$ are the parameters. This problem is an LLCP. The objective function is a posynomial, as is the constraint function $w$. The functions $d$ and $q$ are not posynomials, but they are log-log convex. Log-log convexity of $d$ follows from the composition rule: the function $(x, y) \mapsto y - x$ is log-log concave (for $0 < x < y$), the reciprocal $z \mapsto 1/z$ is log-log affine and decreasing, and a decreasing log-log affine function of a log-log concave function is log-log convex. By a similar argument, $q$ is also log-log convex.
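These curvature claims can be checked programmatically. The sketch below is a minimal illustration, using the DGP atoms cp.diff_pos and cp.one_minus_pos (which also appear in the code that follows) and CVXPY's curvature queries:

import cvxpy as cp

x = cp.Variable(pos=True)  # stands in for lambda_i
y = cp.Variable(pos=True)  # stands in for mu_i

# Total delay 1/(y - x): diff_pos(y, x) is log-log concave, and its
# reciprocal is log-log convex.
d = 1 / cp.diff_pos(y, x)
print(d.is_log_log_convex())  # True

# Queue occupancy, with ell = y/x playing the role of the service load.
ell = y / x
q = ell**(-2) / cp.one_minus_pos(ell**(-1))
print(q.is_log_log_convex())  # True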
In [1]:
import cvxpy as cp
import numpy as np
# Variables for a system with N = 2 queues.
mu = cp.Variable(pos=True, shape=(2,), name='mu')
lam = cp.Variable(pos=True, shape=(2,), name='lambda')
ell = cp.Variable(pos=True, shape=(2,), name='ell')
# Problem parameters, with illustrative values.
w_max = cp.Parameter(pos=True, shape=(2,), value=np.array([2.5, 3.0]), name='w_max')
d_max = cp.Parameter(pos=True, shape=(2,), value=np.array([2., 2.]), name='d_max')
q_max = cp.Parameter(pos=True, shape=(2,), value=np.array([4., 5.0]), name='q_max')
lam_min = cp.Parameter(pos=True, shape=(2,), value=np.array([0.5, 0.8]), name='lambda_min')
mu_max = cp.Parameter(pos=True, value=3.0, name='mu_max')
gamma = cp.Parameter(pos=True, shape=(2,), value=np.array([1.0, 2.0]), name='gamma')
# Queue occupancy, average delay, and total delay, written in terms of
# the service load ell = mu / lam.
q = ell**(-2) / cp.one_minus_pos(ell**(-1))
w = q / lam + 1 / mu
d = 1 / cp.diff_pos(mu, lam)
constraints = [
    w <= w_max,
    d <= d_max,
    q <= q_max,
    lam >= lam_min,
    cp.sum(mu) <= mu_max,
    ell == mu / lam,  # a valid LLCP equality: both sides are monomials
]
objective_fn = gamma.T @ ell
problem = cp.Problem(cp.Minimize(objective_fn), constraints)
problem.solve(requires_grad=True, gp=True, eps=1e-14, max_iters=10000, mode='dense')
Out[1]:
The solution is printed below.
In [2]:
print('mu ', mu.value)
print('lam ', lam.value)
print('ell ', ell.value)
Next, we compute the gradient of each component of each variable with respect to the parameters. Setting a variable's gradient attribute to a standard basis vector selects one component, and problem.backward() then fills in the gradient attribute of every parameter.
In [3]:
problem.solve(requires_grad=True, gp=True, eps=1e-14, max_iters=10000, mode='dense')

for var in [lam, mu, ell]:
    print('Variable ', var.name())

    print('Gradient with respect to first component')
    # Seed the backward pass: differentiate the first component of var.
    var.gradient = np.array([1., 0.])
    problem.backward()
    for param in problem.parameters():
        if np.prod(param.shape) == 2:
            print('{0}: {1:.3g}, {2:.3g}'.format(param.name(), param.gradient[0], param.gradient[1]))
        else:
            print('{0}: {1:.3g}'.format(param.name(), param.gradient))

    print('Gradient with respect to second component')
    var.gradient = np.array([0., 1.])
    problem.backward()
    for param in problem.parameters():
        if np.prod(param.shape) == 2:
            print('{0}: {1:.3g}, {2:.3g}'.format(param.name(), param.gradient[0], param.gradient[1]))
        else:
            print('{0}: {1:.3g}'.format(param.name(), param.gradient))

    # Reset this variable's gradient before moving on to the next one.
    var.gradient = np.zeros(2)
    print('')
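As an optional sanity check on these numbers, one can compare a single gradient entry against a finite-difference estimate. The sketch below (with an arbitrarily chosen step size) checks the derivative of the first component of $\lambda$ with respect to $\mu_{\max}$:

# Finite-difference check of d(lam[0]) / d(mu_max).
problem.solve(requires_grad=True, gp=True, eps=1e-14, max_iters=10000, mode='dense')
lam0 = lam.value[0]
lam.gradient = np.array([1., 0.])
problem.backward()
analytic = float(mu_max.gradient)
lam.gradient = np.zeros(2)  # reset for later cells

h = 1e-6  # arbitrary small step
mu_max.value += h
problem.solve(cp.SCS, gp=True, eps=1e-14, max_iters=10000)
numeric = (lam.value[0] - lam0) / h
mu_max.value -= h  # restore the parameter

print('backward: {0:.3g}, finite difference: {1:.3g}'.format(analytic, numeric))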
Finally, we use problem.derivative() to predict how the solution changes when every parameter is increased by one percent, and compare the prediction with the actual change obtained by perturbing the parameters and re-solving.
In [4]:
problem.solve(requires_grad=True, gp=True, eps=1e-14, max_iters=10000, mode='dense')
mu_value = mu.value
lam_value = lam.value

# Increase every parameter by one percent, and predict the first-order
# change in the solution with problem.derivative().
delta = 0.01
for param in problem.parameters():
    param.delta = param.value * delta
problem.derivative()

lam_pred = (lam.delta / lam_value) * 100
mu_pred = (mu.delta / mu_value) * 100
print('lam predicted (percent change): ', lam_pred)
print('mu predicted (percent change): ', mu_pred)

# Apply the perturbation, re-solve, and measure the actual change.
for param in problem.parameters():
    param._old_value = param.value
    param.value += param.delta
problem.solve(cp.SCS, gp=True, eps=1e-14, max_iters=10000)

lam_actual = ((lam.value - lam_value) / lam_value) * 100
mu_actual = ((mu.value - mu_value) / mu_value) * 100
print('lam actual (percent change): ', lam_actual)
print('mu actual (percent change): ', mu_actual)
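Because the loop above perturbed the parameters in place, a short follow-up (using the `_old_value` attributes saved above) restores them before any further experiments:

# Undo the one-percent perturbation applied above.
for param in problem.parameters():
    param.value = param._old_value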