Seminar 18

Linear programming. Primal interior point method

Reminder

  • Linear programming problem statement
  • Examples of application
  • Simplex method and its drawbacks

Review of existing packages to solve linear programming problems

  • Solving linear programming problems is a mature technology, so usually you only need to call an appropriate function from a suitable package
  • A review is available here
  • A comparison is available here

A bit of history

  • In 1979 L. Khachiyan proposed the ellipsoid method and proved that it solves any linear programming problem in polynomial time. An article about this result appeared on the front page of The New York Times on November 7, 1979.
  • However, the complexity of the ellipsoid method is $O(n^6 L^2)$, and in practice it was slower than the simplex method
  • In 1984 N. Karmarkar proposed another polynomial-time method for solving linear programming problems, which was significantly faster than the ellipsoid method, namely $O(n^{3.5} L^2)$.
  • One of the recent papers on this topic proposes a method that solves a linear programming problem in $\tilde{O}\left(\sqrt{\mathrm{rank}(A)}L\right)$, where $f(n)\in\tilde{O}(g(n)) \leftrightarrow \exists k:f(n) \in O(g(n) \log^kg(n))$.
  • Both the ellipsoid method and Karmarkar's method belong to the family of primal-dual, or interior point, methods that we will cover further...

Alternatives to simplex method

  • The simplex method searches for the solution among the vertices of the feasible polytope
  • An alternative approach is to move from the interior of the feasible set toward the vertex that solves the problem
  • Therefore, such methods are called interior point methods

Trade-off between number of iterations and cost of one iteration

  • One of the main watersheds in numerical optimization (cf. gradient descent vs. the Newton method)
  • Linear programming can be solved in polynomial time by performing a small number of costly iterations
  • Later we will also study methods that perform many cheap iterations

Duality in linear programming

Theorem.

  • If the primal (dual) problem has a finite solution, then the dual (primal) problem has a finite solution too, and their optimal objective values are equal
  • If the primal (dual) problem is unbounded, then the feasible set of the dual (primal) problem is empty.
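As a quick numerical check of the theorem, here is a minimal sketch on a made-up toy instance (assuming scipy.optimize is available; not part of the seminar materials):

import numpy as np
import scipy.optimize as scopt

# Toy primal problem: min c^T x  s.t.  Ax = b, x >= 0
c = np.array([1., 2., 0.])
A = np.array([[1., 1., 1.]])
b = np.array([1.])
primal = scopt.linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# Its dual: max b^T lam  s.t.  A^T lam <= c, solved as min -b^T lam
dual = scopt.linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)])
print(primal.fun, -dual.fun)  # both equal 0: the optimal objectives coincide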

Interior point method idea

Original problem \begin{align*} &\min_x c^{\top}x \\ \text{s.t. } & Ax = b\\ & x_i \geq 0, \; i = 1,\dots, n \end{align*}

Intermediate problem \begin{align*} &\min_x c^{\top}x {\color{red}{- \mu \sum\limits_{i=1}^n \ln x_i}} \\ \text{s.t. } & Ax = b\\ \end{align*} for some $\mu > 0$

Barrier function

Definition. A function $B(x, \mu) = -\mu\ln x$ is called a barrier function for the constraint $x \geq 0$.

We will learn more details in the seminar on general convex nonlinear optimization problems...
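As a small illustration of the barrier's behavior (a sketch, not from the seminar code): the barrier grows without bound near $x = 0$ and fades in the interior as $\mu$ decreases.

import numpy as np

x = np.array([0.01, 0.1, 0.5, 1.0, 2.0])
for mu in [1.0, 0.1, 0.01]:
    # -mu * ln(x): large near the boundary, negligible inside for small mu
    print("mu = {}: B = {}".format(mu, np.round(-mu * np.log(x), 3)))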

What happened?

  • Converted the linear problem into a nonlinear one
  • Moved the inequality constraints into the objective function
  • Introduced an additional parameter $\mu$

Why is it good?

Moving to a problem with equality constraints only simplifies the optimality conditions; in particular, it allows us to

  • exclude the complementary slackness conditions from the optimality conditions
  • exclude the non-negativity constraints on the Lagrange multipliers for the inequality constraints

Optimality conditions

  • Lagrangian: $L = c^{\top}x - \mu\sum\limits_{i=1}^n \ln x_i + \lambda^{\top}(Ax - b)$
  • Stationarity condition for $L$:
$$ c - \mu X^{-1}e + A^{\top}\lambda = 0, $$

where $X = \mathrm{diag}(x_1, \dots, x_n)$ and $e = [1, \dots, 1]^{\top}$

  • Equality constraints: $Ax = b$

Let $s = \mu X^{-1}e$; then the optimality conditions can be rewritten as follows:

  • $A^{\top}\lambda + c - s = 0 $
  • $Xs = {\color{red}{\mu e}}$
  • $Ax = b$

Also $x > 0 \Rightarrow s > 0$

Comparison with the optimality conditions for the original problem

  • Lagrangian: $L = c^{\top}x + \lambda^{\top}(Ax - b) - s^{\top}x$
  • First order optimality condition: $c + A^{\top}\lambda - s = 0$
  • Primal feasibility: $Ax = b, \; x \geq 0$
  • Dual feasibility: $s \geq 0$
  • Complementary slackness: $s_ix_i = 0$

After re-arrangement

  • $A^{\top}\lambda + c - s = 0$
  • $Ax = b$
  • $Xs = {\color{red}{0}}$
  • $x \geq 0, \; s \geq 0$

Summary

  • Introducing the barrier function with coefficient $\mu$ is equivalent to relaxing the complementary slackness conditions of the original problem with parameter $\mu$.
  • As $\mu \to 0$, the solutions coincide!
  • Idea: solve the problems with the barrier function iteratively, decreasing $\mu$. The sequence of solutions converges to the optimal vertex along a trajectory of points lying in the interior of the feasible set
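A one-dimensional illustration: for the problem $\min_{x \geq 0} x$, the barrier problem is $\min_{x > 0} x - \mu\ln x$; stationarity $1 - \mu/x = 0$ gives $x(\mu) = \mu$, which tends to the true solution $x^* = 0$ as $\mu \to 0$.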

General scheme

def GeneralInteriorPointLP(c, A, b, x0, mu0, rho, tol):
    # rho in (0, 1) is the decrease factor for the barrier parameter mu
    x = x0
    mu = mu0
    while True:
        # step towards the solution of the barrier problem for the current mu
        x, dual_var = StepInsideFeasibleSet(c, A, b, x, mu)
        mu *= rho
        if converge(x, dual_var, c, A, b, tol) and mu < tol:
            break
    return x

How to solve the problem with the barrier function?

  • Primal method - on the next slide
  • Primal-dual method - a few weeks later

Primal method

Recall the problem with the barrier term: \begin{align*} &\min_x c^{\top}x - \mu \sum\limits_{i=1}^n \ln x_i \\ \text{s.t. } & Ax = b\\ \end{align*}

Idea: approximate the objective function up to second order, as in the Newton method.

Implementation

In the $(k+1)$-th iteration one has to solve the following problem: \begin{align*} &\min_p \frac{1}{2}p^{\top}Hp + g^{\top}p\\ \text{s.t. } & A(x_k + p) = b,\\ \end{align*} where $H = \mu X^{-2}$ is the Hessian and $g = c - \mu X^{-1}e$ is the gradient of the barrier objective at $x_k$.
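For concreteness, here is a minimal sketch of assembling this Newton model at a strictly positive point (the helper newton_model is hypothetical; it just implements the formulas above):

import numpy as np

def newton_model(c, x, mu):
    # gradient g = c - mu * X^{-1} e and Hessian H = mu * X^{-2}
    # of the barrier objective c^T x - mu * sum(log(x)) at x > 0
    g = c - mu / x
    H = mu * np.diag(1.0 / x**2)
    return H, g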

KKT again

KKT conditions for this problem

  • $Hp + g + A^{\top}\lambda = 0$
  • $Ap = 0$

or

$$\begin{bmatrix} H & A^{\top}\\ A & 0 \end{bmatrix} \begin{bmatrix} p\\ \lambda \end{bmatrix} = \begin{bmatrix} -g \\ 0 \end{bmatrix}$$

From the first block row ($Hp + A^{\top}\lambda = -g$):

$$ \mu X^{-2}p + A^{\top}\lambda = \mu X^{-1}e - c $$

Multiplying by $AX^2$ and using $Ap = 0$:

$$ \mu Ap + AX^{2}A^{\top}\lambda = \mu AXe - AX^{2}c $$

$$ AX^{2}A^{\top}\lambda = \mu AXe - AX^{2}c $$

Since $X \in \mathbb{S}^n_{++}$ and $A$ has full row rank, this equation has a unique solution $\lambda^*$.

Find direction $p$

Multiplying the first block row by $X^2$ and substituting $\lambda^*$:

$$ \mu p + X^{2}A^{\top}\lambda^* = \mu Xe - X^{2}c = \mu x - X^{2}c $$

$$ p = x - \frac{1}{\mu}X^{2}(c + A^{\top}\lambda^*) $$
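A quick sanity check on random data (a sketch, not part of the seminar code): the closed-form $\lambda^*$ and $p$ coincide with the solution of the full KKT system.

import numpy as np

n, m, mu = 6, 2, 0.5
rng = np.random.default_rng(0)
x = rng.random(n) + 0.1            # strictly positive point
A = rng.standard_normal((m, n))    # full row rank with probability 1
c = rng.standard_normal(n)

H = mu * np.diag(1.0 / x**2)       # Hessian of the barrier objective
g = c - mu / x                     # its gradient
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))

# closed form: A X^2 A^T lam = mu A X e - A X^2 c, then p = x - X^2 (c + A^T lam) / mu
lam = np.linalg.solve(A @ np.diag(x**2) @ A.T, mu * A @ x - A @ (x**2 * c))
p = x - (x**2 / mu) * (c + A.T @ lam)
print(np.allclose(sol[:n], p))     # True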

How to solve KKT system

  1. Direct method: compose the $(n + m) \times (n + m)$ matrix and solve the system explicitly - $\frac{1}{3}(n + m)^3$ flops
  2. Sequential variable elimination:

    • $Hp + A^{\top}\lambda = -g$, hence $p = -H^{-1}(g + A^{\top}\lambda)$
    • $Ap = -AH^{-1}(g + A^{\top}\lambda) = -AH^{-1}A^{\top}\lambda - AH^{-1}g = 0$

      where the matrix $-AH^{-1}A^{\top}$ is called the Schur complement of the matrix $H$.

  3. Procedure for sequential variable elimination ($f_H$ denotes the cost of factorizing $H$, $s_H$ the cost of one solve with this factorization):
    • Compute $H^{-1}g$ and $H^{-1}A^{\top}$ - $f_H + (m+1)s_H$ flops
    • Compute the Schur complement $-AH^{-1}A^{\top}$ - $\mathcal{O}(m^2n)$ flops
    • Find $\lambda$ - $\frac{1}{3}m^3$ flops
    • Find $p$ - $s_H + \mathcal{O}(mn)$ flops
  4. Summary: $f_H + ms_H + \frac{m^3}{3} + \mathcal{O}(m^2n)$ flops, which is much cheaper than the direct method (see the worked example below)
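A worked example of the savings: take $n = 1000$, $m = 10$, and diagonal $H$ (so $f_H = s_H = n$). The direct method costs about $\frac{1}{3}(1010)^3 \approx 3.4\cdot 10^8$ flops, while elimination costs about $12n + \frac{10^3}{3} + 10^2\cdot 10^3 \approx 1.1\cdot 10^5$ flops, a saving of roughly three orders of magnitude.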

Use structure of $H$

  • In our case $H = \mu X^{-2}$ is a diagonal matrix!
  • $f_H$ - $n$ flops
  • $s_H$ - $n$ flops
  • Total complexity: $\frac{m^3}{3} + \mathcal{O}(m^2n)$ flops, where typically $m \ll n$
  • Line search: standard backtracking rules apply
  • The constraint $A(x_k + \alpha p) = b$ is satisfied automatically for any step size $\alpha$, since $Ap = 0$

Pseudocode of the primal barrier method

def PrimalBarrierLP(c, A, b, x0, mu0, rho, tol):
    x = x0
    mu = mu0
    e = np.ones(x.shape[0])
    while True:
        p, lam = ComputeNewtonDirection(c, x, A, mu)
        alpha = line_search(p, mu, c, x)
        x = x + alpha * p
        mu = rho * mu
        # stop when mu is small and the relaxed complementarity X s = mu * e
        # approximately holds, with s = c + A^T lam
        if mu < tol and np.linalg.norm(x * (c + A.T.dot(lam)) - mu * e) < tol:
            break
    return x

Comparison of the simplex method and the primal barrier method

  • Klee-Minty's example from the last seminar \begin{align*} & \max_{x \in \mathbb{R}^n} 2^{n-1}x_1 + 2^{n-2}x_2 + \dots + 2x_{n-1} + x_n\\ \text{s.t. } & x_1 \leq 5\\ & 4x_1 + x_2 \leq 25\\ & 8x_1 + 4x_2 + x_3 \leq 125\\ & \ldots\\ & 2^n x_1 + 2^{n-1}x_2 + 2^{n-2}x_3 + \ldots + x_n \leq 5^n\\ & x \geq 0 \end{align*}
  • What is the complexity of the simplex method on this problem?
  • Reduction to standard form
$$ \begin{align*} & \min_{x, \; z} -c^{\top}x \\ \text{s.t. } & Ax + z = b\\ & x \geq 0, \quad z \geq 0 \end{align*} $$
  • Compare the running times of the simplex method and the primal barrier method

In [26]:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize as scopt
import scipy.linalg as sclin

USE_COLAB = False
if not USE_COLAB:
    plt.rc("text", usetex=True)

In [27]:
def NewtonLinConstraintsFeasible(f, gradf, hessf, A, x0, line_search, linsys_solver, args=(), 
                                 disp=False, disp_conv=False, callback=None, tol=1e-6, max_iter=100, **kwargs):
    # Newton method for minimizing f subject to Ax = b, starting from a feasible x0
    x = x0.copy()
    n = x0.shape[0]
    iteration = 0
    while True:
        gradient, hess = gradf(x, *args), hessf(x, *args)
        h = linsys_solver(hess, A, gradient)
        descent_dir = h[:n]
        # squared Newton decrement serves as the stopping criterion
        decrement = descent_dir.dot(hess.dot(descent_dir))
        if decrement < tol:
            if disp_conv:
                print("Tolerance achieved! Decrement = {}".format(decrement))
            break
        alpha = line_search(x, descent_dir, f, gradf, args, **kwargs)
        if alpha < 1e-16:
            if disp_conv:
                print("Step is too small!")
        x = x + alpha * descent_dir
        if callback is not None:
            callback((descent_dir, x))
        iteration += 1
        if disp:
            print("Current function val = {}".format(f(x, *args)))
            print("Newton decrement = {}".format(decrement))
        if iteration >= max_iter:
            if disp_conv:
                print("Max iterations exceeded!")
            break
    res = {"x": x, "num_iter": iteration, "tol": decrement}
    return res

In [28]:
def simple_solver(hess, A, gradient):
    # Direct method: compose the full (n + m) x (n + m) KKT matrix
    # [[H, A^T], [A, 0]] and solve the system explicitly
    n = hess.shape[0]
    m = A.shape[0]
    kkt_matrix = np.zeros((n + m, n + m))
    kkt_matrix[:n, :n] = hess
    kkt_matrix[n:n + m, :n] = A
    kkt_matrix[:n, n:n + m] = A.T
    rhs = np.zeros(n + m)
    rhs[:n] = -gradient
    h = np.linalg.solve(kkt_matrix, rhs)
    return h

def elimination_solver(hess, A, gradient):
    # Exploit the diagonal Hessian: eliminate the primal direction and solve
    # the Schur complement system A H^{-1} A^T w = -A H^{-1} g via Cholesky
    inv_hess_diag = np.divide(1.0, np.diag(hess))
    inv_hess_grad = np.multiply(-inv_hess_diag, gradient)  # -H^{-1} g
    rhs = A.dot(inv_hess_grad)
    L_inv_hess = np.sqrt(inv_hess_diag)
    AL_inv_hess = A * L_inv_hess
    S = AL_inv_hess.dot(AL_inv_hess.T)                     # S = A H^{-1} A^T
    cho_S = sclin.cho_factor(S)
    w = sclin.cho_solve(cho_S, rhs)                        # dual variables
    # back-substitution: p = -H^{-1}(g + A^T w)
    v = np.subtract(inv_hess_grad, np.multiply(inv_hess_diag, A.T.dot(w)))
    return v

In [29]:
def backtracking(x, descent_dir, f, grad_f, args, **kwargs):
    # Backtracking line search with the Armijo sufficient decrease condition;
    # the NaN check keeps iterates inside the domain of the log barrier
    beta1 = kwargs["beta1"]
    rho = kwargs["rho"]
    alpha = 1
    while f(x + alpha * descent_dir, *args) >= f(x, *args) + beta1 * alpha * grad_f(x, *args).dot(descent_dir) \
            or np.isnan(f(x + alpha * descent_dir, *args)):
        alpha *= rho
        if alpha < 1e-16:
            break
    return alpha

In [30]:
def generate_KleeMinty_test_problem(n):
    c = np.array([2**i for i in range(n)])
    c = -c[::-1]
    bounds = [(0, None) for i in range(n)]
    b = np.array([5**(i+1) for i in range(n)])
    a = np.array([1] + [2**(i+1) for i in range(1, n)])
    A = np.zeros((n, n))
    for i in range(n):
        A[i:, i] = a[:n-i]
    return c, A, b, bounds

In [31]:
n = 7
c, A, b, _ = generate_KleeMinty_test_problem(n)
eps = 1e-10  # keeps log and division finite near the boundary
def f(x, c, mu):
    n = c.shape[0]
    return c.dot(x[:n]) - mu * np.sum(np.log(eps + x))

def gradf(x, c, mu):
    grad = np.zeros(len(x))
    n = c.shape[0]
    grad[:n] = c - mu / (eps + x[:n])
    grad[n:] = -mu / (eps + x[n:])
    return grad

def hessf(x, c, mu):
    return mu * np.diag(1. / (eps + x)**2)

# Standard-form equality constraints with slack variables z: A_lin @ [x; z] = b
A_lin = np.zeros((n, n + A.shape[0]))
A_lin[:n, :n] = A
A_lin[:n, n:n + A.shape[0]] = np.eye(A.shape[0])
mu = 0.1

Check gradient


In [32]:
scopt.check_grad(f, gradf, np.random.rand(n), c, mu)


Out[32]:
8.207673676669207e-07

Select an initial guess from the feasible set and from the domain of the objective


In [33]:
x0 = np.zeros(2*n)
x0[:n] = np.random.rand(n)
x0[n:2*n] = b - A.dot(x0[:n])
print(np.linalg.norm(A_lin.dot(x0) - b))
print(np.sum(x0 <= 1e-6))


4.689582056016661e-13
0

Check convergence


In [34]:
hist_conv = []
def cl(x):
    hist_conv.append(x)
res = NewtonLinConstraintsFeasible(f, gradf, hessf, A_lin, x0, backtracking, elimination_solver, (c, mu), callback=cl,
                           max_iter=2000, beta1=0.1, rho=0.7)
print("Decrement value = {}".format(res["tol"]))
fstar = f(res["x"], c, mu)
hist_conv_f = [np.abs(fstar - f(descdir_x[1], c, mu)) for descdir_x in hist_conv]
plt.figure(figsize=(12, 5))
plt.subplot(1,2,1)
plt.semilogy(hist_conv_f)
plt.xlabel("Number of iteration, $k$", fontsize=18)
plt.ylabel("$f^* - f_k$", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)

hist_conv_x = [np.linalg.norm(res["x"] - x[1]) for x in hist_conv]
plt.subplot(1,2,2)
plt.semilogy(hist_conv_x)
plt.xlabel("Number of iteration, $k$", fontsize=18)
plt.ylabel("$\| x_k - x^*\|_2$", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
plt.tight_layout()


Decrement value = 4.862770418574955e-09

Implementation of primal barrier method


In [35]:
def BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, linesearch, linsys_solver, 
                           tol=1e-8, max_iter=500, disp_conv=False, **kwargs):
    x = x0.copy()
    n = x0.shape[0]
    mu = mu0
    while True:
        res = NewtonLinConstraintsFeasible(f, gradf, hessf, A, x, linesearch, linsys_solver, (c, mu), 
                                           disp_conv=disp_conv, max_iter=max_iter, beta1=0.01, rho=0.5)
        x = res["x"].copy()
        if n * mu < tol:  # n * mu upper-bounds the suboptimality on the central path
            break
        mu *= rho_mu
    return x

In [36]:
mu0 = 5
rho_mu = 0.5
x = BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, elimination_solver, max_iter=100)
%timeit BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, elimination_solver, max_iter=100)
%timeit BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, simple_solver, max_iter=100)
print(x[:n])


2.23 s ± 90.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
22.1 ms ± 2.02 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[9.21322571e-09 1.85264514e-08 3.71311496e-08 7.00335912e-08
 9.71878844e-08 8.95342930e-08 7.81315161e+04]

Running time comparison


In [37]:
mu0 = 2
rho_mu = 0.5
n_list = range(3, 10)
n_iters = np.zeros(len(n_list))
times_simplex = np.zeros(len(n_list))
times_barrier_simple = np.zeros(len(n_list))
for i, n in enumerate(n_list):
    print("Current dimension = {}".format(n))
    c, A, b, bounds = generate_KleeMinty_test_problem(n)
    res = scopt.linprog(c, A, b, bounds=bounds, options={"maxiter": 2**max(n_list) + 1})
    time = %timeit -o -q scopt.linprog(c, A, b, bounds=bounds, options={"maxiter": 2**max(n_list) + 1})
    times_simplex[i] = time.best
    A_lin = np.zeros((n, n + A.shape[0]))
    A_lin[:n, :n] = A
    A_lin[:n, n:n + A.shape[0]] = np.eye(A.shape[0])
    x0 = np.zeros(2*n)
    x0[:n] = np.random.rand(n)
    x0[n:2*n] = b - A.dot(x0[:n])
    time = %timeit -o -q BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, simple_solver)
    times_barrier_simple[i] = time.best


Current dimension = 3
Current dimension = 4
Current dimension = 5
Current dimension = 6
Current dimension = 7
Current dimension = 8
Current dimension = 9

In [38]:
plt.figure(figsize=(8, 5))
plt.semilogy(n_list, times_simplex, label="Simplex")
plt.semilogy(n_list, times_barrier_simple, label="Primal barrier")
plt.legend(fontsize=18)
plt.xlabel("Dimension, $n$", fontsize=18)
plt.ylabel("Computation time, sec.", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)


Remarks

  • It has been shown that the primal barrier method is equivalent to Karmarkar's method
  • It uses information about the primal problem only
  • The initial guess has to lie inside the feasible set, which is a separate problem

Comparison of methods for solving the linear system in the Newton step

  • Compose the matrix explicitly
  • Use the block structure and sequential variable elimination

In [39]:
def generate_linprog_problem(n, m=10):
    x = np.random.rand(n)
    A = np.random.randn(m, n)
    b = A.dot(x)
    c = np.random.randn(n)
    c[c < 0] = 1
    return c, A, b, x

In [40]:
n = 20
c, A, b, x0 = generate_linprog_problem(n)
res = scopt.linprog(c, A_eq=A, b_eq=b, bounds=[(0, None) for i in range(n)])
m = A.shape[0]
print(np.linalg.norm(A.dot(x0) - b))
mu0 = 1
rho_mu = 0.5


0.0

In [41]:
x_bar_simple = BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking, 
                                      simple_solver, max_iter=100, tol=1e-8)
x_bar_elim = BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking, 
                                    elimination_solver, tol=1e-8)
print(np.linalg.norm(res["x"] - x_bar_simple))
print(np.linalg.norm(res["x"] - x_bar_elim))
print(c.dot(res["x"]))
print(c.dot(x_bar_elim))


7.454027024366833e-08
8.297346244593269e-08
3.4614053214801457
3.461405402686456

In [42]:
n_list = [10*i for i in range(2, 63, 10)]
times_simple = np.zeros(len(n_list))
times_elim = np.zeros(len(n_list))
mu0 = 5
rho_mu = 0.5
for i, n in enumerate(n_list):
    print("Current dimension = {}".format(n))
    c, A, b, x0 = generate_linprog_problem(n)
    time = %timeit -o BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking, simple_solver)
    times_simple[i] = time.average
    time = %timeit -o BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking, elimination_solver)
    times_elim[i] = time.average


Current dimension = 20
10 ms ± 569 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
14.6 ms ± 325 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Current dimension = 120
31.4 ms ± 392 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
25.4 ms ± 2.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Current dimension = 220
65.7 ms ± 4.73 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
32.7 ms ± 2.05 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Current dimension = 320
147 ms ± 11.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
65.5 ms ± 4.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Current dimension = 420
184 ms ± 16.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.18 s ± 80.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Current dimension = 520
290 ms ± 8.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
87.8 ms ± 7.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Current dimension = 620
719 ms ± 20.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
275 ms ± 55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [43]:
plt.semilogy(n_list, times_elim, label="Elimination solver")
plt.semilogy(n_list, times_simple, label="Simple solver")
plt.legend(fontsize=18)
plt.xlabel("Dimension, $n$", fontsize=18)
plt.ylabel("Computation time, sec.", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)


Recap

  1. Historical notes on the development of methods for solving linear programming problems
  2. Recent advances in this field
  3. The idea of interior point methods
  4. The primal barrier method for solving linear programming problems