This tutorial includes data and information that you need to set up decision optimization engines and build mathematical programming models to solve a Generalized Assignment Problem using Lagrangian relaxation.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python.
It requires either an installation of the CPLEX Optimizers, or it can be run on IBM Watson Studio Cloud (sign up for a free IBM Cloud account and you can start using Watson Studio Cloud right away).
Some familiarity with Python is recommended.
This notebook illustrates how to solve an optimization model using the Lagrangian relaxation technique. It solves a generalized assignment problem (GAP), as defined by Wolsey, using this relaxation technique.
The main aim is to show how several optimization runs can be performed by modifying different models within a single environment, not to show the best way to solve a GAP instance.
In the field of mathematical programming, this technique approximates a difficult constrained problem by a simpler one: difficult constraints are removed from the problem and integrated into the objective function, which is penalized whenever a constraint is violated.
The method penalizes violations of inequality constraints using a Lagrange multiplier, which imposes a cost on violations. These added costs are used instead of the strict inequality constraints in the optimization. In practice, this relaxed problem can often be solved more easily than the original problem.
For more information, see the following Wikipedia articles: Generalized assignment problem and Lagrangian relaxation.
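As a sketch of the reformulation used later in this notebook (notation assumed here: $x_{ij}$ assigns task $i$ to machine $j$, $c_{ij}$ are profits, $a_{ij}$ resource consumptions, $b_j$ capacities, and $\mu_i \ge 0$ the Lagrange multipliers), relaxing the assignment constraints $\sum_j x_{ij} \le 1$ gives:

```latex
\max \; \sum_{i,j} c_{ij}\, x_{ij} \;+\; \sum_i \mu_i \Bigl(1 - \sum_j x_{ij}\Bigr)
\qquad \text{s.t.} \quad \sum_i a_{ij}\, x_{ij} \le b_j \;\; \forall j, \qquad x_{ij} \in \{0,1\}
```

The penalty term $\mu_i \bigl(1 - \sum_j x_{ij}\bigr)$ replaces the removed constraint: it is zero when task $i$ is assigned exactly once, and charges a cost otherwise.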
This notebook first solves the standard problem (which is not the focus here), then shows how to reformulate it to apply Lagrangian relaxation.
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It considers specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control over business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle a future situation. Organizations that act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
With prescriptive analytics, you can:
Automate the complex decisions and trade-offs to better manage your limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
In [ ]:
import sys
try:
    import docplex.mp
except ImportError:
    raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
If CPLEX is not installed, install CPLEX Community edition.
In [ ]:
try:
    import cplex
except ImportError:
    raise Exception('Please install CPLEX. See https://pypi.org/project/cplex/')
In [ ]:
B = [15, 15, 15]   # B[j]: resource capacity of machine j
# C[i][j]: profit of assigning task i to machine j
C = [
    [ 6, 10,  1],
    [12, 12,  5],
    [15,  4,  3],
    [10,  3,  9],
    [ 8,  9,  5]
]
# A[i][j]: resource consumed when task i runs on machine j
A = [
    [ 5,  7,  2],
    [14,  8,  7],
    [10,  6, 12],
    [ 8,  4, 15],
    [ 6, 12,  5]
]
In [ ]:
from docplex.mp.environment import Environment
env = Environment()
env.print_information()
We will first create an optimization problem, composed of two blocks of constraints, then solve it using Lagrangian relaxation on one of the constraint blocks.
In [ ]:
from docplex.mp.model import Model
mdl = Model("GAP per Wolsey")
In [ ]:
print("#As={}, #Bs={}, #Cs={}".format(len(A), len(B), len(C)))
number_of_cs = len(C)
# variables
x_vars = [mdl.binary_var_list(c, name=None) for c in C]
In [ ]:
# constraints
# each task is assigned to at most one machine
cts = mdl.add_constraints(mdl.sum(xv) <= 1 for xv in x_vars)
# machine capacity: total resource used on machine j cannot exceed B[j]
mdl.add_constraints(mdl.sum(x_vars[ii][j] * A[ii][j] for ii in range(number_of_cs)) <= bs for j, bs in enumerate(B))
# objective
total_profit = mdl.sum(mdl.scal_prod(x_i, c_i) for c_i, x_i in zip(C, x_vars))
mdl.maximize(total_profit)
mdl.print_information()
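The objective uses `mdl.scal_prod`, which builds the scalar product of a row of variables and a row of profit coefficients. In plain Python terms (a sketch of the semantics on concrete values, not the docplex implementation, which returns a symbolic expression):

```python
# Plain-Python sketch of the scalar product used in the objective:
# sum of value * coefficient over one task's row.
def scal_prod(values, coefs):
    return sum(v * c for v, c in zip(values, coefs))

# If task 0 were assigned to machine 2 (x = [0, 0, 1]) with profits C[0] = [6, 10, 1],
# its contribution to the objective would be:
print(scal_prod([0, 0, 1], [6, 10, 1]))  # -> 1
```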
In [ ]:
s = mdl.solve()
assert s is not None
obj = s.objective_value
print("* GAP with no relaxation run OK, best objective is: {:g}".format(obj))
For the purpose of demonstrating Lagrangian relaxation, let's assume this model is hard for CPLEX to solve. We will approximate it with an iterative model whose objective is modified at each iteration.
(Wait a few seconds for the solution, due to a time limit parameter.)
We first remove the culprit constraints from the model.
In [ ]:
for ct in cts:
mdl.remove_constraint(ct)
In [ ]:
#p_vars are the penalties attached to violating the constraints
p_vars = mdl.continuous_var_list(C, name='p') # new for relaxation
In [ ]:
# new version of the approximated constraint where we apply the penalties
mdl.add_constraints(mdl.sum(xv) == 1 - pv for xv, pv in zip(x_vars, p_vars));
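Each penalty variable measures how far a task's assignment falls short of exactly one machine: `pv = 1 - sum(xv)`. A pure-Python check of this identity on hypothetical assignment values:

```python
# pv = 1 - sum(xv): zero when the task is assigned exactly once,
# positive when it is left unassigned (the relaxed case).
x_assigned   = [0, 1, 0]  # task assigned to one machine
x_unassigned = [0, 0, 0]  # task not assigned at all

print(1 - sum(x_assigned))    # -> 0  (no penalty)
print(1 - sum(x_unassigned))  # -> 1  (full penalty)
```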
In [ ]:
#Define the maximum number of iterations
max_iters = 10
In [ ]:
number_of_cs = len(C)
c_range = range(number_of_cs)
In [ ]:
# Lagrangian relaxation loop
eps = 1e-6
loop_count = 0
best = 0
initial_multiplier = 1
multipliers = [initial_multiplier] * len(C)
# Objective function
# I'd write the key performance indicator (KPI) as
# total_profit = mdl.sum(mdl.sum(x_vars[task][worker] * C[task][worker]) for task, worker in zip(tasks, workers))
total_profit = mdl.sum(mdl.scal_prod(x_i, c_i) for c_i, x_i in zip(C, x_vars))
mdl.add_kpi(total_profit, "Total profit")
print("starting the loop")
In [ ]:
while loop_count <= max_iters:
    loop_count += 1
    # Rebuilt at each loop iteration
    total_penalty = mdl.scal_prod(p_vars, multipliers)
    mdl.maximize(total_profit + total_penalty)
    s = mdl.solve()
    if not s:
        print("*** solve fails, stopping at iteration: %d" % loop_count)
        break
    best = s.objective_value
    penalties = [pv.solution_value for pv in p_vars]
    print('%d> new lagrangian iteration:\n\t obj=%g, m=%s, p=%s' % (loop_count, best, str(multipliers), str(penalties)))

    do_stop = True
    justifier = 0
    for k in c_range:
        penalized_violation = penalties[k] * multipliers[k]
        if penalized_violation >= eps:
            do_stop = False
            justifier = penalized_violation
            break

    if do_stop:
        print("* Lagrangian relaxation succeeds, best={:g}, penalty={:g}, #iterations={}"
              .format(best, total_penalty.solution_value, loop_count))
        break
    else:
        # Update multipliers and start the loop again.
        scale_factor = 1.0 / float(loop_count)
        multipliers = [max(multipliers[i] - scale_factor * penalties[i], 0.) for i in c_range]
        print('{0}> -- loop continues, m={1!s}, justifier={2:g}'.format(loop_count, multipliers, justifier))
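The multiplier update in the loop is a subgradient-style step with a diminishing 1/k step size: each multiplier decreases in proportion to its observed penalty, clamped at zero. A standalone sketch with hypothetical iteration values:

```python
# Hypothetical snapshot of one iteration: current multipliers,
# observed penalty values, and the 1/loop_count step size from the loop.
multipliers = [1.0, 1.0, 1.0, 1.0, 1.0]
penalties   = [0.0, 1.0, 0.0, 0.5, 0.0]
loop_count  = 2
scale_factor = 1.0 / float(loop_count)

# Decrease each multiplier by its scaled penalty, never going below zero.
multipliers = [max(m - scale_factor * p, 0.0) for m, p in zip(multipliers, penalties)]
print(multipliers)  # -> [1.0, 0.5, 1.0, 0.75, 1.0]
```

Multipliers attached to constraints that are satisfied (zero penalty) are left unchanged, while violated constraints see their cost shrink more slowly as the iteration count grows.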
In [ ]:
print(best)
You can see that, with this relaxation method applied to this simple model, we find the same objective value as for the original problem.