This tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either a local installation of the CPLEX Optimizer, or it can be run on IBM Watson Studio Cloud (sign up for a free IBM Cloud account and you can start using Watson Studio Cloud right away).
This notebook describes how to use CPLEX Modeling for Python together with pandas to manage the assignment of nurses to shifts in a hospital.
Nurses must be assigned to hospital shifts in accordance with various skill and staffing constraints.
The goal of the model is to find an efficient balance between two competing objectives: minimizing the overall cost of the plan and assigning shifts as fairly as possible.
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
With prescriptive analytics, you can automate complex decisions and trade-offs to better manage limited resources, take advantage of future opportunities or mitigate future risks, and proactively update recommendations as conditions change.
In [ ]:
REQUIRED_MINIMUM_PANDAS_VERSION = '0.17.1'
try:
    import pandas as pd
    # compare version tuples, not raw strings, to avoid lexicographic surprises
    assert tuple(int(x) for x in pd.__version__.split('.')[:2]) >= (0, 17)
except (ImportError, AssertionError, ValueError):
    raise Exception("Version %s or above of pandas is required to run this notebook" % REQUIRED_MINIMUM_PANDAS_VERSION)
In [ ]:
import sys
try:
    import docplex.mp
except ImportError:
    raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
The input data consists of several tables: skills, departments, shifts, nurses, nurse skills, nurse vacations, nurse associations (pairs of nurses who prefer to work together), and nurse incompatibilities (pairs of nurses who should not work together).
In [ ]:
CSS = """
body {
margin: 0;
font-family: Helvetica;
}
table.dataframe {
border-collapse: collapse;
border: none;
}
table.dataframe tr {
border: none;
}
table.dataframe td, table.dataframe th {
margin: 0;
border: 1px solid white;
padding-left: 0.25em;
padding-right: 0.25em;
}
table.dataframe th:not(:empty) {
background-color: #fec;
text-align: left;
font-weight: normal;
}
table.dataframe tr:nth-child(2) th:empty {
border-left: none;
border-right: 1px dashed #888;
}
table.dataframe td {
border: 2px solid #ccf;
background-color: #f4f4ff;
}
table.dataframe thead th:first-child {
display: none;
}
table.dataframe tbody th {
display: none;
}
"""
In [ ]:
from IPython.core.display import HTML
HTML('<style>{}</style>'.format(CSS))
from IPython.display import display
In [ ]:
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
In [ ]:
try:
    from urllib2 import urlopen
except ImportError:
    from urllib.request import urlopen
In [ ]:
# This notebook requires pandas to work
import pandas as pd
from pandas import DataFrame

# Make sure that the xlrd package, an optional pandas dependency, is installed.
# This package is required for Excel I/O.
try:
    import xlrd
except ImportError:
    if hasattr(sys, 'real_prefix'):
        # we are in a virtual env.
        !pip install xlrd
    else:
        !pip install --user xlrd
# Use pandas to read the file, one tab for each table.
data_url = "https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/mp/jupyter/nurses_data.xls?raw=true"
nurse_xls_file = pd.ExcelFile(urlopen(data_url))
df_skills = nurse_xls_file.parse('Skills')
df_depts = nurse_xls_file.parse('Departments')
df_shifts = nurse_xls_file.parse('Shifts')
# Rename df_shifts index
df_shifts.index.name = 'shiftId'
# Index is column 0: name
df_nurses = nurse_xls_file.parse('Nurses', header=0, index_col=0)
df_nurse_skills = nurse_xls_file.parse('NurseSkills')
df_vacations = nurse_xls_file.parse('NurseVacations')
df_associations = nurse_xls_file.parse('NurseAssociations')
df_incompatibilities = nurse_xls_file.parse('NurseIncompatibilities')
# Display the nurses dataframe
print("#nurses = {}".format(len(df_nurses)))
print("#shifts = {}".format(len(df_shifts)))
print("#vacations = {}".format(len(df_vacations)))
In addition, we introduce some extra global data:
In [ ]:
# maximum work time (in hours)
max_work_time = 40
# maximum number of shifts worked in a week.
max_nb_shifts = 5
Shifts are stored in a separate DataFrame.
In [ ]:
df_shifts
We need to precompute additional data for shifts. For each shift, we need the start time and end time expressed in hours, counting from the beginning of the week: Monday 8am is converted to 8, Tuesday 8am is converted to 24+8 = 32, and so on.
We start by adding an extra column dow (day of week), which converts the string day into an integer in 0..6 (Monday is 0, Sunday is 6).
In [ ]:
days = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]
day_of_weeks = dict(zip(days, range(7)))

# utility to convert a day string to an integer in 0..6;
# the name is normalized by stripping whitespace and lowercasing: " Monday" -> "monday"
def day_to_day_of_week(day):
    return day_of_weeks[day.strip().lower()]

df_shifts["dow"] = df_shifts.day.apply(day_to_day_of_week)
df_shifts
In [ ]:
df_shifts["wstart"] = df_shifts.start_time + 24 * df_shifts.dow
Computing the absolute end time is a little more complicated as certain shifts span across midnight. For example, Shift #3 starts on Monday at 18:00 and ends Tuesday at 2:00 AM. The absolute end time of Shift #3 is 26, not 2. The general rule for computing absolute end time is:
abs_end_time = end_time + 24 * dow + (24 if start_time >= end_time else 0)
Again, we use pandas to add a new calculated column, wend, by applying an anonymous lambda function over rows with the pandas apply method. (Passing raw=True would hand each row to the function as a raw ndarray rather than a Series, which can improve performance significantly on large data sets, but it requires positional indexing; we keep the default here so that columns can be accessed by name.)
In [ ]:
# an auxiliary function to calculate the absolute end time of a shift,
# e.g. Monday (dow=0) 18:00 -> 02:00 gives 24*0 + 2 + 24 = 26
def calculate_absolute_endtime(start, end, dow):
    return 24 * dow + end + (24 if start >= end else 0)

# store the results in a new column
df_shifts["wend"] = df_shifts.apply(
    lambda row: calculate_absolute_endtime(row.start_time, row.end_time, row.dow), axis=1)
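As an aside, the same column can also be computed without any per-row Python call. Here is a minimal vectorized sketch, equivalent to the apply above (the boolean comparison contributes 0 or 1, which is scaled by 24):
In [ ]:
# vectorized alternative (a sketch): equivalent to the row-wise apply above
df_shifts["wend"] = (df_shifts.end_time + 24 * df_shifts.dow
                     + 24 * (df_shifts.start_time >= df_shifts.end_time))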
In [ ]:
df_shifts["duration"] = df_shifts.wend - df_shifts.wstart
Minimum demand, expressed in nurse-hours, is the product of each shift's duration and its minimum required number of nurses. We store it in another new column, min_demand.
Finally, we display the updated shifts DataFrame with all calculated columns.
In [ ]:
# also compute minimum demand in nurse-hours
df_shifts["min_demand"] = df_shifts.min_req * df_shifts.duration
# finally check the modified shifts dataframe
df_shifts
In [ ]:
from docplex.mp.environment import Environment
env = Environment()
env.print_information()
In [ ]:
from docplex.mp.model import Model
mdl = Model(name="nurses")
In [ ]:
# first global collections to iterate upon
all_nurses = df_nurses.index.values
all_shifts = df_shifts.index.values
# the assignment variables.
assigned = mdl.binary_var_matrix(keys1=all_nurses, keys2=all_shifts, name="assign_%s_%s")
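binary_var_matrix returns a Python dictionary mapping each (nurse, shift) key pair to its binary variable. Here is a minimal sketch of how to look one up (using the first nurse and shift in the data, so no names are assumed):
In [ ]:
# 'assigned' is a dict keyed by (nurse, shift) tuples of docplex binary variables
first_nurse, first_shift = all_nurses[0], all_shifts[0]
print(assigned[first_nurse, first_shift])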
Some shifts overlap in time, and thus cannot be assigned to the same nurse. To check whether two shifts overlap in time, we start by ordering all shifts with respect to their wstart and duration properties. Then, for each shift, we iterate over the subsequent shifts in this ordered list to easily compute the subset of overlapping shifts.
We use pandas operations to implement this algorithm. But first, we organize all decision variables in a DataFrame.
For convenience, we also organize the decision variables in a pivot table with nurses as row index and shifts as columns. The pandas unstack operation does this.
In [ ]:
# Organize decision variables in a DataFrame
df_assigned = DataFrame({'assigned': assigned})
df_assigned.index.names=['all_nurses', 'all_shifts']
# Re-organize the Data Frame as a pivot table with nurses as row index and shifts as columns:
df_assigned_pivot = df_assigned.unstack(level='all_shifts')
# Create a pivot using nurses and shifts index as dimensions
#df_assigned_pivot = df_assigned.reset_index().pivot(index='all_nurses', columns='all_shifts', values='assigned')
# Display first rows of the pivot table
df_assigned_pivot.head()
We create a DataFrame representing a list of shifts sorted by "wstart" and "duration". This sorted list will be used to easily detect overlapping shifts.
Note that indices are reset after sorting so that the DataFrame can be indexed with respect to the index in the sorted list and not the original unsorted list. This is the purpose of the reset_index() operation which also adds a new column named "shiftId" with the original index.
In [ ]:
# Create a Data Frame representing a list of shifts sorted by wstart and duration.
# Keep only the three relevant columns in the resulting Data Frame: 'shiftId', 'wstart' and 'wend'
df_sorted_shifts = df_shifts.sort_values(['wstart','duration']).reset_index()[['shiftId', 'wstart', 'wend']]
# Display the first rows of the newly created Data Frame
df_sorted_shifts.head()
Next, we state that for any pair of shifts that overlap in time, a nurse can be assigned to only one of the two.
In [ ]:
number_of_incompatible_shift_constraints = 0
for shift in df_sorted_shifts.itertuples():
    # Iterate over the following shifts in the sorted list.
    # 'shift[0]' is the index of the current shift in the df_sorted_shifts Data Frame
    for shift_2 in df_sorted_shifts.iloc[shift[0] + 1:].itertuples():
        if shift_2.wstart < shift.wend:
            # Iterate over all nurses to forbid assignment to both shifts of the overlapping pair
            for nurse_assignments in df_assigned_pivot.iloc[:, [shift.shiftId, shift_2.shiftId]].itertuples():
                # this is actually a logical OR: at most one of the two assignments
                mdl.add_constraint(nurse_assignments[1] + nurse_assignments[2] <= 1)
                number_of_incompatible_shift_constraints += 1
        else:
            # shifts are sorted by wstart: no need to test overlap with later shifts
            break
print("#incompatible shift constraints: {}".format(number_of_incompatible_shift_constraints))
When a nurse is on vacation, they cannot be assigned to any shift starting that day.
We use the pandas merge operation to create a join between the "df_vacations", "df_shifts", and "df_assigned" DataFrames. Each row of the resulting DataFrame contains the assignment decision variable corresponding to the matching (nurse, shift) pair.
In [ ]:
# Add 'day of week' column to vacations Data Frame
df_vacations['dow'] = df_vacations.day.apply(day_to_day_of_week)
# Join the 'df_vacations', 'df_shifts' and 'df_assigned' Data Frames to create the list of 'forbidden' assignments.
# 'reset_index()' moves the 'shiftId' index into a column of the 'df_shifts' Data Frame, and moves the
# ('all_nurses', 'all_shifts') index pair into columns of the 'df_assigned' Data Frame,
# so that the joins can be performed on column names.
df_assigned_reindexed = df_assigned.reset_index()
df_vacation_forbidden_assignments = df_vacations.merge(df_shifts.reset_index()[['dow', 'shiftId']]).merge(
    df_assigned_reindexed, left_on=['nurse', 'shiftId'], right_on=['all_nurses', 'all_shifts'])
# Here are the first few rows of the resulting Data Frames joins
df_vacation_forbidden_assignments.head()
In [ ]:
for forbidden_assignment in df_vacation_forbidden_assignments.itertuples():
    # to forbid an assignment, constrain the corresponding variable to zero
    mdl.add_constraint(forbidden_assignment.assigned == 0)
print("# vacation forbids: {} assignments".format(len(df_vacation_forbidden_assignments)))
Some pairs of nurses get along particularly well, so we wish to assign them to shifts together, as a team. In other words, for every such pair and for each shift, the two assignment variables must be equal: either both nurses work the shift, or neither does.
In the same way we modeled vacations, we use the pandas merge operation to create a DataFrame for which each row contains the pair of nurse-shift assignment decision variables matching each association.
In [ ]:
# Join the assignment Data Frame twice, based on associations, to get the pairs of decision variables for all shifts.
# The 'suffixes' parameter of the second merge tells pandas how to rename columns that occur in both
# argument Data Frames (here, 'all_nurses' and 'assigned').
df_preferred_assign = df_associations.merge(
    df_assigned_reindexed, left_on='nurse1', right_on='all_nurses').merge(
    df_assigned_reindexed, left_on=['nurse2', 'all_shifts'], right_on=['all_nurses', 'all_shifts'], suffixes=('_1', '_2'))
# Here are the first few rows of the resulting Data Frames joins
df_preferred_assign.head()
The associations constraint can now easily be formulated by iterating on the rows of the "df_preferred_assign" DataFrame.
In [ ]:
for preferred_assign in df_preferred_assign.itertuples():
    mdl.add_constraint(preferred_assign.assigned_1 == preferred_assign.assigned_2)
Similarly, certain pairs of nurses do not get along well, and we want to avoid having them together on a shift. In other words, for each shift, the two nurses of an incompatible pair cannot both be assigned to the shift. Again, we state a logical OR between the two assignments: at most one nurse from the pair can work the shift.
We first create a DataFrame whose rows contain the pairs of invalid assignment decision variables, using the same pandas merge operations as in the previous step.
In [ ]:
# Join the assignment Data Frame twice, based on the incompatibilities Data Frame, to get the pairs
# of decision variables for all shifts
df_incompatible_assign = df_incompatibilities.merge(
    df_assigned_reindexed, left_on='nurse1', right_on='all_nurses').merge(
    df_assigned_reindexed, left_on=['nurse2', 'all_shifts'], right_on=['all_nurses', 'all_shifts'], suffixes=('_1', '_2'))
# Here are the first few rows of the resulting Data Frames joins
df_incompatible_assign.head()
The incompatibilities constraint can now easily be formulated, by iterating on the rows of the "df_incompatible_assign" DataFrame.
In [ ]:
for incompatible_assign in df_incompatible_assign.itertuples():
    mdl.add_constraint(incompatible_assign.assigned_1 + incompatible_assign.assigned_2 <= 1)
Regulations constrain each nurse's total work time over a week, so we compute this total in a new variable, stored in an extra column of the nurse DataFrame.
The variable is declared as continuous even though it takes only integer values; this avoids adding unnecessary integer variables to the branch-and-bound search. These are not true decision variables: they are auxiliary variables used to express the work-time constraints.
From a pandas perspective, we apply a function over the rows of the nurse DataFrame to create this variable and store it into a new column of the DataFrame.
In [ ]:
# auxiliary function to create a worktime variable from a row
def make_var(row, varname_fmt):
    return mdl.continuous_var(name=varname_fmt % row.name, lb=0)

# apply the function over nurse rows and store the result in a new column
df_nurses["worktime"] = df_nurses.apply(lambda r: make_var(r, "worktime_%s"), axis=1)
# display nurse dataframe
df_nurses
Work time variables must be constrained to be equal to the sum of hours actually worked.
We use the pandas groupby operation to collect all assignment decision variables for each nurse in a separate series. Then, we iterate over nurses to post a constraint calculating the actual worktime for each nurse as the dot product of the series of nurse-shift assignments with the series of shift durations.
In [ ]:
# Use pandas' groupby operation to enforce the constraint calculating worktime for each nurse
# as the sum of all assigned shifts, each weighted by its duration
for nurse, nurse_assignments in df_assigned.groupby(level='all_nurses'):
    mdl.add_constraint(df_nurses.worktime[nurse] == mdl.dot(nurse_assignments.assigned, df_shifts.duration))
# print model information and check we now have 32 extra continuous variables
mdl.print_information()
In [ ]:
# we use pandas' apply() method to set an upper bound on all worktime variables
def set_max_work_time(v):
    v.ub = max_work_time
    # optionally return a string for a fancy display of the constraint in the output cell
    return str(v) + ' <= ' + str(v.ub)

df_nurses["worktime"].apply(set_max_work_time)
Each shift requires a minimum number of nurses. For each shift, the sum over all nurses of assignments to this shift must be greater than the minimum requirement.
The pandas groupby operation is invoked to collect all assignment decision variables for each shift in a separate series. Then, we iterate over shifts to post the constraint enforcing the minimum number of nurse assignments for each shift.
In [ ]:
# Use pandas' groupby operation to enforce the minimum requirement constraint for each shift
for shift, shift_nurses in df_assigned.groupby(level='all_shifts'):
    mdl.add_constraint(mdl.sum(shift_nurses.assigned) >= df_shifts.min_req[shift])
The objective mixes different (and contradictory) KPIs.
The first KPI is the total salary cost, computed as the sum of work times over all nurses, weighted by pay rate.
We compute this KPI as an expression over the variables defined previously, using pandas summation over DOcplex objects.
In [ ]:
# again leverage pandas to create a series of expressions: costs of each nurse
total_salary_series = df_nurses.worktime * df_nurses.pay_rate
# compute global salary cost using pandas sum()
# Note that the result is a DOcplex expression: DOcplex is fully compatible with pandas
total_salary_cost = total_salary_series.sum()
mdl.add_kpi(total_salary_cost, "Total salary cost")
In [ ]:
mdl.minimize(total_salary_cost)
mdl.print_information()
In [ ]:
# Set the CPLEX mipgap to 1e-5 to enforce precision on the order of one unit (the objective magnitude is ~1e+5)
mdl.parameters.mip.tolerances.mipgap = 1e-5
s = mdl.solve(log_output=True)
assert s, "solve failed"
mdl.report()
We take advantage of pandas to analyze the results. First we store the solution values of the assignment variables into a new pandas Series.
Calling solution_value on a DOcplex variable returns its value in the solution (provided the model has been successfully solved).
In [ ]:
# Create a pandas Series containing actual shift assignment decision variables value
s_assigned = df_assigned.assigned.apply(lambda v: v.solution_value)
# Create a pivot table by (nurses, shifts), using pandas' "unstack" method to transform the 'all_shifts' row index
# into columns
df_res = s_assigned.unstack(level='all_shifts')
# Display the first few rows of the resulting pivot table
df_res.head()
In [ ]:
s_demand = df_shifts.min_req * df_shifts.duration
total_demand = s_demand.sum()
avg_worktime = total_demand / float(len(all_nurses))
print("* theoretical average work time is {0:g} h".format(avg_worktime))
Let's analyze the deviations from this average, stored in a pandas Series.
In [ ]:
# a pandas series of worktimes solution values
s_worktime = df_nurses.worktime.apply(lambda v: v.solution_value)
# returns a new series computed as deviation from average
s_to_mean = s_worktime - avg_worktime
# take the absolute value
s_abs_to_mean = s_to_mean.apply(abs)
total_to_mean = s_abs_to_mean.sum()
print("* the sum of absolute deviations from mean is {}".format(total_to_mean))
To see how work time is distributed among nurses, print a histogram of work time values. Note that, as all time data are integers, work times in the solution can take only integer values.
In [ ]:
import matplotlib.pyplot as plt
%matplotlib inline
# we can also plot as a histogram the distribution of worktimes
s_worktime.plot.hist(color='LightBlue')
plt.xlabel("worktime")
Let's now analyze the solution from the number of shifts perspective. How many shifts does each nurse work? Are these shifts fairly distributed amongst nurses?
We compute a new column in our result DataFrame for the number of shifts worked, by summing rows (the "axis=1" argument in the sum() call indicates to pandas that each sum is performed by row instead of column):
In [ ]:
# a pandas series of #shifts worked
df_worked = df_res[all_shifts].sum(axis=1)
df_res["worked"] = df_worked
df_worked.plot.hist(color="gold", xlim=(0,10))
plt.xlabel("#shifts worked")
We see that one nurse works significantly fewer shifts than others do. What is the average number of shifts worked by a nurse? This is equal to the total demand divided by the number of nurses.
Of course, this yields a fractional number of shifts that is not practical, but nonetheless will help us quantify the fairness in shift distribution.
In [ ]:
avg_worked = df_shifts["min_req"].sum() / float(len(all_nurses))
print("-- expected avg #shifts worked is {}".format(avg_worked))
worked_to_avg = df_res["worked"] - avg_worked
total_to_mean = worked_to_avg.apply(abs).sum()
print("-- total absolute deviation to mean #shifts is {}".format(total_to_mean))
As the histogram above suggests, the distribution of shifts could be improved. We address this by adding one extra objective, fairness, which balances the number of shifts assigned across nurses.
Note that we can edit the model, that is, add (or remove) constraints, even after it has been solved.
For each nurse, we introduce three extra variables: the number of shifts worked, and the positive and negative deviations from the average.
In [ ]:
# add two extra variables per nurse: deviations above and below average
df_nurses["worked"] = df_nurses.apply(lambda r: make_var(r, "worked%s"), axis=1)
df_nurses["overworked"] = df_nurses.apply(lambda r: make_var(r, "overw_%s"), axis=1)
df_nurses["underworked"] = df_nurses.apply(lambda r: make_var(r, "underw_%s"), axis=1)
In [ ]:
# Use the pandas groupby operation to enforce the constraint computing the number of worked shifts per nurse
for nurse, nurse_assignments in df_assigned.groupby(level='all_nurses'):
    # the number of worked shifts is the sum of assigned shifts
    mdl.add_constraint(df_nurses.worked[nurse] == mdl.sum(nurse_assignments.assigned))

for nurse in df_nurses.itertuples():
    # number worked = average + over - under
    mdl.add_constraint(nurse.worked == avg_worked + nurse.overworked - nurse.underworked)
In [ ]:
# finally, define kpis for over and under average quantities
total_overw = mdl.sum(df_nurses["overworked"])
mdl.add_kpi(total_overw, "Total over-worked")
total_underw = mdl.sum(df_nurses["underworked"])
mdl.add_kpi(total_underw, "Total under-worked")
Finally, let's modify the objective by adding the sum of the overworked and underworked deviations to the previous objective.
Note: The definitions of overworked and underworked above are not sufficient to give them unambiguous values on their own. However, because all these variables are minimized, CPLEX ensures that they take the minimum possible values in the solution.
In [ ]:
mdl.minimize(total_salary_cost + total_overw + total_underw)  # incorporate the over- and under-worked deviations in the objective
Our modified model is ready to solve. The log_output=True parameter tells CPLEX to print the log on the standard output.
In [ ]:
sol2 = mdl.solve(log_output=True) # solve again and get a new solution
assert sol2, "Solve failed"
mdl.report()
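As noted above, the overworked and underworked variables are pinned down only by the minimization. A quick check on the new solution (a minimal sketch) verifies that no nurse has both deviations significantly positive:
In [ ]:
# in a minimizing solution, at most one of the two deviations should be (essentially) nonzero per nurse
max_prod = max(n.overworked.solution_value * n.underworked.solution_value
               for n in df_nurses.itertuples())
print("max over*under product across nurses: {0:g} (expected ~0)".format(max_prod))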
Instead of aggregating all objectives into a single goal, one may prioritize them: for instance, first minimize cost, and then optimize fairness. This two-step optimization can be configured with the multi-objective API of DOcplex as follows:
First, redefine the optimization problem as a multi-objective minimization. Objectives are grouped into sub-problems according to the list of priorities provided; the priorities list must contain as many items as there are objectives.
Sub-problems are solved one after the other, in order of decreasing priority.
Objectives with the same priority are blended into an aggregated goal, with each objective weighted according to the values in the weights list. If no weights are provided, all objectives have unit weight.
In [ ]:
from docplex.mp.constants import ObjectiveSense
mdl.set_multi_objective(ObjectiveSense.Minimize, [total_salary_cost, total_overw, total_underw], priorities=[1, 0, 0])
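The cell above uses priorities only. For illustration, here is a sketch of how the two priority-0 fairness terms could also be blended with explicit weights (e.g., penalizing over-work twice as much as under-work); it is left commented out so that the solves below match the text:
In [ ]:
# sketch (not executed): same priorities, explicit weights within the blended group
# mdl.set_multi_objective(ObjectiveSense.Minimize,
#                         [total_salary_cost, total_overw, total_underw],
#                         priorities=[1, 0, 0], weights=[1, 2, 1])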
For multi-objective problems, solve is performed in the same way:
In [ ]:
sol3 = mdl.solve(log_output=True) # solve the multi-objective problem
assert sol3, "Solve failed"
mdl.report()
In this example, one gets the same solution when blending all the objectives together (Step #3) and when decomposing the problem into two sub-problems with higher priority given to cost minimization.
However, if one wants to optimize fairness first, and then cost, multi-objective solve returns a different solution:
In [ ]:
mdl.set_multi_objective(ObjectiveSense.Minimize, [total_overw, total_underw, total_salary_cost], priorities=[1, 1, 0])
sol4 = mdl.solve(log_output=True) # solve the multi-objective problem
assert sol4, "Solve failed"
mdl.report()
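To compare the three solutions side by side, we can evaluate the KPI expressions in each stored solution. A minimal sketch, assuming sol2, sol3 and sol4 all solved successfully above:
In [ ]:
# evaluate each KPI expression in each stored solution for a side-by-side comparison
for label, sol in [("blended", sol2), ("cost first", sol3), ("fairness first", sol4)]:
    print("{0:>14}: salary cost = {1:g}, over-worked = {2:g}, under-worked = {3:g}".format(
        label, sol.get_value(total_salary_cost),
        sol.get_value(total_overw), sol.get_value(total_underw)))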
You learned how to set up and use IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model and solve it with IBM Decision Optimization on Cloud.
You also learned how to set up your problem when several objectives are considered for evaluating solutions.
Copyright © 2017-2019 IBM. IPLA licensed Sample Materials.