MAST90014
Optimisation for Industry
Linear programming
Duality theory
Basics
Every LP has a dual LP
Possible feasibility relationships
unbounded \( \Rightarrow \) infeasible
infeasible \( \Leftarrow \) unbounded
infeasible and infeasible
feasible and feasible
(equal objectives - Strong duality!)
\( \begin{cases} p^* = \max c^T x \\ s.t.\ Ax \leq b \\ x \geq 0 \end{cases} \qquad \Leftrightarrow \qquad \begin{cases} d^* = \min b^T y \\ s.t.\ A^T y \geq c \\ y \geq 0 \end{cases} \qquad p \leq d, \qquad p^* = d^* \)
Weak duality: the objective of a feasible primal solution is a lower bound for the objective of any feasible dual solution, and vice versa:
\( p = c^T x \leq b^T y = d \)
Strong duality: if optimal exists, then the optimal value for the primal equals the optimal value for the dual: \( p^* = c^T x^* = b^T y^* = d^* \)
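Strong duality can be checked numerically. A minimal sketch, assuming SciPy's `linprog` is available; the LP data below is illustrative, not from the notes:

```python
# Verify weak/strong duality on a small illustrative LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])                 # primal: max c^T x
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# Primal: max c^T x s.t. Ax <= b, x >= 0 (linprog minimises, so negate c)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
p_star = -primal.fun

# Dual: min b^T y s.t. A^T y >= c, y >= 0 (flip >= to <= by negating)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)
d_star = dual.fun

print(p_star, d_star)   # strong duality: the two optimal values coincide
```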
Economic interpretation
Each dual variable's value represents the possible gain in objective value per unit of relaxation of the corresponding constraint
Value of \(y_i\) is the
dual price of resource \(i\)
Constructing the dual
\( \begin{align} \leq & \Leftrightarrow \pi_i \geq 0 \\ = & \Leftrightarrow \pi_i \text{ unres.} \\ \geq & \Leftrightarrow \pi_i \leq 0 \\ \min & \Leftrightarrow \max \end{align} \)
Alternatively, convert to the canonical form
Basics
Standard form
\( \min cx \\ s.t.\ Ax \geq b \)
Canonical form
\( \min cx \\ s.t. Ax=b \\ x \geq 0 \)
Conversion to canonical form
Add slack variables
\(x + y \leq b \Rightarrow \begin{cases} x + y + s = b & \\ s \geq 0 & \end{cases}\)
Unrestricted variables
\( z \in \mathbb{R} \Rightarrow z = z^+ - z^-, \qquad z^+,z^- \geq 0 \)
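Both conversions can be sketched in a few lines, assuming NumPy; the bound \(b = 5\) and the sample point are made up for illustration:

```python
# Sketch: convert  x + y <= 5  with y unrestricted into canonical form
# Ax = b, x >= 0, using a slack s and the split y = y_plus - y_minus.
import numpy as np

# original row: [1, 1] . [x, y] <= 5
# canonical variable order: [x, y_plus, y_minus, s]
A_canon = np.array([[1.0, 1.0, -1.0, 1.0]])   # x + y+ - y- + s = 5
b_canon = np.array([5.0])

# e.g. x = 2, y = -1 (so y+ = 0, y- = 1), slack s = 4 satisfies Ax = b:
x = np.array([2.0, 0.0, 1.0, 4.0])
print(np.allclose(A_canon @ x, b_canon))   # True
```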
Convexity
\(z = \alpha x_1 + (1-\alpha) x_2\)
\( 0 \leq \alpha \leq 1 \)
\(Az = A[\alpha x_1 + (1 - \alpha) x_2 ] = b\) whenever \(Ax_1 = Ax_2 = b\)
A consequence of the convexity of the feasible region of an LP is that optima always occur at extreme points
Simplex method
Overview
1) Find feasible extreme point
2) If optimal, stop
3) Otherwise, move to a neighbouring extreme point with a better objective and repeat from 2
Solutions
Feasible
: satisfies \(Ax=b\) and \(x \geq 0\)
Optimal
: Satisfies \(Ax=b\) and no neighbouring extreme points improve the objective
Basic
: Satisfies \(Ax=b\) with \(n-m\) variables set to zero
\( z = [ x_i, x_j, ... | 0, 0, 0, 0 ] \)
Basic/non-basic partition
\( x = \begin{bmatrix} x_B \\ x_N \end{bmatrix} \\ A = [B | N]\\ Ax=b \\ Bx_B + Nx_N = b\\ x_N = 0\)
Negative reduced costs
Reduced (relative) costs represent the change in objective from bringing a variable into the basis; specifically, the change per unit for walking in the direction of that variable (negative means an improvement when minimising)
\( Bx_B + Nx_N = b \equiv x_B = B^{-1}b - B^{-1}N x_N \\ \text{So, } \min c_B^T x_B + c_N^T x_N \\= \min c_B^T (B^{-1}b - B^{-1}N x_N) + c_N^T x_N \\= \min \underbrace{c_B^T B^{-1}b}_{\text{=current value}} + (\underbrace{c_N^T -c_B^T B^{-1}N}_{\text{=relative costs}}) \underbrace{x_N}_{=0} \)
For each non-basic variable \( i \) the relative cost is:
\( \hat{c}_{N_i} = c_{N_i} - c_B^T B^{-1} N_i \), where \( N_i \) is the column of \( N \) corresponding to variable \( i \)
If you are minimizing and all the relative costs are non-negative, you are at an optimal solution
If you are maximizing and all the relative costs are non-positive, you are at an optimal solution
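The optimality test can be verified numerically. A minimal NumPy sketch, using an illustrative problem (min \(-3x_1 - 5x_2\) in canonical form with three slacks) and an assumed optimal basis:

```python
# Check optimality of a basis via relative costs c_N - c_B^T B^{-1} N.
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([-3.0, -5.0, 0.0, 0.0, 0.0])

basis = [0, 1, 2]                   # basic variables x1, x2, x3 (a slack)
nonbasis = [3, 4]
B, N = A[:, basis], A[:, nonbasis]

x_B = np.linalg.solve(B, b)         # basic feasible solution: [2, 6, 2]
y = np.linalg.solve(B.T, c[basis])  # simplex multipliers c_B^T B^{-1}
reduced = c[nonbasis] - y @ N       # relative costs of x4, x5

print(x_B, reduced)                 # all relative costs >= 0 => optimal
```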
Methods of solving MILPs
Branch and bound
Key idea
branch on each possible value of an integer variable, with that value fixed in each branch
\( \min_{x \in S} c^T x \equiv \min_{1 \leq i \leq n} \{ \min_{x \in S_i} c^T x \}, \qquad S_i = \{ x \in S : x_j = i \} \)
The branching part alone is equivalent to complete enumeration
Bounds
When minimizing
Every relaxed solution in B&B tree is a lower bound
Every feasible solution in B&B tree is an upper bound
When maximizing
Every relaxed solution in B&B tree is an upper bound
Every feasible solution in B&B tree is a lower bound
Searching successor nodes cannot yield a better solution than predecessor nodes, since successor nodes are always more constrained
B&B algorithm
[maximization version]
For each new node.... if
Infeasible
PRUNE :red_cross:
Feasible
\( z< LB \)
PRUNE :red_cross:
\( z \geq LB \)
Non-integral
Accept value as UB :question:
Add node to candidate list
Integral
Accept value as LB :question:
if \( LB= UB \), then optimal :check:
Choose node from candidate list
Choose non-integer variable \( x_i \) with value \(v \)
Branch on \( x_i \leq \lfloor{v} \rfloor\) and \( x_i \geq \lceil{v} \rceil\)
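The steps above can be sketched in plain Python, assuming SciPy's `linprog` for the LP relaxations. This sketch uses a simple depth-first stack instead of a best-bound candidate list, and the instance is illustrative:

```python
# LP-based branch and bound for  max c.x  s.t.  Ax <= b, x integer >= 0.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds):
    best_z, best_x = -math.inf, None
    stack = [bounds]                        # candidate list of bound boxes
    while stack:
        bnd = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A, b_ub=b, bounds=bnd)
        if not res.success:                 # infeasible node: prune
            continue
        z = -res.fun                        # relaxation value = upper bound
        if z <= best_z + 1e-9:              # no better than incumbent: prune
            continue
        frac = [i for i, v in enumerate(res.x)
                if abs(v - round(v)) > 1e-6]
        if not frac:                        # integral: new incumbent (LB)
            best_z, best_x = z, [round(v) for v in res.x]
            continue
        i, v = frac[0], res.x[frac[0]]      # branch on first fractional var
        lo, hi = bnd[i]
        left = list(bnd); left[i] = (lo, math.floor(v))
        right = list(bnd); right[i] = (math.ceil(v), hi)
        stack += [left, right]
    return best_z, best_x

# illustrative instance: max 5x1 + 4x2, 6x1 + 4x2 <= 24, x1 + 2x2 <= 6
z, x = branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6],
                        [(0, None), (0, None)])
print(z, x)
```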
Behavior
Node selection
Best-bound first
Depth first
Hybrids
Variable selection
Most fractional (i.e. closest to 0.5) variable [performs worse than random!]
Choose variable for which branching leads to greatest [bound decrease on left times bound decrease on right]
Pseudocost branching - track history of success for each variable as heuristic indication of quality of variable
Hybrids
Cutting planes
Valid inequalities
Types of valid inequalities
Knapsack inequalities for binary variables
if \( w_1 + w_2 + w_3 > M\) then add Knapsack inequality:
\( x_1 + x_2 + x_3 \leq 2 \)
Theoretically, one can consider all subsets and add an inequality for each combination that exceeds the capacity
\( \min a_1 x_1 + a_2 x_2 + \dots + a_m x_m \\ w_1 x_1 + \dots + w_m x_m \leq M \)
Mixed 0-1 set
Example:
\( X = \{(x,y): x \leq 9999y, 0 \leq x \leq 5, y \in B^1 \} \)
\( \Rightarrow x \leq 5y \)
Mixed integer set
Example:
\( X = \{ (x,y) : x \leq 10y, 0 \leq x \leq 14, y \in \mathbb{Z}_+^1 \} \\ \Rightarrow x \leq 6 + 4y \)
General expression (where \( C \) does not divide \( b \)): :pencil2:
\( X = \{(x,y): x \leq Cy, 0 \leq x \leq b, y \in \mathbb{Z}_+^1 \}, \\ \Rightarrow x \leq b - \gamma(K - y),\\ K = \lceil \tfrac{b}{C} \rceil, \gamma = b - (K-1)C \)
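The general expression can be sanity-checked by brute force, using the example values \(C = 10\), \(b = 14\):

```python
# Check x <= b - gamma*(K - y) over sampled points of
# X = {(x, y): x <= C*y, 0 <= x <= b, y integer >= 0}.
import math

C, b = 10, 14
K = math.ceil(b / C)            # K = 2
gamma = b - (K - 1) * C         # gamma = 4

for y in range(0, 5):
    for x10 in range(0, 10 * b + 1):      # sample x on a 0.1 grid
        x = x10 / 10
        if x <= C * y and 0 <= x <= b:
            assert x <= b - gamma * (K - y), (x, y)
print("inequality valid for all sampled points:", K, gamma)
```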
Integer rounding
The idea
If all terms are positive: divide through by a suitable number, round up the coefficients on the left, and, since the LHS is then integer, round up the RHS too; this creates a new valid inequality
Example
\( X = P \cap \mathbb{Z}^4\\ P = \{ x \in \mathbb{R}_+^4 : 13 x_1 + 20 x_2 + 11 x_3 + 6 x_4 \geq 72 \} \)
Divide through by 11
Round up LHS coefficients
Round up RHS
New valid inequality: \( 2x_1 + 2x_2 + x_3 + x_4 \geq 7 \)
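The rounding steps can be reproduced in a couple of lines:

```python
# Integer rounding for 13x1 + 20x2 + 11x3 + 6x4 >= 72, divided through by 11.
import math

coeffs, rhs, divisor = [13, 20, 11, 6], 72, 11

new_coeffs = [math.ceil(a / divisor) for a in coeffs]   # round up LHS
new_rhs = math.ceil(rhs / divisor)                      # round up RHS

print(new_coeffs, new_rhs)   # [2, 2, 1, 1] and 7: 2x1 + 2x2 + x3 + x4 >= 7
```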
The idea
An inequality \( \pi x \leq \pi_0 \) is valid for \(X \subset \mathbb{R}^n \)
if \( \pi x \leq \pi_0 \) for all \( x \in X\)
Does not change the integer problem but tightens the LP relaxation, so B&B can be much quicker
Chvátal-Gomory procedure
Similar to integer rounding above, for each constraint:
Multiply through by some \( u \geq 0 \)
Round down the LHS coefficients (still valid, since \( x \geq 0 \))
Round down the RHS (valid since the LHS is now integer)
New valid inequality
Procedure for constructing valid inequalities for integer variables
Gomory's fractional cutting planes algorithm
The idea
Solve LP relaxation;
Find optimal basis;
Find basic variable which is not integer;
Generate a Chvátal-Gomory inequality on the constraint associated with this variable so as to cut off the LP solution.
NOTE: Fractional part of negative numbers is assumed to be the gap from the next lower integer! (eg. fractional part of \( -\tfrac13\) is \( \tfrac23 \) )
Process
1) Get LP relaxation solution
2) Identify variable \(x_u\) with fractional value and associated constraint:
\( \qquad x_u + \sum_{j \in NB} a_{uj} x_j = b_u \)
where \( x_u = b_u \), which is fractional
3) Round down the coefficients of non-basis variables (turns into inequality)
\( \qquad x_u + \sum_{j \in NB} \lfloor a_{uj} \rfloor x_j \leq b_u \)
4) Round down the RHS (valid since \(x_u \) has to be integer; this cuts off the fractional value)
\( \qquad x_u + \sum_{j \in NB} \lfloor a_{uj} \rfloor x_j \leq \lfloor b_u \rfloor \)
5) Subtract (4) from (2) to obtain:
\( \qquad \sum_{j \in NB} (a_{uj} - \lfloor a_{uj} \rfloor ) x_j \geq (b_u - \lfloor b_u \rfloor) \)
6) Re-express this inequality in terms of the original variables (easy to substitute them in)
7) Add the valid inequality from (6) in as a new constraint (with a new slack variable) and repeat from (1) until an integral solution is found
Simplified process
1) Look at the tableau for the LP solution
2) Find row with fractional value in RHS
3) Extract the fractional parts of the non-basic coefficients and the fractional part of the RHS to produce a new inequality, for example:
\( \qquad x_1 + \tfrac12 x_4 + \tfrac76 x_5 + \tfrac92 x_6 = \tfrac83 \\ \Rightarrow \\ \qquad \tfrac12 x_4 + \tfrac16 x_5 + \tfrac12 x_6 \geq \tfrac23 \)
4) Re-express the new cut in terms of the original variables & add it to the problem
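Step (3) can be sketched with exact rational arithmetic, using the tableau row from the example:

```python
# Extract a Gomory cut from the row  x1 + 1/2 x4 + 7/6 x5 + 9/2 x6 = 8/3.
from fractions import Fraction as F

row = {"x4": F(1, 2), "x5": F(7, 6), "x6": F(9, 2)}
rhs = F(8, 3)

def frac(q):
    # fractional part = q - floor(q); also handles negatives as in the note
    return q - (q.numerator // q.denominator)

cut = {v: frac(a) for v, a in row.items()}
cut_rhs = frac(rhs)

print(cut, ">=", cut_rhs)   # 1/2 x4 + 1/6 x5 + 1/2 x6 >= 2/3
```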
The idea
Approximate the convex hull \(conv(X)\) by a series of cutting planes
So given (IP):
\(\max \{c^T x: x \in X \}, X = \{ x:Ax\leq b, x \in \mathbb{Z}_+^n \} \\ \equiv \max \{ c^T x: x \in conv(X) \} , conv(X) = \{ x:\tilde{A} x \leq \tilde{b}, x \geq 0 \} \)
Other types of cuts
Clique cuts
The idea: Find variables which cannot be present together
Example:
\( x_1 + x_3 \leq 1 \)
\( x_1 - x_2 \leq 0 \equiv x_1 + \bar{x_2} \leq 1 \)
Cover cuts
Knapsack cover cuts as described above
Example:
\(\qquad 3x_1 + 5x_2 + 7x_3 + 8x_4 \leq 11 \\ \Rightarrow \\ \qquad x_2 + x_3 \leq 1 \)
etc
Extended cover adds any extra variable whose coefficient is at least as great as every coefficient already in the cover; the sum can still be bounded by the same value \( (|C| - 1) \)
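Validity of a cover cut can be checked by enumerating the binary points, e.g. for the example above:

```python
# Brute-force check that the cover cut x2 + x3 <= 1 is valid for the
# knapsack set 3x1 + 5x2 + 7x3 + 8x4 <= 11 with binary variables.
from itertools import product

w, cap = [3, 5, 7, 8], 11

for x in product([0, 1], repeat=4):
    if sum(wi * xi for wi, xi in zip(w, x)) <= cap:
        assert x[1] + x[2] <= 1, x       # cover {x2, x3}: 5 + 7 > 11
print("cut valid for every feasible binary point")
```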
Branch and cut
Combine Branch and Bound with Cutting Planes - cutting planes added throughout to tighten up LP relaxation and speed up B&B
Miscellaneous techniques
Feasibility identification
Remove each constraint \( i\);
Compute \( z = \min [LHS_i] \);
If \( z > b_i \), then problem is infeasible!
Problem is as hard as the original, but we can try to find a lower bound for \(z \)
Redundancy identification
Remove each constraint \(i\);
Compute \(z = \max[LHS_i]\) over the remaining constraints;
if \(z \leq b_i\), then the constraint is redundant!
Problem is as hard as the original, but we can try to find an upper bound for \(z\)
Probing
Fix some \(x_k = 1\)
Remove some constraint \( i\);
Compute \(z_k = \min [LHS_i] \);
if \(z_k > b_i\), then \(x_k = 1 \) renders the problem infeasible and so we can fix \(x_k = 0\)
Theorem (Gomory, Chvátal)
if \( x \in \mathbb{Z}^n, Ax \leq b, u \geq 0, uA \in \mathbb{Z}^n \)
then \(uAx \leq \lfloor ub \rfloor \) :pencil2:
MILP Decomposition techniques
Column generation
The idea
Master problem becomes simple: choose how many times each column is used (minimise the total)
Sub problem: create the column with the most negative reduced cost. If the value is < 0 then add it to the master problem. Otherwise, no improving column exists.
The master problem may not be integral. For cutting stock you can simply round up. In general, it may simply serve as a lower bound eg. in branch and price.
NOTE: The constraints in the sub problem do not change, only the objective function (based on the dual variables in the master problem)
Branch and Price
Simply applies column generation at each node to find the lower bound for the B&B algorithm
Process
1) Start with feasible solution with valid set of columns (eg. generated heuristically)
2) Solve LP relaxation with current set of columns
3) Get dual variables \(u \) and formulate the sub problem objective:
\( \underbrace{c_N - c_B B^{-1} N}_{\text{reduced costs}}, \qquad \underbrace{1 - c_B B^{-1} p}_{\text{reduced cost of new column } p}, \qquad u := \underbrace{ c_B B^{-1}}_{=\text{dual values}} \)
4) Solve sub-problem:
\( \qquad \min c - u^T p \\ \qquad s.t. \text{(is valid column)} \)
5) If value of sub-problem objective is strictly negative, add \(p\) column in to master problem and repeat (2). Otherwise, stop.
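The process above can be sketched end-to-end for the cutting stock problem, assuming SciPy's `linprog` (duals read from `res.ineqlin.marginals`, available with the HiGHS backend) and a brute-force knapsack pricing step; the instance (roll width 9, piece sizes 3 and 5) is illustrative:

```python
# Column generation for cutting stock: master LP + knapsack pricing.
from itertools import product
from scipy.optimize import linprog

W = 9                       # roll width (illustrative instance)
sizes = [3, 5]              # piece lengths
demand = [5, 3]             # pieces required

# start with one trivial pattern per piece size
columns = [[W // l if i == j else 0 for i in range(len(sizes))]
           for j, l in enumerate(sizes)]

while True:
    # master LP: min sum(y) s.t. (pattern matrix) y >= demand, y >= 0
    A = [[-col[i] for col in columns] for i in range(len(sizes))]
    res = linprog([1.0] * len(columns), A_ub=A, b_ub=[-d for d in demand],
                  bounds=[(0, None)] * len(columns), method="highs")
    u = [-m for m in res.ineqlin.marginals]   # duals of the >= rows

    # pricing sub-problem: max u.a over feasible patterns (brute force)
    ranges = [range(W // l + 1) for l in sizes]
    best = max((a for a in product(*ranges)
                if sum(l * ai for l, ai in zip(sizes, a)) <= W),
               key=lambda a: sum(ui * ai for ui, ai in zip(u, a)))
    if 1 - sum(ui * ai for ui, ai in zip(u, best)) >= -1e-9:
        break               # no column with negative reduced cost: stop
    columns.append(list(best))

print(round(res.fun, 4), columns)   # LP bound; rounding up gives an IP bound
```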
Benders decomposition
Benders algorithm :pencil2:
1) Fix some feasible \( \bar{y} \)
2) Update formulation of sub-problem
3) If sub-problem unbounded then add feasibility constraint for each unbounded ray \( r\):
\( \qquad r(b-B\bar{y}) \leq 0 \)
where \(r\) is a ray direction along which \(r(b-B\bar{y}) > 0 \), i.e. the sub-problem objective is unbounded
4) If sub-problem bounded then add optimality cut at each extreme point \(u\):
\( \qquad z \geq u(b - B\bar{y}) \)
5) Update LB, UB and repeat from (1) until \( LB = UB \)
Problem reformulation
Original problem
\( \min cx + dy \\ Ax + By \geq b \\ Dy \geq e \\ x \geq 0, y \in \mathbb{Z}^+\)
Reformulated problem
\(\equiv \min_{\bar{y} \in Y} \{ d\bar{y} + \min_{x \geq 0} \{ cx : Ax + B\bar{y} \geq b \} \} \)
\( \equiv \min_{\bar{y} \in Y} \{ d\bar{y} + \min_{x \geq 0} \{ cx : Ax \geq b - B\bar{y} \} \} \)
\( \equiv \min_{\bar{y} \in Y} \{ d\bar{y} + \max_{u \geq 0} \{ u(b - B \bar{y}) : \underbrace{uA \leq c}_{\text{feasible, ind. of } \bar{y}} \} \} \)
where \( Y = \{ y : Dy \geq e, y \geq0, y \in \mathbb{Z} \} \)
Equivalently:
\(\min\{d\bar{y} + z : Dy \geq e, z \geq -M,y \in \mathbb{Z}^+\} \)
(don't forget initial bound for \( z \) !!)
Lagrangian duality
The idea
Move difficult constraints to the objective function, weighted by the dual variables. This gives an upper bound (if maximizing). It may be a better upper bound than the LP relaxation.
Also by dualizing some constraints, the problem may become separable into independent sub problems.
Reformulation
Original problem
\( \max cx \\ Ax \leq b \\ Dx \leq d \\ x \in \mathbb{Z}^+ \)
Lagrangian relaxation
\( \max cx + \underbrace{u}_{\geq 0}(\underbrace{d - Dx}_{\geq 0}) \\ Ax \leq b\\ x \in \mathbb{Z}^+ \)
Process
1) Move some constraints to the objective function (dualize them) to create IP(u)
2) Get some \(u\) values (somehow!?)
3) Rearrange objective of IP(u) in terms of \( (c_i - u_i) x_i \) etc
4) Break up into independent sub-problems
5) Solve each sub-problem (may be simple/obvious)
6) Calculate upper bound for original problem
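The steps above can be sketched for a tiny illustrative knapsack-type problem: the knapsack constraint is dualised, the relaxation then separates per binary variable, and every \(u \geq 0\) yields an upper bound:

```python
# Lagrangian bound sketch: dualise w.x <= b in  max c.x, x binary.
from itertools import product

c, w, b = [10, 4, 14], [3, 1, 4], 4    # illustrative data

def lagrangian_bound(u):
    # z(u) = u*b + sum_i max(0, c_i - u*w_i): each x_i chosen independently
    return u * b + sum(max(0.0, ci - u * wi) for ci, wi in zip(c, w))

# exact IP value by enumeration, for comparison
z_ip = max(sum(ci * xi for ci, xi in zip(c, x))
           for x in product([0, 1], repeat=3)
           if sum(wi * xi for wi, xi in zip(w, x)) <= b)

for u in [0.0, 1.0, 2.0, 3.5, 5.0]:
    assert lagrangian_bound(u) >= z_ip   # every u >= 0 gives an upper bound
print(z_ip, min(lagrangian_bound(u) for u in [0.0, 1.0, 2.0, 3.5, 5.0]))
```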
Modelling
Basic techniques
Logical operations
Implication
"If binary option x selected, then binary option y also selected"
\( x \rightarrow y \)
\( x \leq y \equiv y - x \geq 0 \)
NAND
"Only 1 of the binary options can be selected"
\( \lnot(x \land y) \)
\( x + y \leq 1 \)
OR
"At least one binary option must be selected"
\( x \lor y \)
\( x + y \geq 1 \)
Disjunctive constraints (XOR)
\( y_1 = 1\) if respective constraint 1, otherwise 0
\( y_2 = 1 - y_1 \)
\(-M(1 - y_1) + p_L \leq p \leq p_U + M(1 - y_1) \\ -M(1 - y_2) + q_L \leq q \leq q_U + M(1 - y_2) \)
Standard problems
Knapsack problem
Maximize value of items packed in knapsack
\( \max \sum_i c_i x_i \\ \sum_i w_i x_i \leq K \)
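For a tiny binary instance (data illustrative) the model can be solved by enumeration:

```python
# Binary knapsack by brute force: max c.x s.t. w.x <= K, x in {0,1}^n.
from itertools import product

c, w, K = [60, 100, 120], [10, 20, 30], 50

best = max((x for x in product([0, 1], repeat=3)
            if sum(wi * xi for wi, xi in zip(w, x)) <= K),
           key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
value = sum(ci * xi for ci, xi in zip(c, best))
print(best, value)   # (0, 1, 1) with value 220
```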
Bin packing problem
Pack all items into the fewest possible number of bins
\( \min \sum_j y_j\\ \sum_j x_{ij} = 1, \forall i \\ \sum_i w_i x_{ij} \leq K_j , \forall j \\ x_{ij} \leq y_j , \forall i,j \)
Set selection problems
Cover problem
\( \sum_j x_{ij} \geq 1 \)
Partition problem
\(\sum_j x_{ij} = 1 \)
Packing problem
\( \sum_j x_{ij} \leq 1 \)
Minimum spanning tree
With exponential constraints
\(P_{sub}\) Subtour elimination
\( \min \sum_{ij} w_{ij} x_{ij} \\ \sum_{ij} x_{ij} = n-1 \\ \sum_{ij \in E(S)} x_{ij} \leq |S| - 1, \ \forall S \subset V \)
Force each subset of nodes to have strictly fewer edges than nodes
\( P_{cut} \) Cut method
\( \min \sum_{ij} w_{ij} x_{ij} \\ \sum_{ij} x_{ij} = n-1 \\ \sum_{ij \in d(S)} x_{ij} \geq 1, \ \forall \, \emptyset \neq S \subset V \)
Force each subset of nodes to have at least one edge connecting it to the complement of the nodes
\(P_{sub} \rightarrow P_{cut}\), but \( P_{cut} \not\rightarrow P_{sub}\)
So \( P_{sub} \) is a stronger/tighter model with better bound
Travelling salesman problem
Basic formulation
\( \min \sum_{ij} w_{ij} x_{ij} \\ \sum_{j} x_{ij} = 2, \ \forall i \)
plus SEC
Subtour elimination constraints
Dantzig
As above, \( P_{sub} \)
MTZ
\( u_i - u_j + (n-1)x_{ij} \leq n - 2 \equiv u_i \leq u_j + (n-1)(1-x_{ij}) - 1 \\ 1 \leq u_i \leq n-1 \)
GG
\( \sum_{j=1}^n g_{ji} - \sum_{j=2}^n g_{ij} = 1\\ 0 \leq g_{ij} \leq (n-1)x_{ij} \)
For each node, the incoming flow must exceed the outgoing flow by one unit; flow is only allowed on edges that are used
FGG4
Expand the x variables with indices for hops
Wong
Multi-commodity flows
Cutting stock problem
Cut \( n\) pieces of paper of sizes \( w_i \) from rolls of size \( W \);
Minimize number of rolls used.
Identical to the bin packing problem where each item has a variable (integer) number of instances to pack
Model every possible pattern (exponential number!) then optimise how many times each pattern is used
\( \min \sum_j y_j \text{(Use fewest possible stocks)}\\ \sum_j x_{ij} \geq b_i \text{ (demand met)}\\ \sum_i l_i x_{ij} \leq C y_j \text{ (stock size not exceeded)} \\ x_{ij} \in \mathbb{Z}^+, y_j \in \{ 0,1 \} \)
Extensions for non-linearity
Cost linearisation by parts :pencil2:
Break up \( f(x) \) at \( k \) breakpoints \( a_i \)
\( \sum_i y_i = 1 \\ \sum_i \lambda_i = 1 \\ \lambda_1 \leq y_1 \\ \lambda_i \leq y_{i-1} + y_i \\ \lambda_k \leq y_{k-1} \\ x = \sum_i \lambda_i a_i \)
\( \lambda_i \geq 0, i \in \{1,\dots,k\} \\ y_i \in \{ 0, 1 \} , i = 1 , \dots, k-1 \)
\( \min \sum_i \lambda_i f(a_i) \)
Stochastic programming
Simply separate the current decision variables from the future stochastic variables; index the stochastic variables by scenario; add constraints for each scenario; weight the costs of each scenario by its probability.
Possible solutions
Perfect information solution (future is known; find optimal solution for each scenario)
Stochastic solution (break up scenarios; weight objectives by probability)
Average solution (just assume average outcomes)
Usually PIS > SS > AS
(but not guaranteed)
Values
EVPI (Expected value of perfect information)
Perfect information solution - stochastic solution
Value of stochastic solution
Objective value of stochastic - objective value of average
Basics
Types of optimisation
LP - linear programming; objective and constraints are all linear
COP: combinatorial optimisation problem, there are only finitely many feasible solutions
IP: integer programming problem, all unknown variables are required to be integers
MIP: mixed integer problem, there is a mix of integer and non-integer variables
MILP: mixed integer linear problem, a MIP with all linear constraints and objective