4.5 Sensitivity Example with 2 Variables

Two-variable Problem

Consider the following problem with two variables.

Minimize

- 5 x1 - 7 x2 OBJ

subject to

x1 + x2 ≤ 4 ROW1

2 x1 + x2 ≤ 5 ROW2

- x1 + 4 x2 ≥ 1 ROW3


x1 ≥ 0, x2 ≥ 0

The geometric representation of this problem appears in Figure 3.

Figure 3: Geometrical Representation of 2-variable Problem

The shaded area OABC represents the feasible region. The line PQ represents the objective function. The optimal solution to the problem lies at vertex C where:

x1 = 0

x2 = 4

OBJ = - 28.0
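As a quick numerical check (an illustrative sketch using SciPy's linprog, which is not part of the original text), the same solution can be recovered by a solver. linprog expects every inequality in "<=" form, so ROW3 (-x1 + 4x2 >= 1) must first be rewritten as x1 - 4x2 <= -1:

```python
from scipy.optimize import linprog

c = [-5, -7]                      # OBJ: minimise -5 x1 - 7 x2
A_ub = [[1, 1],                   # ROW1: x1 + x2 <= 4
        [2, 1],                   # ROW2: 2 x1 + x2 <= 5
        [1, -4]]                  # ROW3 rewritten: x1 - 4 x2 <= -1
b_ub = [4, 5, -1]
bounds = [(0, None), (0, None)]   # x1 >= 0, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)    # [0. 4.]  -> vertex C
print(res.fun)  # -28.0
```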


Reduced Cost

Variable x1 is therefore non-basic at its lower bound of zero in the optimal solution. Suppose we force x1 off its lower bound by 0.5 by adding the bound:

x1 ≥ 0.5

The geometric representation of the modified problem is shown in Figure 4.

Figure 4: 2-variable Problem with Bound Added

The optimal solution to the modified problem now lies at the "moved" vertex C' where:

x1 = 0.5

x2 = 3.5

OBJ = - 27.0

Note that forcing the non-basic variable off its lower bound has worsened the value of the objective function (it is greater and we are minimising). We have seen that:

OBJ with variable x1 at its lower bound            = - 28.0

OBJ with x1 forced off its lower bound by 0.5 = - 27.0

We can therefore define the unit cost of forcing the non-basic variable x1 off its lower bound as:

Change in objective function value / Change in x1 = 1.0 / 0.5 = 2.0

The unit cost of forcing the non-basic variable x1 off its bound is therefore +2.0. This value is called the reduced cost of variable x1: for every unit we force x1 off its bound, the objective function value will change by +2.0.
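This calculation can be sketched numerically: solve the problem twice, with and without the added bound x1 >= 0.5, and divide the change in the objective by the change in x1 (again an illustrative SciPy sketch, with ROW3 rewritten in <= form):

```python
from scipy.optimize import linprog

c = [-5, -7]
A_ub = [[1, 1], [2, 1], [1, -4]]  # ROW1, ROW2, ROW3 (as x1 - 4 x2 <= -1)
b_ub = [4, 5, -1]

base   = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
forced = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0.5, None), (0, None)], method="highs")

reduced_cost_x1 = (forced.fun - base.fun) / 0.5
print(reduced_cost_x1)            # (-27.0 - (-28.0)) / 0.5 = 2.0

# With method="highs" the solver also reports reduced costs directly,
# as the marginals of the variables' lower bounds:
print(base.lower.marginals[0])    # 2.0 for the non-basic variable x1
```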

It is important to note that all sensitivity analysis information addresses individual changes to the problem only. For instance, if we force one non-basic variable off its bound to form a new problem, the solution will be worsened by its reduced cost per unit. We cannot then force another non-basic variable off its bound and measure the detriment to the objective function using the reduced cost information from the original problem. This is true of all the sensitivity analysis information presented here. Therefore, although sensitivity analysis is extremely useful, it does have its limitations.


Alternative Optimal Solutions

Let us now return to our original problem and examine the effect of lowering the cost coefficient in the objective function of the non basic variable x1 by its reduced cost of 2.0. The objective function has now changed from:

Minimize - 5 x1 - 7 x2

to

Minimize - 7 x1 - 7 x2

The geometric representation of the problem now appears as in Figure 5.

Figure 5: Alternative Optimal Solutions

The line PQ in Figure 5 represents the original objective function and the line P'Q' the new objective. Note that there is now a choice of solutions at either vertex C or B. The solution at vertex B is:

x1 = 1.0

x2 = 3.0

OBJ = - 28.0

Note that the variable x1 has now become basic in the optimal solution. Therefore, lowering the cost coefficient of the non-basic variable x1 by its reduced cost has made it basic. This is a general rule: the reduced cost of a non-basic variable also represents the change in its cost coefficient required for it to become basic.

From this last property of reduced costs, note that the reduced cost of a basic variable is zero. This is logical: no change in its cost coefficient is required for it to become basic, because it already is. It is also possible for a non-basic variable to have a reduced cost of zero. This indicates that we can make the variable basic (and another suitable variable non-basic) with no change in the objective function value. Thus, a non-basic variable with zero reduced cost indicates that an alternative optimal solution to the problem exists.
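The tie between vertices B and C can also be confirmed numerically (an illustrative sketch, not part of the text; the solver returns one of the alternative optima, and evaluating the new objective at both vertices shows they are equally good):

```python
from scipy.optimize import linprog

c = [-7, -7]                      # cost of x1 lowered by its reduced cost 2.0
A_ub = [[1, 1], [2, 1], [1, -4]]  # ROW1, ROW2, ROW3 (as x1 - 4 x2 <= -1)
b_ub = [4, 5, -1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.fun)                    # -28.0, achieved at either B or C

# Evaluate the new objective at both vertices:
obj_C = -7 * 0.0 - 7 * 4.0        # vertex C = (0, 4)
obj_B = -7 * 1.0 - 7 * 3.0        # vertex B = (1, 3)
print(obj_C, obj_B)               # -28.0 -28.0
```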


Shadow Prices

Let us now change the objective function of our original problem in this section to:

Minimize - 5 x1 - 4 x2

Also, consider the ROW2 constraint:

2 x1 + x2 ≤ 5

Suppose we were to increase the right hand side of this constraint by 0.5 to 5.5. We refer to this as "relaxing" the constraint by 0.5 because it gives the model greater freedom. The geometric representation of the problem now appears in Figure 6.

Figure 6: Relaxing a Constraint

The original ROW2 constraint is represented by the line EF in Figure 6. The relaxed version of ROW2 is represented by the line E'F'. Note that the feasible region to the problem has now changed from OABC to OA'B'C because of the relaxed constraint. The dashed line represents the objective function. Note that the optimal solution to the original problem lies at vertex B. The solution at this point is:

x1 = 1.0

x2 = 3.0

OBJ = - 17.0

Because ROW2 passes through the optimal vertex (it is a binding constraint) and we have relaxed it, the optimal solution to the new problem (ROW2 relaxed) now lies at the "moved" vertex B'. The solution at this vertex is:

x1 = 1.5

x2 = 2.5

OBJ = - 17.5

Note that relaxing constraint ROW2 has improved the value of the objective function (it has decreased and we are minimising). We have seen that:

OBJ with ROW2 unrelaxed        = - 17.0

OBJ with ROW2 relaxed by 0.5 = - 17.5

We can therefore define the unit cost of relaxing constraint ROW2 as:

Change in objective function value / Change in RHS of ROW2 = -0.5 / 0.5 = -1.0

The unit cost of relaxing constraint ROW2 is therefore -1.0. This value is called the shadow price of ROW2: for every unit we relax the RHS of ROW2, the objective function value will change by -1.0. The value is symmetric, in that decreasing ("tightening") the right hand side of ROW2 will change the objective function value by +1.0 per unit.
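The shadow price can be sketched numerically in the same way as the reduced cost: solve the problem with the original and the relaxed RHS of ROW2 and divide the change in the objective by the change in the RHS (an illustrative SciPy sketch, with ROW3 rewritten in <= form):

```python
from scipy.optimize import linprog

c = [-5, -4]                      # the modified objective of this subsection
A_ub = [[1, 1], [2, 1], [1, -4]]  # ROW1, ROW2, ROW3 (as x1 - 4 x2 <= -1)

base    = linprog(c, A_ub=A_ub, b_ub=[4, 5, -1],
                  bounds=[(0, None), (0, None)], method="highs")
relaxed = linprog(c, A_ub=A_ub, b_ub=[4, 5.5, -1],
                  bounds=[(0, None), (0, None)], method="highs")

shadow_price_row2 = (relaxed.fun - base.fun) / 0.5
print(shadow_price_row2)              # (-17.5 - (-17.0)) / 0.5 = -1.0

# With method="highs" the solver reports shadow prices directly,
# as the marginals of the inequality constraints:
print(base.ineqlin.marginals[1])      # -1.0 for the binding ROW2
print(base.ineqlin.marginals[2])      #  0.0 for the non-binding ROW3
```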

Note that, for a non-binding constraint such as ROW3 in the problem above, relaxing or tightening it has no effect on the optimal solution: although the feasible region will be changed, the constraint does not pass through the optimal vertex B, so relaxing or tightening it will not "move" vertex B. The solution will therefore remain at vertex B. Consequently, the shadow price of a non-binding constraint is zero.

On a more technical note, consider a constraint, say ROW1 in our model, with its slack variable, say e1, added:

x1 + x2 + e1 = 4 ROW1

If ROW1 is binding, the slack variable e1 will be non-basic at zero. Suppose we increase the right hand side of ROW1 by 1 unit, i.e.

x1 + x2 ≤ 4 + 1 ROW1

If ROW1 is to remain binding at its new right hand side, then x1 + x2 = 5, which in terms of the original equation means:

x1 + x2 - 1 = 4 ROW1

The slack variable e1 must therefore technically become -1. Now, the reduced cost of a non-basic variable indicates how much the objective function will be worsened per unit increase of the variable from its lower bound. Equally, it measures the improvement in the objective function value if the non-basic variable could be decreased below its lower bound, i.e. in this case to -1. The shadow price of a binding constraint is therefore actually the reduced cost of its non-basic slack variable.

If the constraint is non-binding, its slack variable will be greater than zero and therefore basic. The reduced cost of a basic variable is zero. The shadow price of a non-binding constraint is therefore zero.
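This correspondence can also be checked numerically. In the sketch below (a hypothetical reformulation for illustration, not from the text), ROW2 of the -5 x1 - 4 x2 problem is rewritten as an equality with an explicit slack variable e1; the solver's reduced cost for e1 then matches the shadow price of ROW2 up to sign, where the sign simply reflects that increasing e1 by one unit is the same as tightening the RHS by one unit:

```python
from scipy.optimize import linprog

# ROW2 rewritten with an explicit slack variable e1:
#   2 x1 + x2 + e1 = 5,  e1 >= 0
# Variables ordered (x1, x2, e1).
c = [-5, -4, 0]
A_eq = [[2, 1, 1]]                # ROW2 as an equality with slack
b_eq = [5]
A_ub = [[1, 1, 0],                # ROW1: x1 + x2 <= 4
        [1, -4, 0]]               # ROW3 rewritten: x1 - 4 x2 <= -1
b_ub = [4, -1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")

print(res.x[2])                   # 0.0 -- e1 is non-basic, so ROW2 is binding
print(res.lower.marginals[2])     # 1.0 -- reduced cost of the slack e1,
                                  #        i.e. the shadow price -1.0 up to sign
```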

Comments