
What Can Integer Programming Do?

This article picks up from Non-Linear Optimization using Extensions to LP and continues an exploration of what can be achieved using extensions to Linear Programming.

Integer Programming (IP) is a very powerful technique for tackling problems which have a mixture of continuous and discrete activities, notably those involving scheduling.

Unlike Linear Programming (LP), IP cannot simply be treated as a "black box". Problems must be specified carefully and the analyst must experiment with the IP code to find the best tactics to use. This article explores what IP can do and explains why some approaches to formulating IP problems are vastly more effective than others.

Is IP a Suitable Technique?

The first issue which must be addressed is whether IP is a suitable technique for tackling the problem. In many respects the criteria are similar to those for deciding whether the problem is amenable to LP. That is, there must be:

  • many potentially acceptable solutions;
  • some means of assessing the quality of alternative solutions;
  • some interconnectedness between the variable elements of the system.

In addition we must answer a crucial question:

  • how far from the LP optimum do we have to move to reach an integer solution?

Roughly speaking, the less distance we have to move from the LP optimum to reach integer solutions, the more likely it is that IP will be a suitable technique for finding a good, and possibly the best, solution.

Thus IP is good at solving problems which involve "rounding" the LP solution. This is usually true of problems in which a relatively small proportion of decision variables must be integer. It also applies to problems in which the integer variables are semi-continuous or take integer values greater than 1. Many problems involving special ordered sets are also relatively easy to solve, but some are not.

IP is less good where there is a significant "combinatorial" element, i.e. where some combination of items must be chosen and the costs (and feasibility) of the combination are more a function of the combination than of the component items. The Travelling Salesman Problem is a classic example. The problem is to find the shortest route visiting a set of points. This is hard to solve because the length of a route is essentially a function of the sequence of points visited. The number of such sequences of points rises as n! (factorial n) where n is the number of points.
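For a handful of points the n! sequences can simply be enumerated. This is a minimal brute-force sketch (illustrative only, and only practicable for tiny n):

```python
import itertools
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(points):
    """Try all (n-1)! orderings starting from point 0. Optimal, but the
    enumeration grows factorially, which is exactly why the TSP is hard."""
    n = len(points)
    best_rest = min(itertools.permutations(range(1, n)),
                    key=lambda rest: tour_length(points, (0,) + rest))
    best = (0,) + best_rest
    return best, tour_length(points, best)
```

For four points on a unit square this finds the perimeter tour of length 4; for twenty points the same approach would need to examine roughly 1.2 x 10^17 sequences.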

Travelling Salesman Offers Hope

The Travelling Salesman Problem (TSP) has been the subject of much research. Somewhat surprisingly, IP-based techniques have been developed which are able to find the optimum solution routinely to problems with several hundred points. Moreover, they are able to prove that they have found the optimum solution. Unfortunately, IP is only one component among several techniques which are used to find the optimum solution. As a result, it is not possible to generalise from success with the TSP and apply IP software to solve other combinatorial problems.

Nonetheless, certain techniques which are used in solving TSPs do have wider applicability. Chief among these is the concept of adding constraints to the model only when they are needed. The standard formulation which is used for the TSP permits solutions in which the salesman, rather than visiting each point in turn, makes a number of disconnected subtours. That is, rather than going 1, 2, 4, 5, 6, 3, 1 the model will propose two separate tours: 1, 2, 3, 1 and 4, 5, 6, 4 (see Figure 1).

Figure 1: Solutions to Travelling Salesman Problem

A subtour elimination constraint prohibits one of the subtours (e.g. 4, 5, 6, 4). The problem with the subtour elimination constraints is that, as with the number of possible routes, the number of such constraints grows exponentially with the number of points. It is therefore totally impracticable to include all such constraints in the model. Instead, once a solution has been found, a routine is called which identifies subtours and then adds the appropriate subtour elimination constraint to the matrix. The problem is re-solved and the process iterates.
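The detection step of that routine can be sketched in a few lines. Here the proposed solution is represented as a hypothetical successor map (each point mapped to the next point visited); the solver interface and the constraints themselves are omitted:

```python
def find_subtours(successor):
    """Split a successor map {point: next_point} into its disjoint subtours.
    A single tour covering all points comes back as one list; otherwise each
    subtour found would give rise to a subtour elimination constraint."""
    unvisited = set(successor)
    tours = []
    while unvisited:
        tour, point = [], min(unvisited)
        while point in unvisited:
            unvisited.remove(point)
            tour.append(point)
            point = successor[point]
        tours.append(tour)
    return tours
```

On the solution from Figure 1 (1, 2, 3, 1 and 4, 5, 6, 4) this returns two subtours; a constraint prohibiting one of them would be added and the model re-solved.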

Cutting-Stock Problems

A similar process is used in tackling so-called cutting-stock problems. These involve working out how best to cut some available stock of material to meet demand for specific sizes. Such problems may be one-dimensional (slitting a continuous roll of paper, carpet or film stock for instance into rolls of various widths) or two-dimensional (cutting specific shapes from sheets of metal or plastic).

The technique of dynamic column generation has proved successful in tackling such problems. There is an enormous number of possible cutting patterns which could be used. A set of candidate patterns is developed (generally these will be those with minimum wastage) and the LP problem is solved to find that combination of cutting patterns which meets the demand at minimum cost.
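For the one-dimensional case the candidate patterns can be enumerated directly. This is a sketch under simplifying assumptions: the stock is a roll of a given width, and only maximal patterns (those whose offcut is too narrow to take any further piece) are generated, since wasteful patterns are dominated:

```python
def patterns(widths, roll_width):
    """Enumerate maximal cutting patterns as tuples of counts per width.
    A pattern is maximal if the remaining offcut cannot fit any piece."""
    def extend(i, remaining, counts):
        if i == len(widths):
            if remaining < min(widths):  # maximal: nothing else fits
                yield tuple(counts)
            return
        for k in range(remaining // widths[i] + 1):
            counts.append(k)
            yield from extend(i + 1, remaining - k * widths[i], counts)
            counts.pop()
    yield from extend(0, roll_width, [])
```

The LP that chooses how many times to use each pattern, and the pricing routine that generates further columns, are not shown here.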

In some cases the problem will then be solved to an integer optimum using the existing set of cutting patterns. In other cases the next stage may be an "external pricing routine" which seeks to identify other cutting patterns which would have been used in the LP solution if they had been available. These are added to the problem and it is re-solved. This process is repeated and finally the problem is solved to an integer optimum using the augmented set of cutting patterns.

Aircrew Scheduling

Aircrew scheduling is another class of problems which has been tackled successfully using column generation techniques. Each flight in an airline's timetable must be staffed with the appropriate aircrew. There are complex rules which govern the working hours of the various categories of staff and determine how much it costs to deploy them on a particular tour. The problem is to determine the minimum-cost set of tours which ensures that all aircraft are satisfactorily crewed.

This is tackled by generating columns which represent potential tours and combining them in an enormous LP. Such matrices are among the largest in routine use and may exceed 100,000 rows and columns. Lest readers consider that cost minimization is dehumanising, it is reported that crews prefer optimized schedules because they minimize time spent hanging around airports waiting for the next flight.
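On a toy instance the underlying structure, choosing a minimum-cost set of tours that covers every flight, can be shown by exhaustive search; real instances depend on column generation and very large LPs, but the objective is the same. The flight and tour data here are invented:

```python
from itertools import combinations

def cheapest_cover(flights, tours):
    """Exhaustively find a minimum-cost subset of tours covering all flights.
    tours is a list of (cost, set_of_flights) pairs. Viable only for tiny
    instances; it stands in for the LP over generated columns."""
    best = None
    for r in range(1, len(tours) + 1):
        for combo in combinations(tours, r):
            if set().union(*(t for _, t in combo)) >= set(flights):
                cost = sum(c for c, _ in combo)
                if best is None or cost < best[0]:
                    best = (cost, combo)
    return best
```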

Intersections of Decisions

Underlying the difficulties of finding solutions to these problems lie some fundamental theoretical results. These shed light on what is going on and help guide us towards formulations for IP problems which are more likely to work.

The central problem in integer programming arises from the intersection of decisions. In other words, you make life hard for the IP code where some activity arises from the intersection of two or more decisions, each of which is represented by a separate decision variable. If you use a single decision variable to represent the combination of decisions, the problems disappear.

This is precisely the approach adopted with the cutting stock problems. The decision variables which are used embed the "hard" part of the problem. The cutting pattern is taken as given and the decision is then only how many times to replicate that pattern. With the TSP the difficulties arise because the decision variables each represent only a tiny fraction of the complete set of decisions whose intersection determines the cost of the route.

A Machine Scheduling Example

The effect of different formulations can be seen with another example. This is a fragment of a scheduling problem from a flexible manufacturing system. It is concerned with deciding when each of a number of tasks should take place.

Such a problem can be solved approximately by imposing a structure of time periods and using binary decision variables (preferably in Special Ordered Sets of Type 1) for whether a task starts at the beginning of a time period. If the time periods could be chosen such that each task took a whole number of time periods, this would yield the optimum solution.

But it tends not to be practicable to use such fine time periods. Various approaches can be taken to overcome this. Most of these involve "approximating" the problem. As a result the best solution to the approximate problem may not solve the true problem or, if it does, it may not be the best solution to the true problem. Nonetheless, such approaches can be very effective and they are probably the best way to tackle such problems.

If we want to solve the true problem exactly we need to use decision variables which represent the exact time when each task starts. Such a formulation takes for ever to solve (at least on my computer!).

Yet if you specify the sequence of tasks, the problem reduces to a small LP which takes a fraction of a second to solve. Suppose that there are 6 tasks; then there are only 6! = 720 possible sequences. So the problem can be solved to global optimality by solving a few hundred small LPs and selecting the best solution.
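The enumeration can be sketched as follows. For simplicity the fixed-sequence sub-problem here is reduced to a trivial tardiness calculation on a single machine rather than an LP, but the principle, enumerate the sequences and evaluate each one cheaply, is the same. The task data are hypothetical:

```python
from itertools import permutations

def best_sequence(tasks):
    """tasks: {name: (processing_time, due_date)}. Enumerate every sequence
    and keep the one with least total tardiness. With the sequence fixed,
    evaluation is easy, just as the fixed-sequence LP is."""
    def tardiness(order):
        t = total = 0
        for name in order:
            p, due = tasks[name]
            t += p
            total += max(0, t - due)
        return total
    return min(permutations(tasks), key=tardiness)
```

With 6 tasks this examines all 720 sequences in a fraction of a second; with 20 tasks the same enumeration would be hopeless, which is the limitation noted below.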

What is happening here is that the decisions as to when to start each task are intersecting in the constraints. By specifying a particular sequence of tasks we tie down those intersections into a single decision variable: whether to use that sequence or not. As a result, we can convert an intractable IP into a series of simple LPs. While such a transformation rarely achieves such dramatic results (consider the cases of 10 or 20 tasks), it shows the power of different formulations.

Related articles include Non-Linear Optimization using Extensions to LP and How to Succeed with Integer Programming. To find other articles, refer to the MP in Action page.
