The Travelling Salesman Problem (TSP) has been the subject of much research. IP-based techniques have been developed which are able to find the optimum solution routinely to problems with several hundred points. Moreover, they are able to prove that they have found the optimum solution. Unfortunately, IP is only one component among several techniques which are used to find the optimum solution. As a result, it is not possible to generalise from success with the TSP and apply IP software to solve other combinatorial problems. Nonetheless, certain techniques which are used in solving TSPs do have wider applicability. Chief among these is the concept of adding constraints to the model only when they are needed.

The standard formulation which is used for the TSP permits solutions in which the salesman, rather than visiting each point in turn, makes a number of disconnected subtours. That is, rather than going 1, 2, 4, 5, 6, 3, 1 the model will propose two separate tours: 1, 2, 3, 1 and 4, 5, 6, 4 (see Figure 7).

Figure 7: Solutions to the Travelling Salesman Problem

A subtour elimination constraint prohibits one of the subtours (e.g. 4, 5, 6, 4). The problem with subtour elimination constraints is that, as with the number of possible routes, the number of such constraints grows exponentially with the number of points. It is therefore totally impracticable to include them all in the model. Instead, once a solution has been found, a routine is called which identifies subtours and then adds the appropriate subtour elimination constraints to the matrix. The problem is re-solved and the process iterates.
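The subtour-identification routine can be sketched as follows. This is an illustrative Python fragment, not the code of any particular IP package: the successor-array representation of the solution is an assumption made for the sketch.

```python
def find_subtours(succ):
    """Decompose a TSP solution into its subtours.

    succ[i] is the city visited immediately after city i, as read
    from the current IP solution.  A valid tour yields one cycle
    covering every city; anything else reveals subtours, each of
    which calls for a subtour elimination constraint.
    """
    unvisited = set(range(len(succ)))
    tours = []
    while unvisited:
        start = min(unvisited)
        cycle = [start]
        unvisited.remove(start)
        nxt = succ[start]
        while nxt != start:
            cycle.append(nxt)
            unvisited.remove(nxt)
            nxt = succ[nxt]
        tours.append(cycle)
    return tours

# The disconnected solution of Figure 7: 1,2,3,1 and 4,5,6,4
# (cities numbered from 0 here).
print(find_subtours([1, 2, 0, 4, 5, 3]))  # two subtours found
```

Each subtour S found this way would then be cut off by adding a constraint limiting the arcs used within S to at most |S| - 1, after which the problem is re-solved.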
A similar process is used in tackling so-called cutting-stock problems. These involve working out how best to cut some available stock of material to meet demand for specific sizes. Such problems may be one-dimensional (slitting a continuous roll of paper, carpet or film stock for instance into rolls of various widths) or two-dimensional (cutting specific shapes from sheets of metal or plastic). The technique of dynamic column generation has proved successful in tackling such problems. There is an enormous number of possible cutting patterns which could be used. A set of candidate patterns is developed (generally these will be those with minimum wastage) and the LP problem is solved to find that combination of cutting patterns which meets the demand at minimum cost. In some cases the problem will then be solved to an integer optimum using the existing set of cutting patterns. In other cases the next stage may be an "external pricing routine" which seeks to identify other cutting patterns which would have been used in the LP solution if they had been available. These are added to the problem and it is re-solved. This process is repeated and finally the problem is solved to an integer optimum using the augmented set of cutting patterns.
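For the one-dimensional case, the external pricing routine is classically a knapsack problem over the dual prices of the demand rows. The sketch below assumes those dual prices are simply given; the function name and the data are illustrative, not from any real model.

```python
def price_new_pattern(widths, duals, stock_width):
    """Knapsack pricing step for 1-D cutting stock (a sketch).

    Given the dual price of each demand row from the current LP
    solution, find the cutting pattern with the greatest total dual
    value that fits within the stock width.  If that value exceeds 1
    (the cost of one stock roll), the pattern has negative reduced
    cost and is worth adding to the LP as a new column.
    """
    # best[w] = (value, pattern) achievable within width w
    best = [(0.0, [0] * len(widths)) for _ in range(stock_width + 1)]
    for w in range(1, stock_width + 1):
        for i, wi in enumerate(widths):
            if wi <= w:
                value, pattern = best[w - wi]
                if value + duals[i] > best[w][0]:
                    new_pattern = pattern[:]
                    new_pattern[i] += 1
                    best[w] = (value + duals[i], new_pattern)
    value, pattern = best[stock_width]
    return (pattern, value) if value > 1.0 else (None, value)

# Illustrative data: 100cm stock rolls, demand for widths of 45, 36
# and 31cm, with dual prices taken from a hypothetical LP solution.
pattern, value = price_new_pattern([45, 36, 31], [0.5, 0.4, 0.35], 100)
```

Any pattern found this way is appended to the matrix and the LP re-solved; the process stops when no pattern prices out, i.e. the routine returns None.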
Aircrew scheduling is another class of problems which has been tackled successfully using column generation techniques. Each flight in an airline's timetable must be staffed with the appropriate aircrew. There are complex rules which govern the working hours of the various categories of staff and determine how much it costs to deploy them on a particular tour. The problem is to determine the minimum-cost set of tours which ensures that all aircraft are satisfactorily crewed. This is tackled by generating columns which represent potential tours and combining them in an enormous LP. Such matrices are among the largest in routine use and may exceed 100,000 rows and columns. Lest readers consider that cost minimisation is dehumanising, it is reported that crews prefer optimised schedules because they minimise time spent hanging around airports waiting for the next flight.
Underlying the difficulties of finding solutions to these problems lie some fundamental theoretical results. These shed light on what is going on and help guide us towards formulations for IP problems which are more likely to work. The central problem in integer programming arises from the intersection of decisions. In other words, you make life hard for the IP code where some activity arises from the intersection of two or more decisions, each of which is represented by a separate decision variable. If you use a single decision variable to represent the combination of decisions, the problems disappear. This is precisely the approach adopted with the cutting stock problems. The decision variables which are used embed the "hard" part of the problem. The cutting pattern is taken as given and the decision is then only how many times to replicate that pattern. With the TSP the difficulties arise because the decision variables each represent only a tiny fraction of the complete set of decisions whose intersection determines the cost of the route.
The effect of different formulations can be seen with another example. This is a fragment of a scheduling problem from a flexible manufacturing system. It is concerned with deciding when each of a number of tasks should take place. Such a problem can be solved approximately by imposing a structure of time periods and using binary decision variables (preferably in Special Ordered Sets of Type 1) for whether a task starts at the beginning of a time period. If the time periods could be chosen such that each task took a whole number of time periods, this would yield the optimum solution. But it tends not to be practicable to use such fine time periods. Various approaches can be taken to overcome this. Most of these involve "approximating" the problem. As a result the best solution to the approximate problem may not solve the true problem or, if it does, it may not be the best solution to the true problem. Nonetheless, such approaches can be very effective and they are probably the best way to tackle such problems. If we want to solve the true problem exactly we need to use decision variables which represent the exact time when each task starts. Such a formulation takes for ever to solve (at least on my computer!). Yet if you specify the sequence of tasks, the problem reduces to a small LP which takes a fraction of a second to solve. Suppose that there are 6 tasks; then there are only 6! = 720 possible sequences. So the problem can be solved to global optimality by solving a few hundred small LPs and selecting the best solution. What is happening here is that the decisions as to when to start each task are intersecting in the constraints. By specifying a particular sequence of tasks we tie down those intersections into a single decision variable: whether to use that sequence or not. As a result, we can convert an intractable IP into a series of simple LPs. 
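The enumerate-and-solve approach can be sketched as below. For the single-machine fragment assumed here, the timing LP for a fixed sequence solves in closed form (start each task as early as its release date and its predecessor's finish allow), so no LP solver is needed; the cost model and data are illustrative assumptions, not the original problem's.

```python
from itertools import permutations

def schedule_cost(sequence, durations, release):
    """Cost the timing problem for one fixed task sequence.

    With the sequence given, the 'LP' is trivial: each task starts
    at the later of its release date and the previous task's finish.
    The schedule is costed by total completion time (an assumed,
    illustrative objective).
    """
    t, total = 0, 0
    for task in sequence:
        t = max(t, release[task]) + durations[task]
        total += t
    return total

durations = [3, 2, 4, 1, 5, 2]   # illustrative data for 6 tasks
release   = [0, 4, 0, 6, 0, 2]

# 6! = 720 sequences, each costed in a fraction of a second
best = min(permutations(range(6)),
           key=lambda s: schedule_cost(s, durations, release))
```

Each candidate sequence plays the role of a single combined decision variable: the intersecting start-time decisions are tied down inside it, leaving only the choice of which sequence to use.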
While such a transformation rarely achieves results as dramatic as this (consider the cases of 10 or 20 tasks), it shows the power of different formulations.
