December 1998

By Robert Fourer

Software for building, solving, and analyzing optimization (or mathematical programming) problems has seen a surge of development over the past decade. The attractiveness of the optimization models addressed by this software can be explained in part by the obvious appeal of minimizing costs or maximizing profits in operations research and management science applications. More broadly, the concept of an objective function has proved to be a valuable modeling tool in describing the desired solutions for a range of planning and operational problems. It can be a lot easier to say that a certain combination of costs must be minimized than to formulate constraints that rule out every possibility for uneconomical behavior. Software for optimization does tend, however, to be more complicated to buy and configure than comparable packages for simulation, statistics or decision analysis.

The market for optimization software encompasses two distinct kinds of products: solvers that implement various optimizing algorithms, and modeling systems that support the development and application of mathematical programs. A product of either kind is typically available for use with a number of products of the other, through separate purchase or various package deals. This arrangement has given modelers a valuable degree of flexibility in combining methods and interfaces to meet their needs, but at the cost of some additional complexity in connecting software from different sources.

This survey of software for optimization is naturally also divided into two main sections, which consider new developments and likely future trends in solvers and in modeling systems. A third section describes developments in Internet optimization services, which involve optimization software of both kinds. To keep the presentation to a reasonable length, it is focused on features rather than on particular implementations and products.
Pointers to more specific information are given at the end.

Solvers

Linear programming would seem to be the least likely area for advances in solver technology. Solvers based on the simplex method have been under development for over 40 years, and even the interior-point codes are by now fairly standard in design. Yet substantial reductions in computing time continue to be reported, particularly for the largest LPs. Apparently, the combination of faster computers, cheaper memory and easier-to-use modeling systems is encouraging a steady increase in the size of linear programs that people want to solve; the developers of optimization codes no longer view hundreds of thousands of variables as anything unusual. Jumps in problem size can subtly affect the economics of an algorithm, so that previously ineffective ideas become key refinements. Thus, even though the underlying algorithms are well known, large-scale LP codes are sufficiently complex in detail to provide a continuing stream of ideas worth testing on new and larger problems.

Linear programming with integer variables has benefited from comparable efficiency gains, as well as from improvements in formulation through the automatic generation of cutting planes. Yet arguably the biggest news in this area is being provided by the family of solution techniques collectively known as constraint programming (or constraint logic programming). Like the established branch-and-bound solvers for integer programming, constraint programming solvers rely on implicit enumeration of a tree of solutions. Their advantage lies in being able to deal directly with the highly combinatorial constraints of scheduling, sequencing, assignment, and configuration problems, avoiding often awkward reformulations into integer programs. The decision in a constraint program can be the allocation of a different job to each of a sequence of schedule slots, or the choice of crews as subsets of employees.
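To make the branch-and-bound mechanics concrete, here is a toy search for a 0-1 knapsack problem, sketched in plain Python under assumptions of my own (the instance and costs are invented, and production integer programming codes add presolve, cutting planes and much more). It enumerates the tree of take/leave decisions implicitly, pruning any node whose bound from the fractional (LP) relaxation cannot beat the incumbent:

```python
# Toy branch-and-bound for a 0-1 knapsack: implicit enumeration with
# a bound from the LP (fractional) relaxation.  A sketch only.

def knapsack_bb(values, weights, capacity):
    # consider items in decreasing value-per-weight order
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def bound(k, cap, val):
        # LP relaxation: fill greedily, allowing one fractional item
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]; val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    best = 0
    def branch(k, cap, val):
        nonlocal best
        if k == len(order):
            best = max(best, val); return
        if bound(k, cap, val) <= best:
            return                        # prune: bound can't beat incumbent
        i = order[k]
        if weights[i] <= cap:             # branch: take item i ...
            branch(k + 1, cap - weights[i], val + values[i])
        branch(k + 1, cap, val)           # ... or leave it out

    branch(0, capacity, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))   # prints 220
```

The same skeleton underlies real solvers; what distinguishes them is how the bound is computed and how the next node and branching variable are chosen.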
Decisions can be subject to "if" and "either-or" conditions, or can be restricted according to the number of relationships of a certain kind that are satisfied. As a result, formulations conducive to an efficient tree search are often easier to discern, and large numbers of numerical decision variables — especially zero-one variables — can often be avoided. To achieve these benefits, however, current constraint programming methods give up the opportunity to use bounds and other information from the solutions to LP relaxations. Thus a branch-and-bound approach is still the preferred way of solving problems for which integer-valued variables are a natural formulation, or for which there are known formulations giving particularly tight bounds.

Constraint programming techniques have been developed primarily as a topic in artificial intelligence, within the field of computer science, and until recently have been nearly invisible to practitioners of operations research. (The term "constraints" in AI terminology refers to any description of a problem in terms of the properties that a solution must satisfy, rather than in terms of steps required to construct a solution — a distinction taken for granted by anyone who has studied optimization in OR/MS.) The commercialization of constraint technology in recent years has brought to bear many of the same practical concerns familiar in integer programming, leading to a greater overlap with integer programming techniques. Indeed, the tree-search approaches inherent in constraint programming and integer programming have more in common with each other than either has in common with heuristic solution-search techniques such as tabu search or simulated annealing. It would not be surprising to see a more complete merging of tree-search techniques into powerful general-purpose codes that combine the best of the integer programming and constraint programming ideas.
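As a minimal sketch of the constraint-programming style (hypothetical instance, plain Python; commercial constraint solvers use far stronger propagation), the fragment below assigns a different job to each schedule slot by depth-first search. The "all different" requirement is enforced directly by removing a chosen job from later slots' domains, and an "either-or" restriction on one slot is simply a reduced domain, with no zero-one variables anywhere:

```python
# Toy constraint-programming search: assign one job to each slot,
# all jobs different, with forward checking to prune later domains.

def search(slots, domains, partial=()):
    """Depth-first search; domains[s] is the set of jobs allowed in slot s."""
    if len(partial) == len(slots):
        return partial                        # every slot assigned
    slot = len(partial)
    for job in sorted(domains[slot]):
        # forward checking: the chosen job is barred from later slots
        pruned = [d - {job} for d in domains[slot + 1:]]
        if all(pruned):                       # no later slot left empty
            result = search(slots, domains[:slot + 1] + pruned,
                            partial + (job,))
            if result is not None:
                return result
    return None                               # dead end: backtrack

# Three slots, jobs 0-2; slot 1 may take only job 0 or 2 (an "or" condition).
solution = search(range(3), [{0, 1, 2}, {0, 2}, {0, 1, 2}])
print(solution)                               # prints (0, 2, 1)
```

Even in this toy, pruning domains before descending is what keeps the tree small; a branch is abandoned as soon as any unassigned slot has no job left.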
Despite continuing improvements to integer and constraint programming software, however, it is still relatively easy to create a poor or overly ambitious formulation that provokes more-or-less exponential growth of the search tree. This is not a problem that can be solved simply by moving to faster computers, larger disks, more memory, or even larger numbers of processors. It can sometimes be addressed by giving problem-specific instructions to the search routines; possibilities for making these instructions easier to describe are discussed below.

Still, many uses are likely to remain for discrete optimization software that is more or less specialized. One can find good packages, for example, that are dedicated to network problems, or logistics problems, or vehicle routing problems, or even school-bus routing problems. In addition to offering application-specific interfaces, these packages can take advantage of efficient problem-specific computational methods, of which there are a huge number in the combinatorial optimization literature. Thus modelers are likely to continue to be faced with a range of tradeoffs between generality and efficiency.

Another growth area related to linear programming is stochastic programming, where the goal is to find a solution that is adaptable to a variety of future scenarios, even though it may be somewhat less than ideal for any particular outcome. The resulting optimization problems tend to be large and difficult, but they are continuing to benefit from innovative decomposition schemes and scenario sampling strategies.

Classical nonlinear programming, where the goal is quick convergence to a local optimum from a suitably chosen start, has always relied on a mix of solvers. One solver may excel on large problems having mostly linear constraints, for example, while another is more advantageous for small but highly nonlinear problems. Current development efforts are likely to further enrich the solver mix.
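The stochastic-programming idea can be seen in miniature in the following plain-Python fragment, which uses invented numbers and a deliberately trivial newsvendor-style cost. A single here-and-now decision is chosen to minimize expected cost over all scenarios, rather than the cost of any one scenario; real stochastic programs embed such recourse costs in large LPs and attack them with the decomposition and sampling schemes just mentioned:

```python
# Expected-cost comparison of first-stage decisions over demand scenarios.
# All numbers invented; a toy stand-in for a two-stage stochastic program.

orders = [10, 20, 30]                            # candidate first-stage decisions
scenarios = [(0.3, 12), (0.5, 20), (0.2, 28)]    # (probability, demand)

def cost(order, demand):
    """Second-stage (recourse) cost once demand is revealed."""
    unmet, excess = max(demand - order, 0), max(order - demand, 0)
    return 5 * unmet + 2 * excess                # shortage vs. holding cost

expected = {q: sum(p * cost(q, d) for p, d in scenarios) for q in orders}
best = min(expected, key=expected.get)
print(best, expected[best])                      # ordering 20 minimizes expectation
```

Note that the chosen order of 20 is exactly right only in the middle scenario; it wins because it hedges acceptably against the other two, which is precisely the adaptability described above.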
There have been substantial advances in codes that handle complementarity constraints, either alone or in conjunction with ordinary equations and inequalities (where they are also called mathematical programs with equilibrium constraints, or MPECs). In addition to handling various problems in engineering and economics, these codes are proving to be effective in solving some ordinary nonlinear programs by treating their optimality conditions as complementarity problems. Considerable progress is also being made in extending interior-point methods from linear and quadratic to general nonlinear contexts. In recent years there has also been a surge in implementation of interior-point methods for semidefinite programming, wherein the familiar nonnegative vector of variables is generalized to a positive semidefinite matrix of variables. A common theme of these and other developments is the use of second derivative information to improve convergence, taking advantage of sparsity or (more generally) separability to keep the computational requirements to a reasonable level.

The more literal sense of nonlinear optimization, denoting any problem in terms of functions that are not all linear, encompasses a variety of problems that are most often studied under the label of global optimization. Software in this area is notable for its variety, making any general characterization difficult. For any particular application, some searching is unavoidable to determine whether suitable software is available.

Modeling Systems

One major factor in expanding the use of mathematical programming techniques has been the continuing development of general-purpose modeling systems for large-scale optimization. Systems based on algebraic modeling languages — minimizing or maximizing some function of decision variables, subject to equalities and inequalities — are most numerous, followed by those based on matrix-schematic representations and on class libraries for C++ or Java.
These systems have undoubtedly made optimization techniques more accessible, but have they enabled substantially more managers and analysts to become good modelers? That would be an exciting development, but there is little evidence in support of it. The principal impact of modeling systems seems rather to lie in helping people who already have the talent for modeling, so that they become much more productive modelers. For new applications especially, the time and effort required to go from an idea to a working prototype have been greatly reduced, with the result that more ideas are pursued to a point where they can prove their value through realistic test runs.

This is not to say that managers and analysts have been unaffected by developments in optimization. They are in fact the chief users of applications that combine mathematical programming and other software into packages for particular purposes. Developers of optimization modeling systems are thus increasingly concerned to provide features that support the transition from a promising prototype to a successful application. Some of these systems come with their own application development tools and environments. Others were not originally designed with application development in mind, but have been adapted through more-or-less awkward devices to allow for "run-time" versions that can be embedded within applications; the appeal of this approach lies in its compatibility with general-purpose application development tools that are already well known to many programmers.

One seemingly attractive possibility for streamlining application development would be a kind of compiler, which would take a modeling language description of an optimization problem as input, and would produce, say, C code that could be dropped into a larger application. This approach can pose serious maintenance problems, however, when the model is updated.
A more attractive approach might thus be to compile a model into an object (of a suitable object-oriented programming language), in such a way that the object's public classes and methods would be under the developer's control and would not change unexpectedly upon model revisions. Another alternative is to expand upon the established concept of a solver callable library, to provide also some of the functionality of a modeling system. This is a somewhat lower-level approach, requiring more programming but offering the developer more control. (As a bonus, this approach can make aspects of the modeler's formulation directly available to solver routines, a valuable arrangement for implementing iterative schemes that take advantage of model structure.)

Underlying the design of most modeling systems are modeling languages that are also being enhanced in important ways. New features of greatest interest tend to be those that would support the solver advances previously described. Thus, extensions are available or under development for stochastic programming, for complementarity problems and MPECs, and for constraint logic programs. In each of these cases, current languages can be pressed into service by adoption of certain conventions, but the full potential of a modeling language can be realized only through a more substantial extension that allows users to express more problems in what they consider to be a natural form. Successful designs manage to achieve this expressiveness without unduly cluttering the language with new concepts and syntaxes. Simply allowing variables to appear in boolean and subscripting operations, as an example, is sufficient to naturally express several of the key formulation ideas of constraint programming.
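As a hypothetical illustration of these idioms (plain Python standing in for a modeling language, with an invented cost table), a decision variable can appear as a subscript, conditions over the decisions can be counted, and an all-different requirement can be stated directly; here the expressions are merely checked against one candidate assignment rather than searched over:

```python
# Constraint-programming idioms that an algebraic language can absorb,
# written in plain Python against one fixed candidate assignment.

cost = [[4, 9, 3], [7, 1, 8], [6, 5, 2]]   # cost[slot][job] (invented data)
assign = [2, 1, 0]                          # job chosen for each slot

# variable-in-subscript: total cost indexed by the decisions themselves
total = sum(cost[s][assign[s]] for s in range(3))

# iterated count: how many slots received a job costing less than 5?
cheap = sum(1 for s in range(3) if cost[s][assign[s]] < 5)

# all-different, stated directly rather than through zero-one variables
distinct = len(set(assign)) == len(assign)

print(total, cheap, distinct)               # prints 10 2 True
```

In an integer programming formulation, each of these lines would instead require a matrix of zero-one variables and linking constraints; the point of the language extensions is to let the modeler skip that translation.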
Iterated operator forms, originally motivated by indexed summations in linear programming, can be adapted to common constraint programming operations such as counting satisfied conditions or specifying that all members of an array must be different. Direct extensions of these kinds have the potential to considerably broaden the usefulness of algebraic modeling languages for combinatorial problems.

Whether expressed in constraint programming or more traditional integer programming terms, however, the hardest combinatorial problems are solvable by tree-search techniques only with some problem-specific suggestions from the modeler, indicating, for example, how the next node or branch is to be selected for processing. This kind of guidance has typically been possible only by writing "user exit" or "callback" functions in a general programming language. Considerably more modelers might be encouraged to use search directives if they could be specified in the terms of a modeling language, using just a few new constructs to permit reference to search concepts such as nodes and branches.

Other enhancements to modeling systems occur behind the scenes, without any change to model formulations. Most modeling systems that support nonlinear programming, for example, provide solvers with routines that can calculate any function's value and derivatives at any specified point. The calculations are combined efficiently by use of an approach known as automatic differentiation, which has been implemented in many ways for first derivatives. An extension to second derivatives is much more difficult, particularly if it takes advantage of separability properties of nonlinear functions in order to speed the computations. The additional derivatives enable the modeling language's support to be extended, however, to additional classes of solvers as mentioned above.

Finally, modeling systems are playing an increasing role in the analysis of optimization problems, both before and after solving.
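A minimal forward-mode sketch conveys the flavor of automatic differentiation (plain Python, first derivatives of a single variable only; production systems typically use reverse mode and exploit sparsity and separability). Each arithmetic operation carries a derivative along with its value, so value and exact derivative emerge together from one evaluation pass:

```python
# Minimal forward-mode automatic differentiation for first derivatives.
# A sketch only, not a production implementation.

import math

class Dual:
    """A value paired with its derivative, propagated through each operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)   # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

# f(x) = x*sin(x) + 3x; value and df/dx at x = 2 in a single pass
x = Dual(2.0, 1.0)            # seed: dx/dx = 1
f = x * x.sin() + 3 * x
print(f.val, f.der)           # f(2) and f'(2) = sin(2) + 2 cos(2) + 3
```

Unlike finite differencing, the derivative is exact to machine precision, and unlike symbolic differentiation the cost stays proportional to that of evaluating the function itself.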
Post-optimal sensitivity analyses are the best known example, though they are of limited practical import. One especially promising use of modeling languages is in conjunction with software that analyzes nonlinearities to help with the choice of a solver and possibly a starting point. Results from analyses of the causes of infeasibility are also readily communicated to modelers through modeling language expressions.

Remote execution

Web sites offering optimization services began to appear not long after the World Wide Web became popular. Most current sites of this kind are designed to let prospective users submit tests or benchmarks for a particular solver; a few offer collections of solvers, mainly noncommercial ones from several sources.

There is little agreement yet as to how these sites should work. Facilities for submission of problems and return of results range from e-mail and ftp to socket-based tools and web forms. Inputs include relatively low-level formats (such as the venerable MPS form for linear and integer programs) and higher-level modeling language formulations. Some sites impose fixed limits on problem size or computation time, while others apply less formal restrictions.

It is still too early to say whether web optimization services will become a significant part of the practice of mathematical programming. Allocating or charging for computing resources is an issue of Internet commerce that remains to be resolved. Sites that offer a convenient and consistent interface to a variety of solvers could become valuable resources for both education and practice, particularly if the posting of new implementations becomes a standard practice. If these sites are to realize their potential, however, they will need to provide more complete instructions on selecting and using solvers, together with more reliable maintenance.
It remains to be seen whether these functions will be supported consistently through universities and government labs, or whether they can be profitably pursued as commercial activities.

The independence of modeling systems from solvers offers another attractive possibility: a local modeling system that invokes remote solvers. A locally installed system, while offering a richer and more responsive modeling environment than web pages, could still make use of network facilities to run solvers anywhere on the Internet. This arrangement could be flexible enough to encompass solvers installed on networks internal to the modeler's organization, as well as libraries of solvers collected by university centers and government labs, and demonstration versions of solvers made available by commercial vendors. It could encourage people to try new solvers, by eliminating much of the work of downloading and installation. Initial efforts along these lines are already being made, but the form that a standard network solver interface should take is still being worked out.

Other experiments are beginning to address the allocation of computing resources to remote solvers. Using technology that is already well developed, it is possible to allocate optimization tasks automatically to idle machines on a network, so that the pool of available servers is greatly increased. Recovery procedures, when a machine crashes or is reclaimed by its owner, can also be automatic. In addition to providing resources for optimization requests from separate users, this kind of setup has considerable potential for support of decomposition and related high-level iterative schemes that are conveniently described by use of some of the more advanced modeling languages. These schemes typically give rise to independent subproblems that could be distributed among available processors for solution.
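A rough sketch of the subproblem-farming idea, using Python's standard multiprocessing pool on a single machine (the "subproblem" here is an invented one-dimensional grid search standing in for a real solver call; network-wide distribution and crash recovery would require substantially more machinery):

```python
# Farm independent subproblems out to available local processors,
# as a decomposition scheme's master step might do.

from multiprocessing import Pool

def solve_subproblem(scenario):
    """Stand-in for a solver call: minimize a*x^2 + b*x by grid search."""
    a, b = scenario
    xs = [i / 1000.0 for i in range(-5000, 5001)]
    return min((a * x * x + b * x, x) for x in xs)[0]

if __name__ == "__main__":
    scenarios = [(1.0, -2.0), (2.0, 4.0), (0.5, 1.0)]   # invented data
    with Pool() as pool:                  # one worker per core by default
        objectives = pool.map(solve_subproblem, scenarios)
    print(objectives)                     # master step would combine these
```

Because the subproblems share no state, adding processors scales the scenario phase almost linearly, which is what makes decomposition schemes natural candidates for this kind of setup.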
More information

A separate box on this page suggests several web sites that provide links and other contact information for specific developers and distributors. A list of products and links keyed specifically to this article is also available on the web at http://iems.nwu.edu/~4er/optsoft.html.

Robert Fourer is a professor in the Department of Industrial Engineering and Management Sciences at Northwestern University.
OR/MS Today, copyright © 1998 by the Institute for Operations Research and the Management Sciences. All rights reserved.