As discussed in [62], by scaling the square matrix M if necessary, I_n - M is nonsingular and the GLCP (2)-(5) can be reduced to an equivalent system. Since the objective function is nonnegative on the feasible set of CP (53), the result stated as Theorem 3 holds. Alternatively, a sequential linear programming (SLP) algorithm introduced in [62] can be applied to find a stationary point of the CP. The main drawback of this approach is that there is no theoretical guarantee that these methods find a global minimum of the CP.
However, numerical experiments reported in [62] indicate that the SLP algorithm is, in general, able to terminate successfully with a solution of the LCP. Hence, this approach appears to be worth exploiting in the future for computing a feasible solution of an MPLCC. Otherwise, a 0-1 linear integer programming formulation of the feasibility problem can be considered, for which the result of [41] holds. Note that this theorem confirms that finding a feasible solution for an MPLCC is easier than showing that such a problem is infeasible. The existence of very efficient codes for 0-1 Linear Integer Programming makes this approach quite useful in practice.
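To make the enumerative flavour of these feasibility computations concrete, here is a minimal sketch (not the method of [41] or [62]) that searches the complementarity patterns of a small standard-form LCP w = Mz + q, w >= 0, z >= 0, z'w = 0, solving one linear feasibility subproblem per pattern. The function name and the use of SciPy's linprog are choices of this sketch, not of the paper:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lcp_feasible_point(M, q):
    """Enumerate the 2^n complementarity patterns of the LCP
    w = M z + q >= 0, z >= 0, z'w = 0 (M, q are NumPy arrays),
    solving one linear feasibility problem per pattern.
    Only practical for very small n."""
    n = len(q)
    for pattern in itertools.product([0, 1], repeat=n):
        # pattern[i] == 0 forces z_i = 0; pattern[i] == 1 forces w_i = 0
        bounds = [(0, 0) if p == 0 else (0, None) for p in pattern]
        act = [i for i in range(n) if pattern[i] == 1]
        res = linprog(np.zeros(n), A_ub=-M, b_ub=q,      # M z + q >= 0
                      A_eq=M[act] if act else None,      # (M z + q)_i = 0
                      b_eq=-q[act] if act else None,
                      bounds=bounds, method="highs")
        if res.success:
            z = res.x
            return z, M @ z + q
    return None
```

This brute-force search is exactly why the 0-1 integer programming reformulation is attractive: a MIP solver explores the same pattern tree implicitly, with pruning.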
As in nonlinear programming [14, 69], it is important to derive KKT-type characterizations of stationary points for the design of local algorithms that deal with the MPLCC. Several such stationarity concepts (strongly stationary, M-stationary and B-stationary points) can be defined. The algorithm was initially proposed in [78] and subsequently improved, implemented and tested in [53]. Feasibility is maintained at every step. Let r be the index of a nonbasic variable z_r that does not satisfy the previous conditions. Exchange the nonbasic variable z_r with a basic variable z_t, updating the sets of basic and nonbasic variables as in the simplex method, to obtain a new BFS.
Repeat the procedure with the new BFS in place of the current one. It follows from the description of the steps of the BRS method that the algorithm is a simple extension of the simplex method, using a modified rule for the choice of the nonbasic variables in order to maintain complementarity. The algorithm is guaranteed to terminate with an MSP provided a usual anti-cycling rule [66] is used.
Finally, an extension of the BRS method has recently been proposed in [30], which guarantees in theory termination at a BSP. Computing a BSP for an LPLCC is usually more demanding, but the extension of the BRS algorithm for dealing with this case also performs very well and seems to outperform alternative local techniques, such as the penalty, regularization, smoothing, nonsmooth, interior-point and SQP approaches that have been designed for mathematical programs with linear and nonlinear complementarity constraints and can also be applied to the MPLCC [30].
There are some special instances of the MPLCC where this goal is relatively easy to achieve; next, we discuss two of these cases. In the first, assume that the objective function depends only on the y-variables, and consider the associated Relaxed Convex Program; otherwise, an LCP has to be solved [19]. Another interesting case that often appears in applications of the MPLCC is when the global optimal value is known. Finding a global minimum of (84) can then be done efficiently by an enumerative method similar to the one described in Section 3 for finding a feasible solution of an MPLCC. An interesting example of such an approach is the enumerative algorithm discussed in [27] for computing a solution of the Eigenvalue Complementarity Problem. Apart from these and other similar instances, finding a global minimum of an MPLCC is a quite difficult task.
In the next section we discuss the most important approaches for this goal, namely a sequential algorithm, branch-and-bound methods and 0-1 Integer Programming. In each iteration, a stationary point is at hand and the algorithm proceeds from it. The algorithm requires an update rule guaranteeing that condition (85) holds. Hence the enumerative method or the 0-1 integer programming approach discussed in Section 3 should be used either to compute such a feasible solution or to show that the augmented LPLCC is infeasible.
As discussed in Section 3, providing a certificate of optimality is the difficult part. Furthermore, in general the algorithm faces difficulties in providing such a certificate. The design of a more efficient procedure to provide a certificate of global optimality has been the subject of intense research. An interesting approach is to design an underestimating optimization problem whose global minimum is relatively easy to compute and yields a positive lower bound for the program defined by (45). Despite promising results in some cases, much research remains to be done to assure the general efficiency of these techniques in practice.
The simplest technique of this type was introduced by Bard and Moore in [12] for finding a global minimum of a linear bilevel program by exploiting its LPLCC formulation. For instance, the RCP associated with node 5 of the binary tree of Figure 2 is obtained by fixing the complementary variables chosen along that branch. The tree is then pruned at node k and a new open node is investigated. Termination of the algorithm occurs when there is no open node whose lower bound is smaller than the best upper bound computed by the algorithm.
In this case the solution (x, y, w) associated with this upper bound is a global minimum for the MPLCC. The branch-and-bound algorithm should include good heuristic rules for choosing the open node and the pair of complementary variables for branching. The algorithm terminates in a finite number of iterations (nodes) with a global minimum or with a certificate that the MPLCC is either infeasible or unbounded.
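A toy sketch of this kind of complementarity branch and bound, in the spirit of the Bard-Moore scheme but not the algorithm of [12] itself, might look as follows for a simplified LPLCC of the form min c'z subject to w = Mz + q >= 0, z >= 0, z_i w_i = 0. Branching fixes one variable of the most violated complementary pair to zero; the function name and the use of SciPy's linprog are assumptions of the sketch:

```python
import numpy as np
from scipy.optimize import linprog

def lplcc_branch_and_bound(c, M, q):
    """Toy branch and bound for  min c'z  s.t.  w = M z + q >= 0,
    z >= 0, z_i w_i = 0, branching on the complementary pair most
    violated at the relaxed LP optimum (no cuts, depth-first)."""
    n = len(q)
    best_val, best_z = np.inf, None
    stack = [(frozenset(), frozenset())]   # (pairs with z_i = 0, pairs with w_i = 0)
    while stack:
        fix_z, fix_w = stack.pop()
        bounds = [(0, 0) if i in fix_z else (0, None) for i in range(n)]
        idx = sorted(fix_w)
        res = linprog(c, A_ub=-M, b_ub=q,             # M z + q >= 0
                      A_eq=M[idx] if idx else None,   # w_i = 0 on fixed pairs
                      b_eq=-q[idx] if idx else None,
                      bounds=bounds, method="highs")
        if not res.success or res.fun >= best_val:
            continue                                  # prune: infeasible or bound too weak
        z = res.x
        viol = z * (M @ z + q)
        i = int(np.argmax(viol))
        if viol[i] < 1e-8:                            # complementary: feasible for the LPLCC
            best_val, best_z = res.fun, z
        else:                                         # branch on pair i
            stack.append((fix_z | {i}, fix_w))
            stack.append((fix_z, fix_w | {i}))
    return best_val, best_z
```

The relaxed LP value at each node is the lower bound; complementary LP solutions supply upper bounds, exactly as described above.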
Computational experience reported in [2, 6, 23, 46, 47, 48] indicates that the algorithm is not very efficient for dealing with the MPLCC, as the number of nodes tends to increase greatly with the number n of pairs of complementary variables. During the past several years, a number of methodologies have been recommended by many authors to improve the Bard and Moore branch-and-bound algorithm when the objective function is linear (LPLCC) [2, 6, 16, 23, 37, 87]. These improvements have been concerned with the quality of the lower and upper bounds and with the branching procedure.
Cutting planes [2, 6, 64, 87], the Reformulation-Linearization Technique (RLT) [82] and semidefinite programming (SDP) [16, 17] have been used for computing better lower bounds than the ones given by the relaxed linear programs. On the other hand, ideas from combinatorial optimization have been employed to design more efficient branching strategies that lead to better upper bounds for the branch-and-bound method [2, 6, 23, 37]. Computational experiments reported in [2, 6, 16, 23, 37, 87] clearly indicate that these techniques yield significant improvements in the efficiency of branch-and-bound methods in general.
It is important to add that such an equivalence also provides certificates of infeasibility and unboundedness for the MPLCC from those pertaining to the MIP. Note that the integer programming approach for solving the MPLCC is much more interesting in this last case, as the MIP is a linear integer program and there exist very efficient codes for dealing with this optimization problem. An idea for avoiding the use of a large constant has been introduced for the LPLCC in [39] and has subsequently been applied to the special case of the LPLCC associated with nonconvex quadratic programs [40].
By recognizing this fact and using a minimax integer programming formulation of the MIP (92), a Benders decomposition technique has been designed in [39] that uses extreme points and unbounded rays of the dual constraint set. This algorithm has been shown to converge in a finite number of iterations to a global minimum of the LPLCC or to give a certificate of infeasibility or unboundedness [39, 40].
Simple or disjunctive cuts and a recovery procedure for obtaining a feasible solution of the LPLCC from a linear feasible solution are recommended in a preprocessing phase to enhance the efficiency of the algorithm [39]. Computational experiments reported in [39, 40] indicate that the method is in general efficient in practice. Furthermore, the preprocessing phase has a very important impact on the computational performance of the algorithm. The possible use of the sequential algorithm discussed in Section 6 in the preprocessing phase seems to be an interesting topic for future research.
In this paper, we have reviewed a number of applications and formulations of important optimization problems as mathematical programs with linear complementarity constraints (MPLCC). Active-set, interior-point and DC methods and absolute value programming seem to work well for special cases, but not in general.
An enumerative method that incorporates a local quadratic solver can efficiently find such a solution in general. Linear Integer Programming can also be useful for such a goal. A complementarity active set method is recommended for finding a strongly stationary, an M-stationary or a B-stationary point for the MPLCC. Computing a global minimum of an LPLCC is a much more difficult task that can be done by using a sequential algorithm or by branch-and-bound methods applied directly to the LPLCC or to an equivalent linear integer program.
Despite the promising numerical performance of these techniques for computing a feasible solution, a stationary point, and a global minimum for the MPLCC, much research has to be done on finding better methodologies and more efficient certificates of optimality. Another important topic for future research is the development of more efficient techniques for the solution of some of the optimization problems that can be formulated as MPLCCs.
The Eigenvalue Complementarity Problem and Optimization with Cardinality Constraints are two important examples of these problems that have received much attention recently and should continue to be investigated in the near future.

References:
- Copositivity and constrained fractional quadratic programs. Mathematical Programming, Series A.
- A symmetrical linear maxmin approach to disjoint bilinear programming. Mathematical Programming, 85.
- An implicit enumeration procedure for the general linear complementarity problem. Mathematical Programming Studies, 31.
- On using the elastic mode in nonlinear programming approaches for mathematical programs with equilibrium constraints.
- A nonsmooth algorithm for cone-constrained eigenvalue problems. Computational Optimization and Applications, 49.
- New branch-and-bound algorithm for bilevel linear programming. Journal of Optimization Theory and Applications.
- Elastic-mode algorithms for mathematical programs with equilibrium constraints: global convergence and stationarity properties. Mathematical Programming.
- Bard, J. Practical Bilevel Optimization: Algorithms and Applications. Kluwer Academic Publishers, Dordrecht.
- Variational inequality formulation of the asymmetric eigenvalue problem and its solution by means of gap functions. Pacific Journal of Optimization, 8.
- An investigation of feasible descent algorithms for estimating the condition number of a matrix.
- On a reformulation of mathematical programs with cardinality constraints. To appear in Advances in Global Optimization.
- A branch and bound algorithm for the bilevel programming problem.
- On convex quadratic programs with linear complementarity constraints. Computational Optimization and Applications.
- Nonlinear Programming: Theory and Algorithms.
- Interior-point algorithms, penalty methods and equilibrium problems.
- A finite branch-and-bound algorithm for nonconvex quadratic programming via semidefinite relaxations. Mathematical Programming, Series A.
- Globally solving nonconvex quadratic programming problems via completely copositive programming. Mathematical Programming Computation, 4.
- Bilevel programming: a survey.
- The Linear Complementarity Problem. Academic Press, New York.
- A note on a modified simplex approach for solving bilevel linear programming problems. European Journal of Operational Research.
- Foundations of Bilevel Programming. Kluwer Academic Publishers, Dordrecht.
- Mathematical programs with equilibrium constraints: automatic reformulation and solution via constrained optimization. In Frontiers in Applied General Equilibrium Modeling, Cambridge University Press.
- A computational study of global algorithms for linear bilevel programming. Numerical Algorithms.
- Solution of a general linear complementarity problem using smooth optimization and its application to bilinear programming and LCP. Applied Mathematics and Optimization.
- On the symmetric quadratic eigenvalue complementarity problem. Optimization Methods and Software.
- A smoothing method for mathematical programs with equilibrium constraints.
- On an enumerative algorithm for solving eigenvalue complementarity problems.
- On the computation of all the eigenvalues for the eigenvalue complementarity problem. Journal of Global Optimization.
- Solving mathematical programs with complementarity constraints as nonlinear programs.
- A pivoting algorithm for linear programming with linear complementarity constraints.
- A globally convergent sequential quadratic programming algorithm for mathematical programs with linear complementarity constraints. Computational Optimization and Applications.
- Local convergence of SQP methods for mathematical programs with equilibrium constraints.
- I and II. Springer, New York.
- An implementable active-set algorithm for computing a B-stationary point of a mathematical program with linear complementarity constraints.
- Global optimization of mixed-integer bilevel programming problems. Computational Management Science, 2.
- Practical Optimization. Academic Press, London.
- New branch-and-bound rules for linear bilevel programming.
- Theoretical and numerical comparison of relaxation methods for mathematical programs with complementarity constraints.
- On the global solution of linear programs with linear complementarity constraints.
- Introduction to Global Optimization. Kluwer, New York.
- Convergence of a penalty method for mathematical programming with complementarity constraints.
- Complementarity programming. Operations Research.
- Cutting-planes for complementarity constraints.
- An experimental investigation of enumerative methods for the linear complementarity problem. Computers and Operations Research.
- A computational analysis of LCP methods for bilinear and concave quadratic programming.
- A SLCP method for bilevel linear programming.

There are no free modelling systems of note, but many vendors do offer free demo versions. Common alternatives to modelling languages and systems include spreadsheet front ends to optimization, and custom optimization applications written in general-purpose programming languages. You can find a discussion of the pros and cons of these approaches in What Modelling Tool Should I Use? Since any real operation that you are modelling must remain within the constraints of reality, infeasibility most often indicates an error of some kind. Simplex-based LP software efficiently detects when no feasible solution is possible; some early interior-point codes could not detect an infeasible situation as reliably, but remedies for this flaw have been introduced.
The source of infeasibility is often difficult to track down. It may stem from an error in specifying some of the constraints in your model, or from some wrong numbers in your data. It can be the result of a combination of factors, such as the demands at some customers being too high relative to the supplies at some warehouses.
Upon detecting infeasibility, LP codes typically show you the most recent infeasible solution that they have encountered. Sometimes this solution provides a good clue as to the source of infeasibility. If it fails to satisfy certain capacity constraints, for example, then you would do well to check whether the capacity is sufficient to meet the demand; perhaps a demand number has been mistyped, or an incorrect expression for the capacity has been used in the capacity constraint, or the model simply lacks any provision for coping with increasing demands.
More often, unfortunately, LP codes respond to an infeasible problem by returning a meaninglessly infeasible solution, such as one that violates material balances. A more useful approach is to forestall meaningless infeasibilities by explicitly modelling those sources of infeasibility that you view as realistic. As a simple example, you could add a new "slack" variable on each capacity constraint, having a very high penalty cost.
Then infeasibilities in your capacities would be signalled by positive values for these slacks at the optimal solution, rather than by a mysterious lack of feasibility in the linear program as a whole.
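A minimal sketch of this soft-constraint device, using SciPy's linprog; the particular numbers, the penalty value, and the variable names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# Hard version: x1 >= 3 and x2 >= 3 but x1 + x2 <= 4 -- infeasible.
# Soft version: add a slack s >= 0 on the capacity row with a large
# penalty, so the LP stays feasible and s > 0 flags the shortfall.
PENALTY = 1000.0
c = np.array([2.0, 3.0, PENALTY])     # costs for x1, x2, and the slack s
A_ub = np.array([
    [-1.0,  0.0,  0.0],               # -x1 <= -3        (demand x1 >= 3)
    [ 0.0, -1.0,  0.0],               # -x2 <= -3        (demand x2 >= 3)
    [ 1.0,  1.0, -1.0],               # x1 + x2 - s <= 4 (softened capacity)
])
b_ub = np.array([-3.0, -3.0, 4.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3,
              method="highs")
x1, x2, s = res.x                     # here s = 2: capacity is short by 2 units
```

The optimal slack value tells you directly how much extra capacity would be needed to restore "hard" feasibility.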
Many modelers recommend the use of "soft constraints" of this kind in all models, since in reality many so-called constraints can be violated for a sufficiently high price. One useful approach is to apply auxiliary algorithms that look for small groups of constraints that can be considered to "cause" the infeasibility of the LP. Several codes include methods for finding an "irreducible infeasible subset" (IIS) of constraints that has no feasible solution, but that becomes feasible if any one constraint is removed.
A minimal IIS cover is the smallest subset of constraints whose removal makes the linear program feasible. A bibliography on optimization modelling systems collected by Harvey Greenberg of the University of Colorado at Denver contains cross-references to numerous papers on the subject of model analysis. For MIP models, detecting infeasibility is also difficult: if there exists no feasible solution, then you must go through the entire branch-and-bound procedure (or whatever algorithm you use) to prove this. There are no shortcuts in general, unless you know something useful about your model's structure.
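One classical way to isolate an IIS is the deletion filter. The sketch below is a simplified illustration, assuming a system of linear inequalities that is infeasible as a whole; the function names are my own, and SciPy's linprog serves as the feasibility oracle:

```python
import numpy as np
from scipy.optimize import linprog

def subsystem_feasible(rows, A_ub, b_ub, n):
    """Is the subsystem  A_ub[rows] x <= b_ub[rows]  (x free) feasible?"""
    res = linprog(np.zeros(n), A_ub=A_ub[rows], b_ub=b_ub[rows],
                  bounds=[(None, None)] * n, method="highs")
    return res.status != 2            # status 2 means "infeasible"

def deletion_filter_iis(A_ub, b_ub):
    """Deletion filter: assuming the whole system is infeasible, drop
    each constraint in turn and keep the drop whenever the remainder
    stays infeasible; what is left is an IIS."""
    n = A_ub.shape[1]
    keep = list(range(len(b_ub)))
    for i in range(len(b_ub)):
        trial = [j for j in keep if j != i]
        if i in keep and trial and not subsystem_feasible(trial, A_ub, b_ub, n):
            keep = trial
    return keep
```

Production IIS finders in commercial solvers are far more sophisticated, but the deletion filter conveys the idea in a few lines.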
Instead, you will need to look for a solution or solutions that achieve an acceptable tradeoff between objectives. Deciding what tradeoffs are "acceptable" is a topic of investigation in its own right. There are a few free software packages specifically for multiple objective linear programming, including ADBASE, which computes all efficient (nondominated) extreme points. It is available without charge for research and instructional purposes.
If someone has a genuine need for such a code, they should send a request to: Ralph E. Other approaches that have worked are: Goal Programming (treat the objectives as constraints with costed slacks, or, almost equivalently, form a composite function from the given objective functions); Pareto preference analysis (essentially brute-force examination of all vertices); or putting your objective functions in priority order, optimizing on one objective, then changing it to a constraint fixed at the optimal value (perhaps subject to a small tolerance), and repeating with the next function.
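The priority-order (preemptive) scheme in the last item can be sketched with SciPy's linprog; the toy objectives and constraints here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# Feasible region: x1 + x2 <= 4, x >= 0.
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([4.0])
bounds = [(0, None), (0, None)]

# Priority 1: maximize x1 + x2 (linprog minimizes, so negate).
c1 = np.array([-1.0, -1.0])
r1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

# Freeze objective 1 at its optimum (with a small tolerance), then
# priority 2: maximize x1 over the remaining ties.
tol = 1e-6
A2 = np.vstack([A_ub, c1])            # new row: c1 . x <= opt1 + tol
b2 = np.append(b_ub, r1.fun + tol)
c2 = np.array([-1.0, 0.0])
r2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=bounds, method="highs")
# r2.x picks, among all maximizers of x1 + x2, the one with largest x1
```

Each stage turns the previous objective into a constraint, so later objectives only break ties left by earlier ones.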
There is a section on this whole topic in [Nemhauser]. As a final piece of advice, if you can cast your model in terms of physical realities, or dollars and cents, sometimes the multiple objectives disappear! The commercial code OSL has features to assist in decomposing so-called Dantzig-Wolfe and staircase structures. With any other code, you'll have to create your own decomposition framework and then call an LP solver to handle the subproblems. The folklore is that generally decomposition schemes take a long time to converge, so that they're slower than just solving the model as a whole -- although research continues.
For now my advice, unless you are using OSL or your model is so huge that you can't buy enough memory to hold it, is to not bother decomposing it. It's probably more cost-effective to upgrade your solver than to invest more time in programming (a good piece of advice in many situations). It includes codes for convex hull computation, as well as for the opposite problem of generating all extreme points and extreme rays of a general convex polyhedron given by a system of linear inequalities.
Here are further comments on some of these codes: Ken Clarkson has written Hull, an ANSI C program that computes the convex hull of a point set in general dimension. The input is a list of points, and the output is a list of facets of the convex hull of the points, each facet presented as a list of its vertices. Qhull computes convex hulls as well as Delaunay triangulations, halfspace intersections about a point, Voronoi diagrams, and related objects. It uses the "Beneath Beyond" method, described in [Edelsbrunner].
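SciPy's scipy.spatial.ConvexHull wraps the Qhull library, so a quick way to experiment with it from Python is:

```python
import numpy as np
from scipy.spatial import ConvexHull  # SciPy's wrapper around the Qhull library

# Four corners of the unit square plus an interior point: the hull
# keeps the corners and discards the centre.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
hull = ConvexHull(pts)
# hull.vertices  -> indices of the input points that lie on the hull
# hull.simplices -> the facets (edges, in two dimensions)
```

In two dimensions hull.volume is the enclosed area (here 1.0), since Qhull's "volume" is always the measure of the hull's interior.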
Komei Fukuda's cdd solves both the convex hull and vertex enumeration problems, using the Double Description Method of Motzkin et al. VE, another implementation of this approach by Fukuda and Mizukoshi, is available in a Mathematica implementation. The Center for the Computation and Visualization of Geometric Structures at the University of Minnesota maintains a list of its downloadable software, and hosts a directory of computational geometry software compiled by Nina Amenta.
Other algorithms for such problems are described in [Swart], [Seidel], and [Avis]. Such topics are said to be discussed in [Schrijver], [Chvatal] (chapter 18), [Balinski], and [Mattheis] as well. Dash's Xpress-Parallel includes a branch-and-bound mixed-integer programming code designed to exploit both multi-processor computers and networks of workstations.
OOPS is an object-oriented parallel implementation of the interior point algorithm, developed by Jacek Gondzio gondzio maths. The code can exploit any special structure of the problem. It runs on all parallel computing platforms that support MPI. Two parallel branch-cut-price frameworks are available to those who want to program specialized solvers for hard combinatorial problems that can be approached via integer programming: Symphony requires the user to supply model-specific preprocessing and separation functions, while other components including search tree, cut pool, and communication management are handled internally.
Source code is included for basic applications to traveling salesman and vehicle routing problems. The distributed version runs in any environment supported by the PVM message passing protocol, and can also be compiled for shared-memory architectures using any OpenMP compliant compiler. Performance evaluations of parallel solvers must be interpreted with care. One common measurement is the "speedup" defined as the time for solution using a single processor divided by the time using multiple processors.
A speedup close to the number of processors is ideal in some sense, but it is only a relative measure. The greatest speedups tend to be achieved by the least efficient codes, and especially by those that fail to take advantage of sparsity (the predominance of zero coefficients in the constraints). For problems having thousands of constraints, a sparse single-processor code will tend to be faster than a non-sparse multiprocessor code running on current-day hardware. A network for this problem is viewed as a collection of nodes (circles, locations) and arcs (lines, routes) connecting selected pairs of nodes.
Arcs carry a physical or conceptual flow of some kind, and may be directed (one-way) or undirected (two-way). Some nodes may be sources (permitting flow to enter the network) or sinks (permitting flow to leave). This is a special case of the general linear programming problem. The transportation problem is an even more special case in which the network is bipartite: all arcs run from nodes in one subset to the nodes in a disjoint subset. A variety of other well-known network problems, including shortest path problems, maximum flow problems, and certain assignment problems, can also be modeled and solved as network linear programs.
Details are presented in many books on linear programming and operations research. Network linear programs can be solved 10 or more times faster than general linear programs of the same size, by use of specialized optimization algorithms. Some commercial LP solvers include a version of the network simplex method for this purpose.
That method has the nice property that, if it is given integer flow data, it will return optimal flows that are integral. Integer network LPs can thus be solved efficiently without resort to complex integer programming software. Unfortunately, many different network problems of practical interest do not have a formulation as a network LP. These include network LPs with additional linear "side constraints" such as multicommodity flow problems as well as problems of network routing and design that have completely different kinds of constraints.
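As a small illustration of this integrality property, the transportation LP below has integer supplies and demands, and its LP optimum comes out integral because the constraint matrix is totally unimodular. The data are invented, and SciPy's general-purpose linprog is used here purely for convenience (a network simplex code would be faster on large instances):

```python
import numpy as np
from scipy.optimize import linprog

# A 2x2 transportation problem: supplies (3, 2), demands (2, 3), and
# unit shipping costs.  Variables are the flows x11, x12, x21, x22.
cost = np.array([[1.0, 2.0], [3.0, 1.0]]).ravel()
A_eq = np.array([
    [1, 1, 0, 0],   # flow out of source 1 = 3
    [0, 0, 1, 1],   # flow out of source 2 = 2
    [1, 0, 1, 0],   # flow into sink 1    = 2
    [0, 1, 0, 1],   # flow into sink 2    = 3
], dtype=float)
b_eq = np.array([3.0, 2.0, 2.0, 3.0])
res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
# Optimal cost 6, with integral flows x11=2, x12=1, x22=2.
```

No integrality constraints were imposed, yet the solver returns whole-number flows, exactly as the text promises for network LPs with integer data.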
In principle, nearly all of these network problems can be modeled as integer programs. Some "easy" cases can be solved much more efficiently by specialized network algorithms, however, while other "hard" ones are so difficult that they require specialized methods that may or may not involve some integer programming.
Contrary to many people's intuition, the statement of a hard problem may be only marginally more complicated than the statement of some easy problem. A canonical example of a hard network problem is the "traveling salesman" problem of finding a shortest tour through a network that visits each node once. A canonical easy problem (not obviously equivalent to a linear program) is the "minimum spanning tree" problem: find a least-cost collection of arcs that connect all the nodes. But if instead you want to connect only some given subset of nodes (the "Steiner tree" problem), then you are faced with a hard problem.
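The minimum spanning tree problem really is easy: a complete Kruskal implementation fits in a few lines. The function name and the example data are of my own choosing:

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree by Kruskal's algorithm with union-find.
    `edges` is a list of (cost, u, v) on nodes 0..n-1; returns the
    chosen edges and their total cost."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    tree, total = [], 0.0
    for cost, u, v in sorted(edges):        # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep the edge iff it joins two components
            parent[ru] = rv
            tree.append((u, v))
            total += cost
    return tree, total
```

Changing "connect all the nodes" to "connect a given subset" (Steiner tree) destroys this greedy structure, which is precisely the easy/hard contrast drawn above.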
These and many other network problems are described in some of the references below. Software for network optimization is thus in a much more fragmented state than is general-purpose software for linear programming. The following are some of the implementations that are available for downloading.
Most are freely available for many purposes, but check their web pages or "readme" files for details. It is a hard (NP-complete) problem, just like integer programming, but the obvious integer programming formulations of it are not especially useful in getting good solutions within a reasonable amount of time. The TSP has attracted many of the best minds in the optimization field, and serves as a kind of test-bed for methods subsequently applied to more complex and practical problems.
Methods have been explored both to give proved optimal solutions, and to give approximate but "good" solutions, with a number of codes being developed as a result: Concorde has solved a number of the largest TSPs for which proved optimal solutions are known. It employs a polyhedral approach, which is to say that it relies on a range of exceedingly sophisticated linear programming constraints, in a framework that resembles integer programming branch-and-bound methods.
The constraints are selectively generated as the solution process proceeds. The full C code is available without cost for research purposes. Public domain code for the Asymmetric TSP (travel between two cities significantly cheaper in one of the two directions) is available in a TOMS routine, documented in [Carpaneto]. Code for a solver can be obtained via instructions in [Volgenant]. Chad Hurwitz churritz cts. Numerical Recipes [Press] contains code that uses simulated annealing. Stephan Mertens's TSP Algorithms in Action uses Java applets to illustrate some simple heuristics and compare them to optimal solutions on small test problems.
Onno Waalewijn has constructed Java TSP applets exhibiting the behavior of different methods for heuristic and exhaustive search on various test problems. Other good references are [Lawler] and [Reinelt]. Sophisticated and widely used heuristics for getting a "good" solution are described in the article by Lin and Kernighan in Operations Research 21 For practical purposes, the traveling salesman problem is only the simplest case of what are generally known as vehicle-routing problems. Thus commercial software packages for vehicle routing -- or more generally for "supply chain management" or "logistics" -- may have TSP routines buried somewhere within them.
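As a flavour of construction heuristics of this kind (far simpler than the Lin-Kernighan exchanges mentioned above), here is a nearest-neighbour tour builder for Euclidean instances; the function names are my own:

```python
import math

def nearest_neighbour_tour(points):
    """Nearest-neighbour construction heuristic for the Euclidean TSP:
    start at city 0 and always visit the closest unvisited city.
    Fast, but often some way from optimal."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour (returns to the start)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

A tour like this is typically improved afterwards by exchange moves (2-opt, 3-opt, Lin-Kernighan) before being accepted as "good".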
In the one-dimensional version, cutting only reduces a single measurement, usually referred to as the length or width of the pieces; examples include cutting wide rolls of paper or sheet steel into specified numbers of smaller widths (also called the "roll trim" problem), and cutting long pieces of wood or pipe into specified numbers of shorter pieces. In the two-dimensional version, both a length and width may be specified for both the large pieces you start with and the smaller ones to be cut, or the shapes to be cut may be more general.
The material may again be wood or metal, or paper or fabric, or even cookie dough. The packing problem can be regarded as a kind of cutting in reverse, where the goal is to fill large spaces with specified smaller pieces in the most economical or profitable way. As with cutting, there are one-dimensional problems also called knapsack problems and two-dimensional problems, but there are also many three-dimensional cases such as arise in filling trucks or shipping containers. The size measure is not always length or width; it may be weight, for example.
Except for some very special cases, cutting and packing problems are hard (NP-complete), like integer programming or the TSP. The simpler one-dimensional instances are often not hard to solve in practice, however: for knapsack problems, a good MIP solver applied to a straightforward integer-programming formulation can often be used. Specialized algorithms are said to be available in [Syslo] and [Martello]. There has been a great deal written on cutting and packing topics, but it tends to be scattered.
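When the weights are small integers, even a plain dynamic program suffices for the 0/1 knapsack; this is an alternative to the MIP formulation mentioned above, and the function name and data are illustrative:

```python
def knapsack_max_value(capacity, items):
    """0/1 knapsack by dynamic programming over capacities.
    `items` is a list of (weight, value) pairs with integer weights;
    returns the best achievable total value."""
    best = [0] * (capacity + 1)        # best[c] = max value within capacity c
    for weight, value in items:
        # Descending capacities so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]
```

The running time is O(n * capacity), which is why the text can call one-dimensional instances "often not hard in practice" despite their NP-completeness.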
You might want to start by looking at the web page of the Special Interest Group on Cutting and Packing and the "application-oriented research bibliography" in [Sweeney]. In fact, even an ordinary web search engine can find you a lot on this topic; try searching on "cutting stock". Particular application areas, from paper to carpeting, have also given rise to their own specialized cutting-stock tools, which can often be found by a web search on the area of interest. Instructions for joining a stochastic programming mailing list can also be found at this site.
The two broad classes of stochastic programming problems are recourse problems and chance-constrained (probabilistically constrained) problems. Recourse problems are staged problems wherein one alternates decisions with realizations of stochastic data.
The objective is to minimize total expected costs of all decisions. The main sources of code (not necessarily public domain) depend on how the data is distributed and how many stages (decision points) are in the problem. Gassmann dal. Also, for discretely distributed problems that are not huge, a deterministic equivalent can be formed, which can be solved with a standard solver. Frauendorfer frauendorfer sgcl1. Systems (Springer-Verlag). POSTS is a small test set of recourse problems designed to highlight different qualities; it is meant as a common test bed for reporting the computational characteristics of state-of-the-art SLP algorithms and their implementations.
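For a discretely distributed problem, forming the deterministic equivalent just means writing one copy of the second-stage variables per scenario. A toy two-stage example (all data invented) in SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Two-stage toy model: order x now at unit cost 1; demand is 1 or 3,
# each with probability 1/2; unmet demand is penalised at 4 per unit.
# Deterministic equivalent, one shortfall variable per scenario:
#     min  x + 0.5*4*s1 + 0.5*4*s2
#     s.t. x + s1 >= 1,  x + s2 >= 3,  x, s1, s2 >= 0.
c = np.array([1.0, 2.0, 2.0])              # x, s1, s2 (probabilities folded in)
A_ub = np.array([[-1.0, -1.0,  0.0],       # -(x + s1) <= -1
                 [-1.0,  0.0, -1.0]])      # -(x + s2) <= -3
b_ub = np.array([-1.0, -3.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3,
              method="highs")
# The optimum orders x = 3 and incurs no shortfall in either scenario.
```

With many scenarios or stages the deterministic equivalent grows multiplicatively, which is exactly when the specialized decomposition codes mentioned above become necessary.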
Some test problems from a financial application have been posted by the Centre for Financial Research. I don't know of any public domain codes for CCP problems. Prekopa prekopa cancer. Ermoliev, and R.
Wets, eds. Both Springer Verlag texts mentioned above are good introductory references to Stochastic Programming. Also called Ranging or Sensitivity Analysis, it gives information about how the coefficients in the problem could change without affecting the nature of the solution. Most LP textbooks, such as [Nemhauser] , describe this. Unfortunately, all this theory applies only to LP. For a MIP model with both integer and continuous variables, you could get a limited amount of information by fixing the integer variables at their optimal values, re-solving the model as an LP, and doing standard post-optimal analyses on the remaining continuous variables; but this tells you nothing about the integer variables, which presumably are the ones of interest.
Another MIP approach would be to choose the coefficients of your model that are of the most interest, and generate "scenarios" using values within a stated range created by a random number generator. Perhaps five or ten scenarios would be sufficient; you would solve each of them, and by some means compare, contrast, or average the answers that are obtained.
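The scenario tactic just described can be sketched in a few lines. The function name, the 10% range, and the coefficient names below are illustrative assumptions; the point is only the mechanics of generating randomly perturbed copies of selected coefficients.

```python
import random

def make_scenarios(coeffs, rel_range=0.10, n_scenarios=5, seed=0):
    """Generate perturbed copies of selected model coefficients.

    Each scenario draws every coefficient uniformly from within
    +/- rel_range of its nominal value.  Fixing the seed makes the
    scenarios reproducible between runs.
    """
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        scenarios.append({
            name: value * (1.0 + rng.uniform(-rel_range, rel_range))
            for name, value in coeffs.items()
        })
    return scenarios

nominal = {"profit_a": 12.0, "profit_b": 9.0}
for s in make_scenarios(nominal):
    print(s)   # solve the model once per scenario, then compare answers
```

Each dictionary of perturbed coefficients would then be substituted into the model and solved separately, after which the five or ten solutions can be compared, contrasted, or averaged as suggested above.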
Noting patterns in the solutions, for instance, may give you an idea of what solutions might be most stable. A third approach would be to consider a goal-programming formulation; perhaps your desire to see post-optimal analysis is an indication that some important aspect is missing from your model. Any reasonable simplex-based LP code can construct a starting vertex or "basic solution" for you, given the constraints and the objective function. Most codes go through a so-called two-phase procedure, wherein they first look for a feasible solution, and then work on getting an optimal solution.
The two phases are often grandly titled phase I and phase II. The first phase, if properly written, can begin at any infeasible basic solution, and commercial codes typically have a so-called crash routine to pick a reasonable start. It is generally not worth going to a lot of trouble to look for a better starting basic solution, unless you have already solved a similar problem.
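One standard way to set up phase I is to append one artificial variable per constraint and minimize their sum; a feasible basic solution exists exactly when that minimum is zero. The sketch below only assembles the phase-I data for equality constraints, as an illustration of the idea; the toy constraints and the function name are assumptions, not any particular code's format.

```python
def build_phase_one(A, b):
    """Given constraints A x = b, flip row signs so the RHS is
    nonnegative, append an identity column per row for the artificial
    variables, and return the augmented matrix, the new RHS, and the
    phase-I cost vector, which charges 1 for each artificial variable
    and 0 for the original variables."""
    m = len(A)
    n = len(A[0])
    augmented = []
    for i, row in enumerate(A):
        flip = -1.0 if b[i] < 0 else 1.0   # keep the RHS nonnegative
        art = [0.0] * m
        art[i] = 1.0
        augmented.append([flip * a for a in row] + art)
    rhs = [abs(x) for x in b]
    cost = [0.0] * n + [1.0] * m
    return augmented, rhs, cost

A = [[1.0, 2.0], [3.0, 1.0]]
b = [4.0, -5.0]
aug, rhs, cost = build_phase_one(A, b)
# The artificial columns form an identity, so x = 0 with the
# artificials equal to rhs is an immediate starting basic solution.
```

This is why phase I can begin at any infeasible point: the artificial columns always supply a trivially feasible starting basis, and driving their sum to zero recovers feasibility in the original variables.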
The optimal basis (the list of basic variables) from a similar problem often helps the simplex solver get a good start that substantially reduces the number of iterations to optimality. Commercial codes generally provide for saving an optimal basis and reusing it in this way, but free codes may not. Interior-point methods for LP have entirely different requirements for a good starting point.
Any reasonable interior-point-based LP code has its own routines for picking a starting point that is "well-centered" away from the constraints, in an appropriate sense. There is not much advantage to supplying your own starting point of any kind -- at least, not at the current state of the art -- and some codes do not even provide an option for giving a starting point. While outright cycling is rather rare in practice, it is quite common for the algorithm to reach a point where it temporarily stops making forward progress in terms of improvement in the objective function; this is termed "stalling", or more loosely known as "degeneracy", since it is caused by one or more basic variables taking on the value of a lower or upper bound.
In most cases, the algorithm will work through this nest of coincident vertices and then resume making tangible progress. However, in extreme cases the degeneracy is so bad that, to all intents and purposes, it can be considered cycling. The simplest remedy is to switch to a more robust optimizer; however, obviously that is not always an option (money!). Besides, they say it's a poor workman who blames his tools.
So, when one cannot change the optimizer, it's expedient to change the model. Not drastically, of course, but a little "noise" can usually help to break the ties that occur during the Simplex method. A procedure that can work nicely is to add, to the values in the RHS, random values roughly six orders of magnitude smaller. Depending on your model's formulation, such a perturbation may not even seriously affect the quality of the solution values.
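The RHS perturbation just suggested can be written down directly. The function name, the relative scale of 1e-6 (the "six orders of magnitude smaller" above), and the seed are illustrative choices for this sketch, not part of any solver's API.

```python
import random

def perturb_rhs(rhs, magnitude=1e-6, seed=42):
    """Break simplex ties by adding random noise roughly six orders
    of magnitude smaller than each right-hand-side value.  Zero RHS
    entries get absolute noise of size `magnitude` instead."""
    rng = random.Random(seed)
    out = []
    for b in rhs:
        scale = abs(b) if b != 0 else 1.0
        out.append(b + scale * magnitude * rng.uniform(-1.0, 1.0))
    return out

original = [100.0, 250.0, 0.0]
print(perturb_rhs(original))
```

Keeping the seed fixed makes the perturbed model reproducible, which matters if you later want to warm-start the original formulation from the perturbed model's final basis.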
However, if you want to switch back to the original formulation, the final solution basis for the perturbed model should be a useful starting point for a "cleanup" optimization phase. Depending on the code you are using, this may take some ingenuity to do, however. Another helpful tactic: if your optimization code has more than one solution algorithm, you can alternate among them.
When one algorithm gets stuck, begin again with another algorithm, using the most recent basis as a starting point. For instance, alternating between a primal and a dual method can move the solution away from a nasty point of degeneracy. Using partial pricing can be a useful tactic against true cycling, as it tends to reorder the columns. And of course interior-point algorithms are much less affected by (though not totally immune to) degeneracy.
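As a minimal sketch of the alternation tactic, the driver below cycles through a list of solver callables, handing each the most recent basis as a warm start. The `(status, basis)` interface and the toy stand-in solvers are assumptions made for illustration, not any real optimizer's API.

```python
def alternate_solvers(solvers, warm_start, max_rounds=6):
    """Cycle through alternative algorithms (e.g. primal and dual
    simplex), passing each one the most recent basis, until one of
    them reports an optimal solution or the round limit is hit."""
    basis = warm_start
    for round_no in range(max_rounds):
        solver = solvers[round_no % len(solvers)]
        status, basis = solver(basis)
        if status == "optimal":
            return status, basis
    return "stalled", basis

# Toy stand-ins: the "primal" stalls, the "dual" then finishes.
primal = lambda basis: ("stalled", basis + ["p"])
dual = lambda basis: ("optimal", basis + ["d"])
status, basis = alternate_solvers([primal, dual], [])
```

With a real code, `primal` and `dual` would be calls into the optimizer with the corresponding algorithm selected and the saved basis supplied as the starting point.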
Unfortunately, the optimizers richest in alternate algorithms and features also tend to be the least prone to problems with degeneracy in the first place. The reading list below is divided into the following categories: general references, textbooks, presentations of LP modelling systems, books containing source code, additional books, periodicals, and articles of interest. Regarding the common question of the choice of textbook for a college LP course, it's difficult to give a blanket answer because of the variety of topics that can be emphasized: brief overview of algorithms, deeper study of algorithms, theorems and proofs, complexity theory, efficient linear algebra, modelling techniques, solution analysis, and so on.
A small and unscientific poll of ORCS-L mailing list readers uncovered a consensus that [Chvatal] was in most ways pretty good, at least for an algorithmically oriented class; of course, some new candidate texts have been published in the meantime. For a class in modelling, a book about a commercial code would be useful (LINDO, AMPL, and GAMS were suggested), especially if the students are going to use such a code; and many are fond of [Williams], which presents a considerable variety of modelling examples.
Bazaraa, Jarvis and Sherali, Linear Programming and Network Flows. Graduate-level text on linear programming, network flows, and discrete optimization.
Chvatal, Linear Programming, Freeman. Undergrad or grad level.
Cook, W., Combinatorial Optimization, Wiley-Interscience.
Daellenbach, Hans G. Good for engineers. Currently out of print.
Fang, S., Prentice Hall.
Dantzig, George B. The most widely cited early textbook in the field.
Gass, Saul I., International Thomson Publishing.
Ignizio, J. Covers usual LP topics, plus interior-point, multi-objective and heuristic techniques. Updated version of an old standby.
Murtagh, B., McGraw-Hill. A good one after you've read an introductory text.
Murty, K.
Nash, S.
Nazareth, J.
Nemhauser, G. An advanced text that covers many theoretical and computational topics.
Nering, E., John Wiley, Chichester.
Saigal, R.
Schrijver, A.
Taha, H.
Thie, P.
Vanderbei, Robert J., Kluwer Academic Publishers. Balanced coverage of simplex and interior-point methods. Source code available on-line for all algorithms presented.
Williams, H. Little on algorithms, but excellent for learning what makes a good model.
Wright, Stephen J., SIAM Publications. Covers theoretical, practical and computational aspects of the most important and useful class of interior-point algorithms. The web page for this book contains current information on interior-point codes for linear programming, including links to their websites.
Wiley.
Greenberg, H.
Schrage, L.
Books containing source code:
Best and Ritter, Linear Programming: Active Set Analysis and Computer Programs, Prentice-Hall.
Bertsekas, D.
Bunday, Linear Programming in Basic (presumably the same publisher).
A special case of LP; contains Fortran source code.
Lau, H. Contains a section on optimization. Contains Fortran code, comes with a disk; also covers the Assignment Problem.
Comment: use their LP code with care.
Beasley, ed., Oxford University Press. Each chapter is a self-contained essay on one aspect of the subject.
Murty, Network Programming, Prentice Hall. Also contains a discussion of the complexity of the simplex method.
Reeves, ed. Contains chapters on tabu search, simulated annealing, genetic algorithms, neural nets, and Lagrangian relaxation.
It also has the most extensive collection of advertisements for commercial linear programming and other optimization software packages.
Interfaces frequently publishes interesting accounts of applications that make use of linear programming. Publications of the Mathematical Programming Society : Mathematical Programming contains technical articles on theory and computation. Optima , the Society's newsletter, incorporates survey articles and book reviews. Balas, E.