Interior point methods for optimization have been around for more than 25 years now. Their presence has shaken up the field of optimization. Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. Among the most impressive of them are their low-degree polynomial worst-case complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Interior point methods are competitive when dealing with small problems of dimensions below one million constraints and variables, and are beyond competition when applied to large problems of dimensions going into millions of constraints and variables.

In this survey we will discuss several issues related to interior point methods, including the proof of the worst-case complexity result, the reasons for their amazingly fast practical convergence, and the features responsible for their ability to solve very large problems. The ever-growing sizes of optimization problems impose new requirements on optimization methods and software. In the final part of this paper we will therefore address a redesign of interior point methods to allow them to work in a matrix-free regime and to make them well suited to solving even larger problems.

Linear programs (LPs) have been at the centre of attention of the optimization field since the 1940s. For several decades the simplex algorithm [60, 23] was the only method available to solve this important class of optimization problems. Although in theory the simplex method is a non-polynomial algorithm (in the worst case it might have to make a very large number of steps which depends exponentially on the problem dimension), in practice it has proved to be a very efficient and reliable method.
By its design, the simplex method visits the vertices of the polytope defined by the constraints and, because their number may be astronomical, the method is exposed to the danger of having to visit many of them before reaching an optimal one. No polynomial simplex-type pivoting algorithm is known to date and it is hard to believe that one will ever be found, although researchers have not lost hope and continue their search for one [99].

The first polynomial algorithm for LP was developed by Khachiyan [66]. His ellipsoid algorithm constructs a sequence of ellipsoids of shrinking volume, each guaranteed to contain an optimal solution of the LP; the centres of these ellipsoids form a sequence of points convergent to an optimum. The construction of ellipsoids provides a guarantee that steady progress towards optimality can be made from one iteration to another. The development of the ellipsoid method made a huge impact on the theory of linear programming, but the method has never become a competitive alternative to the simplex method because the per-iteration cost of the linear algebra operations to update the ellipsoids is too high [41].

Karmarkar's projective LP algorithm [61] could be interpreted as a refinement of the ellipsoid method. Instead of inscribing an ellipsoid into the ill-conditioned corner of the feasible polytope, Karmarkar's algorithm employs projective geometry to transform this "corner" into a regular, well-conditioned simplex, inscribes a ball into it, and exploits the fact that optimization on a ball is a trivial operation. Additionally, Karmarkar's method uses the notion of a potential function (a sort of merit function) to guarantee a steady reduction of the distance to optimality at each iteration.
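For the canonical form assumed by Karmarkar's method (a feasible simplex with optimal objective value zero), the potential function alluded to above is usually written as follows; this is the textbook formulation rather than this survey's own notation:

```latex
% Karmarkar's potential for  min c^T x  s.t.  Ax = 0, e^T x = n, x >= 0,
% under the assumption that the optimal objective value is zero:
\phi(x) \;=\; n \ln\!\bigl(c^{T}x\bigr) \;-\; \sum_{j=1}^{n} \ln x_{j}.
```

A guaranteed constant decrease of $\phi$ at every iteration translates into a polynomial bound on the iteration count: the first term forces $c^{T}x \to 0$, while the second penalises premature approach to the boundary of the feasible set.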
Although a single iteration of Karmarkar's method is expensive (it requires a projection operation to be applied, and this operation changes at each iteration), optimality is reached after a relatively small number of iterations, which makes the algorithm computationally attractive.

Karmarkar's proof of the worst-case complexity result was rather complicated. Its development was accompanied by claims of unprecedented efficiency for the new method, which managed to attract huge interest from the optimization community. The efforts of numerous researchers soon led to improvements and clarifications of the theory. Gill et al. [40] established an equivalence between Karmarkar's projective method and the projected Newton barrier method. This increased interest in the role of barrier functions in the theory of interior point methods and drew the community's attention to numerous advantageous features of logarithmic barrier functions. (Interestingly, the use of a logarithmic barrier method in the context of optimization had already been proposed in 1955 by Frisch [37] and studied extensively by Fiacco and McCormick [32] in the context of nonlinear optimization.)

It is broadly accepted today that an infeasible-primal-dual algorithm is the most efficient interior point method. A number of attractive features of this method follow from the fact that a logarithmic barrier method is applied to the primal and the dual problems at the same time. This was first suggested by Megiddo [77]. Independently, Kojima, Mizuno and Yoshise [69] developed the theoretical background of this method and gave the first complexity results. Further progress was made by Kojima, Megiddo and Mizuno [68], who provided good theoretical results for the primal-dual algorithm with extra safeguards, and these could be translated into computational practice.
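The logarithmic barrier approach mentioned above replaces the nonnegativity constraints of the LP by a penalty term; in the standard textbook formulation (conventional notation, with barrier parameter $\mu > 0$):

```latex
% Primal barrier subproblem for the standard-form LP
%   min c^T x   s.t.   Ax = b,  x >= 0:
\min_{x}\; c^{T}x \;-\; \mu \sum_{j=1}^{n} \ln x_{j}
\qquad \text{s.t. } Ax = b .
```

As $\mu \to 0$, the minimizers of these subproblems trace the central path towards an optimal solution; applying the barrier simultaneously to the primal and the dual problems is what gives rise to the primal-dual methods discussed here.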
The interpretation of interior point methods as algorithms which follow a path of centres (the central trajectory) on their way to an optimal solution was gaining wide acceptance [47]. In the late 1980s, Mehrotra and, independently, Lustig, Marsten, Shanno and their collaborators made impressive progress in the implementation of interior point methods and also provided a better understanding of the crucial role played by logarithmic barrier functions in the theory [76]. By the early 1990s sufficient evidence had been gathered to justify claims of the spectacular efficiency of IPMs for very large scale linear programming [78, 73, 74]. A new class of optimization methods able to compete with the simplex method was quickly gaining wide appreciation.

It is worth mentioning at this point that the presence of interior point methods has put considerable pressure on developers of commercial simplex implementations and has led to impressive developments of the simplex method over the last 25 years [13, 33, 54, 75, 98]. Both methods are widely used nowadays and continue to compete with each other. Although large problem size generally seems to favour interior point methods, it is not always possible to predict the winner on a particular class of problems. For example, the sparsity structure of the problem determines the cost of linear algebra operations and therefore the efficiency of a given algorithm, sometimes leading to astonishing results by significantly favouring one method over the other. The simplex method easily takes advantage of any hyper-sparsity in a problem [54], but its sequential nature makes it difficult to parallelise [53].
On the other hand, interior point methods are able to exploit any block-matrix structure in the linear algebra operations, and therefore significant speed-ups can be achieved by massive parallelisation [44].

The application of a nonlinear programming technique (based on the use of a logarithmic barrier function) to linear optimization was the key reason for the success of IPMs. Soon after the major role played by the logarithmic barrier function [40] had been understood, a similar methodology was applied to solve quadratic [103] and nonlinear optimization problems [104] and indeed, as nicely pointed out by Forsgren et al. [34], "an especially appealing aspect of the interior-point revolution is its spirit of unification, which has brought together areas of optimization that for many years were treated as firmly disjoint".

Nesterov and Nemirovskii [85] provided an insightful explanation of why the logarithmic function is such a well-suited barrier function. Its advantage results from the self-concordance property, which makes the logarithmic function particularly attractive for use in an optimization technique based on the Newton method. The theory of self-concordant barriers [85] further expanded the area of IPM applications, covering semidefinite programming and, more generally, conic optimization, which also includes another important class, second-order cone programming. In this survey we will focus on linear and convex quadratic programming problems, the classes of optimization problems which are by far the most frequently used in various real-life applications. Readers interested in nonlinear, semidefinite and second-order cone programming are referred to the excellent surveys of Forsgren et al. [34], Vandenberghe and Boyd [101], and Lobo et al. [70], respectively.

In this paper we will (gently) guide the reader through the major issues related to the fascinating theory and implementation of IPMs. The survey is organised as follows.
In Section 2 we will introduce the quadratic optimization problem, define the notation used in the paper, and discuss in detail an essential difference between the simplex and interior point methods, namely the way in which these methods deal with the complementarity condition. In Section 3 we will perform the worst-case analysis of a particular interior point algorithm for convex quadratic programming. We will analyse the feasible algorithm operating in a small neighbourhood of the central path induced by the 2-norm.
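For reference, the complementarity condition and the 2-norm neighbourhood of the central path used in such an analysis take the following standard form (written here for the LP case; conventional IPM notation, with $X = \mathrm{diag}(x)$, $S = \mathrm{diag}(s)$ and $e = (1,\dots,1)^{T}$, not this survey's own notation from Section 2):

```latex
% Central path: perturbed KKT conditions with parameter mu > 0
Ax = b, \qquad A^{T}y + s = c, \qquad (x,s) \ge 0, \qquad XSe = \mu e;
% 2-norm neighbourhood of width theta in (0,1):
\mathcal{N}_{2}(\theta) \;=\; \bigl\{ (x,y,s) \;:\; \|XSe - \mu e\|_{2} \le \theta\mu \bigr\}.
```

Setting $\mu = 0$ recovers the exact complementarity condition $x_{j}s_{j} = 0$ for all $j$: the simplex method enforces it combinatorially at every vertex it visits, whereas an interior point method keeps all products $x_{j}s_{j}$ strictly positive and satisfies complementarity only in the limit.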