Abstract: This paper introduces QPDO, a primal-dual method for convex quadratic programs which builds upon and weaves together the proximal point algorithm and a damped semismooth Newton method. The outer proximal regularization yields a numerically stable method, and we interpret the proximal operator as the unconstrained minimization of the primal-dual proximal augmented Lagrangian function. This allows the inner Newton scheme to exploit sparse symmetric linear solvers and multi-rank factorization updates. Moreover, …
“…In this section we follow essentially the developments from [5,15] specializing our discussion for the operator T_L. The Proximal Point Method (PPM) [22] finds zeros of maximal monotone operators by recursively applying their proximal operator.…”
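The PPM recursion quoted above can be illustrated on a toy problem. The sketch below is not from either cited work: the 1-D quadratic, the parameter `lam`, and the function names are all illustrative assumptions, chosen because the proximal operator of a strongly convex quadratic has a closed form.

```python
# Minimal sketch of the Proximal Point Method (PPM) applied to the
# gradient operator of f(z) = 0.5*(z - a)**2, whose unique zero is z = a.
# All names and the test function are illustrative, not from the cited papers.

def prox_quadratic(x, a, lam):
    """Closed-form proximal operator of lam*f at x:
    argmin_z 0.5*(z - a)**2 + (1/(2*lam))*(z - x)**2."""
    return (lam * a + x) / (1.0 + lam)

def proximal_point_method(x0, a, lam=1.0, iters=50):
    """Recursively apply the proximal operator, as PPM prescribes."""
    x = x0
    for _ in range(iters):
        x = prox_quadratic(x, a, lam)
    return x
```

Each application of the proximal operator contracts the distance to the zero of the operator by a factor 1/(1 + lam), so the iterates converge linearly to z = a.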
In this work, in the context of Linear and Quadratic Programming, we interpret Primal Dual Regularized Interior Point Methods (PDR-IPMs) in the framework of the Proximal Point Method. The resulting Proximal Stabilized IPM (PS-IPM) is strongly supported by theoretical results concerning convergence and the rate of convergence, and can handle degenerate problems. Moreover, in the second part of this work, we analyse the interactions between the regularization parameters and the computational footprint of the linear algebra routines used to solve the Newton linear systems. In particular, when these systems are solved using an iterative Krylov method, we propose general-purpose preconditioners which, exploiting the regularization and a new rearrangement of the Schur complement, remain attractive for a series of subsequent IPM iterations. Therefore, they need to be recomputed only in a fraction of the total IPM iterations. The resulting regularized second order methods, for which low-frequency updates of the preconditioners are allowed, pave the way for an alternative third way between first and second order methods.
“…The latter reformulation amounts to a proximal dual regularization of (P) and corresponds to a lifted representation of min L µ (•, y), thus showing that the approach effectively consists in solving a sequence of subproblems, each one being a proximally regularized version of (P). Yielding feasible and more regular subproblems, this (proximal) regularization strategy has been explored and exploited in different contexts; some recent works are, e.g., [33,41,40,20].…”
We investigate and develop numerical methods for finite dimensional constrained structured optimization problems. Offering a comprehensive yet simple and expressive language, this problem class provides a modeling framework for a variety of applications. A general and flexible algorithm is proposed that interlaces proximal methods and safeguarded augmented Lagrangian schemes. We provide a theoretical characterization of the algorithm and its asymptotic properties, deriving convergence results for fully nonconvex problems. Adopting a proximal gradient method with an oracle as a formal tool, it is demonstrated how the inner subproblems can be solved by off-the-shelf methods for composite optimization, without introducing slack variables and despite the possibly set-valued projections. Finally, we describe our open-source matrix-free implementation ALPS of the proposed algorithm and test it numerically. Illustrative examples show the versatility of constrained structured programs as a modeling tool, expose difficulties arising in this vast problem class and highlight benefits of the implicit approach developed.
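The inner solver the abstract mentions, a proximal gradient (forward-backward) method for composite objectives, can be sketched in a minimal form. Everything below (the 1-D composite problem, `soft_threshold`, the fixed step size) is an illustrative assumption, not the ALPS implementation.

```python
# Minimal sketch of a proximal gradient method for a composite objective
# min_x 0.5*(x - a)**2 + lam*|x|: a gradient step on the smooth term,
# then the prox of the nonsmooth term. Names are illustrative assumptions.

def soft_threshold(x, t):
    """Proximal operator of t*|.| (shrinks x toward 0 by t)."""
    return max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0)

def proximal_gradient(a, lam, step=0.5, iters=100):
    """Forward-backward splitting on 0.5*(x - a)**2 + lam*|x|."""
    x = 0.0
    for _ in range(iters):
        x = soft_threshold(x - step * (x - a), step * lam)
    return x
```

For a = 3 and lam = 1 the minimizer is the soft-thresholded value soft(3, 1) = 2, which the iteration reaches at a linear rate governed by the step size.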
“…Most globalized Newton-like approaches or proximal point variants studied in the literature are developed for composite programming problems in which either g(x) = 0 (e.g. see [13,20,30,36,41]) or K = R^n (e.g. see [24,33,40,46,50]).…”
Section: Introduction
“…see [13,32,43,45,46,50,52,56,65]), variants of the proximal point method (e.g. see [20,24,25,38,39,41,49,59]), or interior point methods (IPMs) applied to a reformulation of (P) (e.g. see [21,28,31,51]).…”
Section: Introduction
“…Unlike most proximal point methods given in the literature (e.g. see the primal approaches in [38,39,49], the dual approaches in [40,41,73] or the primal-dual approaches in [30,20,25,59]), the proposed method is introducing proximal terms for each primal and dual variable of the problem, and this results in linear systems which are easy to precondition and solve, within the semi-smooth Newton method. Additionally, we explicitly deal with each of the two non-smooth terms of the objective in (P).…”
In this paper we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by the use of a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strict convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM, and hence are well-tuned, and do not depend on the problem being solved. Furthermore, we study the behaviour of the method when it is applied to an infeasible problem, and identify a necessary condition for infeasibility. The latter is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it over a set of small- to large-scale linear and convex quadratic programming problems. The numerical results demonstrate the benefits of using regularization in IPMs as well as the reliability of the method.
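The outer loop that IP-PMM builds on, the Proximal Method of Multipliers, can be sketched on a toy problem. The sketch below is an assumption-laden illustration, not the IP-PMM algorithm: it uses a 1-D equality-constrained QP and a closed-form subproblem solve in place of the inner IPM iterations, and all names and parameter values are illustrative.

```python
# Minimal sketch of the Proximal Method of Multipliers (PMM) on the
# 1-D problem: min 0.5*q*x**2 + c*x  subject to  a*x = b.
# Each outer iteration minimizes the proximal augmented Lagrangian
# (here available in closed form), then takes a dual ascent step.
# All names and parameters are illustrative assumptions.

def pmm_1d(q, c, a, b, beta=10.0, rho=1.0, iters=200):
    """PMM outer loop: primal subproblem solve, then multiplier update.
    beta is the augmented Lagrangian penalty, rho the proximal weight."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        # Minimizer of 0.5*q*x'**2 + c*x' + y*(a*x' - b)
        #            + 0.5*beta*(a*x' - b)**2 + 0.5*rho*(x' - x)**2
        x = (-c - y * a + beta * a * b + rho * x) / (q + beta * a * a + rho)
        # Dual (multiplier) update on the constraint residual.
        y = y + beta * (a * x - b)
    return x, y
```

For q = 2, c = 1, a = 1, b = 3 the KKT solution is x* = 3 with multiplier y* = -7; the coupled primal-dual error contracts linearly, so the iterates reach it quickly.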