Interior gradient (subgradient) and proximal methods for convex constrained minimization have been widely studied, in particular for optimization problems over the nonnegative orthant. These methods use non-Euclidean projections and proximal distance functions to exploit the geometry of the constraints. In this paper, we identify a simple mechanism that allows us to derive global convergence of the produced iterates, as well as improved global rate-of-convergence estimates, for a wide class of such methods and for more general convex constraints. Our results are illustrated with many applications and examples, including some new explicit and simple algorithms for conic optimization problems. In particular, we derive a class of interior gradient algorithms that exhibits an O(k^{-2}) global convergence rate estimate.
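To make the idea concrete, the sketch below shows the simplest interior gradient step of this kind: mirror descent on the unit simplex with the entropy kernel, where the non-Euclidean prox turns the update into a multiplicative step, so iterates stay strictly inside the constraint set and no Euclidean projection is needed. This is a minimal illustration only, not the paper's accelerated method (the accelerated variants are what attain the O(k^{-2}) rate); the names grad_f, x0, and lr are illustrative assumptions.

```python
import numpy as np

def entropic_mirror_descent(grad_f, x0, steps=500, lr=0.1):
    """Basic interior gradient step on the unit simplex with the
    entropy kernel: x_{k+1} is proportional to x_k * exp(-lr * grad_f(x_k)).
    Iterates remain strictly positive, so the simplex constraint is
    enforced by the geometry of the prox, not by a Euclidean projection."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = x * np.exp(-lr * grad_f(x))   # multiplicative (entropic prox) update
        x /= x.sum()                      # renormalize onto the simplex
    return x

# Example: minimize 0.5 * ||A x - b||^2 over the unit simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x = entropic_mirror_descent(lambda x: A.T @ (A @ x - b),
                            x0=np.full(5, 0.2),
                            lr=1.0 / np.linalg.norm(A, 2) ** 2)
```

The step size is set to the reciprocal of the gradient's Lipschitz constant, the standard choice for smooth objectives; the basic scheme above converges at an O(1/k) rate in function values, which the accelerated interior gradient algorithms of the paper improve to O(k^{-2}).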
We consider a wide class of penalty and barrier methods for convex programming, one that includes a number of specific functions proposed in the literature. We provide a systematic way to generate penalty and barrier functions in this class, and we analyze the existence of the primal and dual optimal paths generated by these penalty methods, as well as their convergence to the primal and dual optimal sets. For linear programming we prove that these optimal paths converge to single points.
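As a concrete instance, here is a minimal sketch of one classical member of this family, the logarithmic barrier (the names f, gs, mus and the use of a general-purpose minimizer are illustrative assumptions, not the paper's construction): driving the penalty parameter mu to zero traces the primal optimal path, while mu / (-g_i(x)) gives the accompanying dual path of multiplier estimates.

```python
import numpy as np
from scipy.optimize import minimize

def log_barrier_path(f, gs, x0, mus=(1.0, 0.1, 0.01, 1e-3)):
    """Minimize f(x) subject to g_i(x) <= 0 via unconstrained minimization
    of the barrier function f(x) - mu * sum_i log(-g_i(x)).
    Decreasing mu traces the primal optimal path; the quantities
    mu / -g_i(x) are the corresponding dual multiplier estimates."""
    x, primal_path, dual_path = np.asarray(x0, dtype=float), [], []
    for mu in mus:
        def phi(x, mu=mu):
            slack = np.array([-g(x) for g in gs])
            if np.any(slack <= 0):          # outside the interior: reject
                return np.inf
            return f(x) - mu * np.log(slack).sum()
        x = minimize(phi, x, method="Nelder-Mead").x  # warm start at previous x
        primal_path.append(x.copy())
        dual_path.append([mu / -g(x) for g in gs])
    return primal_path, dual_path

# Example LP: minimize x1 + x2 subject to x1 >= 0, x2 >= 0, x1 + x2 >= 1.
f = lambda x: x[0] + x[1]
gs = [lambda x: -x[0], lambda x: -x[1], lambda x: 1.0 - x[0] - x[1]]
primal, dual = log_barrier_path(f, gs, x0=[0.6, 0.6])
```

In this example the LP has a whole segment of optimal solutions (x1 + x2 = 1 with x >= 0), yet the barrier path converges to the single point (0.5, 0.5), the analytic center of the optimal face, illustrating the single-point convergence result for linear programming.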