In this paper, we study the convergence rate of the gradient (or steepest descent) method with fixed step lengths for finding a stationary point of an L-smooth function. We establish a new convergence rate, and show that the bound may be exact in some cases, in particular when all step lengths lie in the interval (0, 1/L]. In addition, we derive an optimal step length with respect to the new bound.
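As an illustration only (not taken from the paper), the following minimal Python sketch runs the gradient method with a fixed step length on a hypothetical L-smooth quadratic; the test function, its smoothness constant, and the step-length choice t = 1/L are assumptions made for the example.

```python
import numpy as np

# Illustrative L-smooth test function: f(x) = 0.5 * x^T A x, with L = largest eigenvalue of A.
A = np.diag([1.0, 4.0, 10.0])
L = np.max(np.linalg.eigvalsh(A))

def grad(x):
    return A @ x

# Gradient method with a fixed step length t, here chosen in (0, 1/L].
def gradient_method(x0, t, num_iters):
    x = x0.copy()
    for _ in range(num_iters):
        x = x - t * grad(x)
    return x

x0 = np.array([1.0, -2.0, 3.0])
x_final = gradient_method(x0, t=1.0 / L, num_iters=100)
print("final gradient norm:", np.linalg.norm(grad(x_final)))
```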
In this paper, we study the convergence rate of the DCA (difference-of-convex algorithm), also known as the convex-concave procedure. The DCA is a popular algorithm for difference-of-convex (DC) problems and is known to converge to a stationary point under some assumptions. We derive a worst-case convergence rate of O(1/√N) for the objective gradient norm after N iterations for certain classes of unconstrained DC problems. For constrained DC problems with convex feasible sets, we obtain an O(1/N) convergence rate (in a well-defined sense). We give an example showing that the order of convergence cannot be improved for a certain class of DC functions. In addition, we obtain the same convergence rate for the DCA with regularization. Our results complement recent convergence rate results from the literature in which the objective function is assumed to satisfy the Łojasiewicz gradient inequality at stationary points. In particular, we do not make this assumption.
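The following is a minimal sketch of a DCA iteration on a hypothetical DC quadratic f(x) = g(x) - h(x) with g(x) = 0.5 x^T A x and h(x) = 0.5 x^T B x; the decomposition, the matrices, and the closed-form subproblem solution are illustrative assumptions, not the setting analyzed in the paper.

```python
import numpy as np

# Hypothetical DC decomposition f = g - h with g(x) = 0.5 x^T A x, h(x) = 0.5 x^T B x.
A = np.array([[5.0, 1.0], [1.0, 4.0]])   # positive definite
B = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive definite

def grad_f(x):
    return A @ x - B @ x

def dca(x0, num_iters):
    x = x0.copy()
    for _ in range(num_iters):
        # Linearize h at the current point x: the convex subproblem is
        # min_y 0.5 y^T A y - (B x)^T y, whose unique minimizer solves A y = B x.
        x = np.linalg.solve(A, B @ x)
    return x

x = dca(np.array([3.0, -2.0]), num_iters=50)
print("gradient norm at final iterate:", np.linalg.norm(grad_f(x)))
```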
In this paper, we derive a new linear convergence rate for the gradient method with fixed step lengths for non-convex smooth optimization problems satisfying the Polyak-Łojasiewicz (PL) inequality. We establish that the PL inequality is a necessary and sufficient condition for linear convergence to the optimal value for this class of problems. We list some related classes of functions for which the gradient method may enjoy a linear convergence rate, and we investigate their relationship with the PL inequality.
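As a small numerical illustration (the test problem is an assumption, not taken from the paper), a rank-deficient least-squares objective satisfies the PL inequality without being strongly convex, and gradient descent with fixed step 1/L converges linearly in function value on it:

```python
import numpy as np

# Hypothetical rank-deficient least-squares problem: f(x) = 0.5 * ||C x - b||^2.
# f is not strongly convex (C has a nontrivial null space) but satisfies the PL inequality.
rng = np.random.default_rng(0)
C = rng.standard_normal((20, 10)) @ np.diag([1.0] * 5 + [0.0] * 5)  # rank 5
b = rng.standard_normal(20)

L = np.linalg.norm(C, 2) ** 2   # smoothness constant of f (largest singular value squared)
f_star = 0.5 * np.linalg.norm(C @ np.linalg.lstsq(C, b, rcond=None)[0] - b) ** 2

x = np.zeros(10)
for k in range(200):
    x = x - (1.0 / L) * (C.T @ (C @ x - b))   # gradient step with fixed step 1/L

gap = 0.5 * np.linalg.norm(C @ x - b) ** 2 - f_star
print("optimality gap after 200 iterations:", gap)   # shrinks geometrically with the iteration count
```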
In this paper, we study the gradient descent-ascent method for convex-concave saddle-point problems. We derive a new non-asymptotic global convergence rate in terms of the distance to the solution set by using the semidefinite programming performance estimation method. The resulting convergence rate incorporates most parameters of the problem, and it is exact, for one iteration, for a large class of strongly convex-strongly concave saddle-point problems. We also investigate the algorithm without strong convexity and provide necessary and sufficient conditions under which the gradient descent-ascent method enjoys linear convergence.
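A minimal sketch of simultaneous gradient descent-ascent on an assumed strongly convex-strongly concave test problem; the bilinear-plus-quadratic saddle function, its parameters, and the step size are illustrative choices, not those studied in the paper.

```python
import numpy as np

# Illustrative saddle-point problem: L(x, y) = 0.5*mu_x*||x||^2 + x^T B y - 0.5*mu_y*||y||^2.
mu_x, mu_y = 1.0, 1.0
B = np.array([[2.0, 0.5], [0.5, 1.0]])

def grad_x(x, y):
    return mu_x * x + B @ y        # gradient of L in x (we descend along -grad_x)

def grad_y(x, y):
    return B.T @ x - mu_y * y      # gradient of L in y (we ascend along +grad_y)

def gda(x0, y0, step, num_iters):
    x, y = x0.copy(), y0.copy()
    for _ in range(num_iters):
        x, y = x - step * grad_x(x, y), y + step * grad_y(x, y)  # simultaneous update
    return x, y

x, y = gda(np.ones(2), np.ones(2), step=0.1, num_iters=500)
print("distance to the saddle point (0, 0):", np.linalg.norm(np.concatenate([x, y])))
```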