In this paper, we study the convergence rate of the gradient (or steepest descent) method with fixed step lengths for finding a stationary point of an L-smooth function. We establish a new convergence rate and show that the bound may be exact in some cases, in particular when all step lengths lie in the interval (0, 1/L]. In addition, we derive an optimal step length with respect to the new bound.
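To make the method under study concrete, the following is a minimal sketch of the gradient method with a fixed step length t = 1/L on an L-smooth quadratic. The choice of objective (f(x) = 0.5 xᵀAx), the matrix A, the iteration budget, and the stopping tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative L-smooth objective: f(x) = 0.5 * x^T A x, so grad f(x) = A x.
# A is an assumed toy example; its largest eigenvalue gives the smoothness constant L.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
L = np.linalg.eigvalsh(A).max()
t = 1.0 / L  # fixed step length in the interval (0, 1/L]

x = np.array([1.0, 1.0])
for _ in range(1000):
    g = A @ x                        # gradient of f at the current iterate
    if np.linalg.norm(g) < 1e-8:     # stationarity measure: ||grad f(x)||
        break
    x = x - t * g                    # fixed-step gradient step

print(x)  # approaches the stationary point x = 0
```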
In this paper, we study the convergence rate of the DCA (difference-of-convex algorithm), also known as the convex-concave procedure. The DCA is a popular algorithm for difference-of-convex (DC) problems and is known to converge to a stationary point under some assumptions. We derive a worst-case convergence rate of O(1/√N) for the objective gradient norm after N iterations for certain classes of unconstrained DC problems. For constrained DC problems with convex feasible sets, we obtain an O(1/N) convergence rate (in a well-defined sense). We give an example showing that the order of convergence cannot be improved for a certain class of DC functions. In addition, we obtain the same convergence rate for the DCA with regularization. Our results complement recent convergence rate results from the literature that assume the objective function satisfies the Łojasiewicz gradient inequality at stationary points; in particular, we do not make this assumption.
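To illustrate the iteration being analyzed, here is a minimal sketch of the DCA on a toy unconstrained DC problem. The decomposition f(x) = g(x) - h(x) with g(x) = x⁴/4 and h(x) = x²/2 (both convex) is our own illustrative assumption, not an example from the paper; each DCA step linearizes h at the current iterate and minimizes the resulting convex surrogate.

```python
import numpy as np

# Toy DC objective (assumed for illustration): f(x) = g(x) - h(x),
# with g(x) = x**4 / 4 and h(x) = x**2 / 2, both convex.
# DCA step: take y_k = h'(x_k), then solve the convex subproblem
#   x_{k+1} = argmin_x g(x) - y_k * x,
# which here has the closed form x_{k+1} = cbrt(y_k)  (from x^3 - y_k = 0).

def dca(x0, num_iters=50):
    x = x0
    for _ in range(num_iters):
        y = x           # y_k = h'(x_k) = x_k (gradient of the concave part)
        x = np.cbrt(y)  # minimizer of the convex surrogate g(x) - y * x
    return x

x_star = dca(2.0)
grad = x_star**3 - x_star  # f'(x) = x^3 - x; stationary points are 0, +1, -1
print(x_star, grad)        # the iterates converge to the stationary point x = 1
```

Starting from x₀ = 2, the iterates 2^(1/3), 2^(1/9), ... decrease monotonically toward the stationary point x = 1, where the objective gradient vanishes, matching the qualitative behavior the rate results describe.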