2018
DOI: 10.1093/imanum/dry022
An inexact regularized Newton framework with a worst-case iteration complexity of $ {\mathscr O}(\varepsilon^{-3/2}) $ for nonconvex optimization

Abstract: An algorithm for solving smooth nonconvex optimization problems is proposed that, in the worst case, takes O(ε^{-3/2}) iterations to drive the norm of the gradient of the objective function below a prescribed positive real number ε and can take O(ε^{-3}) iterations to drive the leftmost eigenvalue of the Hessian of the objective above −ε. The proposed algorithm is a general framework that covers a wide range of techniques including quadratically and cubically regularized Newton methods, such as the Adaptive Regu…

Cited by 13 publications (13 citation statements)
References 22 publications
“…This bound was first established for a form of cubic regularization of Newton's method [24]. Following this paper, numerous other algorithms have also been proposed which match this bound, see for example [3,8,14,13,23].…”
Section: Complexity in Nonconvex Optimization
confidence: 90%
“…A trust region method with an O(ε_g^{-3/2}) complexity for achieving approximate first-order stationarity was proposed and analyzed in [3]. This method can be seen, along with that in [1], as a special case of the general framework in [4] for achieving this order of complexity. One can also derive a trust region method with a fixed trust region radius that, with a concise analysis, leads to an O(ε_g^{-3/2}) complexity.…”
Section: A Strategy With a Fixed Trust Region Radius
confidence: 99%
“…Second, the algorithm requires exact subproblem solutions. This restriction might be relaxed using ideas such as in [4], but one cannot simply employ Cauchy steps as are allowed in the strategies in Sections 2.3 and 2.4. Third, the algorithm depends on the choice of ε, meaning that the desired accuracy needs to be chosen in advance, and even early iterations will behave differently depending on the final accuracy desired.…”
Section: A Strategy With a Fixed Trust Region Radius
confidence: 99%
“…These examples provide lower bounds on the worst-case evaluation complexity of methods in our class when applied to smooth problems satisfying the relevant assumptions. Furthermore, for α = 1, this lower bound is of the same order in ε as the upper bound on the worst-case evaluation complexity of the cubic regularization method and other methods in a class of methods proposed in [36] or in [65], thus implying that these methods have optimal worst-case evaluation complexity within a wider class of second-order methods, and that Newton's method is suboptimal.…”
mentioning
confidence: 90%
“…From a worst-case complexity point of view, one can do better when a cubic regularization/perturbation of the Newton direction is used [54,63,16,36]; such a method iteratively calculates step corrections by (exactly or approximately) minimizing a cubic model formed of a quadratic approximation of the objective and the cube of a weighted norm of the step. For such a method, the worst-case global complexity improves to O(ε^{-3/2}) [63,16] for problems whose gradients and Hessians are Lipschitz continuous as above; this bound is also essentially sharp [15].…”
Section: Introduction
confidence: 99%
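The cubic-model step described in the last citing statement can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: it approximately minimizes the cubic model g·s + ½ s·Hs + (σ/3)‖s‖³ via a fixed-point iteration on the shift λ = σ‖s‖, and it assumes the shifted Hessian H + λI stays positive definite (the function name `cubic_reg_step` and its parameters are hypothetical).

```python
import numpy as np

def cubic_reg_step(g, H, sigma, iters=50):
    """Approximately minimize the cubic model
        m(s) = g @ s + 0.5 * s @ H @ s + (sigma / 3) * ||s||**3
    The first-order condition is (H + lambda * I) s = -g with
    lambda = sigma * ||s||, which we solve by fixed-point iteration.
    Caveat: assumes H + lambda * I remains positive definite; robust
    solvers handle the hard (indefinite) case separately."""
    n = g.size
    lam = 0.0
    s = np.zeros(n)
    for _ in range(iters):
        s = np.linalg.solve(H + lam * np.eye(n), -g)
        lam = sigma * np.linalg.norm(s)
    return s

# Toy usage: g = [1, 0], H = I, sigma = 1. The shift satisfies
# lam * (1 + lam) = 1, so ||s|| = lam ≈ 0.618 (golden-ratio conjugate).
g = np.array([1.0, 0.0])
H = np.eye(2)
s = cubic_reg_step(g, H, sigma=1.0)
```

Note the contrast with a pure Newton step (here s = −g): the cubic term shrinks the step, which is what yields the improved O(ε^{-3/2}) worst-case bound in the methods cited above.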