2020
DOI: 10.1093/imanum/drz074

A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees

Abstract: We describe an algorithm based on a logarithmic barrier function, Newton's method, and linear conjugate gradients that seeks an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of the objective function and/or a matrix-vector multiplication involving the Hessian of the objective. The approach can be implemented without explicit calculation or storag…
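
The abstract names the main ingredients: an outer log-barrier loop over the positive orthant, inner Newton steps computed by linear conjugate gradients, and access to the Hessian only through matrix-vector products. The following Python sketch illustrates that overall structure only; it is not the authors' algorithm (it omits the negative-curvature handling and step-size safeguards that underlie the complexity bound), and every function name, parameter, and tolerance below is illustrative.

```python
# Minimal sketch of a log-barrier Newton-CG loop, assuming the user
# supplies f's gradient and a Hessian-vector product. Illustrative only.
import numpy as np

def barrier_newton_cg(grad, hess_vec, x0, mu=1.0, tol=1e-6,
                      mu_shrink=0.2, max_outer=30, max_inner=50):
    """Approximately minimize f over x > 0 via a log barrier.

    hess_vec(x, v) returns H(x) @ v, so the Hessian is never formed
    or stored explicitly -- only Hessian-vector products are needed.
    """
    x = x0.copy()
    for _ in range(max_outer):
        for _ in range(max_inner):
            # Gradient of the barrier phi(x) = f(x) - mu * sum(log x)
            g = grad(x) - mu / x
            if np.linalg.norm(g) <= tol * mu:
                break
            # Barrier Hessian-vector product: H(x) v + mu * v / x^2
            hv = lambda v: hess_vec(x, v) + mu * v / x**2
            d = cg(hv, -g, tol=0.5 * np.linalg.norm(g))
            # Fraction-to-the-boundary backtracking keeps x > 0
            t = 1.0
            while np.any(x + t * d <= 0):
                t *= 0.5
            x = x + t * d
        mu *= mu_shrink          # tighten the barrier parameter
        if mu < tol:
            break
    return x

def cg(matvec, b, tol, max_iter=200):
    """Plain linear conjugate gradients for matvec(x) = b."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        pAp = p @ Ap
        if pAp <= 0:             # non-positive curvature: stop early
            break                # (the paper's method exploits this case)
        alpha = rs / pAp
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

On a strictly convex quadratic, for instance, calling barrier_newton_cg(grad, hess_vec, np.ones(n)) drives the barrier parameter toward zero and returns a point near the minimizer over the nonnegative orthant.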

Cited by 18 publications (40 citation statements). References 32 publications (77 reference statements).

“…In [5], a potentially expensive and complicated cubic regularized subproblem (itself a bound-constrained nonconvex problem) needs to be solved to approximate first-order optimality at each iteration. The algorithm proposed in [12] is also not practical, as our numerical experiments show.…”
Section: Introduction (mentioning; confidence: 94%)

“…There has been renewed interest in devising optimization algorithms with worst-case complexity guarantees. For nonconvex problems involving bound constraints, iteration/evaluation¹ complexity results have been derived for an interior-point method [4], log-barrier methods [8,12], and methods using cubic regularization [5]. If just gradient (first derivative) information is used, the iteration/evaluation complexity to obtain an approximate first-order point with accuracy measure ε is typically O(ε⁻²) [4,8,5].…”
Section: Introduction (mentioning; confidence: 99%)

“…¹ A number of works, e.g. [21,59], consider an (ε₁, ε₂)-stationary point defined as x such that ‖∇f(x)‖₂ ≤ ε₁ and λ_min(∇²f(x)) ≥ −ε₂, and the corresponding complexity O(max{ε…”
Section: 2 (mentioning; confidence: 99%)
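
For concreteness, the (ε₁, ε₂)-stationarity test quoted above can be written in a few lines of Python. This is a sketch under stated assumptions: the dense eigenvalue computation is for clarity only (a large-scale method would estimate λ_min via Hessian-vector products), and all names here are illustrative, not taken from the cited papers.

```python
# Check (eps1, eps2)-stationarity: small gradient norm and a Hessian
# whose minimum eigenvalue is not too negative. Illustrative only.
import numpy as np

def is_stationary(grad, hess, x, eps1, eps2):
    """True if ||grad f(x)||_2 <= eps1 and lambda_min(hess f(x)) >= -eps2."""
    g_small = np.linalg.norm(grad(x)) <= eps1
    lam_min = np.linalg.eigvalsh(hess(x)).min()
    return g_small and lam_min >= -eps2
```
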
“…Optimal worst-case iteration complexity is proven for both schemes. [59] consider problem (P) with K = ℝⁿ₊ and without linear constraints, and propose a Newton-Conjugate-Gradient algorithm, building on [61]. They go beyond the iteration complexity and estimate the number of gradient evaluations and/or Hessian-vector products.…”
(mentioning; confidence: 99%)