First-order methods of smooth convex optimization with inexact oracle

2013
DOI: 10.1007/s10107-013-0677-5

Cited by 413 publications (550 citation statements). References 12 publications.
“…Paper [38] considers inexact accelerated hybrid extragradient-proximal methods, but the framework is shown to include only the case of the exact accelerated forward-backward algorithm. In [22], convergence rates for an accelerated projected-subgradient method are proved. The case of an exact projection step is considered, and the authors assume the availability of an oracle that yields global lower and upper bounds on the function.…”
Section: Main Contributions (mentioning)
confidence: 99%
“…The notion of an 'inexact oracle' was introduced in [25], where complexity estimates for primal, dual, and fast gradient methods applied to smooth convex functions with such an inexact oracle are obtained. Building on this work, the same authors describe an intermediate gradient method that uses an inexact oracle [26]. That work is extended in [27] to handle the case of composite functions, where a stochastic inexact oracle is also introduced.…”
Section: Introduction (mentioning)
confidence: 99%
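As a rough, hedged illustration of the inexact first-order oracle idea referred to in the statement above, the sketch below runs plain gradient descent on a small convex quadratic while every gradient query is corrupted by a deterministic perturbation of norm at most delta. The quadratic objective, the perturbation model, and the 1/L step size are illustrative assumptions of this sketch, not the specific methods of [25]-[27]; the point is only that a bounded oracle error leaves the iterates in a neighbourhood of the minimizer whose size grows with delta.

```python
import numpy as np

# Minimal sketch (illustrative, not the algorithms of [25]-[27]):
# gradient descent on f(x) = 0.5 x^T A x - b^T x, queried through an
# inexact oracle that returns the true gradient plus a bounded,
# deterministic (worst-case) perturbation of norm <= delta.

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # positive definite -> smooth, strongly convex
b = rng.standard_normal(n)
L = np.linalg.eigvalsh(A).max()  # Lipschitz constant of the gradient
x_star = np.linalg.solve(A, b)   # exact minimizer, kept only for reference

def inexact_grad(x, delta):
    """Return grad f(x) corrupted by a perturbation of norm at most delta."""
    g = A @ x - b
    e = rng.standard_normal(n)
    return g + delta * e / np.linalg.norm(e)

def gradient_method(delta, iters=500):
    x = np.zeros(n)
    for _ in range(iters):
        x = x - (1.0 / L) * inexact_grad(x, delta)   # standard 1/L step
    return np.linalg.norm(x - x_star)

for delta in (0.0, 1e-3, 1e-1):
    print(f"delta = {delta:g}: distance to minimizer = {gradient_method(delta):.2e}")
```

With delta = 0 the distance keeps shrinking, while a nonzero delta stalls progress at a level roughly proportional to the perturbation size, which is the qualitative behaviour that inexact-oracle analyses quantify.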
“…Some of these methods consider the case when gradient information is inaccurate. The error in the gradient computation may simply be bounded in the worst case (deterministically), see, for example, [11,20], or the error may be random with the estimated gradient accurate in expectation, as in stochastic gradient algorithms; see, for example, [12,19,23,21]. These methods are typically applied in a convex setting and do not extend to nonconvex cases.…”
Section: Introduction (mentioning)
confidence: 99%
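To make the distinction drawn in that statement concrete, here is a small, self-contained sketch (a toy least-squares problem of my own construction, not an example from [11], [12], [19], [20], [21], or [23]) contrasting a gradient step corrupted by a fixed, deterministically bounded error with a single-sample stochastic gradient that is unbiased, i.e. accurate in expectation, run with a decaying step size.

```python
import numpy as np

# Toy least-squares problem f(x) = 0.5/m * ||Ax - b||^2 used to contrast
# (i) a deterministically biased gradient (bounded worst-case error) with
# (ii) a stochastic gradient that is unbiased, i.e. accurate in expectation.

rng = np.random.default_rng(1)
m, n = 200, 10
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)      # reference solution
L = np.linalg.eigvalsh(A.T @ A / m).max()            # smoothness constant

def run(mode, iters=5000):
    x = np.zeros(n)
    for k in range(iters):
        if mode == "biased":
            # full gradient plus a fixed deterministic offset (worst-case error)
            g = A.T @ (A @ x - b) / m + 0.05 * np.ones(n)
            x -= (1.0 / L) * g
        else:
            # single-sample stochastic gradient: unbiased estimate of the full gradient
            i = rng.integers(m)
            g = (A[i] @ x - b[i]) * A[i]
            x -= (1.0 / (L * (k + 1) ** 0.5)) * g     # decaying step size
    return np.linalg.norm(x - x_star)

print("biased (bounded worst-case error):", f"{run('biased'):.3e}")
print("stochastic (unbiased, decaying step):", f"{run('stochastic'):.3e}")
```

The decaying step size in the stochastic branch is the standard device that lets the unbiased noise average out, whereas no step-size schedule can remove a fixed deterministic bias.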
“…In the nonlinear optimization setting, the complexity of various unconstrained methods has been derived under exact derivative information [7,8,17], and also under inexact information, where the errors are bounded in a deterministic fashion [3,6,11,14,20]. In all cases of the deterministic inexact setting, traditional algorithms such as line-search, trust-region, or adaptive-regularization methods are applied with little modification and work in practice as well as in theory, while the error is assumed to be bounded in some decaying manner at each iteration.…”
Section: Introduction (mentioning)
confidence: 99%