2019
DOI: 10.48550/arxiv.1902.09001
Preprint

Gradient Methods for Problems with Inexact Model of the Objective

Cited by 8 publications (16 citation statements)
References 0 publications
“…Note that in Definition 2.1 we allow L to depend on δ. Definition 2.1 is a generalization of (δ, L)-model from [29,31,65], where µ = 0 and m = 0. Further, we denote (δ, L, 0, 0, V )-model as (δ, L)-model.…”
Section: Inexact Model in Minimization Problems: Definitions and Examples
mentioning
confidence: 99%
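For readers less familiar with the notation in the quote above, the following is a minimal sketch (paraphrased, not quoted from the paper) of a (δ, L)-model of f at a point y with respect to a Bregman divergence V, i.e. the (δ, L, 0, 0, V) case; the precise formulation of Definition 2.1, in particular how μ and m enter, is assumed here rather than reproduced:

% Assumed form of the (delta, L)-model condition: psi_delta(., y) is convex,
% psi_delta(y, y) = 0, and for every x
\[
  f(y) + \psi_\delta(x, y) \;\le\; f(x) \;\le\; f(y) + \psi_\delta(x, y) + L\, V(x, y) + \delta .
\]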
“…One of the goals of this paper is to describe and analyze first-order optimization methods which use a very general inexact model of the objective function, the idea being to replace the linear part in (3) by a general function ψ δ (x, x k ) and the squared norm by general Bregman divergence. The resulting model includes as a particular case inexact oracle model and relative smoothness framework, and allows to obtain many optimization methods as a particular case, including conditional gradient method [26], Bregman proximal gradient method [11] and its application to optimal transport [69] and Wasserstein barycenter [65] problems, general Catalyst acceleration technique [44], (accelerated) composite gradient methods [7,52], (accelerated) level methods [42,50]. First attempts to propose this generalization were made in [29,65] for nonaccelerated methods and in [31] for accelerated methods, yet without relative smoothness paradigm.…”
Section: Introduction
mentioning
confidence: 99%
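To make the construction described in the quote concrete, here is a minimal runnable sketch (not the authors' code) of one step of a gradient method built from such a model, x_{k+1} = argmin_x { psi_delta(x, x_k) + L * V(x, x_k) }, specialized to the exact linear model psi(x, x_k) = <grad f(x_k), x - x_k> and the Euclidean divergence V(x, x_k) = 0.5*||x - x_k||^2, for which the step has a closed form:

import numpy as np

def model_gradient_step(grad_f, x_k, L):
    # With psi(x, x_k) = <grad f(x_k), x - x_k> and V(x, x_k) = 0.5*||x - x_k||^2,
    # argmin_x { psi(x, x_k) + L * V(x, x_k) } equals x_k - grad_f(x_k) / L.
    return x_k - grad_f(x_k) / L

# Toy usage (hypothetical problem, not from the paper):
# f(x) = 0.5*||A x - b||^2 is L-smooth with L = ||A||_2^2 (squared spectral norm).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2

x = np.zeros(5)
for _ in range(200):
    x = model_gradient_step(grad_f, x, L)
print(np.linalg.norm(grad_f(x)))  # gradient norm should be near zero

Choosing a non-Euclidean V (e.g. the KL divergence on the simplex) in the same step recovers Bregman proximal gradient methods of the kind the quote refers to.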
“…, where ε = O(ǫ 2 /(mn 3 )) is a required precision for inner problem Stonyakin et al [2019]. The first estimate directly follows from IBP complexity (see Theorem 1) and the second estimate (analogous to Franklin and Lorenz [1989] for Sinkhorn's algorithm) can be obtained using strong convexity of f (u, v).…”
mentioning
confidence: 99%
“…Numerical experiments and more accurate theoretical analysis can be found in the followup paper Stonyakin et al [2019].…”
mentioning
confidence: 99%