2016
DOI: 10.1134/s0005117916110114
Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex

Cited by 25 publications (16 citation statements)
References 8 publications
“…We consider stochastic mirror descent (MD) with inexact oracle [34,44,54]. For a prox-function d(x) and the corresponding Bregman divergence B_d(x, x_1), the proximal mirror descent step is…”
Section: The SA Approach: Stochastic Mirror Descent
confidence: 99%
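The quoted step is truncated at the source. For context, a generic form of the Bregman proximal mirror descent step, together with its entropic instance on the simplex, is sketched below. The step size \gamma_k, the gradient estimate g_k, and the simplex \Delta_n are notation introduced here for illustration; this is the standard textbook construction, not necessarily the exact formulation used by the citing paper.

```latex
% Generic Bregman proximal mirror descent step (illustrative notation:
% step size \gamma_k, stochastic gradient estimate g_k, simplex \Delta_n).
x_{k+1} = \arg\min_{x \in \Delta_n}
          \Bigl\{ \gamma_k \langle g_k, x \rangle + B_d(x, x_k) \Bigr\},
\qquad
B_d(x, y) = d(x) - d(y) - \langle \nabla d(y),\, x - y \rangle .

% With the entropic prox-function d(x) = \sum_{i=1}^{n} x_i \ln x_i the step
% reduces to the multiplicative (exponentiated-gradient) update:
x_{k+1}^{(i)} =
  \frac{x_k^{(i)} \exp\bigl(-\gamma_k g_k^{(i)}\bigr)}
       {\sum_{j=1}^{n} x_k^{(j)} \exp\bigl(-\gamma_k g_k^{(j)}\bigr)},
\qquad i = 1, \dots, n .
```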
“…We use the technique developed in [7,8] for stochastic gradient-free nonsmooth convex optimization problems (a gradient-free version of mirror descent [2]) to propose a stochastic gradient-free version of the saddle-point variant of mirror descent [2] for non-smooth convex-concave saddle-point problems.…”
Section: Introduction
confidence: 99%
“…Oracle complexity, O(·), p = 1, stochastic noise, MD [Duchi et al. (2015); Gasnikov et al. (2016a, 2016b)]…” (fragment of an oracle-complexity table row)
Section: Assumptions
confidence: 99%
“…Derivative-free or zeroth-order optimization (Rosenbrock, 1960; Brent, 1973; Spall, 2003) is one of the oldest areas in optimization, and it constantly attracts the attention of the learning community, mostly in connection with online learning in the bandit setup (Bubeck and Cesa-Bianchi, 2012). We study stochastic derivative-free optimization problems in the two-point feedback setting, considered by Agarwal et al. (2010), Duchi et al. (2015), and Shamir (2017) in the learning community and by Nesterov and Spokoiny (2017), Stich et al. (2011), Ghadimi and Lan (2013), Ghadimi et al. (2016), and Gasnikov et al. (2016a) in the optimization community. The two-point setup makes it possible to prove complexity bounds that typically coincide with those for gradient-based algorithms up to a small-degree polynomial in n, where n is the dimension of the decision variable.…”
Section: Introduction
confidence: 99%
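Combining the two quoted ingredients (a two-point zeroth-order gradient estimate and mirror descent on the simplex) gives a short sketch of the kind of method these statements refer to. The estimator scaling, the smoothing parameter tau, the step-size schedule, and the toy quadratic objective below are choices made for this example, not taken from the cited papers.

```python
import numpy as np

def two_point_gradient_estimate(f, x, tau, rng):
    """Symmetric two-point zeroth-order gradient estimate:
    g = (n / (2*tau)) * (f(x + tau*e) - f(x - tau*e)) * e,
    with e drawn uniformly from the unit sphere (a standard construction;
    the exact scaling varies across the cited papers)."""
    n = x.size
    e = rng.standard_normal(n)
    e /= np.linalg.norm(e)
    return (n / (2.0 * tau)) * (f(x + tau * e) - f(x - tau * e)) * e

def entropic_mirror_descent_step(x, g, gamma):
    """Mirror descent step on the simplex with the entropic prox-function,
    i.e. the multiplicative (exponentiated-gradient) update."""
    w = x * np.exp(-gamma * g)
    return w / w.sum()

# Illustrative usage on a toy convex objective over the probability simplex.
rng = np.random.default_rng(0)
target = np.array([0.5, 0.2, 0.1, 0.1, 0.1])      # hypothetical minimizer
f = lambda x: 0.5 * np.sum((x - target) ** 2)     # hypothetical test function

x = np.full(5, 0.2)                               # start at the barycenter
for k in range(1, 201):
    g = two_point_gradient_estimate(f, x, tau=1e-3, rng=rng)
    x = entropic_mirror_descent_step(x, g, gamma=0.5 / np.sqrt(k))
print("final point:", np.round(x, 3))
```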