2020
DOI: 10.1007/978-3-030-58657-7_11
Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem

Abstract: In this paper, we generalize the approach of Gasnikov et al., 2017, which allows one to solve (stochastic) convex optimization problems with an inexact gradient-free oracle, to the convex-concave saddle-point problem. The proposed approach performs at least as well as the best existing approaches, but for a special set-up (simplex-type constraints and closeness of the Lipschitz constants in the 1- and 2-norms) it reduces the required number of oracle calls (function evaluations) by a factor of n/log n. Our method uses a stochastic…
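The truncated abstract points to a stochastic method driven by a gradient-free (zeroth-order) oracle. As a rough, hypothetical illustration of the oracle idea only (not the authors' algorithm, which uses mirror steps and non-Euclidean setups), here is a minimal Euclidean descent-ascent sketch; the function names, step sizes, and toy objective are all assumptions:

```python
import numpy as np

def two_point_grad_estimate(f, z, tau, rng):
    """Two-point randomized gradient estimate of f at z: sample a direction
    e uniformly on the unit sphere and scale the symmetric finite difference.
    This is an unbiased estimate of the gradient of the ball-smoothed
    surrogate of f."""
    d = z.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return (d / (2.0 * tau)) * (f(z + tau * e) - f(z - tau * e)) * e

def gradient_free_saddle(f, x0, y0, steps=20000, eta=1e-2, tau=1e-3, seed=0):
    """Plain Euclidean descent-ascent on a convex-concave f(x, y) using only
    function evaluations: gradient-free descent in x, ascent in y; returns
    the averaged (ergodic) iterates."""
    rng = np.random.default_rng(seed)
    x, y = x0.astype(float), y0.astype(float)
    x_avg, y_avg = x.copy(), y.copy()
    for t in range(1, steps + 1):
        gx = two_point_grad_estimate(lambda u: f(u, y), x, tau, rng)
        gy = two_point_grad_estimate(lambda v: f(x, v), y, tau, rng)
        x = x - eta * gx                # descend in x
        y = y + eta * gy                # ascend in y
        x_avg += (x - x_avg) / (t + 1)  # running averages of the iterates
        y_avg += (y - y_avg) / (t + 1)
    return x_avg, y_avg

# Toy strongly-convex-strongly-concave saddle with unique saddle point at 0:
# f(x, y) = ||x||^2 / 2 + <x, y> - ||y||^2 / 2.
f = lambda x, y: 0.5 * x @ x + x @ y - 0.5 * y @ y
x_bar, y_bar = gradient_free_saddle(f, np.ones(2), np.ones(2))
```

With a small step size the averaged iterates settle into a noise-dominated neighborhood of the saddle point, which is the qualitative behavior one expects from a two-point zeroth-order oracle.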

Cited by 15 publications (12 citation statements)
References 15 publications
“…This problem can be solved in two different ways. The first is the "margins inward" approach [8]. The second is the "continuation" of f to R^n that preserves convexity and Lipschitz continuity [44]:…”
Section: Gradient-free Methods (mentioning, confidence: 99%)
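The quoted snippet cuts off at the colon. One standard construction of such a continuation, given here only as a plausible reconstruction (it may differ from the one in [44]), is the McShane-type Lipschitz extension:

```latex
% For f convex and M-Lipschitz on a convex set Q, extend to all of R^n by
\hat{f}(x) = \min_{y \in Q} \bigl\{ f(y) + M \lVert x - y \rVert \bigr\}.
% Then \hat{f} = f on Q and \hat{f} is M-Lipschitz on R^n; convexity is
% preserved because \hat{f} is the infimal convolution of the convex
% functions f + \iota_Q and M \lVert \cdot \rVert.
```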
“…Assume that we have access to the first-order oracle for g, i.e., the gradient ∇g(x) is available, and to the biased stochastic zeroth-order oracle for f (see also [40,13]) that, for a given point x, returns a noisy value f…”
Section: Convex Case (mentioning, confidence: 99%)
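The sentence is truncated after "noisy value f". A common formalization of a biased stochastic zeroth-order oracle, stated here as an assumption about what the truncated text describes:

```latex
% At a query point x the oracle returns
\varphi(x, \xi) = f(x, \xi) + \delta(x),
\qquad \mathbb{E}_{\xi} f(x, \xi) = f(x),
\qquad \lvert \delta(x) \rvert \le \delta,
% i.e. an unbiased stochastic value of f contaminated by a bounded,
% possibly adversarial, bias \delta(x).
```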
“…Therefore, instead of solving (123) directly, one can focus on the problem min_{x∈Q} Ψ(x) = F(x) + g(x) (129) with small enough r. As mentioned earlier, this approach is universal. In particular, the analysis of gradient-free methods for non-smooth saddle-point problems can be carried out in a similar way [13]. Now we give the main facts from [11] for the zoSA algorithm itself.…”
Section: Convex Case (mentioning, confidence: 99%)
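The role of the smoothing radius r in (129) is presumably the standard one for randomized smoothing; the following well-known facts (our gloss, not a quote from the citing paper) explain why a small r suffices:

```latex
% With e uniform on the Euclidean unit ball B_2^n, define the smoothed function
F(x) = \mathbb{E}_{e \sim U(B_2^n)} \, f(x + r e).
% If f is convex and M-Lipschitz, then F is convex, differentiable, and
% f(x) \le F(x) \le f(x) + M r for all x, so minimizing
% \Psi(x) = F(x) + g(x) over Q solves the original non-smooth problem up to
% an additive O(M r) error.
```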
“…Related Work and Contribution. Zeroth-order methods in the non-smooth setup were developed in a wide range of works (Polyak, 1987; Spall, 2003; Conn et al., 2009; Duchi et al., 2015; Shamir, 2017; Nesterov and Spokoiny, 2017; Gasnikov et al., 2017; Bayandina, 2017; Beznosikov et al., 2020; Gasnikov, 2022). In particular, (Shamir, 2017) provided an optimal algorithm, improving on (Duchi et al., 2015), for non-smooth but Lipschitz-continuous stochastic convex optimization problems.…”
Section: Introduction (mentioning, confidence: 99%)