2022
DOI: 10.48550/arxiv.2208.05925
Preprint

Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization

Cited by 1 publication (published 2023), with 1 citation statement. References 0 publications.

“…For smooth convex-concave saddle point problems, an optimal algorithm with ‖∇_{x,y} f(x_k, y_k)‖² proportional to k⁻¹ was proposed in [122] (see also [30] and [71] for monotone inclusion). For the stochastic case, see [20, 27, 74].…”
Section: Convergence in Terms of the Gradient Norm for SPP
Confidence: 99%
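
For context, a minimal LaTeX sketch of the convergence criterion the quoted statement refers to; the saddle point problem form and the iterate notation (x_k, y_k) are assumptions based on standard usage, not taken from this page:

% Smooth convex-concave saddle point problem (assumed form):
\[
  \min_{x \in \mathbb{R}^n} \; \max_{y \in \mathbb{R}^m} \; f(x, y)
\]
% Convergence is measured by the squared norm of the full gradient
% at iterate k; the quoted optimal rate is proportional to k^{-1}:
\[
  \big\| \nabla_{x,y} f(x_k, y_k) \big\|^2 = O\!\left(k^{-1}\right)
\]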