2022
DOI: 10.48550/arxiv.2206.01095
Preprint
Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise

Abstract: Stochastic first-order methods such as Stochastic Extragradient (SEG) or Stochastic Gradient Descent-Ascent (SGDA) for solving smooth minimax problems and, more generally, variational inequality problems (VIP) have been gaining a lot of attention in recent years due to the growing popularity of adversarial formulations in machine learning. However, while high-probability convergence bounds are known to reflect the actual behavior of stochastic methods more accurately, most convergence results are provided in e…

Cited by 1 publication (2 citation statements) · References 23 publications
“…However, the result is derived under the restrictive light-tails assumption. This last limitation was recently addressed in [49], where the authors derive the high-probability rates for the considered problem under just the bounded variance assumption. In particular, they consider the clipped-SEG for problems with 𝒵 = ℝ^d:…”
Section: High-probability Convergence
confidence: 99%
“…where clip(x, λ) = min{1, λ/‖x‖₂}x is the clipping operator, a popular tool in deep learning [46,103]. In the setup when F is monotone and L-Lipschitz and Assumption 3 holds, in [49] it is proved that after k iterations of clipped-SEG with probability at least 1 − β (for any β ∈ (0, 1)) the following inequality holds:…”
Section: High-probability Convergence
confidence: 99%
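The clipping operator quoted above is straightforward to state in code. Below is a minimal NumPy sketch of clip(x, λ) together with one clipped-SEG extragradient step; the step size, the Gaussian noise model, and the toy bilinear operator are illustrative assumptions for the sketch, not the exact scheme or parameters analyzed in [49].

```python
import numpy as np

def clip(x, lam):
    """Clipping operator from the quote: clip(x, λ) = min{1, λ/‖x‖₂} · x."""
    norm = np.linalg.norm(x)
    return x if norm <= lam else (lam / norm) * x

def clipped_seg_step(z, F, gamma, lam, rng):
    """One extragradient (SEG) step using clipped stochastic estimates of F.

    F is the deterministic operator; additive Gaussian noise stands in for
    the stochastic oracle (an assumption of this sketch).
    """
    g1 = clip(F(z) + rng.normal(size=z.shape), lam)       # extrapolation
    z_half = z - gamma * g1
    g2 = clip(F(z_half) + rng.normal(size=z.shape), lam)  # update
    return z - gamma * g2

# Toy monotone example: F(z) = A z with a rotation matrix A, the standard
# bilinear min-max operator; its unique solution is z* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
rng = np.random.default_rng(0)
z = np.array([5.0, 5.0])
for _ in range(2000):
    z = clipped_seg_step(z, F, gamma=0.05, lam=10.0, rng=rng)
```

On this bilinear problem plain SGDA diverges, while the extragradient correction contracts toward the solution; clipping caps the influence of any single noisy estimate, which is what enables the high-probability bounds under only a bounded-variance (heavy-tailed) noise assumption.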