2021
DOI: 10.48550/arxiv.2106.02326
Preprint

Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems

Abstract: Modern minimax problems, such as generative adversarial networks and adversarial training, are often posed in a nonconvex-nonconcave setting, and developing efficient methods for such settings is of interest. Recently, two variants of the extragradient (EG) method have been studied in that direction. First, a two-time-scale variant of EG, named EG+, was proposed under a smooth structured nonconvex-nonconcave setting, with a slow O(1/k) rate on the squared gradient norm, where k denotes the number of iterations. Seco…
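As background for the abstract's description, the snippet below is a minimal sketch of a two-time-scale extragradient step in Python. The function name extragradient_plus, the step sizes, and the toy bilinear problem are illustrative assumptions, not the paper's exact EG+ algorithm or parameters.

```python
import numpy as np

def extragradient_plus(F, z0, step=0.1, alpha=0.5, num_iters=1000):
    """Two-time-scale extragradient (EG+)-style iteration: minimal sketch.

    F     : saddle operator, F(z) = (grad_x f(x, y), -grad_y f(x, y))
    z0    : starting point (numpy array stacking x and y)
    step  : extrapolation step size (assumed smaller than 1/L for L-smooth F)
    alpha : fraction of the step used in the update; alpha = 1 recovers the
            classical extragradient method, alpha < 1 gives the two-time-scale
            behaviour described in the abstract (illustrative choice)
    """
    z = z0.copy()
    for _ in range(num_iters):
        z_half = z - step * F(z)           # extrapolation (gradient) step
        z = z - alpha * step * F(z_half)   # shorter update step
    return z

# Toy usage on the bilinear saddle problem f(x, y) = x * y,
# whose saddle operator is F(x, y) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
z_out = extragradient_plus(F, np.array([1.0, 1.0]))
print(np.linalg.norm(F(z_out)))  # gradient norm shrinks toward 0
```

The squared gradient norm printed at the end is the quantity whose O(1/k) decay rate the abstract refers to.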

Cited by 2 publications (6 citation statements) | References 22 publications
“…The following proposition establishes the equivalence of our assumptions and the negative comonotone setting recently considered by [14].…”
Section: Preliminary (supporting)
confidence: 55%
“…al [6] furnished the proof for the ergodic convergence of a special class of our damped EGM, that is, one with a damping parameter of λ = 1/2, when applied to nonconvex-nonconcave minimax problems. Lee and Kim [14] furnished the linear convergence of the so-called two-time-scale anchored extra-gradient method (FEG) to a stationary point (Definition 2.2) of an objective function with a negatively ρ-comonotone oracle. This is also known in the literature as |ρ|-cohypomonotonicity (see [2], Remark 2.5 (ii)).…”
Section: Related Literature (mentioning)
confidence: 99%
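For context on the anchored scheme mentioned in the statement above, the following is a minimal sketch of an anchored extragradient-style step, in which both the half step and the full step are pulled toward the initial point z0 with a vanishing weight beta_k = 1/(k + 2). The step size and anchoring weight are illustrative assumptions and do not reproduce the exact FEG coefficients of Lee and Kim [14], which also involve the comonotonicity parameter ρ.

```python
import numpy as np

def anchored_extragradient(F, z0, step=0.1, num_iters=1000):
    """Anchored extragradient-style iteration: minimal sketch.

    Both the extrapolation and the update are anchored toward the initial
    point z0 with weight beta_k = 1/(k + 2); these coefficients are
    illustrative, not the exact FEG choices from [14].
    """
    z = z0.copy()
    for k in range(num_iters):
        beta = 1.0 / (k + 2)
        z_half = z + beta * (z0 - z) - (1 - beta) * step * F(z)       # anchored extrapolation
        z = z + beta * (z0 - z) - (1 - beta) * step * F(z_half)       # anchored update
    return z
```

The anchoring term beta * (z0 - z) is what distinguishes this family from the plain extragradient step and is the ingredient the cited statement associates with fast convergence to a stationary point.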
“…However, modern nonconvex-nonconcave saddle-point optimization problems such as those appearing in deep learning go beyond monotone VIs, and hence the existing results fail to apply in non-monotone settings. This motivates a surge of interest in generalized VIs and their associated algorithms [12,13,16,23,25,26,45]. We restrict our attention to the line of research that relaxes the Lipschitz continuity and monotonicity assumptions.…”
Section: Introduction (mentioning)
confidence: 99%