2023
DOI: 10.1007/s10107-022-01919-z

A unified single-loop alternating gradient projection algorithm for nonconvex–concave and convex–nonconcave minimax problems

Abstract: Minimax problems have recently attracted a lot of research interest. A few efforts have been made to solve decentralized nonconvex strongly-concave (NCSC) minimax-structured optimization; however, all of them focus on smooth problems with at most a constraint on the maximization variable. In this paper, we make the first attempt at solving composite NCSC minimax problems that can have convex nonsmooth terms on both the minimization and maximization variables. Our algorithm is designed based on a novel reformulati…


Cited by 16 publications (6 citation statements); references 46 publications.

“…It is demonstrated in Appendix E that h_I is µ-weakly convex in many cases. Following (Xie, Koyejo, and Gupta 2019; Davis and Drusvyatskiy 2019), the definition and first-order condition of a µ-weakly convex function are given as follows.…”
Section: Refining Hyper-polyhedral Approximation
confidence: 99%
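
For reference, the definition this excerpt points to (following Davis and Drusvyatskiy 2019) can be stated as below. This is a sketch of the standard formulation, not the citing paper's exact text:

```latex
% $h$ is $\mu$-weakly convex iff adding a quadratic restores convexity:
% $x \mapsto h(x) + \tfrac{\mu}{2}\|x\|^2$ is convex.
% Equivalent first-order condition, for every subgradient $g \in \partial h(x)$:
h(y) \;\ge\; h(x) + \langle g,\, y - x \rangle - \frac{\mu}{2}\,\|y - x\|^2
\qquad \text{for all } x,\, y .
```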
“…The minimax problem has broad applications in different areas, including game theory [40], training generative adversarial networks (GANs) [41,42], adversarial and robust machine learning [43,44], resource allocation over networks [45], and distributed optimization [46,47], to name just a few. For solving this problem, (alternating) gradient descent-ascent is the most popular method in the existing literature; see for example [48–54] and the references therein. This method iteratively updates the estimates x_k, y_k of the desired solution as…”
Section: Minimax Optimization
confidence: 99%
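
The quoted update rule is cut off; below is a minimal sketch of the alternating gradient descent-ascent step it describes, assuming a smooth objective f(x, y) with caller-supplied partial gradients. All names and step sizes here are illustrative, not the cited papers' notation:

```python
import numpy as np

def alternating_gda(grad_x, grad_y, x0, y0, tau=0.01, sigma=0.01, iters=1000):
    """Alternating gradient descent-ascent for min_x max_y f(x, y).

    grad_x(x, y) and grad_y(x, y) return the partial gradients of f.
    The y-update uses the freshly updated x, which is what makes the
    scheme "alternating" rather than simultaneous.
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(iters):
        x = x - tau * grad_x(x, y)    # descent step on the minimization variable
        y = y + sigma * grad_y(x, y)  # ascent step on the maximization variable
    return x, y

# Example: f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, whose unique saddle point is (0, 0).
x_star, y_star = alternating_gda(lambda x, y: x + y, lambda x, y: x - y, 1.0, 1.0)
```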
“…[Table excerpt]
Reference                      | Assumption on f(x,·) | Complexity | Batch size | ALR
ZO-VRAGDA (Xu et al. [2022])   | PL                   | O(ε⁻³)     | O(ε⁻²)     | ×
Smoothed-AGDA (Yang et al. [2022]) | PL…”
Section: Algorithm
confidence: 99%
“…More recently, Chen et al. [2022] proposed a class of faster stochastic GDA methods (i.e., SPIDER-GDA and AccSPIDER-GDA) based on a variance-reduction technique for finite-sum minimax optimization under the PL condition, covering PL-PL and nonconvex-PL minimax problems. Meanwhile, Xu et al. [2022] proposed a class of variance-reduced zeroth-order methods for stochastic nonconvex-PL minimax problems.…”
Section: Algorithm
confidence: 99%
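
Both excerpts lean on the Polyak–Łojasiewicz (PL) condition on the maximization variable. For context, its standard form in the nonconvex-PL minimax setting is sketched below; this is the well-known definition, not text from the cited papers:

```latex
% PL condition with modulus $\mu > 0$ on $f(x, \cdot)$: the squared gradient
% norm dominates the suboptimality gap of the inner maximization.
\frac{1}{2}\,\big\|\nabla_y f(x, y)\big\|^2 \;\ge\; \mu \Big( \max_{y'} f(x, y') - f(x, y) \Big)
\qquad \text{for all } y .
```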