2021
DOI: 10.1080/10556788.2021.1895152
Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning

Cited by 67 publications (95 citation statements)
References 15 publications
“…The first result above (with nonsmooth f and finite-sum g) appears to be new and our sample complexity improves over the best known in the literature [45]. The second result is among the first in the literature to derive improved sample complexity for nonsmooth f and with g being an expectation (see also [52]).…”
Section: Contributions and Outline
confidence: 66%
“…Algorithms for solving stochastic composite optimization problems of the forms (2) and (3) have been studied recently in [6,28,34,45,53,54,57,58,60]. Since these are all stochastic or randomized algorithms, a common measure of performance is their sample complexity, i.e., the total number of samples of the component mappings g_i or g_ξ and their Jacobians required to output some point x̄ such that E‖G(x̄)‖² ≤ ε, where ε is a predefined precision and G(x̄) is the composite gradient mapping at x̄ (for a precise definition, see (11) in Sect.…”
Section: Related Work
confidence: 99%
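The stopping criterion quoted above, E‖G(x̄)‖² ≤ ε, uses the composite gradient mapping as a stationarity measure. A minimal sketch of this idea, assuming the standard prox-gradient mapping G(x) = (x − prox_{γr}(x − γ∇f(x)))/γ for an illustrative objective f(x) + λ‖x‖₁ (the concrete f, r, and step size γ here are hypothetical choices, not the ones from the cited papers):

```python
import numpy as np

def soft_threshold(z, t):
    """Prox operator of t*||.||_1 (an example nonsmooth term r)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def gradient_mapping(x, grad_f, gamma, lam):
    """Composite gradient mapping G(x) for f(x) + lam*||x||_1."""
    x_plus = soft_threshold(x - gamma * grad_f(x), gamma * lam)
    return (x - x_plus) / gamma

# Illustrative smooth part: f(x) = 0.5*||x - b||^2, grad_f(x) = x - b
b = np.array([1.0, -2.0, 0.01])
grad_f = lambda x: x - b

gamma, lam, eps = 0.1, 0.05, 1e-10
x = np.zeros(3)
for _ in range(10000):
    g = gradient_mapping(x, grad_f, gamma, lam)
    if np.dot(g, g) <= eps:      # the ||G(x)||^2 <= eps criterion
        break
    x = x - gamma * g            # equivalent to one prox-gradient step
print(x)
```

In the stochastic setting discussed in the snippet, ∇f would be replaced by a mini-batch estimate, and the criterion is applied in expectation, E‖G(x̄)‖² ≤ ε; the sample complexity then counts how many samples of g_i (and Jacobians) are needed to reach that accuracy.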