2022
DOI: 10.48550/arxiv.2206.01132
Preprint

A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning

Abstract: In this paper, we study a large-scale multi-agent minimax optimization problem, which models many interesting applications in statistical learning and game theory, including Generative Adversarial Networks (GANs). The overall objective is a sum of agents' private local objective functions. We first analyze an important special case, empirical minimax problem, where the overall objective approximates a true population minimax risk by statistical samples. We provide generalization bounds for learning with this o…
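A plausible formalization of the setting the abstract describes (a sketch under our own assumptions; the symbols n, f_i, x, and y are illustrative and not taken from the paper):

\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} \; f(x, y) = \frac{1}{n} \sum_{i=1}^{n} f_i(x, y)

where f_i is agent i's private local objective. In the empirical minimax special case, each f_i is itself an average over that agent's statistical samples, so the overall objective approximates the population minimax risk.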

Cited by 2 publications (3 citation statements)
References: 19 publications
“…The stochastic gradient descent algorithm is widely used as an incremental gradient algorithm that offers inexpensive iterations by approximating the gradient with a minibatch of observations. Through the past decade, it has been used in a wide variety of problems with different variations, like network optimization, reinforcement learning (Liu et al., 2021b;a), federated learning (Sun and Wei, 2022) and recommendation systems. However, this method, at the same time, incurs gradient error when approximating the gradient.…”
Section: Related Work
Mentioning confidence: 99%
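The excerpt above refers to approximating the gradient with a minibatch of observations. The following minimal sketch (our own illustration, not code from either paper) applies that approximation to a toy minimax objective via simultaneous gradient descent-ascent; the batch average stands in for the full gradient, and its deviation from the full-data gradient is the "gradient error" the authors mention.

import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x, y) = (1/N) * sum_j [ 0.5*||x||^2 + x^T A_j y - 0.5*||y||^2 ]
N, d = 1000, 5
A = 0.1 * rng.normal(size=(N, d, d))   # per-sample coupling matrices

def minibatch_grads(x, y, batch_idx):
    # Approximate the full gradients of f by averaging over a minibatch;
    # the gap to the full-data gradient is the gradient error noted in the excerpt.
    A_b = A[batch_idx].mean(axis=0)
    grad_x = x + A_b @ y
    grad_y = A_b.T @ x - y
    return grad_x, grad_y

x, y = np.ones(d), np.ones(d)
lr, batch_size = 0.1, 32
for step in range(500):
    idx = rng.choice(N, size=batch_size, replace=False)
    gx, gy = minibatch_grads(x, y, idx)
    x = x - lr * gx    # descent step on the min variable
    y = y + lr * gy    # ascent step on the max variable

print("distance of (x, y) from the saddle point at the origin:",
      np.linalg.norm(np.concatenate([x, y])))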
“…However, the computation of their global correction requires full aggregation and thus does not apply to the cross-device setting. Besides, the analysis in (Sun and Wei 2022) focuses on strongly-convex-strongly-concave minimax problems, while we consider a class of nonconvex-nonconcave problems, which are more challenging.…”
Section: Problem Setting and Algorithm
Mentioning confidence: 99%
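For reference on the distinction drawn in this excerpt: a function f(x, y) is μ-strongly-convex-strongly-concave when, for some μ > 0,

x ↦ f(x, y) - (μ/2)||x||^2 is convex for every y, and y ↦ f(x, y) + (μ/2)||y||^2 is concave for every x.

Nonconvex-nonconcave problems satisfy neither condition, which is why they are generally harder to analyze.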
“…In this setting, the clients are numerous (up to 10^10 (Kairouz et al. 2019)) unreliable mobile/IoT devices with relatively slow network connections. Actually, most existing federated minimax algorithms are designed for another setting known as the cross-silo setting (Reisizadeh et al. 2020; Rasouli, Sun, and Rajagopal 2020; Beznosikov, Samokhin, and Gasnikov 2020; Hou et al. 2021; Guo et al. 2020; Yuan et al. 2021; Deng and Mahdavi 2021; Sharma et al. 2022; Sun and Wei 2022), where the clients are a relatively small number of organizations or data centers with reliable network connections (Kairouz et al. 2019). Compared with the cross-silo setting, it is generally more challenging to solve a problem in the cross-device setting due to the low client availability and relatively slow network connections (Kairouz et al. 2019).…”
Section: Introduction
Mentioning confidence: 99%