2021
DOI: 10.48550/arxiv.2110.08477
Preprint

FedMM: Saddle Point Optimization for Federated Adversarial Domain Adaptation

Abstract: Federated adversarial domain adaptation is a unique distributed minimax training task due to the prevalence of label imbalance among clients, with each client seeing only a subset of the label classes required to train a global model. To tackle this problem, we propose a distributed minimax optimizer, referred to as FedMM, designed specifically for the federated adversarial domain adaptation problem. It works well even in the extreme case where each client has different label classes and some clients only have …
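The truncated abstract frames the task as a distributed saddle point problem but does not spell out FedMM's update rule, so the following is only a minimal sketch of a naive federated gradient descent-ascent baseline on hypothetical quadratic client objectives; every name and objective in it is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

# Toy setup: n clients, each holding a hypothetical saddle objective
#   f_i(theta, w) = 0.5||theta - a_i||^2 + theta.w - 0.5||w - b_i||^2,
# so the global problem is min_theta max_w (1/n) * sum_i f_i(theta, w).
rng = np.random.default_rng(0)
n_clients, dim = 4, 5
rounds, local_steps, lr = 50, 10, 0.05
a = rng.normal(size=(n_clients, dim))
b = rng.normal(size=(n_clients, dim))

theta = np.zeros(dim)  # minimization variable (e.g., feature extractor)
w = np.zeros(dim)      # maximization variable (e.g., domain discriminator)

for _ in range(rounds):
    local_thetas, local_ws = [], []
    for i in range(n_clients):
        th, wi = theta.copy(), w.copy()   # client starts from the global iterates
        for _ in range(local_steps):
            g_theta = (th - a[i]) + wi    # gradient of f_i w.r.t. theta
            g_w = th - (wi - b[i])        # gradient of f_i w.r.t. w
            th -= lr * g_theta            # descent on the min variable
            wi += lr * g_w                # ascent on the max variable
        local_thetas.append(th)
        local_ws.append(wi)
    # Server step: plain averaging of both variables, FedAvg-style.
    theta = np.mean(local_thetas, axis=0)
    w = np.mean(local_ws, axis=0)
```

Under heterogeneous clients, this kind of naive local-update-then-average loop is known to drift between communication rounds; the label-imbalanced regime described in the abstract is exactly where a purpose-built federated minimax method is meant to improve on it.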

Cited by 2 publications (4 citation statements)
References 22 publications
“…Addressing distribution shifts is a key problem in FL; most existing works focus on label distribution skew, through techniques such as training robust global models (Li et al., 2018c, 2021) or variance reduction methods (Karimireddy et al., 2020a,b). As another line of research, studies of feature distribution skew in FL mostly focus on domain generalization, training models that can generalize to unseen feature distributions (Peng et al., 2019; Wang et al., 2022a; Shen et al., 2021; Sun et al., 2022; Gan et al., 2021). All of the above methods aim to train a single robust model.…”
Section: Related Work
confidence: 99%
“…Many studies focus on adapting DG algorithms to FL scenarios: for example, combining FL with Distributionally Robust Optimization (DRO) to obtain robust models that perform well on all clients (Mohri et al., 2019; Deng et al., 2021), or combining FL with techniques that learn domain-invariant features (Peng et al., 2019; Wang et al., 2022a; Shen et al., 2021; Sun et al., 2022; Gan et al., 2021) to improve the generalization ability of trained models. All of the above methods aim to train a single robust feature extractor that generalizes well to unseen distributions.…”
Section: A Proof of EM Steps
confidence: 99%
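As context for the DRO combination cited in this excerpt, the agnostic federated learning objective of Mohri et al. (2019) is usually written as the minimax program below (stated from general knowledge of that line of work, with F_i denoting client i's expected loss and Delta_n the probability simplex):

```latex
\min_{w}\;\max_{\lambda \in \Delta_n}\; \sum_{i=1}^{n} \lambda_i F_i(w),
\qquad
\Delta_n = \Big\{ \lambda \in \mathbb{R}^n_{\ge 0} \;:\; \textstyle\sum_{i=1}^{n} \lambda_i = 1 \Big\}.
```

The inner maximization adversarially reweights clients, so the minimizing model must perform well on the worst-case mixture of client distributions rather than merely on their average.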
“…Some recent works have attempted to achieve this goal for convex-concave (Deng et al., 2020; Hou et al., 2021; Liao et al., 2021), nonconvex-concave (Deng et al., 2020), and nonconvex-nonconcave problems (Deng & Mahdavi, 2021; Reisizadeh et al., 2020; Guo et al., 2020; Yuan et al., 2021). However, in the context of stochastic smooth nonconvex minimax problems, the convergence guarantees of existing distributed/federated approaches are, to the best of our knowledge, either asymptotic (Shen et al., 2021) or suboptimal (Deng & Mahdavi, 2021). In particular, they do not reduce to the existing baseline results for centralized minimax problems (n = 1).…”
Section: Introduction
confidence: 99%
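For reference, the problem class this excerpt discusses, distributed stochastic smooth nonconvex minimax optimization, is standardly formalized as follows (a textbook-style formulation, not a quotation from the paper):

```latex
\min_{x \in \mathbb{R}^{d_1}}\;\max_{y \in \mathbb{R}^{d_2}}\;
f(x, y) := \frac{1}{n} \sum_{i=1}^{n} f_i(x, y),
\qquad
f_i(x, y) = \mathbb{E}_{\xi \sim \mathcal{D}_i}\!\left[ F_i(x, y; \xi) \right],
```

where client i samples from its local data distribution D_i and each f_i is smooth but possibly nonconvex in x. Setting n = 1 recovers the centralized minimax problem, which is why the excerpt asks that federated convergence guarantees reduce to the centralized baselines in that case.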