Salvaging Federated Learning by Local Adaptation
2020 · Preprint
DOI: 10.48550/arxiv.2002.04758

Cited by 64 publications (118 citation statements)
References 0 publications
Citation types: 3 supporting, 115 mentioning, 0 contrasting
Citation statements, ordered by relevance:
“…Heterogeneous neural architecture is one way to personalize the model in FL. For personalization, the primal-dual framework [29], clustering [26], fine-tuning with transfer learning [37], meta-learning [4], and regularization-based methods [7,16] are among the popular methods explored in the FL literature. Although these techniques achieve improved personalized performance, all of them use a pre-defined architecture for each client.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
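
Of the personalization methods listed above, fine-tuning with transfer learning is the one most directly tied to this paper's local-adaptation theme. Below is a minimal sketch of client-side fine-tuning of a received global model; it assumes PyTorch, and the names local_finetune, local_loader, and the hyperparameters are illustrative placeholders, not code from any of the cited works:

```python
import copy
import torch

def local_finetune(global_model, local_loader, epochs=1, lr=1e-3):
    """Fine-tune a copy of the federated global model on one client's data."""
    model = copy.deepcopy(global_model)  # keep the received global model untouched
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in local_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model  # personalized model for this client
```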
“…(T Dinh, Tran, and Nguyen 2020) utilized Moreau envelopes as the clients' regularized loss function and decoupled personalized-model optimization from global-model learning in a bi-level problem stylized for personalized FL. (Yu, Bagdasaryan, and Shmatikov 2020) evaluated three techniques for local adaptation of federated models: fine-tuning, multi-task learning, and knowledge distillation. The two works most relevant to our method are FedHealth (Chen et al 2020) and FedBN (Li et al 2021).…”
Section: Federated Learning
Citation type: mentioning
Confidence: 99%
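
For reference, the Moreau-envelope formulation cited above (pFedMe) is commonly written as the following bi-level objective; this is a sketch in the standard notation, where f_i is client i's local loss, θ_i the personalized model, w the global model, λ the regularization weight, and N the number of clients:

```latex
\min_{w}\; F(w) = \frac{1}{N}\sum_{i=1}^{N} F_i(w),
\qquad
F_i(w) = \min_{\theta_i}\Big\{ f_i(\theta_i) + \frac{\lambda}{2}\lVert \theta_i - w \rVert^2 \Big\}
```

The inner minimization personalizes θ_i against the client's own data while the quadratic term keeps it near w, which is what lets the global-model update be decoupled from each client's personalized optimization.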
“…For inference attacks, researchers applied differential privacy (DP) [1] to federated learning by (1) clipping each client's update and (2) adding random noise [11,60]. As a result, the privacy of user data can be formally protected.…”
Section: Mobile Federated Learning
Citation type: mentioning
Confidence: 99%
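
As a concrete illustration of the clip-and-noise mechanism described above, here is a minimal sketch of DP-style server-side aggregation. The function names, the clipping bound, and the noise multiplier are illustrative assumptions, not code from the cited works [11,60]:

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a client's update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Average clipped client updates, then add Gaussian noise (DP-FedAvg style).

    The noise scale noise_multiplier * clip_norm / n matches the sensitivity
    of the clipped average to any single client's contribution.
    """
    rng = rng or np.random.default_rng()
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise

# Example: 5 clients, each sending a 10-dimensional model update.
updates = [np.random.randn(10) for _ in range(5)]
new_global_delta = dp_aggregate(updates)
```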
“…(3) Privacy-protection scenario. Since an inference attack may steal sensitive user information from the uploaded models or gradient updates, we apply differential privacy (DP) during local training to defend against this type of attack, as prior works do [11,60]. Under this privacy-protection scenario, we want to examine whether our approach still works given the DP-based local models.…”
Section: Experimental Settings
Citation type: mentioning
Confidence: 99%
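
The quote above applies DP on the client side, during local training, rather than at aggregation. A minimal sketch of one such local step is below; clipping the whole gradient instead of per-example gradients is a deliberate simplification of DP-SGD for illustration, and the hyperparameters are assumptions:

```python
import torch

def dp_local_step(model, batch, loss_fn, opt, clip_norm=1.0, noise_std=0.01):
    """One DP-style local training step: clip the gradient, then add noise."""
    x, y = batch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    # Bound the gradient's L2 norm, then perturb it with Gaussian noise
    # so the resulting local model reveals less about individual examples.
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            p.grad += noise_std * torch.randn_like(p.grad)
    opt.step()
```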