2020
DOI: 10.48550/arxiv.2011.04050
Preprint

Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning

Cited by 5 publications (7 citation statements)
References 5 publications
“…There have been other FL studies inspired by the dropout method [28], [29]. Namely, "Federated Dropout" [28] and "Adaptive Federated Dropout" [29]. The main goal of both of these approaches is to increase communication efficiency by decreasing the model size to be sent and received by the local clients (mobile devices).…”
Section: Discussion and Future Directions (mentioning)
confidence: 99%
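To make the idea in the quote above concrete, here is a minimal sketch of the sub-model extraction behind Federated Dropout: the server drops a random fraction of hidden units per dense layer and transmits only the surviving rows and columns of each weight matrix. This is an illustration under assumed names and layout (`extract_submodel`, `keep_frac`, a plain MLP as a list of matrices), not the cited papers' implementation.

    import numpy as np

    def extract_submodel(weights, keep_frac, rng):
        # Drop a random fraction of hidden units in each dense layer and
        # keep only the rows/columns that the surviving units need.
        # `weights` is a list of (out_dim, in_dim) matrices for a plain MLP.
        sub, in_keep = [], np.arange(weights[0].shape[1])  # keep all inputs
        for i, W in enumerate(weights):
            if i < len(weights) - 1:  # hidden layer: sample units to keep
                out_keep = np.sort(rng.choice(W.shape[0],
                                              int(keep_frac * W.shape[0]),
                                              replace=False))
            else:                     # output layer is kept whole
                out_keep = np.arange(W.shape[0])
            sub.append(W[np.ix_(out_keep, in_keep)])
            in_keep = out_keep
        return sub

    rng = np.random.default_rng(0)
    full = [rng.standard_normal((64, 32)), rng.standard_normal((10, 64))]
    sub = extract_submodel(full, keep_frac=0.5, rng=rng)
    print([w.shape for w in sub])  # [(32, 32), (10, 32)]: fewer parameters on the wire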
“…Due to the aforementioned superiorities, AQG can be used jointly with some other communication efficient methods for FL architectures, such as gradient sparsification [23], client selection based on local resources [13,20,29] and adaptively distributing subnetworks for heterogeneous clients [6,9]. Such superiorities and flexibility endow great potentials for the proposed FL framework with AQG.…”
Section: Discussion (mentioning)
confidence: 99%
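The passage above lists techniques that compose with AQG. As a generic illustration of two of them, the sketch below applies top-k gradient sparsification followed by uniform 8-bit quantization; the actual AQG scheme adapts its quantization level, which this fixed-level toy version does not. The function names (`topk_sparsify`, `quantize`, `reconstruct`) are illustrative, not from the cited work.

    import numpy as np

    def topk_sparsify(grad, k):
        # Keep only the k largest-magnitude entries; ship (indices, values).
        flat = grad.ravel()
        idx = np.argpartition(np.abs(flat), -k)[-k:]
        return idx, flat[idx]

    def quantize(values, n_bits=8):
        # Uniform symmetric quantization to n_bits (a fixed-level stand-in
        # for an adaptive scheme such as AQG).
        scale = float(np.max(np.abs(values))) or 1.0
        levels = 2 ** (n_bits - 1) - 1
        return np.round(values / scale * levels).astype(np.int8), scale

    def reconstruct(idx, q, scale, shape, n_bits=8):
        # Rebuild a dense gradient from the sparse, quantized message.
        levels = 2 ** (n_bits - 1) - 1
        out = np.zeros(int(np.prod(shape)))
        out[idx] = q * (scale / levels)
        return out.reshape(shape)

    g = np.random.default_rng(1).standard_normal((256, 128))
    idx, vals = topk_sparsify(g, k=g.size // 100)   # transmit ~1% of entries
    q, scale = quantize(vals)
    g_hat = reconstruct(idx, q, scale, g.shape)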
“…Recently, compression-based methods have been widely adopted to improve the communication efficiency of FL, where only a part of the weight or gradient information is transmitted. A dropout approach was considered in [4], where a partial network is dropped during the training to reduce the number of parameters to be transmitted. … In Section II, we introduce the general FL system and the proposed FedDLR method.…”
Section: Introduction (mentioning)
confidence: 99%
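The excerpt mentions FedDLR only by name and does not describe its compression scheme. As one common instance of "transmitting only a part of the weight information", the sketch below factors a weight matrix with a truncated SVD so that r*(m+n) numbers are sent instead of m*n. This is an assumed, generic low-rank example, not FedDLR itself.

    import numpy as np

    def low_rank_compress(W, rank):
        # Factor W (m x n) as A @ B with A (m x r) and B (r x n) via
        # truncated SVD, so r*(m+n) numbers are transmitted instead of m*n.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        r = min(rank, len(s))
        return U[:, :r] * s[:r], Vt[:r, :]  # singular values absorbed into A

    W = np.random.default_rng(2).standard_normal((512, 256))
    A, B = low_rank_compress(W, rank=16)
    print(A.size + B.size, "numbers vs", W.size)          # 12288 vs 131072
    print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative error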