2022
DOI: 10.1109/mis.2021.3114610
Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning

Abstract: Federated learning (FL) is a novel machine learning setting that enables on-device intelligence via decentralized training and federated optimization. The rapid development of deep neural networks provides powerful techniques for modeling complex problems and, under the federated setting, gives rise to federated deep learning. However, the tremendous number of model parameters places a heavy transmission load on the communication network. This paper introduces two approaches for improving communication efficiency…
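
To make the communication burden concrete, here is a minimal sketch of one round of plain federated averaging, in which every selected client uploads a full copy of the model. This illustrates the baseline setting the abstract describes, not the paper's method; the function names and the stand-in gradient are assumptions made for this sketch.

    import numpy as np

    # Minimal sketch of one plain federated-averaging round (illustrative only,
    # not the paper's algorithm). Every client uploads a full copy of the model,
    # so upstream traffic per round grows linearly with the number of parameters.

    def local_update(global_params, lr=0.1):
        # Stand-in for local training: one noisy gradient step.
        fake_grad = np.random.randn(*global_params.shape)
        return global_params - lr * fake_grad

    def fedavg_round(global_params, num_clients):
        client_params = [local_update(global_params) for _ in range(num_clients)]
        # Each upload has the size of the full model: num_clients * num_params values.
        return np.mean(client_params, axis=0)

    global_params = np.zeros(1_000_000)   # e.g. a one-million-parameter model
    global_params = fedavg_round(global_params, num_clients=10)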

Cited by 38 publications (11 citation statements)
References 7 publications
“…The approach of Ji et al. [22] is to progressively decrease the fraction of clients that perform local computation, while reducing the amount of transmitted data by means of a mechanism that masks part of the parameters of local models. Sparse Ternary Compression (STC) is a protocol proposed by Sattler et al. [23] to compress upstream and downstream communications between server and clients.…”
Section: Efficient FL
confidence: 99%
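
The following is a hedged sketch of the two mechanisms the excerpt attributes to Ji et al. [22]: the participation fraction shrinks over rounds (dynamic sampling) and each client uploads only the largest-magnitude entries of its parameter update (selective masking). The geometric decay schedule, the keep fraction, and the function names are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Dynamic sampling: the fraction of clients asked to train decays with the
    # round index (assumed geometric schedule, floored at min_frac).
    def sampled_clients(num_clients, round_idx, start_frac=1.0, decay=0.95, min_frac=0.1):
        frac = max(min_frac, start_frac * decay ** round_idx)
        k = max(1, int(frac * num_clients))
        return np.random.choice(num_clients, size=k, replace=False)

    # Selective masking: keep only the largest-magnitude entries of the update
    # and transmit them as (indices, values), masking out everything else.
    def masked_update(update, keep_frac=0.2):
        k = max(1, int(keep_frac * update.size))
        idx = np.argpartition(np.abs(update), -k)[-k:]
        return idx, update[idx]

    clients = sampled_clients(num_clients=100, round_idx=20)
    update = np.random.randn(1_000_000)
    idx, vals = masked_update(update)
    print(len(clients), "clients participate;", vals.size, "of", update.size, "values uploaded")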
“…Nevertheless, vanilla synch-based methods add extra load on the underlying communication system, as they ask all the devices to upload their data, and the master node starts its update with the first received data. Reference [14] proposed an algorithm to adjust the synchronization parameter at every iteration. References [6], [13], [15] proposed various approaches to eliminate some unnecessary uploads.…”
Section: A Distributed Learning
confidence: 99%
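
For illustration, a sketch of the synch-based pattern described above, under the assumption that the master proceeds once a fixed number of the earliest-arriving client updates has been received; the cutoff wait_for and the rest of the code are illustrative, not from the cited works. All clients are still asked to compute and upload, so the late uploads are wasted communication.

    import numpy as np

    def sync_round(client_updates, wait_for):
        # Updates arrive in some order; the master averages the first `wait_for`
        # of them and ignores the rest, whose uploads were unnecessary.
        arrival_order = np.random.permutation(len(client_updates))
        used = [client_updates[i] for i in arrival_order[:wait_for]]
        return np.mean(used, axis=0)

    updates = [np.random.randn(1000) for _ in range(50)]   # 50 clients upload
    aggregate = sync_round(updates, wait_for=10)            # only 10 are used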
“…With the development of artificial intelligence, the structure of deep learning networks has become more and more complex. As the complexity of the network model increases, the number of parameters also increases [23]. Federated learning mainly transmits model parameters, and the large number of parameters in existing deep learning models may cause federated learning to fail.…”
Section: Introduction
confidence: 99%
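
A back-of-envelope calculation, with assumed and purely illustrative numbers, of why transmitting full model parameters becomes the bottleneck as models grow:

    # Per-round upstream traffic when every client uploads a full float32 model.
    num_params    = 25_000_000   # e.g. a ResNet-50-sized model (assumed size)
    bytes_per_val = 4            # float32
    num_clients   = 100
    num_rounds    = 500

    per_client_upload = num_params * bytes_per_val        # ~100 MB per client per round
    per_round_total   = per_client_upload * num_clients   # ~10 GB per round
    total_training    = per_round_total * num_rounds      # ~5 TB over training

    print(f"per client/round: {per_client_upload / 1e6:.0f} MB")
    print(f"per round:        {per_round_total / 1e9:.1f} GB")
    print(f"whole training:   {total_training / 1e12:.1f} TB")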