2022
DOI: 10.1109/tnnls.2020.3041185
Ternary Compression for Communication-Efficient Federated Learning

Abstract: Learning over massive data stored in different locations is essential in many real-world applications. However, sharing data is full of challenges due to the increasing demands of privacy and security with the growing use of smart mobile devices and IoT devices. Federated learning provides a potential solution to privacy-preserving and secure machine learning, by means of jointly training a global model without uploading data distributed on multiple devices to a central server. However, most existing work on f…

Cited by 128 publications (69 citation statements)
References 47 publications
“…For large-scale models like the ones described above, the communication overhead of running the Federated Averaging algorithm can become a prohibitive bottleneck. Although a wide variety of methods to reduce the communication overhead in Federated Averaging have been proposed, including approaches that reduce the frequency of communication [1], use client sampling [1], [21], neural network pruning [22], message sparsification [23], [24], [25] and other lossy [26], [27], [28], [24], [29] and lossless compression techniques [30], [31], the fundamental issue of scaling to larger models persists.…”
Section: A. Federated Averaging
Citation type: mentioning; confidence: 99%
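To make concrete where this communication bottleneck sits, below is a minimal FedAvg sketch with client sampling; the client interface (local_update, num_samples) is hypothetical and stands in for local training, and the marked upload step is what the compression techniques listed in the snippet target.

```python
import numpy as np

def federated_averaging(global_w, clients, rounds=10, sample_frac=0.1, seed=0):
    """Minimal FedAvg loop: each round, a sampled subset of clients trains
    locally and the server averages their models, weighted by data size."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        k = max(1, int(sample_frac * len(clients)))
        selected = rng.choice(len(clients), size=k, replace=False)
        updates, weights = [], []
        for i in selected:
            local_w = clients[i].local_update(global_w)  # hypothetical local SGD step(s)
            updates.append(local_w)                      # <- full model uploaded here
            weights.append(clients[i].num_samples)
        weights = np.asarray(weights, dtype=float) / np.sum(weights)
        # weighted average of the uploaded models becomes the new global model
        global_w = sum(w * u for w, u in zip(weights, updates))
    return global_w
```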
“…Quantization is a popular technique to reduce communication and has been successfully applied in Federated Averaging to reduce the size of the parameter updates [14], [24], [29]. Quantization techniques, however, so far have not been applied to Federated Distillation.…”
Section: B. Soft-label Quantization
Citation type: mentioning; confidence: 99%
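As a rough illustration of such update quantization, here is a minimal ternary quantizer in the spirit of the paper's ternary compression (a sketch under simplifying assumptions, not the authors' exact scheme): each entry of an update is mapped to {-1, 0, +1} times one shared scale, so only the sign pattern and a single float per tensor need to be transmitted.

```python
import numpy as np

def ternary_quantize(update, threshold_ratio=0.05):
    """Map an update tensor to {-s, 0, +s}: zero out small entries,
    keep only the signs of the rest, and use one shared scale s."""
    t = threshold_ratio * np.max(np.abs(update))   # magnitude threshold (assumed heuristic)
    mask = np.abs(update) > t                      # entries worth transmitting
    signs = np.sign(update) * mask                 # values in {-1, 0, +1}
    # one scale per tensor, chosen to match the mean magnitude of the kept entries
    scale = float(np.abs(update[mask]).mean()) if mask.any() else 0.0
    return signs.astype(np.int8), scale

def ternary_dequantize(signs, scale):
    return signs.astype(np.float32) * scale

# usage: compress a simulated parameter update before "uploading" it
upd = np.random.randn(1000).astype(np.float32)
signs, scale = ternary_quantize(upd)
approx = ternary_dequantize(signs, scale)
```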
“…Federated learning [33] originates from attempts to address the privacy concerns of distributed learning [30] and has been widely studied and applied in real-world applications owing to its capability for privacy protection and parallel computing [50,60]. Generally, the global optimization objective of a federated learning system can be written as a weighted sum of the clients' local objectives, minimized over the shared global model…”
Section: Federated Learning
Citation type: mentioning; confidence: 99%
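The snippet breaks off before the formula itself; the standard global objective it refers to (as used by FedAvg and its variants) is, for K clients with local datasets D_k of size n_k and a per-sample loss ℓ:

```latex
\min_{w}\; F(w) \;=\; \sum_{k=1}^{K} \frac{n_k}{n}\, F_k(w),
\qquad
F_k(w) \;=\; \frac{1}{n_k} \sum_{(x_i, y_i) \in \mathcal{D}_k} \ell(w;\, x_i, y_i),
\qquad
n \;=\; \sum_{k=1}^{K} n_k .
```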
“…A number of prior studies in [3]-[16] have investigated important problems related to wireless network optimization of FL. The works in [3]-[6] provided a comprehensive survey of existing studies and summarized open problems in FL.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…One key challenge is the tension between the large communication cost of transmitting FL model parameters and the limited communication resources available [5]. On the one hand, existing studies in [7]-[12] therefore proposed compressing the FL model parameters to reduce the communication cost. In particular, the authors of [7] proposed a sparsification and quantization method that compresses the trained FL model.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
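For illustration, a generic sparsify-then-quantize compressor of the kind the snippet describes (a sketch, not the specific method of [7]): keep only the top-k entries of an update by magnitude and transmit their indices, their signs, and one shared scale.

```python
import numpy as np

def topk_sign_compress(update, k_frac=0.01):
    """Generic sparsify-then-quantize sketch: keep the top-k entries by
    magnitude and transmit indices, signs, and a single shared scale."""
    k = max(1, int(k_frac * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]   # indices of the largest |values|
    scale = float(np.abs(update[idx]).mean())        # one float for the whole tensor
    signs = np.sign(update[idx]).astype(np.int8)     # one small integer per kept entry
    return idx.astype(np.int32), signs, scale

def topk_sign_decompress(idx, signs, scale, size):
    out = np.zeros(size, dtype=np.float32)
    out[idx] = signs.astype(np.float32) * scale
    return out
```

The design choice here mirrors the trade-off discussed in the snippet: the smaller k_frac is, the less is transmitted per round, at the cost of a coarser approximation of the update.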