2021
DOI: 10.48550/arxiv.2104.06023
Preprint

Communication Efficient Federated Learning with Adaptive Quantization

Abstract: Federated learning (FL) has attracted tremendous attention in recent years due to its privacy-preserving measures and great potential in distributed but privacy-sensitive applications such as finance and health. However, the high communication overhead of transmitting high-dimensional networks and extra security masks remains a bottleneck of FL. This paper proposes a communication-efficient FL framework with Adaptive Quantized Gradient (AQG), which adaptively adjusts the quantization level based on local grad…
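The abstract is truncated, so the exact AQG rule is not visible here; the following is a minimal Python sketch of the general idea of adaptive gradient quantization, assuming a QSGD-style stochastic uniform quantizer whose level count is picked from the local gradient's norm. The `choose_levels` heuristic, the `ref_norm` parameter, and all function names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def choose_levels(grad, base_levels=16, min_levels=2, ref_norm=1.0):
    """Hypothetical adaptation rule: spend fewer quantization levels
    (fewer bits) when the local gradient is small, more when it is
    large. The paper's actual rule may differ."""
    scale = min(1.0, np.linalg.norm(grad) / ref_norm)
    return max(min_levels, int(base_levels * scale))

def quantize(grad, levels):
    """QSGD-style stochastic uniform quantization: each coordinate is
    rounded to one of `levels` points in [0, ||g||], up or down with
    probabilities that make the quantizer unbiased."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros(grad.shape, dtype=np.int8), norm
    scaled = np.abs(grad) / norm * (levels - 1)      # in [0, levels-1]
    lower = np.floor(scaled)
    round_up = np.random.rand(*grad.shape) < (scaled - lower)
    codes = (np.sign(grad) * (lower + round_up)).astype(np.int8)
    return codes, norm

def dequantize(codes, norm, levels):
    """Server-side reconstruction of the (unbiased) gradient estimate."""
    return codes.astype(np.float64) / (levels - 1) * norm

# Client: pick a level count from the local gradient, then send only
# the int8 codes, the scalar norm, and the level count to the server.
grad = 0.1 * np.random.randn(1000)
levels = choose_levels(grad)
codes, norm = quantize(grad, levels)
print(levels, np.linalg.norm(grad - dequantize(codes, norm, levels)))
```

With 16 levels each coordinate needs about 5 bits (a sign plus 4 magnitude bits) instead of 32, and an adaptive rule lets late-training rounds, where gradients are small, drop to even fewer levels.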

Cited by 1 publication (1 citation statement)
References: 20 publications
“…Similarly, [27,43,47,54] used quantization for data size reduction, in most cases mixed with other techniques. Furthermore, [27,44] proposed an adaptive schema for updating quantization to achieve communication-efficient training.…”
Section: Updates Compression (mentioning)
Confidence: 99%