2023
DOI: 10.1038/s41598-023-38916-x
Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC

Abstract: Federated learning enables multiple nodes to perform local computations and collaborate to complete machine learning tasks without centralizing the nodes' private data. However, the frequent uploading and downloading of model gradients required by the framework results in high communication costs, which have become the main bottleneck for federated learning as deep models scale up, hindering its performance. In this paper, we propose a two-layer accumulated quantized compression algorithm (TLAQC) that effectively r…
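The abstract above is truncated, so the paper's exact two-layer scheme is not spelled out here. As a rough orientation, TLAQC belongs to the family of quantized gradient compression methods with accumulated quantization error (error feedback). The sketch below is a minimal, generic illustration of that family, not the paper's algorithm; the uniform 8-level quantizer and all names are assumptions for illustration.

```python
# Generic sketch: low-bit quantization of client updates with residual
# (error) accumulation across rounds. Illustrative only; not TLAQC itself.
import numpy as np

def quantize(v, levels=8):
    """Uniformly quantize a vector to a small number of levels (assumed scheme)."""
    scale = np.max(np.abs(v)) + 1e-12
    q = np.round((v / scale) * (levels - 1)) / (levels - 1)
    return q * scale

class CompressedClient:
    def __init__(self, dim):
        self.residual = np.zeros(dim)  # quantization error carried over from earlier rounds

    def compress_update(self, grad):
        corrected = grad + self.residual   # fold accumulated error back into the update
        q = quantize(corrected)            # low-bit message actually uploaded to the server
        self.residual = corrected - q      # accumulate what quantization lost this round
        return q
```

Carrying the residual forward keeps the compressed updates unbiased over time, which is the usual reason accumulation is paired with aggressive quantization.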

Cited by 2 publications (1 citation statement) | References 13 publications
“…Quantization-based gradient compression reduces the amount of traffic uploaded and downloaded in the federated learning framework. In the 1-bit quantization proposed by Ren et al. [20], the client only needs to upload the sign of the local model gradient to the server.…”
Section: Efficient Communication Federated Learning
confidence: 99%
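The excerpt describes 1-bit quantization only at a high level: the client uploads the signs of its local gradient. A minimal sketch of that idea is below; the per-tensor magnitude scale used for reconstruction is an assumption for illustration, not a detail given in the citation statement or attributed to Ren et al. [20].

```python
# Minimal sketch of 1-bit (sign) gradient quantization: the client uploads
# only signs; the server reconstructs an approximate gradient. The scalar
# scale is an assumed detail, not taken from the cited scheme.
import numpy as np

def client_encode(grad):
    signs = np.sign(grad).astype(np.int8)   # 1 bit of information per coordinate
    scale = np.mean(np.abs(grad))           # assumed single float sent with the signs
    return signs, scale

def server_decode(signs, scale):
    return signs.astype(np.float32) * scale  # approximate gradient reconstruction

# Example: a 4-dimensional gradient is reduced to 4 signs plus one float.
g = np.array([0.2, -0.05, 0.7, -0.3])
signs, scale = client_encode(g)
g_hat = server_decode(signs, scale)
```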