2022
DOI: 10.48550/arxiv.2202.08420
Preprint
Time-Correlated Sparsification for Efficient Over-the-Air Model Aggregation in Wireless Federated Learning

Yuxuan Sun,
Sheng Zhou,
Zhisheng Niu
et al.

Abstract: Federated edge learning (FEEL) is a promising distributed machine learning (ML) framework to drive edge intelligence applications. However, due to dynamic wireless environments and the resource limitations of edge devices, communication becomes a major bottleneck. In this work, we propose time-correlated sparsification with hybrid aggregation (TCS-H) for communication-efficient FEEL, which jointly exploits the power of model compression and over-the-air computation. By exploiting the temporal correlations …
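
The abstract only names the ingredients of TCS-H, so the following is a minimal, hypothetical Python sketch of the time-correlated sparsification idea: each device keeps the k largest-magnitude entries of its model update, but reuses part of the previous round's sparsity mask so that devices' supports overlap across rounds and aligned entries can be summed over the air. Every function and parameter name here (time_correlated_sparsify, reuse_ratio, etc.) is an illustrative assumption, not the authors' implementation.

import numpy as np

def time_correlated_sparsify(update, prev_mask, k, reuse_ratio=0.5):
    # Hypothetical sketch: keep k entries of `update`, reusing part of the
    # previous round's mask (temporal correlation) and filling the rest
    # with the current round's largest-magnitude entries.
    k_reuse = int(k * reuse_ratio)                 # budget spent on the old mask
    reused = np.flatnonzero(prev_mask)[:k_reuse]   # coordinates kept from last round

    magnitudes = np.abs(update)
    magnitudes[reused] = -np.inf                   # exclude reused coordinates

    n_fresh = k - len(reused)
    fresh = np.argsort(magnitudes)[-n_fresh:] if n_fresh > 0 else np.empty(0, dtype=int)

    mask = np.zeros(update.shape, dtype=bool)
    mask[reused] = True
    mask[fresh] = True
    return np.where(mask, update, 0.0), mask

# Toy usage: two devices sharing the previous mask produce overlapping
# supports, so their sparse updates can be summed entrywise, mimicking
# analog over-the-air aggregation.
rng = np.random.default_rng(0)
d, k = 20, 6
prev_mask = np.zeros(d, dtype=bool)
prev_mask[rng.choice(d, size=k, replace=False)] = True

u1, m1 = time_correlated_sparsify(rng.normal(size=d), prev_mask, k)
u2, m2 = time_correlated_sparsify(rng.normal(size=d), prev_mask, k)
aggregate = u1 + u2   # the wireless channel adds aligned sparse entries "for free"

Reusing part of the previous mask trades a little per-round compression accuracy for support alignment across devices, which is what makes analog summation of sparse updates meaningful.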

Cited by 3 publications (3 citation statements) | References 12 publications
“…, $|\tilde{V}|\}$, required subpacket (i.e., $V(i) = P(\tilde{V}(i))$) from the set $\tilde{V}$, database $n$ picks the column $\tilde{V}(i)$ of the permutation-reversing matrix $R_n$ given in (10), indicated by $R_n(:, \tilde{V}(i))$, and calculates the corresponding query given by…”
Section: B. Reading Phase at Time T (mentioning)
confidence: 99%
“…We note that the ITU standard for the 5G uplink user-experienced data rate is only 50 Mb/s [44], and even a 6G uplink user-experienced data rate at the Gbit/s level may not sufficiently support the huge uploading requirements of regular FL. Additionally, to the best of our knowledge, existing works on improving FL efficiency (such as FedAvg [4], sparsification [45], [46] and quantization [47], [48] with or without error feedback [49], [50], federated distillation [51]-[53], pruning [54], [55], partially trainable networks [56], [57], and over-the-air computation [58]-[60]) still consider the transmission of gradient updates, achieving only a relatively limited reduction in payload along with some degradation in performance. For instance, the payload reduction is only about two orders of magnitude relative to the original payload on the same CIFAR-10 dataset [61], [62].…”
Section: A. Contributions (mentioning)
confidence: 99%
“…Apart from privacy leakage, another drawback of FL is the large communication cost incurred by sharing model parameters and updates with millions of users over multiple rounds. Solutions to this problem include gradient quantization [25]-[28], federated submodel learning (FSL) [29]-[37], and gradient sparsification [38]-[45]. In gradient quantization, the values of the gradients are quantized and represented with fewer bits.…”
Section: Introduction (mentioning)
confidence: 99%
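
The last statement contrasts gradient quantization (fewer bits per transmitted value) with gradient sparsification (fewer values transmitted). As a hedged illustration of those two generic ideas only, here is a small Python sketch; quantize_uniform and top_k_sparsify are made-up names, and neither function is taken from any of the works cited above.

import numpy as np

def quantize_uniform(grad, num_bits=4):
    # Uniform b-bit quantization: map each entry to one of 2**num_bits
    # levels on [min, max]; only the integer level index is transmitted.
    levels = 2 ** num_bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    index = np.round((grad - g_min) / scale)   # num_bits bits per entry
    return g_min + index * scale               # receiver-side reconstruction

def top_k_sparsify(grad, k):
    # Top-k sparsification: transmit only the k largest-magnitude entries
    # (values plus positions); all other coordinates are dropped.
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.random.default_rng(1).normal(size=1000)
print(np.abs(g - quantize_uniform(g, num_bits=4)).max())  # at most half a quantization step
print(np.count_nonzero(top_k_sparsify(g, k=50)))          # exactly 50 nonzeros

Quantization bounds the per-entry error but still sends every coordinate, while sparsification sends few coordinates exactly; error-feedback schemes, as the quote notes, can compensate for what either method discards.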