2023
DOI: 10.48550/arxiv.2301.03062
Preprint

AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices

Abstract: In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design the model shrinking to support local model training with elastic computation cost, and the gradient compression to allow parameter transmission with dynamic c…
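
The abstract pairs two cost knobs: a shrunken local model (elastic computation) and a compressed gradient (elastic communication). The paper's actual cost model and optimization procedure are not reproduced on this page; the snippet below is only a minimal sketch, under a toy linear cost model and with hypothetical names (pick_local_config, width_options, keep_ratios), of how a device might pick the largest (width, gradient-keep) configuration that fits its efficiency budget.

```python
# Illustrative sketch only: the cost model and search below are assumptions,
# not AnycostFL's actual algorithm. It shows the "on-demand" idea of choosing a
# per-device (width factor, gradient keep ratio) pair under an efficiency budget.

def pick_local_config(budget, full_compute_cost, full_comm_cost,
                      width_options=(1.0, 0.75, 0.5, 0.25),
                      keep_ratios=(1.0, 0.5, 0.25, 0.1)):
    """Return the largest (width, keep_ratio) whose estimated cost fits the budget.

    Assumed cost model (not from the paper): compute cost scales roughly with
    width**2 (both input and output channels shrink), and communication cost
    scales linearly with the fraction of gradient entries kept.
    """
    best = None
    for width in width_options:
        for keep in keep_ratios:
            est_cost = full_compute_cost * width ** 2 + full_comm_cost * keep
            if est_cost <= budget:
                # Prefer the widest sub-model, then the densest gradient.
                if best is None or (width, keep) > best[:2]:
                    best = (width, keep, est_cost)
    return best  # None means even the smallest configuration exceeds the budget


# Example: with this budget the device ends up with a 0.5x-width sub-model
# that keeps half of its gradient entries (prints (0.5, 0.5, 35.0)).
print(pick_local_config(budget=40.0, full_compute_cost=100.0, full_comm_cost=20.0))
```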

Cited by 2 publications (3 citation statements)
References 24 publications

“…Moreover, the unstructured pruning approach usually results in irregular weight matrices in the pruned models that are difficult to compress, which requires specialized hardware and software libraries to accelerate the training speed [24]. To effectively decrease computation and communication overhead, the structured model pruning approach [20]-[22], [25], [26] was developed to prune both filters in convolution layers and neurons in FC layers to generate sub-models for devices to train. Note that, in centralized learning, pruning filters in convolution layers has been demonstrated to effectively accelerate the learning speed without sacrificing too much accuracy [24], [27].…”
Section: A. Related Work (mentioning; confidence: 99%)
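
To make the structured-pruning idea in the quote above concrete, here is a hedged PyTorch sketch, not taken from any of the cited papers: it ranks the filters of a Conv2d layer by L1 norm and keeps only the strongest fraction, so the pruned weight tensor stays dense and regular. The helper name prune_conv_filters and the keep_ratio parameter are illustrative.

```python
# Hedged sketch of L1-norm structured filter pruning; adjusting the input
# channels of downstream layers is omitted for brevity.

import torch
import torch.nn as nn


def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Return a new Conv2d keeping the keep_ratio fraction of filters with the
    largest L1 norm."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each filter: sum of absolute weights over (in_channels, kH, kW).
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.argsort(norms, descending=True)[:n_keep]

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned


# Example: shrink a 64-filter layer into a 16-filter sub-model layer.
layer = nn.Conv2d(3, 64, kernel_size=3, padding=1)
print(prune_conv_filters(layer, keep_ratio=0.25))
```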
“…The static model pruning approach in [21], [25] or the local model composition approach in [28] distributed heterogeneous sub-models to devices for training and then aggregated them into a global inference model, which effectively reduced resource consumption for FL. The model shrinking and gradient compression approach in [26] enabled local model training with elastic computation and communication overheads. The model pruning method in [22] dynamically adjusted the model size for resource-limited devices and significantly improved the cost-efficiency of FL.…”
Section: A. Related Work (mentioning; confidence: 99%)
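
The "gradient compression" mentioned for [26] is commonly realized with magnitude-based sparsification. The sketch below shows a generic top-k variant (an assumption, not the exact scheme of [26]): only the largest-magnitude gradient entries and their indices are transmitted, and the receiver scatters them back into a dense tensor before aggregation.

```python
# Minimal top-k gradient sparsification sketch; cuts per-round upload volume
# roughly to the kept fraction of entries (plus indices).

import torch


def topk_compress(grad: torch.Tensor, keep_ratio: float):
    """Flatten the gradient, keep the top keep_ratio fraction by magnitude,
    and return (values, indices, original_shape) for transmission."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * keep_ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, grad.shape


def topk_decompress(values, indices, shape):
    """Receiver-side reconstruction: scatter the received values back into a
    dense zero tensor of the original shape."""
    flat = torch.zeros(torch.Size(shape).numel())
    flat[indices] = values
    return flat.reshape(shape)


# Example: compress a fake gradient to 10% of its entries and rebuild it.
g = torch.randn(4, 8)
vals, idx, shape = topk_compress(g, keep_ratio=0.1)
print(topk_decompress(vals, idx, shape))
```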
“…Since then, significant strides have been made to improve FL's performance. Many studies have tried to improve FL from different perspectives, such as model heterogeneity [47], [48], non-independently and identically distributed (non-IID) data [18], [49], communication efficiency [50]-[52], and robust FL [26], [41]. However, most existing studies in FL assume that every client has a clean dataset and are not designed to handle noisy labels.…”
Section: Related Work (mentioning; confidence: 99%)