2023
DOI: 10.1109/tnnls.2022.3166101
Model Pruning Enables Efficient Federated Learning on Edge Devices

Cited by 231 publications (69 citation statements)
References 24 publications
“…There are also some works focusing on practical aspects such as model compression and sparsification (Han, Wang, and Leung 2020;Jiang and Agrawal 2018;Jiang et al 2020;Konecny et al 2016) and partial worker participation (Bonawitz et al 2019;Chen et al 2020). These algorithms and techniques are orthogonal to our work and may be applied together with H-SGD.…”
Section: Related Work (mentioning)
Confidence: 98%
“…In [27], the authors formulate an optimization problem that minimizes the total energy consumption of the system under a latency constraint; to solve it, an iterative algorithm is proposed in which, at every step, closed-form solutions for time allocation, bandwidth allocation, power control, computation frequency, and learning accuracy are derived. The work in [28] […]. The work in [30] proposes an FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining accuracy similar to that of the original model. In [31], the authors consider two transmission protocols for edge devices to upload model parameters to the edge server, based on non-orthogonal multiple access and time division multiple access, respectively.…”
Section: Related Work (mentioning)
Confidence: 99%
“…In order to reduce the communication load during the propagation phase, many studies have applied sparsification and model pruning to FL. The authors in [19] adopt a simple regularization method to develop the PruneFL algorithm. This algorithm can reduce the number of parameters by a factor of up to seven.…”
Section: Introduction (mentioning)
Confidence: 99%
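The excerpts above describe pruning-based FL approaches that shrink the model to cut communication and computation cost. As a rough illustration of the core operation they rely on, the following is a minimal sketch of generic magnitude-based weight pruning in NumPy; it is not the exact PruneFL procedure from the cited paper, and the function name and interface are hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float):
    """Zero out the smallest-magnitude weights so that roughly a
    `sparsity` fraction of entries become zero.

    Returns (pruned_weights, keep_mask). Generic sketch only, not the
    adaptive/distributed scheme of the cited PruneFL paper.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    # strict inequality: ties at the threshold are also pruned
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: prune half of a 10-weight vector with magnitudes 1..10
w = np.arange(1.0, 11.0)
pruned, mask = magnitude_prune(w, 0.5)
```

In an FL setting, only the surviving (masked) parameters and their indices would be exchanged with the server, which is where the communication savings come from.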