2022
DOI: 10.1109/mwc.003.2100028
Toward Energy-Efficient Federated Learning Over 5G+ Mobile Devices

Cited by 33 publications (10 citation statements)
References 8 publications
“…FL has become the most prevalent distributed learning framework due to its advantages in data privacy and parallel model training. However, since FL is resource-hungry, the limited communication and computing capabilities of client devices are the bottleneck [8]. To mitigate this issue, two research directions have been explored: one is to design judicious resource scheduling and transmission schemes, including client selection [20,21], resource allocation [22][23][24][25], and hierarchical model aggregation [26,27].…”
Section: Related Work
confidence: 99%
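The scheduling direction described in this snippet — selecting capable clients, then averaging their updates — can be sketched minimally as follows. The function names and the product-based scoring rule are hypothetical illustrations, not the specific schemes of [20,21]:

```python
def select_clients(capabilities, k):
    """Pick the k clients with the best combined score.

    `capabilities` maps client id -> (channel_quality, compute_speed).
    The product score is a hypothetical stand-in for the selection
    criteria studied in the cited works.
    """
    ranked = sorted(capabilities,
                    key=lambda c: capabilities[c][0] * capabilities[c][1],
                    reverse=True)
    return ranked[:k]

def aggregate(updates, weights):
    """FedAvg-style weighted average of flat model updates (lists of floats)."""
    total = sum(weights)
    return [sum(w * u[i] for u, w in zip(updates, weights)) / total
            for i in range(len(updates[0]))]
```

In a round, the server would call `select_clients` to pick participants, collect their local updates, and combine them with `aggregate`, weighting e.g. by local dataset size.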
“…To mitigate this issue, two research directions have been explored: one is to design judicious resource scheduling and transmission schemes, including client selection [20,21], resource allocation [22][23][24][25], and hierarchical model aggregation [26,27]. The other is to integrate model compression frameworks [8,9]. However, the latter introduces inevitable approximation errors.…”
Section: Related Work
confidence: 99%
“…Recently, much attention has been paid to energy-efficient FL over mobile devices, where several advanced techniques are utilized to save energy during FL training [20]. On the one hand, gradient sparsification [21], [22] and gradient quantization [23], [24] techniques can compress model updates in the transmission process, significantly reducing the communication burden [25].…”
Section: Related Work
confidence: 99%
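The two compression techniques named in this snippet can be illustrated with a minimal sketch. The function names and the uniform quantizer below are illustrative assumptions, not the specific schemes of [21]–[24]:

```python
def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude gradient entries; zero the rest,
    so only k (index, value) pairs need to be transmitted."""
    keep = set(sorted(range(len(grad)), key=lambda i: abs(grad[i]),
                      reverse=True)[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

def uniform_quantize(grad, levels):
    """Round each entry to one of `levels` uniform steps over [-m, m],
    where m is the largest magnitude, so each entry fits in few bits."""
    m = max(abs(g) for g in grad) or 1.0
    step = 2 * m / (levels - 1)
    return [round(g / step) * step for g in grad]
```

Both trade accuracy for bandwidth: the zeroed or rounded-away mass is exactly the approximation error that the related-work discussion attributes to compression-based approaches.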
“…This process is repeated for multiple communication rounds until the global model converges to a satisfactory accuracy. Although FL only requires the transmission of model updates between edge devices and the server instead of raw data, such model transfer can become a communication bottleneck, especially when dealing with modern deep neural networks (DNNs) that have a huge number of parameters (e.g., on the order of hundreds of MB, or even GB) [2]. Additionally, the transmit power of edge devices is often limited in FL.…”
Section: Introduction
confidence: 99%
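The communication bottleneck described here is easy to quantify: a dense update of N parameters at 32-bit precision costs 4N bytes per round, per device. A back-of-envelope helper (hypothetical, for illustration only):

```python
def update_size_mb(num_params, bits_per_param=32):
    """Per-round, per-device upload size of a dense model update, in MB."""
    return num_params * bits_per_param / 8 / 1e6

# e.g., a 25-million-parameter DNN at full 32-bit precision
size = update_size_mb(25_000_000)  # 100.0 MB uploaded every round
```

Dropping to 8-bit quantization cuts this by 4x, which is why the compression techniques above matter for power-limited edge devices.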