2022
DOI: 10.1109/tmc.2022.3213766
Energy Efficient Federated Learning over Heterogeneous Mobile Devices via Joint Design of Weight Quantization and Wireless Transmission

Cited by 24 publications (8 citation statements) | References 20 publications
“…• Swarm Intelligence-based Routing and Self-Healing Protocols: Configured with 20 to 100 agents or nodes. Evaluation Phase: The performance was measured against three existing methods, referenced as [4], [18], and [25], across various metrics such as spectral efficiency, latency, and energy consumption.…”
Section: Results Analysis (mentioning; confidence: 99%)
“…The proposed model's integration of Dynamic Spectrum Sharing (DSS), Reinforcement Learning (RL)-based Resource Allocation, Mobile Edge Computing (MEC), Differential Privacy Techniques, and Self-Healing Network Protocols, among other innovative features, results in enhanced network efficiency, resilience, and sustainability. The advancements over the existing methods [4], [18], and [25] are evident in the model's ability to sustain high throughput, efficiently utilize available spectrum, adaptively respond to dynamic network conditions, and maintain low energy consumption and latency. This multifaceted improvement underscores the effectiveness of the proposed approach in addressing the complex demands of modern wireless communication networks.…”
Section: Overall Analysis (mentioning; confidence: 99%)
“…To mitigate this issue, there are two research directions to be explored: One is to design judicious resource scheduling and transmission schemes, including client selection [20,21], resource allocation [22][23][24][25], and hierarchical model aggregation [26,27]. The other is to integrate model compression frameworks [8,9]. However, the latter introduces inevitable approximation errors.…”
Section: Related Work (mentioning; confidence: 99%)
“…For instance, the popular image recognition model VGG16 [7] comprises 138 million parameters (528MB in 32bit float) and requires 15.5G MACs (multiply-add computation) for forward propagation (FP) process, essentially making exclusive on-device training impractical for resourceconstrained mobile or IoT devices. One promising solution is to compress the ML model to lower both communication and computing workload [8][9][10]. However, model compression/quantization inevitably induces more inference/learning errors due to the low-precision calculations.…”
Section: Introduction (mentioning; confidence: 99%)
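The last excerpt motivates weight quantization by the storage cost of a full-precision model. A minimal back-of-envelope sketch of that arithmetic, assuming the commonly cited VGG16 parameter count of roughly 138.36 million (the helper name and exact count here are illustrative, not from the quoted paper):

```python
# Sketch: storage footprint of a model's weights at different precisions,
# reproducing the "~528MB in 32-bit float" figure quoted for VGG16.
VGG16_PARAMS = 138_357_544  # commonly cited VGG16 parameter count (assumption)

def model_size_mib(n_params: int, bits_per_weight: int) -> float:
    """Weight storage in MiB for n_params weights at the given bit width."""
    return n_params * bits_per_weight / 8 / 2**20

fp32 = model_size_mib(VGG16_PARAMS, 32)  # full precision
int8 = model_size_mib(VGG16_PARAMS, 8)   # after 8-bit weight quantization
print(f"fp32: {fp32:.0f} MiB, int8: {int8:.0f} MiB")
```

This is why quantization lowers both the communication payload (fewer bits per uploaded weight) and the on-device memory footprint, at the cost of the low-precision approximation errors the excerpt mentions.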