2021
DOI: 10.3390/s21206791

Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul

Abstract: In this paper, efficient gradient updating strategies are developed for federated learning when distributed clients are connected to the server via a wireless backhaul link. Specifically, a common convolutional neural network (CNN) module is shared across all distributed clients and is trained through federated learning over a wireless backhaul connected to the main server. During the training phase, however, local gradients must be transferred from multiple clients to the server over the wireless bac…
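The gradient-updating setup described in the abstract can be sketched as one synchronous round of server-side gradient averaging. This is an illustrative assumption only: the paper trains a CNN, while the sketch below uses a linear model with squared loss purely to keep the example self-contained.

```python
import numpy as np

def local_gradient(w, X, y):
    """Hypothetical client-side gradient for a linear model with squared loss."""
    return X.T @ (X @ w - y) / len(y)

def server_round(w, client_batches, lr=0.1):
    """One synchronous federated round: each client computes a local gradient
    on its private data; the server averages the gradients received over the
    backhaul and updates the shared model."""
    grads = [local_gradient(w, X, y) for X, y in client_batches]
    return w - lr * np.mean(grads, axis=0)
```

In practice each client would also train locally for several steps before uploading, but a single averaged gradient step is enough to show the aggregation pattern.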


Cited by 4 publications (1 citation statement)
References 23 publications
“…Some straggler clients with low computing power or bad channel conditions still train an old version of the global model after the global model has already been updated with parameters from faster clients. To solve this version-gap problem, Rangwala et al. proposed a penalizing strategy [5] that raises the local learning rate of slow nodes and lowers that of fast nodes. Jaehyun et al. proposed a transmit-power allocation strategy [6] that assigns more power to nodes with bad channel conditions carrying large amounts of data and reduces the transmit power of nodes with good channel conditions carrying small amounts of data.…”
Section: Efficient Federated Learning
confidence: 99%
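The power-allocation idea summarized above can be sketched as follows. The weighting rule, power proportional to payload size divided by channel gain and normalized to a total budget, is an illustrative assumption, not the exact scheme of [6]:

```python
def allocate_power(channel_gains, data_sizes, total_power):
    """Give more transmit power to clients with weaker channels and larger
    payloads, keeping the sum within the total power budget.
    The size/gain weighting is an assumption for illustration only."""
    weights = [size / gain for gain, size in zip(channel_gains, data_sizes)]
    scale = total_power / sum(weights)
    return [w * scale for w in weights]
```

For example, with two clients carrying equal payloads, the client with a channel gain of 0.2 receives five times the power of the client with a gain of 1.0, while the total stays within the budget.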