FedSA: A staleness-aware asynchronous Federated Learning algorithm with non-IID data
2021 · DOI: 10.1016/j.future.2021.02.012

Cited by 55 publications (18 citation statements) · References 1 publication
“…A stale model may contribute only marginally to the global model, yet it causes large resource waste and time consumption if model training and uploading are not terminated in time. To address this issue, FedSA [45], a staleness-aware asynchronous FL algorithm, sets a staleness threshold for each participating client based on its computing speed. However, this approach ignores the communication cost and the training time of the whole process.…”
Section: Model Staleness Control
confidence: 99%
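
The excerpt describes FedSA's per-client staleness control only at a high level. Below is a minimal Python sketch of the idea, assuming a hypothetical threshold rule and mixing weight (neither is taken from the paper): an update staler than a client-specific threshold tau_k, derived from that client's computing speed, is dropped rather than aggregated.

    # Sketch of staleness-aware asynchronous aggregation in the spirit of
    # FedSA (illustrative assumptions, not the authors' implementation).

    def staleness_threshold(speed, base_tau=4.0):
        # Assumed rule: slower clients (smaller speed) get a looser
        # staleness threshold; faster clients a tighter one.
        return max(1, round(base_tau / speed))

    def apply_update(global_weights, global_round, client_update):
        """client_update holds 'weights' (name -> float), 'round' (the
        global round the client trained from), and 'speed'."""
        staleness = global_round - client_update["round"]
        if staleness > staleness_threshold(client_update["speed"]):
            return global_weights  # too stale: discard the update
        alpha = 1.0 / (1.0 + staleness)  # staleness-discounted mixing weight
        return {
            name: (1 - alpha) * w + alpha * client_update["weights"][name]
            for name, w in global_weights.items()
        }

As the excerpt notes, gating on staleness alone leaves communication cost and overall training time unaccounted for.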
“…for different i-th and j-th clients. Moreover, some research [17] also considers the data to be non-i.i.d. if the expectation of the local gradients differs from the global gradient:…”
Section: Recent Communication Challenges in FL
confidence: 99%
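
The condition itself is truncated in the excerpt. A common formulation in the federated learning literature, given here as an assumed reconstruction rather than the exact statement in [17], is that the data are non-i.i.d. when, for some client k, the expected local gradient differs from the global gradient:

    % Assumed reconstruction of the elided non-i.i.d. condition
    \mathbb{E}_{\xi \sim \mathcal{D}_k}\!\left[ \nabla F_k(w; \xi) \right] \neq \nabla F(w),
    \qquad \text{where } F(w) = \sum_{k=1}^{K} p_k F_k(w).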
“…The authors of [106] proposed a method called FedBN for normalizing the client data and shifting the features of the data before aggregation. The authors of [107] and [108] proposed methods called FedSA [107] and FedUFO [108] to manage instability, unreliability, inconsistency, and feature-divergence problems in distributed non-IID settings. Both methods improve learning performance in federated learning.…”
Section: CIFAR-10
confidence: 99%
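
FedBN is described above only as normalizing client data before aggregation; the commonly cited mechanism is to keep batch-normalization parameters local and aggregate only the remaining parameters. A minimal Python sketch under that assumption (the parameter-name filter is hypothetical):

    # Sketch of FedBN-style aggregation: BN parameters stay client-local,
    # everything else is averaged on the server (illustrative only).

    def is_bn_param(name):
        # Assumed naming convention for BatchNorm parameters.
        return "bn" in name or "running_mean" in name or "running_var" in name

    def fedbn_aggregate(client_models):
        """client_models: list of dicts mapping parameter name -> float."""
        n = len(client_models)
        return {
            name: sum(m[name] for m in client_models) / n
            for name in client_models[0]
            if not is_bn_param(name)  # BN parameters are never averaged
        }

Each client would then merge the returned shared parameters into its own model while retaining its local BN statistics, keeping feature normalization matched to its own non-IID data.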