Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking 2021
DOI: 10.1145/3434770.3459734
Accelerated Training via Device Similarity in Federated Learning

Cited by 13 publications (2 citation statements)
References 2 publications
“…The focus of this work is on bandwidth optimization rather than the speed of accuracy convergence. Although not directly connected with the scope of our investigation, the work of Wang et al. [26] is still very relevant. In their research, they propose to group the devices based on their data similarity, followed by selecting the devices with the best performance capacity.…”
Section: Related Work
confidence: 97%
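Both citing statements describe the same underlying idea: cluster clients by the similarity of their local data distributions, then pick the most capable client from each cluster for the next training round. Below is a minimal sketch of that selection step, assuming label histograms as the similarity signal and a scalar speed score as the capacity proxy; the names Client, label_histogram, speed, and select_representatives are hypothetical illustrations, not the method from Wang et al. (2021).

```python
# Illustrative sketch only: group devices by data similarity, then select the
# best-performing device per group, as described in the citing statements.
from dataclasses import dataclass

import numpy as np
from sklearn.cluster import KMeans


@dataclass
class Client:
    cid: int
    label_histogram: np.ndarray  # normalized label distribution of local data (assumed proxy)
    speed: float                 # assumed scalar proxy for compute/bandwidth capacity


def select_representatives(clients: list[Client], n_groups: int) -> list[Client]:
    """Cluster clients by data-distribution similarity, then return the
    fastest client from each cluster for the next training round."""
    # Stack per-client distributions into an (n_clients, n_labels) matrix.
    X = np.stack([c.label_histogram for c in clients])
    # Group clients whose local data distributions are similar.
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(X)
    reps = []
    for g in range(n_groups):
        group = [c for c, lab in zip(clients, labels) if lab == g]
        # Within each similarity group, keep only the most capable device.
        reps.append(max(group, key=lambda c: c.speed))
    return reps
```

Any pairwise similarity measure (for example, cosine distance between model updates) could replace the histogram clustering here; the quoted statements do not specify which metric the original paper uses.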
“…Finally, it is worth mentioning that our similarity-driven principle could be adopted for efficient model training in FL. Recently, Wang et al. (2021) analyzed the impact of data heterogeneity on accelerating model training in FL by identifying groups of nodes with similar data distributions.…”
Section: Relation With Other Paradigms
confidence: 99%