2022
DOI: 10.3390/fi14120377
FedCO: Communication-Efficient Federated Learning via Clustering Optimization

Abstract: Federated Learning (FL) provides a promising solution for preserving privacy in learning shared models on distributed devices without sharing local data on a central server. However, most existing work shows that FL incurs high communication costs. To address this challenge, we propose a clustering-based federated solution, entitled Federated Learning via Clustering Optimization (FedCO), which optimizes model aggregation and reduces communication costs. In order to reduce the communication costs, we first divi…

Cited by 9 publications (4 citation statements)
References 40 publications
“…For example, a previous study proposed a clustering optimization method based on FL, in which the authors utilized similarity to divide the clients into groups and then chose representative workers to communicate with a server; silhouette validation was used to ensure that the main workers are close to their current cluster [92]. Depending on the complexity of the calculation and the type of data, there are many different criteria for validating the clustering algorithm, including optimization- and difference-like criteria. Table 4 displays the relationship between criteria and complexity. Two or more skews were used to select a client with a lower degree of non-IID to participate in model training [20], [45]…”
Section: B Application Industry Engineeringmentioning
confidence: 99%
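The clustering step quoted above — grouping clients by similarity, then picking the representative worker per cluster via silhouette validation — can be sketched as follows. This is an illustrative sketch, not the paper's actual implementation; the data layout (clients as numeric update vectors) and all function names are assumptions.

```python
import math

def dist(a, b):
    # Euclidean distance between two client-update vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def silhouette(points, labels):
    # Per-point silhouette s = (b - a) / max(a, b), where a is the mean
    # intra-cluster distance and b is the mean distance to the nearest
    # other cluster; s near 1 means the point sits firmly in its cluster.
    scores = []
    clusters = set(labels)
    for i, p in enumerate(points):
        own = [dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own) if own else 0.0
        b = min(
            sum(dist(p, points[j]) for j in range(len(points)) if labels[j] == c)
            / sum(1 for j in range(len(points)) if labels[j] == c)
            for c in clusters if c != labels[i]
        )
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return scores

def representatives(points, labels):
    # For each cluster, pick the client with the highest silhouette score,
    # i.e. the worker closest to its own cluster, as its representative.
    scores = silhouette(points, labels)
    reps = {}
    for i, c in enumerate(labels):
        if c not in reps or scores[i] > scores[reps[c]]:
            reps[c] = i
    return reps
```

For example, with two tight clusters of two clients each, `representatives([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)], [0, 0, 1, 1])` selects one representative index per cluster.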
“…These will all set the baseline for, and boost, the planned collaborative research with industrial partners involved in HINTS, e.g., NODA Intelligent Systems. In the second area, one of the pursued research directions involved studying approaches that bring efficiency and robustness to FL settings, as discussed in [29]. In paper [30], researchers from Theme D have proposed a novel FL model that copes with statistically heterogeneous environments by introducing a group-personalized FL method.…”
Section: Initial Hints Publicationsmentioning
confidence: 99%
“…Each operator in the pipeline has a property called epochs, which defines a condition that, if satisfied, enables the execution of the operator for the specific epoch. Using this property, it is possible to define operators to be executed only in the initialization phase and others in the iteration phase of an ML algorithm, like in [7] and [9].…”
Section: F the Sentinel Microservicementioning
confidence: 99%
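The epochs property described in the quotation above can be modeled as a per-operator predicate checked on every epoch. The class and attribute names below are illustrative assumptions about such a pipeline, not the actual microservice API.

```python
class Operator:
    def __init__(self, name, epochs):
        # epochs: a predicate over the epoch number that decides
        # whether this operator runs in that epoch.
        self.name = name
        self.epochs = epochs
        self.runs = []  # record of epochs in which the operator executed

    def execute(self, epoch):
        self.runs.append(epoch)

def run_pipeline(operators, n_epochs):
    # Each epoch, run only the operators whose epochs condition holds.
    for epoch in range(n_epochs):
        for op in operators:
            if op.epochs(epoch):
                op.execute(epoch)

# An initialization-only operator and an iteration-phase operator:
init_op = Operator("init", epochs=lambda e: e == 0)
iter_op = Operator("train", epochs=lambda e: e > 0)
```

Running `run_pipeline([init_op, iter_op], 3)` executes `init` only in epoch 0 and `train` in epochs 1 and 2, mirroring the initialization/iteration split the quotation describes.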
“…We decided to implement the FedCO algorithm proposed in [9] and the baseline FL algorithm, FedAvg [5]. The implementation of FedCO leverages the Client clustering pattern, while the implementation of FedAvg leverages the Client registry pattern.…”
Section: A the Use Casementioning
confidence: 99%
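FedAvg [5], used as the baseline above, aggregates client models as a data-size-weighted average of their parameters. A minimal sketch of that aggregation step, assuming flat list-based parameter vectors (the function and argument names are illustrative):

```python
def fedavg(client_weights, client_sizes):
    # Weighted average of client parameter vectors:
    # w_global[i] = sum_k (n_k / n_total) * w_k[i]
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

For example, `fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])` weights the second client three times as heavily and returns `[2.5, 3.5]`.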