2022
DOI: 10.1007/s11280-022-01046-x
Multi-center federated learning: clients clustering for better personalization

Abstract: Personalized decision-making can be implemented in a federated learning (FL) framework that collaboratively trains a decision model by extracting knowledge across intelligent clients, e.g., smartphones or enterprises. FL can mitigate the data-privacy risk of collaborative training because it merely collects local gradients from users without access to their raw data. However, FL is fragile in the presence of the statistical heterogeneity that is commonly encountered in personalized decision making, e.g., non-IID data …

Cited by 114 publications (43 citation statements)
References 66 publications
“…The flexible clustered FL framework (FlexCFL) proposed in [33] groups clients based on the similarities between the clients' optimization directions, achieving lower training divergence within clusters. Long et al [34] designed a multi-center federated loss as the objective function for client clustering in FL and proposed the federated stochastic expectation maximization (FeSEM) algorithm to optimize this objective.…”
Section: A. Enhanced Federated Learning Algorithms
confidence: 99%
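To make the FeSEM alternation concrete, here is a minimal sketch of one round, assuming client models are exchanged as flattened parameter vectors and cluster-center models are compared by L2 distance; the function name fesem_round and the array shapes are illustrative assumptions, not the authors' implementation.

import numpy as np

def fesem_round(client_weights, centers):
    """One EM-style round of multi-center clustering (illustrative sketch).

    client_weights: (n_clients, dim) array of flattened local model parameters.
    centers: (k, dim) array of cluster-center models.
    Returns updated centers and each client's cluster assignment.
    """
    # E-step: assign each client to the nearest cluster-center model
    # (L2 distance in parameter space).
    dists = np.linalg.norm(client_weights[:, None, :] - centers[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)

    # M-step: move each center to the mean of its assigned client models,
    # which minimizes the within-cluster distance objective.
    new_centers = centers.copy()
    for c in range(centers.shape[0]):
        members = client_weights[assignments == c]
        if len(members) > 0:
            new_centers[c] = members.mean(axis=0)
    return new_centers, assignments

Iterating these two steps until assignments stabilize is the stochastic-EM loop that optimizes the multi-center federated loss described in the quote.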
“…Such low availability and partial participation limit the information available to the clustering algorithms. This, however, is ignored by CFL (Sattler et al., 2021), multi-center FL (Long et al., 2022) and FL+HC (Briggs et al., 2020), and makes deployment impractical, as they require a complete pass over the entire population to identify clusters. Furthermore, clients usually have limited on-board resources, but IFCA (Ghosh et al., 2020), FlexCFL (Duan et al., 2021), and FL+HC require extra computation for every client and/or over time to assign them to a cluster, which incurs large computation and communication overhead.…”
Section: Limitations of Existing Clustered FL Solutions
confidence: 99%
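A minimal sketch of the per-client cost the quote refers to, assuming an IFCA-style assignment in which each participating client evaluates every cluster model on its local data and joins the lowest-loss cluster; the linear model, squared loss, and the name ifca_assign are illustrative stand-ins, not IFCA's actual architecture.

import numpy as np

def ifca_assign(local_x, local_y, cluster_models):
    """IFCA-style cluster selection on one client (illustrative sketch).

    The client must run a forward pass of *every* cluster model over its
    local data and keep the one with the lowest loss; this k-fold
    evaluation per round is the extra on-device cost the text refers to.
    """
    losses = []
    for w in cluster_models:          # one evaluation per cluster model
        preds = local_x @ w
        # mean squared error as a stand-in for the client's training loss
        losses.append(np.mean((preds - local_y) ** 2))
    return int(np.argmin(losses))     # index of the chosen cluster

With k clusters, each client's per-round compute and download volume grow roughly k-fold, which is why the quote flags this as impractical for resource-limited clients.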
“…On the one hand, statistical heterogeneity makes a single global model insufficient for satisfying every client's data distribution (Li et al., 2020c; Zhao et al., 2018). Even if a personalization algorithm is used to generate individual models for each client, they may suffer from heterogeneity-borne challenges (Tang et al., 2021; Long et al., 2022). Several studies that try to mitigate the effect of statistical heterogeneity, such as FedYoGi (Reddi et al., 2021), q-FedAvg (Li et al., 2020b), and FTFA (Cheng et al., 2021), have shown that their convergence depends on the degree of heterogeneity, both theoretically and empirically.…”
Section: Introduction
confidence: 99%
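One common way to realize the kind of personalization discussed above is FTFA-style fine-tuning: train a single global model with FedAvg, then let each client take a few local gradient steps from it. The sketch below uses a linear model with squared loss purely for illustration; the function names, learning rates, and step counts are assumptions, not the cited papers' implementations.

import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """One FedAvg round on a linear model: local SGD, then a
    sample-size-weighted average of the client models."""
    updates, sizes = [], []
    for x, y in client_data:                      # (features, targets) per client
        w = global_w.copy()
        for _ in range(local_steps):
            grad = 2 * x.T @ (x @ w - y) / len(y)  # squared-loss gradient
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

def personalize(global_w, x, y, lr=0.1, steps=10):
    """FTFA-style personalization: fine-tune the global model on one
    client's local data, yielding an individual model per client."""
    w = global_w.copy()
    for _ in range(steps):
        w -= lr * 2 * x.T @ (x @ w - y) / len(y)
    return w

Under strong non-IID data, the fine-tuned models can drift far from the global model, which is one face of the heterogeneity-borne challenges the quote mentions.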
“…Additionally, [24] explores a benchmark for non-IID settings, dividing them into five cases such as label distribution skew, feature distribution skew, and quantity skew. Further, as [24] notes, some existing studies [15]–[17], [25] cover only one non-IID case and therefore do not evaluate this challenge sufficiently. Therefore, to avoid the influence of biased global models and to evaluate non-IID cases as comprehensively as possible, we focus on personalized FL by optimizing the local objective of each client under label and feature distribution skew.…”
confidence: 99%
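Label distribution skew, one of the non-IID cases enumerated in [24], is commonly simulated with a Dirichlet partition of a centralized dataset. The sketch below shows that standard recipe; the function name and the default alpha are illustrative assumptions.

import numpy as np

def dirichlet_label_skew(labels, n_clients, alpha=0.5, seed=0):
    """Partition sample indices across clients with Dirichlet label skew.

    For each class, its samples are split among clients according to
    proportions drawn from Dirichlet(alpha): small alpha yields strong
    label distribution skew, large alpha approaches an IID split.
    """
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        # cumulative split points over this class's samples
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx

For example, alpha = 0.1 leaves most clients with samples from only a few classes, while alpha = 100 produces a near-uniform label mix per client, so sweeping alpha evaluates a method across the skew spectrum.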