2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00309
Cluster-driven Graph Federated Learning over Multiple Domains

Cited by 64 publications (26 citation statements: 0 supporting, 26 mentioning, 0 contrasting) · References 13 publications
“…Moreover, averaging gradients collected from clients that have access to only a limited subset of tasks may translate into oscillations of the global model and suboptimal performance on the global distribution [53]. Another line of research therefore looks at improving the aggregation stage using server-side momentum [30] and adaptive optimizers [62], or at aggregating task-specific parameters [69,9,10]. To the best of our knowledge, no prior work has attempted to explain the behavior of the model in federated scenarios by looking at the loss surface and convergence minima, which is, in our opinion, a fundamental perspective for fully understanding why heterogeneous performance degrades relative to centralized and homogeneous settings.…”
Section: Statistical Heterogeneity in Federated Learning (citation type: mentioning; confidence: 99%)
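The server-side momentum this excerpt mentions can be made concrete with a short sketch in the spirit of FedAvgM (the momentum-based aggregation cited as [30]). This is a minimal illustration, not the cited paper's exact method; the function name, the dict-of-tensors model format, and the hyperparameter values are assumptions.

```python
import torch

def aggregate_with_server_momentum(global_weights, client_weights, client_sizes,
                                   momentum_buffer, beta=0.9, server_lr=1.0):
    """One aggregation round: FedAvg average, then a server momentum step."""
    total = sum(client_sizes)
    new_weights, new_buffer = {}, {}
    for name, w_global in global_weights.items():
        # Weighted average of the client models (plain FedAvg).
        avg = sum((n / total) * w[name] for w, n in zip(client_weights, client_sizes))
        # Pseudo-gradient: direction from the current global model to the average.
        delta = w_global - avg
        # Momentum smooths this direction across rounds.
        buf = beta * momentum_buffer.get(name, torch.zeros_like(delta)) + delta
        new_buffer[name] = buf
        new_weights[name] = w_global - server_lr * buf
    return new_weights, new_buffer
```

Because the momentum buffer persists across rounds, the server's update direction is an exponential average of past pseudo-gradients, which damps the round-to-round oscillations of the global model the excerpt describes.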
“…Federated Training Converges to Sharp Minima. Many works try to account for this difficulty in federated scenarios by enforcing regularization in local optimization so that the local model does not drift too far from the global one [49,38,31,1,47], by using momentum on the server side [30], or by learning task-specific parameters while keeping distinct models on the server side [69,9,10]. To the best of our knowledge, this is the first work analyzing and addressing such behavior by looking at the loss landscape.…”
Section: Where Heterogeneous FL Fails at Generalizing (citation type: mentioning; confidence: 99%)
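The local-regularization idea in this excerpt (e.g., [49,38]) is commonly implemented as a proximal term in the client objective, as in FedProx. Below is a minimal PyTorch-style sketch under that assumption; the function name and the value of mu are illustrative, not taken from the cited works.

```python
import torch
import torch.nn.functional as F

def proximal_local_loss(outputs, targets, local_model, global_params, mu=0.01):
    """Task loss plus a penalty keeping the local model near the global one."""
    task_loss = F.cross_entropy(outputs, targets)
    # Squared L2 distance between local and (frozen) global parameters.
    prox = sum((w - wg.detach()).pow(2).sum()
               for w, wg in zip(local_model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```

The penalty pulls each local minimum toward the current global model, so client updates stay aggregatable even under heterogeneous local data.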
“…Following this direction, FedRobust [34] addresses affine distribution shift by learning affine transformations. Another consistent line of research tackles this issue by learning domain-specific batch normalization (BN) statistics [15], [35], [16]. In particular, SiloBN [15] keeps all BN statistics strictly local and leverages Adaptive Batch Normalization [36] to address new domains at test time, while FedBN [16] introduces local BN layers.…”
Section: B. Addressing Statistical Heterogeneity in FL (citation type: mentioning; confidence: 99%)
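FedBN's core mechanic, keeping BN layers local while averaging everything else, can be sketched as follows. The key-name heuristic for spotting BN entries and the helper name are assumptions made for illustration; a real implementation would inspect module types instead.

```python
def aggregate_excluding_bn(client_state_dicts, client_sizes):
    """FedAvg over all parameters except batch-norm ones (FedBN-style)."""
    total = sum(client_sizes)

    def is_bn(key):
        # Heuristic: PyTorch BN entries typically contain these substrings.
        return any(tag in key for tag in
                   ("bn", "running_mean", "running_var", "num_batches_tracked"))

    aggregated = {}
    for key in client_state_dicts[0]:
        if is_bn(key):
            continue  # BN weights and statistics stay client-specific.
        aggregated[key] = sum((n / total) * sd[key].float()
                              for sd, n in zip(client_state_dicts, client_sizes))
    # Each client can load this with load_state_dict(..., strict=False),
    # keeping its own domain-specific BN layers untouched.
    return aggregated
```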
“…Numerous research papers have addressed data heterogeneity (i.e., non-IID data among local clients) in FL [1,7,13,23,31,39,41], for example by improving the fairness of client sampling [27], using adaptive optimization [9,28,37,38], and correcting the local update [16,20,33]. Federated learning has also been extended to real-life applications [8,24].…”
Section: Federated Learning (citation type: mentioning; confidence: 99%)
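One common way to "correct the local update," as this excerpt puts it, is a SCAFFOLD-style control variate that offsets client drift during local SGD. The sketch below assumes that reading; the names c_global and c_local (server and client control variates) and the learning rate are illustrative, not from the cited works.

```python
import torch

def corrected_local_step(params, grads, c_global, c_local, lr=0.1):
    """One local SGD step where (c_global - c_local) counters client drift."""
    with torch.no_grad():
        for p, g, cg, cl in zip(params, grads, c_global, c_local):
            # Shift the raw gradient by the control-variate difference, then step.
            p.add_(g + cg - cl, alpha=-lr)
```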