2023
DOI: 10.2139/ssrn.4411404
Preprint

Federated Learning with Non-Iid Data Via Local Model Pruning

Cited by 2 publications (3 citation statements)
References 0 publications
“…For the i.i.d. data setting, the data distribution over UEs is balanced and uniform. For the non-i.i.d. data setting, the distribution over UEs is unbalanced, meaning that the dataset size varies greatly between different UEs [37]. In [37]–[39], the performance of FL systems is shown to be very sensitive to non-i.i.d. data distributions, so experiments under the non-i.i.d. setting can evaluate the robustness of the proposed algorithm.…”
Section: A. Experimental Settings
confidence: 99%
“…For the non-i.i.d. data setting, the dataset distribution over UEs is unbalanced, meaning that the dataset size varies greatly between different UEs [37]. In [37]–[39], the performance of FL systems is shown to be very sensitive to non-i.i.d. data distributions, so experiments under the non-i.i.d. setting can evaluate the robustness of the proposed algorithm. To this end, we run experiments with different datasets under the non-i.i.d. setting to verify the robustness of the proposed MOEA-FL.…”
Section: A. Experimental Settings
confidence: 99%
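The balanced/uniform versus unbalanced partitioning described in the statements above can be sketched as follows. This is a minimal illustrative example, not the cited papers' exact protocol: the `partition` helper and the Dirichlet prior for unbalanced client sizes are assumptions chosen for illustration.

```python
import numpy as np

def partition(num_samples, num_clients, iid=True, alpha=0.5, seed=0):
    """Split sample indices across clients (UEs).

    iid=True  -> balanced, uniform shards of equal size.
    iid=False -> unbalanced shards whose sizes are drawn from a
                 Dirichlet prior, so dataset size varies greatly
                 between different clients.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)
    if iid:
        return np.array_split(idx, num_clients)
    # Unbalanced split: each client's share of the data ~ Dirichlet(alpha)
    shares = rng.dirichlet(alpha * np.ones(num_clients))
    cuts = (np.cumsum(shares)[:-1] * num_samples).astype(int)
    return np.split(idx, cuts)

iid_parts = partition(1000, 5, iid=True)
noniid_parts = partition(1000, 5, iid=False)
sizes = [len(p) for p in noniid_parts]  # sizes differ greatly across clients
```

A smaller `alpha` makes the split more skewed; large `alpha` approaches the balanced case.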
“…These techniques can reduce communication overhead and enable the system to scale to a larger number of clients. A variety of approaches have been proposed, and their potential has been demonstrated in several studies [23, 29, 44, 57–59].…”
Section: Data Decoupling Techniques in Federated Learning
confidence: 99%
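One way such techniques cut communication overhead, in the spirit of the local model pruning the cited paper's title refers to, is to transmit only the largest-magnitude entries of each client update. The sketch below is a generic magnitude-pruning illustration under that assumption, not the paper's specific algorithm; `prune_update` and the `sparsity` parameter are hypothetical names.

```python
import numpy as np

def prune_update(update, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a model update.

    Keeping only the top (1 - sparsity) fraction of entries lets the
    client send a sparse update, reducing uplink communication cost.
    """
    flat = np.abs(update).ravel()
    k = int(len(flat) * sparsity)          # number of entries to drop
    threshold = np.partition(flat, k)[k]   # k-th smallest magnitude
    mask = np.abs(update) >= threshold
    return update * mask, mask

update = np.random.default_rng(0).normal(size=100)
pruned, mask = prune_update(update, sparsity=0.9)
```

With `sparsity=0.9`, only about 10% of the entries survive, so the update can be encoded as (index, value) pairs at roughly a tenth of the dense size.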