49th International Conference on Parallel Processing - ICPP 2020
DOI: 10.1145/3404397.3404457
Federated Learning with Proximal Stochastic Variance Reduced Gradient Algorithms

Cited by 34 publications (63 citation statements) | References 10 publications
“…Through the collaboration between the model parameters in V, penalty terms P(V) are computed, improving the performance of client c_n under the heterogeneous dataset D_n. Note that the core of PFL is the collaborative strategy among the clients, while the penalty term targets deeper optimization of ν_n in [14], [16], [20], [22]; this motivates us, in this paper, to design a fusion rule that accounts for data heterogeneity and model overfitting for each client. Moreover, when designing this fusion rule, we treat each client's layer parameters as the basic unit.…”
Section: Problem Formulation
confidence: 99%
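
For context, the penalized PFL objective this excerpt alludes to typically adds a collaboration penalty to each client's local loss. A minimal sketch in LaTeX, reusing the excerpt's symbols ν_n, D_n, and P(V); the quadratic proximal form, the mean model ν̄, and the coefficient λ are illustrative assumptions, not notation from the citing paper:

\min_{\nu_1,\dots,\nu_N} \; \sum_{n=1}^{N} F_n(\nu_n; D_n) + P(V),
\qquad
P(V) = \frac{\lambda}{2} \sum_{n=1}^{N} \bigl\lVert \nu_n - \bar{\nu} \bigr\rVert^2,
\quad
\bar{\nu} = \frac{1}{N} \sum_{n=1}^{N} \nu_n .

The penalty pulls each personalized model ν_n toward the collaborative aggregate, which is the mechanism the excerpt credits with mitigating overfitting under heterogeneous data.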
“…where the number of local steps l_λ = 1 and the learning rate η = 0.005. 4) pFedMe introduces the idea of personalization [20], transforming the optimization into a bi-level problem that decouples the client personalization loss from the global loss; the penalty coefficient λ and the number of personalized training steps K are set to 15 and 5, respectively.…”
Section: Experimental Setup
confidence: 99%
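
To make the bi-level decoupling concrete, here is a minimal Python sketch of a pFedMe-style client update (Moreau-envelope personalization) using the hyperparameters quoted above (λ = 15, K = 5, η = 0.005, one local step). The function name, the inner step size, and the toy quadratic loss are assumptions for illustration, not code from either paper:

import numpy as np

def pfedme_client_update(w, loss_grad, lam=15.0, K=5, eta=0.005,
                         inner_lr=0.01, local_steps=1):
    """One client round: approximately solve the inner personalized problem
    theta* ~ argmin_theta f(theta) + (lam/2)||theta - w||^2 with K gradient
    steps, then move the client's copy of the global model toward theta*."""
    w = w.copy()
    for _ in range(local_steps):          # l_lambda = 1 in the excerpt
        theta = w.copy()
        for _ in range(K):                # K personalized training steps
            grad = loss_grad(theta) + lam * (theta - w)
            theta -= inner_lr * grad
        w -= eta * lam * (w - theta)      # outer step on the global model
    return w

# Toy usage with a quadratic client loss f(theta) = 0.5 * ||theta - t||^2,
# whose gradient is theta - t.
t = np.array([1.0, -2.0, 0.5])
w = pfedme_client_update(np.zeros(3), lambda theta: theta - t)
print(w)  # w has moved from the origin toward the personalized optimum t

The inner loop realizes the client personalization loss, while the outer update on w realizes the global loss; keeping the two separate is exactly the decoupling the excerpt describes.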