2021
DOI: 10.1109/tcyb.2021.3090260
Federated Continuous Learning With Broad Network Architecture

Cited by 38 publications (15 citation statements)
References 20 publications
“…Note that we average the edge-model over all clients but only average the global-model over a subset S_t of edge servers, since communication between the edge servers and clients is efficient, whereas communication between the edge servers and the cloud server incurs high cost and latency because the distance is relatively long. At the client side, i.e., at the third level, the sparse personalized client model θ_{i,j}(y_{i,j}^{t,r}) of the i-th edge server and the j-th client is determined by solving (6), where y_{i,j}^{t,r} is the local edge-model of the i-th edge server and the j-th client at global round t and edge round r. The sparse client model used here reduces the communication load between clients and edge servers. Note that (6) can be easily solved by many first-order approaches, for example Nesterov's accelerated gradient descent, based on the gradient…”
Section: SFedHP: Algorithm
confidence: 99%
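The statement above says each client solves problem (6) with a first-order method such as Nesterov's accelerated gradient descent. Problem (6) itself is not reproduced in this report, so the sketch below is a minimal illustration only: it stands in a smooth quadratic personalization surrogate (fit local data (A, b) while staying close to an edge-model y), and the names `nesterov_agd`, `A`, `b`, `y`, and `lam` are all hypothetical, not from the cited paper.

```python
import numpy as np

def nesterov_agd(grad, theta0, lr=0.05, momentum=0.9, iters=200):
    """Generic Nesterov accelerated gradient descent sketch.

    grad: callable returning the gradient at a point.
    Uses the standard look-ahead step: evaluate the gradient at an
    extrapolated point, then take a gradient step from there.
    """
    theta, theta_prev = theta0.copy(), theta0.copy()
    for _ in range(iters):
        lookahead = theta + momentum * (theta - theta_prev)
        theta_prev = theta
        theta = lookahead - lr * grad(lookahead)
    return theta

# Hypothetical stand-in for the client objective: least-squares fit to
# local data plus a proximity term pulling toward the edge-model y.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
y = np.zeros(5)      # edge-model received from the edge server
lam = 1.0            # personalization strength (illustrative)

def grad(theta):
    return A.T @ (A @ theta - b) / len(b) + lam * (theta - y)

theta_star = nesterov_agd(grad, np.zeros(5))
```

At convergence the gradient of the surrogate objective is near zero, so `theta_star` balances the local data fit against closeness to the edge-model, which is the role the personalized client model plays in the quoted three-level scheme.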
“…That is, we solve the following minimization problem instead of solving (6) to obtain an approximated personalized client model…”
Section: SFedHP: Algorithm
confidence: 99%
“…For model developers who prototype a mobile AI with FL without a proxy dataset, faster convergence across thousands to millions of devices is desired in order to efficiently test multiple model architectures and hyperparameters [29]. Service providers who frequently update a model via continual learning with FL need to minimize user overhead through better time-to-accuracy performance [35].…”
Section: Introduction
confidence: 99%