2022
DOI: 10.1109/mnet.111.2100329
Exploration Across Small Silos: Federated Few-Shot Learning on Network Edge

Cited by 4 publications (2 citation statements)
References 8 publications
“…Some preliminary work that combines FSL with FL has been done. For example, Zhao et al. [272] propose a MAML-based federated FSL method for supervised classification, but they focus on a cross-silo FL setting, where the data is siloed across a small number of participants so that the participants have mutually isolated classes of samples. In cross-silo FL [143], the participants are usually institutions or organizations, and they are assumed to be highly available during training.…”
Section: B. Decision Making and Negotiation (citation type: mentioning)
Confidence: 99%
“…Moreover, with people's rising privacy consciousness and the strengthening of data security laws, it is difficult for the power grid company to collect and analyze power load data from clients [9]. Federated learning (FL) [10][11][12][13], proposed by Google, can effectively protect the privacy of every client's raw power load data. Concretely, in the federated setting, the server coordinates a large number of clients to train a shared machine-learning model by frequently transmitting model parameters instead of collecting the raw power load data held on each client.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
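The mechanism described in the excerpt above, where a server coordinates clients that train locally and exchange only model parameters rather than raw data, can be sketched with a minimal federated-averaging (FedAvg-style) loop. This is an illustrative sketch only: the toy linear models, the simulated "power load" data, and all function names are hypothetical, not from the cited papers.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its private data.
    Only the updated weights leave the client, never (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server side of one FL round: broadcast the global weights,
    collect each client's locally updated weights, and average them."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # FedAvg: weight each client's parameters by its local sample count
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding a private (simulated) load dataset
clients = []
for n in (20, 30, 50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
print(np.round(w, 2))
```

The design choice the excerpt highlights is visible in the sketch: `federated_round` only ever sees parameter vectors, so the raw `(X, y)` pairs never leave the clients.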