2023
DOI: 10.48550/arxiv.2303.02278
Preprint

Federated Virtual Learning on Heterogeneous Data with Local-global Distillation

Abstract: Despite the growing use of Federated Learning (FL) to train machine learning models in a distributed manner, FL is susceptible to performance drops when training on heterogeneous data. Recently, dataset distillation has been explored to improve the efficiency and scalability of FL by creating a smaller, synthetic dataset that retains the performance of a model trained on the local private datasets. We discover that using distilled local datasets can amplify the heterogeneity issue in FL. To address this, we …
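To make the dataset-distillation idea in the abstract concrete, the following is a minimal, hypothetical sketch of how a single client might distill its private data into a small synthetic set by matching class-wise feature statistics (a simple distribution-matching variant). The names feature_net and private_loader and all hyper-parameters are illustrative assumptions; this is not the paper's local-global distillation method.

import torch
import torch.nn as nn


def distill_local_dataset(feature_net: nn.Module, private_loader, num_classes: int,
                          images_per_class: int = 10, steps: int = 500, lr: float = 0.1,
                          image_shape=(3, 32, 32), device: str = "cpu"):
    """Distill one client's private data into a small synthetic dataset (sketch)."""
    feature_net = feature_net.to(device).eval()
    # Learnable synthetic images with a small fixed budget per class.
    syn_x = torch.randn(num_classes * images_per_class, *image_shape,
                        device=device, requires_grad=True)
    syn_y = torch.arange(num_classes, device=device).repeat_interleave(images_per_class)
    opt = torch.optim.SGD([syn_x], lr=lr, momentum=0.5)

    for _ in range(steps):
        real_x, real_y = next(iter(private_loader))
        real_x, real_y = real_x.to(device), real_y.to(device)
        loss = 0.0
        for c in range(num_classes):
            real_c = real_x[real_y == c]
            if len(real_c) == 0:          # a class may be absent on a non-IID client
                continue
            syn_c = syn_x[syn_y == c]
            # Match mean feature embeddings of real and synthetic samples per class.
            loss = loss + ((feature_net(real_c).mean(0) - feature_net(syn_c).mean(0)) ** 2).sum()
        if not torch.is_tensor(loss):     # batch held none of this client's classes
            continue
        opt.zero_grad()
        loss.backward()
        opt.step()

    return syn_x.detach(), syn_y

In an FL setting, each client would run such a routine locally and share only the small synthetic set, which is also why, as the abstract notes, distilled local data can reflect and even amplify the heterogeneity of the underlying private datasets.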

Cited by 1 publication (1 citation statement)
References: 30 publications
“…(1) Local logits uploading: Clients train their local models using private data and subsequently submit the logits of these models on a public dataset to the central server.
Comparison of logit-based, parameter-based [129], and data-based [188] approaches:
Privacy: logit-based: no need to share parameters, less privacy risk [35]; parameter-based: need to share parameters, which poses privacy risks [31]; data-based: need to upload local data (compressed), which leads to privacy concerns.
Non-IID: logit-based: clients with heterogeneous data can learn from each other, mitigating non-IID [156]; parameter-based: clients with heterogeneous data can learn from each other, mitigating non-IID [44]; data-based: can effectively solve non-IID [51].
Communication: logit-based: no need to share parameters, low communication cost [59]; parameter-based: need to share parameters, high communication cost…”
Section: Model Compression [46] (mentioning)
confidence: 99%
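The "local logits uploading" paradigm quoted above can be sketched in a few lines. This is a hypothetical illustration under assumed names (client_models, public_loader) with a plain unweighted average at the server; it is not a specific method from the cited works.

import torch


@torch.no_grad()
def client_logits_on_public(model, public_loader, device: str = "cpu"):
    """Run one client's locally trained model over the shared public dataset."""
    model = model.to(device).eval()
    logits = [model(x.to(device)) for x, _ in public_loader]
    return torch.cat(logits)              # shape: (num_public_samples, num_classes)


def server_average_logits(all_client_logits):
    """Aggregate uploaded logits; a plain mean here, real systems may weight clients."""
    return torch.stack(all_client_logits).mean(dim=0)


# Example round: every client uploads only its logits on the same public set.
# aggregated = server_average_logits(
#     [client_logits_on_public(m, public_loader) for m in client_models])

The aggregated logits can then serve as soft targets that each client distills into its local model, so neither parameters nor raw data need to be exchanged, which is the privacy and communication advantage the quoted comparison attributes to logit-based sharing.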