2023
DOI: 10.1007/978-3-031-20096-0_44

Data Reconstruction from Gradient Updates in Federated Learning

Abstract: The explosive growth and diversity of machine learning applications motivate a fundamental rethinking of learning with mobile and edge devices. How can we address diverse/disparate client goals and learn with scarce heterogeneous data? While federated learning aims to address these issues, it has several bottlenecks and challenges hindering a unified solution. On the other hand, large transformer models have been shown to work across a variety of tasks, often achieving remarkable few-shot adaptation. This raise…
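The paper's title refers to reconstructing a client's private training data from the gradients it shares with the server. As a generic illustration (not the paper's own method), gradients of a fully connected layer admit an analytic reconstruction: for a layer z = Wx + b, the weight gradient is the outer product of the bias gradient and the input, so dividing a row of grad_W by the matching entry of grad_b recovers x exactly. A minimal numpy sketch, with all names and dimensions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client's private sample and a toy linear layer z = W x + b
x_true = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
y = rng.normal(size=3)

# Forward pass + squared-error loss; these are the gradients the client shares
z = W @ x_true + b
dz = z - y                      # dL/dz for L = 0.5 * ||z - y||^2
grad_W = np.outer(dz, x_true)   # dL/dW = dz x^T
grad_b = dz                     # dL/db = dz

# Server-side reconstruction: row i of grad_W equals grad_b[i] * x,
# so any row with a nonzero grad_b[i] reveals x after one division.
i = np.argmax(np.abs(grad_b))
x_rec = grad_W[i] / grad_b[i]

print(np.allclose(x_rec, x_true))  # True
```

For deeper networks or batched updates this closed form no longer applies, and attacks instead optimize a dummy input to match the observed gradients.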

Cited by 2 publications (3 citation statements) · References 68 publications
“Recent advances (Diakonikolas et al. 2019a; Lai, Rao, and Vempala 2016) design efficient algorithms for high-dimensional robust statistics. These techniques have been applied to more general machine learning tasks, including linear regression (Diakonikolas, Kong, and Stewart 2019), supervised learning (Diakonikolas et al. 2019b; Prasad et al. 2018), and RL (Zhang et al. 2021, 2022). Our work utilizes robust mean estimation to defend against data corruption in offline RL.”
Section: Related Work
“Adversarial RL and robust RL: RL is vulnerable to adversarial attacks (Ma et al. 2019; Zhang et al. 2020; Huang et al. 2017; Behzadan and Munir 2017). Corruption-robust RL performs policy learning under data corruption (Lykouris et al. 2021; Wei, Dann, and Zimmert 2022; Zhang et al. 2021, 2022; Chen et al. 2022), which usually results in a bias term in the performance guarantee due to the data corruption. (Niss and Tewari 2020; Kapoor, Patel, and Kar 2019) study multi-armed bandits under data corruption using robust statistics.”
Section: Related Work
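The citing works above lean on robust mean estimation to blunt corrupted updates. One standard estimator (a generic sketch, not the cited papers' algorithm) is the coordinate-wise trimmed mean: sort each coordinate across clients, discard the extremes, and average the rest, so a bounded fraction of arbitrarily corrupted updates cannot drag the aggregate far from the honest mean. All client counts and values below are illustrative:

```python
import numpy as np

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: drop the smallest and largest
    trim_frac fraction of values in each coordinate, then average."""
    arr = np.sort(np.asarray(updates), axis=0)
    k = int(len(updates) * trim_frac)
    return arr[k:len(updates) - k].mean(axis=0)

rng = np.random.default_rng(1)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 3))  # 8 honest clients near 1.0
corrupt = np.full((2, 3), 100.0)                      # 2 corrupted clients
updates = np.vstack([honest, corrupt])

plain = updates.mean(axis=0)    # naive average, dragged far from 1.0
robust = trimmed_mean(updates)  # stays near the honest mean
```

Trimming is cheap but only tolerates a corruption fraction below trim_frac; the high-dimensional estimators cited above give stronger guarantees when corruption is adversarial rather than merely extreme.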
“Security threats and model strengthening: There are emerging security threats in deep learning [1,2], such as the well-known backdoor attacks [4-6], membership inference attacks [7], adversarial sample attacks [8,9], data reconstruction attacks [10-12], etc., leading to serious information leakage or degraded system performance [13]. Previous solutions have focused on detecting these security threats and notifying the trainer.”
Section: Related Work