2021
DOI: 10.48550/arxiv.2112.02918
Preprint

When the Curious Abandon Honesty: Federated Learning Is Not Private

Cited by 18 publications (31 citation statements). References 0 publications.

“…Hence, it provides some level of privacy protection. However, recent works in FL have shown that model parameters exchanged during training could leak the raw data [5]. This remains an open challenge with FLAME as well.…”
Section: Similarity-based Client (mentioning)
confidence: 99%
“…However, this previous work often requires unrealistic model modifications. The attacks in Fowl et al. (2021) and Boenisch et al. (2021) can only recover data from linear layers, and thus require the existence of large linear layers (even one of which could be larger than standard vision models) early in a network, to avoid the downsampling and striding operations that degrade information. The attack in Pasquini et al. (2021) requires that the server be able to send separate malicious parameters to individual users who each own only a few data points, a threat that can in turn be overcome quickly if aggregation protocols are used in reverse to average server updates before processing them on the user side.…”
Section: Background and Related Work (mentioning)
confidence: 99%
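
The linear-layer leakage mentioned in this statement follows from basic calculus: for a layer y = Wx + b, the gradients satisfy dL/dW = (dL/dy) x^T and dL/db = dL/dy, so any row i with a non-zero bias gradient yields the input as x = (dL/dW)[i] / (dL/db)[i]. A minimal PyTorch sketch of this recovery follows; it is a hypothetical illustration of the principle, not code from any of the cited papers, and it assumes a single unbatched input.

```python
# Hypothetical sketch: gradients of a linear layer leak its input.
# For y = W x + b we have dL/dW = (dL/dy) x^T and dL/db = dL/dy,
# so x = (dL/dW)[i] / (dL/db)[i] for any row i with non-zero dL/db.
import torch

torch.manual_seed(0)
x = torch.randn(8)                       # "private" input to the layer
layer = torch.nn.Linear(8, 4)
loss = layer(x).sum()                    # any scalar loss works
loss.backward()

i = torch.argmax(layer.bias.grad.abs())  # pick a row with non-zero bias grad
x_rec = layer.weight.grad[i] / layer.bias.grad[i]
print(torch.allclose(x_rec, x, atol=1e-5))  # True: input recovered exactly
```

With a batch, the same quotient returns a mixture of inputs, which is why the attacks cited above engineer the layer so that each row of the gradient is dominated by a single data point.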
“…The attacks in Fowl et al. (2021), Boenisch et al. (2021), and Pasquini et al. (2021) can collectively be understood as attacks based on gradient sparsity. Although the secure aggregation protocol nominally runs as it was designed, all but one of the data points (Boenisch et al., 2021) or users (Pasquini et al., 2021) return a zero gradient for some parameters in the model, so that averaging still returns these entries directly. In Fowl et al. (2021), individual gradients are non-zero, but parts of the gradient obey a cumulative-sum structure from which a sparse single update can be recovered.…”
Section: Background and Related Work (mentioning)
confidence: 99%
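
The gradient-sparsity argument in this statement can be seen in a toy example: if only one participant contributes a non-zero gradient to a parameter slice, the securely aggregated average is just that participant's gradient scaled by 1/n, and the server recovers it by multiplying back. The NumPy sketch below is an assumed toy setup for illustration, not an implementation from the cited papers.

```python
# Hypothetical toy illustration: gradient sparsity defeats averaging.
# If only one user has a non-zero gradient in a parameter slice, the
# aggregate average reveals that user's gradient up to a factor of n.
import numpy as np

rng = np.random.default_rng(0)
n_users, dim = 4, 6
grads = np.zeros((n_users, dim))
grads[2] = rng.normal(size=dim)   # only user 2 "activates" this slice

avg = grads.mean(axis=0)          # what secure aggregation reveals
recovered = avg * n_users         # server undoes the averaging
print(np.allclose(recovered, grads[2]))  # True: individual update exposed
```

This is why secure aggregation alone is not a sufficient privacy guarantee: it hides who contributed, but a malicious server can shape the model so that contributions do not actually overlap.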