2022
DOI: 10.2478/popets-2022-0043

User-Level Label Leakage from Gradients in Federated Learning

Abstract: Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the very recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users’ training data from their shared gradients. The attack exploits the direction and magnitude of gradients…
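To make the attack intuition concrete, the following is a minimal, hypothetical sketch of estimating batch label counts from the shared gradient of a model's last layer. It assumes a ReLU network trained with cross-entropy and uses a crude proportional estimator rather than the calibrated LLG procedure from the paper; the layer sizes and variable names are made up for the demo.

```python
# Illustrative sketch only: estimating batch label counts from the gradient of the
# final linear layer, in the spirit of LLG. Architecture, sizes, and the crude
# proportional estimator are assumptions for this demo, not the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)
num_classes, batch_size = 10, 64

model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(),
                      nn.Linear(32, num_classes))
x = torch.randn(batch_size, 100)
y = torch.randint(0, num_classes, (batch_size,))

loss = nn.functional.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, list(model.parameters()))
last_w_grad = grads[-2]          # final layer weight gradient, shape [num_classes, 32]

# With a non-negative input to the last layer (ReLU), samples of class c push the
# c-th gradient row negative, so the row sums act as per-class "impact" scores.
row_sums = last_w_grad.sum(dim=1)
neg_impact = torch.clamp(-row_sums, min=0.0)

# Crude count estimate: spread the batch size proportionally to the negative impact.
est_counts = torch.round(batch_size * neg_impact / neg_impact.sum()).long()

print("true counts:", torch.bincount(y, minlength=num_classes).tolist())
print("est. counts:", est_counts.tolist())
```

The property this relies on is that, with a non-negative input to the final layer, each training sample drives the gradient row of its own label in the negative direction, so the sign and magnitude of the shared gradient carry label information.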

Cited by 33 publications (15 citation statements)
References 32 publications
“…Several model inversion attacks reconstruct the training data by exploiting the shared gradients [22, 78, 97]. In particular, they exploit the mathematical properties of gradients in specific model architectures to infer information about the input data.…”
Section: Discussion
confidence: 99%
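As a rough illustration of the kind of gradient-based reconstruction this citing work refers to, the sketch below performs gradient matching in the style of "deep leakage from gradients" (not the specific attacks cited): it optimizes a dummy input and soft label so that their gradient matches an observed one. The toy model, sizes, and optimizer settings are assumptions for the demo.

```python
# Hypothetical gradient-matching sketch: recover a single training sample from its
# shared gradient by optimizing dummy data whose gradient matches the observed one.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 10))              # toy model; weights known to the attacker
x_true = torch.randn(1, 20)
y_true = torch.tensor([3])

# Gradient the victim would share for this sample.
loss = nn.functional.cross_entropy(model(x_true), y_true)
target_grads = torch.autograd.grad(loss, list(model.parameters()))

# The attacker only uses `model` and `target_grads` from here on.
x_dummy = torch.randn(1, 20, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)      # logits of a soft "dummy" label
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    # Cross-entropy of the dummy input against the soft dummy label.
    dummy_loss = -(nn.functional.softmax(y_dummy, dim=1)
                   * nn.functional.log_softmax(model(x_dummy), dim=1)).sum()
    dummy_grads = torch.autograd.grad(dummy_loss, list(model.parameters()),
                                      create_graph=True)
    # Distance between the dummy gradient and the observed gradient.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, target_grads))
    grad_diff.backward()
    opt.step()

print("input reconstruction error:", (x_dummy.detach() - x_true).norm().item())
print("recovered label:", y_dummy.argmax().item(), "true label:", y_true.item())
```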
“…Similarly to tabular or image data, FL has been applied to many time-series applications such as load forecasting [36], natural language processing [24], [49], traffic flow prediction [78], and healthcare systems [42]. However, recent research has shown that sharing gradients and local model updates in FL leads to severe privacy leakage, because membership inference and data/label reconstruction attacks are effective [53], [126], [139], [85], [95], [136], [44], [61], [125], [32], [34].…”
Section: B. Privacy-Preserving Federated Training of Machine Learning ...
confidence: 99%
“…Although FL provides collaborative learning without data sharing, it still raises privacy issues: the shared intermediate model is vulnerable to privacy attacks that can reconstruct parties' input data or infer the membership of data samples in the training set, and hence needs to be protected during the training process [53], [126], [139], [85], [95], [136], [44], [61], [125], [32], [34]. Moreover, it was recently shown that RNNs are particularly vulnerable to inference attacks (e.g., membership inference) compared to traditional neural networks [130].…”
Section: Introduction
confidence: 99%
“…The iDLG [23] further demonstrates that the last layer of shared gradients must leak the ground-truth labels when the activation function is non-negative. Wainakh et al. [37] explored the properties of gradient-based leakage of true labels for large batch sizes. Common techniques for protecting privacy include adding noise, gradient compression, discretization, and differential privacy.…”
Section: Leakage From Gradients
confidence: 99%
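As a quick, hypothetical check of the iDLG observation quoted above (single sample, non-negative activation feeding the last layer), the snippet below reads the ground-truth label off the signs of the last-layer weight gradient; the toy architecture is an assumption for the demo.

```python
# Hypothetical check of the iDLG sign rule: for a single sample and a non-negative
# penultimate activation, only the true-label row of the last layer's weight
# gradient can be negative, so the label is readable from the gradient alone.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(50, 16), nn.Sigmoid(),   # sigmoid output is strictly positive
                      nn.Linear(16, 10))

x = torch.randn(1, 50)
y = torch.tensor([7])

loss = nn.functional.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, list(model.parameters()))
last_w_grad = grads[-2]                                   # shape [10, 16]

# Row i equals (softmax_i - 1[i == y]) * h with h > 0, hence only row y is negative.
leaked_label = last_w_grad.sum(dim=1).argmin().item()
print("true label:", y.item(), "leaked label:", leaked_label)
```

For batches larger than one these per-sample signs mix together, which is the large-batch regime that the LLG attack (and the count-estimation sketch after the abstract above) targets.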