2022
DOI: 10.1145/3510032
GRNN: Generative Regression Neural Network—A Data Leakage Attack for Federated Learning

Abstract: Data privacy has become an increasingly important issue in Machine Learning (ML), where many approaches have been developed to tackle this challenge, e.g. cryptography (Homomorphic Encryption (HE), Differential Privacy (DP), etc.) and collaborative training (Secure Multi-Party Computation (MPC), Distributed Learning, and Federated Learning (FL)). These techniques have a particular focus on data encryption or secure local computation. They transfer the intermediat…
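The abstract's core concern is that gradients shared in FL can expose private training data. As a minimal illustrative sketch of why (a toy analytic case in NumPy, not the paper's GRNN method): for any fully connected layer with a bias, a single sample's input can be read off the shared gradients directly, because each row of the weight gradient is the input scaled by the corresponding bias gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one client computes gradients of a linear layer with bias
# on a single private sample x; the server only ever sees the gradients.
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)
x = rng.normal(size=d_in)          # private input
t = rng.normal(size=d_out)         # private target

# Forward pass and MSE-loss gradients (what FL would share).
y = W @ x + b
err = 2.0 * (y - t)                # dL/dy for L = ||y - t||^2
grad_W = np.outer(err, x)          # dL/dW = (dL/dy) x^T
grad_b = err                       # dL/db = dL/dy

# Analytic leakage: row i of grad_W equals grad_b[i] * x, so
# x = grad_W[i] / grad_b[i] for any i with grad_b[i] != 0.
i = np.argmax(np.abs(grad_b))
x_recovered = grad_W[i] / grad_b[i]
```

Here `d_in`, `d_out`, and the MSE loss are arbitrary illustrative choices; the same bias-gradient trick applies to any layer of this form, which is why gradient sharing alone is not a privacy guarantee.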

Cited by 55 publications (14 citation statements) · References 27 publications
“…Their proposed attack seems to be slightly better than the one proposed by Geiping et al [64], but further experimentation is required to confirm its superiority. The same applies to Ren et al [65], whose comparison with attacks other than Zhu and Han [12] remains undone.…”
Section: Feature Inference Attacks
confidence: 91%
See 1 more Smart Citation
“…Their proposed attack seems to be slightly better than the one proposed by Geiping et al [64], but further experimentation is required to confirm their superiority. The same can be applied to Ren et al [65], whose comparison with others than Zhu and Han [12] remains undone.…”
Section: Feature Inference Attacksmentioning
confidence: 91%
“…With the same attacker knowledge, Li et al [63] propose a framework to measure the effectiveness of passive Feature inference attacks on logistic regression models, whose inputs are binary. Geiping et al [64] and Ren et al [65] propose different approaches to solve the initialization and stability problems of [12] and their attacks can handle batches of up to 100 and 256 elements, respectively. With the same attacker knowledge, Wei et al [66] propose an extensive study to measure the capabilities of passive reconstruction attacks focused on recovering images.…”
Section: Feature Inference Attacks
confidence: 99%
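The attacks surveyed above (Zhu and Han [12], Geiping et al [64], Ren et al [65]) share one core recipe: optimize a dummy input and label so that their gradients match the gradients the victim shared. A hedged, self-contained sketch of that gradient-matching loop on a toy linear model follows (illustrative names and a hand-derived chain rule; not any of the authors' implementations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model, loss L = ||W x + b - t||^2; the victim shares dL/dW, dL/db.
d_in, d_out = 6, 3
W = rng.normal(size=(d_out, d_in)) * 0.5
b = rng.normal(size=d_out) * 0.5
x_true = rng.normal(size=d_in)     # private input
t_true = rng.normal(size=d_out)    # private target

def grads(x, t):
    e = 2.0 * (W @ x + b - t)      # dL/dy
    return np.outer(e, x), e       # dL/dW, dL/db

gW, gb = grads(x_true, t_true)     # gradients observed by the attacker

def matching_loss_and_grads(x, t):
    """D = ||dL/dW(x,t) - gW||^2 + ||dL/db(x,t) - gb||^2 and its gradients."""
    e = 2.0 * (W @ x + b - t)
    rW, rb = np.outer(e, x) - gW, e - gb
    D = (rW ** 2).sum() + (rb ** 2).sum()
    dD_de = 2.0 * rW @ x + 2.0 * rb
    dD_dx = 2.0 * rW.T @ e + 2.0 * W.T @ dD_de  # direct term + chain via e (de/dx = 2W)
    dD_dt = -2.0 * dD_de                        # de/dt = -2 I
    return D, dD_dx, dD_dt

# Attacker: start from random dummy data, refine by gradient descent on D.
x_d, t_d = rng.normal(size=d_in), rng.normal(size=d_out)
D_init, _, _ = matching_loss_and_grads(x_d, t_d)
for _ in range(20000):
    _, gx, gt = matching_loss_and_grads(x_d, t_d)
    x_d -= 1e-3 * gx
    t_d -= 1e-3 * gt
D_final, _, _ = matching_loss_and_grads(x_d, t_d)
```

With enough iterations the dummy sample typically drifts toward the private `(x_true, t_true)`; the surveyed papers differ mainly in the matching objective, the initialization (the instability [64] and [65] address), and how large a batch they can handle.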
“…Wei et al [146] discussed gradient leakage attacks targeted at the federated server, which violate the client's privacy regarding its training data. Ren et al [115] pointed out that their proposed Generative Regression Neural Network can recover image data in the FL framework.…”
Section: Future Research Directions
confidence: 99%
“…We simulate scenarios with possible information-leakage risks by presenting several data leakage attacks on the CIFAR-10 dataset, including an up-convolutional neural network (UpCNN) [60] and a variational autoencoder (VAE) [61] that try to recover the input images from the feature representations. We also adopt the approach of the state-of-the-art gradient attack, the generative regression neural network (GRNN) [62], to invert the feature prototypes. From the reconstructed results in Fig.…”
Section: Testing Data Leakage From Prototypes
confidence: 99%