2018
DOI: 10.2478/popets-2019-0008

LOGAN: Membership Inference Attacks Against Generative Models

Abstract: Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using…

Cited by 333 publications (253 citation statements) | References 29 publications
“…Hayes et al. [54] presented the first MIA on generative models, which utilizes a GAN [52] to infer whether a data item was part of the training data by learning statistical differences in distributions. Hayes et al. [54] observed that the discriminator places a higher confidence value on samples that appeared in the training data when the target model is highly overfitted. Based on this observation, they proposed white-box and black-box MIAs.…”
Section: Model Inversion Attack
confidence: 99%
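The white-box attack this excerpt describes amounts to ranking candidate records by the target discriminator's confidence and declaring the top-ranked ones training members. A minimal illustrative sketch follows, assuming PyTorch and hypothetical names (`discriminator`, `candidates`, `n_members`); it is not the authors' released code:

```python
import torch

def white_box_mia(discriminator: torch.nn.Module,
                  candidates: torch.Tensor,
                  n_members: int) -> torch.Tensor:
    """Rank candidate records by discriminator confidence and return the
    indices of the n_members highest-scoring ones, predicted as members."""
    discriminator.eval()
    with torch.no_grad():
        # An overfitted discriminator assigns a higher "real" score to
        # samples it saw during training; that gap is the membership signal.
        scores = discriminator(candidates).squeeze(-1)
    return torch.topk(scores, k=n_members).indices
```

In the black-box variant, where the target discriminator is unavailable, the paper has the adversary train a local GAN on samples drawn from the target generator and then apply the same confidence ranking with the local discriminator.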
“…Goncalves et al. evaluated MC-medGAN against multiple non-adversarial generative models in a variety of privacy-compromising attacks, including AD, obtaining inconsistent results for MC-medGAN (Goncalves et al., 2020). While this is not mentioned by the authors, multiple results reported in the publication point to the fact that the GAN was not properly trained or suffered mode collapse. In black-box and white-box attacks, including the LOGAN (Hayes et al., 2017) method, medGAN performed considerably better than WGAN-GP (Chen et al., 2019b), the algorithm which served as the basis for improvements to medGAN in publications discussed in Section 3.4.1. Overall, the author notes that releasing the full model poses a high risk of privacy breaches and that smaller training sets (under 10k) also lead to a higher risk.…”
Section: Privacy
confidence: 99%
“…Membership inference. Membership inference against classification models has been studied in [22,25,27,28], and later studied for generative models and language models [15,31] as well as in the collaborative learning setting [23,24]. The attack in [28] focuses on black-box models, exploiting the differences in the models' outputs on training and non-training inputs.…”
Section: Related Work
confidence: 99%