2019
DOI: 10.2478/popets-2019-0067

Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models

Abstract: We present two information leakage attacks that outperform previous work on membership inference against generative models. The first attack allows membership inference without assumptions on the type of the generative model. Unlike previous evaluation metrics for generative models, such as Kernel Density Estimation, it considers only those samples of the model that are close to training data records. The second attack specifically targets Variational Autoencoders, achieving high membership inference accuracy. Fu…
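
The first (Monte Carlo) attack described in the abstract can be sketched in a few lines: sample from the black-box generator and score a candidate record by how much generated mass falls close to it. The sketch below is a minimal, hedged illustration of that idea; the `sample_generator` callable, the Euclidean distance, and the fixed epsilon-ball radius are illustrative assumptions, not the paper's exact estimator.

```python
# Minimal sketch of an epsilon-ball Monte Carlo membership score against a
# black-box generative model. Assumptions (not from the paper): the model is
# exposed as `sample_generator(n) -> ndarray of shape (n, d)`, distance is
# Euclidean, and the ball radius `epsilon` is fixed rather than calibrated.
import numpy as np

def mc_membership_score(candidate, sample_generator, n_samples=10_000, epsilon=0.1):
    """Fraction of generated samples falling within an epsilon-ball of `candidate`.

    Only samples close to the candidate contribute, mirroring the abstract's
    point that samples far from the training records are ignored. A higher
    score is taken as evidence that the candidate was a training member.
    """
    samples = sample_generator(n_samples)                 # shape (n_samples, d)
    dists = np.linalg.norm(samples - candidate, axis=1)   # distance to candidate
    return float(np.mean(dists <= epsilon))
```

An instance-level attacker would then declare the records with the highest scores (or those above a calibrated threshold) to be training members.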

Cited by 122 publications (161 citation statements) | References 10 publications

Citation statements (ordered by relevance):
“…While this is an appealing approach, it has been shown that generative models such as GANs are also prone to memorizing their training set [15]. This has been exploited in several recent papers to explore the vulnerability of generative models to membership inference attacks [16][17][18]. [16] designed a white-box attack on the released discriminator of a GAN and showed that it can be almost 100% accurate in some cases.…”
Section: Introduction
confidence: 99%
“…They also designed a black-box attack, which is comparatively a lot less accurate. [18] designed Monte-Carlo attacks on the generators which are shown to have high accuracy for set membership inference (defined later) and slightly lower accuracy for instance membership inference.…”
Section: Introduction
confidence: 99%
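
The statement above distinguishes set membership inference from instance membership inference. The hedged sketch below makes that distinction concrete, reusing `mc_membership_score` from the earlier sketch; the threshold rule and the mean-score comparison are assumed decision rules chosen for illustration, not necessarily the paper's exact experimental setup.

```python
# Hedged sketch of the two attack granularities, reusing mc_membership_score
# from the earlier sketch. The threshold rule and the mean-score comparison
# are assumed decision rules for illustration only.
import numpy as np

def instance_membership(candidate, sample_generator, threshold=0.05):
    """Single-record attack: declare membership iff the score clears a threshold."""
    return mc_membership_score(candidate, sample_generator) >= threshold

def set_membership(set_a, set_b, sample_generator):
    """Set-level attack: given two candidate sets, exactly one of which was
    drawn from the training data, guess the set that scores higher on average."""
    mean_a = np.mean([mc_membership_score(x, sample_generator) for x in set_a])
    mean_b = np.mean([mc_membership_score(x, sample_generator) for x in set_b])
    return "A" if mean_a >= mean_b else "B"
```

Aggregating scores over a whole set averages out per-record noise, which is consistent with the statement that set membership inference is reported to be more accurate than the instance-level variant.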
“…Shokri et al (2017) designed a black-box membership inference attack against machine learning models. Subsequently, researchers introduced several variants of the attack, such as attacks on GANs (Hayes et al, 2019), VAEs (Hilprecht & Härterich, 2019), model explanations (Shokri et al, 2019), and collaborative learning models (Nasr et al, 2019). We focus on mitigating membership inference attacks on DNN classifiers in this paper.…”
Section: Membership Inference Attacks
confidence: 99%
“…The privacy property started to be considered and investigated in recent works, mostly in the form of membership inference attacks. Specifically, Hayes et al introduce membership inference attacks against GANs trained on image data (Hayes et al, 2019 [41]; Hilprecht et al, 2019 [42]), and a systematic analysis has been conducted by Chen et al (2020) [13]. They assess an attacker's ability to infer the presence of a given sample in the GAN's training set with respect to different threat models, dataset sizes, and GAN model architectures.…”
Section: Current Research and Future Vision
confidence: 99%