2019 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2019.00056
Performing Co-membership Attacks Against Deep Generative Models

Abstract: In this paper we propose new membership attacks and new attack methods against deep generative models, including Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). Specifically, a membership attack checks whether a given instance x was used in the training data. A co-membership attack checks whether a given bundle of n instances was in the training data, with the prior knowledge that the bundle was either used entirely in training or not at all. Successful membership…
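The citing work excerpted further below describes these attacks as recovering a training-like sample from the generator's latent space. The following is a minimal white-box sketch of such a reconstruction-error membership test, assuming a pretrained PyTorch generator G that maps latent codes z ~ N(0, I) to samples; the function names, dimensions, and threshold are illustrative assumptions, not the paper's released code.

# Minimal sketch of a reconstruction-error membership test against a generator.
# Assumes a pretrained (white-box) generator `G`; all names/sizes are illustrative.
import torch

def recovery_error(G, x, latent_dim=100, steps=500, lr=0.05):
    """Optimize a latent code z so that G(z) approximates x; return the final error."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)
        loss.backward()
        opt.step()
    return loss.item()

def membership_test(G, x, threshold):
    # Training members tend to be reconstructed more accurately, so a small
    # recovery error is taken as evidence that x was in the training set.
    return recovery_error(G, x) < threshold

In practice the threshold would be calibrated on data the attacker knows to be outside the training set, so that members and non-members can be separated by their recovery errors.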

Cited by 35 publications (30 citation statements). References 7 publications.
“…Generator: forces a generator to do the wrong work via a latent adversarial code; the adversarial code should not be too far from the prior distribution [15]. Infers whether a given sample belongs to the training set based on the generated data [41], [42], [44], [45], [46], [57]. Infers sensitive attributes based on the generated data [46]…”
Section: Component Attack (Evasion, Membership Inference, Attribute Inference, Model Extraction, Poisoning)
confidence: 99%
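To make the quoted "latent adversarial code" idea concrete, here is a hedged sketch: it searches for a code z whose generated sample is mislabeled by a downstream classifier f, while a quadratic penalty keeps z close to the Gaussian prior. G, f, and all hyperparameters are placeholders for illustration, not the cited attack's exact formulation.

# Sketch of an evasion attack through the latent space: find a code z that fools
# a classifier `f` on G(z) while staying near the prior. All settings are assumed.
import torch
import torch.nn.functional as F

def latent_adversarial_code(G, f, target_label, latent_dim=100,
                            steps=300, lr=0.05, prior_weight=0.1):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_label])
    for _ in range(steps):
        opt.zero_grad()
        # Push the generated sample toward the wrong (target) class...
        attack_loss = F.cross_entropy(f(G(z)), target)
        # ...while keeping z near the Gaussian prior so the sample stays plausible.
        prior_penalty = prior_weight * z.pow(2).sum()
        (attack_loss + prior_penalty).backward()
        opt.step()
    return z.detach()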
“…In terms of DGMs, there has been much less work, as this survey will show. Our review unearthed the following research papers on: poisoning attacks [33], [34]; evasion attacks [15], [35], [36], [37], [38], [39], [40]; membership inference attacks [41], [42], [43], [44], [45], [46]; attribute inference attacks [46]; and model extraction attacks [47]. To the best of our knowledge, there are no surveys devoted to the security and privacy of DGMs.…”
Section: Introduction
confidence: 99%
“…Similar to other DL algorithms, GANs have also been shown to be vulnerable to malicious privacy breaches such as membership attacks, which are adversarial attacks designed to identify which images or patients were used in model training [66], [67], [68], [69], [70], [71], [72], [73]. These attacks essentially operate on the premise that DL algorithms perform better on images that they were trained on [74], and depend on whether the attacker has access to the code underlying the model (white-box) or not (black-box) [75]. While defense against these attacks remains an active area of research [71], [74], they are costly [74], and some defense approaches that require re-training the model may even decrease the performance of the original DL algorithm.…”
Section: Privacy
confidence: 99%
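The premise that models "perform better on images that they were trained on" also admits a purely black-box test: if the attacker can only draw samples from the generator, the distance from a candidate image to its nearest generated sample can serve as a membership score. The sketch below assumes an available sampling function sample_fn and an L2 metric; both are illustrative choices, not a method taken from the cited papers.

# Black-box membership score: distance of x to its nearest generated sample.
# `sample_fn(k)` is assumed to return k generated samples shaped like x.
import torch

def blackbox_membership_score(sample_fn, x, m=5000):
    generated = sample_fn(m)                       # (m, *x.shape)
    diffs = (generated - x.unsqueeze(0)).flatten(1)
    return diffs.norm(dim=1).min().item()          # smaller => more likely a member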
“…Generative networks were said to overfit if the statistics of training and validation recovery errors differed in some measure, for example the difference of medians. In (Liu et al., 2018), recovery errors were used similarly to perform membership attacks, where the optimization was performed over an input network to the latent space, rather than over input codes.…”
Section: Recovery Attacks
confidence: 99%
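For the bundle setting described above, where the optimization runs over an input network to the latent space rather than over per-image codes, a minimal sketch might look as follows: a small attacker network maps each of the n bundled instances to a latent code, and the bundle's pooled recovery error is then compared against a threshold. The architecture and hyperparameters are placeholders, not the authors' settings.

# Co-membership sketch: train an attacker network A: x -> z over the whole bundle
# and use the pooled recovery error as the co-membership score. All sizes assumed.
import torch
import torch.nn as nn

def bundle_recovery_error(G, bundle, latent_dim=100, steps=1000, lr=1e-3):
    """bundle: tensor of n instances, assumed flattenable to shape (n, d)."""
    n, d = bundle.size(0), bundle[0].numel()
    attacker = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, latent_dim))
    opt = torch.optim.Adam(attacker.parameters(), lr=lr)
    flat = bundle.view(n, d)
    for _ in range(steps):
        opt.zero_grad()
        z = attacker(flat)                               # one latent code per instance
        loss = (G(z).view(n, d) - flat).pow(2).mean()    # pooled recovery error
        loss.backward()
        opt.step()
    return loss.item()  # a low pooled error suggests the whole bundle was trained on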