2023
DOI: 10.1109/tkde.2021.3130903
Adversarial Attacks Against Deep Generative Models on Data: A Survey

Abstract: Deep generative models have gained much attention given their ability to generate data for applications as varied as healthcare, financial technology, and surveillance, among many more; the most popular models are generative adversarial networks (GANs) and variational auto-encoders (VAEs). Yet, as with all machine learning models, ever present is the concern over security breaches and privacy leaks, and deep generative models are no exception. In fact, these models have advanced so rapidly in recent years that work on …


Cited by 32 publications (6 citation statements)
References 78 publications (122 reference statements)
“…The starting assumption for the partitioning method is that the synthetic data distribution approximates the real dataset distribution [61]. Therefore, the probability that the attack dataset belongs to the training dataset is proportional to the probability that the attack dataset belongs to the synthetic dataset. The partitioning method does not require a large reference dataset, which explains why it is the most commonly implemented in practice.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
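The partitioning intuition quoted above can be made concrete with a short sketch: score each candidate record by how close it lies to the synthetic data, and flag the closest records as likely training members. This is a minimal illustration, assuming nearest-neighbour Euclidean distance as the proxy for synthetic-data density; the function names, the metric, and the 50% cutoff are assumptions for illustration, not details from the cited papers.

```python
import numpy as np

def membership_scores(attack_data: np.ndarray, synthetic_data: np.ndarray) -> np.ndarray:
    """Distance from each attack record to its nearest synthetic record."""
    # Pairwise Euclidean distances, shape (n_attack, n_synthetic).
    diffs = attack_data[:, None, :] - synthetic_data[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # A lower score means the record sits in a denser region of the
    # synthetic distribution, hence is more likely a training member.
    return dists.min(axis=1)

def predict_members(attack_data, synthetic_data, top_fraction=0.5):
    """Flag the fraction of attack records closest to the synthetic data."""
    scores = membership_scores(attack_data, synthetic_data)
    threshold = np.quantile(scores, top_fraction)  # illustrative cutoff
    return scores <= threshold
```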
“…For these simulations, we ran 50 iterations for each study point, varying the parameters as follows: (1) the parameter was varied randomly from 0 to 1; (2) the size of the attack dataset was fixed at 1000 observations (varying this parameter had no impact on the results, as we only need sufficient observations to obtain a stable F1 value); (3) the training dataset size was set to 5k, 15k, and 25k; (4) the Hamming distance threshold was set to 5, which is within the range of values commonly used in the literature [61]; (5) two generative models were used; and (6) four different datasets were used.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
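For the Hamming-distance matching step described in the quoted setup, a sketch might look like the following: an attack record is declared a training member when some synthetic record differs from it in at most 5 attributes, and F1 is then computed against ground-truth membership. The threshold of 5 follows the quote; the encoding of records as integer-coded categorical arrays and all function names are assumptions.

```python
import numpy as np

def hamming_match(attack_data: np.ndarray, synthetic_data: np.ndarray,
                  threshold: int = 5) -> np.ndarray:
    """Membership call: True if some synthetic record differs from the
    attack record in at most `threshold` attributes."""
    # Attribute-wise mismatch counts for every attack/synthetic pair,
    # shape (n_attack, n_synthetic); records are integer-coded categoricals.
    mismatches = (attack_data[:, None, :] != synthetic_data[None, :, :]).sum(axis=-1)
    return mismatches.min(axis=1) <= threshold

def f1_score(predicted: np.ndarray, actual: np.ndarray) -> float:
    """F1 of predicted membership flags against ground truth."""
    tp = int(np.sum(predicted & actual))
    if tp == 0:
        return 0.0
    precision = tp / predicted.sum()
    recall = tp / actual.sum()
    return 2 * precision * recall / (precision + recall)
```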
“…In conclusion, they discussed several promising directions, such as developing more efficient and scalable techniques, and highlighted the primary technical hurdles that need to be addressed. [25] provides a comprehensive overview of adversarial attacks targeting deep generative models (DGMs), which are machine learning models used for generating data such as images, text, and audio. The survey covers various types of attacks on DGMs, including those targeting training data, latent codes, generators, discriminators, and the generated data itself.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Another study emphasizes the need for secure and robust machine learning techniques in health care, particularly focusing on privacy and security [50]. Finally, a study addresses the vulnerabilities of generative models to adversarial attacks (e.g., evasion attacks and membership inference attacks), highlighting a significant area of concern in health care data security [51]. These studies collectively underscore the need for a balanced approach to leveraging the benefits of AI-driven health care innovations while ensuring robust privacy and security measures.…”
Section: Differential Privacy in GANs
Citation type: mentioning (confidence: 99%)