2022
DOI: 10.1007/978-3-030-91390-8_2
Generative Adversarial Networks: A Survey on Training, Variants, and Applications

Cited by 13 publications (5 citation statements)
References 52 publications
“…This involves generating adversarial inputs and including them in the training data, thereby encouraging the model to learn features that are invariant to the adversarial perturbations. While this approach has shown promise, it is not a panacea; adversarial training can sometimes lead to a reduction in accuracy on clean data, as the model may become overly conservative or biased towards the adversarial examples it has encountered during training [15]. The balance between model robustness and performance on clean data is a delicate one, requiring careful calibration of the training process.…”
Section: Training Process and Its Effects (mentioning, confidence: 99%)
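
To make the robustness/clean-accuracy trade-off described above concrete, here is a minimal sketch of one FGSM-style adversarial-training step in PyTorch. The function names, the epsilon value, and the adv_weight mixing coefficient are all illustrative assumptions, not details taken from the cited work [15]:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: a single signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, adv_weight=0.5):
    """One training step mixing clean and adversarial loss; adv_weight
    tunes the robustness vs. clean-accuracy balance discussed above."""
    x_adv = fgsm_perturb(model, x, y)
    optimizer.zero_grad()  # clear gradients left over from crafting x_adv
    loss = ((1 - adv_weight) * F.cross_entropy(model(x), y)
            + adv_weight * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Setting adv_weight closer to 1 emphasizes robustness at the cost of clean accuracy, which is exactly the calibration problem the excerpt points to.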
“…One variation, referred to as the Wasserstein GAN (WGAN) (144, 145), introduces a penalty to constrain the gradients of the discriminator’s output, resulting in a more stable and trainable model. While GANs utilize a sigmoid function in the last layer for binary classification, the WGAN approach removes this function to approximate the Wasserstein distance (146), using Lipschitz discriminators: namely, that for a discriminator function $D$ there exists a constant $K$ such that $|D(x_1) - D(x_2)| \le K \, \|x_1 - x_2\|$ for any two points $x_1$ and $x_2$ in the input space. This ensures that the gradient of the discriminator’s output with respect to its input is bounded by the constant $K$.…”
Section: Deep Learning Approaches (mentioning, confidence: 99%)
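
The penalty the excerpt describes is commonly implemented as a WGAN-GP-style gradient penalty. Below is a minimal PyTorch sketch; the lambda_gp coefficient and the NCHW batch layout are common-practice assumptions, not values taken from the cited references (144-146):

```python
import torch

def gradient_penalty(discriminator, real, fake, lambda_gp=10.0):
    """Penalize deviations of the discriminator's gradient norm from 1,
    softly enforcing the Lipschitz bound |D(x1) - D(x2)| <= K * ||x1 - x2||."""
    batch = real.size(0)
    # Random interpolation between real and generated samples
    # (assumes NCHW image batches).
    alpha = torch.rand(batch, 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

The penalty is added to the discriminator loss each step; because the Wasserstein formulation drops the final sigmoid, the discriminator's raw scores are used directly.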
“…This approach can provide an acceptable resilience against evasion attacks [97,110]. While there are different approaches to carrying out adversarial training, including the so-called generative adversarial networks [31,28,30,29], none of them is flawless. To begin with, this approach was mainly designed for independent and identically distributed data.…”
Section: Adversarial Training (mentioning, confidence: 99%)