Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3358081
Towards the Gradient Vanishing, Divergence Mismatching and Mode Collapse of Generative Adversarial Nets

Cited by 8 publications (5 citation statements)
References 1 publication
“…The vanishing gradients problem is another significant challenge encountered during the training phase of GANs. This issue emerges due to the complex architecture of GANs, where both G and D need to maintain a balance and learn collaboratively [221]. During the training process, as gradients are backpropagated through the layers of the network, they can diminish drastically, leading to stagnancy in learning.…”
Section: Vanishing Gradients (mentioning)
confidence: 99%
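As a minimal sketch (Python/PyTorch, not taken from the cited paper) of the loss-side mechanism usually blamed for this: with a sigmoid discriminator head, the original saturating generator loss log(1 - D(G(z))) yields a near-zero gradient once D confidently rejects fakes, while the non-saturating heuristic -log D(G(z)) still provides signal. The logit value below is an illustrative assumption.

import torch

# Raw discriminator score for a generated sample; strongly negative
# means D confidently labels the sample as fake (D(G(z)) ~ 0).
logit = torch.tensor(-8.0, requires_grad=True)

# Saturating loss from the original minimax objective: log(1 - D(G(z))).
d_fake = torch.sigmoid(logit)
torch.log(1.0 - d_fake).backward()
print(f"saturating grad:     {logit.grad.item():+.6f}")  # ~ -0.000335 (vanished)

# Non-saturating heuristic: -log D(G(z)) keeps a usable gradient.
logit.grad = None
d_fake = torch.sigmoid(logit)
(-torch.log(d_fake)).backward()
print(f"non-saturating grad: {logit.grad.item():+.6f}")  # ~ -0.999665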
“…2. Inspired by ResBlock [52][53][54], DEM consists of a main convolutional stream and two parallel residual streams. To preserve the most fundamental features, the mainstream passes through two 3 × 3 convolutions and one 1 × 1 convolution, where the 3 × 3 convolutions are with the LReLU activation function.…”
Section: Detail Enhancement Module (DEM) (mentioning)
confidence: 99%
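Reading only this excerpt, a DEM-style block might look like the PyTorch sketch below: a main stream of two LReLU-activated 3x3 convolutions followed by a 1x1 convolution, fused with two parallel residual streams by addition. The channel count, LReLU slope, contents of the residual streams, and additive fusion are assumptions not stated in the excerpt.

import torch
import torch.nn as nn

class DEMSketch(nn.Module):
    """DEM-style block: main conv stream plus two parallel residual streams."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),  # no activation stated for the 1x1 conv
        )
        self.res1 = nn.Identity()  # residual stream 1 (contents not specified in the excerpt)
        self.res2 = nn.Identity()  # residual stream 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.main(x) + self.res1(x) + self.res2(x)

x = torch.randn(1, 64, 32, 32)
print(DEMSketch()(x).shape)  # torch.Size([1, 64, 32, 32])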
“…The generator accepts a random noise vector z and learns the data distribution to generate an image G(z), while the discriminator judges whether G(z) is real or fake; ideally, the generator learns to produce images that fool the discriminator. In practice, however, training often falls short of this ideal, suffering from problems such as gradient vanishing [11] and mode collapse [12]. Therefore, most current research focuses on two directions.…”
Section: Generative Adversarial Network (mentioning)
confidence: 99%
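For reference, the adversarial game this excerpt describes is the standard GAN minimax objective of Goodfellow et al. (2014), with D trained to maximize and G to minimize

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right].

Gradient vanishing appears when D grows too strong and the log(1 - D(G(z))) term saturates; mode collapse appears when G concentrates many noise vectors z onto a few outputs that reliably fool D.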