2018
DOI: 10.1109/access.2018.2804278
Improved Boundary Equilibrium Generative Adversarial Networks

Cited by 42 publications (18 citation statements)
References 2 publications
“…Finally, the end point of training is determined from the convergence of the discriminator loss and the generator loss [10], and the trained model is obtained as the basis for reprocessing the unrepaired bright areas of the image (Figure 2). The innovation of this work is to use the RGB data obtained after DCP defogging, together with the RGB data of a fog-free image, as adversarial training inputs; this effectively resolves the otherwise unavoidable defogging distortion in the bright regions produced by DCP [11,12].…”
Section: Our Methods
confidence: 99%
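The statement above stops training once both the discriminator and generator losses have converged. The cited paper does not give a concrete criterion, so the following is only a generic, hypothetical sketch of such a stopping rule: it declares convergence when the moving averages of both loss curves over two consecutive windows stop drifting (the function name, window size, and tolerance are all assumptions, not from the source).

```python
def losses_converged(d_losses, g_losses, window=50, tol=1e-3):
    """Heuristic stopping criterion: report convergence when both the
    discriminator and generator loss histories have stabilized, i.e.
    their moving averages over two consecutive windows of `window`
    steps differ by less than `tol`."""
    if len(d_losses) < 2 * window or len(g_losses) < 2 * window:
        return False  # not enough history to judge yet

    def drift(xs):
        prev = sum(xs[-2 * window:-window]) / window
        curr = sum(xs[-window:]) / window
        return abs(curr - prev)

    return drift(d_losses) < tol and drift(g_losses) < tol


# A flat loss curve is reported as converged; a still-drifting one is not.
flat = [0.693 + 1e-5 * (i % 3) for i in range(200)]
drifting = [1.0 / (i + 1) for i in range(200)]
print(losses_converged(flat, flat))          # True
print(losses_converged(drifting, drifting))  # False
```

In practice such a rule would be checked once per epoch inside the training loop, alongside visual inspection of generated samples, since GAN losses alone can plateau without the samples being good.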
“…Eventually, the training will reach a Nash equilibrium, at which the Discriminator is unable to differentiate between the two distributions, i.e., D(x) = 1/2. Since this equilibrium is very hard to reach, many research papers [115], [116] tackle this issue [114], [117]. Fig.…”
Section: Generative Adversarial Network
confidence: 99%
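The D(x) = 1/2 condition quoted above can be checked numerically: plugging a constant discriminator output of 1/2 into the original GAN minimax objective V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))] gives log(1/2) + log(1/2) = −log 4, the well-known equilibrium value from the original GAN formulation. A minimal sketch:

```python
import math

# At the GAN Nash equilibrium the discriminator cannot tell real from
# fake samples, so it outputs D(x) = 1/2 for every input.
d_real = 0.5  # D's score on a real sample
d_fake = 0.5  # D's score on a generated sample

# Value of the minimax objective at equilibrium:
# log D(x) + log(1 - D(G(z))) = log(1/2) + log(1/2) = -log 4
value = math.log(d_real) + math.log(1.0 - d_fake)
print(round(value, 4))  # -1.3863, i.e. -log 4
```

This is why, when monitoring a training run with the standard cross-entropy losses, a discriminator loss hovering near log 4 ≈ 1.386 (for the summed real and fake terms) is often read as a sign of approaching equilibrium.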
“…To solve problems of the original GAN, such as vanishing gradients, unstable training, and poor sample diversity, many new GAN models have been proposed to increase stability and improve the quality of generated results [56], [43]. In this section, we introduce the evolution of GAN models, including the deep convolutional generative adversarial network (DCGAN) [20], conditional GAN (CGAN) [28], Wasserstein GAN (WGAN) [25], WGAN with gradient penalty (WGAN-GP) [31], Energy-Based GAN (EBGAN) [30], Boundary Equilibrium GAN (BEGAN) [8], Information GAN (InfoGAN) [29], Least Squares GAN (LSGAN) [32], Auxiliary Classifier GAN (ACGAN) [6], Degenerate avoided GAN (DRAGAN) [33], Spectral Normalization GAN (SNGAN) [34], Jacobian Regularization GAN (JR-GAN) [36], CapsGAN [37], Banach Wasserstein GAN (BWGAN) [38], and Decoder-Encoder GAN (DEGAN) [39].…”
Section: The Evolution Of GAN Model
confidence: 99%