2018
DOI: 10.1007/978-3-030-01246-5_8
Fictitious GAN: Training GANs with Historical Models

Abstract: Generative adversarial networks (GANs) are powerful tools for learning generative models. In practice, the training may suffer from lack of convergence. GANs are commonly viewed as a two-player zero-sum game between two neural networks. Here, we leverage this game-theoretic view to study the convergence behavior of the training process. Inspired by the fictitious play learning process, a novel training method, referred to as Fictitious GAN, is introduced. Fictitious GAN trains the deep neural networks using a m…
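The fictitious-play idea the abstract refers to can be made concrete with a short sketch. The following is a minimal illustration, not the authors' reference implementation: it assumes PyTorch-style generator and discriminator modules, and the function name, the history-queue mechanics, and the `k_history` bound are all illustrative assumptions. The core idea is that each network responds to a mixture of the opponent's historical models rather than only its latest one.

```python
import copy
import random
import torch

def fictitious_gan_step(G, D, g_history, d_history, real_batch,
                        g_opt, d_opt, noise_dim=64, k_history=5):
    """One update in the spirit of fictitious play: each network
    best-responds to a mixture of the opponent's past models.
    All names and queue mechanics here are illustrative."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    z = torch.randn(real_batch.size(0), noise_dim)

    # Discriminator: real data vs. samples from a past generator,
    # drawn uniformly from the history (the current G if none saved yet).
    G_past = random.choice(g_history) if g_history else G
    fake = G_past(z).detach()
    d_loss = bce(D(real_batch), torch.ones(real_batch.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: fool the average of the historical discriminators.
    fake = G(z)
    d_models = d_history + [D] if d_history else [D]
    g_loss = torch.stack([bce(Dk(fake), torch.ones(fake.size(0), 1))
                          for Dk in d_models]).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Save frozen copies and keep the histories bounded.
    g_history.append(copy.deepcopy(G).eval())
    d_history.append(copy.deepcopy(D).eval())
    del g_history[:-k_history], d_history[:-k_history]
```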


Cited by 18 publications (15 citation statements) | References 15 publications
“…The images generated by the generator network are classified as real or fake by the discriminator network during the training stage, i.e., D: y → [0, 1]. This two-player zero-sum game continues until satisfactory image generation is achieved [29]. Training of GAN algorithms is an active research area, and handling the efficient training of GAN models is a continuing effort [30].…”
Section: Conditional Generative Adversarial Network (CGAN)
Mentioning confidence: 99%
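For readers unfamiliar with the D: y → [0, 1] notation in the statement above, here is a minimal sketch of a discriminator as a binary classifier, assuming PyTorch; the layer sizes and input dimension are arbitrary placeholders, not values from the cited works.

```python
import torch
import torch.nn as nn

# A minimal discriminator mapping an input to a probability in [0, 1],
# as in D: y -> [0, 1] above. Sizes are illustrative.
D = nn.Sequential(
    nn.Linear(784, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
    nn.Sigmoid(),          # squashes the logit into [0, 1]
)

x = torch.randn(16, 784)   # a batch standing in for real or generated images
p_real = D(x)              # shape (16, 1); each entry = P(input is real)
```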
“…In recent years, a lot of research has targeted this problem, and one result indicates that if, at the point where equilibrium is achieved, the eigenvalues of the Jacobian all have negative real parts, the training of a GAN converges locally with a small learning rate [27], [28]. It has been proved that the optimal solution to (1), with p_data(x) = q_G(x) and D*(x) = 0.5, is in fact a unique Nash equilibrium of the game [15]. So, theoretically, making the discriminator's output converge to 0.5 is key to finding the Nash equilibrium in GANs.…”
Section: B. Nash Equilibrium
Mentioning confidence: 99%
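The eigenvalue criterion in the statement above can be checked on a toy zero-sum game. The bilinear game V(θ, ψ) = θ·ψ used below is a standard illustrative example chosen for this sketch, not one taken from the cited papers: at the equilibrium, the Jacobian of the simultaneous gradient-descent/ascent vector field has purely imaginary eigenvalues, so the negative-real-part condition fails and plain updates cycle rather than converge.

```python
import numpy as np

# Toy zero-sum game V(theta, psi) = theta * psi.
# Simultaneous gradient descent/ascent follows the vector field
#   v(theta, psi) = (-dV/dtheta, +dV/dpsi) = (-psi, theta),
# whose Jacobian at the equilibrium (0, 0) is:
J = np.array([[0.0, -1.0],   # d(-psi)/dtheta,  d(-psi)/dpsi
              [1.0,  0.0]])  # d(theta)/dtheta, d(theta)/dpsi

eigvals = np.linalg.eigvals(J)
print(eigvals)       # [0.+1.j  0.-1.j] (order may vary)
print(eigvals.real)  # real parts are 0, not negative: the criterion
                     # above predicts no local convergence, and indeed
                     # plain GAN updates orbit the equilibrium here.
```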
“…Although the authors in [8] presented a few game-model GANs, they did not conduct a comprehensive survey of this field, and many new pieces of research were not covered. We hope that our survey will serve as a reference for interested readers.…”
[Fig. 2: The proposed taxonomy of the GAN advances by game theory — Modified Game Model: stochastic game [45], Stackelberg game [46], [47], bi-affine game [48]; Modified Learning Method: no-regret learning [10], [49], [50], fictitious play [27], federated learning [51], [52], reinforcement learning [4], [53]-[63]; Modified Architecture: multiple generators/one discriminator [46], [64]-[67], one generator/multiple discriminators [60], [68]-[72], multiple generators/multiple discriminators [51], [66], [73], one generator/one discriminator/one classifier [4], [74], one generator/one discriminator/one RL agent [58], [59], [75], [76].]
Section: GAN Applications
Mentioning confidence: 99%
“…By relating GANs to the two-player zero-sum game, Ge et al. in [27] design a training algorithm that simulates fictitious play on GANs and provide a theoretical convergence guarantee. They also show that, assuming the best response at each update of Fictitious GAN, the distribution of the mixture outputs from the generators converges to the data distribution.…”
Section: MD-GAN
Mentioning confidence: 99%
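Note that the guarantee described above concerns the mixture of the historical generators' outputs, not the single latest generator. A minimal sampling sketch, assuming frozen PyTorch generator checkpoints collected during training (the function and variable names are illustrative, not from the paper):

```python
import random
import torch

def sample_from_mixture(g_history, n_samples, noise_dim=64):
    """Draw samples from the uniform mixture of historical generators.

    `g_history` is assumed to hold frozen generator checkpoints saved
    during training. Fictitious GAN's convergence result concerns this
    mixture distribution rather than any single generator."""
    samples = []
    for _ in range(n_samples):
        G_k = random.choice(g_history)   # pick a past model uniformly
        z = torch.randn(1, noise_dim)    # latent noise
        with torch.no_grad():
            samples.append(G_k(z))
    return torch.cat(samples, dim=0)
```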