2020
DOI: 10.48550/arxiv.2006.02479
Preprint

Least $k$th-Order and Rényi Generative Adversarial Networks

Himesh Bhatia,
William Paul,
Fady Alajaji
et al.

Abstract: We propose a loss function for generative adversarial networks (GANs) using Rényi information measures with parameter α. More specifically, we formulate GAN's generator loss function in terms of Rényi cross-entropy functionals. We demonstrate that for any α, this generalized loss function preserves the equilibrium point satisfied by the original GAN loss based on the Jensen-Rényi divergence, a natural extension of the Jensen-Shannon divergence. We also prove that the Rényi-centric loss function reduces to the …
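The abstract's core idea, replacing the log-expectation (Shannon cross-entropy) terms of the GAN objective with Rényi cross-entropy functionals of order α, can be illustrated with a small numerical sketch. This is not the paper's exact formulation: `renyi_cross_entropy` implements the standard order-α definition and its α → 1 Shannon limit, while `renyi_generator_loss` and its sign conventions are illustrative assumptions.

```python
import numpy as np

def renyi_cross_entropy(p, q, alpha, eps=1e-12):
    """Renyi cross-entropy of order alpha between distributions p and q:
        H_alpha(p; q) = 1/(1 - alpha) * log(sum_x p(x) * q(x)**(alpha - 1)).
    Reduces to the Shannon cross-entropy -sum_x p(x) log q(x) as alpha -> 1.
    """
    p = np.asarray(p, dtype=float)
    q = np.clip(np.asarray(q, dtype=float), eps, 1.0)
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(q))                      # Shannon limit
    return np.log(np.sum(p * q ** (alpha - 1.0))) / (1.0 - alpha)

def renyi_generator_loss(d_real, d_fake, alpha):
    """Hypothetical Renyi-style generator objective (illustrative only).

    d_real: discriminator outputs D(x) on real samples, values in (0, 1).
    d_fake: discriminator outputs D(G(z)) on generated samples, values in (0, 1).
    With alpha -> 1 the two terms become -E[log D(x)] and -E[log(1 - D(G(z)))],
    i.e. the usual GAN log-loss terms up to sign convention.
    """
    w_real = np.full(len(d_real), 1.0 / len(d_real))       # uniform empirical weights
    w_fake = np.full(len(d_fake), 1.0 / len(d_fake))
    return (renyi_cross_entropy(w_real, d_real, alpha)
            + renyi_cross_entropy(w_fake, 1.0 - np.asarray(d_fake), alpha))
```

Calling `renyi_generator_loss` on batches of discriminator outputs for several values of α shows how the order parameter reweights the same samples, with the α = 1 case matching the ordinary log-loss terms.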


Cited by 3 publications (4 citation statements)
References 21 publications
“…For clarity, we illustrate the specific differences with InfoGAN in Appendix B. Other approaches employing information theoretic principles include Variational GAN (VGAN) [60], which uses an information bottleneck [61] to regularize the discriminator representations; with [62]–[64] extending to minimise divergences apart from the original JS divergence. In contrast to these works, our work employs the InfoMax principle to improve discriminator learning, and provides a clear connection to how this improves GAN training via the mitigation of catastrophic forgetting.…”
Section: Ablation Studies A) RKHS Dimensions (mentioning)
confidence: 99%
“…The parameter α, ubiquitous to all Rényi information measures, allows one to fine-tune the loss function to improve the quality of the GAN-generated output. This can be seen in [3, 5, 10], which used the Rényi differential cross-entropy and the Natural Rényi differential cross-entropy measures, respectively, to generalize the original GAN loss function (which is recovered as α → 1), resulting in both improved GAN system stability and performance for multiple image datasets. It is also shown in [5, 10] that the introduced Rényi-centric generalized loss function preserves the equilibrium point satisfied by the original GAN via the so-called Jensen-Rényi divergence [11], a natural extension of the Jensen–Shannon divergence [12] upon which the equilibrium result of [9] is established.…”
Section: Introduction (mentioning)
confidence: 99%
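For reference, the quantities named in this citation statement have the following commonly used forms. This is a sketch of standard definitions; the cited works may differ in constants or in the exact Jensen-Rényi construction, so the JRD expression below should be read as one natural extension rather than the definitive one.

```latex
% Renyi differential cross-entropy of order \alpha between densities p and q,
% with the Shannon differential cross-entropy recovered as \alpha -> 1:
\[
  H_\alpha(p;q) \;=\; \frac{1}{1-\alpha}\,\log \int p(x)\,q(x)^{\alpha-1}\,dx,
  \qquad
  \lim_{\alpha\to 1} H_\alpha(p;q) \;=\; -\int p(x)\log q(x)\,dx .
\]
% Jensen-Shannon divergence with mixture m = (p+q)/2, and one natural Renyi
% extension obtained by replacing the KL divergence with the Renyi divergence
% D_\alpha (this recovers JSD as \alpha -> 1):
\[
  \mathrm{JSD}(p\,\|\,q) \;=\; \tfrac12 D_{\mathrm{KL}}(p\,\|\,m) + \tfrac12 D_{\mathrm{KL}}(q\,\|\,m),
  \qquad
  \mathrm{JRD}_\alpha(p\,\|\,q) \;=\; \tfrac12 D_\alpha(p\,\|\,m) + \tfrac12 D_\alpha(q\,\|\,m),
\]
\[
  \text{where}\quad
  D_\alpha(p\,\|\,q) \;=\; \frac{1}{\alpha-1}\,\log \int p(x)^{\alpha} q(x)^{1-\alpha}\,dx .
\]
```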
“…This can be seen in [3, 5, 10], which used the Rényi differential cross-entropy and the Natural Rényi differential cross-entropy measures, respectively, to generalize the original GAN loss function (which is recovered as α → 1), resulting in both improved GAN system stability and performance for multiple image datasets. It is also shown in [5, 10] that the introduced Rényi-centric generalized loss function preserves the equilibrium point satisfied by the original GAN via the so-called Jensen-Rényi divergence [11], a natural extension of the Jensen–Shannon divergence [12] upon which the equilibrium result of [9] is established. Other GAN systems that utilize different generalized loss functions were recently developed and analysed in [13, 14, 15] (see also the references therein for prior work).…”
Section: Introduction (mentioning)
confidence: 99%
“…Our Rényi extension may provide a way to improve the performance of the DIB method in machine learning tasks [2] or other applications such as channel quantization [7] and relay transmission [9]. In addition, the use of Rényi entropy has generated interest in its own right in information theory, and it has played an important role in a variety of studies, including generalized source-coding cut-off rates [20], [21], quantization [22], encoding tasks [23], guessing [24], information combining [25], generative deep networks [26], etc. It is thus of interest to examine the role of Rényi entropy in bottleneck problems.…”
Section: Introduction (mentioning)
confidence: 99%
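Since this citation statement turns on the Rényi entropy itself, a minimal numerical sketch of its standard definition and limiting cases may help. These are textbook definitions, not code from the cited works.

```python
import numpy as np

def renyi_entropy(p, alpha, eps=1e-12):
    """Renyi entropy H_alpha(X) = 1/(1 - alpha) * log(sum_x p(x)**alpha).

    Limiting cases: alpha -> 1 gives the Shannon entropy, alpha = 0 gives
    the log of the support size, and alpha -> infinity gives the min-entropy
    -log(max_x p(x)).
    """
    p = np.asarray(p, dtype=float)
    p = p[p > eps]                         # drop zero-probability outcomes
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))      # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# A biased binary source: the entropy is non-increasing as alpha grows.
p = [0.9, 0.1]
for a in (0.5, 1.0, 2.0, 10.0):
    print(f"alpha={a:>4}: H_alpha = {renyi_entropy(p, a):.4f} nats")
```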