2022
DOI: 10.48550/arxiv.2205.06393
Preprint

$α$-GAN: Convergence and Estimation Guarantees

Abstract: We prove a two-way correspondence between the min-max optimization of general CPE loss function GANs and the minimization of associated f-divergences. We then focus on α-GAN, defined via the α-loss, which interpolates several GANs (Hellinger, vanilla, Total Variation) and corresponds to the minimization of the Arimoto divergence. We show that the Arimoto divergences induced by α-GAN equivalently converge for all α ∈ (0, ∞]. However, under restricted learning models and finite samples, we provide estimation …
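The α-loss that defines α-GAN is not spelled out in this snippet; the following is a minimal numerical sketch, assuming the standard form ℓ_α(p) = (α/(α−1))(1 − p^((α−1)/α)) for the probability p assigned to the true class, with the log-loss (vanilla GAN) recovered as α → 1 and the soft 0-1 loss (Total Variation) as α → ∞:

```python
import math

def alpha_loss(p: float, alpha: float) -> float:
    """alpha-loss of assigning probability p to the correct class.

    Assumed form: (alpha / (alpha - 1)) * (1 - p ** ((alpha - 1) / alpha)),
    with the limits alpha -> 1 (log-loss) and alpha -> inf (1 - p)
    handled explicitly.
    """
    if alpha == 1:                  # limit alpha -> 1: the log-loss
        return -math.log(p)
    if math.isinf(alpha):           # limit alpha -> inf: soft 0-1 loss
        return 1.0 - p
    return (alpha / (alpha - 1)) * (1.0 - p ** ((alpha - 1) / alpha))

# The family interpolates between the named losses as alpha varies.
print(alpha_loss(0.8, 1))             # log-loss value at p = 0.8
print(alpha_loss(0.8, float("inf")))  # soft 0-1 loss value at p = 0.8
```

Values for α slightly above 1 approach the log-loss continuously, which is the interpolation property the abstract refers to.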

Cited by 1 publication (1 citation statement)
References 17 publications
“…Note that (34) is exactly the same bound as that in [30, Equation (35)]. Hence, the remainder of the proof follows from the proof of [30, Theorem 3], where α = α_G.…”
Section: Appendix C, Proof of Theorem
confidence: 76%