2021
DOI: 10.1109/access.2021.3133762

Convergence and Optimality Analysis of Low-Dimensional Generative Adversarial Networks Using Error Function Integrals

Abstract: Due to their success at synthesising highly realistic images, many claims have been made about optimality and convergence in generative adversarial networks (GANs). But what of vanishing gradients, saturation, and other numerical problems noted by AI practitioners? Attempts to explain these phenomena have so far been based on purely empirical studies or differential equations, valid only in the limit. We take a fresh look at these questions using explicit, low-dimensional models. We revisit the well known opti…

Cited by 2 publications (1 citation statement)
References 27 publications
“…Integrals of this type arise in modelling certain adaptive machine learning systems such as generative adversarial networks [10]. They are reducible to explicit functions involving exponentials and error functions and integrals $I^{(n)}(a, b, m, s)$, for integers $n > 0$ and reals $a, s > 0$, of the form:…”
Section: A. Background and Motivation
Citation type: mentioning (confidence: 99%)
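The definition of $I^{(n)}(a, b, m, s)$ is cut off in the quoted statement, so the following is only a sketch of the kind of reduction being referred to, with illustrative parameter names not taken from the cited work. Already in the plain Gaussian case ($n = 0$) the integral collapses to an error function,

$$\int_{-\infty}^{b} e^{-a(x-m)^{2}}\,dx \;=\; \frac{1}{2}\sqrt{\frac{\pi}{a}}\left[1 + \operatorname{erf}\!\bigl(\sqrt{a}\,(b-m)\bigr)\right], \qquad a > 0,$$

and moments with integer $n > 0$ can be generated by differentiating under the integral sign with respect to $m$ and using linearity, which is why such integrals reduce to explicit combinations of exponentials and error functions.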