2020
DOI: 10.48550/arxiv.2011.12087
Preprint

A Convenient Infinite Dimensional Framework for Generative Adversarial Learning

Abstract: In recent years, generative adversarial networks (GANs) have demonstrated impressive experimental results, while only a few works foster statistical learning theory for GANs. In this work, we propose an infinite-dimensional theoretical framework for generative adversarial learning. Assuming the class of uniformly bounded, k-times α-Hölder differentiable (C^{k,α}) and uniformly positive densities, we show that the Rosenblatt transformation induces an optimal generator, which is realizable in the hyp…
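
The Rosenblatt transformation named in the abstract maps a density on [0,1]^d to the uniform distribution by successive conditional CDFs, so its inverse acts as a generator pushing uniform noise onto the target. A minimal numerical sketch for d = 2, assuming a uniformly positive density discretized on a grid (all function and variable names are illustrative, not from the paper):

```python
# Hedged sketch: sampling via the inverse Rosenblatt transformation in 2D.
# Assumes a uniformly positive target density tabulated on a regular grid
# over [0, 1]^2; names and grid resolution are illustrative choices.
import numpy as np

def inverse_rosenblatt_2d(u, pdf, grid):
    """Map u = (u1, u2) in [0, 1]^2 to a sample of the tabulated density:
    x1 = F1^{-1}(u1), then x2 = F_{2|1}^{-1}(u2 | x1)."""
    dx = grid[1] - grid[0]
    p1 = pdf.sum(axis=1) * dx                 # marginal density of x1
    F1 = np.cumsum(p1) * dx
    F1 /= F1[-1]                              # marginal CDF, normalized
    i = min(np.searchsorted(F1, u[0]), len(grid) - 1)
    p2 = pdf[i] / max(p1[i], 1e-12)           # conditional density p(x2 | x1)
    F2 = np.cumsum(p2) * dx
    F2 /= F2[-1]                              # conditional CDF, normalized
    j = min(np.searchsorted(F2, u[1]), len(grid) - 1)
    return np.array([grid[i], grid[j]])

# Usage: push uniform noise through the inverse transformation.
grid = np.linspace(0.0, 1.0, 256)
X1, X2 = np.meshgrid(grid, grid, indexing="ij")
pdf = 1.0 + 0.5 * np.sin(2 * np.pi * X1) * np.sin(2 * np.pi * X2)  # uniformly positive
pdf /= pdf.sum() * (grid[1] - grid[0]) ** 2                        # integrates to 1

rng = np.random.default_rng(0)
samples = np.array([inverse_rosenblatt_2d(rng.uniform(size=2), pdf, grid)
                    for _ in range(1000)])
```

Uniform positivity of the density, as assumed in the abstract, makes every conditional CDF strictly increasing, which is what keeps the inverse well defined.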

Cited by 1 publication (7 citation statements)
References 23 publications

Citing publication: Generative Modeling of Turbulence (Drygala, Winhart, di Mare et al., 2021, preprint)
“…The feedback of D to φ is transported backwards by backpropagation [58] through the concatenated mapping D ∘ φ in order to train the weights of the neural network φ. At the same time, the universal approximation property of (deep) neural networks guarantees that any mappings φ and D can be represented with a given precision, provided the architecture of the networks is sufficiently wide and deep; see [23,5,56,64,30,76,65,32] for qualitative and quantitative results.…”
Section: Mathematical Foundations of Generative Learning for Ergodic ...
Confidence: 99%
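
The training step this quote describes, transporting the discriminator's feedback backwards through the concatenated mapping D ∘ φ, is the standard GAN generator update. A minimal PyTorch sketch under that reading (architecture sizes, dimensions, and optimizer settings are illustrative assumptions, not the cited paper's configuration):

```python
# Hedged sketch: one generator update of standard GAN training. The
# discriminator's feedback is backpropagated through the concatenated
# mapping D ∘ φ; all sizes and settings here are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
phi = nn.Sequential(  # generator φ: latent noise -> sample space
    nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(    # discriminator D: sample space -> (0, 1)
    nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_phi = torch.optim.Adam(phi.parameters(), lr=1e-4)

z = torch.randn(128, latent_dim)                      # latent noise z ~ λ
loss_phi = torch.log(1.0 - D(phi(z)) + 1e-8).mean()   # generator term of the minimax loss
opt_phi.zero_grad()
loss_phi.backward()   # gradients flow backwards through D ∘ φ into φ's weights
opt_phi.step()        # only φ's parameters are updated; D gets its own step
```

In a full training loop, D would be updated on real and generated batches in an alternating step; that alternating signal is the "feedback of D to φ" the quote refers to.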
“…Apparently, in the limit T → ∞, by ergodicity (3) the first term converges to the first term in (6), whereas the second term converges almost surely by the law of large numbers. Therefore, the generator φ that is learned from the empirical loss function (13) for large T will approximately solve the minimax problem (5), which by (7) relates to the Jensen-Shannon distance between the estimated measure (φ_T)_*λ and the invariant measure µ of the ergodic system. In particular, we obtain the following: Theorem 1.…”
Section: Learning Theory for Deterministic Ergodic Systems
Confidence: 99%
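
For orientation, the argument presumably concerns a GAN objective in which the expectation over the invariant measure µ is replaced by a time average along one ergodic trajectory; the following LaTeX sketch shows that generic structure only, since the cited paper's equations (3)-(13) are not reproduced here:

```latex
% Hedged sketch, not the paper's numbered equations: the population GAN
% objective over the invariant measure \mu, and an empirical version in
% which the \mu-expectation is replaced by a time average along an
% ergodic trajectory X_t.
\[
  \min_{\varphi}\,\max_{D}\;
    \mathbb{E}_{x \sim \mu}\bigl[\log D(x)\bigr]
    + \mathbb{E}_{z \sim \lambda}\bigl[\log\bigl(1 - D(\varphi(z))\bigr)\bigr]
\]
\[
  \widehat{\mathcal{L}}_T(\varphi, D)
    = \frac{1}{T}\int_0^T \log D(X_t)\,\mathrm{d}t
    + \frac{1}{N}\sum_{n=1}^{N}\log\bigl(1 - D(\varphi(z_n))\bigr),
    \qquad z_n \sim \lambda \text{ i.i.d.}
\]
```

With this structure, Birkhoff's ergodic theorem sends the time average to the µ-expectation as T → ∞, and the law of large numbers handles the Monte Carlo term over λ, matching the two convergences the quote describes.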