2021
DOI: 10.1007/s42985-021-00115-6

High-dimensional distribution generation through deep neural networks

Abstract: We show that every d-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a 1-dimensional uniform input distribution. What is more, this is possible without incurring a cost, in terms of approximation error measured in Wasserstein distance, relative to generating the d-dimensional target distribution from d independent random variables. This is enabled by a vast generalization of the space-filling approach discovered in Bailey and Telgarsky (in: Bengio (eds) …
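For intuition on the space-filling idea, a numerical sketch can help: de-interleaving the binary digits of a single Uniform[0,1) seed yields d coordinates that are each again approximately uniform, and digit-extraction maps of this sawtooth type are exactly the kind of functions deep ReLU networks compute. The sketch below is illustrative only and is not the paper's construction: the helper name `split_digits` is invented here, and plain floating-point arithmetic stands in for the ReLU network.

```python
import numpy as np

def split_digits(u, d=2, bits=16):
    """De-interleave the binary expansion of u in [0,1) into d coordinates:
    digit i goes to coordinate i % d. Since the digits of a uniform variable
    are i.i.d. fair bits, each output coordinate is again ~ Uniform[0,1)."""
    coords = np.zeros(d)
    x = u
    for i in range(d * bits):
        x = 2.0 * x
        b = np.floor(x)          # next binary digit of u
        x -= b
        coords[i % d] += b * 2.0 ** (-(i // d) - 1)
    return coords

# One scalar uniform seed is transported to a point in [0,1)^d whose
# distribution is approximately uniform on the d-dimensional cube.
rng = np.random.default_rng(0)
samples = np.array([split_digits(rng.uniform()) for _ in range(10_000)])
print(samples.mean(axis=0))  # both coordinates average to about 0.5
```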

Cited by 6 publications (19 citation statements) | References 21 publications
“…In contrast to the vast amount of work on function approximation by neural networks, there are only a few papers estimating the generator approximation error [Lee et al., 2017, Bailey and Telgarsky, 2018, Perekrestenko et al., 2020, Chen et al., 2020, Yang et al., 2021]. The existing studies often require that the source distribution and the target distribution have the same ambient dimension [Lu, 2020, Chen et al., 2020] or that the distributions have some special form [Lee et al., 2017, Bailey and Telgarsky, 2018, Perekrestenko et al., 2020]. However, these requirements are not satisfied in practical applications.…”
Section: Discussion and Related Work
confidence: 99%
“…Given a uniform seed, an approximation error ε, and a target histogram distribution P with tile parameter n (see Definition 2), we construct a neural net which approximates P within ε. Compared to the construction in [11] (see Table I), our construction uses strictly fewer neurons in various regimes of ε and n. For example, in the regime where ε is fixed and n → ∞, we achieve a network size of O(n^{3/2}), improving upon the Θ(n^2) of [11]. In other regimes, our construction uses at most the same number of neurons as that of [11].…”
Section: Introduction
confidence: 95%
“…Building upon [10], [11] examined the task of approximating generalisations of the uniform distribution, referred to as histogram distributions (see Definition 2). The authors in [11] constructed neural networks which are able to approximate n-tiled histogram distributions using uniform seeds up to a given accuracy. However, [11] does not comment on whether their construction is optimal with respect to the size of the network.…”
Section: Introduction
confidence: 99%
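To make the quoted notion concrete: a 1-dimensional histogram distribution with n tiles has a piecewise-linear CDF, so its inverse CDF is piecewise linear as well and is exactly representable by a ReLU network; pushing a uniform seed through it yields exact samples. The following is a minimal sketch of that standard inverse-CDF view, with an invented function name and arbitrary 4-tile weights; it is not the construction from [11].

```python
import numpy as np

def histogram_inverse_cdf(u, weights):
    """Inverse CDF of an n-tile histogram distribution on [0,1].
    Tile i = [i/n, (i+1)/n) carries probability weights[i] (all assumed > 0).
    The map is piecewise linear, hence ReLU-representable."""
    n = len(weights)
    cdf = np.concatenate([[0.0], np.cumsum(weights)])  # CDF breakpoints
    i = np.searchsorted(cdf, u, side="right") - 1       # tile containing u
    i = np.clip(i, 0, n - 1)
    # linear interpolation within tile i
    return (i + (u - cdf[i]) / weights[i]) / n

weights = np.array([0.1, 0.4, 0.3, 0.2])  # illustrative histogram weights
rng = np.random.default_rng(1)
x = histogram_inverse_cdf(rng.uniform(size=100_000), weights)
print(np.histogram(x, bins=4, range=(0.0, 1.0))[0] / 100_000)  # ≈ weights
```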