2018
DOI: 10.48550/arxiv.1804.00779
Preprint

Neural Autoregressive Flows

Abstract: Normalizing flows and autoregressive models have been successfully combined to produce state-of-the-art results in density estimation, via Masked Autoregressive Flows (MAF) (Papamakarios et al., 2017), and to accelerate state-of-the-art WaveNet-based speech synthesis to 20x faster than real-time (Oord et al., 2017), via Inverse Autoregressive Flows (IAF) (Kingma et al., 2016). We unify and generalize these approaches, replacing the (conditionally) affine univariate transformations of MAF/IAF with a more general…
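The generalization described in the abstract can be sketched in a few lines: MAF/IAF transform each dimension with a conditionally affine map, and NAF replaces that map with a learned strictly monotonic network. The following is a minimal, hypothetical PyTorch-style sketch (the parameters mu, log_sigma, and the weights below would in practice be produced by an autoregressive conditioner from x_<t; none of this is the authors' code):

import torch
import torch.nn as nn

def affine_step(x_t, mu, log_sigma):
    # MAF/IAF-style univariate transformation:
    # y_t = mu(x_<t) + exp(log_sigma(x_<t)) * x_t
    return mu + torch.exp(log_sigma) * x_t

class MonotonicTransformer(nn.Module):
    """NAF-style replacement: a small MLP whose weights are constrained
    to be positive (via exp), so x_t -> y_t is strictly increasing and
    therefore invertible. Illustrative sketch only; the paper itself
    proposes specific architectures such as deep sigmoidal flows."""

    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden))

    def forward(self, x_t):
        # Positive weights + increasing nonlinearity => monotonic map.
        h = torch.sigmoid(x_t @ self.w1.exp().t() + self.b1)
        return h @ self.w2.exp().t()

Because every weight is positive and the sigmoid is increasing, the composed map is strictly monotonic in x_t, which is what lets it stand in for the affine transformation while keeping the flow invertible.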

Cited by 63 publications (56 citation statements) · References 24 publications
“…Learning the Darmois construction. To learn the Darmois construction from data, we use normalising flows, see [35,65]. Since Darmois solutions have triangular Jacobian (Remark 2.4), we use an architecture based on residual flows [16] which we constrain such that the Jacobian of the full model is triangular.…”
Section: Numerical Evaluation of the C_IMA Contrast for Spurious Nonl…
confidence: 99%
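A minimal sketch of the kind of constraint this statement describes, under stated assumptions (the class and its name are hypothetical, not the cited work's residual-flow code): masking a linear layer's weights to be lower-triangular makes its Jacobian lower-triangular, and stacking such layers with elementwise nonlinearities keeps the full model's Jacobian triangular.

import torch
import torch.nn as nn

class TriangularLinear(nn.Module):
    """Linear layer with a lower-triangular weight mask, so output i
    depends only on inputs 1..i and the Jacobian is lower-triangular.
    Compositions of such layers (with elementwise nonlinearities)
    preserve the triangular structure. Illustrative sketch only."""

    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.bias = nn.Parameter(torch.zeros(dim))
        # Fixed lower-triangular mask; a buffer, not a trained parameter.
        self.register_buffer("mask", torch.tril(torch.ones(dim, dim)))

    def forward(self, x):
        return x @ (self.weight * self.mask).t() + self.bias

Note that inserting permutation or shuffling layers between such blocks would destroy the triangular structure of the overall Jacobian, which is exactly the caveat raised in the next statement.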
“…We remark that, while the possibility of using normalising flows to "learn" the Darmois construction is mentioned in [35,65], where a similar construction is mentioned in a theoretical argument to prove "universal approximation capacity for densities" for normalising flow models with triangular Jacobian, it has to the best of our knowledge not been tested empirically, since autoregressive modules with triangular Jacobian are typically used in combination with permutation, shuffling or linear layers which overall lead to architectures with a non-triangular Jacobian.…”
Section: E2 How to Implement the Darmois Construction
confidence: 99%
“…Examples of neural architectures for normalizing flow are IAF [39], MAF [54], NICE [19], Real NVP [20], Glow [38], NAF [29] and BNAF [18]. We adapt Block Neural Autoregressive Flow (BNAF) in our implementation.…”
Section: Normalizing Flow
confidence: 99%
“…Flow-based models [5,6,19,33,10,18,2,12,9,32,24] aim to learn a bijective mapping between the target space and the latent space. For a high-dimensional random variable (e.g., an image) x with distribution x ∼ p(x) and a latent variable z with simple tractable distribution z ∼ p(z) (e.g., multivariate Gaussian distribution), flow models generally use an invertible neural network f θ to transform x to z: z = f θ (x).…”
Section: Preliminaries
confidence: 99%
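For completeness, the density computation that makes such bijective mappings useful is the standard change-of-variables identity (a textbook formula, not quoted from the citing paper):

\log p(x) = \log p\!\left(f_\theta(x)\right) + \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|

For autoregressive or triangular f_\theta, as in MAF, IAF, and NAF, the Jacobian is triangular and the determinant reduces to the product of its diagonal entries, which is what makes exact likelihood evaluation tractable.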