Flow-based models [5,6,19,33,10,18,2,12,9,32,24] aim to learn a bijective mapping between the target space and the latent space. For a high-dimensional random variable x (e.g., an image) with distribution x ∼ p(x) and a latent variable z with a simple, tractable distribution z ∼ p(z) (e.g., a multivariate Gaussian), flow models generally use an invertible neural network f_θ to transform x into z: z = f_θ(x).
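To make the bijective mapping z = f_θ(x) concrete, below is a minimal sketch of one common way to build such an invertible network: an affine coupling layer in the style of coupling-based flows. The class name `AffineCoupling`, the hidden-layer size, and the toy dimensions are illustrative assumptions, not details taken from the cited works; the sketch assumes PyTorch is available.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Illustrative affine coupling layer (assumed design, not from the cited papers).

    The input x is split into (x1, x2); x2 is transformed element-wise by a scale
    and shift predicted from x1, so the whole map stays exactly invertible and the
    Jacobian log-determinant is cheap to compute.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        # Small network that predicts log-scale and shift for x2 from x1.
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)           # bound the scale for numerical stability
        z2 = x2 * torch.exp(log_s) + t      # element-wise affine transform of x2
        z = torch.cat([x1, z2], dim=1)      # x1 passes through unchanged
        log_det = log_s.sum(dim=1)          # log |det J| of this layer
        return z, log_det

    def inverse(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        log_s, t = self.net(z1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (z2 - t) * torch.exp(-log_s)   # exact inverse of the forward transform
        return torch.cat([z1, x2], dim=1)

if __name__ == "__main__":
    x = torch.randn(8, 4)                   # toy batch standing in for images
    layer = AffineCoupling(dim=4)
    z, log_det = layer(x)                   # z = f_θ(x)
    x_rec = layer.inverse(z)                # x = f_θ^{-1}(z)
    print(torch.allclose(x, x_rec, atol=1e-5))  # True: the map is bijective
```

In practice, flow models stack many such layers (with permutations between them) so that every dimension gets transformed, and the per-layer log-determinants sum to give the change-of-variables term used for exact likelihood training.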