2017
DOI: 10.48550/arxiv.1703.07027
Preprint

Nonparametric Variational Auto-encoders for Hierarchical Representation Learning

Cited by 15 publications (12 citation statements)
References 3 publications
Citing publications: 2017 to 2023.
“…While AAE allows the prior to be arbitrary, how to select a prior that can best characterize the data distribution remains an open issue. Goyal et al. [8] make an attempt to learn a non-parametric prior based on the nested Chinese restaurant process for VAEs. Learning is achieved by fitting it to the aggregated posterior distribution, which amounts to maximization of the ELBO.…”
Section: Related Work (mentioning)
confidence: 99%
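For context on why fitting the prior to the aggregated posterior amounts to maximizing the ELBO, a standard decomposition (notation assumed here, not taken from the excerpt) is

\[
\mathbb{E}_{p_{\mathrm{data}}(x)}\!\left[\mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)\right]
= \mathrm{KL}\!\left(q_\phi(z)\,\|\,p(z)\right) + I_q(x;z),
\qquad q_\phi(z) = \mathbb{E}_{p_{\mathrm{data}}(x)}\!\left[q_\phi(z \mid x)\right],
\]

where \(q_\phi(z)\) is the aggregated posterior and the mutual-information term \(I_q(x;z)\) does not depend on \(p(z)\). The prior therefore enters the ELBO only through \(-\mathrm{KL}(q_\phi(z)\,\|\,p(z))\), so maximizing the ELBO over the prior is exactly fitting \(p(z)\) to the aggregated posterior.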
“…Inspired by these learned priors [8,9] for VAE, we propose in this paper the notion of code generators to learn a proper prior from data for AAE. The relations of our work with these prior arts are illustrated in Fig.…”
Section: Related Work (mentioning)
confidence: 99%
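Because this excerpt only names the idea, the following is a minimal sketch of what a code generator for an AAE prior could look like; the module names, layer sizes, and the Gaussian base noise are illustrative assumptions, not the cited paper's implementation.

import torch
import torch.nn as nn

latent_dim, noise_dim = 8, 16

# Hypothetical "code generator": maps simple base noise u ~ N(0, I)
# to latent codes z = G(u), implicitly defining a learned prior.
code_generator = nn.Sequential(
    nn.Linear(noise_dim, 64), nn.ReLU(),
    nn.Linear(64, latent_dim),
)

# AAE-style critic on the code space: its logit scores whether a code
# came from the learned prior or from the encoder's posterior.
discriminator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

u = torch.randn(32, noise_dim)      # base noise
z_prior = code_generator(u)         # samples from the learned prior
logits = discriminator(z_prior)     # fed to the adversarial loss
print(z_prior.shape, logits.shape)  # torch.Size([32, 8]) torch.Size([32, 1])

In full training, the discriminator would also score codes produced by the encoder, with the encoder and code generator updated adversarially; those steps are omitted here.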
“…Several works have studied more complex prior distributions, including learnable ones [83][84][85][86][87]. The objective eq.…”
mentioning
confidence: 99%