2019
DOI: 10.48550/arxiv.1911.10885
Preprint

Improving VAE generations of multimodal data through data-dependent conditional priors

Cited by 2 publications (2 citation statements)
References 0 publications

“…Seeking an optimal exploitation of the entire code space C, fixed priors in the form of diffuse uniforms are chosen here in agreement with the relevant literature [43]. Following the notation in [37], these represent the maximum entropy configuration for the discrete case…”
Section: A Probabilistic Discretization of the Latents
confidence: 99%
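
The statement above concerns fixing a diffuse uniform prior over a discrete code space. A minimal sketch of how that choice shows up in the KL term of a discrete-latent VAE is given below; the code-space size K, the encoder logits, and all variable names are illustrative assumptions, not taken from the preprint or the citing paper.

import math
import torch
import torch.nn.functional as F

# Sketch under assumed names/sizes: with a fixed uniform prior over a discrete
# code space of size K, the KL term of a categorical posterior q(z|x) reduces
# to log K minus the posterior entropy, so it vanishes exactly when q(z|x) is
# itself uniform -- the maximum-entropy configuration the statement refers to.
K = 64                                 # hypothetical code-space size |C|
logits = torch.randn(8, K)             # hypothetical encoder outputs for a batch of 8
log_q = F.log_softmax(logits, dim=-1)  # categorical posterior q(z|x)
q = log_q.exp()

entropy = -(q * log_q).sum(dim=-1)     # H[q(z|x)] per example
kl_to_uniform = math.log(K) - entropy  # KL(q(z|x) || Uniform(K))
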
“…Some models have considered them to be independent (Dupont (2018); Kim et al (2020)); hence, they cannot extract variations exclusive to particular classes. Some other models have considered all continuous variations to be class-dependent (Jiang et al (2017); Lavda et al (2019); Gao et al (2019)) and so are unable to efficiently extract variations shared between different classes of data. Note that these models can learn shared factors of variation separately for each class using a part of the dataset, which also wastes the model's computation power.…”
Section: Related Work
confidence: 99%
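
The trade-off described in this citation statement, class-dependent continuous factors versus factors shared across all classes, is the setting that data-dependent conditional priors target. The sketch below shows one common way a class-conditional Gaussian prior can enter a VAE's KL term; the module name, the embedding-based parameterisation, and the dimensions are assumptions for illustration, not the authors' published architecture.

import torch
import torch.nn as nn

class ClassConditionalPrior(nn.Module):
    # Hypothetical module: each class label indexes its own prior mean and
    # log-variance, instead of every class sharing one standard-normal prior.
    def __init__(self, num_classes: int, latent_dim: int):
        super().__init__()
        self.mu = nn.Embedding(num_classes, latent_dim)       # per-class prior mean
        self.log_var = nn.Embedding(num_classes, latent_dim)  # per-class prior log-variance

    def kl(self, q_mu, q_log_var, y):
        # KL( N(q_mu, exp(q_log_var)) || N(p_mu, exp(p_log_var)) ), summed over latent dims
        p_mu, p_log_var = self.mu(y), self.log_var(y)
        return 0.5 * (
            p_log_var - q_log_var
            + (q_log_var.exp() + (q_mu - p_mu) ** 2) / p_log_var.exp()
            - 1.0
        ).sum(dim=-1)

# Example usage with hypothetical sizes:
prior = ClassConditionalPrior(num_classes=10, latent_dim=16)
y = torch.randint(0, 10, (8,))
kl = prior.kl(torch.randn(8, 16), torch.zeros(8, 16), y)

A shared factor of variation would still have to be represented once per class under such a prior, which is the computational waste the quoted passage points to.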