Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1442

Neural Gaussian Copula for Variational Autoencoder

Abstract: Variational language models seek to estimate the posterior of latent variables with an approximated variational posterior. The model often assumes the variational posterior to be factorized even when the true posterior is not. Under this assumption, the learned variational posterior cannot capture the dependency relationships over latent variables. We argue that this would cause posterior collapse, a typical training problem observed in variational language models. We propose Gaussian Copula V…
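The abstract's key claim is that a factorized Gaussian posterior ignores dependencies among latent dimensions, and that a Gaussian copula can restore them. Below is a minimal sketch of a copula-based reparameterized sampling step, written only to illustrate the idea; the function name, tensor shapes, and the use of a Cholesky-factored correlation matrix are assumptions of this sketch, not the paper's actual implementation.

```python
import torch

def gaussian_copula_sample(mu, log_sigma, chol_corr):
    """Sample z whose marginals are N(mu_i, sigma_i^2), coupled by a Gaussian copula.

    mu, log_sigma : (batch, d) marginal parameters from the encoder
    chol_corr     : (batch, d, d) lower-triangular factor L of the copula
                    correlation matrix R = L @ L.T; rows of L are assumed
                    normalized so that R has a unit diagonal.
    (All names and shapes here are illustrative, not from the paper.)
    """
    eps = torch.randn_like(mu)                                 # eps ~ N(0, I)
    v = torch.bmm(chol_corr, eps.unsqueeze(-1)).squeeze(-1)    # v ~ N(0, R)
    normal = torch.distributions.Normal(0.0, 1.0)
    u = normal.cdf(v).clamp(1e-6, 1 - 1e-6)                    # dependent uniforms (the copula)
    return mu + log_sigma.exp() * normal.icdf(u)               # back through Gaussian marginals
```

Every step (erf-based CDF, erfinv-based inverse CDF, matrix multiply) is differentiable, so the reparameterization trick still applies. With Gaussian marginals this construction is equivalent to a full-covariance Gaussian; the copula formulation becomes strictly more expressive once non-Gaussian marginals are plugged in.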

Cited by 7 publications (2 citation statements). References 27 publications.
“…In [25], an experiment on single-image classification was performed using a classifier based on the Gaussian copula. Moreover, several works have combined copulas with neural networks [26]-[29]. Considering the analysis of SITS data, a few works already exist that exploit copulas for such data.…”
Section: Introduction (mentioning)
Confidence: 99%
“…The challenge of information underrepresentation refers to the limited expressiveness of the latent space z. As shown on the left of Figure 1, current VAEs build a single latent variable z = z_T from the last hidden state of an LSTM encoder [5], [6], [11], [12]. However, this is generally insufficient to summarize the input sentence [13].…”
Section: Introduction (mentioning)
Confidence: 99%
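For concreteness, here is a minimal sketch of the encoder design this excerpt critiques, where the whole sentence is compressed into a single latent vector computed from the LSTM's final hidden state. The class name, hyperparameters, and layer choices are illustrative assumptions, not code from any of the cited papers.

```python
import torch
import torch.nn as nn

class LastStateVAEEncoder(nn.Module):
    """Common VAE encoder design: one latent z derived from the last LSTM state."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        _, (h_T, _) = self.lstm(self.embed(tokens))  # h_T: (1, batch, hidden_dim)
        h_T = h_T.squeeze(0)                         # keep only the final state
        mu, logvar = self.to_mu(h_T), self.to_logvar(h_T)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterization
        return z, mu, logvar
```

Because every token's information must pass through the single bottleneck h_T, a strong autoregressive decoder can often reconstruct text while ignoring z, which is one common account of the posterior collapse discussed in the abstract above.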