Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume 2021
DOI: 10.18653/v1/2021.eacl-main.32

Polarized-VAE: Proximity Based Disentangled Representation Learning for Text Generation

Abstract: Learning disentangled representations of real-world data is a challenging open problem. Most previous methods have focused on either supervised approaches, which use attribute labels, or unsupervised approaches, which manipulate the factorization in the latent space of models such as the variational autoencoder (VAE) by training with task-specific losses. In this work, we propose polarized-VAE, an approach that disentangles select attributes in the latent space based on proximity measures reflecting the similarity …
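The abstract is truncated before the objective is defined, so the paper's actual loss is not shown here. As a rough, non-authoritative illustration of what a "proximity-based" disentanglement term could look like, the following is a minimal PyTorch sketch of a margin-based pairwise penalty on one attribute's latent subspace. The function name, the `margin` parameter, and the pairwise formulation are assumptions for illustration, not the paper's definition.

```python
# Hedged sketch: one plausible reading of a proximity-based
# disentanglement term. The paper's exact loss is not shown in the
# truncated abstract; `margin` and this pairwise formulation are
# illustrative assumptions, not the authors' objective.
import torch
import torch.nn.functional as F

def proximity_loss(z_attr, same_attr, margin=1.0):
    """Pull latent codes of attribute-similar pairs together and push
    dissimilar pairs at least `margin` apart in one attribute subspace.

    z_attr:    (batch, dim) latent codes for one attribute space
    same_attr: (batch, batch) boolean matrix, True where a pair is
               similar under the chosen proximity measure
    """
    dist = torch.cdist(z_attr, z_attr)  # pairwise Euclidean distances
    # Attract similar pairs (mean distance over similar pairs).
    pull = (dist * same_attr).sum() / same_attr.sum().clamp(min=1)
    # Repel dissimilar pairs that fall inside the margin.
    push = (F.relu(margin - dist) * (~same_attr)).sum() / (~same_attr).sum().clamp(min=1)
    return pull + push
```

A term like this would typically be added to the standard VAE objective, one per attribute subspace being disentangled.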

Cited by 12 publications (16 citation statements) | References 14 publications
“…We build on VAEs [29, 35], a latent variable modeling framework shown to work well for learning latent representations (also called encodings/embeddings) [20, 24, 57, 14, 53, 8, 45, 2] and capturing the generative process [36, 53, 46, 54]. VAEs [29, 35] introduce a latent variable z, an encoder q_φ, a decoder p_θ, and a prior distribution p on z; φ and θ are the parameters of q and p, respectively, often instantiated with neural networks.…”
Section: Variational Autoencoders (mentioning, confidence: 99%)
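For reference, the components named in this excerpt (encoder q_φ, decoder p_θ, prior p(z)) combine into the standard VAE evidence lower bound, maximized during training:

```latex
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)
```

The first term rewards reconstruction of x from the latent code; the KL term keeps the approximate posterior close to the prior.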
“…For example, Hu et al. (2017) generate text with specified sentiments, whereas Li et al. (2018b) and Wang et al. (2019a) try to transfer the sentiments or styles of the source sentences. The other track, to which our research belongs, focuses on making generated text follow a particular style or structure (Niu et al., 2017; Ficler and Goldberg, 2017; Fu et al., 2018; Iyyer et al., 2018; Li et al., 2019b; Balasubramanian et al., 2020). For instance, Niu et al. (2017) constrain the output styles in the neural machine translation task and impose a length limitation on summarization.…”
Section: Related Work (mentioning, confidence: 99%)
“…Based on the constraint source, syntactically controlled text generation models can be further divided into three groups. The first group (Chen et al., 2019b; Bao et al., 2019; Balasubramanian et al., 2020) takes sentences as syntactic exemplars. They attempt to disentangle the semantic and syntactic representations into different VAE (Kingma and Welling, 2014) latent spaces during training, and then use the exemplar to assign a prior distribution to the syntactic latent space at the inference stage.…”
Section: Related Work (mentioning, confidence: 99%)
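The quoted description (disentangled semantic/syntactic spaces at training time, an exemplar supplying the syntactic prior at inference time) can be made concrete with a short sketch. This is a hedged illustration assuming two Gaussian encoders and a decoder over concatenated codes; `sem_encoder`, `syn_encoder`, `decoder`, and the reparameterized sample are illustrative names, not any cited model's actual API.

```python
# Hedged sketch of exemplar-guided generation as described above:
# two encoders split a sentence into semantic and syntactic latent
# codes; at inference, the syntactic exemplar supplies the prior
# over the syntactic space. All module names are assumptions.
import torch

@torch.no_grad()
def generate_with_exemplar(sem_encoder, syn_encoder, decoder,
                           source_ids, exemplar_ids):
    # Semantic code comes from the source sentence.
    mu_sem, _ = sem_encoder(source_ids)
    # The exemplar's posterior parameters are reused as the prior
    # over syntax; sample from it via reparameterization.
    mu_syn, logvar_syn = syn_encoder(exemplar_ids)
    z_syn = mu_syn + torch.randn_like(mu_syn) * (0.5 * logvar_syn).exp()
    # Decode from the combined latent: source meaning, exemplar syntax.
    return decoder(torch.cat([mu_sem, z_syn], dim=-1))
```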
“…Still, no previous work has tested whether negation, uncertainty, and content can be disentangled, as linguistic theory suggests, although previous works have disentangled attributes such as syntax, semantics, and style (Balasubramanian et al., 2021; John et al., 2019; Cheng et al., 2020b; Bao et al., 2019; Hu et al., 2017; Colombo et al., 2021). To fill this gap, we aim to answer the following research questions:…”
Section: Introduction (mentioning, confidence: 99%)