Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1138
Learning Sentiment Memories for Sentiment Modification without Parallel Data

Abstract: The task of sentiment modification requires reversing the sentiment of the input while preserving the sentiment-independent content. However, aligned sentences with the same content but different sentiments are usually unavailable. Due to the lack of such parallel data, it is hard to extract sentiment-independent content and reverse the sentiment in an unsupervised way. Previous work usually cannot reconcile sentiment transformation and content preservation. In this paper, motivated by the fact that the non-emotional …
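To make the task concrete, here is a pair of invented input/output examples; they are not drawn from the paper or its datasets, and serve only to illustrate what sentiment modification aims to do.

```python
# Hypothetical input/output pairs for sentiment modification (invented for
# illustration). The sentiment flips while the sentiment-independent content
# ("the staff", "the pizza") is preserved; crucially, no such aligned pairs
# are available at training time, which is what makes the task unsupervised.
examples = [
    ("the staff was very friendly", "the staff was very rude"),  # pos -> neg
    ("the pizza tasted awful", "the pizza tasted delicious"),    # neg -> pos
]
```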

Cited by 52 publications (34 citation statements) · References 17 publications
“…Following previous work (Prabhumoye et al., 2018; Zhang et al., 2018a), we employ BLEU score (Papineni et al., 2002) and style accuracy as the automatic evaluation metrics to measure the degree of content preservation and the degree of style change. BLEU calculates the n-gram overlap between the generated sentence and the references, and thus can be used to measure the preservation of text content.…”
Section: Automatic Evaluation Metrics
confidence: 99%
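As a reference point, a minimal Python sketch of these two metrics follows. It assumes NLTK is installed for BLEU; `style_classifier` stands in for a hypothetical pretrained sentiment classifier and is not something specified by the cited work.

```python
# Minimal sketch of the two automatic metrics, assuming NLTK is available.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def content_preservation_bleu(hypothesis, references):
    """BLEU: n-gram overlap between the generated sentence and the references."""
    smooth = SmoothingFunction().method1  # avoids zero scores on short outputs
    return sentence_bleu(
        [ref.split() for ref in references],  # tokenized reference sentences
        hypothesis.split(),                   # tokenized generated sentence
        smoothing_function=smooth,
    )

def style_accuracy(outputs, target_labels, style_classifier):
    """Fraction of outputs whose predicted sentiment matches the target label."""
    predictions = [style_classifier(sentence) for sentence in outputs]
    return sum(p == t for p, t in zip(predictions, target_labels)) / len(outputs)
```

In practice the style classifier is trained separately on the two sentiment corpora; any reasonably accurate text classifier serves the purpose.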
“…Hidden vector approaches represent content as hidden vectors, e.g., Hu et al. (2017) adversarially incorporate a VAE and a style classifier; Shen et al. (2017) propose a cross-aligned AE that adversarially aligns the hidden states of the decoder; Fu et al. (2018) design a multi-decoder model and a style-embedding model for better style representations; use language models as style discriminators; John et al. (2018) utilize bag-of-words prediction for better disentanglement of style and content. Deletion approaches represent content as the input sentence with stylized words deleted, e.g., delete stylized n-grams based on corpus-level statistics and stylize the sentence based on similar, retrieved sentences; jointly train a neutralization module and a stylization module with reinforcement learning; Zhang et al. (2018a) facilitate the stylization step with a learned sentiment memory. As far as we know, there are two works that avoid disentangled representations.…”
Section: Related Work
confidence: 99%
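The corpus-level statistics used by the deletion approaches can be illustrated with a smoothed frequency-ratio (salience) score: an n-gram counts as stylized when it appears far more often in one sentiment corpus than in the other. The threshold `gamma` and smoothing constant `lam` below are illustrative choices, not values taken from the cited papers.

```python
# Sketch of salience-based detection of stylized n-grams, assuming two
# unaligned corpora of positive and negative sentences. gamma and lam are
# illustrative hyperparameters, not values from the cited papers.
from collections import Counter

def ngram_counts(corpus, n):
    """Count all n-grams in a corpus of whitespace-tokenized sentences."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def stylized_ngrams(pos_corpus, neg_corpus, n=1, gamma=5.0, lam=1.0):
    """Flag n-grams whose smoothed positive/negative frequency ratio is extreme."""
    pos, neg = ngram_counts(pos_corpus, n), ngram_counts(neg_corpus, n)
    markers = set()
    for gram in pos.keys() | neg.keys():
        salience = (pos[gram] + lam) / (neg[gram] + lam)
        if salience > gamma or salience < 1.0 / gamma:
            markers.add(gram)
    return markers
```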
“…introduced the notion of attribute markers, which are style-specific words/phrases for disentangling style and content in a sentence at the word level. There is also a line of work that studies other aspects of words based on emotional information (Xu et al., 2018; Zhang et al., 2018a). Here, we make no assumption on phrase boundaries between style and content.…”
Section: Related Work
confidence: 99%
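A word-level reading of attribute markers can be sketched as follows. The marker lexicon here is hand-picked for illustration, and, as the quoted passage notes, this word-boundary assumption is precisely what some later work drops.

```python
# Hypothetical illustration of attribute markers: style-specific words are
# stripped from the sentence, leaving the sentiment-independent content plus
# the extracted markers. The marker set is hand-picked for this example; in
# practice it would come from a procedure like stylized_ngrams above.
MARKERS = {"friendly", "rude", "delicious", "awful"}

def split_content_and_markers(sentence, markers=MARKERS):
    """Separate a sentence into its content skeleton and its attribute markers."""
    tokens = sentence.split()
    content = [t for t in tokens if t not in markers]
    extracted = [t for t in tokens if t in markers]
    return " ".join(content), extracted

# split_content_and_markers("the staff was very friendly")
# -> ("the staff was very", ["friendly"])
```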