Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1659

Learning to Flip the Sentiment of Reviews from Non-Parallel Corpora

Abstract: Flipping sentiment while preserving sentence meaning is challenging because parallel sentences with the same content but different sentiment polarities are not always available for model learning. We introduce a method for acquiring imperfectly aligned sentences from non-parallel corpora and propose a model that learns to minimize the sentiment and content losses in a fully end-to-end manner. Our model is simple and offers well-balanced results across two domains: Yelp restaurant and Amazon product reviews.
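The abstract's end-to-end objective can be pictured with a minimal sketch. Assuming a sequence-to-sequence generator and a pretrained sentiment classifier (the names, shapes, and loss weighting below are illustrative assumptions, not the paper's released code), the combined sentiment and content losses might be wired as follows:

```python
# Minimal sketch (not the paper's actual code): a combined objective that
# balances content preservation and sentiment flipping, trained end-to-end.
# `generator`, `sentiment_clf`, and `lam` are illustrative assumptions.
import torch
import torch.nn.functional as F

def combined_loss(generator, sentiment_clf, src_ids, tgt_ids, target_polarity, lam=1.0):
    """src_ids: input review tokens; tgt_ids: imperfectly aligned reference
    tokens from the opposite-polarity corpus; target_polarity: labels to flip to."""
    logits = generator(src_ids)                      # (batch, seq_len, vocab)
    # Content loss: token-level cross-entropy against the pseudo-parallel reference.
    content = F.cross_entropy(logits.transpose(1, 2), tgt_ids)
    # Sentiment loss: run the classifier on the generator's soft output
    # distribution, which keeps the whole objective differentiable (end-to-end).
    soft_tokens = F.softmax(logits, dim=-1)
    polarity_logits = sentiment_clf(soft_tokens)     # (batch, 2)
    sentiment = F.cross_entropy(polarity_logits, target_polarity)
    return content + lam * sentiment
```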

Cited by 6 publications (3 citation statements); references 16 publications.
“…Arguing that superior performance is observed for any sequence-to-sequence task with parallel data, Cavalin et al. (2020) employed a semantic similarity measure to derive parallel data from (non-parallel) Amazon and Yelp reviews. Also, Jin et al. (2019) and Kruengkrai (2019) derived a pseudo-parallel corpus from mono-style data by aligning semantically similar sentences from the source- and target-attribute sides. For a subset of the Yelp reviews, they collected human-generated styled variations p…”
Section: Intended Styles (mentioning)
confidence: 99%
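As a hedged illustration of the alignment step these statements describe, the sketch below pairs each sentence from the source-polarity corpus with its nearest neighbor in the target-polarity corpus using cosine similarity over sentence embeddings; the `encode` callable and the 0.7 threshold are assumptions, not the cited papers' exact settings.

```python
# Sketch of pseudo-parallel alignment between non-parallel corpora:
# pair each source-polarity sentence with its most semantically similar
# sentence from the target-polarity corpus. Encoder choice and the
# similarity threshold are illustrative, not the cited papers' settings.
import numpy as np

def build_pseudo_parallel(src_sents, tgt_sents, encode, threshold=0.7):
    """encode: callable mapping a list of sentences to an (n, d) array
    of L2-normalized embeddings."""
    src_emb = encode(src_sents)          # (n_src, d)
    tgt_emb = encode(tgt_sents)          # (n_tgt, d)
    sims = src_emb @ tgt_emb.T           # cosine similarity (rows are normalized)
    pairs = []
    for i, row in enumerate(sims):
        j = int(np.argmax(row))
        if row[j] >= threshold:          # keep only reasonably aligned pairs
            pairs.append((src_sents[i], tgt_sents[j]))
    return pairs
```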
“…Arguing that superior performance is observed for any sequence-to-sequence task with parallel data, Cavalin et al. (2020) employed a semantic similarity measure to derive parallel data from non-parallel data consisting of Amazon and Yelp reviews. Also, Jin et al. (2019) and Kruengkrai (2019) derived a pseudo-parallel corpus from mono-style data by aligning semantically similar sentences from the source- and target-attribute sides. For a subset of the Yelp reviews, they collected human-generated styled variations…”
Section: Sentiment (mentioning)
confidence: 99%
“…Most methods rely on adversarial objectives (Shen et al., 2017; Hu et al., 2017; Fu et al., 2018), retrieval (Li et al., 2018), or backtranslation (Lample et al., 2019; Logeswaran et al., 2018) to make the latent codes independent of the style attribute. Notable exceptions are Transformer-based approaches (Sudhakar et al., 2019), methods that use reinforcement learning to backtranslate through the discrete space, methods that build pseudo-parallel corpora (Kruengkrai, 2019; Jin et al., 2019), or methods that modify the latent variable at inference time by following the gradient of a style classifier (Wang et al., 2019; Liu et al., 2020). Similar to our motivation, Li et al. (2019) aim at improving in-domain performance by incorporating out-of-domain data into training.…”
Section: Related Work (mentioning)
confidence: 99%
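The last technique this statement mentions, editing a latent code at inference time along the gradient of a style classifier, can be sketched as follows; the step size, iteration count, and classifier interface are assumptions rather than the cited papers' implementations.

```python
# Sketch of inference-time latent editing: nudge a sentence's latent code
# along the gradient of a style classifier toward the target polarity,
# then decode. Step size and iteration count are illustrative assumptions.
import torch
import torch.nn.functional as F

def edit_latent(z, style_clf, target_polarity, steps=10, lr=0.1):
    """z: (1, d) latent code; style_clf: maps latents to (1, 2) polarity logits."""
    z = z.clone().detach().requires_grad_(True)
    target = torch.tensor([target_polarity])
    for _ in range(steps):
        loss = F.cross_entropy(style_clf(z), target)
        loss.backward()
        with torch.no_grad():
            z -= lr * z.grad             # move the latent toward the target style
        z.grad.zero_()
    return z.detach()                    # decoding z would yield the flipped text
```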