Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1403
Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised

Abstract: We present a neural framework for opinion summarization from online product reviews which is knowledge-lean and only requires light supervision (e.g., in the form of product domain labels and user-provided ratings). Our method combines two weakly supervised components to identify salient opinions and form extractive summaries from multiple reviews: an aspect extractor trained under a multi-task objective, and a sentiment predictor based on multiple instance learning. We introduce an opinion summarization datas…
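The sentiment predictor mentioned in the abstract is based on multiple instance learning (MIL): only review-level ratings supervise training, while segment-level sentiment is inferred. As a generic, hedged sketch of that idea (not the paper's actual model — all function names and numbers below are hypothetical), per-segment scores can be combined into a review-level polarity with attention-style weights:

```python
# Generic sketch of MIL-style sentiment aggregation, NOT the paper's
# implementation: a review (the "bag") is scored from its segments
# (the "instances"); only the review-level rating is supervised.

import math

def softmax(xs):
    # Numerically stable softmax over a list of raw saliences.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def review_polarity(segment_scores, segment_saliences):
    """Combine per-segment sentiment scores (in [-1, 1]) into one
    review-level polarity using attention-style weights."""
    weights = softmax(segment_saliences)
    return sum(w * s for w, s in zip(weights, segment_scores))

# Hypothetical example: three segments of one review.
scores = [0.9, -0.2, 0.7]    # per-segment sentiment
saliences = [2.0, 0.5, 1.0]  # how much each segment matters
print(round(review_polarity(scores, saliences), 3))
```

In a trained model the saliences would come from a learned attention module, so the gradient from the review-level loss decides which segments carry the rating.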

Cited by 155 publications (191 citation statements)
References 33 publications
“…• MATE-*: The MATE model of Angelidis and Lapata (2018) with various configurations: initialization of the aspect embeddings A k using the unweighted/weighted average of seed word embeddings and an extra multitask training objective (MT). 8…”
Section: Experimental Settings
confidence: 99%
“…The average number of training, validation, and test segments across domains is around 1 million, 700, and 700 segments, respectively. Segment statistics per domain are reported in the supplementary material of Angelidis and Lapata (2018).…”
Section: A1 Datasets
confidence: 99%
“…Prior to the deep learning era, this information was used as effective categorical features (Tan et al., 2011; Gao et al., 2013; Park et al., 2015) for machine learning models. Recent work has used it to improve the overall performance (Chen et al., 2016; Dong et al., 2017), interpretability (Amplayo et al., 2018a; Angelidis and Lapata, 2018), and personalization (Ficler and Goldberg, 2017) of neural network models in different tasks such as sentiment classification (Tang et al., 2015), review summarization (Yang et al., 2018a), and text generation (Dong et al., 2017).…”
Section: Introduction
confidence: 99%