Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.461

Unsupervised Opinion Summarization as Copycat-Review Generation

Abstract: Opinion summarization is the task of automatically creating summaries that reflect subjective information expressed in multiple documents, such as product reviews. While the majority of previous work has focused on the extractive setting, i.e., selecting fragments from input reviews to produce a summary, we let the model generate novel sentences and hence produce abstractive summaries. Recent progress in summarization has seen the development of supervised models which rely on large quantities of document-summary pairs […]

Citations: cited by 83 publications (182 citation statements). References: 29 publications.
“…Unsupervised text generation methods (Freitag and Roy 2018; Fevry and Phang 2018; Chu and Liu 2019) conventionally make use of variational autoencoders (Kingma and Welling 2014), while employing relatively simple decoders in order to mitigate posterior collapse (Kingma and Welling 2014; Bowman et al. 2016). A more recent line of work (Bražinskas, Lapata, and Titov 2019; Amplayo and Lapata 2020) creates synthetic datasets in cases where gold-standard summaries are not available, which in turn allows models to be trained in a supervised setting and to make use of effective decoding techniques such as attention and copy. Our method is in line with this work, but ultimately differs in its use of content planning to guide both summarization and synthetic data creation.…”
Section: Related Work (mentioning)
confidence: 99%
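The passage above names two standard devices in VAE-based unsupervised text generation: a deliberately simple decoder and an annealed KL weight, both aimed at mitigating posterior collapse. The following is a minimal sketch of those two devices in PyTorch, written as an illustration under assumed class names, dimensions, and annealing schedule; it is not the implementation of any of the cited papers.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleTextVAE(nn.Module):
    """Toy text VAE: GRU encoder, Gaussian latent, weak GRU decoder (hypothetical)."""
    def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        self.z_to_hid = nn.Linear(z_dim, hid_dim)
        # Deliberately weak decoder: a single GRU conditioned on z only,
        # with no attention or copy mechanism, so the latent code stays useful.
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids, reconstructed from themselves
        emb = self.embed(tokens)
        _, h = self.encoder(emb)                               # h: (1, batch, hid)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        h0 = torch.tanh(self.z_to_hid(z)).unsqueeze(0)         # init decoder state from z
        dec_out, _ = self.decoder(emb, h0)                     # teacher forcing
        logits = self.out(dec_out)                             # (batch, seq_len, vocab)
        kl = -0.5 * torch.mean(
            torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
        return logits, kl

def vae_loss(logits, targets, kl, step, anneal_steps=10000):
    # Cross-entropy reconstruction plus a KL term whose weight is annealed
    # from 0 to 1 over training (Bowman et al. 2016) so the decoder cannot
    # simply ignore the latent code early on.
    rec = F.cross_entropy(logits.transpose(1, 2), targets)
    beta = min(1.0, step / anneal_steps)
    return rec + beta * kl

The weak decoder and the slowly increasing KL weight address the same failure mode the quote refers to: a powerful decoder can reconstruct the text while ignoring z, at which point the approximate posterior collapses to the prior.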
“…The fused encodings are then passed to a decoder that uses the mean aspect and sentiment distributions as a content plan to generate the output summary y. We do not employ an encoder in our model, but rather reuse the encodings from the content plan induction model, which improves memory efficiency in comparison to related architectures (Chu and Liu 2019; Bražinskas, Lapata, and Titov 2019; Amplayo and Lapata 2020). At test time, the same model is used to summarize actual reviews.…”
Section: Opinion Summarization (mentioning)
confidence: 99%
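As a reading aid for the architecture described in the quote above, the snippet below illustrates a decoder that has no encoder of its own: it reuses review encodings produced by a separate content-plan induction model and is conditioned on the mean aspect and sentiment distributions as its content plan. All class names, dimensions, and the mean-pooling fusion are assumptions made for illustration, not the cited system's actual code.

import torch
import torch.nn as nn

class PlanConditionedDecoder(nn.Module):
    """Hypothetical decoder conditioned on reused encodings and a content plan."""
    def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256,
                 num_aspects=10, num_sentiments=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The content plan (mean aspect + sentiment distributions over the
        # input reviews) initializes the decoder hidden state.
        self.plan_to_hid = nn.Linear(num_aspects + num_sentiments, hid_dim)
        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_tokens, review_encodings, aspect_dist, sent_dist):
        # review_encodings: (batch, num_reviews, hid), reused from the plan
        # induction model; fused here by mean pooling as a simple stand-in.
        fused = review_encodings.mean(dim=1)                     # (batch, hid)
        plan = torch.cat([aspect_dist, sent_dist], dim=-1)       # (batch, A + S)
        h0 = torch.tanh(self.plan_to_hid(plan)).unsqueeze(0)     # (1, batch, hid)
        emb = self.embed(prev_tokens)                            # (batch, seq, emb)
        # Feed the fused review encoding to the decoder at every step.
        ctx = fused.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec_out, _ = self.rnn(torch.cat([emb, ctx], dim=-1), h0)
        return self.out(dec_out)                                 # (batch, seq, vocab)

Because the review encodings come from the separately trained content-plan induction model, the summarizer itself adds no encoder parameters, which is the memory saving the quoted passage attributes to reusing those encodings.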