Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1209

Scoring Sentence Singletons and Pairs for Abstractive Summarization

Abstract: When writing a summary, humans tend to choose content from one or two sentences and merge them into a single summary sentence. However, the mechanisms behind the selection of one or multiple source sentences remain poorly understood. Sentence fusion assumes multi-sentence input; yet sentence selection methods only work with single sentences and not combinations of them. There is thus a crucial gap between sentence selection and fusion to support summarizing by both compressing single sentences and fusing pairs…

Cited by 89 publications (62 citation statements). References 43 publications (83 reference statements).
“…Coverage: An annotator is asked to rate how well highlighted article sentences "covered" the information contained in the summary sentence. Two article sentences that best match a summary sentence are selected according to a heuristic developed by Lebanoff et al. (2019). The same heuristic is also used to determine whether a summary sentence is created by compression or fusion (more details later in this section).…”
Section: Task Design (mentioning; confidence: 99%)
“…In order to answer the Method of Merging and Coverage questions, the annotator must be provided with the two article sentences that were fused together to create the summary sentence in question. We use the heuristic proposed by Lebanoff et al. (2019) to estimate which pair of sentences should be chosen. They use averaged ROUGE-1, -2, and -L scores (Lin, 2004) to represent sentence similarity.…”
Section: Task Design (mentioning; confidence: 99%)
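As a rough illustration of the averaged-ROUGE heuristic referenced in these statements, the sketch below picks the two article sentences whose combination best matches a given summary sentence. It is a minimal approximation, not the authors' released code: it assumes the open-source rouge-score package and an exhaustive search over sentence pairs, whereas the cited heuristic may differ in details such as greedy selection.

```python
# Minimal sketch (an approximation, not the authors' exact procedure) of a
# pair-selection heuristic based on averaged ROUGE-1/-2/-L F1 scores.
# Requires the open-source `rouge-score` package.
from itertools import combinations
from typing import List, Tuple

from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def avg_rouge(summary_sentence: str, candidate_text: str) -> float:
    """Average of ROUGE-1, ROUGE-2 and ROUGE-L F1 between two texts."""
    scores = _scorer.score(summary_sentence, candidate_text)
    return sum(s.fmeasure for s in scores.values()) / len(scores)

def best_source_pair(summary_sentence: str, article_sentences: List[str]) -> Tuple[int, int]:
    """Return indices of the two article sentences whose concatenation best
    matches the summary sentence under averaged ROUGE F1."""
    best_score, best_pair = float("-inf"), (0, 1)
    for i, j in combinations(range(len(article_sentences)), 2):
        candidate = article_sentences[i] + " " + article_sentences[j]
        score = avg_rouge(summary_sentence, candidate)
        if score > best_score:
            best_score, best_pair = score, (i, j)
    return best_pair
```

In the same spirit, comparing the best single sentence against the best pair would give a crude compression-versus-fusion signal, which is how the citing papers describe using the heuristic to label summary sentences.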
“…Extracting summary-worthy sentences from the source documents is important even if the ultimate goal is to generate abstracts. Recent abstractive studies recognize the importance of separating "salience estimation" from "text generation" so as to reduce the amount of training data required by encoder-decoder models (Gehrmann et al., 2018; Lebanoff et al., 2018, 2019). An extractive method is often leveraged to identify salient source sentences, then a neural text generator rewrites the selected sentences into an abstract.…”
Section: Related Work (mentioning; confidence: 99%)
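The extract-then-rewrite design described in the statement above can be sketched as follows. Both `salience_score` and `rewriter` are hypothetical placeholders standing in for an extractive scorer and a sequence-to-sequence abstractor; this illustrates the two-stage structure, not a specific model from the cited papers.

```python
# Hypothetical sketch of the extract-then-rewrite design: rank sentences by a
# salience score, keep the top k, and hand them to an abstractive rewriter.
from typing import Callable, List

def extract_then_rewrite(
    source_sentences: List[str],
    salience_score: Callable[[str], float],   # placeholder: extractive model's score
    rewriter: Callable[[List[str]], str],     # placeholder: seq2seq abstractor
    k: int = 3,
) -> str:
    """Select the k most salient sentences, then rewrite them into an abstract."""
    selected = sorted(source_sentences, key=salience_score, reverse=True)[:k]
    return rewriter(selected)
```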
“…The model uses BERT as the encoder to extract the input document's features, then uses a Transformer decoder to generate the initial results, masks the output, and makes predictions through another BERT. Logan Lebanoff [28] proposed mapping single sentences and pairs of sentences to a unified space for ranking. According to this ranking, the single sentences and sentence pairs that are important for the summary are selected.…”
Section: Abstractive Models (mentioning; confidence: 99%)
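To make the singleton-and-pair ranking idea concrete, here is a minimal sketch that builds candidate instances from single sentences and sentence pairs and scores them with one shared function. `instance_score` is a hypothetical placeholder for a learned scorer (e.g., a BERT-based classifier), and the exhaustive pair enumeration is an illustrative simplification rather than the paper's exact procedure.

```python
# Hypothetical sketch: score sentence singletons and pairs in one shared
# ranking, then keep the top-k candidates for a downstream rewriting step.
from itertools import combinations
from typing import Callable, List, Tuple

def rank_singletons_and_pairs(
    sentences: List[str],
    instance_score: Callable[[Tuple[str, ...]], float],  # placeholder learned scorer
    top_k: int = 5,
) -> List[Tuple[str, ...]]:
    """Build candidates from single sentences and sentence pairs, score them
    in the same space, and return the top-k candidates."""
    candidates: List[Tuple[str, ...]] = [(s,) for s in sentences]
    candidates.extend(combinations(sentences, 2))
    return sorted(candidates, key=instance_score, reverse=True)[:top_k]
```

A rewriting stage (compression for singletons, fusion for pairs) would then turn each selected candidate into a summary sentence, in line with the abstract above.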