Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.110
A New Approach to Overgenerating and Scoring Abstractive Summaries

Abstract: We propose a new approach to generate multiple variants of the target summary with diverse content and varying lengths, then score and select admissible ones according to users' needs. Abstractive summarizers trained on single reference summaries may struggle to produce outputs that achieve multiple desirable properties at once, i.e., capturing the most important information, being faithful to the original, and being grammatical and fluent. In this paper, we propose a two-stage strategy to generate a diverse set of candidate summaries…
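
The overgenerate-then-score pipeline described in the abstract can be illustrated with a short, self-contained example. This is a minimal sketch assuming a generic pretrained BART summarizer from Hugging Face transformers; the length buckets and the `score` heuristic are placeholder assumptions, not the authors' actual models or scoring function.

```python
# Minimal sketch of an overgenerate-and-score pipeline for abstractive
# summarization. Assumptions: a generic pretrained BART model and a
# placeholder lexical-overlap scorer; the paper's actual system differs.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

def overgenerate(document, length_buckets=((10, 30), (30, 60), (60, 120))):
    """Stage 1: produce candidate summaries with diverse lengths."""
    inputs = tokenizer(document, return_tensors="pt", truncation=True)
    candidates = []
    for min_len, max_len in length_buckets:
        outputs = model.generate(
            **inputs,
            num_beams=4,
            num_return_sequences=4,  # several variants per length bucket
            min_length=min_len,
            max_length=max_len,
        )
        candidates += tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return candidates

def score(document, summary):
    """Stage 2 placeholder: a real scorer would combine informativeness,
    faithfulness, and fluency estimates (hypothetical heuristic here)."""
    doc_tokens, sum_tokens = set(document.split()), set(summary.split())
    return len(doc_tokens & sum_tokens) / max(len(sum_tokens), 1)

def select(document, max_words=40):
    """Rank candidates and return the best one within the user's budget."""
    candidates = overgenerate(document)
    admissible = [c for c in candidates if len(c.split()) <= max_words]
    return max(admissible or candidates, key=lambda c: score(document, c))
```

Decoupling generation from selection lets the same candidate pool serve different user constraints (for example, different length budgets) without re-running the summarizer.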

Cited by 7 publications (6 citation statements) | References: 51 publications

“…Second, we focus on auto-regressive methods in this paper. However, we believe our framework could also be applied and adapted to non-autoregressive generation models (Song et al., 2021)…”
Section: Discussion (mentioning)
confidence: 99%
“…None of these studies can control the length explicitly. Song et al. (2021) is able to precisely control the length by progressively filling a predetermined number of decoding slots, analogous to the vanilla NAR model in our non-autoregressive setting.…”
Section: Related Work (mentioning)
confidence: 99%
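
The “decoding slots” mechanism mentioned in the statement above fixes the output length before decoding begins. Below is a toy sketch in the spirit of mask-predict non-autoregressive decoding; the `predictor` interface, remasking schedule, and iteration count are hypothetical illustrations, not the cited system's implementation.

```python
# Toy sketch of length control via a fixed number of decoding slots.
# Assumption: `predictor` is a trained model that, given the source and
# a partially masked output, returns a (token, probability) pair per slot.
MASK = "<mask>"

def fill_slots(source, predictor, num_slots, iterations=3):
    # The output length is fixed up front: exactly `num_slots` tokens.
    tokens = [MASK] * num_slots
    confidences = [0.0] * num_slots
    for step in range(iterations):
        # Re-predict every position in parallel, conditioned on the
        # source and the current partial output.
        preds = predictor(source, tokens)  # one (token, prob) per slot
        for i, (tok, prob) in enumerate(preds):
            tokens[i], confidences[i] = tok, prob
        # Re-mask the least confident slots so later passes can revise
        # them; the remask ratio decays linearly, reaching zero on the
        # final pass so every slot ends up filled.
        n_remask = int(num_slots * (1 - (step + 1) / iterations))
        for i in sorted(range(num_slots), key=lambda j: confidences[j])[:n_remask]:
            tokens[i] = MASK
    return tokens
```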
“…Additionally, the need for controlling aspects of generated text such as length, style, and mentioned entities has been acknowledged lately in an effort to enable models to adapt to user input (Fan, Grangier, & Auli, 2018) or predefined user preferences (Song, Wang, Feng, & Liu, 2021). Most importantly, previous work (Sun et al., 2019; Schumann et al., 2020) pointed out how the lack of control over produced summaries led recent summarization systems to unwittingly exploit the susceptibility of automated metrics to summary length.…”
Section: Content Selection, Redundancy, and Length Control (mentioning)
confidence: 99%