Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)
DOI: 10.18653/v1/n19-1264
Guiding Extractive Summarization with Question-Answering Rewards

Abstract: Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, e…

Cited by 33 publications (25 citation statements)
References 43 publications
“…Prior work proposed heuristic-based reward functions to train cross-input RL summarisers, in order to strengthen certain properties of the generated summaries. Arumae and Liu (2019) propose four reward functions to train an RL-based extractive summariser, including the question-answering competency rewards, which encourage the RL agent to generate summaries that can answer cloze-style questions. Such questions are automatically created by removing some words in the reference summaries.…”
Section: Related Workmentioning
confidence: 99%
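The cloze-question construction described in the statement above can be sketched in a few lines: blank out a content word in the reference summary and treat the removed word as the answer. This is a minimal illustration of the idea, not the paper's implementation; the function name, the simple stopword filter, and the blank token are all assumptions.

```python
import random

def make_cloze_questions(reference_summary, num_questions=2, seed=0):
    """Sketch of cloze-question creation: remove a content word from a
    reference summary and keep it as the gold answer. Illustrative only;
    the stopword list and blanking scheme are hypothetical choices."""
    stopwords = {"the", "a", "an", "of", "to", "in", "and", "is", "are"}
    tokens = reference_summary.split()
    # Candidate answer positions: tokens that are not stopwords.
    candidates = [i for i, t in enumerate(tokens) if t.lower() not in stopwords]
    rng = random.Random(seed)
    questions = []
    for i in rng.sample(candidates, min(num_questions, len(candidates))):
        blanked = tokens[:i] + ["_____"] + tokens[i + 1:]
        questions.append((" ".join(blanked), tokens[i]))  # (question, answer)
    return questions
```

A summary that lets a QA model recover the removed words from such questions is then rewarded, which is the "question-answering competency" signal the citing authors refer to.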
“…We proceed by evaluating our method against state-of-the-art extractive and abstractive summarization systems. Arumae and Liu (2019) present an approach to extract summary segments using question-answering as a supervision signal, assuming a high-quality summary can serve as a document surrogate to answer questions. See et al. (2017) present pointer-generator networks, an abstractive summarization model and a reliable baseline, being both state-of-the-art and a vital tool for guiding our data creation.…”
Section: Resultsmentioning
confidence: 99%
“…The encoder hidden state dimension is set at 768, with 12 layers and 12 attention heads (BERT BASE uncased). We utilize dropout with p = 0.1, and λ is empirically set to 0.1. We also compare against an extractive summarizer (Arumae and Liu, 2019) whose CNN/DM results are macro-averaged. The bottom two sections showcase our models.…”
Section: Methodsmentioning
confidence: 99%
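The encoder settings quoted above (BERT_BASE uncased dimensions, dropout, and the mixing weight λ) can be captured in a small configuration object. This is only a sketch for readability; the class and field names are invented, and the role of λ as a loss-mixing weight is an assumption from context.

```python
from dataclasses import dataclass

@dataclass
class EncoderConfig:
    """Hypothetical config mirroring the hyperparameters in the excerpt."""
    hidden_size: int = 768          # encoder hidden state dimension
    num_layers: int = 12            # transformer layers (BERT_BASE)
    num_attention_heads: int = 12   # attention heads (BERT_BASE)
    dropout: float = 0.1            # dropout probability p
    lambda_weight: float = 0.1      # λ, empirically set (role assumed)
```

These values match the published BERT_BASE architecture, so instantiating the default config reproduces the setup the citing authors describe.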
“…Besides selective rationalization, the cooperative game has been studied in the latter direction above (Lee et al, 2018). It has also been applied to a relevant problem on summarization (Arumae and Liu, 2019), where the selected summary should be sufficient for answering questions related to the document. In this problem, the summary is a special type of rationale of a document.…”
Section: Related Workmentioning
confidence: 99%