Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.457

Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward

Abstract: Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content and are often found to be near-extractive. We argue that, to address these issues, the summarizer should acquire semantic interpretation over the input, e.g., via structured representation, to allow the generation of more informative summaries. In this paper, we present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and Semantic-driven RewarD […]
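
Although the abstract is truncated, the title makes the core idea concrete enough to illustrate: reward the generator when salient entities masked out of a reference summary can be recovered by reading its output. Below is a minimal, hypothetical sketch of such a cloze-style reward; `answer_cloze` stands in for a trained QA/cloze model and, like the entity list, is an assumption for illustration, not the paper's actual component.

```python
from typing import Callable, List

def cloze_reward(
    system_summary: str,
    reference_summary: str,
    entities: List[str],
    answer_cloze: Callable[[str, str], str],
) -> float:
    """Mask each salient entity in the reference, ask a cloze model to
    recover it by reading the system summary, and reward the recovery rate."""
    if not entities:
        return 0.0
    recovered = 0
    for entity in entities:
        # Turn the reference into a cloze question by masking one entity.
        question = reference_summary.replace(entity, "[MASK]", 1)
        # `answer_cloze(context, question)` is a stand-in for a trained QA model.
        prediction = answer_cloze(system_summary, question)
        if prediction.strip().lower() == entity.strip().lower():
            recovered += 1
    return recovered / len(entities)
```

A reward of this shape is non-differentiable, which is why it is typically combined with reinforcement learning rather than plain maximum-likelihood training.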

Cited by 125 publications (83 citation statements) · References 30 publications

Selected citation statements, ordered by relevance:
“…(2) Reinforcement Learning (RL). RL-based training strategies can incorporate any user-defined metrics, including non-differentiable ones, as rewards to train summarization models [56,62,107,109]. These metrics can be ROUGE [80], BERTScore [154], or saliency and entailment rewards [107] inferred from the Natural Language Inference task [13].…”
Section: Newsroom-summary, Newsroom-title, Bytecup
confidence: 99%
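
A common instantiation of the RL strategy this statement describes is self-critical policy-gradient training (Rennie et al., 2017) with ROUGE as the reward. The sketch below assumes hypothetical `model.sample`, `model.greedy`, and `rouge_l` helpers rather than any particular paper's API; any user-defined metric, differentiable or not, could replace `rouge_l`.

```python
import torch

def self_critical_loss(model, source: str, reference: str, rouge_l) -> torch.Tensor:
    """REINFORCE with a greedy-decoding baseline.

    The reward metric need not be differentiable; gradients flow only
    through the sampled tokens' log-probabilities.
    """
    # Sample a summary and keep per-token log-probabilities (hypothetical API).
    sampled_summary, log_probs = model.sample(source)
    # A greedy decode of the same input serves as the reward baseline.
    with torch.no_grad():
        greedy_summary = model.greedy(source)
    advantage = rouge_l(sampled_summary, reference) - rouge_l(greedy_summary, reference)
    # Maximizing expected reward == minimizing advantage-weighted NLL.
    return -advantage * log_probs.sum()
```

The greedy baseline reduces gradient variance: only samples that beat the model's own deterministic output receive positive reinforcement.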
“…Although these methods are good at generating readable summaries to a certain extent, the problem of factual inconsistency persists. To alleviate this issue, several new methods (Lebanoff et al., 2020; Huang et al., 2020) have been proposed to generate more factually correct summaries. A few other recent works (Falke et al., 2019; Kryściński et al., 2019; Wang et al., 2020a) have exploited question answering and natural language inference (NLI) models to identify factual coherence in the generated summary.…”
Section: Related Work
confidence: 99%
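
To make the NLI idea in the statement above concrete, here is a hedged sketch of sentence-level consistency checking in the spirit of Falke et al. (2019): a summary sentence counts as consistent only if at least one source sentence entails it. `entailment_prob` is a hypothetical wrapper around any NLI model that returns P(entailment | premise, hypothesis); the threshold is an illustrative choice.

```python
from typing import Callable, List

def consistency_score(
    source_sentences: List[str],
    summary_sentences: List[str],
    entailment_prob: Callable[[str, str], float],
    threshold: float = 0.5,
) -> float:
    """Fraction of summary sentences entailed by at least one source sentence."""
    if not summary_sentences or not source_sentences:
        return 0.0
    entailed = 0
    for hypothesis in summary_sentences:
        # A summary sentence is supported if its best-matching source
        # sentence entails it with sufficient probability.
        best = max(entailment_prob(premise, hypothesis) for premise in source_sentences)
        if best >= threshold:
            entailed += 1
    return entailed / len(summary_sentences)
```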
“…In contrast, early abstractive models using sentence fusion and paraphrasing (Filippova and Strube, 2008; Banerjee et al., 2015; Bing et al., 2015) achieved less success. Inspired by the recent success of single-document abstractive models (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Huang et al., 2020), some works (Liu et al., 2018; Zhang et al., 2018) try to transfer single-document models to multi-document settings to alleviate the limitations of small-scale datasets. Specifically, Liu et al. (2018) define the Wikipedia generation problem and contribute the large-scale WikiSum dataset.…”
Section: Multi-document Summarization
confidence: 99%
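
The transfer the statement above describes typically reduces to flattening the multi-document input so that a single-document encoder can consume it: rank the candidate passages, then concatenate them until the encoder's token budget is exhausted. The sketch below uses a deliberately simple word-overlap ranker as a stand-in; it is not the ranking method of any cited paper, and the 512-token budget is an illustrative assumption.

```python
from collections import Counter
from typing import List

def flatten_documents(query: str, documents: List[str], max_tokens: int = 512) -> str:
    """Rank documents by word overlap with the query, then concatenate
    them until the single-document encoder's input budget is exhausted."""
    query_counts = Counter(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: sum(
            min(c, query_counts[t]) for t, c in Counter(d.lower().split()).items()
        ),
        reverse=True,
    )
    tokens: List[str] = []
    for doc in ranked:
        words = doc.split()
        # Truncate the last document to fit the remaining budget.
        if len(tokens) + len(words) > max_tokens:
            words = words[: max_tokens - len(tokens)]
        tokens.extend(words)
        if len(tokens) >= max_tokens:
            break
    return " ".join(tokens)
```

The resulting string can then be fed to an off-the-shelf single-document summarizer, which is the essence of the WikiSum-style extract-then-abstract pipeline.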