Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1150
Deep Communicating Agents for Abstractive Summarization

Abstract: We present deep communicating agents in an encoder-decoder architecture to address the challenges of representing a long document for abstractive summarization. With deep communicating agents, the task of encoding a long text is divided across multiple collaborating agents, each in charge of a subsection of the input text. These encoders are connected to a single decoder, trained end-to-end using reinforcement learning to generate a focused and coherent summary. Empirical results demonstrate that multiple communicating encoders lead to a higher quality summary compared to several strong baselines, including those based on a single encoder or multiple non-communicating encoders.
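As a rough, hypothetical illustration of the idea in the abstract (not the authors' implementation; the LSTM encoders, the mean-pooled message-passing step, and all hyperparameters are assumptions), each agent might encode its chunk of the document locally and then condition on a message shared across agents before a single decoder attends over all of them:

```python
import torch
import torch.nn as nn

class CommunicatingEncoders(nn.Module):
    """Toy sketch: each agent encodes one chunk of a long document, then
    agents exchange a global message. The message scheme and sizes are
    illustrative assumptions, not the paper's exact design."""

    def __init__(self, vocab_size, d_model=256, n_agents=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.agents = nn.ModuleList(
            [nn.LSTM(d_model, d_model, batch_first=True) for _ in range(n_agents)]
        )
        self.message_proj = nn.Linear(d_model, d_model)

    def forward(self, chunks):  # chunks: list of (batch, seq_len) token tensors
        states = []
        for agent, chunk in zip(self.agents, chunks):
            out, _ = agent(self.embed(chunk))  # local encoding of one subsection
            states.append(out)
        # Communication step: broadcast the mean of all agents' final states.
        message = torch.stack([s[:, -1] for s in states]).mean(dim=0)
        message = torch.tanh(self.message_proj(message))
        # Condition each agent's output on the shared message; a single
        # decoder would then attend over all of these enriched states.
        return [s + message.unsqueeze(1) for s in states]
```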

Cited by 295 publications (259 citation statements) | References 27 publications
“…Modern abstractive summarizers excel at finding and extracting salient content (See et al., 2017; Chen and Bansal, 2018; Celikyilmaz et al., 2018; Liu and Lapata, 2019). However, one of the key tenets of summarization is consolidation of information, and these systems can struggle to combine content from multiple source texts, yielding output summaries that contain poor grammar and even incorrect facts.…”
Section: Introduction
confidence: 99%
“…FastAbs (Chen and Bansal, 2018) regards ROUGE scores as reward signals with reinforcement learning, which brings a great performance gain. DCA (Celikyilmaz et al., 2018) proposes deep communicating agents in a reinforcement learning setting and achieves the best results on CNN/Daily Mail. Although our experimental results have not outperformed the state-of-the-art models, our model has a much simpler structure with fewer parameters.…”
Section: Automatic Evaluation Results
confidence: 99%
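The excerpt above describes using ROUGE scores as reinforcement-learning rewards. A minimal self-critical policy-gradient loss in that spirit (the technique of Rennie et al., 2017, not necessarily the exact training objective of the cited papers) might look like the sketch below; the `model.sample`/`model.greedy` interface and the `rouge_fn` scorer are hypothetical placeholders:

```python
import torch

def self_critical_loss(model, source, reference, rouge_fn):
    """Self-critical policy-gradient sketch: the reward is the ROUGE gain
    of a sampled summary over a greedy baseline decode. All interfaces
    here are assumptions for illustration."""
    sampled_ids, log_probs = model.sample(source)  # stochastic decode
    with torch.no_grad():
        baseline_ids = model.greedy(source)        # deterministic baseline
    # Per-example reward: how much the sample beats the baseline.
    reward = rouge_fn(sampled_ids, reference) - rouge_fn(baseline_ids, reference)
    # REINFORCE: raise the log-likelihood of samples with positive reward,
    # lower it for samples that do worse than the greedy baseline.
    return -(reward * log_probs.sum(dim=-1)).mean()
```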
“…Abstractive document summarization (Rush et al., 2015; Nallapati et al., 2016; Tan et al., 2017; Chen and Bansal, 2018; Celikyilmaz et al., 2018) attempts to produce a condensed representation of the most salient information of the document, aspects of which may not appear as parts of the original input text. One popular framework used in abstractive summarization is the sequence-to-sequence model introduced by Sutskever et al. (2014).…”
Section: Introduction
confidence: 99%
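For context, a bare-bones version of the sequence-to-sequence framework cited above (illustrative dimensions; real abstractive summarizers add attention, copying, and coverage mechanisms on top of this skeleton):

```python
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder in the style of Sutskever et al. (2014),
    for illustration only."""

    def __init__(self, vocab_size, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.decoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))      # compress the document
        dec_out, _ = self.decoder(self.embed(tgt), state)
        return self.out(dec_out)                      # next-token logits
```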
“…Our model improves Sentence Rewriting with BERT as an extractor and summary-level rewards to optimize the extractor. Reinforcement learning has been shown to be effective for directly optimizing a non-differentiable objective in language generation, including text summarization (Ranzato et al., 2016; Bahdanau et al., 2017; Paulus et al., 2018; Celikyilmaz et al., 2018; Narayan et al., 2018). Bahdanau et al. (2017) use actor-critic methods for language generation, using reward shaping (Ng et al., 1999) to solve the sparsity of training signals.…”
Section: DUC-2002
confidence: 99%
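The reward shaping mentioned in the last excerpt can be sketched directly from Ng et al. (1999): add the discounted change in a potential function to each per-step reward, which densifies a sparse end-of-sequence signal without changing the optimal policy. The per-step potentials here are an assumed input (e.g., partial overlap scores), not anything specified by the cited papers:

```python
def shaped_rewards(rewards, potentials, gamma=1.0):
    """Potential-based reward shaping (Ng et al., 1999) sketch.

    `rewards` has one entry per decoding step (often all zeros except the
    last); `potentials` has len(rewards) + 1 entries, ending with the
    terminal potential (conventionally 0)."""
    return [
        r + gamma * potentials[t + 1] - potentials[t]
        for t, r in enumerate(rewards)
    ]

# Example: a sparse end-of-sequence reward of 1.0 spread over three steps.
# shaped_rewards([0.0, 0.0, 1.0], [0.0, 0.3, 0.6, 0.0]) -> [0.3, 0.3, 0.4]
```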