Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/619

A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization

Abstract: In this paper, we propose a deep learning approach to tackle the automatic summarization tasks by incorporating topic information into the convolutional sequence-to-sequence (ConvS2S) model and using self-critical sequence training (SCST) for optimization. Through jointly attending to topics and word-level alignment, our approach can improve coherence, diversity, and informativeness of generated summaries via a biased probability generation mechanism. On the other hand, reinforcement training, like SCST, directly optimizes the proposed model with respect to the non-differentiable metric ROUGE, which also avoids exposure bias during inference.
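
To make the SCST objective described in the abstract concrete, here is a minimal sketch under stated assumptions: model.sample and model.greedy are hypothetical decoding helpers, and rouge is a hypothetical per-example reward function; none of these names come from the paper.

```python
import torch

def scst_loss(model, src, ref, rouge):
    """Sketch of self-critical sequence training (SCST).

    The greedy decode acts as the reward baseline: sampled summaries
    that score higher ROUGE than the greedy one are reinforced, and
    weaker ones are penalized. `model.sample`, `model.greedy`, and
    `rouge` are illustrative stand-ins, not names from the paper.
    """
    # Stochastic decode, keeping per-token log-probabilities
    # with shape (batch, seq_len).
    sample_ids, log_probs = model.sample(src)
    # Deterministic baseline decode; no gradients are needed here.
    with torch.no_grad():
        greedy_ids = model.greedy(src)
    # Non-differentiable rewards: per-example ROUGE tensors, shape (batch,).
    r_sample = rouge(sample_ids, ref)
    r_greedy = rouge(greedy_ids, ref)
    # REINFORCE with the self-critical baseline as the advantage.
    advantage = r_sample - r_greedy
    return -(advantage * log_probs.sum(dim=-1)).mean()
```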

Citation types: 2 supporting, 68 mentioning, 0 contrasting

Cited by 110 publications (70 citation statements); references 1 publication

“…By the end of the training process (where η = 1), the model completely uses the REINFORCE loss for training. This mixed training loss was used in many of the recent works on text summarization [13], [46], [157], [158], paraphrase generation [159], image captioning [40], video captioning [160], speech recognition [161], dialogue generation [162], question answering [163], and question generation [62].…”
Section: A Policy Gradient and REINFORCE Algorithm (mentioning)
Confidence: 99%
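
As one concrete reading of the annealing schedule quoted above, here is a minimal sketch of such a mixed objective; the linear ramp and all names (mle_loss, reinforce_loss, eta) are illustrative assumptions, since the cited works differ in how η is increased.

```python
def mixed_loss(mle_loss, reinforce_loss, step, total_steps):
    """Blend maximum-likelihood and REINFORCE training losses.

    eta rises from 0 (pure MLE) to 1 (pure REINFORCE), matching the
    endpoint described in the quoted passage. The linear schedule is
    an illustrative choice, not that of any one cited work.
    """
    eta = min(1.0, step / total_steps)
    return eta * reinforce_loss + (1.0 - eta) * mle_loss
```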
“…A CNN-based architecture was recently employed by Gehring et al. (2017) in ConvS2S, which applies CNNs in both the encoder and the decoder. Later, Wang et al. (2018) built upon ConvS2S with topic-word embedding and encoding, and trained the system with reinforcement learning.…”
Section: Related Work (mentioning)
Confidence: 99%
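
The topic-aware decoding that this excerpt and the abstract refer to biases generation toward topic words. Below is a minimal sketch of one such biasing scheme, assuming word_logits and topic_logits are per-vocabulary decoder scores and topic_mask flags topic words; this is an illustrative reading, not the paper's exact formulation.

```python
import torch.nn.functional as F

def topic_biased_distribution(word_logits, topic_logits, topic_mask):
    """Sketch of a topic-biased output distribution.

    Word-level and topic-level scores are summed before normalization,
    with topic scores contributing only at vocabulary positions flagged
    as topic words (topic_mask is a 0/1 vector over the vocabulary).
    This illustrates the biasing idea; the paper's exact mechanism
    may differ.
    """
    biased_logits = word_logits + topic_logits * topic_mask
    return F.softmax(biased_logits, dim=-1)
```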
“…allowed because of limitations in the space where the headline appears. The technology of automatic headline generation has the potential to contribute greatly to this domain, and the problems of news headline generation have motivated a wide range of studies (Wang et al., 2018; Chen et al., 2018; Kiyono et al., 2018; Cao et al., 2018; Wang et al., 2019). Table 1 shows sample headlines in three different lengths written by professional editors of a media company for the same news article: the length of the first headline, for digital media, is restricted to 10 characters, the second to 13 characters, and the third to 26 characters.…”
Section: トヨタ、エンジン車だけの車種ゼロへ 2025年ごろ ("Toyota: zero engine-only models by around 2025") (mentioning)
Confidence: 99%