Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2 (2017)
DOI: 10.18653/v1/e17-2047
Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization

Abstract: This paper tackles the reduction of redundant repeating generation that is often observed in RNN-based encoder-decoder models. Our basic idea is to jointly estimate the upper-bound frequency of each target vocabulary word in the encoder and to control the output words in the decoder based on that estimation. Our method shows a significant improvement over a strong RNN-based encoder-decoder baseline and achieves its best results on an abstractive summarization benchmark.
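The cutoff mechanism sketched in the abstract (estimate an upper-bound output frequency for every target word on the encoder side, then stop the decoder from emitting a word once that budget is spent) can be illustrated roughly as follows. This is a minimal, hypothetical Python sketch, not the authors' model: the pooled-state estimator, the random weights, and the greedy masking loop are placeholders assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "<eos>"]
V = len(VOCAB)

def estimate_upper_bound_freq(encoder_states):
    """Toy stand-in for the encoder-side estimator: map pooled encoder
    states to a non-negative upper-bound frequency (a 'budget') for
    every target word. Hypothetical linear map; the paper learns this
    jointly with the encoder-decoder."""
    pooled = encoder_states.mean(axis=0)               # (hidden,)
    W = rng.normal(scale=0.1, size=(V, pooled.size))   # hypothetical weights
    return np.maximum(W @ pooled + 1.0, 0.0)           # (V,) estimated budgets

def decode_with_cutoff(logits_per_step, budgets):
    """Greedy decoding that masks out any word whose generated count has
    already reached its estimated upper-bound frequency (the 'cutoff')."""
    counts = np.zeros(V)
    output = []
    for logits in logits_per_step:
        masked = np.where(counts < budgets, logits, -np.inf)  # cut off exhausted words
        w = int(np.argmax(masked))
        counts[w] += 1
        output.append(VOCAB[w])
        if VOCAB[w] == "<eos>":
            break
    return output

# Toy run: random encoder states and decoder logits.
enc = rng.normal(size=(5, 8))          # 5 source tokens, hidden size 8
budgets = estimate_upper_bound_freq(enc)
steps = rng.normal(size=(10, V))       # pretend decoder logits for 10 steps
print(decode_with_cutoff(steps, budgets))
```

In the paper the frequency estimator is trained jointly with the encoder-decoder; here a random linear map is used only to show where the per-word budget enters the decoding loop.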

Cited by 47 publications (36 citation statements); references 17 publications.
“…[T] indicates a token-level objective and [S] indicates a sequence-level objective. ABS+ refers to Rush et al. (2015), RNN MLE/MRT to Ayana et al. (2016), WFE to Suzuki and Nagata (2017), and SEASS and DRGD to their respective cited systems.…”
Section: Comparison To Beam-Search Optimization (mentioning, confidence: 99%)
“…Furthermore, we compare the performance of MC-RNN with other recent methods on this task. We find that both the ROUGE-1 and ROUGE-2 scores of our model outperform all current top systems, including Shen et al. (2016), Gehring et al. (2017), and Suzuki and Nagata (2017).…”
Section: Experimental Results (mentioning, confidence: 51%)
“…Recurrent neural networks (RNNs), designed with recurrent units and parameter sharing, have demonstrated an outstanding ability to model sequential data and have achieved success in various Natural Language Processing (NLP) tasks, such as language modeling (Merity, Keskar, and Socher 2018), machine translation (Bahdanau et al. 2017; Huang et al. 2018), abstractive summarization (Suzuki and Nagata 2017), and dialog systems (Asri, He, and Suleman 2016). Traditional RNNs produce hidden state vectors one by one through recurrent computations, treating all tokens in the sequence uniformly and equally.…”
Section: Introduction (mentioning, confidence: 99%)
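The last sentence of this excerpt describes the strictly sequential way a conventional RNN produces its hidden states. A minimal sketch of that recurrence (a plain Elman-style RNN in NumPy, with toy dimensions assumed only for illustration):

```python
import numpy as np

def rnn_forward(tokens, embed, W_xh, W_hh, b_h):
    """Plain Elman-style RNN: hidden states are produced one token at a
    time, each computed from the previous hidden state and the current input."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for t in tokens:                                  # strictly sequential recurrence
        h = np.tanh(embed[t] @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
vocab_size, emb_dim, hidden = 10, 4, 6
embed = rng.normal(size=(vocab_size, emb_dim))
W_xh = rng.normal(size=(emb_dim, hidden))
W_hh = rng.normal(size=(hidden, hidden))
b_h = np.zeros(hidden)
print(rnn_forward([1, 3, 3, 7], embed, W_xh, W_hh, b_h).shape)  # (4, 6)
```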
“…Our re-ranking strategy selects the headline that contains the most source-side words. Table 3 shows that Transformer+LRPE+PE with this re-ranking (+Re-ranking) achieved better scores than the state-of-the-art (Suzuki and Nagata, 2017).…”
Section: Results (mentioning, confidence: 99%)
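The re-ranking idea quoted in this excerpt (among beam-search candidates, prefer the headline containing the most source-side words) can be sketched as follows. The function name, tie-breaking rule, and toy data are assumptions for illustration, not the cited implementation.

```python
def rerank_by_source_overlap(candidates, source_tokens):
    """Hypothetical sketch: score each candidate headline by how many
    distinct source-side words it contains and return the best one
    (ties broken by original beam order)."""
    source = set(source_tokens)

    def overlap(candidate):
        return sum(1 for tok in set(candidate) if tok in source)

    return max(candidates, key=overlap)

source = "police arrest five anti-nuclear protesters after demonstration".split()
beams = [
    "five protesters arrested at rally".split(),
    "police arrest five anti-nuclear protesters".split(),
    "demonstration ends peacefully".split(),
]
print(" ".join(rerank_by_source_overlap(beams, source)))
# -> "police arrest five anti-nuclear protesters"
```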