2020
DOI: 10.1007/978-3-030-45442-5_22

Neural Query-Biased Abstractive Summarization Using Copying Mechanism

Abstract: This paper deals with the query-biased summarization task. Conventional non-neural approaches have achieved better performance by primarily including in the summary the words that overlap between the source and the query. However, recurrent neural network (RNN)-based approaches do not explicitly model this phenomenon. Therefore, we design an RNN-based query-biased summarizer that uses a copying mechanism to primarily include the overlapping words in the summary. Experimental results, in terms of both …
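To make the copying idea concrete, below is a minimal sketch of one decoder step that mixes a vocabulary distribution with a copy distribution over source tokens, in the spirit of pointer-generator networks. This is not the paper's implementation: the bilinear attention, the additive overlap bonus, and all weight names (W_attn, W_vocab, w_gen) are illustrative assumptions.

```python
# Minimal sketch of a query-biased decoding step with a copying mechanism.
# NOT the paper's exact model; all names and formulations are assumptions.

import torch
import torch.nn.functional as F

def copy_decode_step(dec_state, enc_states, src_token_ids, overlap_mask,
                     W_attn, W_vocab, w_gen, overlap_bonus=1.0):
    """One decoder step returning a distribution over the vocabulary.

    dec_state:     (batch, hidden)          current decoder hidden state
    enc_states:    (batch, src_len, hidden) encoder hidden states
    src_token_ids: (batch, src_len)         vocabulary ids of source tokens
    overlap_mask:  (batch, src_len)         1.0 where a source token also
                                            occurs in the query, else 0.0
    """
    # Bilinear attention over source positions, with an additive bonus for
    # tokens shared by the source and the query (the "query bias").
    scores = torch.einsum('bh,bsh->bs', dec_state @ W_attn, enc_states)
    attn = F.softmax(scores + overlap_bonus * overlap_mask, dim=-1)

    # Context vector: attention-weighted sum of encoder states.
    context = torch.einsum('bs,bsh->bh', attn, enc_states)

    # Generation distribution over the fixed vocabulary.
    p_vocab = F.softmax((dec_state + context) @ W_vocab, dim=-1)

    # Soft switch p_gen in (0, 1): generate from the vocabulary vs. copy.
    p_gen = torch.sigmoid(torch.cat([dec_state, context], dim=-1) @ w_gen)

    # Copy distribution: scatter attention mass onto source token ids, so
    # overlapping words get probability mass even if rare in the vocabulary.
    p_copy = torch.zeros_like(p_vocab).scatter_add_(1, src_token_ids, attn)

    # Final mixture of generating and copying.
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

In a usage setting, W_attn would be (hidden, hidden), W_vocab (hidden, vocab), and w_gen (2*hidden, 1); the overlap_bonus term is only a crude stand-in for however the paper actually encodes the query's influence on copying.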

Cited by 11 publications (11 citation statements)
References 13 publications
“…Query-based summarization aims to generate a brief summary according to a source document and a given query. There are works studying this task (Daumé III and Marcu, 2006; Otterbacher et al., 2009; Wang et al., 2016; Litvak and Vanetik, 2017; Nema et al., 2017; Baumel et al., 2018; Ishigaki et al., 2020; Kulkarni et al., 2020; Laskar et al., 2020). However, the models focus on news (Dang, 2005, 2006), debate (Nema et al., 2017), and Wikipedia (Zhu et al., 2019).…”
Section: Query-based Summarization (mentioning)
confidence: 99%
“…They found utilizing transfer learning to be quite effective for the SD-QFAS task in that dataset. More recently, newer models based on the recurrent neural network architecture (Sutskever, Vinyals, and Le 2014) that did not utilize transfer learning failed to outperform the RSA model in terms of different ROUGE scores (Aryal and Chali 2020; Ishigaki et al. 2020). This may indicate that the utilization of transfer learning to tackle the few-shot learning problem has a strong effect on performance improvement in the Debatepedia dataset.…”
Section: Single-document Query-focused Abstractive Text Summarization (mentioning)
confidence: 99%
“…Here, 'R', 'P', and 'F' denote 'Recall', 'Precision', and 'F1', respectively, while 'QD' denotes 'Query-Document Attention' and 'BSA' denotes 'Bidirectional Self-Attention'. The results for the DDA, the Selection Driven, the Overlap-Wind, and the RSA model are collected from Nema et al. (2017), Aryal and Chali (2020), Ishigaki et al. (2020), and Baumel, Eyal, and Elhadad (2018), respectively. … on the MS-MARCO dataset.…”
Section: Table (mentioning)
confidence: 99%
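Since the R/P/F notation in the quoted table may be unfamiliar, here is a small sketch of how ROUGE-1 recall, precision, and F1 are typically computed from clipped unigram overlap. The function name and whitespace tokenization are assumptions for illustration, not tied to any cited evaluation script.

```python
# Hypothetical helper illustrating ROUGE-1 R/P/F from unigram overlap.
from collections import Counter

def rouge_1(candidate_tokens, reference_tokens):
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum((cand & ref).values())              # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)      # R: vs. reference length
    precision = overlap / max(sum(cand.values()), 1)  # P: vs. candidate length
    f1 = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return recall, precision, f1

# Example: a summary sharing 3 of 4 reference unigrams scores R=P=F=0.75.
print(rouge_1("the cat sat down".split(), "the cat sat up".split()))
```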