2021
DOI: 10.48550/arxiv.2106.00104
Preprint

Text Summarization with Latent Queries

Abstract: The availability of large-scale datasets has driven the development of neural models that create summaries from single documents for generic purposes. When using a summarization system, users often have specific intents with various language realizations, which, depending on the information need, can range from a single keyword to a long narrative composed of multiple questions. Existing summarization systems, however, often either fail to support or act robustly on this query-focused summarization task. We i…

Cited by 3 publications (4 citation statements)
References 39 publications
“…The topic of query-focused abstractive summarization has remained underexplored for the multi-document scenario as well (Kulkarni et al. 2020). More importantly, the currently available query-focused multi-document abstractive summarization (MD-QFAS) datasets (e.g., DUC 2005, 2006, 2007) do not contain any labeled training data, that is, these datasets only provide test data (Baumel, Eyal, and Elhadad 2018; Goodwin, Savery, and Demner-Fushman 2020; Su et al. 2020; Xu and Lapata 2021). To tackle the lack of training data for the MD-QFAS task, most previous work was based on various unsupervised approaches that could only generate extractive summaries (Wang et al. 2008; Haghighi and Vanderwende 2009; Wan and Xiao 2009; Yao, Wan, and Xiao 2015; Zhong et al. 2015; Wan and Zhang 2014; Ma, Deng, and Yang 2016; Feigenblat et al. 2017).…”
Section: Multi-document Query-focused Abstractive Text Summarization
confidence: 99%
“…Query-focused text summarization is a specific type of summarization that generates a summary of the given text that is focused on answering a specific question (Laskar et al., 2020c) or addressing a particular topic, rather than providing a general overview of the text (Baumel et al., 2018; Goodwin et al., 2020; Su et al., 2020; Xu and Lapata, 2021; Laskar et al., 2020a,b, 2022).…”
Section: Introduction
confidence: 99%
“…Supervised approaches to query-focused summarisation have the inherent problem of the paucity of annotated data. This problem has been highlighted, for example, by [1], and the biomedical domain is no exception. The BioASQ Challenge provides annotated data for multiple tasks, including question answering [2].…”
Section: Introduction
confidence: 99%
“…This paper describes our contribution to the BioASQ Synergy task and phase B of the BioASQ9b challenge. For the BioASQ Synergy task, we use a system that has been trained on the BioASQ8b training data, whereas for phase B of the BioASQ9b challenge we explore the use of Transformer architectures. In particular, we integrate BERT variants and fine-tune them with the BioASQ9b training data.…”
Section: Introduction
confidence: 99%
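The last statement above mentions integrating BERT variants and fine-tuning them on the BioASQ9b training data. The sketch below is only a rough illustration of that kind of setup, not the cited authors' actual pipeline: it fine-tunes a BERT-style encoder to decide whether a candidate sentence belongs in the answer for a given question, which is one common way to build a query-focused extractive summarizer. The model checkpoint, toy data fields, and hyperparameters are assumptions made for the example.

```python
# Illustrative only: fine-tune a BERT variant to score (question, sentence) pairs
# for query-focused extractive summarization. Model name, toy data, and
# hyperparameters are assumptions, not taken from the cited BioASQ systems.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "dmis-lab/biobert-base-cased-v1.1"  # one possible BERT variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy training pairs: label 1 = the sentence should be part of the ideal answer.
train = Dataset.from_dict({
    "question": ["Which receptor does SARS-CoV-2 use for cell entry?",
                 "Which receptor does SARS-CoV-2 use for cell entry?"],
    "sentence": ["SARS-CoV-2 uses ACE2 as its cell-entry receptor.",
                 "The study enrolled 120 patients across three hospitals."],
    "label": [1, 0],
})

def encode(batch):
    # Encode question and sentence together as a standard BERT sentence pair.
    return tokenizer(batch["question"], batch["sentence"],
                     truncation=True, padding="max_length", max_length=256)

train = train.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qfs-bert", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()  # candidate sentences can then be ranked by their label-1 probability
```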