2020
DOI: 10.1007/978-3-030-43887-6_56

Classification Betters Regression in Query-Based Multi-document Summarisation Techniques for Question Answering

Cited by 3 publications (5 citation statements)
References 11 publications
“…Motivated by these shortcomings, we propose a novel, human-curated test set for QfS on well-formed questions consisting of 250 high-quality instances. RL for Summarization/QfS: The usage of RL in QfS has been limited to extractive summarization only (Mollá and Jones, 2019; Mollá et al., 2020; Chali and Mahmud, 2021; Shapira et al., 2022). Mollá and Jones (2019), Mollá et al. (2020), and Shapira et al. (2022) use RL to train sentence selector models, which select sentences to be incorporated into the summary. Chali and Mahmud (2021) present a hybrid summarization, where an extractive module selects text from the document, which is then used by the abstractive module to generate an abstractive summary.…”
Section: Related Work
confidence: 99%
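The RL sentence-selection setup described in this statement can be sketched compactly. Below is a minimal, illustrative REINFORCE-style selector: a logistic policy scores each candidate sentence from a feature vector, samples a subset, and is updated with the sampled subset's reward. The feature design, the toy unigram-recall reward (a stand-in for ROUGE), and the hyperparameters are placeholders, not the cited systems' actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(selected, sentence_words, reference_words):
    # Toy stand-in for ROUGE: unigram recall of the reference summary.
    chosen = set()
    for i in selected:
        chosen |= sentence_words[i]
    return len(chosen & reference_words) / max(len(reference_words), 1)

def train_selector(features, sentence_words, reference_words,
                   epochs=200, lr=0.5):
    """REINFORCE for a Bernoulli per-sentence inclusion policy."""
    w = np.zeros(features.shape[1])                  # policy weights
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-features @ w))      # inclusion probabilities
        actions = rng.random(len(p)) < p             # sample a sentence subset
        r = reward(np.flatnonzero(actions), sentence_words, reference_words)
        # Gradient of the log-likelihood of the sampled subset:
        # sum_i (a_i - p_i) x_i, scaled by the episode reward.
        w += lr * r * (features.T @ (actions - p))
    return w
```

In practice a learned baseline (to reduce gradient variance) and a real ROUGE implementation would replace the toy recall reward.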
“…The "MQ" team, as in past years, focused on ideal answers, approaching the task as query-based summarisation. In some of their systems the retrain their previous classification and regression approaches [28] in the new training dataset. In addition, they also employ reinforcement learning with Proximal Policy Optimization (PPO) [41] and two variants to represent the input features, namely Word2Vec-based and BERT-based embeddings.…”
Section: Systemsmentioning
confidence: 99%
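For concreteness, here is a hedged sketch of the PPO clipped-surrogate update applied to the same Bernoulli sentence-selection policy as in the REINFORCE sketch above. The gradient of the probability ratio is worked out by hand for the logistic policy, and the hyperparameters (eps, lr, steps) are illustrative defaults, not values reported for the "MQ" systems.

```python
import numpy as np

def ppo_update(w, features, actions, old_p, advantage,
               lr=0.1, eps=0.2, steps=4):
    """Clipped-surrogate PPO step for a logistic per-sentence policy.

    actions:   0/1 inclusion decisions sampled from the old policy;
    old_p:     inclusion probabilities under that old policy;
    advantage: scalar reward minus a baseline for the sampled summary.
    """
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-features @ w))
        # Probability ratio pi_new(a_i) / pi_old(a_i) per sentence.
        ratio = np.where(actions, p, 1.0 - p) / np.where(actions, old_p, 1.0 - old_p)
        unclipped = ratio * advantage
        clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
        active = unclipped <= clipped          # where the min picks the unclipped term
        # d ratio / d w for the logistic policy, per action value:
        # a=1 gives ratio*(1-p)*x, a=0 gives -ratio*p*x.
        dratio = np.where(actions, ratio * (1.0 - p), -ratio * p)[:, None] * features
        grad = (active * advantage)[:, None] * dratio
        w = w + lr * grad.mean(axis=0)         # ascend the clipped surrogate
    return w
```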
“…Finally, the "sbert" team, also focused on ideal answers. They experimented with different embedding models and multi-task learning in their systems, using parts from previous "MQU " systems for the pre-processing of data and the prediction step based on classification and regression [28]. In particular, they used a Universal Sentence Embedding Model [9] (BioBERT-NLI 16 ) based on a version of BioBERT fine-tuned on the the SNLI [6] and the MultiNLI datasets as in Sentence-BERT [39].…”
Section: Systemsmentioning
confidence: 99%
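A rough sketch of that pipeline follows, assuming the sentence-transformers library with a generic public encoder as a stand-in for the BioBERT-NLI model named above. The pair representation (concatenation plus elementwise product) is a common SBERT-style choice, not a detail confirmed by the overview.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Placeholder encoder; the "sbert" systems used a BioBERT-NLI variant.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def pair_features(question, sentences):
    # Embed the question and each candidate sentence, then build an
    # SBERT-style pair representation: [q; s; q * s].
    q = encoder.encode([question])[0]
    s = encoder.encode(sentences)
    q_tiled = np.tile(q, (len(sentences), 1))
    return np.hstack([q_tiled, s, q_tiled * s])

# X: pair features for (question, candidate sentence) pairs;
# y: 1 if the candidate belongs in the ideal answer, else 0.
# With labelled training data in hand, the prediction step is e.g.:
#   clf = LogisticRegression(max_iter=1000).fit(X, y)
```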
“…The Macquarie University ("MQU") team focused on ideal answers and approached the task as a classification problem over snippet relevance [33]. Extending their previous work [31,32], they mark snippets as summary-relevant or not, using word2vec embeddings and tf-idf vectors of the question-sentence pairs, showing that a classification scheme is more appropriate than a regression one.…”
Section: Task 7b
confidence: 99%
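The classification-versus-regression contrast at the heart of the cited paper can be illustrated in a few lines. The toy data, the single tf-idf cosine-similarity feature, and the 0.5 threshold below are illustrative choices, not the paper's actual features or dataset.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

question = "Which drugs treat hypertension?"
sentences = [
    "ACE inhibitors are widely used to treat hypertension.",
    "The study enrolled 120 patients over two years.",
    "Beta blockers lower blood pressure in hypertensive patients.",
    "Funding was provided by the national research council.",
]
# Toy supervision: a relevance score in [0, 1] per sentence (the
# regression target) and its thresholded binary counterpart (the
# classification target).
scores = np.array([0.9, 0.1, 0.8, 0.0])
labels = (scores >= 0.5).astype(int)

vec = TfidfVectorizer().fit([question] + sentences)
sim = cosine_similarity(vec.transform(sentences),
                        vec.transform([question]))   # (n, 1) feature

clf = LogisticRegression().fit(sim, labels)   # classification framing
reg = LinearRegression().fit(sim, scores)     # regression framing

# At prediction time the classifier's probability (or the regressor's
# score) ranks candidate sentences; the top-ranked ones form the summary.
ranking = np.argsort(-clf.predict_proba(sim)[:, 1])
print([sentences[i] for i in ranking])
```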