Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.457

A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection

Abstract: An essential task of most Question Answering (QA) systems is to re-rank the set of answer candidates, i.e., Answer Sentence Selection (AS2). These candidates are typically sentences either extracted from one or more documents, preserving their natural order, or retrieved by a search engine. Most state-of-the-art approaches to the task use huge neural models, such as BERT, or complex attentive architectures. In this paper, we argue that by exploiting the intrinsic structure of the original rank together with an e…
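To make the AS2 setting concrete: given a question and a list of candidate sentences, a re-ranker scores each (question, candidate) pair and sorts the candidates by score. The sketch below is a minimal illustration of that interface, not the paper's method; the word-overlap scorer is a toy stand-in for the neural rankers (e.g., BERT-based models) the abstract mentions, and all names in it are hypothetical.

```python
# Minimal sketch of Answer Sentence Selection (AS2) as candidate re-ranking.
# The word-overlap scorer is a toy stand-in for a neural model such as a
# BERT re-ranker; all names here are illustrative, not from the paper.

def score(question: str, candidate: str) -> float:
    """Toy relevance score: fraction of question tokens found in the candidate."""
    q_tokens = set(question.lower().split())
    c_tokens = set(candidate.lower().split())
    return len(q_tokens & c_tokens) / max(len(q_tokens), 1)

def rerank(question: str, candidates: list[str]) -> list[str]:
    """Return the candidates sorted by descending relevance to the question."""
    return sorted(candidates, key=lambda c: score(question, c), reverse=True)

if __name__ == "__main__":
    question = "when was coling 2020 held"
    # Candidates as they might arrive from a retrieved document, in natural order.
    candidates = [
        "The proceedings were published by the ACL.",
        "COLING 2020 was held online in December 2020.",
        "AS2 systems re-rank such candidate sentences.",
    ]
    for sentence in rerank(question, candidates):
        print(sentence)
```

In a real system the scoring function would be replaced by a trained model over (question, candidate) pairs; the re-ranking loop itself stays the same.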

Cited by 7 publications (5 citation statements)
References 28 publications
“…• Joint Model Multi-classifier performs lower than PR for all measures and all datasets. This is in line with the findings of Bonadiman and Moschitti (2020), who also did not obtain improvements when jointly using all the candidates together in one representation.…”
Section: Comparative Results (supporting, confidence: 90%)
“…The loss function for training such networks is constituted by the contribution of all elements of its ranked items. The closest work to our research is by Bonadiman and Moschitti (2020), who designed several joint models. These improved early neural networks based on CNN and LSTM for AS2, but failed to improve the state of the art using pre-trained Transformer models.…”
Section: Answer Sentence Selection (AS2) (mentioning, confidence: 99%)
“…Answer Sentence Selection: TANDA (Garg et al., 2020) established the SOTA for AS2 using a large dataset (ASNQ) for transfer learning. Other approaches for AS2 include: separate encoders for questions and answers (Bonadiman and Moschitti, 2020), and compare-aggregate and clustering to improve answer relevance ranking (Yoon et al., 2019). …and SOP (Lan et al., 2020) have been widely explored for transformers to improve accuracy on downstream classification tasks.…”
Section: Related Work (mentioning, confidence: 99%)
“…Then, Bonadiman and Moschitti (2020) attempted to design several joint models, which improved early neural models for AS2 but, when used in Transformer-based rerankers, failed to improve the state of the art. Jin et al. (2020) used the relations between candidates in a multi-task learning approach for AS2, but as they do not exploit Transformer models, their results are rather below the state of the art.…”
Section: Related Work (mentioning, confidence: 99%)