Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing 2021
DOI: 10.18653/v1/2021.sustainlp-1.8

Learning to Rank in the Age of Muppets: Effectiveness–Efficiency Tradeoffs in Multi-Stage Ranking

Abstract: It is well known that rerankers built on pretrained transformer models such as BERT have dramatically improved retrieval effectiveness in many tasks. However, these gains have come at substantial costs in terms of efficiency, as noted by many researchers. In this work, we show that it is possible to retain the benefits of transformer-based rerankers in a multi-stage reranking pipeline by first using feature-based learning-to-rank techniques to reduce the number of candidate documents under consideration withou…
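
The pipeline the abstract describes (cheap first-stage retrieval, a feature-based learning-to-rank stage that prunes candidates, then a transformer reranker over the survivors) can be sketched as follows. This is a minimal illustration, not the authors' code: bm25_index, ltr_model, bert_reranker, and featurize are assumed placeholder objects.

```python
# Minimal sketch of a three-stage ranking pipeline, assuming hypothetical
# bm25_index, ltr_model, bert_reranker, and featurize objects.

def multi_stage_rank(query, bm25_index, ltr_model, bert_reranker, featurize,
                     first_stage_k=1000, rerank_k=100):
    # Stage 1: cheap lexical retrieval over the whole collection.
    candidates = bm25_index.search(query, k=first_stage_k)

    # Stage 2: feature-based learning to rank prunes the candidate pool
    # (features might include the BM25 score, term overlap, document length).
    ltr_scores = ltr_model.predict([featurize(query, doc) for doc in candidates])
    pruned = [doc for _, doc in sorted(zip(ltr_scores, candidates),
                                       key=lambda pair: pair[0],
                                       reverse=True)[:rerank_k]]

    # Stage 3: the expensive transformer reranker only scores rerank_k
    # documents, cutting its per-query cost by roughly first_stage_k / rerank_k.
    return sorted(pruned,
                  key=lambda doc: bert_reranker.score(query, doc),
                  reverse=True)
```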

Cited by 10 publications (8 citation statements). References 24 publications.
“…Table 2 shows retrieval performance on the BSARD test set. Although we report model performance on two rank-aware metrics (i.e., mAP and mRP), we emphasize that our approach is specifically aimed at improving the pre-fetching component of a retriever (Zhang et al., 2021a) and therefore focuses on optimizing rank-unaware metrics (i.e., R@k). First, we compare the performance of our proposed G-DSR model (8) against other well-known retrieval approaches and find it significantly outperforms all of them on SAR.…”
Section: Results (mentioning, confidence: 99%)
“…In this paper, we investigate multiple linear and non-linear interpolation ensemble methods and analyze their performance for combining BM25 and CE CAT scores in comparison to CE BM25CAT. For the sake of a fair analysis, we do not compare CE BM25CAT with a learning-to-rank approach that is trained on 87 features by [65]. The use of ensemble methods brings additional overhead in terms of efficiency because it adds an extra step to the re-ranking pipeline.…”
Section: Related Work (mentioning, confidence: 99%)
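
The interpolation this excerpt refers to can be illustrated with a small sketch: a convex combination of normalized BM25 and cross-encoder (CE) scores. The min-max normalization and the weight value are assumptions for illustration, not taken from the cited papers.

```python
# Linear interpolation of BM25 and cross-encoder scores for one query's
# candidate list; the normalization scheme and alpha are illustrative choices.

def minmax(scores):
    lo, hi = min(scores), max(scores)
    return [0.0 if hi == lo else (s - lo) / (hi - lo) for s in scores]

def interpolate(bm25_scores, ce_scores, alpha=0.5):
    return [alpha * b + (1 - alpha) * c
            for b, c in zip(minmax(bm25_scores), minmax(ce_scores))]

# Example with three candidate documents:
print(interpolate([12.3, 9.8, 4.1], [0.91, 0.87, 0.12], alpha=0.3))
```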
“…Additionally, the theoretical analysis of duoBERT and duoT5 is still in its infancy; previous work even found such models difficult to interpret [27, 41]. The effectiveness of duoBERT or duoT5 relies on computing preferences for all pairs of documents, at the expense of efficiency [45], limiting their applicability in search scenarios with run-time constraints.…” (footnote in source: https://github.com/webis-de/ICTIR-22)
Section: Related Work (mentioning, confidence: 99%)
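
The quadratic cost mentioned above follows directly from scoring every ordered pair of candidates. A rough sketch, with a stand-in preference function rather than the actual duoBERT/duoT5 models:

```python
# Pairwise reranking over all ordered pairs needs n * (n - 1) model calls;
# `prefer` stands in for a duoBERT/duoT5-style preference model.
from itertools import permutations

def pairwise_rerank(query, docs, prefer):
    agg = {doc: 0.0 for doc in docs}          # aggregated preference per doc
    calls = 0
    for d_i, d_j in permutations(docs, 2):    # all ordered pairs
        agg[d_i] += prefer(query, d_i, d_j)   # P(d_i should rank above d_j)
        calls += 1
    return sorted(docs, key=lambda d: agg[d], reverse=True), calls

# Even a modest candidate list of 50 documents costs 50 * 49 = 2,450 inferences.
```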
“…The high computational cost of re-ranking documents with pre-trained transformers has recently received attention [20]. Even for pointwise approaches, the inference overhead can be prohibitive for practical applications [45]. There are two ideas to improve the efficiency of neural re-rankers: (1) improving the efficiency of the ranking model, and (2) reducing the required number of inferences.…”
Section: Related Work (mentioning, confidence: 99%)
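
Idea (2), reducing the number of inferences, is exactly what pruning the candidate list buys: a pointwise reranker's per-query cost grows linearly with reranking depth. A back-of-the-envelope sketch (the 50 ms per-document latency is an assumed placeholder):

```python
# Per-query reranking cost scales linearly with the number of documents scored.
def rerank_cost_ms(depth, ms_per_doc=50.0):
    return depth * ms_per_doc

for depth in (1000, 100, 20):
    print(f"depth={depth:4d}: ~{rerank_cost_ms(depth):,.0f} ms per query")
```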