2020
DOI: 10.48550/arxiv.2010.06467
Preprint

Pretrained Transformers for Text Ranking: BERT and Beyond

Abstract: The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query for a particular task. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing applications. This survey provides an overview of text ranking with neural network architectures known as transformers, of which BERT is the best-known example. The combination of transformers and self-supervised pretraining has, without ex…

Cited by 62 publications (65 citation statements)
References 359 publications (408 reference statements)
“…The success of pre-trained transformer-based language models such as BERT [19] and T5 [47] on several IR benchmarks (a comprehensive account of the effectiveness gains can be found in [29]) has led to research on understanding their behaviour and the reasons behind their significant gains in ranking effectiveness [12, 32, 43, 46, 61].…”
Section: Model Understanding
confidence: 99%
“…We use ranking models that range from traditional lexical models (Trad) such as BM25, to neural ranking models (NN) such as KNRM, to neural ranking models that employ transformer-based language models (TNN) such as BERT. For all of our experiments, we apply BM25 as a first-stage retriever and re-rank the top 100 results with the neural ranking models, which is an established and efficient approach [29].…”
Section: Ranking Models
confidence: 99%
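The statement above describes the standard two-stage pipeline the survey covers: a cheap lexical first stage (BM25) that produces a candidate list, followed by a transformer-based re-ranker over the top results. Below is a minimal sketch of that pattern, assuming the rank_bm25 and sentence_transformers packages; the toy corpus, the top-100 cutoff, and the cross-encoder checkpoint name are illustrative choices, not the setup used by the citing paper.

```python
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

# Toy corpus; in practice this is the document collection being searched.
corpus = [
    "BM25 is a classical lexical ranking function.",
    "BERT is a pretrained transformer encoder.",
    "Cross-encoders re-rank query-document pairs jointly.",
]

# First stage: BM25 over whitespace-tokenized documents.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "transformer re-ranking"
scores = bm25.get_scores(query.lower().split())

# Keep the top-k candidates (the cited setup re-ranks the top 100).
top_k = min(100, len(corpus))
candidates = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:top_k]

# Second stage: score (query, document) pairs with a transformer cross-encoder.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pair_scores = reranker.predict([(query, corpus[i]) for i in candidates])

# Final ranking: candidates ordered by the re-ranker's scores.
reranked = [i for _, i in sorted(zip(pair_scores, candidates), reverse=True)]
print(reranked)
```

The design point is efficiency: the expensive cross-encoder only sees the shortlist produced by the first stage, not the whole corpus.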
“…In this section, we briefly review relevant work in information retrieval and the application of machine learning to this problem. This is not an exhaustive review, and we refer the reader to Manning et al. (2008), Mitra et al. (2018), and Lin et al. (2020) for a more complete introduction to the field.…”
Section: Related Work
confidence: 99%
“…Our work is orthogonal to the extensive research on neural encoders in that we provide a new … Learning to rank (LTR) is a long-established interdisciplinary research area at the intersection of machine learning and information retrieval (Liu, 2009). Neural rankers have recently come to dominate ranking across virtually all modalities, including text ranking (Lin et al., 2020), image retrieval (Gordo et al., 2016), and tabular data ranking (Qin et al., 2021). Many LTR papers focus on more effective loss functions (Qin et al., 2010; Bruch et al., 2020) to rank items with respect to a query.…”
Section: Related Work
confidence: 99%
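The LTR work cited above is largely about the choice of ranking loss. As a generic illustration (not code from the survey or any of the cited papers), the sketch below shows a pairwise hinge loss in PyTorch: given model scores for a relevant and a non-relevant item for the same query, it penalizes pairs where the relevant item does not outscore the non-relevant one by at least a margin.

```python
import torch

def pairwise_hinge_loss(pos_scores: torch.Tensor,
                        neg_scores: torch.Tensor,
                        margin: float = 1.0) -> torch.Tensor:
    # For each (relevant, non-relevant) pair, incur loss when the relevant
    # item's score does not exceed the non-relevant item's score by `margin`.
    return torch.clamp(margin - (pos_scores - neg_scores), min=0.0).mean()

# Toy example: scores produced by some ranking model for three pairs.
pos = torch.tensor([2.1, 1.4, 0.9])
neg = torch.tensor([1.8, 0.2, 1.1])
print(pairwise_hinge_loss(pos, neg))  # ~0.6333
```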