Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3511955

Efficient Neural Ranking using Forward Indexes


Cited by 9 publications (8 citation statements)
References 31 publications
“…Re-ranking. Both LADR variants build upon the high effectiveness of the re-ranking technique [20,41]. Using a lexical model for reranking alone is a major drawback, however: it inherently limits the retrievable documents to those with lexical matches.…”
Section: Comparison With Existing Methods
Citation type: mentioning, confidence: 99%
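The two-stage setup this excerpt refers to can be summarized in a few lines. Below is a minimal Python sketch, with `bm25_retrieve` and `neural_score` as hypothetical stand-in callables rather than the API of any cited system. The lexical stage fixes the candidate pool, which is exactly why documents without lexical matches are unreachable.

```python
# Minimal sketch of lexical-first re-ranking. `bm25_retrieve` and
# `neural_score` are hypothetical stand-ins, passed in as callables.

def rerank(query, bm25_retrieve, neural_score, k=1000):
    # Stage 1: lexical retrieval. Documents sharing no terms with the
    # query never enter the pool -- the recall limitation noted above.
    candidates = bm25_retrieve(query, k=k)  # [(doc_id, lexical_score), ...]

    # Stage 2: the expensive neural model scores only the k candidates.
    rescored = [(doc_id, neural_score(query, doc_id)) for doc_id, _ in candidates]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)
```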
“…Most notably, they suffer in terms of recall, which is meant to be one of the key benefits of dense retrieval. Remarkably, simply re-ranking an initial pool of lexical results (such as BM25) remains highly competitive [20,41]. We posit that this is due to two reasons.…”
Section: R@1k LADR
Citation type: mentioning, confidence: 95%
“…TILDEv2 [34] uses exact contextualized term matching to reduce the memory requirements. Leonhardt et al [20] propose vector forward indexes that allow for efficient interpolation-based re-ranking using dual-encoders.…”
Section: Neural Retrieval and Ranking
Citation type: mentioning, confidence: 99%
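A minimal sketch of what such a vector forward index enables, assuming pre-computed document vectors keyed by document id and a linear interpolation weight `alpha` (an illustrative default here; the actual indexing and scoring details are in [20]):

```python
import numpy as np

# Sketch of interpolation-based re-ranking over a vector forward index:
# document vectors are pre-computed offline and looked up by id, so
# re-scoring a candidate costs one dot product. The interpolation weight
# `alpha` and the index layout are illustrative assumptions.

class ForwardIndex:
    """Maps doc_id -> pre-computed dense document vector."""

    def __init__(self, doc_vectors):
        self.doc_vectors = doc_vectors

    def get(self, doc_id):
        return self.doc_vectors[doc_id]

def interpolate_rerank(query_vec, candidates, index, alpha=0.5):
    """candidates: [(doc_id, lexical_score), ...] from a first-stage ranker."""
    results = []
    for doc_id, lex_score in candidates:
        sem_score = float(query_vec @ index.get(doc_id))  # forward-index lookup
        results.append((doc_id, alpha * lex_score + (1 - alpha) * sem_score))
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```

Because the document vectors are fetched from the index rather than recomputed, re-ranking requires only one query encoding per query plus a dot product per candidate.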
“…The relevance of a query-document pair is then computed as the dot product of the query and document representation vectors. This is referred to as two-tower, bi-encoder or dual-encoder architecture and has been used for retrieval [14,16,22] and re-ranking [15,20,35]. Typically, the query and document encoder either (1) are architecturally identical and initialized using the same pre-trained model or (2) even share their weights in a Siamese fashion.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
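The dot-product relevance computation described in this excerpt is simple to state in code. A minimal sketch, with the encoders as stand-in callables; in the Siamese variant, `encode_query` and `encode_doc` would be the same function:

```python
import numpy as np

# Dual-encoder relevance: query and document are encoded independently,
# and relevance is the dot product of the two vectors. The encoders are
# stand-ins; with shared (Siamese) weights one encoder serves both roles.

def relevance(query, doc, encode_query, encode_doc):
    q_vec = encode_query(query)  # computed online, once per query
    d_vec = encode_doc(doc)      # can be pre-computed offline for the corpus
    return float(np.dot(q_vec, d_vec))
```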