Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining 2022
DOI: 10.1145/3488560.3498495
Lightweight Composite Re-Ranking for Efficient Keyword Search with BERT

Abstract: Recently, transformer-based ranking models have been shown to deliver high relevance for document search, and the relevance-efficiency tradeoff becomes important for fast query response times. This paper presents BECR (BERT-based Composite Re-Ranking), a lightweight composite re-ranking scheme that combines deep contextual token interactions and traditional lexical term-matching features. BECR conducts query decomposition and composes a query representation using pre-computable token embeddings based on uni-grams…
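The composite re-ranking idea in the abstract — combining a traditional lexical term-matching signal with a deep contextual one — can be sketched as a simple score interpolation. This is an illustrative assumption, not the paper's actual formula: the function names, the BM25 parameterization, and the interpolation weight `alpha` are all hypothetical.

```python
import math

def bm25_term_score(tf, df, n_docs, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """Standard BM25 contribution of a single query term to a document's score."""
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm

def composite_score(bm25_score, neural_score, alpha=0.5):
    """Linear interpolation of a lexical score and a neural relevance score."""
    return alpha * bm25_score + (1 - alpha) * neural_score
```

In such a scheme the neural score would come from pre-computed token embeddings, so only the cheap interpolation runs at query time.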



Cited by 8 publications (2 citation statements)
References 41 publications (64 reference statements)
“…Our scheme uses a hybrid combination of BM25 and learned term weights, motivated by the previous work in composing lexical and neural ranking [16,22,25,26,42]. GTI adopts that for final ranking.…”
Section: Background and Related Work
Confidence: 99%
“…Our skipping and final ranking adopts a hybrid formula to bound and combine rank scores based on BM25 weights and learned term weights. That is motivated by the recent studies in composing lexical and neural models in re-ranking [43] and in combining scores from sparse retrieval and dense retrieval [15,22,23]. We choose VBMW [29] to demonstrate our scheme because VBMW is generally acknowledged to represent the state of the art [27] for many cases.…”
Section: Background and Related Work
Confidence: 99%
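The skipping idea quoted above — bounding a hybrid score so that unpromising documents can be skipped before full evaluation — can be sketched as follows. The bound composition, the threshold test, and all names here are illustrative assumptions rather than the cited papers' exact formulas.

```python
def hybrid_upper_bound(max_bm25, max_learned, alpha=0.5):
    """Best-case hybrid score a document could achieve, from score maxima."""
    return alpha * max_bm25 + (1 - alpha) * max_learned

def can_skip(per_term_max_bm25, per_term_max_learned, threshold, alpha=0.5):
    """Skip a candidate if even its best-case hybrid score cannot beat the
    current top-k threshold; summing per-term maxima gives a safe upper bound."""
    bound = hybrid_upper_bound(sum(per_term_max_bm25),
                               sum(per_term_max_learned), alpha)
    return bound < threshold
```

Because the bound only over-estimates, skipping on it never discards a document that could have entered the final top-k under the same hybrid formula.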