Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022
DOI: 10.1145/3477495.3531774

Faster Learned Sparse Retrieval with Guided Traversal

Cited by 21 publications (19 citation statements)
References 24 publications
“…LSR is compatible with many techniques from sparse retrieval, such as inverted indexing and accompanying query processing algorithms. However, differences in LSR weights can mean that existing query processing optimizations become much less helpful, motivating new optimizations [21,22,24].…”
Section: Learned Sparse Retrieval (mentioning)
confidence: 99%
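
The statement above notes that learned sparse retrieval (LSR) output can be stored in an ordinary inverted index and queried with standard sparse-retrieval algorithms. The following is a minimal sketch of that idea, assuming hypothetical learned document vectors and a simple linear 8-bit quantization; it illustrates the general technique only and is not the indexing pipeline of the paper or of any cited system.

# Sketch: quantize learned sparse term weights into an impact-scored inverted index.
# The document vectors and the 8-bit linear quantization are illustrative assumptions.
from collections import defaultdict

# Hypothetical learned sparse vectors: docid -> {term: float weight from the encoder}
learned_vectors = {
    0: {"sparse": 2.31, "retrieval": 0.87},
    1: {"sparse": 0.42, "learned": 3.05, "retrieval": 1.66},
}

def build_impact_index(vectors, bits=8):
    """Linearly quantize float weights to integer impacts and build posting lists."""
    max_w = max(w for vec in vectors.values() for w in vec.values())
    levels = (1 << bits) - 1
    index = defaultdict(list)  # term -> [(docid, impact), ...] sorted by docid
    for docid in sorted(vectors):
        for term, w in vectors[docid].items():
            impact = max(1, round(w / max_w * levels))
            index[term].append((docid, impact))
    return dict(index)

index = build_impact_index(learned_vectors)
print(index["retrieval"])   # [(0, 73), (1, 139)] with 8-bit quantization

Because the index format is unchanged, existing document-at-a-time or score-at-a-time traversal code can run over it directly; the quoted observation is that the weight distributions produced by LSR make the usual pruning optimizations less effective on such an index.
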
“…Similarly, uniCOIL [24] extends the work of COIL [15] for contextualized term weights. Document retrieval with term weights learned from a transformer has been found slow in [29,31]. Mallia et al [31] state that the MaxScore retrieval algorithm does not efficiently exploit the DeepImpact scores.…”
Section: Background and Related Work (mentioning)
confidence: 99%
“…Document retrieval with term weights learned from a transformer has been found slow in [29,31]. Mallia et al [31] state that the MaxScore retrieval algorithm does not efficiently exploit the DeepImpact scores. Mackenzie et al [29] view that the learned sparse term weights are "wacky" as they affect document skipping during retrieval thus they advocate ranking approximation with score-at-a-time traversal.…”
Section: Background and Related Work (mentioning)
confidence: 99%
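
The skipping discussed in the statements above comes from dynamic pruning algorithms such as MaxScore, which compare per-term score upper bounds against the current top-k threshold so that documents which cannot enter the result list are never fully scored. Below is a simplified, self-contained sketch of that pruning logic over a toy impact-scored index; the index contents and helper names are assumptions for illustration, not the traversal code of the cited systems.

# Sketch of MaxScore-style dynamic pruning (document-at-a-time) over an
# impact-scored inverted index. Toy data; illustrative only.
import heapq
from bisect import bisect_left

# postings[term] = docid-sorted list of (docid, impact); max_impact[term] = per-term upper bound
postings = {
    "retrieval": [(1, 3), (4, 7), (9, 2)],
    "sparse":    [(1, 5), (7, 1), (9, 4)],
    "learned":   [(4, 2), (7, 6), (9, 1)],
}
max_impact = {t: max(w for _, w in plist) for t, plist in postings.items()}

def lookup(term, doc):
    """Binary-search the posting list for `doc`; 0 if absent (stands in for skip pointers)."""
    plist = postings[term]
    i = bisect_left(plist, (doc,))
    return plist[i][1] if i < len(plist) and plist[i][0] == doc else 0

def maxscore(query_terms, k=2):
    terms = sorted(query_terms, key=lambda t: max_impact[t])   # ascending upper bound
    ubounds = [max_impact[t] for t in terms]
    prefix = [0]
    for u in ubounds:
        prefix.append(prefix[-1] + u)                          # prefix[i] = sum of first i bounds
    heap, threshold = [], 0.0                                  # top-k min-heap and entry threshold
    # For brevity, enumerate candidate docids up front instead of advancing list cursors.
    candidates = sorted({d for t in terms for d, _ in postings[t]})
    for doc in candidates:
        # Leading (lowest-bound) terms whose combined upper bound cannot beat the
        # threshold are "non-essential"; a document found only in those lists is skipped.
        ness = 0
        while ness < len(terms) and prefix[ness + 1] <= threshold:
            ness += 1
        if not any(lookup(t, doc) for t in terms[ness:]):
            continue
        score, remaining = 0.0, prefix[len(terms)]
        for i, t in enumerate(terms):
            remaining -= ubounds[i]
            score += lookup(t, doc)
            if score + remaining <= threshold:                 # cannot enter top-k: abandon early
                score = None
                break
        if score is not None and score > threshold:
            heapq.heappush(heap, score)
            if len(heap) > k:
                heapq.heappop(heap)
            if len(heap) == k:
                threshold = heap[0]
    return sorted(heap, reverse=True)

print(maxscore(["retrieval", "sparse", "learned"], k=2))   # -> [9, 8]

The pruning pays off only when per-term upper bounds are well separated from the top-k threshold. The quoted argument from Mackenzie et al [29] is that "wacky" learned weight distributions shrink that separation, so fewer documents are skipped, which is why they advocate approximate ranking with score-at-a-time traversal instead.
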