Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
DOI: 10.1145/3459637.3482162

Query Embedding Pruning for Dense Retrieval

Abstract: There may be differences between this version and the published version. You are advised to consult the publisher's version if you wish to cite from it.


Citations: Cited by 17 publications (6 citation statements: 0 supporting, 6 mentioning, 0 contrasting)
References: 15 publications
“…In [133], the authors propose a method called DensePhrases, which uses phrase retrieval to learn passage retrieval. In [134], the authors propose a method for pruning query embeddings to improve dense retrieval performance. In [135], the authors propose a method for learning multi-view document representations to improve open-domain dense retrieval.…”
Section: Multi-vector Representation
Citation type: mentioning (confidence: 99%)
“…in the IR setting. Tonellotto and Macdonald [48] prune the embeddings of the query terms; however, they do not reduce the number of embeddings for the documents in the index.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
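
The excerpt above describes the cited paper's core idea: dropping low-information query token embeddings before retrieval, while leaving the document-side index untouched. Below is a minimal sketch of one such strategy, assuming IDF-based scoring over tokens; the function and variable names are illustrative, not taken from the cited paper.

import math

def idf(term, doc_freq, num_docs):
    # Standard inverse document frequency; terms occurring in many
    # documents score low and are pruned first.
    return math.log(num_docs / (1 + doc_freq.get(term, 0)))

def prune_query_embeddings(tokens, embeddings, doc_freq, num_docs, keep):
    # Rank query token embeddings by the IDF of their surface token and
    # retain only the `keep` most informative ones; the document-side
    # embeddings are untouched, as noted in the excerpt above.
    ranked = sorted(zip(tokens, embeddings),
                    key=lambda te: idf(te[0], doc_freq, num_docs),
                    reverse=True)
    kept = ranked[:keep]
    return [t for t, _ in kept], [e for _, e in kept]

# Example use: retain the 3 highest-IDF tokens of a tokenized query
# toks, embs = prune_query_embeddings(toks, embs, doc_freq, num_docs, keep=3)
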
“…embeddings in the top-ranked documents can be added to the query as a form of pseudo-relevance feedback (PRF) [28]). Furthermore, the per-token embeddings are interpretable [20], and exhibit correlations with IDF [5,27]. On the other hand, access to the voluminous accurate document embeddings in main memory required for the efficient application of the second stage is a major drawback of ColBERT.…”
Section: Multiple Representation Dense Retrieval
Citation type: mentioning (confidence: 99%)
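
For context, the multiple-representation scoring this last excerpt refers to is ColBERT's late interaction: a document's score is the sum, over query token embeddings, of each embedding's maximum dot product with any document token embedding. A minimal NumPy sketch follows, with shapes and names assumed for illustration. Pruning query embeddings shrinks the number of rows of Q, reducing both the candidate-generation lookups and this final scoring.

import numpy as np

def maxsim_score(Q, D):
    # ColBERT-style late interaction (MaxSim): each query token embedding
    # contributes the maximum dot product it attains over the document's
    # token embeddings, and the per-token maxima are summed.
    # Q: (num_query_tokens, dim); D: (num_doc_tokens, dim).
    sim = Q @ D.T                      # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())
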