Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.640

Matching-oriented Embedding Quantization For Ad-hoc Retrieval

Abstract: Product quantization (PQ) is a widely used technique for ad-hoc retrieval. Recent studies propose supervised PQ, where the embedding and quantization models can be jointly trained with supervised learning. However, there is a lack of appropriate formulation of the joint training objective; thus, the improvements over previous non-supervised baselines are limited in reality. In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is …
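For orientation, product quantization compresses a document embedding by splitting it into sub-vectors and replacing each sub-vector with the id of its nearest codeword; retrieval then scores queries against these compact codes. Below is a minimal NumPy sketch of this idea; the sizes, names, and random codebooks are illustrative assumptions, not MoPQ's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, d = 4, 256, 128                     # sub-vectors, codewords per codebook, embedding dim
sub = d // M                              # dimensionality of each sub-vector
codebooks = rng.normal(size=(M, K, sub))  # one codebook per sub-space (illustrative)

def quantize(doc_emb):
    """Replace each sub-vector with the id of its nearest codeword."""
    codes = np.empty(M, dtype=np.int64)
    for m in range(M):
        x = doc_emb[m * sub:(m + 1) * sub]
        codes[m] = np.argmin(((codebooks[m] - x) ** 2).sum(axis=1))
    return codes                          # M small integers instead of d floats

def approx_score(query_emb, codes):
    """Approximate the query-document inner product from the compact code."""
    return sum(query_emb[m * sub:(m + 1) * sub] @ codebooks[m, codes[m]]
               for m in range(M))

doc, query = rng.normal(size=d), rng.normal(size=d)
print(approx_score(query, quantize(doc)), "vs. exact:", query @ doc)
```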

Cited by 9 publications (9 citation statements)
References 18 publications

“…One potential problem with the above works is that VQ is learned purely based on document embeddings, whereas the relationships with queries are not taken into consideration. Besides, it is also analyzed in [45] that the reduction of the reconstruction loss does not necessarily improve the retrieval quality. Realizing such defects, the latest works (e.g., MoPQ [45], JPQ [50], RepCONC [51]) propose to jointly learn VQ and the embedding models via contrastive learning, in which the discrimination of the ground-truth documents for each query can be optimized (referred to as retrieval loss minimization), as shown in Figure 1 (C).…”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…Besides, it is also analyzed in [45] that the reduction of the reconstruction loss does not necessarily improve the retrieval quality. Realizing such defects, the latest works (e.g., MoPQ [45], JPQ [50], RepCONC [51]) propose to jointly learn VQ and the embedding models via contrastive learning, in which the discrimination of the ground-truth documents for each query can be optimized (referred to as retrieval loss minimization), as shown in Figure 1 (C). Compared with the previous minimization of the reconstruction loss, retrieval loss minimization achieves substantial improvements in retrieval performance.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
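The contrast these excerpts draw can be made concrete: reconstruction loss only pulls codewords toward the document embeddings and never sees the queries, whereas retrieval loss minimization trains the codebooks so that each query discriminates its ground-truth document against negatives. Below is a minimal PyTorch sketch of such a contrastive retrieval loss, using a soft codeword assignment so gradients reach the codebooks; the sizes, the temperature tau, and the soft-assignment trick are illustrative assumptions, not the exact formulations of MoPQ, JPQ, or RepCONC:

```python
import torch
import torch.nn.functional as F

M, K, sub = 4, 256, 32                        # illustrative sizes
codebooks = torch.nn.Parameter(torch.randn(M, K, sub))

def soft_quantize(doc_emb, tau=1.0):
    """Soft codeword assignment, so gradients reach the codebooks."""
    x = doc_emb.view(doc_emb.size(0), M, 1, sub)
    dist = ((x - codebooks) ** 2).sum(-1)     # (B, M, K) squared distances
    attn = F.softmax(-dist / tau, dim=-1)     # soft assignment weights
    quant = torch.einsum('bmk,mks->bms', attn, codebooks)
    return quant.reshape(doc_emb.size(0), -1) # (B, M*sub)

def retrieval_loss(query_emb, doc_emb):
    """Contrastive objective: each query must pick out its ground-truth
    document among in-batch negatives, scored on quantized doc vectors."""
    scores = query_emb @ soft_quantize(doc_emb).T   # (B, B) similarities
    labels = torch.arange(scores.size(0))           # diagonal = positives
    return F.cross_entropy(scores, labels)

B = 8
queries = torch.randn(B, M * sub)
docs = torch.randn(B, M * sub, requires_grad=True)
retrieval_loss(queries, docs).backward()  # grads flow to docs AND codebooks
# Contrast: a pure reconstruction loss, ((soft_quantize(docs) - docs) ** 2).mean(),
# would optimize the codewords without ever involving the queries.
```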