2023
DOI: 10.1007/978-3-031-43415-0_37

MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders Are Better Dense Retrievers

Kun Zhou, Xiao Liu, Yeyun Gong, et al.

Cited by 2 publications (1 citation statement)
References 28 publications
“…In particular, we focus on dense retrieval (DR), which learns low-dimensional semantic embeddings for both queries and documents and then measures embedding similarities as relevance scores. To implement DR, a PLM-based dual-encoder has been widely adopted (Karpukhin et al., 2020; Xiong et al., 2020; Zhou et al., 2023), learning two separate encoders for relevance matching. Compared with sparse retrievers (e.g., BM25), DR models rely heavily on large-scale, high-quality training data to achieve good performance (Ren et al., 2021a; Zhou et al., 2022).…”
Section: Zero-shot Dense Retrieval
confidence: 99%
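
The dual-encoder setup described in the quoted passage can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not the paper's implementation: it assumes a BERT-style PLM loaded via Hugging Face Transformers, uses [CLS]-token pooling, and scores relevance with a dot product; the checkpoint name, pooling choice, and example texts are all illustrative assumptions.

```python
# Minimal dual-encoder sketch for dense retrieval.
# Assumptions (not from the paper): a generic BERT checkpoint,
# [CLS]-token pooling, and dot-product similarity as the relevance score.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # illustrative PLM choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Two separate encoders, one for queries and one for documents,
# matching the dual-encoder design in the quoted passage.
query_encoder = AutoModel.from_pretrained(MODEL_NAME)
doc_encoder = AutoModel.from_pretrained(MODEL_NAME)

def embed(encoder, texts):
    """Map texts to low-dimensional dense embeddings via [CLS] pooling."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] vector per input text

queries = ["what is dense retrieval?"]
docs = [
    "Dense retrieval encodes queries and documents into embeddings.",
    "BM25 is a sparse lexical retriever.",
]

q_emb = embed(query_encoder, queries)   # shape: (1, hidden_size)
d_emb = embed(doc_encoder, docs)        # shape: (2, hidden_size)
scores = q_emb @ d_emb.T                # embedding similarity as relevance score
print(scores)
```

Dot product is only one common choice of similarity; cosine similarity (normalizing embeddings first) is equally common. In practice the two encoders are trained with contrastive objectives on large labeled corpora, which is why the quoted passage stresses DR's dependence on large-scale, high-quality training data.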