2022
DOI: 10.48550/arxiv.2204.11673
Preprint
Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking

Abstract: Passage re-ranking aims to produce a permutation over the candidate passage set returned by the retrieval stage. Re-rankers have been greatly advanced by Pre-trained Language Models (PLMs) due to their overwhelming advantages in natural language understanding. However, existing PLM-based re-rankers may easily suffer from vocabulary mismatch and a lack of domain-specific knowledge. To alleviate these problems, explicit knowledge contained in knowledge graphs is carefully introduced in our work. Specifically, we employ the existing knowle…
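The abstract is truncated on this page, so the paper's exact knowledge-injection mechanism is not recoverable here. As a rough illustration only, a minimal PLM cross-encoder re-ranker of the kind the abstract builds on might look like the sketch below; the checkpoint name, the rerank helper, and the optional entity_glosses argument are my own stand-ins and not the paper's method.

```python
# Minimal sketch of a PLM cross-encoder passage re-ranker (illustrative, not the
# paper's model). A public MS MARCO cross-encoder scores (query, passage) pairs;
# appending hypothetical knowledge-graph glosses is only a naive stand-in for the
# explicit-knowledge component described in the abstract.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def rerank(query, passages, entity_glosses=None):
    """Return (passage, score) pairs sorted by relevance score, highest first.

    entity_glosses: optional list of knowledge-graph text snippets, one per
    passage; concatenating them is a placeholder for explicit knowledge injection.
    """
    texts = passages
    if entity_glosses is not None:
        texts = [p + " [SEP] " + g for p, g in zip(passages, entity_glosses)]
    inputs = tokenizer([query] * len(texts), texts,
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)  # one relevance score per pair
    order = torch.argsort(scores, descending=True)
    return [(passages[i], scores[i].item()) for i in order.tolist()]

print(rerank("what causes vocabulary mismatch in retrieval",
             ["Lexical gap between queries and documents hurts term matching.",
              "The weather in Paris is mild in spring."]))
```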

Cited by 3 publications (3 citation statements)
References 37 publications (59 reference statements)
“…Researchers have devoted substantial efforts to knowledge-enhanced techniques due to the superiority of diversified knowledge from different domains [17]–[19]. Recently, considerable knowledge-enhanced RS have been proposed for accurate interest modeling [20]–[24].…”
Section: B. Knowledge-enhanced Techniques
confidence: 99%
“…Pretrained language models, such as BERT [9] and T5 [51], have demonstrated superior performance on many natural language tasks including information retrieval [13,38,66,74]. PLMs are usually pretrained on general corpora and then finetuned on the target corpus.…”
Section: Pretrained Language Models For Search
confidence: 99%
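The excerpt above summarizes the usual recipe: take a PLM pretrained on general corpora, then finetune it on the target retrieval corpus. Below is a minimal sketch of that finetuning step, assuming a BERT-style cross-encoder and toy labeled (query, passage) pairs; the checkpoint and data are placeholders of my own, not taken from the cited works.

```python
# Illustrative sketch of "pretrain on general corpora, finetune on the target
# corpus": a BERT-style cross-encoder finetuned on binary relevance labels.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # general-purpose pretrained PLM (placeholder)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy target-corpus examples; a real run would stream MS MARCO-style triples.
train_examples = [
    ("what is passage re-ranking", "Re-ranking reorders retrieved passages.", 1),
    ("what is passage re-ranking", "Bananas are rich in potassium.", 0),
]

def collate(batch):
    queries, passages, labels = zip(*batch)
    enc = tokenizer(list(queries), list(passages),
                    padding=True, truncation=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

loader = DataLoader(train_examples, batch_size=2, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(1):
    for batch in loader:
        loss = model(**batch).loss  # cross-entropy over the relevance labels
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```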
“…Hence, IR researchers propose to utilize pre-trained language models (PLM), i.e., large-scale neural models trained without supervised data for language understanding, to conduct effective retrieval [13,17,36,43]. Previous studies [7,10,16,42] have shown that PLM such as BERT [9] and RoBERTa [18] significantly outperform existing neural retrieval models on passage and document retrieval datasets like MS MARCO and TREC DL in both zero-shot and few-shot settings [8,25,35].…”
Section: Introduction
confidence: 99%