Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080809

End-to-End Neural Ad-hoc Ranking with Kernel Pooling

Abstract: This paper proposes K-NRM, a kernel based neural model for document ranking. Given a query and a set of documents, K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft match features, and a learning-to-rank layer that combines those features into the final ranking score. The whole model is trained end-to-end. The ranking layer learns desired feature patterns from the pairwise ranking loss. The kerne…
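The pipeline the abstract describes (translation matrix of embedding similarities, RBF kernel pooling into multi-level soft-match features, then a learned scoring layer) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the kernel means/widths and the scoring weights shown are placeholder assumptions, and the real model learns the embeddings and scoring weights end-to-end from a pairwise ranking loss.

```python
import numpy as np

def kernel_pooling(query_emb, doc_emb, mus, sigmas):
    """Sketch of K-NRM-style kernel pooling.

    query_emb: (n, d) query word embeddings
    doc_emb:   (m, d) document word embeddings
    mus, sigmas: illustrative RBF kernel means and widths
    Returns one soft-match feature per kernel.
    """
    # Translation matrix: cosine similarity between every query/document word pair.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    M = q @ d.T                                          # (n, m)

    feats = []
    for mu, sigma in zip(mus, sigmas):
        # RBF kernel over each similarity, summed over document words
        # gives a per-query-word "soft term frequency".
        K = np.exp(-((M - mu) ** 2) / (2 * sigma ** 2))  # (n, m)
        soft_tf = K.sum(axis=1)                          # (n,)
        # Log, then sum over query words -> one feature per kernel.
        feats.append(np.log(np.maximum(soft_tf, 1e-10)).sum())
    return np.array(feats)

# The learning-to-rank layer combines the kernel features into a score,
# e.g. tanh(w . phi + b); w and b here are hypothetical placeholders that
# would be trained with the pairwise ranking loss.
def rank_score(phi, w, b):
    return np.tanh(w @ phi + b)
```

A kernel with a small width centered at 1.0 approximates exact term matching, while kernels at lower means capture progressively softer embedding-space matches; this is what the abstract calls "multi-level soft match features".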


Cited by 493 publications
(475 citation statements)
References 21 publications
“…Additionally, we evaluate the following document matching methods from the literature: ARC-I and ARC-II [21], DSSM [22], C-DSSM [44], DRMM [17], MatchPyramid [34], aNMM [49], Duet [30], MVLSTM [46], KNRM [48], CONV-KNRM [12] and HAR [52]. For implementing these models, we followed Zhu et al [52] and used the open source implementation MatchZoo [18] with pre-trained glove.840B.300d vectors [35].…”
Section: Baseline Methods (mentioning)
Confidence: 99%
“…Interaction-based matching models focus on the complex interaction between query and passage. These models use CNNs on sentence level (ARC-II [21]), match query terms and words using word count histograms (DRMM [17]), word-level dot product similarity (MatchPyramid [34]), attention-based neural networks (aNMM [49]), kernel pooling (K-NRM [48]) or convolutional n-gram kernel pooling (Conv-KNRM [12]). Eventually, Zhu et al [52] utilize hierarchical attention on word and sentence level (HAR) to capture interaction of the query with local context in long passages.…”
Section: Introduction (mentioning)
Confidence: 99%
“…We call this variant BiPACRR. We also experimented with other neural rankers (e.g., KNRM [20]), but BiPACRR was the most effective. -Textual similarity regression (SimReg) [2].…”
Section: Methods (mentioning)
Confidence: 99%
“…For classification task, we experimented with six deep learning techniques from the Matchzoo framework 23 : MVLSTM, DUET, KNRM, CONVKNRM, DSSM, and ARC-II [24][25][26][27][28][29] . DUET is applied for the document ranking task and is composed of two separate deep neural networks, one matches the query and the document using a local representation, and another matches the query and the document using learned distributed representations.…”
Section: Deep Learning Models (mentioning)
Confidence: 99%