Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval 2015
DOI: 10.1145/2766462.2767738
Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks

Cited by 623 publications (538 citation statements)
References 23 publications
“…on the swing-up pendulum) as it scales up quadratically with the number of ranking constraints. Other ranking approaches with linear learning complexity will be considered (e.g., based on neural nets [30] or ranking forests [8]) to address this limitation. A third and most important limitation concerns the non-reversible MDP case, where the transition from s to s′ might take much longer than from s′ to s. Further work is ongoing to address the non-reversible case.…”
Section: Discussion and Perspectives (mentioning)
confidence: 99%
“…We compared the performance of our deep learning model against: BM25; the Unigram Query Likelihood Model (UQLM) with Dirichlet Smoothing (Zhai and Lafferty, 2004); Word Mover's Distance (WMD) that leverages pretrained word-vectors; and a couple of neural network models based on the architecture described in (Severyn and Moschitti, 2015).…”
Section: Methods Compared (mentioning)
confidence: 99%
“…More formally, the similarity between query q and snippet s is computed as: sim(q, s) = qᵀ W s (1). For our CNN model, we use the short text ranking system proposed by Severyn (Severyn and Moschitti, 2015). The convolution filter width is set to 5, the feature map size to 150, and the batch size to 50.…”
Section: Snippet Ranking (mentioning)
confidence: 99%
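The bilinear similarity quoted above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the vectors and the matrix W are random placeholders (in the cited system they come from trained convolutional encoders), and only the 150-dimensional size is taken from the quoted feature map size.

```python
import numpy as np

# Sketch of the bilinear scoring function sim(q, s) = q^T W s from Eq. (1).
rng = np.random.default_rng(0)
d = 150  # matches the quoted feature map size; otherwise arbitrary here

q = rng.standard_normal(d)        # pooled query representation (hypothetical)
s = rng.standard_normal(d)        # pooled snippet representation (hypothetical)
W = rng.standard_normal((d, d))   # similarity matrix, learned in practice

sim = q @ W @ s  # scalar relevance score for the (query, snippet) pair
```

In a learned ranker, W is trained jointly with the text encoders so that qᵀWs is high when the snippet is relevant to the query; candidate snippets are then ordered by this score.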