Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080832
Neural Ranking Models with Weak Supervision

Abstract: Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from queries and documents when no supervised signal is available. Hence, in this paper, we propose to train a neural ranking model using weak supervision, where labels are obtained automatically without human annotators or any…

Cited by 301 publications (333 citation statements)
References 35 publications
“…We consider the task also as MRS because they share almost the same setup, except that the downstream task is verification or natural language inference (NLI) rather than QA. Information Retrieval: Success in deep neural networks inspires their application to information retrieval (IR) tasks (Huang et al., 2013; Guo et al., 2016; Mitra et al., 2017; Dehghani et al., 2017). In typical IR settings, systems are required to retrieve and rank (Nguyen et al., 2016) elements from a collection of documents based on their relevance to the query.…”
Section: Related Work
confidence: 99%
“…Parameters and training. We train the neural models using pairwise cross-entropy loss [3]. Hyper-parameters are tuned using nDCG@5 on the dev set.…”
Section: Methods
confidence: 99%
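The pairwise cross-entropy loss mentioned in the statement above can be sketched as follows. This is a minimal, illustrative implementation assuming a standard formulation (a sigmoid over the score difference of a relevant/non-relevant document pair, as in RankNet-style training), not code from the cited work; the function name is hypothetical.

```python
import math

def pairwise_cross_entropy(score_pos, score_neg):
    # Probability that the relevant document outranks the non-relevant one,
    # modeled as a sigmoid over the score difference.
    p = 1.0 / (1.0 + math.exp(-(score_pos - score_neg)))
    # Cross-entropy against the target label 1 (the relevant document
    # should rank higher), i.e. -log p.
    return -math.log(p)

# A larger score margin between the pair yields a smaller loss.
assert pairwise_cross_entropy(2.0, 0.0) < pairwise_cross_entropy(0.5, 0.0)
```

When both documents score equally, p = 0.5 and the loss equals log 2; as the margin grows, the loss decays toward zero, which is what pushes the ranker to separate relevant from non-relevant documents.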
“…Furthermore, it is possible to use heuristic methods to generate weak supervision signals and to go beyond them by employing proper learning objectives and network designs [14].…”
Section: Objectives
confidence: 99%
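The heuristic weak-supervision idea above can be sketched as follows: score query-document pairs with a cheap unsupervised heuristic (here a simple term-overlap ratio standing in for a traditional ranker such as BM25) and turn its ranking into preference pairs that a neural model could then train on. All names and the heuristic itself are illustrative assumptions, not details from the cited papers.

```python
def overlap_score(query, doc):
    # Heuristic relevance: fraction of document terms appearing in the query.
    q_terms = set(query.lower().split())
    d_terms = doc.lower().split()
    return sum(1 for t in d_terms if t in q_terms) / max(len(d_terms), 1)

def weak_pairs(query, docs):
    # Rank documents by the heuristic, then emit every (higher-scored,
    # lower-scored) pair as a weak training pair for a pairwise ranker.
    scored = sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)
    return [(scored[i], scored[j])
            for i in range(len(scored)) for j in range(i + 1, len(scored))]

docs = ["neural ranking with weak supervision",
        "cooking recipes for beginners",
        "weak supervision for retrieval"]
pairs = weak_pairs("weak supervision ranking", docs)
```

No human labels are involved: the heuristic ranker supplies the preference signal, and the learning objective (e.g. the pairwise cross-entropy above) lets the neural model generalize beyond the heuristic's own ordering.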