2015
DOI: 10.1016/j.ipm.2015.07.002

A cross-benchmark comparison of 87 learning to rank methods

Abstract: Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered by the nonexistence of a standard set of evaluation benchmark collections. In this paper we propose a way to compare learning to rank methods based on a sparse set of evaluation results on a set of benchmark datasets. …

Cited by 62 publications (40 citation statements)
References 116 publications
“…We hypothesized that LambdaMART would be the best based on its superior performance in Web search, and the results indeed confirm this hypothesis; it achieves the highest test performance for each target objective, followed by AdaRank and RankNet. This observation is consistent with prior benchmark studies on web search datasets [30]. Linear classification based approaches such as L1-regularized Logistic Regression (L1LR) and L1-regularized L2-loss SVM Classifier (L1L2SVM) also perform well.…”
Section: Comparison of LETOR Methods (supporting)
confidence: 87%
“…Over the past decade, Learning to Rank (LETOR) methods, which involve applying machine learning techniques to ranking problems, have proven to be very successful in optimizing search engines; specifically, they have been extensively studied in the context of Web search [3,7,18,30] to combine multiple features to optimize ranking. Thus, not surprisingly, learning to rank is also the backbone technique for optimizing the ranking of products in product search.…”
Section: Introduction (mentioning)
confidence: 99%
“…The first way to extend our work is to do more experiments to better cover the parameter space of the problem of comparing supervised ML algorithms. That implies using more data sets where the notion of dominant algorithm can be extended [20], as well as trying all possible evaluation techniques. Another extension would be to vary the number of features and consider more algorithms.…”
Section: Discussion (mentioning)
confidence: 99%
“…To evaluate the overall performance of activity filtering techniques, we use the number of other filtering techniques that it can beat over all the seventeen event logs of Table 2. This metric, known as winning number, is commonly used for evaluation in the Information Retrieval (IR) field [32,37]. Formally, winning number is defined as…”
Section: Aggregated Analysis Over All Event Logs (mentioning)
confidence: 99%
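The winning-number metric mentioned in the last statement can be sketched in a few lines of Python. This is a minimal illustration, not code from the cited works: the function name and the toy score table are hypothetical. The idea is that a method's winning number is the count, over all (dataset, competitor) pairs, of cases where that method scores strictly higher than the competitor.

```python
def winning_number(scores):
    """Compute winning numbers from a table of per-dataset performance values.

    scores: dict mapping method name -> list of performance values,
            one value per dataset (all lists the same length).
    Returns: dict mapping method name -> winning number, i.e. how many
             (dataset, other method) pairs this method strictly beats.
    """
    methods = list(scores)
    n_datasets = len(next(iter(scores.values())))
    wn = {m: 0 for m in methods}
    for j in range(n_datasets):          # for each dataset
        for a in methods:                # for each method
            for b in methods:            # against each competitor
                if a != b and scores[a][j] > scores[b][j]:
                    wn[a] += 1
    return wn

# Hypothetical NDCG-like scores for three rankers on three datasets:
scores = {
    "LambdaMART": [0.52, 0.48, 0.60],
    "AdaRank":    [0.50, 0.45, 0.58],
    "RankNet":    [0.47, 0.46, 0.55],
}
print(winning_number(scores))
# → {'LambdaMART': 6, 'AdaRank': 2, 'RankNet': 1}
```

With three methods and three datasets, the maximum winning number is 6 (beating both competitors on every dataset), which the toy LambdaMART scores achieve here; the metric thus gives a single aggregate ranking of methods across heterogeneous benchmarks.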