2016
DOI: 10.1587/transinf.2015dap0001

BLM-Rank: A Bayesian Linear Method for Learning to Rank and Its GPU Implementation

Abstract: Ranking, as an important task in information systems, has many applications, such as document/webpage retrieval, collaborative filtering, and advertising. The last decade has witnessed growing interest in the study of learning to rank as a means to leverage training information in a system. In this paper, we propose a new learning-to-rank method, BLM-Rank, which uses a linear function to score samples and models the pairwise preference of samples based on their scores under a Bayesian framework. A…
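The abstract describes the core of BLM-Rank only at a high level: a linear function scores each sample, and pairwise preferences between samples are modeled from their score differences under a Bayesian framework. As a rough illustration, here is a minimal NumPy sketch of a linear scorer with a logistic (Bradley-Terry-style) pairwise preference link; the link function, variable names, and toy values are our assumptions, since the paper's exact Bayesian likelihood is not shown in this truncated abstract.

```python
import numpy as np

def linear_score(w, X):
    # One scalar score per sample (row of X): s_i = w . x_i
    return X @ w

def pairwise_pref_prob(w, x_i, x_j):
    # P(sample i preferred over sample j) from the score difference.
    # A logistic link is a common stand-in; the paper's exact Bayesian
    # likelihood is not given in this excerpt.
    return 1.0 / (1.0 + np.exp(-(x_i - x_j) @ w))

# Toy example with a hypothetical 3-feature weight vector.
w = np.array([0.5, -0.2, 1.0])
x_i = np.array([1.0, 0.0, 2.0])
x_j = np.array([0.5, 1.0, 1.0])
print(pairwise_pref_prob(w, x_i, x_j))  # > 0.5 means i ranks above j
```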

Cited by 3 publications (2 citation statements)
References 23 publications
“…Given the high computational costs involved, parallel processing has also been used in this context, but only to accelerate eager learning methods. For instance, Jin et al. introduced a GPU parallel linear RankSVM that optimizes the L2-loss function over all queries in the training set, obtaining no more than a 23x speedup over a serial RankSVM version. Despite exploiting GPU parallelism, their eager learning approach would require retraining the ranking model every time a change occurred in the training dataset (on-demand scenario).…”
Section: Related Work and Background (mentioning)
confidence: 99%
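For context on the objective this statement cites, below is a minimal serial NumPy sketch of the L2-loss (squared hinge) pairwise objective that a linear RankSVM minimizes over preference pairs. The function name and the plain-NumPy formulation are illustrative assumptions; this is not the cited GPU implementation.

```python
import numpy as np

def l2_ranksvm_objective(w, X, pairs, C=1.0):
    # pairs[k] = (i, j) means sample i should be ranked above sample j.
    i_idx = np.array([i for i, _ in pairs])
    j_idx = np.array([j for _, j in pairs])
    margins = (X[i_idx] - X[j_idx]) @ w            # w . (x_i - x_j)
    hinge = np.maximum(0.0, 1.0 - margins)         # margin violations
    return 0.5 * (w @ w) + C * np.sum(hinge ** 2)  # squared (L2) hinge loss
```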
“…Given the high computational costs involved, parallel processing has also been used in this context, but only to accelerate eager learning methods. [17][18][19] For instance, Jin et al [18] introduced a GPU parallel linear RankSVM. In this work, we address the flexibility issue by improving the performance of an L2R framework with parallel algorithms for a lazy top-ranker method and for a training-dataset reduction method. Unlike eager methods, lazy methods build the ranking model only after a new query is obtained, providing a model customized for that query, at the cost of creating a ranking model for each new query.…”
Section: Related Work and Background (mentioning)
confidence: 99%