2012
DOI: 10.1007/978-3-642-35063-4_38

Improving On-Demand Learning to Rank through Parallelism

Cited by 4 publications (8 citation statements)
References 12 publications
“…Different from the eager methods, lazy methods build the ranking model only after the new query is obtained, providing a customized model for the query at the cost of creating a ranking model for each new query. As far as we know, the work of De Sousa et al. is the only L2R proposal similar to ours. The authors propose a parallel version of the LRAR algorithm, called PLRAR, that runs on a GPU and makes use of a reduced training dataset, but with a serial implementation of SSARP.…”

Section: Related Work and Background (supporting)
Confidence: 73%
“…However, the use of parallelism in L2R has focused on accelerating the training phase of standard solutions, i.e., those based on a batch strategy. The work in [De Sousa et al 2012] is, as far as we know, the only one that supports on-demand learning to rank, similarly to our proposal.…”

Section: Related Work (supporting)
Confidence: 71%
“…However, almost none of these studies targeted the L2R sub-field of information retrieval. With the growing importance of the subject, some researchers have proposed efficient L2R through the use of parallel processing [Shukla et al 2012, Wang et al 2015, Jin et al 2015, De Sousa et al 2012]. However, the use of parallelism in L2R has focused on accelerating the training phase of standard solutions, i.e., those based on a batch strategy.…”

Section: Related Work (mentioning)
Confidence: 99%
“…During the dissertation, we published some papers in the world-leading Information Retrieval conferences and journals, such as [Sousa et al 2016] (A1) and [Sousa et al 2019] (A2). Besides these, we also published other papers in the L2R area, such as [Sousa et al 2012] (B1), [Freitas et al 2016] (B3), and [Freitas et al 2018] (A2).…”

Section: Research Goals (mentioning)
Confidence: 99%