2014
DOI: 10.1145/2559170
Document Score Distribution Models for Query Performance Inference and Prediction

Abstract: Modelling the distribution of document scores returned from an information retrieval (IR) system in response to a query is of both theoretical and practical importance. One of the goals of modelling document scores in this manner is the inference of document relevance. There has been renewed interest of late in modelling document scores using parameterised distributions. Consequently, a number of hypotheses have been proposed to constrain the mixture distribution from which document scores could be drawn. In th…
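To make the abstract's stated goal concrete: once a two-component score-distribution model has been fitted, the posterior probability that a document is relevant given its score follows from Bayes' rule. The sketch below assumes the Gaussian-relevant/exponential-non-relevant pairing that is common in the SD-model literature; the function and parameter names are illustrative, not the paper's notation.

```python
import numpy as np
from scipy import stats

def p_relevant(score, pi, mu, sigma, lam):
    """Posterior probability that a document with this retrieval score is
    relevant, under an assumed two-component mixture: Gaussian for relevant
    documents, exponential for non-relevant ones. All parameter names are
    illustrative assumptions, not the paper's exact model."""
    rel = pi * stats.norm.pdf(score, mu, sigma)          # relevant component
    non = (1.0 - pi) * stats.expon.pdf(score, scale=1.0 / lam)
    return rel / (rel + non)
```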


Cited by 24 publications (14 citation statements)
References 61 publications
“…The traditional strategy for evaluating how good a query performance predictor is consists of computing a traditional retrieval performance measure, such as Average Precision (AP), for each query, and determining how strongly that measure correlates with the prediction scores computed by the QPP model [23][24][25][26]28,[32][33][34][35]38,[41][42][43]. Note that there are two main aspects that might impair traditional QPP models in our specific setting:…”
Section: Type (mentioning, confidence: 99%)
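As a concrete illustration of the evaluation strategy described in the statement above, the following minimal sketch correlates per-query AP values with a predictor's scores. The two arrays are hypothetical placeholders, and scipy's standard correlation routines stand in for whichever correlation measure a given study adopts.

```python
import numpy as np
from scipy import stats

# Hypothetical per-query values: ap[q] computed from relevance judgements,
# pred[q] produced by the QPP model under evaluation.
ap   = np.array([0.62, 0.18, 0.45, 0.71, 0.09, 0.33])
pred = np.array([1.90, 0.40, 1.10, 2.30, 0.20, 0.80])

pearson, _ = stats.pearsonr(ap, pred)     # linear correlation
kendall, _ = stats.kendalltau(ap, pred)   # rank correlation
print(f"Pearson r = {pearson:.3f}, Kendall tau = {kendall:.3f}")
```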
“…Indeed, several QPPs from the literature rely on document scores. Examples of post-retrieval predictors are: the agreement between the entire query results and the results obtained when using sub-queries [25], Query Feedback (QF) [7], Weighted Information Gain (WIG) [7], CLARITY [2], Normalized Query Commitment (NQC) [32], and score-distribution models [6]. Roitman et al. proposed an enhanced QPP estimator based on calibrating the retrieved document scores through learning document-level features [33].…”
Section: Related Work (mentioning, confidence: 99%)
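Of the predictors listed in this statement, NQC has a particularly compact form: the standard deviation of the top-k retrieval scores, normalised by the score the retrieval function assigns to the corpus as a whole. A minimal sketch, treating the corpus score as a supplied argument; the variable names are assumptions, not the cited paper's notation.

```python
import numpy as np

def nqc(top_scores, corpus_score):
    """Normalized Query Commitment: standard deviation of the top-k
    document scores, normalised by the corpus score. A sketch only;
    how the corpus score is obtained depends on the retrieval model."""
    s = np.asarray(top_scores, dtype=float)
    return s.std() / abs(corpus_score)
```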
“…Post-retrieval QPP features have been found to be more effective than pre-retrieval features, although they are much more expensive to calculate, as they require the IR system to run the query in order to make the prediction. While the first studies on QPP used single features [2,3,4,5,6], a more recent direction is to combine various query features [7,4,8,9,10]. While combining multiple post-retrieval features improves accuracy, the method is applicable in real-world scenarios only if the number of features is limited to just a few, owing to the increased computational time required for obtaining these features.…”
Section: Introduction (mentioning, confidence: 99%)
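A minimal sketch of the feature-combination idea mentioned above, assuming a simple linear model fitted by least squares; the synthetic features and the use of per-query AP as the regression target are illustrative assumptions, not any particular cited method.

```python
import numpy as np

rng = np.random.default_rng(0)
X  = rng.normal(size=(50, 3))   # 3 hypothetical post-retrieval features per query
ap = rng.uniform(size=50)       # per-query AP, used here as the target

# Least-squares weights for a linear combination of the features.
Xb = np.column_stack([X, np.ones(len(X))])   # add an intercept column
w, *_ = np.linalg.lstsq(Xb, ap, rcond=None)
combined_prediction = Xb @ w                 # the combined QPP score per query
```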
“…Arampatzis et al. utilized SD models for threshold optimization in a legal search task. Cummins employed SDs for query performance prediction. Arampatzis et al. experimented with SDs in image retrieval, Parapar, Presedo-Quindimil, and Barreiro employed SDs in pseudo-relevance feedback, and Losada, Parapar, and Barreiro proposed a rank fusion approach based on SDs for prioritizing assessments in IR evaluation.…”
Section: Related Work (mentioning, confidence: 99%)
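The SD-model applications surveyed in this statement all start from the same step: fitting a parameterised mixture to a ranked list's scores. A minimal sketch, assuming the common Gaussian-relevant/exponential-non-relevant pairing, non-negative scores, and an EM fit; the initialisation choices and names are illustrative assumptions.

```python
import numpy as np

def fit_normal_exponential_mixture(scores, n_iter=200, tol=1e-8):
    """EM for a two-component mixture: exponential (non-relevant) plus
    Gaussian (relevant). A sketch assuming non-negative retrieval scores."""
    s = np.asarray(scores, dtype=float)
    cut = np.quantile(s, 0.75)                   # crude initial split
    lam = 1.0 / max(s[s <= cut].mean(), 1e-9)    # exponential rate
    mu = s[s > cut].mean()                       # Gaussian mean
    sigma = max(s[s > cut].std(), 1e-6)          # Gaussian std
    pi = 0.25                                    # Gaussian mixing weight
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of the Gaussian component for each score.
        p_rel = pi * np.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        p_non = (1.0 - pi) * lam * np.exp(-lam * s)
        total = p_rel + p_non + 1e-300
        r = p_rel / total
        # M-step: weighted maximum-likelihood parameter updates.
        pi = r.mean()
        mu = (r * s).sum() / r.sum()
        sigma = np.sqrt((r * (s - mu) ** 2).sum() / r.sum()) + 1e-9
        lam = (1.0 - r).sum() / (((1.0 - r) * s).sum() + 1e-12)
        ll = np.log(total).sum()
        if abs(ll - prev_ll) < tol:              # stop when the likelihood plateaus
            break
        prev_ll = ll
    return pi, mu, sigma, lam
```

The fitted parameters can then feed threshold optimization, relevance inference, or QPP, depending on which of the applications above is being reproduced.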