Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, 2014
DOI: 10.1145/2600428.2609472
Evaluating non-deterministic retrieval systems

Abstract: The use of sampling, randomized algorithms, or training based on the unpredictable inputs of users in Information Retrieval often leads to non-deterministic outputs. Evaluating the effectiveness of systems incorporating these methods can be challenging since each run may produce different effectiveness scores. Current IR evaluation techniques do not address this problem. Using the context of distributed information retrieval as a case study for our investigation, we propose a solution based on multivariate linear modelling…
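To make the idea concrete, here is a minimal sketch in Python (fabricated scores, hypothetical systems "A" and "B"; this is not the authors' exact model) of evaluating non-deterministic systems with a linear model: each system is run several times per topic, and the effectiveness score is modelled as a system effect plus a topic effect, with the replicated runs supplying the run-to-run variance against which system differences are tested.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for system, base in [("A", 0.42), ("B", 0.45)]:   # hypothetical systems
    for topic in range(20):                        # hypothetical topic set
        topic_effect = rng.normal(0.0, 0.05)
        for run in range(5):                       # repeated non-deterministic runs
            score = base + topic_effect + rng.normal(0.0, 0.02)
            rows.append({"system": system, "topic": topic, "score": score})
df = pd.DataFrame(rows)

# Score modelled as system effect + topic effect; the run-to-run noise
# left in the residuals is what the system effect is tested against.
model = smf.ols("score ~ C(system) + C(topic)", data=df).fit()
print(model.summary())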

Cited by 4 publications (1 citation statement)
References 6 publications
“…In this paper, we extend our prior work on non-deterministic system evaluation (Jayasinghe, Webber, Sanderson, Dharmasena, & Culpepper, 2014). In our previous work, we used a linear model to evaluate non-deterministic IR systems, and a limited case study using a single evaluation metric (NDCG@10) on the TREC GOV2 dataset was presented.…”
Section: Introduction
Mentioning, confidence: 99%
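The case study mentioned in the citing statement scored systems with NDCG@10. For reference, a minimal NumPy implementation of that metric in its standard log2-discounted, linear-gain form (not code from the paper; the example gain values are hypothetical):

import numpy as np

def ndcg_at_k(gains, ideal_gains, k=10):
    """NDCG@k: discounted cumulative gain of the ranking, normalised by
    the DCG of the ideal (best possible) ordering of the judged gains."""
    def dcg(g):
        g = np.asarray(g, dtype=float)[:k]
        discounts = np.log2(np.arange(2, g.size + 2))  # log2(rank + 1)
        return float(np.sum(g / discounts))
    ideal = dcg(sorted(ideal_gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# Graded relevance of a run's top-ranked documents vs. the ideal ranking:
print(ndcg_at_k([3, 2, 3, 0, 1, 2], [3, 3, 2, 2, 1, 0]))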