2002
DOI: 10.1145/582415.582418

Cumulated gain-based evaluation of IR techniques

Abstract: Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision…
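For orientation, the cumulated gain measures introduced in the paper can be summarized in their commonly cited log-base-2 form (the paper itself parameterizes the discount base), where $G_i$ is the graded relevance gain of the document at rank $i$:

$$ \mathrm{CG}@k=\sum_{i=1}^{k} G_i, \qquad \mathrm{DCG}@k=G_1+\sum_{i=2}^{k}\frac{G_i}{\log_2 i}, \qquad \mathrm{nDCG}@k=\frac{\mathrm{DCG}@k}{\mathrm{IDCG}@k}, $$

where $\mathrm{IDCG}@k$ is the $\mathrm{DCG}@k$ of an ideal ranking that sorts the judged documents by decreasing gain.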

Year Published: 2004–2023

Publication Types

Select...
5
4
1

Relationship

0
10

Authors

Journals

Cited by 3,684 publications (2,023 citation statements)
References 19 publications
“…The results of these experiments are reported in the first part of Table 19, using the standard IR evaluation measures: precision at a cut-off of 10 documents (P@10), normalized discounted cumulative gain [132] at 10 documents (N@10), and mean average precision (MAP) [133]. The cross-lingual MAP scores are also compared with the monolingual ones, i.e., those obtained by using the reference (English) translations of the test topics to see how the system would perform if the queries were translated perfectly (see columns denoted as MAP rel EN).…”
Section: Information Retrieval Quality (mentioning)
confidence: 99%
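A minimal sketch of the measures named in this excerpt, assuming binary relevance for P@10 and average precision, graded gains for nDCG, and (as a simplification) an ideal DCG computed from the supplied list only; the helper names are illustrative:

    import math

    def precision_at_k(rels, k=10):
        # P@k: fraction of the top-k results that are relevant (binary judgments).
        return sum(1 for r in rels[:k] if r > 0) / k

    def dcg_at_k(gains, k=10):
        # DCG: gains past rank 1 are discounted by log2 of the rank.
        return sum(g if i == 1 else g / math.log2(i)
                   for i, g in enumerate(gains[:k], start=1))

    def ndcg_at_k(gains, k=10):
        # nDCG: normalize by the DCG of a gain-descending (ideal) reordering.
        ideal = dcg_at_k(sorted(gains, reverse=True), k)
        return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0

    def average_precision(rels, n_relevant):
        # AP: mean precision at each rank holding a relevant document;
        # MAP is the mean of AP over all test topics.
        hits, ap = 0, 0.0
        for i, r in enumerate(rels, start=1):
            if r > 0:
                hits += 1
                ap += hits / i
        return ap / n_relevant if n_relevant else 0.0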
“…2 Significance tests are based on the theoretical two-tailed t-test of tau and confidence intervals by bootstrap resampling (n = 1000, α = 0.05). NDCG is considered as an additional ranking metric (Järvelin and Kekäläinen, 2002).…”
Section: Learning Methods and Evaluation (mentioning)
confidence: 99%
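The bootstrap confidence intervals mentioned here (n = 1000 resamples, α = 0.05) can be sketched as follows; this is the percentile-bootstrap variant over per-topic scores, with illustrative names:

    import random

    def bootstrap_ci(scores, n_resamples=1000, alpha=0.05, seed=0):
        # Percentile bootstrap CI for the mean of per-topic metric scores:
        # resample topics with replacement, collect the resampled means,
        # and read off the alpha/2 and 1 - alpha/2 quantiles.
        rng = random.Random(seed)
        means = sorted(
            sum(rng.choice(scores) for _ in scores) / len(scores)
            for _ in range(n_resamples)
        )
        return (means[int(alpha / 2 * n_resamples)],
                means[int((1 - alpha / 2) * n_resamples) - 1])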
“…We also used another common measurement for search precision or relevance, cumulative gain, and its corollary, discounted cumulative gain. Cumulative gain is a standard strategy for assigning degrees of relevance to search results (Dupret, 2011; Järvelin and Kekäläinen, 2002; Roelleke, 2013). 13 The NSDL portal uses Lucene, an open-source Apache search engine, as the core of its discovery service.…”
Section: Relevance and Efficiency of the Search (mentioning)
confidence: 99%
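As a worked example with hypothetical gains on a 0–3 scale, take the ranked gain vector (3, 2, 3, 0, 1):

    CG@5  = 3 + 2 + 3 + 0 + 1 = 9
    DCG@5 = 3 + 2/log2(2) + 3/log2(3) + 0/log2(4) + 1/log2(5) ≈ 7.32
    IDCG@5 (ideal order 3, 3, 2, 1, 0) ≈ 7.76, so nDCG@5 ≈ 7.32/7.76 ≈ 0.94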