Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2009
DOI: 10.1145/1571941.1572029
Including summaries in system evaluation

Cited by 39 publications (36 citation statements). References 21 publications.
“…Highly relevant documents are effective in RFB [3] and users can readily recognize them in search results [12]. Apparently relevant query-biased summaries are also good indicators of document relevance [11]. Thus users could provide effective feedback and summaries would be effective sources of search keys.…”
Section: Introduction (mentioning)
confidence: 99%
“…The reason for using the unnormalized versions of the metrics is that the total number of relevant documents is unknown. The second group of metrics has the same form as the first group, but a document is considered relevant iff the document and its snippet are both relevant [13]. The third group of metrics is effective time ratio and its extensions.…”
Section: Experiments and Results (mentioning)
confidence: 99%
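The snippet-aware notion of relevance described in this statement can be illustrated with a short sketch: a result only counts toward precision when both the document and its snippet are judged relevant. This is a minimal, hypothetical implementation; the function name and data layout are assumptions, not taken from the cited paper.

```python
def snippet_aware_precision(results, k):
    """Precision@k where a hit requires document AND snippet relevance."""
    top = results[:k]
    hits = sum(1 for r in top if r["doc_relevant"] and r["snippet_relevant"])
    return hits / k

# Illustrative ranked list: only the first result counts as a hit,
# because the second has an irrelevant snippet and the third an
# irrelevant document.
ranked = [
    {"doc_relevant": True,  "snippet_relevant": True},
    {"doc_relevant": True,  "snippet_relevant": False},
    {"doc_relevant": False, "snippet_relevant": True},
]
print(snippet_aware_precision(ranked, 3))  # 1/3
```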
“…On the contrary, users may miss relevant documents or waste time clicking and examining irrelevant documents due to bad snippets. Turpin et al [13] investigated this problem and showed that including snippet quality in search engine evaluation makes a difference. In this paper, we interpret the traditional IR metric precision as the effective time ratio of the real user, i.e., the ratio between the time spent reading relevant information and the total search time, and extend it to the scenario in which the search engine provides document snippets.…”
Section: Beyond Document Relevance (mentioning)
confidence: 99%
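As a rough illustration of the effective time ratio quoted above (time spent reading relevant information divided by total search time), here is a minimal sketch assuming a simple log of reading episodes; the event format and field names are illustrative assumptions rather than the cited paper's definition.

```python
def effective_time_ratio(events):
    """events: list of (seconds_spent, was_relevant) reading episodes."""
    total = sum(t for t, _ in events)
    useful = sum(t for t, rel in events if rel)
    return useful / total if total > 0 else 0.0

# Example session: snippet scans and document reads, some wasted on
# irrelevant material.
session = [(5, False), (20, True), (8, False), (30, True)]
print(effective_time_ratio(session))  # 50 / 63 ≈ 0.79
```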
“…Our model of user behavior is an extension of the user behavior modeled by other metrics that incorporate document summaries [5,12,14], with the important addition of time. Viewing a summary and deciding to click on it takes a certain amount of time.…”
Section: Stochastic Simulation (mentioning)
confidence: 99%
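The statement above describes a simulated user who pays a time cost to scan each summary before deciding whether to click and read the document. A very rough sketch of such a stochastic simulation follows; all probabilities, time costs, and field names are invented for illustration and do not reproduce the metric defined in the cited work.

```python
import random

def simulate_session(ranked, t_summary=3.0, t_doc=25.0,
                     p_click_rel=0.8, p_click_irrel=0.2, seed=0):
    """Simulate one user pass over a ranked list, tracking gain and time."""
    rng = random.Random(seed)
    time_spent, gain = 0.0, 0
    for item in ranked:
        time_spent += t_summary  # viewing the summary always costs time
        p_click = p_click_rel if item["snippet_relevant"] else p_click_irrel
        if rng.random() < p_click:
            time_spent += t_doc                # reading the clicked document
            gain += int(item["doc_relevant"])  # credit only relevant documents
    return gain, time_spent

ranking = [
    {"snippet_relevant": True,  "doc_relevant": True},
    {"snippet_relevant": False, "doc_relevant": True},
    {"snippet_relevant": True,  "doc_relevant": False},
]
print(simulate_session(ranking))  # (relevant docs read, total seconds spent)
```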