2011
DOI: 10.1007/978-3-642-20161-5_15

A Methodology for Evaluating Aggregated Search Results

Abstract: Aggregated search is the task of incorporating results from different specialized search services, or verticals, into Web search results. While most prior work focuses on deciding which verticals to present, the task of deciding where in the Web results to embed the vertical results has received less attention. We propose a methodology for evaluating an aggregated set of results. Our method elicits a relatively small number of human judgements for a given query and then uses these to facilitate a met…
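As a rough illustration of the general idea in the abstract (not the paper's actual metric), the sketch below assumes per-block graded judgements and scores one aggregated page layout with a simple position-discounted sum; the block identifiers, grades, and discount function are all hypothetical.

```python
# Hypothetical sketch: score an aggregated results page from a small set of
# per-block relevance judgements. Inputs are an ordered list of blocks
# (Web results or embedded vertical blocks) and a dict mapping block ids
# to graded judgements in {0, 1, 2}.
from math import log2

def page_score(block_order, judgements):
    """DCG-style quality score for one aggregated page layout."""
    score = 0.0
    for rank, block_id in enumerate(block_order, start=1):
        gain = judgements.get(block_id, 0)   # unjudged blocks contribute 0
        score += gain / log2(rank + 1)       # discount lower positions
    return score

# Example: an "images" vertical block embedded above Web results w2 and w3.
layout = ["w1", "images", "w2", "w3"]
judged = {"w1": 2, "images": 1, "w2": 1}
print(page_score(layout, judged))
```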

Cited by 40 publications (55 citation statements)
References 14 publications
“…Existing methods for vertical selection and presentation use machine learning to combine different types of predictive evidence: query-string features [2,4,5,19,23], vertical query-log features [2,4,5,11,23], vertical content features [2,4,5,11], and implicit feedback features from previous presentations of the vertical [11,23]. Model tuning and evaluation is typically done with respect to editorial relevance judgements [2,3,4,5,19] or, in a production environment, with respect to user-generated clicks and skips [11,23]. In the first case, users do not actively participate in the evaluation.…”
Section: Related Work, 2.1 Aggregated Search (mentioning)
confidence: 99%
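The excerpt above describes combining several feature families with machine learning to decide whether a vertical should be presented. As a minimal illustrative sketch only (the feature names, data, and model choice here are assumptions, not the cited systems' implementations), such a feature-based classifier might look like:

```python
# Illustrative feature-based vertical selection model. Each row concatenates
# hypothetical features from the families mentioned in the excerpt for one
# (query, vertical) pair: query-string, vertical query-log, vertical content,
# and implicit-feedback features.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    # [has_trigger_term, qlog_log_odds, content_retrieval_score, past_ctr]
    [1.0,  2.3, 0.8, 0.12],
    [0.0, -1.1, 0.2, 0.01],
    [1.0,  0.4, 0.6, 0.07],
    [0.0, -2.0, 0.1, 0.00],
])
y = np.array([1, 0, 1, 0])  # editorial label: should this vertical be shown?

model = LogisticRegression().fit(X, y)
# Probability of presenting the vertical for a new (query, vertical) pair.
print(model.predict_proba([[1.0, 1.5, 0.7, 0.05]])[0, 1])
```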
“…Most published research in aggregated search has focused on automatic methods for predicting which verticals to present (vertical selection) [4,5,11,19] and where in the Web results to present them (vertical presentation) [2,3,23]. Evaluation of these systems has typically been conducted by using editorial vertical relevance judgements as the gold standard [2,3,4,5,19], or by using user-generated clicks on vertical results as a proxy for relevance [11,23].…”
Section: Introduction (mentioning)
confidence: 99%
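The excerpt above contrasts two evaluation modes: comparison against editorial gold-standard judgements, and clicks or skips as a proxy for relevance. A hedged sketch under assumed data formats (slot positions per query and per-impression click flags, both hypothetical) could compute a score for each mode as follows:

```python
# Hypothetical comparison of the two evaluation modes named in the excerpt.

def editorial_accuracy(predicted, gold):
    """Fraction of queries where the predicted vertical slot matches the editorial slot."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

def click_skip_score(impressions):
    """Net user feedback: +1 for a clicked vertical presentation, -1 for a skipped one."""
    return sum(1 if clicked else -1 for clicked in impressions)

# Toy data: slots are positions in the Web ranking (None = vertical suppressed).
predicted = [0, None, 3, 1]
gold      = [0, None, 1, 1]
print(editorial_accuracy(predicted, gold))    # 0.75
print(click_skip_score([True, True, False]))  # 1
```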
“…Here I took the help of some of my friends and gave them the job of assigning a relevance score to each retrieved link returned in response to their particular desired query. Thus they acted as evaluators and assigned the relevance scores given below in Table 1. However, there are other methodologies discussed for evaluating aggregated search results [8].…”
Section: Evaluations (mentioning)
confidence: 99%
“…Recent attempts to evaluate the utility of the whole aggregated search page [3,17] consider the three key components of aggregated search (VS, IS, RP) together. Our work takes a similar holistic approach and proposes a general evaluation framework for measuring aggregated search page quality.…”
Section: Related Work (mentioning)
confidence: 99%
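The excerpt above refers to holistic evaluation of the whole aggregated page across its three components (vertical selection, item selection, result presentation). Purely as an assumption-laden sketch, not the cited framework, one simple way to combine per-component scores into a page-level number is a weighted sum:

```python
# Hedged sketch of a holistic page-quality score combining the three
# components named above. The weights and component scores are hypothetical.

def page_quality(vs_score, is_score, rp_score, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of component scores, each normalized to [0, 1]."""
    w_vs, w_is, w_rp = weights
    return w_vs * vs_score + w_is * is_score + w_rp * rp_score

print(page_quality(vs_score=0.9, is_score=0.7, rp_score=0.5))  # 0.72
```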