2007
DOI: 10.1108/14684520710841793
Relevance ranking is not relevance ranking or, when the user is not the user, the search results are not search results

Abstract: Purpose – The purpose of this paper is to examine the significance of the differences between the actual technical principles determining relevance ranking, and how relevance ranking is understood, described and evaluated by the developers of relevance ranking algorithms and librarians. Design/methodology/approach – The discussion uses descriptions by PLWeb Turbo and C2 of their relevance ranking products and a librarian's description on her blog with the responses which it drew, contrasting these with relevancy as i…

Cited by 11 publications (7 citation statements); references 6 publications.
“…Evaluation of the search output refers to the assessment of the information objects encountered for relevance. The notion of evaluation is fraught with layers of complexity including the cognitive elements required for evaluation (Bade, 2007; Barry, 1998; Borlund, 2003; Fitzgerald & Galloway, 2001; Saracevic, 2007a, 2007b; Schamber, 1991; Vakkari, 1999; Vakkari & Hakala, 2000). Users often spend more time determining the value of a given document through evaluation than the other types of information search activities (Xie, Benoit, & Zhang, 2010).…”
Section: Evaluation of the Search Output
confidence: 99%
“…This option was available in four of the eight databases used (ASSIA, ERIC, Scopus and SSCI). Relevance sorting refers to ‘various statistical methods for ordering documents matching a search term’ undertaken by the database software [39]. While exact formulae for relevance sorting are kept secret by the database provider companies, at a basic level they are algorithms designed to identify and match search terms with documents based on pre-defined criteria [40].…”
Section: Methods
confidence: 99%
“…Borlund (2003) outlines a "relevance framework," based on the many different conceptions of relevance in previous research including "classes, types, degrees, criteria, and levels of relevance" (p. 923). Furthermore, Bade (2007) argued against misinterpretations of relevance evaluation, favouring a multidimensional understanding over a binary one. Viewing relevance as either objective or subjective leads to two major flaws: "The first is that any relevance judgment by any human being will always be subjective to some degree, no matter how objective that person may strive to be.…”
Section: Criteria – Lists and Documents
confidence: 99%
“…Previous studies focused more on relevance criteria than elements or other dimensions of evaluation. Studies comparing both list and document evaluation criteria focus on relevance (Bade, 2007; Borlund, 2003; Saracevic, 2007a, 2007b; Savolainen and Kari, 2006; Vakkari and Hakala, 2000). Search result list evaluation receives significantly less attention than document evaluation.…”
Section: Introduction
confidence: 99%