1968
DOI: 10.1016/0020-0271(68)90002-8

The influence of scale form on relevance judgments

Cited by 32 publications (19 citation statements)
References 3 publications
“…He found that the addition of a center anchor to a scale with two end categories resulted in increased reliability. Recall that Katter (1968) made a similar observation.…”
Section: Introduction (citation type: mentioning)
Confidence: 66%
“…Similarly, Eisenberg and Hu (1987) labeled only the two end categories for their seven-point scale. Based on empirical results, Katter (1968) argued that providing a single-item anchor at the midpoint was not adequate and that "adding additional anchors at the extremes seems a possible solution" (p. 10). In the psychometric tradition, Bendig (1953) studied the effect on reliability of verbal anchoring.…”
Section: Introduction (citation type: mentioning)
Confidence: 98%
“…Although it is difficult to ensure the same level of subject knowledge for all judges, the deviation can be much more with random volunteers than with evaluation organisers. It is indeed important to understand how this factor affects the reliability of an evaluation's results since it has been acknowledged in literature that the more knowledge and familiarity the judges have with the subject area, the less leniency they have for accepting documents as relevant (Rees and Schultz, 1967;Cuadra, 1967;Katter, 1968). Interestingly, Blanco et al (2013) analysed the impact of this factor on the reliability of the SemSearch evaluations and concluded that 1) experts are more pessimistic in their scoring and thus, accept fewer items as relevant when compared to workers (which agrees with the previous studies) and 2) crowdsourcing judgements, hence, cannot replace expert evaluations.…”
Section: Relevance Judgements (citation type: mentioning)
Confidence: 99%
“…A number of authors attempted to introduce graded notions of relevance, either in terms of discrete categories [38,39,65], or in terms of points or confidence intervals on the real line [42,60]. Some authors also prefer to re-cast graded relevance as utility [28], or as a decision-theoretic notion of risk [153].…”
Section: The Binary Nature Of Relevance (citation type: mentioning)
Confidence: 99%