1993
DOI: 10.1177/009102609302200105

How to Set Cutoff Scores for Knowledge Tests Used in Promotion, Training, Certification, and Licensing

Cited by 29 publications (29 citation statements), published 1996–2022
References 5 publications
“…Finally, Cascio et al. noted a trend toward the use of elaborate methods for setting cutoffs, such as the use of expert judgments. Biddle (1993) described the legal issues involved in the use of such a judgmental method (the Angoff method) in setting cutoff scores for knowledge tests. He noted that the Supreme Court has upheld the modified Angoff method (to be discussed later) as a means for determining cutoff scores.…”
Section: Legal Issues
confidence: 99%
“…The Angoff method (Angoff, 1971) has received by far the greatest attention in the personnel psychology literature, largely due to its proven legal defensibility (see Biddle, 1993). In the Angoff method judges are asked to estimate for each item the percentage of MCPs who could answer the item correctly or the likelihood that an MCP would answer the item correctly.…”
confidence: 99%
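The excerpt above describes the core Angoff computation: each judge estimates, for every item, the probability that a minimally competent person (MCP) would answer it correctly, and the averaged estimates are summed into a cut score. A minimal sketch of that computation; the ratings below are invented for illustration, not taken from Biddle's article:

```python
import numpy as np

# Illustrative Angoff ratings: rows = judges, columns = test items.
# Each value is a judge's estimate of the probability that an MCP
# would answer that item correctly.
ratings = np.array([
    [0.70, 0.55, 0.80, 0.65],
    [0.75, 0.50, 0.85, 0.60],
    [0.65, 0.60, 0.75, 0.70],
])

item_means = ratings.mean(axis=0)   # average estimate per item across judges
cutoff_raw = item_means.sum()       # expected raw score of an MCP = judgmental cut score
print(item_means, cutoff_raw)       # cut score of 2.7 on this 4-item example
```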
“…Biddle suggested the following recommendations to create a reliable examination cut score: (1) use at least 7 to 10 subject matter experts as judges; (2) ask each judge to state the probability for each test item that the minimally acceptable person would answer the item correctly; (3) sum the judges' estimates for each test item, average the score per test item, and then sum the averages for the items on the test to create the cut score; (4) calculate the reliability and standard deviation for the test scores after the test is administered; and (5) consider the standard error of measurement before setting the final cut score.…”
Section: Introduction
confidence: 99%
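Steps (3) through (5) in the excerpt above translate into a small calculation: average the judges' estimates to obtain the cut score, then compute the standard deviation, reliability, and standard error of measurement (SEM = SD × √(1 − reliability)) of the administered test scores before settling on the final cut. A hedged sketch with invented scores and an assumed reliability coefficient; the one-SEM band at the end is one common way to "consider" the SEM, not a rule stated in the excerpt:

```python
import numpy as np

# Judgmental (Angoff) cut score from the averaged ratings (step 3); value is illustrative.
cut_score = 27.0

# Steps 4-5: after the test is administered, estimate score SD and reliability, then the SEM.
observed_scores = np.array([31, 24, 28, 35, 22, 29, 26, 33, 30, 25])  # illustrative raw scores
sd = observed_scores.std(ddof=1)
reliability = 0.85                      # assumed reliability coefficient (e.g., a KR-20 estimate)
sem = sd * np.sqrt(1 - reliability)

# An assumed way to use the SEM: report a +/- 1 SEM band around the cut score
# (or lower the cut by one SEM) before fixing the final passing point.
print(f"cut = {cut_score:.1f}, SEM = {sem:.2f}, "
      f"band = [{cut_score - sem:.1f}, {cut_score + sem:.1f}]")
```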
“…A second potential derivation of Kane's (1987) index is to focus on the individual items' z-ratios within the brackets of Equation 2, treating them as one-sample z-tests, or more accurately (because of the relatively small recommended sample sizes for rater panels, generally falling in the range of 7 to 15; Biddle, 1993; Hurtz & Hertz, 1999; Jaeger, 1991) one-sample t-tests, which account for the number of raters in their degrees of freedom. With this approach one could simply count the proportion of items for which the rater panel's mean, or M_iR, significantly differed from the value provided by the ICC, or P_i(θ*).…”
Section: Indexes Based on the Standard Errors of Ratings
confidence: 99%
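The index described in this excerpt reduces to a per-item one-sample t-test of the rater panel's mean against the ICC value P_i(θ*), followed by a count of the discrepant items. A sketch under assumed inputs; the ratings, the P_i(θ*) values, and the 0.05 criterion are all illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical Angoff ratings: rows = raters (a small panel), columns = items.
ratings = np.array([
    [0.70, 0.55, 0.80, 0.65, 0.60],
    [0.75, 0.50, 0.85, 0.60, 0.55],
    [0.65, 0.60, 0.75, 0.70, 0.50],
    [0.72, 0.52, 0.78, 0.68, 0.58],
    [0.68, 0.57, 0.82, 0.63, 0.62],
    [0.71, 0.54, 0.79, 0.66, 0.57],
    [0.69, 0.58, 0.81, 0.64, 0.59],
])

# Hypothetical ICC values P_i(theta*) for each item at the cutoff ability.
p_icc = np.array([0.66, 0.61, 0.77, 0.70, 0.48])

# One-sample t-test per item: does the panel mean M_iR differ from P_i(theta*)?
flags = []
for i in range(ratings.shape[1]):
    t_stat, p_val = stats.ttest_1samp(ratings[:, i], p_icc[i])
    flags.append(p_val < 0.05)          # assumed significance criterion

print(f"proportion of discrepant items: {np.mean(flags):.2f}")
```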
“…Others have addressed the conversion of ratings or cutoff scores to the θ scale via the item or test characteristic curves (Hurtz, Jones, & Jones, 2008; Kane, 1987; Plake & Kane, 1991; van der Linden, 1982). Given the widespread use of the classical Angoff-based methods and their general support in the face of professional and legal scrutiny (Berk, 1986; Biddle, 1993; Biddle, 2007; Cascio, Alexander, & Barrett, 1988; Kane, 1987; Maurer & Alexander, 1992), it is worthwhile to continue exploration of how well these classical methods perform in IRT-based examination programs. Van der Linden (1982) and Kane (1987) proposed methods for converting ratings that are given in the form of expected probabilities of answering items correctly (e.g., the Angoff (1971) and Nedelsky (1954) methods) to a θ-metric cutoff score, denoted here as θ*.…”
confidence: 99%
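The conversion this excerpt refers to finds the ability θ* at which the test characteristic curve (the sum of the item characteristic curves) equals the Angoff-based expected raw cut score. A minimal illustration assuming a 3PL model; the item parameters, cut score, and bisection search are illustrative, not the cited authors' implementation:

```python
import numpy as np

# Assumed 3PL item parameters (a = discrimination, b = difficulty, c = guessing).
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.3, 0.8])
c = np.array([0.20, 0.25, 0.20, 0.20])

def tcc(theta):
    """Test characteristic curve: expected raw score at ability theta under the 3PL."""
    p = c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))
    return p.sum()

angoff_cut = 2.7   # expected raw score of a minimally competent person (from the ratings)

# Bisection search for theta* such that tcc(theta*) = angoff_cut
# (the TCC is monotonically increasing, so a simple bracket suffices).
lo, hi = -4.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2
    if tcc(mid) < angoff_cut:
        lo = mid
    else:
        hi = mid
theta_star = (lo + hi) / 2
print(f"theta* ≈ {theta_star:.3f}, TCC(theta*) = {tcc(theta_star):.3f}")
```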