While the Angoff (1971) method is a commonly used approach to setting cut scores, critics (Berk, 1996; Impara & Plake, 1997) argue that it places excessive cognitive demands on raters. In response to these criticisms, a number of modifications to the method have been proposed. Suggested Angoff modifications include using an iterative rating process, presenting judges with normative data about item performance, revising the rating judgment into a Yes/No decision, assigning relative weights to dimensions within a test, and using item response theory in setting cut scores. In this study, subject matter expert raters were provided with a 'difficulty anchored' rating scale to use while making Angoff ratings; this scale can be viewed as a variation of the normative data modification. The rating scale presented test items with known p-values as anchors, and served as a simple means of providing normative information to guide the Angoff rating process. Results are discussed regarding the reliability of the mean Angoff rating (.73) and the correlation of mean Angoff ratings with item difficulty (observed r ranges from .65 to .73).
In this study, a proposed extension to the job component validity model from the Position Analysis Questionnaire was tested. Job component validity, a form of synthetic validation, allows researchers to select useful predictors and to estimate the criterion-related validity of tests based on a job analysis that includes the Position Analysis Questionnaire. Morris and colleagues described a method for estimating the multiple correlation of a test battery assembled via job component validity estimates. In the current study, job component validity estimates derived from the multiple correlation procedure proposed by Morris et al. were compared to unit-weighted validity estimates obtained in a criterion-related validity study of six job progressions. The multivariate job component validity estimates were comparable to unit-weighted validity coefficients obtained using supervisory ratings as criteria, and were conservative compared to corrected unit-weighted validity coefficients.