The concept of dynamic criteria has been the subject of a recent debate regarding both the definition and prevalence of the phenomenon (Austin, Humphreys, & Hulin, 1989; Barrett & Alexander, 1989; Barrett, Caldwell, & Alexander, 1985). The present paper questions the adequacy of the conceptual framework underlying the debate and provides data supporting a refined concept of dynamic criteria. The incidence and possible causes of change in relative performance were investigated using weekly performance data from 509 sewing machine operators. Analyses were conducted to determine the degree of performance consistency, potential moderators of consistency, and the stability of predictor‐criteria relationships using multiple predictors and criteria. Results revealed a steady decline in performance stability coefficients as the interval between measures increased. This decay was evident regardless of employees' prior job experience, cognitive ability, or psychomotor ability. Analyses of predictive validity coefficients revealed temporal changes in validity for both objective and subjective criteria, but not in the expected direction. The validity of cognitive ability increased, the validity of psychomotor ability was stable, and that of prior job experience decreased over time. Implications for theory and research are discussed.
This study explores the construct of collective efficacy for self-managed work teams in a manufacturing setting. The construct is developed from a historical perspective through the team literature and the self-efficacy literature. Collective efficacy and performance behaviors were measured at four time periods for eight work teams. Repeated measures analysis of variance revealed a positive relationship, indicating that higher collective efficacy is related to higher levels of performance.
Job evaluation studies have been used by comparable worth advocates as a basis for sex-based pay discrimination litigation and as a vehicle to generate support for pay equity legislation. However, the adequacy of job evaluation measures for determining the relative worth of jobs has not yet been established; previous studies indicate deficiencies on various measurement criteria. The present study examines three methods of comparable worth job evaluation from a psychometric qualities perspective. Evaluation scores for 20 positions in a state agency were generated by four experienced analysts via each method. Reliability, discriminant validity, and convergence of the measures were examined in the context of comparable worth pay classification decision making. Results suggest that (a) reliability coefficients above .95 could still be inadequate for comparable worth job evaluation applications, (b) factor (dimension) redundancy is potentially a major shortcoming of job evaluation measures, (c) evaluation methods differ in terms of measurement quality, and (d) classification decisions are likely to be method dependent.

This article is based on the author's dissertation, which was completed at Michigan State University. The advice and support of my committee members, Tom Patten, Ben Schneider, Larry Foster, and Mike Moore, are gratefully acknowledged.