Greater understanding of the complex interrelationships among work-relevant constructs has increased the number of constructs measured on organizational surveys. Good psychometric practice also dictates the use of multiple items per construct. The net result has been longer surveys. Longer surveys take more time to complete, tend to have more missing data, and have higher refusal rates than short surveys. Techniques for reducing the length of scales while maintaining psychometric quality are therefore worthwhile. Little guidance exists on how to shorten a multi-item scale, however, and we argue that the most common technique, maximizing internal consistency, is problematic and should be avoided. We present a set of item "quality indices" to help conceptualize the competing issues that influence item-retention decisions. Statistical analysis of an example case using these indices suggested that there are 3 key aspects of item quality to consider when shortening a scale. We describe strategies that can assist scale developers in applying these 3 aspects of item quality when making scale-reduction decisions.
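For concreteness, the sketch below is a minimal illustration of the alpha-maximization heuristic this abstract critiques: compute Cronbach's alpha, then greedily drop whichever item's removal raises alpha the most. The function names and greedy loop are our own illustration, not the authors' procedure.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

def greedy_alpha_reduction(items: np.ndarray, target_len: int) -> list[int]:
    """Shorten a scale by repeatedly dropping the item whose removal
    yields the highest alpha (the common 'maximize internal consistency'
    heuristic that the abstract argues against)."""
    kept = list(range(items.shape[1]))
    while len(kept) > target_len:
        # Alpha of the scale with each candidate item removed.
        alphas = {
            i: cronbach_alpha(items[:, [j for j in kept if j != i]])
            for i in kept
        }
        # Drop the item whose removal most improves alpha.
        kept.remove(max(alphas, key=alphas.get))
    return kept
```

Because alpha rewards inter-item redundancy, this loop tends to retain near-duplicate items and narrow construct coverage, which is precisely why the abstract recommends weighing multiple quality indices instead.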
This article presents a comprehensive review of the empirical literature bearing on the effects of cognitive feedback (CFB) on multiple measures of performance. CFB refers to the process of presenting a person with information about the relations in the environment (task information [TI]), the relations perceived by the person (cognitive information [CI]), and the relations between the environment and the person's perceptions of the environment (functional validity information [FVI]). Overall, CFB does improve performance on judgment tasks. Specifically, the research indicates that TI, rather than CI, is the aspect of CFB that influences performance. Factors influencing the effects of CFB on performance are discussed, and both current and potential applications of CFB are explored.

A major theme emerging from extensive work in cognitive psychology is that people are limited in their ability to process information in uncertain environments (Nisbett & Ross, 1980), notably so with respect to human judgment and decision making (Kahneman, Slovic, & Tversky, 1982). Research has shown that people have difficulty inferring environmental relationships from unaided experience (Brehmer, 1980) and often lack sufficient insight into their judgment strategies to permit them to communicate those strategies (Balke, Hammond, & Meyer, 1973). More effective judgment and decision making would enhance the lives of individuals and the performance of organizations, and researchers have devoted considerable attention to investigating means of improving these cognitive activities.
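Operationally, the TI/CI/FVI distinction above can be estimated from a cue-based judgment task. The sketch below is a minimal illustration under one plausible (lens-model style) framing, using made-up cues, weights, and data; it is not an analysis drawn from the reviewed studies.

```python
import numpy as np

def policy_weights(cues: np.ndarray, outcome: np.ndarray) -> np.ndarray:
    """Least-squares weights relating an outcome to the task cues."""
    beta, *_ = np.linalg.lstsq(cues, outcome, rcond=None)
    return beta

# Hypothetical data: 200 judgment trials, 3 cues.
rng = np.random.default_rng(0)
cues = rng.normal(size=(200, 3))
environment = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=200)
judgments = cues @ np.array([0.2, 0.5, 0.3]) + rng.normal(scale=0.5, size=200)

ti = policy_weights(cues, environment)  # TI: cue-criterion relations in the environment
ci = policy_weights(cues, judgments)    # CI: cue-judgment relations (the person's policy)
fvi = np.corrcoef(environment, judgments)[0, 1]  # FVI: fit between judgments and environment
```

Feeding `ti` back to the judge corresponds to task information, `ci` to cognitive information, and `fvi` to functional validity information in the sense defined above.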
The present study focused on the development and validation of scores on the Stress in General scale. Three diverse samples of workers (n = 4,322, n = 574, n = 34) provided psychometric and validity evidence. All evidence converged on the existence of two distinct subscales, each of which measured a different aspect of general work stress. The studies also resulted in meaningful patterns of correlations with stressor measures, a physiological measure of chronic stress (blood-pressure reactivity), general job attitude measures, and intentions to quit.
We examined methodological and theoretical issues related to accuracy measures used as criteria in performance-rating research. First, we argue that existing operational definitions of accuracy are not all based on a common definition of accuracy, and we report data showing generally weak relations among the different operational definitions. Second, we examine different methods of true-score development and explore their methodological and theoretical limitations. Given the difficulty of obtaining true scores, we discuss criteria for judging the suitability of expert ratings as surrogate true-score measures. Last, we examine the usefulness of accuracy measures in performance-rating research to highlight situations in which they might be desirable criterion measures.
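To see why different operational definitions of accuracy can relate only weakly, consider a toy example (ours, not data from the reviewed studies): a rater whose ratings preserve the true ordering of ratees but are uniformly inflated is perfectly accurate by a correlational definition yet poorly accurate by a distance-based one.

```python
import numpy as np

def distance_accuracy(ratings: np.ndarray, true_scores: np.ndarray) -> float:
    """Accuracy as (negative) mean absolute deviation from true scores."""
    return -np.mean(np.abs(ratings - true_scores))

def correlation_accuracy(ratings: np.ndarray, true_scores: np.ndarray) -> float:
    """Accuracy as the correlation between ratings and true scores."""
    return np.corrcoef(ratings, true_scores)[0, 1]

true_scores = np.array([2.0, 3.0, 4.0, 5.0])
inflated = true_scores + 2.0  # perfectly ordered, but uniformly lenient

print(correlation_accuracy(inflated, true_scores))  # 1.0: perfect by correlation
print(distance_accuracy(inflated, true_scores))     # -2.0: poor by distance
```

A single set of ratings can thus rank first on one accuracy criterion and last on another, which is consistent with the weak inter-index relations reported above.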