Recent conceptual and methodological advances in behavioral safety research afford an opportunity to integrate past and recent research findings. Building on theoretical models of worker performance and work climate, this study quantitatively integrates the safety literature by meta-analytically examining person- and situation-based antecedents of safety performance behaviors and safety outcomes (i.e., accidents and injuries). As anticipated, safety knowledge and safety motivation were most strongly related to safety performance behaviors, closely followed by psychological safety climate and group safety climate. With regard to accidents and injuries, however, group safety climate had the strongest association. In addition, tests of a meta-analytic path model provided support for the theoretical model that guided this overall investigation. The implications of these findings for advancing the study and management of workplace safety are discussed.
Tests for experiments with matched groups or repeated measures designs use error terms that involve the correlation between the measures as well as the variance of the data. The larger the correlation between the measures, the smaller the error and the larger the test statistic. If an effect size is computed from the test statistic without taking the correlation between the measures into account, the effect size will be overestimated. Procedures for appropriately computing effect sizes from matched groups or repeated measures designs are discussed.
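A minimal sketch of the kind of correction this abstract describes, assuming the conversion d = t · sqrt(2(1 − r)/n) for a correlated-groups t statistic, where r is the correlation between the paired measures and n the number of matched pairs; the numeric values in the usage lines are illustrative only, not taken from the article.

import math

def d_from_correlated_t(t_corr, r, n_pairs):
    # Convert a matched-groups / repeated-measures t statistic to Cohen's d,
    # taking the correlation r between the paired measures into account:
    #   d = t * sqrt(2 * (1 - r) / n), with n the number of matched pairs.
    return t_corr * math.sqrt(2.0 * (1.0 - r) / n_pairs)

def d_ignoring_correlation(t_corr, n_pairs):
    # Naive conversion that treats the statistic as if it came from an
    # independent-groups design (d = t * sqrt(2 / n)); with positively
    # correlated measures this overstates the effect size.
    return t_corr * math.sqrt(2.0 / n_pairs)

# Hypothetical values for illustration only.
t_corr, r, n_pairs = 5.0, 0.75, 30
print(d_from_correlated_t(t_corr, r, n_pairs))   # ~0.65
print(d_ignoring_correlation(t_corr, n_pairs))   # ~1.29, inflated

The comparison shows how ignoring a substantial positive correlation between the repeated measures roughly doubles the apparent effect size in this hypothetical case.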
Previous meta-analytic examinations of group cohesion and performance have focused primarily on contextual factors. This study examined issues relevant to applied researchers by providing a more detailed analysis of the criterion domain. In addition, the authors reinvestigated the role of components of cohesion using more modern meta-analytic methods and in light of different types of performance criteria. The results of the authors' meta-analyses revealed stronger correlations between cohesion and performance when performance was defined as behavior (as opposed to outcome), when it was assessed with efficiency measures (as opposed to effectiveness measures), and as patterns of team workflow became more intensive. In addition, and in contrast to B. Mullen and C. Copper's (1994) meta-analysis, the 3 main components of cohesion were independently related to the various performance domains. Implications for organizations and future research on cohesion and performance are discussed.
We predicted that the dispositional construct negative affectivity (NA) would be related to self-report measures of job stress and job strain and that observed relationships between these stress and strain measures would be inflated considerably by NA. Results of a study of 497 managers and professionals were largely consistent with those expectations. Thus, we discuss implications for NA as both a methodological nuisance and a substantive cause of stressful work events, and conclude that NA should no longer remain an unmeasured variable in the study of job stress.
The authors present guidelines for establishing a useful range for interrater agreement and a cutoff for acceptable interrater agreement when using Burke, Finkelstein, and Dusig's average deviation (AD) index, as well as critical values for tests of statistical significance with the AD index. Under the assumption that judges respond randomly to an item or set of items in a measure, the authors show that a criterion for acceptable interrater agreement or practical significance when using the AD index can be approximated as c/6, where c is the number of response options for a Likert-type item. The resulting values of 0.8, 1.2, 1.5, and 1.8 are discussed as standards for acceptable interrater agreement when using the AD index with 5-, 7-, 9-, and 11-point items, respectively. Using similar logic, the AD agreement index and its interpretive standard are generalized to response scales that involve percentages or proportions rather than discrete categories and, at the other extreme, to the assessment of interrater agreement on the rating of a single target on a dichotomous item (e.g., yes-no, agree-disagree, or true-false formats). Finally, the usefulness of these guidelines for judging acceptable levels of interrater agreement with respect to the metric (or units) of the original response scale is discussed.
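A minimal sketch of how a mean-based AD index and the c/6 criterion described above can be computed for a single item rated by several judges; the function names and the ratings are illustrative assumptions, not material from the article.

from statistics import mean

def average_deviation(ratings):
    # Mean-based AD index: average absolute deviation of judges' ratings
    # from the item mean, expressed in the units of the response scale.
    m = mean(ratings)
    return sum(abs(x - m) for x in ratings) / len(ratings)

def ad_cutoff(num_response_options):
    # Practical-significance criterion of c/6 described in the abstract
    # (e.g., 7/6, about 1.2, for a 7-point item).
    return num_response_options / 6.0

# Hypothetical ratings of one item on a 7-point scale by six judges.
ratings = [5, 6, 5, 7, 6, 5]
ad = average_deviation(ratings)
print(round(ad, 2), ad <= ad_cutoff(7))  # 0.67, True -> acceptable agreement

Because the AD index is expressed in the original response-scale units, the comparison against c/6 can be interpreted directly in terms of how far judges' ratings typically sit from the item mean.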