Interpersonal deviance (ID) and organizational deviance (OD) are highly correlated (R. S. Dalal, 2005). This, together with other empirical and theoretical evidence, calls into question the separability of ID and OD. As a further investigation into their separability, relationships among ID, OD, and their common correlates were meta-analyzed. ID and OD were highly correlated (rho = .62) but had differential relationships with key Big Five variables and organizational citizenship behaviors, which lends support to the separability of ID and OD. Whether researchers used the R. J. Bennett and S. L. Robinson (2000) instrument moderated some of these relationships. ID and OD exhibited their strongest (negative) relationships with organizational citizenship, Agreeableness, Conscientiousness, and Emotional Stability. Correlations with organizational justice were small to moderate, and correlations with demographic variables were generally negligible.
The bulk of personality research has been built from self-report measures of personality. However, collecting personality ratings from other-raters, such as family, friends, and even strangers, is a dramatically underutilized method that allows better explanation and prediction of personality's role in many domains of psychology. Drawing hypotheses from D. C. Funder's (1995) realistic accuracy model about trait and information moderators of accuracy, we offer 3 meta-analyses to help researchers and applied psychologists understand and interpret both consistencies and unique insights afforded by other-ratings of personality. These meta-analyses integrate findings based on 44,178 target individuals rated across 263 independent samples. Each meta-analysis assessed the accuracy of observer ratings, as indexed by interrater consensus/reliability (Study 1), self-other correlations (Study 2), and predictions of behavior (Study 3). The results show that although increased frequency of interacting with targets does improve accuracy in rating personality, informants' interpersonal intimacy with the target is necessary for substantial increases in other-rating accuracy. Interpersonal intimacy improved accuracy especially for traits low in visibility (e.g., Emotional Stability) but only minimally for traits high in evaluativeness (e.g., Agreeableness). In addition, observer ratings were strong predictors of behaviors. When the criterion was academic achievement or job performance, other-ratings yielded predictive validities substantially greater than and incremental to self-ratings. These findings indicate that extraordinary value can be gained by using other-reports to measure personality, and they provide guidelines for enriching personality theory. Various subfields of psychology in which personality variables are systematically assessed and used in research and practice can benefit tremendously from others' ratings of personality.
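The claim that other-ratings are "incremental to self-ratings" is conventionally tested as a change in R² when observer scores are added to a regression that already contains self-reports. Below is a minimal sketch of that comparison in Python, using simulated data; all effect sizes in the simulation are made up for illustration and are not the meta-analytic estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized scores (illustrative values only).
self_rating = rng.normal(size=n)
other_rating = 0.4 * self_rating + rng.normal(scale=0.9, size=n)
performance = 0.2 * self_rating + 0.35 * other_rating + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit of y on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_self = r_squared(self_rating[:, None], performance)
r2_both = r_squared(np.column_stack([self_rating, other_rating]), performance)
print(f"R^2, self only:  {r2_self:.3f}")
print(f"R^2, self+other: {r2_both:.3f}")
print(f"Incremental R^2: {r2_both - r2_self:.3f}")
```

A positive increment indicates that observer ratings carry predictive information about the criterion beyond what self-reports already capture, which is the pattern the meta-analysis reports for academic achievement and job performance.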
This paper presents an overview of a useful approach for theory testing in the social sciences that combines the principles of psychometric meta-analysis and structural equations modeling. In this approach, the estimated true score correlations between the constructs of interest are established through the application of meta-analysis (Hunter & Schmidt, 1990), and structural equations modeling is then applied to the matrix of estimated true score correlations. The potential advantages and limitations of this approach are presented. The approach enables researchers to test complex theories involving several constructs that cannot all be measured in a single study. Decision points are identified, the options available to a researcher are enumerated, and the potential problems as well as the prospects of each are discussed. Over the years the importance of theory testing has been increasingly emphasized (e.g., Campbell, 1990; Schmidt, 1992; Schmitt & Landy, 1993). This is consistent with the prediction of Schmidt and Kaplan (1971) that as a nascent field matures, scientists, unencumbered by the need to constantly prove the value of their profession to the general society and in the pantheon of sciences, devote more attention to explaining the processes underlying observed relationships and engage more frequently in explicitly articulating the theories that guide their practice.
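As a rough illustration of the first stage, here is a minimal sketch of a Hunter–Schmidt-style estimate of a single true score correlation: a sample-size-weighted mean of the observed correlations, disattenuated for unreliability in both measures. The study inputs and reliabilities below are hypothetical; in a full application, each off-diagonal entry of the construct correlation matrix is estimated this way, and the completed matrix is then passed to an SEM package for fitting the structural model.

```python
import numpy as np

def true_score_r(rs, ns, rxx, ryy):
    """Hunter-Schmidt-style estimate of a true score correlation:
    bare-bones (N-weighted) mean observed r, corrected for attenuation."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    r_bar = np.sum(ns * rs) / np.sum(ns)   # sample-size-weighted mean r
    return r_bar / np.sqrt(rxx * ryy)      # correction for unreliability

# Hypothetical inputs: three studies correlating constructs A and B,
# with assumed mean reliabilities of .80 (A) and .70 (B).
rho_ab = true_score_r(rs=[.30, .25, .38], ns=[120, 340, 95], rxx=.80, ryy=.70)
print(f"estimated true score correlation: {rho_ab:.2f}")
```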
The authors conducted a comprehensive meta-analysis based on 665 validity coefficients across 576,460 data points to investigate whether integrity test validities are generalizable and to estimate differences in validity due to potential moderating influences. Results indicate that integrity test validities are substantial for predicting job performance and counterproductive behaviors on the job, such as theft, disciplinary problems, and absenteeism. The estimated mean operational predictive validity of integrity tests for predicting supervisory ratings of job performance is .41. Results from predictive validity studies conducted on applicants and using external criterion measures (i.e., excluding self-reports) indicate that integrity tests predict the broad criterion of organizationally disruptive behaviors better than they predict employee theft alone. Despite the influence of moderators, integrity test validities are positive across situations and settings. Over the last 10 years, interest in and use of integrity testing have increased substantially; the publication of a series of literature reviews attests to the interest in this area and its dynamic nature.
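The .41 figure is an operational validity: an observed validity corrected for unreliability in the criterion (and, in this literature, typically for range restriction as well) but not in the predictor, since tests are used operationally as-is. A minimal sketch of the criterion-unreliability part of that correction follows; the observed validity used here is illustrative, not a value from the meta-analysis.

```python
import math

def operational_validity(r_obs, ryy):
    """Correct an observed validity for criterion unreliability only;
    the predictor side is left uncorrected because tests are used as-is."""
    return r_obs / math.sqrt(ryy)

# Illustrative example: observed r = .27 against supervisory ratings,
# assuming a criterion interrater reliability of .52.
print(f"{operational_validity(0.27, 0.52):.2f}")  # ~ .37
```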
This study used meta-analytic methods to compare the interrater and intrarater reliabilities of ratings of 10 dimensions of job performance used in the literature; ratings of overall job performance were also examined. There was mixed support for the notion that some dimensions are rated more reliably than others. Supervisory ratings appear to have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein (1990), the mean interrater reliability of supervisory ratings of overall job performance was found to be .52. In all cases, interrater reliability is lower than intrarater reliability, indicating that the inappropriate use of intrarater reliability estimates to correct for biases from measurement error leads to biased research results. These findings have important implications for both research and practice. Several measures of job performance have been used over the years as criterion measures.
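The bias the abstract warns about is easy to see in the standard correction for attenuation: dividing by a reliability estimate that is too high under-corrects the observed validity. A minimal sketch with illustrative numbers follows; only the .52 interrater value comes from the abstract, and the observed validity and intrarater reliability are assumptions.

```python
import math

def corrected_r(r_obs, ryy):
    """Correct an observed correlation for criterion unreliability."""
    return r_obs / math.sqrt(ryy)

r_obs = 0.25        # hypothetical observed validity
interrater = 0.52   # mean interrater reliability reported above
intrarater = 0.80   # hypothetical, higher intrarater estimate

# The appropriate interrater estimate yields the larger correction;
# substituting the intrarater estimate under-corrects and biases results.
print(f"interrater-corrected: {corrected_r(r_obs, interrater):.2f}")  # ~ .35
print(f"intrarater-corrected: {corrected_r(r_obs, intrarater):.2f}")  # ~ .28
```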