Reviews of self-supervisor, self-peer, and peer-supervisor ratings have generally concluded that there is at best a modest correlation between different rating sources. Nevertheless, there has been much inconsistency across studies. Accordingly, a meta-analysis was conducted. The results indicated a relatively high correlation between peer and supervisor ratings (ρ = .62) but only moderate correlations between self-supervisor (ρ = .35) and self-peer ratings (ρ = .36). While rating format (dimensional versus global) and rating scale (trait versus behavioral) had little impact as moderators, job type (managerial/professional versus blue-collar/service) did appear to moderate self-peer and self-supervisor correlations.

The use of multiple sources for performance ratings has gained considerable acceptance in the past two decades. Numerous advantages of using multiple raters have been cited: for example, enhanced ability to observe and measure various job facets (Henderson, 1984), greater reliability, fairness, and ratee acceptance (Latham & Wexley, 1982), and improved defensibility of the performance appraisal program from a legal standpoint (Bernardin & Beatty, 1984). The literature on multiple raters has focused on self-ratings. A number of scholars have argued that self-ratings can promote personal development, improve communication between supervisors and subordinates, and clarify differences of opinion between supervisors and other managers (Carroll & Schneier, 1982; Cummings & Schwab, 1973).

Despite the alleged gains from self-ratings, empirical research shows frequent lack of agreement between self-ratings and those provided by other sources. Mabe and West (1982) reviewed a number of studies and found, on average, a low correlation between self-ratings and others' ratings, including supervisor and peer appraisals. Thornton (1980), in his summary of self-ratings, concluded that "individuals have a significantly different view of their own job performance than that held by other people" (p. 268). Nevertheless, both Mabe and West and Thornton found that research often
Previous cross-sectional field and laboratory research has provided mixed results as to whether recruiter characteristics and behaviors influence applicant reactions to employment opportunities. The present research was conducted to examine the effect of recruiter characteristics using a pre-post study design in a naturally occurring setting. In addition, the effects of several potential moderators on recruiter influence were tested. Results indicated that recruiter characteristics had an impact on perceived job attributes, regard for job and company, and likelihood of joining the company. There was little evidence that the effect of recruiter characteristics was moderated by selected applicant, job, or interviewer variables.
Literature since the last comprehensive review of research on the employment interview is summarized, and suggestions for future studies in this area are described. Major changes in findings regarding the validity of the interview, the impact of applicant sex, and the effect of interviewer characteristics/behavior on applicant reactions, as well as other issues, are reported. Contrary to the widely held belief that the interview has low validity, recent research indicates at least modest validity for this selection tool. Conversely, the effect of the campus interview on applicant reactions has been seriously questioned. Researchers are urged to examine several areas in social psychology, including the literature on attitudes-intentions-behavior, the elaboration likelihood model, and theories of discrimination to achieve greater understanding of the employment interview.
A series of Monte Carlo computer simulations was conducted to investigate (a) the likelihood that meta-analysis will detect true differences in effect sizes rather than attributing differences to methodological artifact and (b) the likelihood that meta-analysis will suggest the presence of moderator variables when in fact differences in effect sizes are due to methodological artifact. The simulations varied the magnitude of the true population differences between correlations, the number of studies included in the meta-analysis, and the average sample size. Simulations were run both correcting for and not correcting for measurement error. The power of three indexes (the Schmidt-Hunter ratio of expected to observed variance, the Callender-Osburn procedure, and a chi-square test) to detect true differences was investigated. Small true differences will not be detected regardless of sample size and number of studies, and moderate true differences will not be detected with small numbers of studies or small sample sizes. Hence there is a need for caution in attributing observed variation across studies to artifact.

Meta-analysis cumulates findings from different studies of the same phenomenon to determine whether meaningful general conclusions can be made and justified. Typically, psychologists have relied on narrative procedures, for example, "8 studies found a significant relationship between unemployment and self-esteem; 5 studies found no relationship; thus more research is needed to resolve the issue." Techniques have been developed independently by Glass (e.g., Glass, McGaw, & Smith, 1981) and by Schmidt and Hunter (e.g., Hunter, Schmidt, & Jackson, 1982) to cumulate research findings quantitatively. Essentially, in meta-analysis, studies replace individuals as the unit of analysis. If 100 studies have been done examining the relation between, say, verbal fluency and job performance, the correlation between verbal fluency and job performance in each study becomes the measure of effect size. Glassian meta-analysis then seeks to identify the factors that influence the variability in these effect size measures from study to study. Features of each study, such as type of job, employee age, and length of service, are correlated with the effect size measures to determine which features explain variability in effect sizes across studies and thus serve as moderators of the verbal fluency-job performance relation. (Note that a variety of effect size measures can be used. In experimental studies, the most common measure is d, the difference between two group means divided by the standard deviation; in studies involving two continuous variables, the most common measure is the correlation coefficient. Due to the frequent usage of correlational research in organizational settings, this article focuses
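To make the logic of such a simulation concrete, the following is a minimal sketch of one Monte Carlo condition. It is not the authors' actual simulation design: the parameter values (k studies, sample size n, true correlation rho, moderator gap delta) and the function name simulate_condition are illustrative assumptions, and only one of the three indexes, the Schmidt-Hunter ratio of expected (sampling-error) variance to observed variance with the conventional 75% decision rule, is implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_condition(k=25, n=100, rho=0.25, delta=0.2, reps=1000):
    """Estimate how often the Schmidt-Hunter 75% rule flags a moderator.

    Half the k studies are drawn from a subpopulation with true
    correlation rho, the other half from rho + delta (the assumed
    "true difference" in effect sizes).
    """
    flagged = 0
    for _ in range(reps):
        rhos = np.where(np.arange(k) % 2 == 0, rho, rho + delta)
        rs = np.empty(k)
        for i, true_r in enumerate(rhos):
            # Sample n cases from a bivariate normal with the study's
            # true correlation, then compute the observed study r.
            cov = [[1.0, true_r], [true_r, 1.0]]
            x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
            rs[i] = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
        r_bar = rs.mean()
        observed_var = rs.var()
        # Sampling-error variance expected if a single population rho
        # underlies every study (standard Hunter-Schmidt formula).
        expected_var = (1.0 - r_bar**2) ** 2 / (n - 1)
        # 75% rule: if artifact explains less than 75% of the observed
        # variance, a moderator is inferred.
        if expected_var / observed_var < 0.75:
            flagged += 1
    return flagged / reps

print(simulate_condition())           # power for a moderate true difference
print(simulate_condition(delta=0.0))  # false-alarm rate with no true difference
```

Under these assumptions, the proportion of replications flagged when delta = 0 approximates the false-positive question raised in (b) above, while runs with delta > 0 approximate the detection-power question raised in (a); sweeping k and n reproduces, in miniature, the kind of design the abstract describes.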