2010
DOI: 10.1111/j.1744-6570.2009.01164.x

Rater Source Effects Are Alive and Well After All

Abstract: Recent research has questioned the importance of rater perspective effects on multisource performance ratings (MSPRs). Although this research makes a valuable contribution, we hypothesize that it has obscured evidence for systematic rater source effects as a result of misspecified models of the structure of multisource performance ratings and inappropriate analytic methods. Accordingly, this study provides a reexamination of the impact of rater source on multisource performance ratings by presenting a set of con…

Cited by 124 publications (177 citation statements)
References 71 publications
“…Evidence supports that raters from different levels provide unique performance information (Hoffman et al., 2010; Hoffman & Woehr, 2009), implicating overlapping but distinct nomological networks for different sources' ratings. Yet the preponderance of past liking-performance rating research has used a single source of ratings, prohibiting a direct comparison across sources.…”
Section: Performance Appraisal Characteristics (mentioning)
confidence: 93%
“…Multiple sources of systematic variance characterize performance ratings (Murphy & DeShon, 2000). Variance related to actual ratee performance is considered construct valid, or true score variance; nonsystematic variance is conceptualized as error; and systematic variance that is unrelated to true score is conceptualized as bias (Hoffman, Lance, Bynum, & Gentry, 2010; Lance, Hoffman, Gentry, & Baranik, 2008). Although the preponderance of research has conceptualized the overlap between rater liking and performance ratings as indicative of rater bias, others have suggested that the relationship between liking and performance ratings reflects "true" differences in ratee performance (Allen & Rush, 1998; Lefkowitz, 2000; Varma et al., 1996).…”
Section: True Performance Interpretation (mentioning)
confidence: 99%
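As an illustrative aside, not drawn from the cited papers themselves: the variance partition described in the statement above can be written as a simple decomposition, in which an observed rating X combines true performance T, systematic bias B, and nonsystematic error E. The symbols are assumed labels for this sketch, not notation taken from Hoffman et al. (2010) or Lance et al. (2008):

% A minimal sketch of the variance partition described above.
% X, T, B, and E are illustrative labels, not the cited papers' notation.
X = T + B + E
% Assuming T, B, and E are mutually uncorrelated, observed rating
% variance splits into construct-valid, bias, and error components:
\sigma_X^2 = \sigma_T^2 + \sigma_B^2 + \sigma_E^2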
“…Leaders must satisfy multiple stakeholders (Tsui & Ashford, 1994; Tsui, Ashford, St. Clair, & Xin, 1995; see also the ecological perspective of multisource ratings: Hoffman, Lance, Bynum, & Gentry, 2010; Lance, Baxter, & Mahan, 2006; Lance, Hoffman, Gentry, & Baranik, 2008), and they must understand what signals their boss and their peers in particular attend to when evaluating them. Relying solely on top-down (i.e., boss) ratings of career derailment potential is somewhat inconsistent with the practical realities of today's socially complex workplace (Gentry & Sosik, 2010), with different stakeholders evaluating leaders based on their own specific expectations of what leaders should do and how they should behave (Tsui & Ashford, 1994; Tsui et al., 1995).…”
Section: Table (mentioning)
confidence: 99%
“…Although much debate has revolved around the meaning in rating distinctions across the various rater categories (e.g., Hoffman, Lance, Bynum, & Gentry, 2010), the inherent value of 360° feedback is that it provides the focal participant with a behaviorally based assessment of his/her performance that is less likely to be criterion deficient (compared with a single-source method). In researching all of the early works purporting the use of 360° feedback, all researchers suggested that utilizing a single-source methodology (i.e., manager ratings) for understanding performance was likely to be criterion deficient (e.g., Edwards & Ewen, 1996; Murphy & Cleveland, 1995).…”
Section: Minimizing Criterion Deficiency With Qualitative Methodologies (mentioning)
confidence: 99%