2024
DOI: 10.1037/apl0001146

Meta-analytical estimates of interrater reliability for direct supervisor performance ratings: Optimism under optimal measurement designs.

Andrew B. Speer,
Angie Y. Delacruz,
Lauren J. Wegmeyer
et al.

Abstract: Performance appraisal (PA) is used for various organizational purposes and is vital to human resources practices. Despite this, current estimates of PA reliability are low, leading to decades of criticism regarding the use of PA in organizational contexts. In this article, we argue that current meta-analytical interrater reliability (IRR) coefficients are underestimates and do not reflect the reliability of interest to most practitioners and researchers: the reliability of an employee's direct supervisor. To es…

Cited by 5 publications (2 citation statements)
References 46 publications
“…Furthermore, none of the other moderators significantly influenced the interrater reliability. Although Speer et al. (2023) found a higher reliability estimate for ratings made by same-level raters (i.e., two direct supervisors) than the overall reliability reported by previous studies, which seems to suggest a moderating effect of rater job level on interrater reliability, a direct comparison of same-level versus different-level raters showed a nonsignificant difference between the two regardless of the estimator used. Moreover, as we argued above, the Morris estimator is more suitable for meta-analyzing this literature, and it produced an overall reliability value of .65 that is comparable to Speer et al.'s estimate.…”
Section: Discussion (contrasting)
confidence: 99%
“…is the best available indicator given the limited information reported in the studies examined. While this article was under review, a meta-analysis was published (Speer et al., 2023) that looked at exactly this issue and obtained a reliability estimate of .65 for ratings made by two direct supervisors. In the discussion section, we compare our findings with those of Speer et al.…”
Section: Rater Perspective (mentioning)
confidence: 99%