2016
DOI: 10.1017/iop.2016.88

Comparing Rater Groups: How To Disentangle Rating Reliability From Construct-Level Disagreements

Abstract: In this commentary, we build on Bracken, Rose, and Church's (2016) definition stating that 360° feedback should involve "the analysis of meaningful comparisons of rater perceptions across multiple ratees, between specific groups of raters" (p. 764). Bracken et al. expand on this component of the definition later by stressing that "the ability to conduct meaningful comparisons of rater perceptions both between (inter) and within (intra) groups is central and, indeed, unique to any true 360° feedback process" (p. …

Cited by 4 publications (2 citation statements)
References 17 publications

“…On the surface, these results appear inconsistent with the observation of measurement equivalence across the different levels of opportunity to observe in Study 1. The results might also be viewed as inconsistent with work by Viswesvaran, Schmidt, and Ones () and Viswesvaran, Ones, and Schmidt (), suggesting that job performance ratings are composed of true score variance and rater idiosyncratic error and are not impacted by source of rating (e.g., supervisors vs. peers). One possible explanation could be that at low levels of opportunity to observe, supervisors' ratings are contaminated by other variables (e.g., extroversion) that are a source of shared variance across supervisors but are not related to job performance.…”
Section: Discussion
Classification: contrasting (confidence: 75%)
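
The variance decomposition this statement alludes to can be made explicit. The following is a minimal sketch, not drawn from the cited papers themselves: the classical model in which an observed rating splits into a true score plus rater-idiosyncratic error, extended with a hypothetical contaminant term (labeled C here purely for illustration, e.g., ratee extroversion) that is shared across supervisors but unrelated to performance.

```latex
% Rating X_{ij} of ratee j by rater i: true score plus idiosyncratic error.
\[
X_{ij} = T_j + E_{ij},
\qquad
\rho_{\text{interrater}} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}.
\]

% Hypothetical contamination at low opportunity to observe: a component
% C_j (e.g., extroversion), shared across supervisors but uncorrelated
% with true performance T_j, inflates apparent interrater agreement:
\[
X_{ij} = T_j + C_j + E_{ij},
\qquad
\rho_{\text{interrater}} =
  \frac{\sigma_T^2 + \sigma_C^2}{\sigma_T^2 + \sigma_C^2 + \sigma_E^2}.
\]
```

On this reading, strong agreement among supervisors with little opportunity to observe need not reflect valid performance variance, since the shared contaminant variance σ²_C masquerades as true-score variance.
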
“…In all these cases, statistical and psychometric misunderstandings led to misinformed analyses and/or inaccurate results. Significant resources and subsequent articles had to be devoted to correcting the scientific record (e.g., Harris et al., 2012; Ones, Viswesvaran, & Schmidt, 2012; Roth et al., 2014; Roth, Le, Oh, Van Iddekinge, & Robbins, 2017; Viswesvaran, Ones, & Schmidt, 2016; and Viswesvaran, Ones, Schmidt, Le, & Oh, 2014).…”

Section: Standards for Generalization and Indices for Precision
Classification: mentioning (confidence: 99%)