2002
DOI: 10.1207/s15327043hup1504_02

Impact of Common Rater Variance on Construct Validity of Assessment Center Dimension Judgments

Abstract: In an assessment center (AC), assessors generally rate an applicant's performance on multiple dimensions within a single exercise. This rating procedure introduces common rater variance within exercises but not between exercises. This article hypothesizes that this phenomenon is partly responsible for the consistently reported finding that the AC lacks construct validity. Therefore, in this article, the rater effect on discriminant and convergent validity is standardized via a multitrait-multimethod design in which e…
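The mechanism hypothesized in the abstract lends itself to a short simulation. The sketch below is a minimal illustration; the variance components, sample size, and NumPy setup are assumptions for demonstration, not values from the paper. It shows how a rater effect shared by all dimensions within an exercise inflates different-dimension same-exercise (DDSE) correlations, hurting discriminant validity, while leaving same-dimension cross-exercise (convergent) correlations untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_dim, n_ex = 2000, 3, 2  # candidates, dimensions, exercises

# Illustrative standard deviations (assumptions, not values from the paper):
sd_trait, sd_rater, sd_noise = 1.0, 1.5, 0.5

# A candidate's true standing on each dimension, stable across exercises...
trait = sd_trait * rng.normal(size=(n, n_dim, 1))
# ...plus a rater effect shared by every dimension scored within one exercise.
rater = sd_rater * rng.normal(size=(n, 1, n_ex))
noise = sd_noise * rng.normal(size=(n, n_dim, n_ex))

ratings = trait + rater + noise  # broadcasts to shape (n, n_dim, n_ex)

# Different dimensions, same exercise (DDSE): shares the rater effect.
r_ddse = np.corrcoef(ratings[:, 0, 0], ratings[:, 1, 0])[0, 1]
# Same dimension, different exercises (convergent): no shared rater effect.
r_conv = np.corrcoef(ratings[:, 0, 0], ratings[:, 0, 1])[0, 1]

print(f"DDSE (discriminant) r ~ {r_ddse:.2f}")  # about .64 here: inflated
print(f"convergent r ~ {r_conv:.2f}")           # about .29 here: trait signal only
```

With these illustrative values the discriminant correlation exceeds the convergent one, reproducing the classic exercise-effect pattern the abstract describes.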

Cited by 23 publications (25 citation statements). References 22 publications.

“…But the results of Kolk et al. (2002) did show a reduction in DDSE correlations for the experimental rating method as compared to the more traditional within‐exercise method. Does that mean that the latter method engenders within‐exercise halo error?…”
mentioning; confidence: 82%
“…Thus, OPRs as one index of assessors’ overall impressions of AC candidates’ performance are clearly performance related and are not merely indicative of performance‐irrelevant halo error. As such, I suggest that although the mean difference in DDSE correlations between the experimental and the control (within‐exercise rating method) groups in the study of Kolk et al. (2002) may reflect a reduction in halo error in the experimental group, it may also reflect attenuation in these correlations in the experimental group due to reduced reliability as a function of using a single assessor’s ratings versus some aggregate of multiple assessors’ ratings. Additional research is needed to settle this issue.…”
mentioning; confidence: 99%
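The attenuation point in this excerpt follows the standard classical-test-theory relations; the formulas below are a generic psychometric sketch in textbook notation, not drawn from either paper. Observed correlations shrink with the reliabilities of the two measures, and aggregating over k assessors raises reliability, so single-assessor ratings can depress DDSE correlations without any change in halo.

```latex
% Correction for attenuation: the observed correlation is the true
% correlation scaled by the square roots of the two reliabilities.
\[
r_{XY}^{\mathrm{obs}} \;=\; r_{XY}^{\mathrm{true}}\,\sqrt{\rho_{XX'}\,\rho_{YY'}}
\]
% Spearman-Brown: reliability of the mean of k parallel assessors' ratings,
% given single-assessor reliability \rho_1; rho_k > rho_1 for k > 1.
\[
\rho_k \;=\; \frac{k\,\rho_1}{1+(k-1)\,\rho_1}
\]
```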
“…In fact, articles on the internal construct-related validity of ACs typically end by stating that determining the amount of exercise/dimension variance in AC ratings reflects only one side of the equation (Haaland & Christiansen, 2002; Kolk, Born, & van der Flier, 2002; Lievens & Conway, 2001; Robie, Osburn, Morris, Etchegaray, & Adams, 2000; Woehr & Arthur, 2003). Many of these researchers have called for an investigation of how much variance exercises and dimensions explain in an external criterion such as job performance or extrinsic career success (i.e., salary, career progression).…”
Section: Construct-related and Criterion-related Validity
mentioning; confidence: 99%
“…Finally, it should be noted that not all assessment center effects were included in our analyses. Previous studies have suggested that individual raters may influence assessment center PEDRs (e.g., Arthur et al., 2000; Kolk, Born, & Flier, 2002). Since multiple ratings are collapsed in the formation of MTMM matrices, the assessment of such an effect was not possible.…”
Section: Limitations
mentioning; confidence: 99%