1998
DOI: 10.1111/1468-2389.00085
Factors which Improve the Construct Validity of Assessment Centers: A Review

Abstract: This article reviews 21 studies which manipulated specific variables to determine their impact on the construct validity of assessment centers. This review shows that the studies regarding the impact of different observation, evaluation, and integration procedures yielded mixed results. Conversely, dimension factors (number, conceptual distinctiveness, and transparency), assessor factors (type of assessor and type of assessor training), and exercise factors (exercise form and use of role-players) were found to…

Cited by 91 publications (114 citation statements)
References 41 publications
“…However, empirical findings accumulated over the last 20 years do not support this theoretical architecture: correlations between different dimensions within the same exercise tend to be much larger than the correlations between dimensions across different exercises (Woehr & Arthur, 1999). In other words, exercise effects dominate over dimension effects on PEDRs and, as has been documented by several reviews (e.g., Lievens, 1998; Lievens & Conway, 2001; Sackett & Tuzinski, 2001; Woehr & Arthur, 1999), various AC design modifications have had little or no effect on this basic pattern of findings. Rather, and contrary to the original design of the AC architecture, candidate performance seems to be relatively undifferentiated across the various dimensions defined for each exercise and cross-situationally specific, as exercises define different performance situations.…”
Section: Discussion
confidence: 94%
“…On the basis of prior research, several design considerations have been suggested for increasing the construct-related validity of ACs (see Lievens, 1998; Lievens & Conway, 2001; Woehr & Arthur, 2003). Consider, for instance, the suggestion to limit the number of exercises.…”
Section: Discussion
confidence: 99%
“…Consider, for instance, the suggestion to limit the number of exercises. Whereas a more diverse set of exercises seems to reduce the convergence of dimension ratings across exercises (i.e., lower convergent validity; Lievens, 1998; Schneider & Schmitt, 1992), the opposite might be true for criterion-related validity (i.e., a more diverse set of job-related exercises might increase criterion-related validity). We need to test these predictions by integrating different validity designs in the future.…”
Section: Discussion
confidence: 99%
“…As is common in ACs, the configuration here was not fully crossed in that not every dimension was assessed in every exercise (see Appendix, Table A3). In accordance with design suggestions about minimizing cognitive load, the number of dimensions assessed within any given exercise was kept to a minimum (Chan, 1996; Lievens, 1998). Because this was a high-stakes evaluation, dimensions were not revealed to participants prior to the AC taking place.…”
Section: AC Design and Development
confidence: 99%
“…Our AC was, however, developed in keeping with international guidelines (International Task Force on Assessment Center Guidelines, 2009; International Taskforce on Assessment Center Guidelines, 2015) as well as with guidelines in the literature for the development of dimensions (Arthur et al., 2003; Guenole et al., 2013; Lievens, 1998), exercises (Thornton & Mueller-Hanson, 2004), and on training for ACs (Gorman & Rentsch, 2009; Macan et al., 2011).…”
Section: Limitations and Future Directions
confidence: 99%