What are the effects of revealing the rated dimensions to candidates in an assessment centre? This question is addressed in two independent studies using individual exercises. Results in Study 1 showed no difference in construct-related validity between a transparent (N = 99) and a non-transparent (N = 50) group of university students, contrary to previous findings by Kleinmann, Kuptsch, and Köller (1996) and Kleinmann (1997), who used group exercises. Mean ratings also did not differ, the exception being the dimension "Sensitivity", which increased slightly under the transparency condition. Conversely, results in Study 2, based on a sample of actual job applicants, showed a significant improvement in construct-related validity for the transparent group (N = 297) compared with the non-transparent group (N = 393). Again, mean ratings did not differ between the two groups. Implications of these findings for practice and suggestions for future research are discussed in this paper.
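To make "construct-related validity" concrete, the sketch below shows one common way to summarize it from a multitrait-multimethod correlation matrix: mean convergent correlations (same dimension across exercises) versus mean within-exercise correlations (different dimensions in the same exercise). The data, dimension names, and column scheme are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical ratings: 3 dimensions rated in each of 2 individual exercises.
# Columns follow an assumed "dimension_exercise" naming scheme.
dims = ("sensitivity", "planning", "persuasion")
cols = [f"{d}_ex{e}" for e in (1, 2) for d in dims]
df = pd.DataFrame(rng.normal(size=(99, 6)), columns=cols)

corr = df.corr()

# Convergent validity: same dimension across exercises (higher is better).
convergent = np.mean([corr.loc[f"{d}_ex1", f"{d}_ex2"] for d in dims])

# Within-exercise (discriminant) correlations: different dimensions in the
# same exercise (lower is better for construct-related validity).
within = [corr.loc[f"{a}_ex{e}", f"{b}_ex{e}"]
          for e in (1, 2)
          for i, a in enumerate(dims) for b in dims[i + 1:]]
discriminant = np.mean(within)

print(f"mean convergent r = {convergent:.2f}, "
      f"mean within-exercise r = {discriminant:.2f}")
```

Computed separately per group, these two summaries support the transparent versus non-transparent comparison: construct-related validity improves as convergent correlations rise relative to within-exercise correlations.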
In an assessment center (AC), assessors generally rate an applicant's performance on multiple dimensions within a single exercise. This rating procedure introduces common rater variance within exercises but not between exercises. This article hypothesizes that this phenomenon is partly responsible for the consistently reported finding that the AC lacks construct validity. Therefore, the rater effect on discriminant and convergent validity is controlled via a multitrait-multimethod design in which each matrix cell is based on ratings from different assessors. Two independent studies (N = 200, N = 52) showed that within-exercise correlations decrease when common rater variance is excluded, both between exercises (by having each assessor rate only one exercise) and within exercises (by having each assessor rate only one dimension per exercise). Implications are discussed in the context of the recent debate about the appropriateness of the within-exercise versus the within-dimension evaluation method.
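The common-rater-variance argument can be demonstrated with a small simulation; the sketch below (all effect sizes and names are assumptions) shows that a rater bias shared across dimensions inflates within-exercise correlations only when one assessor rates every dimension of an exercise.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # simulated candidates

# Two uncorrelated true dimensions observed within one exercise.
trait_a = rng.normal(size=n)
trait_b = rng.normal(size=n)

# Same assessor rates both dimensions: one rater bias shared per candidate.
shared_bias = rng.normal(scale=1.0, size=n)
same_rater_a = trait_a + shared_bias + rng.normal(scale=0.5, size=n)
same_rater_b = trait_b + shared_bias + rng.normal(scale=0.5, size=n)

# A different assessor per dimension: independent biases, so no common
# rater variance enters the within-exercise correlation.
diff_rater_a = trait_a + rng.normal(scale=1.0, size=n) + rng.normal(scale=0.5, size=n)
diff_rater_b = trait_b + rng.normal(scale=1.0, size=n) + rng.normal(scale=0.5, size=n)

print("within-exercise r, same rater:      %.2f"
      % np.corrcoef(same_rater_a, same_rater_b)[0, 1])
print("within-exercise r, different raters: %.2f"
      % np.corrcoef(diff_rater_a, diff_rater_b)[0, 1])
```

With these assumed variances, the same-rater correlation comes out around .45 even though the underlying traits are uncorrelated, while the different-rater correlation stays near zero, which mirrors the article's hypothesized mechanism.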
This study compares the traditional procedure of observing assessment center exercises while taking notes with an alternative procedure in which assessors merely observe and postpone note-taking until immediately after the exercise. The first procedure is considered cognitively demanding because it requires simultaneous note-taking and observing, and this dual-task processing is considered especially demanding for assessors without rating experience. The procedures were evaluated in a 2 × 2 design (with vs. without note-taking × experienced vs. inexperienced assessors). In total, 121 experienced and inexperienced assessors rated videotaped candidates, observing either with or without taking notes. Results showed that experienced assessors achieved significantly higher differential accuracy than inexperienced assessors. We found no effect of observation procedure on accuracy, interrater reliability, or halo. Implications for future research are discussed.
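Two of the dependent measures in this design can be operationalized in a few lines. The sketch below uses simulated ratings and one common definition of each index; the dimension names, scales, and scoring choices are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_candidates = 30

# Simulated ratings of the same videotaped candidates by two assessors
# on a single dimension.
true_score = rng.normal(loc=3.0, scale=0.6, size=n_candidates)
assessor_1 = true_score + rng.normal(scale=0.3, size=n_candidates)
assessor_2 = true_score + rng.normal(scale=0.3, size=n_candidates)

# Interrater reliability as the Pearson correlation between assessors.
irr = np.corrcoef(assessor_1, assessor_2)[0, 1]

# Halo as the mean intercorrelation among one assessor's ratings of
# distinct dimensions: higher values indicate less differentiated ratings.
dims = pd.DataFrame(
    true_score[:, None] * 0.4 + rng.normal(scale=0.8, size=(n_candidates, 4)),
    columns=["sensitivity", "planning", "persuasion", "judgment"],
)
halo = dims.corr().to_numpy()[np.triu_indices(4, k=1)].mean()

print(f"interrater reliability r = {irr:.2f}")
print(f"halo (mean inter-dimension r) = {halo:.2f}")
```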
This study examined the influence on construct validity of implementing the triad Feeling, Thinking, and Power as a taxonomy for behavioral dimensions in assessment center (AC) exercises. A sample of 1,567 job applicants participated in an AC developed specifically according to this taxonomy. Each exercise tapped three dimensions, one from each cluster of the taxonomy. Confirmatory factor analysis of the multitrait-multimethod matrix showed evidence of construct validity: the ratings matched the a priori triadic grouping to a good extent. Practical implications are discussed.
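A confirmatory factor analysis of such a triadic design can be sketched with the third-party semopy package (an assumption; the abstract does not name the software used), specifying one latent factor per cluster indicated by its rating in each exercise. The column names and simulated loadings below are hypothetical.

```python
import numpy as np
import pandas as pd
import semopy  # third-party SEM package, assumed here for illustration

rng = np.random.default_rng(3)
n = 1567

# Simulated ratings: three exercises, each tapping one dimension per cluster.
factors = rng.normal(size=(n, 3))  # latent Feeling, Thinking, Power
cols, data = [], []
for ex in (1, 2, 3):
    for i, cluster in enumerate(("feeling", "thinking", "power")):
        cols.append(f"{cluster}_ex{ex}")
        data.append(0.7 * factors[:, i] + rng.normal(scale=0.7, size=n))
df = pd.DataFrame(np.column_stack(data), columns=cols)

# Measurement model: one factor per cluster of the taxonomy.
desc = """
Feeling  =~ feeling_ex1 + feeling_ex2 + feeling_ex3
Thinking =~ thinking_ex1 + thinking_ex2 + thinking_ex3
Power    =~ power_ex1 + power_ex2 + power_ex3
"""
model = semopy.Model(desc)
model.fit(df)
print(semopy.calc_stats(model).T)  # fit indices such as CFI and RMSEA
```

Good fit of this dimension-factor model relative to an exercise-factor alternative is the kind of evidence the abstract describes as ratings matching the a priori triadic grouping.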