Abstract. This paper presents Multiple Speed Assessments as an umbrella term for a variety of approaches that use multiple (e.g., 20), short (e.g., 3 min), and often integrated interpersonal simulations to elicit overt behavior in a standardized way across participants. Multiple Speed Assessments can be used to gain insight into the behavioral repertoire of a target person in situations sampled from a predefined target domain, as well as into that person's intraindividual variability across these situations. This paper outlines the characteristics and theoretical basis of Multiple Speed Assessments. We also discuss several existing examples of Multiple Speed Assessments (Objective Structured Clinical Examinations, Multiple Mini-Interviews, and constructed-response multimedia tests) and provide an overview of design variations. Finally, we present current research evidence and future research directions related to Multiple Speed Assessments. Although we present Multiple Speed Assessments in the context of personnel selection, they can also be used for assessment in the educational, personality, or clinical psychology fields.
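As a minimal illustration of how data from such an assessment might be summarized (simulated ratings and hypothetical variable names, not part of the original paper), the sketch below computes each participant's mean performance and intraindividual variability across many short simulations.

```python
import numpy as np

# Hypothetical ratings: rows = participants, columns = short simulations
# (e.g., 20 three-minute interpersonal simulations, rated on a 1-5 scale)
rng = np.random.default_rng(0)
ratings = rng.normal(loc=3.5, scale=0.6, size=(100, 20))

# Mean level: a participant's typical performance across the sampled situations
mean_performance = ratings.mean(axis=1)

# Intraindividual variability: how much performance fluctuates across situations
within_person_sd = ratings.std(axis=1, ddof=1)

print(mean_performance[:5].round(2))
print(within_person_sd[:5].round(2))
```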
Recently, multiple, speeded assessments (e.g., "speeded" or "flash" role-plays) have made rapid inroads into the selection domain. So far, however, the conceptual underpinning of and empirical evidence for these short, fast-paced assessment approaches have been lacking. This raises the question of whether such speeded assessments can serve as reliable and valid indicators of future performance. This article uses the notions of stimulus and response domain sampling to conceptualize multiple, speeded behavioral job simulations as a hybrid of established simulation-based selection methods. Next, we draw on the thin-slices-of-behavior paradigm to theorize about the quality of ratings made in multiple, speeded behavioral simulations. In two studies, various assessor pools assessed a sample of 96 MBA students in eighteen 3-min role-plays designed to capture situations in the junior management domain. At the level of the individual speeded role-play, reliability and validity were not ensured. Yet, when aggregated across all assessors' ratings of all speeded role-plays, the validity of the overall score for predicting future performance was high (.54). Validities remained high when assessors evaluated only the first minute (vs. the full 3 min) or received only a control training (vs. traditional assessor training). Aggregating ratings of performance in multiple, heterogeneous situations that elicit a variety of domain-relevant behavior emerged as a key requirement for obtaining adequate domain coverage, capturing both ability and personality (extraversion and agreeableness), and achieving substantial validities. Overall, these results underscore the importance of the stimulus and response domain sampling logic and send a strong warning against using "single" speeded behavioral simulations in practice.
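The aggregation logic described above can be sketched in a few lines of simulated code (the data, the noise levels, and the criterion are placeholders, not the study's actual measures): many noisy thin-slice ratings are averaged into one overall score, whose correlation with a later performance criterion is then inspected.

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_roleplays, n_assessors = 96, 18, 3

# A latent "true" performance level per participant (simulated)
true_perf = rng.normal(size=n_participants)

# Each single rating is a noisy thin slice of that level
ratings = (true_perf[:, None, None]
           + rng.normal(scale=2.0, size=(n_participants, n_roleplays, n_assessors)))

# A future performance criterion, also noisy (simulated)
criterion = true_perf + rng.normal(scale=1.0, size=n_participants)

def validity(scores, crit):
    return np.corrcoef(scores, crit)[0, 1]

# A single role-play rated by a single assessor: unreliable, low validity
print(round(validity(ratings[:, 0, 0], criterion), 2))

# Overall score aggregated across all role-plays and assessors: much higher validity
overall = ratings.mean(axis=(1, 2))
print(round(validity(overall, criterion), 2))
```

The contrast between the two printed correlations mirrors the paper's warning: a "single" speeded simulation is a poor basis for prediction, whereas aggregation across many heterogeneous situations and raters recovers a substantial validity.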
Over the years, various governmental, employment, and academic organizations have identified skills needed to successfully master the challenges of the 21st century. So far, adequate assessment of these skills across countries has remained challenging. Limitations inherent in the use of self-reports (e.g., lack of self-insight, socially desirable responding, response style bias, reference group bias) have spurred the search for methods that could complement or even substitute for self-report inventories. Situational judgment tests (SJTs) have been proposed as one such complement or alternative to traditional self-report inventories. SJTs are low-fidelity simulations that confront participants with multiple domain-relevant situations and ask them to choose from a set of predefined responses. Our objectives are twofold: (a) outlining how a combined emic-etic approach can be used for developing SJT items that can be used across geographical regions and (b) investigating whether SJT scores can be compared across regions. Our data come from Laureate International Universities (N = 5,790) and comprise test-takers from Europe and Latin America who completed five different SJTs that were developed in line with a combined emic-etic approach. Results showed evidence for metric measurement invariance across participants from Europe and Latin America for all five SJTs. Implications for the use of SJTs as measures of 21st century skills are discussed.
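The invariance analyses reported above rely on multi-group confirmatory factor models. As a rough, hedged proxy for the underlying idea (simulated item responses, hypothetical group sizes and loadings; not the study's analysis pipeline), the sketch below estimates one-factor loadings separately per region and compares them, since metric invariance amounts to factor loadings being equal across groups.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(2)

def simulate_sjt(n, loadings):
    """Simulate SJT item scores driven by one latent skill factor."""
    factor = rng.normal(size=(n, 1))
    noise = rng.normal(size=(n, len(loadings)))
    return factor @ np.atleast_2d(loadings) + noise

# Assumed population loadings, identical in both regions (the invariance hypothesis)
loadings = np.array([0.70, 0.60, 0.80, 0.50, 0.65])
europe = simulate_sjt(3000, loadings)
latam = simulate_sjt(2790, loadings)

def one_factor_loadings(X):
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(X)
    return fa.loadings_.ravel()

# Metric invariance intuition: estimated loadings should be (approximately)
# equal across the two regional groups.
print(one_factor_loadings(europe).round(2))
print(one_factor_loadings(latam).round(2))
```

A formal test would constrain the loadings to equality in a multi-group model and compare fit against an unconstrained model; the side-by-side comparison here only conveys the intuition.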