This article reports a meta-analysis of studies examining the predictive validity of the Implicit Association Test (IAT) and explicit measures of bias for a wide range of criterion measures of discrimination. The meta-analysis estimates the heterogeneity of effects within and across 2 domains of intergroup bias (interracial and interethnic), 6 criterion categories (interpersonal behavior, person perception, policy preference, microbehavior, response time, and brain activity), 2 versions of the IAT (stereotype and attitude IATs), 3 strategies for measuring explicit bias (feeling thermometers, multi-item explicit measures such as the Modern Racism Scale, and ad hoc measures of intergroup attitudes and stereotypes), and 4 criterion-scoring methods (computed majority-minority difference scores, relative majority-minority ratings, minority-only ratings, and majority-only ratings). IATs were poor predictors of every criterion category other than brain activity, and the IATs performed no better than simple explicit measures. These results have important implications for the construct validity of IATs, for competing theories of prejudice and attitude-behavior relations, and for measuring and modeling prejudice and discrimination.
Many psychological tests have arbitrary metrics but are appropriate for testing psychological theories. Metric arbitrariness is a concern, however, when researchers wish to draw inferences about the true, absolute standing of a group or individual on the latent psychological dimension being measured. The authors illustrate this in the context of 2 case studies in which psychologists need to develop inventories with nonarbitrary metrics. One example comes from social psychology, where researchers have begun using the Implicit Association Test to provide the lay public with feedback about their "hidden biases" via popular Web pages. The other example comes from clinical psychology, where researchers often wish to evaluate the real-world importance of interventions. As the authors show, both pursuits require researchers to conduct formal research that makes their metrics nonarbitrary by linking test scores to meaningful real-world events.