We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = .27 (k = 770), as compared to r = .08 (k = 386) across 42 meta-analyses examining variables against introspectively assessed criteria (e.g., self-report). Using Hemphill's (2003) data-driven guidelines for interpreting the magnitude of assessment effect sizes with only externally assessed criteria, we found 13 variables had excellent support (r ≥ .33, p < .001, fail-safe N [FSN] > 50), 17 had good support (r ≥ .21, p < .05, FSN ≥ 10), 10 had modest support (p < .05 and either r ≥ .21 with FSN < 10, or r = .15-.20 with FSN ≥ 10), 13 had little support (p < .05 and either r < .15 or FSN < 10) or no support (p > .05), and 12 had no construct-relevant validity studies. The variables with the strongest support were largely those that assess cognitive and perceptual processes (e.g., Perceptual-Thinking Index, Synthesized Response); those with the least support tended to be very rare (e.g., Color Projection) or some of the more recently developed scales (e.g., Egocentricity Index, Isolation Index). Our findings are less positive, more nuanced, and more inclusive than those reported in the CS test manual. We discuss study limitations and the implications for research and clinical practice, including the importance of using different methods in order to improve our understanding of people.
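The FSN thresholds above appear to refer to Rosenthal's (1979) fail-safe N, the number of unretrieved null-result studies that would be needed to raise a meta-analysis's combined one-tailed p above .05. As a minimal illustrative sketch (the function name and inputs are ours, not from the article):

```python
from math import isclose

def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's (1979) fail-safe N.

    Given the one-tailed z scores of k retrieved studies, return the
    number of averaged-null (z = 0) studies that would have to exist
    in file drawers to drag the combined Stouffer z down to z_crit
    (1.645 for one-tailed p = .05).
    """
    k = len(z_scores)
    z_sum = sum(z_scores)
    # Solve z_sum / sqrt(k + N) = z_crit for N.
    return (z_sum ** 2) / (z_crit ** 2) - k

# Ten studies each with z = 2.0 tolerate roughly 138 hidden nulls.
print(failsafe_n([2.0] * 10))
```

A single study sitting exactly at the criterion (z = 1.645) yields FSN = 0, as expected: one filed-away null would push it past the threshold.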
Wood, Garb, Nezworski, Lilienfeld, and Duke (2015) found our systematic review and meta-analyses of 65 Rorschach variables to be accurate and unbiased, and hence removed their previous recommendation for a moratorium on the applied use of the Rorschach. However, Wood et al. (2015) hypothesized that publication bias would exist for 4 Rorschach variables. To test this hypothesis, they replicated our meta-analyses for these 4 variables and added unpublished dissertations to the pool of articles. In the process, they used procedures that contradicted their standards and recommendations for sound Rorschach research, which consistently led to significantly lower effect sizes. In reviewing their meta-analyses, we found numerous methodological errors, data errors, and omitted studies. In contrast to their strict requirements for interrater reliability in the Rorschach meta-analyses of other researchers, they did not report interrater reliability for any of their coding and classification decisions. In addition, many of their conclusions were based on a narrative review of individual studies and post hoc analyses rather than their meta-analytic findings. Finally, we challenge their sole use of dissertations to test publication bias because (a) they failed to reconcile their conclusion that publication bias was present with the analyses we conducted showing its absence, and (b) we found numerous problems with dissertation study quality. In short, one cannot rely on the findings or the conclusions reported in Wood et al.
Using 100 clinical cases, we examined the construct validity of the Mutuality of Autonomy (MOA) Scale (Urist, 1977) using Westen and Rosenthal's (2003) r(contrast-CV) procedure, a construct validity (CV) effect size quantifying the pattern of convergent-discriminant relationships between a target measure and a set of criterion variables. Our 15 criterion variables included the Comprehensive System (CS; Exner, 2003) variables, a CS-based measure of ego strength (Resnick, 1994), and 3 subscales from the Social Cognition and Object Relations Scale (Westen, Lohr, Silk, Kerber, & Goodrich, 1990). We generated the r(contrast-CV) coefficients to test 2 competing hypotheses: that the MOA Scale primarily measures object relations (OR) quality or that it primarily measures psychopathology. Results suggest that the MOA Scale is an equally potent measure of OR and psychopathology regardless of the MOA Scale index used.
This article documents and discusses the importance of using a formal systematic approach to validating psychological tests. To illustrate, results are presented from a systematic review of the validity findings cited in the Rorschach Comprehensive System (CS; Exner, 2003) test manual, originally conducted during the manuscript review process for Mihura, Meyer, Dumitrascu, and Bombel's (2013) CS meta-analyses. Our review documents (a) the degree to which the CS test manual reports validity findings for each test variable, (b) whether these findings are publicly accessible or unpublished studies coordinated by the test developer, and (c) the presence and nature of data discrepancies between the CS test manual and the cited source. Implications are discussed for the CS in particular, the Rorschach more generally, and psychological tests more broadly. Notably, a history of intensive scrutiny of the Rorschach has resulted in more stringent standards applied to it, even though its scales have more published and supportive construct validity meta-analyses than any other psychological test. Calls are made for (a) a mechanism to correct data errors in the scientific literature and (b) guidelines for test developers' key unpublished studies.
We examined the structure of 9 Rorschach variables related to hostility and aggression (Aggressive Movement, Morbid, Primary Process Aggression, Secondary Process Aggression, Aggressive Content, Aggressive Past, Strong Hostility, Lesser Hostility) in a sample of medical students (N = 225) from the Johns Hopkins Precursors Study (The Johns Hopkins University, 1999). Principal components analysis revealed 2 dimensions accounting for 58% of the total variance. These dimensions extended previous findings for a 2-component model of Rorschach aggressive imagery that had been identified using just 5 or 6 marker variables (Baity & Hilsenroth, 1999; Liebman, Porcerelli, & Abell, 2005). In light of this evidence, we draw an empirical link between the historical research literature and current studies of Rorschach aggression and hostility that helps organize their findings. We also offer suggestions for condensing the array of aggression-related measures to simplify Rorschach aggression scoring.