1996
DOI: 10.1207/s15327752jpa6602_9

Standardizing Procedures for Calculating Rorschach Interrater Reliability: Conceptual and Empirical Foundations

Abstract: Although the Rorschach test has demonstrated significant refinements in reliability, validity, and statistical power as a result of the procedural standardization and scoring innovations introduced by Exner's Comprehensive System, the issue of Rorschach interrater reliability remains unexplored. This article examines the psychometric foundations of Rorschach interrater reliability and applies notions from applied behavioral analysis to the treatment of Rorschach data. We empirically compare 3 methods of quanti…

Cited by 59 publications (50 citation statements)
References 0 publications
“…Alternatively, poor inter-rater reliability would prevent consistent interpretation across examiners, thus compromising the utility because of variability in scoring and interpretation. A great deal of research with various Rorschach systems and scores has demonstrated strong inter-rater reliability (Acklin, McDowell, Verschell, & Chan, 2000; McDowell & Acklin, 1996; Exner, 1993; Meyer & Archer, 2001; Viglione, 1999; Viglione & Meyer, 2008; Viglione & Taylor, 2003). Inter-rater reliability for the CS (Exner, 2003) has been particularly well-established, with a large-scale meta-analysis of CS inter-rater reliability reporting high inter-rater reliability for the majority of CS scores (specifically, median intraclass correlations for statistically stable scores ranged from .72 to .96; Meyer et al., 2002).…”
Section: Inter-rater Reliability (R-PAS)
confidence: 99%
“…Kappa coefficients were calculated to determine the level of agreement between raters. Kappa coefficients were estimated according to the methods of McDowell and Acklin (1996).…”
Section: Ratings
confidence: 99%
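The excerpt above reports inter-rater agreement with kappa coefficients. As a minimal illustrative sketch of the statistic itself (not the specific estimation procedure of McDowell and Acklin, 1996; the category labels below are invented for the example), Cohen's kappa for two raters assigning nominal codes to the same set of responses can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal codes to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's marginal code frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters coding 10 responses into categories W, D, Dd.
a = ["W", "W", "D", "D", "Dd", "W", "D", "W", "Dd", "D"]
b = ["W", "W", "D", "Dd", "Dd", "W", "D", "D", "Dd", "D"]
print(round(cohens_kappa(a, b), 3))  # → 0.697
```

Kappa corrects the raw percentage of agreement (here 80%) for the agreement the two raters would reach by chance alone, which is why it is preferred over simple percent agreement for nominal scoring categories.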
“…Proponents and critics of the CS have also debated the temporal consistency and the interrater reliability of the CS (Acklin, McDowell, Verschell, & Chan, 2000; Costello, 1999; Ganellen, 2001; Garb et al., 2001; Hibbard, 2003; Hunsley & Bailey, 1999; Lilienfeld et al., 2000; McDowell & Acklin, 1996; Meyer, 1997a, 1997b; Meyer et al., 2002; Viglione, 1999; Viglione & Hilsenroth, 2001; Weiner, 2001; Wood, Lilienfeld, Nezworski, & Garb, 2001; Wood, Nezworski, & Stejskal, 1996a, 1996b, 1997). In research, the reliability of the CS variables is now mostly calculated by using the intraclass correlation coefficient (ICC), as recommended by, among others, Acklin et al. (2000).…”
Section: The Rorschach Controversy
confidence: 99%
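The citation above notes that CS reliability is now mostly reported as an intraclass correlation coefficient (ICC). ICCs come in several variants; the one-way random-effects ICC(1,1) sketched below is one common form, shown only for illustration — the cited studies may have used a different variant, and the rating matrix is invented:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n-targets x k-raters matrix."""
    n = len(ratings)      # targets (e.g., protocols scored)
    k = len(ratings[0])   # raters per target
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-targets mean square: variability of target means around the grand mean.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-target mean square: disagreement among raters on the same target.
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, row_means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical example: 5 protocols each scored by 2 raters.
scores = [[9, 10], [6, 7], [8, 8], [2, 3], [7, 6]]
print(round(icc_oneway(scores), 3))  # → 0.943
```

The ICC expresses between-target variance as a proportion of total variance, so it rewards raters who preserve the rank ordering and spacing of targets, not merely those who agree on category labels.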