The Rorschach Performance Assessment System (R-PAS) aims to provide an evidence-based approach to the administration, coding, and interpretation of the Rorschach Inkblot Method (RIM). R-PAS codes each respondent's individualized communications to each card across a wide pool of possible variables. Because so many codes can be assigned to these responses, concordance rates among different assessors are an important consideration. This study investigated interrater reliability for R-PAS protocols. Data were analyzed from a nonpatient convenience sample of 50 participants recruited through networking, local marketing, and advertising efforts from January 2013 through October 2014. Blind recoding was used, and discrepancies between the initial and blind coders' ratings were analyzed for each variable in SPSS, yielding percent agreement and intraclass correlation values. Data for Location, Space, Contents, Synthesis, Vague, Pairs, Form Quality, Populars, Determinants, and Cognitive and Thematic codes are presented. Across 1,168 responses, rates of agreement were higher for simpler codes (e.g., Location) and lower for more complex codes (e.g., Cognitive and Thematic codes). Overall, concordance rates achieved good to excellent agreement. Results suggest that R-PAS is an effective method with high interrater reliability, supporting its empirical basis.
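The two agreement statistics named above can be sketched in code. The following is a minimal illustration with hypothetical coder data, not the study's actual SPSS procedure: exact percent agreement for categorical codes, and a two-way random-effects, single-rater intraclass correlation, ICC(2,1), computed from the standard mean-square decomposition.

```python
import numpy as np

def percent_agreement(codes_a, codes_b):
    """Proportion of responses assigned the same code by both coders."""
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: n_subjects x k_raters matrix of numeric scores.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # between raters
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: two coders, four responses (Location-style codes)
agreement = percent_agreement(["W", "D", "Dd", "W"], ["W", "D", "D", "W"])  # 0.75
# Hypothetical example: two raters scoring three subjects
icc = icc_2_1([[1, 1], [2, 2], [3, 4]])  # 0.9
```

In practice, ICC variant selection (one-way vs. two-way, agreement vs. consistency) matters; ICC(2,1) is shown here only as one common choice for two fixed coders.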
A reliability generalization was conducted on studies, published between 1987 and 2013, that reported use of the Suinn-Lew Asian Self-Identity Acculturation Scale (SL-ASIA). For inclusion in this meta-analysis, each study had to report a Cronbach's alpha reliability coefficient for its sample. Data from 83 Cronbach's alpha coefficients representing 12,992 participants were analyzed; only 67 of 193 published studies reported reliability scores for their samples. The alpha values produced by the SL-ASIA ranged from .62 to .96, with an average of .91 (SD = 0.07); all reported reliability scores for this instrument therefore fell in the acceptable to excellent range. Our results demonstrate that the SL-ASIA continues to be an instrument with strong psychometric properties when used with diverse populations, and it is therefore appropriate for continued use in studies on acculturation.
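The reliability coefficient pooled above, Cronbach's alpha, follows directly from the ratio of summed item variances to total-score variance. A minimal sketch with made-up item scores (not the SL-ASIA data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_respondents x k_items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: two perfectly covarying items give alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Note the `ddof=1` (sample variance) convention; using population variance throughout gives the same alpha, but mixing the two does not.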
The Rorschach Inkblot Method (RIM) is a performance-based personality assessment instrument used in both clinical and research settings worldwide. This investigation examined response-level interrater reliability among U.S. graduate students enrolled in the same doctoral program, divided into two distinct participant groups: 20 with previous Comprehensive System (CS) training who were subsequently trained in the Rorschach Performance Assessment System (R-PAS; initial coders N = 11, blind coders N = 9), and 19 trained solely in R-PAS (initial coders N = 10, blind coders N = 9). Trainees' coding was analyzed on a sample of 50 protocols (half per group) comprising a total of 1,189 responses. Blind coding was used among trainees, and coding in the two training groups was analyzed for each variable at the response level, expressed as percent agreement, Gwet's AC, and kappa values. Rates of agreement were similar across both groups, and most concordance rates can be characterized as excellent. Overall, these results suggest that trainees may learn to code R-PAS variables with similar reliability whether or not they have prior CS knowledge.
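The chance-corrected statistics named in this abstract can be sketched for the two-coder, categorical case. This is an illustrative implementation with hypothetical codes, assuming the AC statistic is Gwet's AC1; both statistics share the form (p_o − p_e)/(1 − p_e) and differ only in how chance agreement p_e is estimated.

```python
from collections import Counter

def _stats(codes_a, codes_b):
    """Observed agreement, category set, and marginal counts for two coders."""
    n = len(codes_a)
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    cats = sorted(set(codes_a) | set(codes_b))
    return n, po, cats, Counter(codes_a), Counter(codes_b)

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa: chance agreement from the product of coder marginals."""
    n, po, cats, pa, pb = _stats(codes_a, codes_b)
    pe = sum((pa[c] / n) * (pb[c] / n) for c in cats)
    return (po - pe) / (1 - pe)

def gwet_ac1(codes_a, codes_b):
    """Gwet's AC1: chance agreement from averaged category prevalences."""
    n, po, cats, pa, pb = _stats(codes_a, codes_b)
    q = len(cats)
    pi = {c: (pa[c] + pb[c]) / (2 * n) for c in cats}
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (po - pe) / (1 - pe)

# Hypothetical coding of four responses by two trainees
a, b = ["W", "W", "D", "D"], ["W", "W", "D", "W"]
kappa = cohen_kappa(a, b)  # 0.5
ac1 = gwet_ac1(a, b)
```

AC1 is often preferred over kappa for skewed code distributions, where kappa can be paradoxically low despite high raw agreement.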
The reliability and validity of course grades as measures of general education learning outcomes were assessed using the multitrait-multimethod (MTMM) approach proposed by Campbell and Fiske. Convergent and discriminant validity were assessed for college undergraduate students over a 12-year period. Two procedures were used to test the MTMM matrix. The first was a nested confirmatory factor analysis proposed by Widaman. The second, proposed here, was a direct method for assessing convergent and discriminant validity. The analyses were performed on two different data sets, covering discipline majors and general education categories. Convergent and discriminant validity were both supported.
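The Campbell-Fiske logic can be sketched directly from a correlation matrix: same-trait, different-method correlations (the validity diagonal) should be high and should exceed different-trait correlations. The toy data, trait/method labels, and the specific comparison below are hypothetical illustrations, not the study's proposed direct method.

```python
from itertools import combinations
import numpy as np

# Hypothetical scores: two traits (T1, T2), each measured by two
# methods (M1, M2); each list holds one score per student.
measures = {
    ("T1", "M1"): [1, 2, 3, 4, 5],
    ("T1", "M2"): [1, 2, 3, 4, 6],
    ("T2", "M1"): [5, 1, 4, 2, 3],
    ("T2", "M2"): [5, 2, 4, 1, 3],
}

labels = list(measures)
corr = np.corrcoef(np.array([measures[l] for l in labels], dtype=float))

same_trait, diff_trait = [], []
for (i, a), (j, b) in combinations(enumerate(labels), 2):
    # Monotrait-heteromethod pairs vs. heterotrait pairs
    (same_trait if a[0] == b[0] else diff_trait).append(corr[i, j])

convergent = np.mean(same_trait)                  # should be high
discriminant_ok = convergent > np.mean(diff_trait)  # Campbell-Fiske check
```

A full MTMM analysis also compares heterotrait-monomethod with heterotrait-heteromethod blocks to separate method variance; the snippet shows only the simplest convergent/discriminant contrast.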