In response to research demonstrating limitations in Rorschach validity and reliability, Meyer, Viglione, Mihura, Erard, and Erdberg (2011) developed a new Rorschach system, the Rorschach Performance Assessment System (R-PAS). Based on the available research findings, this system attempts to ground the Rorschach in its evidence base, improve its normative foundation, integrate international findings, reduce examiner variability, and increase utility. Because this Rorschach system is new, no reliability studies have yet been published. The present study sought to establish inter-rater reliability for the new R-PAS. Fifty Rorschach records were randomly selected from ongoing research projects using R-Optimized administration.

Early criticisms (Wood, 1996) were based on the argument that CS inter-rater reliability was determined using percent agreement without correcting for chance. However, as noted, inter-rater reliability has been demonstrated with chance-corrected statistics, including the intraclass correlation coefficient (ICC), kappa, and iota coefficients, which are the most appropriate and precise statistical methods for this purpose because they adjust observed agreement for the level of agreement expected by chance; these findings show the early criticisms to be unfounded. Indeed, inter-rater reliability for the majority of Rorschach scores compares favorably with other published meta-analyses of inter-rater reliability in psychology, psychiatry, and medicine (Meyer, 2004). Given the wide variety of scores, scales, research projects, and systems for which good reliability has been demonstrated, one must conclude that well-trained coders should achieve acceptable, good, and often excellent inter-rater reliability across the great variety of Rorschach scores.

As demonstrated by Weiner (2003), the Rorschach can be considered a method of generating data relevant to personality and information processing. From this perspective, the various scores and scoring methods systematize the data produced during Rorschach administration, thereby constituting the Rorschach as a test. As the reliability data reviewed above show, this test has yielded consistently strong inter-rater reliability across different systems, scores, countries, and languages. R-PAS includes many variables that were also used in the CS, clarifies and specifies their coding instructions, and modifies a few (e.g., Sex content) to be more consistent with their interpretation. Thus, the research findings reported above suggest that inter-rater reliability for these R-PAS variables should be strong.

R-PAS also includes variables not used in the CS (