Purpose. The inter‐rater reliability of criteria‐based content analysis (CBCA) – the main component of statement validity assessment (SVA) – was investigated in a mock‐crime study. The study also examined the adequacy of various statistical indices of inter‐rater reliability. Furthermore, CBCA's effectiveness in distinguishing between true and false statements was analysed.
Methods. Three raters were trained in CBCA. Subsequently, they analysed transcripts of 102 statements referring to a simulated theft of money. Some of the statements were based on actual experience and some were fabricated. The raters used 4‐point scales to judge the degree to which 18 of the 19 CBCA criteria were fulfilled in each statement.
Results. The analysis of rater judgment distributions revealed that, because the judgments of individual raters varied only slightly across transcripts, the weighted kappa coefficient, the product‐moment correlation, and the intra‐class correlation were inadequate indices of reliability: with such restricted variance, these variance‐based measures are attenuated even when raters agree closely. The Finn coefficient and percentage agreement, which are independent of the rater judgment distributions, were therefore calculated and were sufficiently high for 17 of the 18 assessed criteria. CBCA differentiated significantly between truthful and fabricated accounts.
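To make the distribution‐independent indices concrete, the following is a minimal sketch of how the Finn coefficient and pairwise percentage agreement might be computed from a matrix of rater judgments. The function names and the toy data are hypothetical (the paper publishes no code); the sketch assumes the standard definition of the Finn coefficient as one minus the ratio of the observed within‐item rating variance to the variance expected under uniform random use of a k‐point scale.

```python
import numpy as np

def finn_coefficient(ratings, scale_points=4):
    """Finn's r: 1 minus the ratio of the observed within-item rating
    variance to the variance expected under uniform random ratings on a
    k-point scale. `ratings` is an (n_items, n_raters) integer array."""
    ratings = np.asarray(ratings, dtype=float)
    # Mean within-item variance across raters (sample variance, ddof=1)
    observed = ratings.var(axis=1, ddof=1).mean()
    # Variance of a discrete uniform distribution over k scale points
    expected = (scale_points ** 2 - 1) / 12
    return 1 - observed / expected

def percentage_agreement(ratings):
    """Share of rater pairs giving identical judgments, averaged over items."""
    ratings = np.asarray(ratings)
    n_items, n_raters = ratings.shape
    pairs = [(i, j) for i in range(n_raters) for j in range(i + 1, n_raters)]
    return np.mean([np.mean(ratings[:, i] == ratings[:, j]) for i, j in pairs])

# Hypothetical example: 3 raters, 5 transcripts, 4-point scale (1-4)
ratings = [[1, 1, 2], [1, 1, 1], [2, 2, 2], [1, 2, 1], [1, 1, 1]]
print(finn_coefficient(ratings))    # approx 0.89; near 1 when raters rarely diverge
print(percentage_agreement(ratings))  # approx 0.73
```

Because the expected variance in the denominator does not depend on how raters actually distribute their judgments, the Finn coefficient remains interpretable when ratings cluster tightly on a few scale points, which is exactly the situation in which kappa and correlation-based indices break down.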
Conclusions. The inter‐rater reliability of CBCA achieved in the present study was satisfactory, both in absolute terms and in comparison with other empirical findings. This suggests that CBCA can be used in the mock‐crime paradigm with a sufficient degree of reliability.