Recent literature suggests a revival of interest in single-case methodology (e.g., the randomised n-of-1 trial is now considered Level 1 evidence for treatment decision purposes by the Oxford Centre for Evidence-Based Medicine). Consequently, tools to critically appraise single-case reports are of great importance. We report on a major revision of our method quality instrument, the Single-Case Experimental Design Scale. Three changes resulted in a radically revised instrument, now entitled the Risk of Bias in N-of-1 Trials (RoBiNT) Scale: (i) item content was revised and expanded to 15 items, (ii) two subscales were developed, for internal validity (IV; 7 items) and for external validity and interpretation (EVI; 8 items), and (iii) the scoring system was changed from a 2-point to a 3-point scale to accommodate currently accepted standards. Psychometric evaluation indicated that the RoBiNT Scale shows evidence of construct (discriminative) validity. Inter-rater reliability was excellent for both experienced and trained novice raters. Intraclass correlation coefficients for the summary scores were as follows: individual experienced raters, ICC(TotalScore) = .90, ICC(IVSubscale) = .88, ICC(EVISubscale) = .87; individual novice raters, ICC(TotalScore) = .88, ICC(IVSubscale) = .87, ICC(EVISubscale) = .93; consensus ratings between experienced and novice raters, ICC(TotalScore) = .95, ICC(IVSubscale) = .93, ICC(EVISubscale) = .93. The RoBiNT Scale thus shows sound psychometric properties and provides a comprehensive yet efficient examination of important features of single-case methodology.
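The reliability figures above are intraclass correlation coefficients. As a minimal illustrative sketch (not the authors' analysis code), the Python function below computes a two-way random-effects, single-rater, absolute-agreement ICC, i.e. ICC(2,1) in the Shrout and Fleiss taxonomy, from a targets × raters score matrix; the abstract does not specify which ICC form was used, so this form and the example data are assumptions.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, single rater, absolute agreement."""
    n, k = scores.shape                       # n rated targets, k raters
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between targets
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols                      # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # Shrout & Fleiss (1979) formula for ICC(2,1).
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: total scale scores given by 2 raters to 5 reports.
ratings = np.array([
    [24, 25],
    [18, 17],
    [12, 14],
    [27, 27],
    [ 9, 10],
], dtype=float)
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```

In practice a vetted library routine would typically be preferred; the sketch is only meant to show what the reported coefficients estimate, namely agreement on absolute scores rather than mere rank-order consistency.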
Rating scales that assess the methodological quality of clinical trials provide a means to critically appraise the literature. Scales are available to rate randomised and non-randomised controlled trials, but none assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating the methodological quality of clinical trials developed the scale and participated in the reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and the use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items that target the main sources of bias in single-case methodology, as stipulated by authorities in the field, and these were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20 of the 312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73–0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78–0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (κ = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88; 95% confidence interval 0.73–0.95). The SCED Scale thus provides a brief and valid evaluation of the methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability for both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting, and critical appraisal of single-subject designs, thereby helping to improve standards of single-case methodology.
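The item-level agreement above is reported as kappa, a chance-corrected agreement statistic. As a hedged sketch (again, not the scale developers' code), the function below computes Cohen's kappa for one dichotomously scored item rated by two raters; the item scores shown are hypothetical.

```python
import numpy as np

def cohens_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters."""
    categories = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                 # observed proportion of agreement
    # Agreement expected if the raters assigned categories independently.
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical item ratings (1 = criterion met, 0 = not met) for 10 reports.
rater_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
rater_b = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0])
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```

A kappa of 1.00 indicates perfect agreement after correcting for chance, which is why the reported item range of 0.48 to 1.00 is described as fair to excellent.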
There is substantial evidence that research studies reported in the scientific literature often do not provide enough information for readers to know exactly what was done and what was found. This problem has been addressed by the development of reporting guidelines, which tell authors what should be reported and how it should be described. Many reporting guidelines are now available for different types of research designs, but there is none for a research design commonly used in the behavioral sciences: the single-case experimental design (SCED). The present study addressed this gap. This report describes the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016, a set of 26 items that authors need to address when writing about SCED research for publication in a scientific journal. Each item is described, a rationale for its inclusion is provided, and examples of adequate reporting taken from the literature are quoted. It is recommended that the SCRIBE 2016 be used by authors preparing manuscripts describing SCED research for publication, as well as by journal reviewers and editors evaluating such manuscripts.