Rating scales that assess the methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but there are none that assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating the methodological quality of clinical trials developed the scale and participated in reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items that target the main sources of bias in single-case methodology, as stipulated by authorities in the field; the items were then empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20 of the 312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range κ = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88, 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of the methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability using both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting and critical appraisal of single-subject designs, thereby helping to improve standards of single-case methodology.
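For readers unfamiliar with the two agreement statistics reported above, the following is a minimal sketch, in Python with invented data, of how an intraclass correlation for total scores and Cohen's kappa for item-level agreement are typically computed. The abstract does not specify which ICC model was used, so the two-way random-effects, absolute-agreement, single-rater form ICC(2,1) below is an assumption, and the helper names and example scores are illustrative, not the study's data.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array of total scale scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' binary item judgements."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = (a == b).mean()                                   # observed agreement
    p_exp = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Invented example: 20 reports scored 0-10 by two raters.
rng = np.random.default_rng(0)
true_quality = rng.integers(2, 9, size=20)
scores = np.column_stack([true_quality + rng.integers(-1, 2, size=20)
                          for _ in range(2)]).clip(0, 10)
print(f"ICC(2,1) = {icc_2_1(scores.astype(float)):.2f}")
print(f"kappa    = {cohen_kappa(scores[:, 0] > 5, scores[:, 1] > 5):.2f}")
```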
Objective: Chronic pain is a prevalent and burdensome condition. Reboot Online was developed to address the treatment barriers traditionally associated with accessing face-to-face chronic pain management programs. It is a comprehensive multidisciplinary online treatment program, based on an existing and effective face-to-face multidisciplinary pain program (the Reboot program).
Design and Participants: A CONSORT-compliant randomized controlled trial was conducted, enrolling adults who had experienced pain for three months or longer.
Methods: Participants were randomly allocated to either the eight-lesson multidisciplinary pain management program Reboot Online (N = 41) or to a usual care (UC) control group (N = 39). Clinical oversight was provided remotely by a multidisciplinary team, including physiotherapists and clinical psychologists. Participants were assessed at baseline, post-treatment (week 16), and three-month follow-up (week 28).
Results: Intention-to-treat analyses revealed that Reboot Online was significantly more effective than UC at increasing pain self-efficacy (g = 0.69) at post-treatment, and these gains were maintained at follow-up. Reboot Online was also significantly more effective than UC on several secondary measures at post-treatment and follow-up, including movement-based fear avoidance and pain-related disability, but it did not significantly reduce pain interference or depression relative to UC. Clinician input was minimal, and adherence to Reboot Online was moderate, with 61% of participants (N = 25) completing all eight lessons.
Conclusions: Reboot Online presents a novel approach to multidisciplinary pain management and offers an accessible, efficacious and viable treatment option for chronic pain management.
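The between-group effect size reported above is a g statistic, conventionally Hedges' g, i.e. Cohen's d with a small-sample bias correction. As a reading aid, here is a minimal sketch of how g is computed from two independent groups; the group sizes match the trial's arms, but the scores themselves are invented, not the trial's data.

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: standardized mean difference with small-sample correction."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    # Pooled standard deviation across the two groups.
    sd_pooled = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (x.mean() - y.mean()) / sd_pooled        # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)          # bias-correction factor
    return j * d

# Invented post-treatment pain self-efficacy scores for the two arms.
rng = np.random.default_rng(1)
reboot = rng.normal(42, 10, size=41)   # hypothetical Reboot Online arm
usual = rng.normal(35, 10, size=39)    # hypothetical usual-care arm
print(f"g = {hedges_g(reboot, usual):.2f}")
```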
In the context of evidence-based clinical practice (EBCP), the reliability of empirical data is largely determined by the methodological quality of the research design. PsycBITE™ (Psychological Database of Brain Impairment Treatment Efficacy) is a web-based database listing all published, empirical reports on the effectiveness of nonpharmacological interventions for the psychological consequences of acquired brain impairment (ABI). The aim of this study was to survey the listings of PsycBITE™ and examine the methodological quality of the reports it contains. Reports listed in PsycBITE™ include systematic reviews (SRs), randomised controlled trials (RCTs), non-RCTs, case series (CSs) and single-subject designs (SSDs). They are indexed according to research design, neurological group, patient age group, target area and intervention type. The PEDro Scale is used to rate the methodological quality of RCTs, non-RCTs and CSs, with maximum obtainable methodological quality ratings (MQRs) of 10/10, 8/10 and 2/10, respectively. A search identified 1298 reports indexed in PsycBITE™. The largest proportion was SSDs (39%), followed by CSs (22%), RCTs (21%), non-RCTs (11%) and SRs (7%). The majority of reports were concerned with stroke (41%), traumatic brain injury (29%) and Alzheimer's and related dementias (22%). The most frequently investigated deficits were communication/language/speech disorders (24%); independent/self-care activities (19%); behaviour problems (17%); memory impairments (17%); and anxiety, depression, stress and adjustment (15%). Approximately half of the RCTs, non-RCTs and CSs were rated for methodological quality. Mean MQR scores for RCTs, non-RCTs and CSs were 4.49, 2.85 and 1.15, respectively. While some PEDro criteria were met by a high proportion of RCTs and non-RCTs (≥ 70%), other criteria were met by only a small proportion of reports (as low as 1.6%). There was no significant difference in MQR scores between RCTs focusing on different neurological groups or target areas. Furthermore, there was no discernible improvement in MQR scores for RCTs published over the last three decades. The methodological quality of studies investigating the efficacy of rehabilitation interventions in ABI has been consistently modest over several decades. This is largely attributable to poor adherence to fundamental tenets of research design, and requires urgent remediation. RCTs (and to a lesser extent, non-RCTs) are research methodologies that can potentially yield a high level of evidence, but only if they are adequately designed. PsycBITE™ has the facility to raise awareness of these issues and be instrumental in promoting EBCP in the field of ABI.
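To make the survey's tabulations concrete, here is a minimal sketch of the two summaries the abstract reports: the proportion of listings per research design, and the mean MQR for the designs the PEDro Scale covers. The miniature table below is invented for illustration; in the actual survey, SRs and SSDs are not rated with the PEDro Scale, which is modelled here as missing MQR values.

```python
import pandas as pd

# Invented miniature of a PsycBITE-style index: one row per report.
reports = pd.DataFrame({
    "design": ["RCT", "RCT", "non-RCT", "CS", "SSD", "SR", "RCT"],
    "mqr":    [5,     4,     3,         1,    None,  None, 5],  # SSDs/SRs unrated
})

# Proportion of listings per research design, as in the survey above.
print(reports["design"].value_counts(normalize=True).round(2))

# Mean methodological quality rating for the rated designs only.
print(reports.dropna(subset=["mqr"]).groupby("design")["mqr"].mean().round(2))
```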