Background
The lack of an effective peer-review process in predatory journals, resulting in greater ambiguity in reporting and language and in incomplete descriptions of procedures, might affect the reliability of the PEDro scale. The aim of this investigation was to compare the reliability of the PEDro scale when evaluating the methodological quality of randomized controlled trials (RCTs) published in predatory journals (PJs) and non-predatory journals (NPJs), in order to more confidently select interventions appropriate for application to practice.
Methods
A selected sample of RCTs was independently rated by two raters randomly selected from among 11 physical therapists. The reliability of each item of the PEDro scale was assessed with Cohen's kappa statistic and percent agreement, and the reliability of the total PEDro score with intraclass correlation coefficients (ICC) and the standard error of measurement (SEM). The chi-square test was used to compare rates of agreement between PJs and NPJs.
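As a minimal sketch of the agreement statistics named above (not the authors' analysis code, and using invented rating data purely for illustration), Cohen's kappa with percent agreement, a two-way random-effects ICC(2,1), and the SEM can be computed as follows:

```python
# Illustrative sketch only: the rater data below are hypothetical, not study data.
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa and percent agreement for two raters' item scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)  # observed proportion of agreement
    cats = np.union1d(r1, r2)
    # chance agreement from each rater's marginal category frequencies
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp), p_obs

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is an (n_trials, n_raters) array of total PEDro scores."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)  # between trials
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)  # between raters
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def sem(scores, icc):
    """SEM = SD * sqrt(1 - ICC), with SD taken over all observed scores."""
    return np.std(np.asarray(scores, dtype=float), ddof=1) * np.sqrt(1 - icc)

# Hypothetical example: two raters score one PEDro item (0/1) on six trials,
# and total PEDro scores (0-10) on the same six trials.
item_a = [1, 1, 0, 1, 0, 0]
item_b = [1, 0, 0, 1, 0, 1]
kappa, agreement = cohens_kappa(item_a, item_b)

totals = [[6, 7], [4, 4], [8, 7], [5, 6], [9, 9], [3, 4]]
icc = icc_2_1(totals)
print(kappa, agreement, icc, sem(totals, icc))
```

The ICC(2,1) form shown here (absolute agreement, single rater) is one common choice for inter-rater designs; the abstract does not state which ICC model the authors used.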
Results
A total of 298 RCTs were assessed (119 published in NPJs). Cronbach's alphas were .704 and .845 for trials published in PJs and NPJs, respectively. Kappa values for individual scale items ranged from .14 to .73 for PJs and from .09 to .70 for NPJs. The ICC was .537 (95% CI .425 to .634) for PJs and .729 (95% CI .632 to .803) for NPJs, with SEMs of 1.055 and 0.957, respectively. Inter-rater reliability in discriminating between studies of moderate-to-high and low quality was higher for NPJs (k = .57) than for PJs (k = .28).
Conclusions
Inter-rater reliability of the PEDro score is lower for RCTs published in PJs than for trials published in NPJs, likely due in part to ambiguous language and incomplete reporting. This may make risk of bias harder to detect when selecting interventions appropriate for application to practice or when producing secondary literature.