Recently, researchers have argued that using quantitative effect sizes in single-case design (SCD) research may facilitate the identification of evidence-based practices. Indices that quantify nonoverlap are among the most common methods for quantifying treatment effects in SCD research. Tau-U represents a family of effect size indices developed to address criticisms of earlier nonoverlap measures. However, more research is necessary to determine the extent to which Tau-U successfully addresses the proposed limitations of other nonoverlap methods. This study evaluated Tau-U effect sizes derived from multiple-baseline designs in which researchers used curriculum-based measures of reading (CBM-R) to measure reading fluency. Specifically, we evaluated the distribution of the summary Tau-U statistic when applied to a large set of CBM-R data and assessed how the variability inherent in CBM-R data may influence the obtained Tau-U values. Findings suggest that the summary Tau-U statistic may be susceptible to ceiling effects. Moreover, the results provide initial evidence that error inherent in CBM-R scores may have a small but meaningful influence on the obtained effect sizes. Implications and recommendations for research and practice are discussed.
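The ceiling effect noted above follows from how nonoverlap indices are constructed. The basic building block of Tau-U is the A-versus-B pairwise comparison: every baseline observation is compared against every treatment observation, and the index is the net proportion of improving pairs. The sketch below illustrates this basic comparison only (not the full Tau-U family, which also incorporates baseline-trend corrections); the CBM-R scores are hypothetical, and higher scores are assumed to indicate improvement:

```python
def tau_ab(baseline, treatment):
    """Basic A-vs-B nonoverlap Tau: compare every baseline point with
    every treatment point; pairs where the treatment value is higher
    count as positive, lower as negative, ties as zero. Returns the
    net proportion of improving pairs, ranging from -1 to 1."""
    pos = neg = 0
    for a in baseline:
        for b in treatment:
            if b > a:
                pos += 1
            elif b < a:
                neg += 1
    n_pairs = len(baseline) * len(treatment)
    return (pos - neg) / n_pairs

# Hypothetical CBM-R scores (words read correctly per minute).
baseline = [42, 45, 44, 43]
treatment = [50, 55, 53, 58, 60]
print(tau_ab(baseline, treatment))  # 1.0 -- complete nonoverlap
```

Note that once every treatment point exceeds every baseline point, the index saturates at 1.0 regardless of how large the improvement is, which is one route by which a summary Tau-U statistic can hit a ceiling.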
Researchers and practitioners frequently use curriculum-based measures of reading (CBM-R) within single-case design (SCD) frameworks to evaluate the effects of reading interventions with individual students. Effect sizes (ESs) developed specifically for SCDs are often used as a supplement to visual analysis to gauge treatment effects. The degree to which measurement error associated with academic measures like CBM-R influences these ESs has not been fully explored. We used simulation methodology to evaluate how common magnitudes of error influenced the consistency and accuracy of outcomes from two nonparametric SCD ESs: percentage of data exceeding baseline trend and Tau-U. After accounting for other data characteristics, measurement error accounted for a statistically and practically significant amount of variance in the consistency and accuracy of outcomes from both ESs. This article suggests that the psychometric properties of academic measures are important to consider when interpreting ESs from SCDs.