Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: its values are inflated and not bounded between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
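For readers who want a concrete sense of the trend-correction idea, the sketch below illustrates one way a baseline-corrected rank correlation can be computed: a robust (Theil-Sen) trend is fit to the baseline phase, that trend is removed from the full series, and Kendall's tau is then taken between phase membership and the corrected scores. This is a minimal illustration of the general approach using SciPy, not a reproduction of the published Baseline Corrected Tau procedure (which may, for example, condition the correction on a pretest of baseline trend).

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

def baseline_corrected_tau(baseline, treatment):
    """Illustrative baseline-trend-corrected tau for a single AB series."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    y = np.concatenate([baseline, treatment])
    t = np.arange(len(y))                                  # session index
    phase = np.concatenate([np.zeros(len(baseline)),       # 0 = baseline (A)
                            np.ones(len(treatment))])      # 1 = treatment (B)

    # Robust (Theil-Sen) trend estimated from the baseline phase only
    slope, intercept, _, _ = theilslopes(baseline, t[:len(baseline)])

    # Project the baseline trend across the whole series and remove it
    corrected = y - (intercept + slope * t)

    # Rank correlation between phase membership and trend-corrected scores
    tau, p = kendalltau(phase, corrected)
    return tau, p

# Hypothetical AB series with an improving baseline
tau, p = baseline_corrected_tau([2, 3, 3, 4, 5], [7, 8, 8, 9, 10, 9])
print(f"tau = {tau:.2f}, p = {p:.3f}")
```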
Sixty million US residents live in rural areas, but health policies and interventions developed from an urban mindset often fail to address the significant barriers to health experienced by these communities. Telepsychology, or psychological services delivered at a distance via technology, is an emerging treatment modality with special implications for underserved rural areas. This study found that a sample of rural residents seeking telepsychology services (n = 94) had low health-related quality of life (HRQOL), often due to co-occurring physical and mental health diagnoses, including high rates of depression. However, a brief telepsychology treatment delivered to rural clients (n = 40) was associated with an improvement in mental health-related quality of life (d = 0.70, p < .001). These results indicate that despite the complex health needs of these underserved communities, telepsychology interventions may help offset the disparities in health service access in rural areas.
Single-case experimental methods are used across a range of educational and psychological research. Single-case data are analyzed with a variety of methods, but no statistic has demonstrated clear superiority over the others. The time-series nature of single-case designs requires special consideration of baseline trend and autocorrelation when estimating intervention effect size. However, standard correction methods are limited because they assume precise statistical estimation of trend and autocorrelation. Unlike standard correction methods, Monte Carlo simulation methods can address the poor precision of single-case effect size indices. This paper presents the rationale for a new simulation method, Interrupted Time-Series Simulation (ITSSIM). A small field test was also conducted, and ITSSIM performed similarly to sophisticated multilevel methods for single-case research. ITSSIM is accessible as a free software application that requires no prior knowledge of statistical computing or syntax. ITSSIM may be used to estimate the effect size of a single interrupted time series (AB design), and multiple ITSSIM effect size estimates may be combined via meta-analysis.

A new method of single-case data analysis, ITSSIM, uses computer simulation to test the effectiveness of school and psychological interventions at the individual subject level. ITSSIM gave results similar to sophisticated multilevel modeling methods when tested on a small example data set, suggesting ITSSIM is a viable tool for single-case research. This new method of analysis is easy for all investigators to use because the ITSSIM software is free to download and requires no previous statistical computing experience.
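The abstract does not spell out ITSSIM's algorithm, so the sketch below shows only the general Monte Carlo logic it alludes to: estimate the baseline trend and lag-1 autocorrelation, simulate many no-effect treatment phases under a null model, and compare the observed level change to that simulated distribution. The specific choices here (ordinary least-squares trend, AR(1) errors, a mean level-change statistic) are assumptions made for illustration, not the ITSSIM procedure itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_level_change_p(baseline, treatment, n_sim=5000):
    """Monte Carlo p value for the level change in a single AB series."""
    a = np.asarray(baseline, dtype=float)
    b = np.asarray(treatment, dtype=float)
    t_a = np.arange(len(a))
    t_b = np.arange(len(a), len(a) + len(b))

    # Baseline trend (ordinary least squares) and its residuals
    slope, intercept = np.polyfit(t_a, a, 1)
    resid = a - (intercept + slope * t_a)
    sigma = resid.std(ddof=1)
    rho = 0.0
    if len(a) > 2 and sigma > 0:
        rho = float(np.corrcoef(resid[:-1], resid[1:])[0, 1])

    # Observed level change: treatment mean minus the projected baseline trend
    observed = np.mean(b - (intercept + slope * t_b))

    # Simulate no-effect treatment phases with AR(1) errors matched to the baseline
    sims = np.empty(n_sim)
    for i in range(n_sim):
        e = np.empty(len(b))
        e[0] = rng.normal(0, sigma)
        for j in range(1, len(b)):
            e[j] = rho * e[j - 1] + rng.normal(0, sigma * np.sqrt(1 - rho ** 2))
        sims[i] = e.mean()
    p = np.mean(np.abs(sims) >= np.abs(observed))
    return observed, p

obs, p = mc_level_change_p([3, 4, 4, 5, 5], [7, 8, 8, 9, 9, 10])
print(f"level change = {obs:.2f}, Monte Carlo p = {p:.3f}")
```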
Single-case experimental designs allow practitioners to conduct clinical outcomes research without the large samples and substantial resources required by randomized clinical trials. Single-case designs have been used to conduct outcomes research for many decades; however, the statistical measurement of treatment effect sizes remains an unresolved issue. The percentage of nonoverlapping data (PND) is one widely used statistic for effect size measurement of single-case experimental designs. Despite its limitations, PND is useful because it is easy to calculate and interpret. However, null hypothesis significance testing (i.e., the use of p values) is not currently feasible with PND because it has an unknown sampling distribution. A method to calculate p values for PND is introduced and discussed. An online calculator and statistical computing code are also made available to single-case investigators who wish to calculate p values for their data. Calculating PND and its associated p values may provide practitioners with valuable insights about their treatment outcomes when PND is used appropriately and its statistical assumptions are not violated.
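As a worked illustration, the sketch below computes PND (the percentage of treatment-phase points exceeding the highest baseline point, assuming higher scores indicate improvement) and attaches a Monte Carlo permutation p value by shuffling phase labels. The permutation approach is one simple possibility assumed here for illustration; it is not necessarily the article's method, and like PND itself it ignores baseline trend and autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(1)

def pnd(baseline, treatment):
    """Percentage of treatment points exceeding the highest baseline point
    (assumes higher scores indicate improvement)."""
    return 100.0 * np.mean(np.asarray(treatment) > np.max(baseline))

def pnd_permutation_p(baseline, treatment, n_perm=10000):
    """Illustrative Monte Carlo permutation p value for PND: shuffle the
    phase labels and count how often a PND this large arises by chance."""
    observed = pnd(baseline, treatment)
    y = np.concatenate([baseline, treatment]).astype(float)
    n_a = len(baseline)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(y)
        if pnd(perm[:n_a], perm[n_a:]) >= observed:
            count += 1
    return observed, count / n_perm

obs, p = pnd_permutation_p([2, 3, 4, 3, 4], [5, 6, 6, 7, 8, 7])
print(f"PND = {obs:.0f}%, permutation p = {p:.4f}")
```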