2015
DOI: 10.1080/00273171.2014.973989
Comparing Visual and Statistical Analysis in Single-Case Studies Using Published Studies

Abstract: Little is known about the extent to which interrupted time-series analysis (ITSA) can be applied to short, single-case study designs and whether those applications produce results consistent with visual analysis (VA). This paper examines the extent to which ITSA can be applied to single-case study designs and compares the results based on two methods: ITSA and VA, using papers published in the Journal of Applied Behavior Analysis in 2010. The study was made possible by the development of software called UnGrap…

Cited by 111 publications
(98 citation statements)
References 69 publications
“…The two models fitted to the completed data did significantly impact the RBESEs of the estimated effects, but not the RBs or RMSEs. Our results from fitting a simplified model to completed data are consistent with Harrington and Velicer (2015), who reported biased estimates of the error variance if autocorrelation was not accounted for. Specifically, negative autocorrelations that were unaccounted for resulted in overestimation of the error variance, and positive autocorrelations unaccounted for resulted in underestimation of the error variance (Harrington & Velicer, 2015).…”
Section: Discussion (supporting)
confidence: 89%
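The bias described in this citation statement is the classic consequence of ignoring lag-1 autocorrelation: with positive autocorrelation the naive i.i.d. standard error of the mean is too small, and with negative autocorrelation it is too large. A minimal Monte Carlo sketch (not the authors' code; the AR(1) parameters and sample sizes are illustrative assumptions):

```python
# Monte Carlo sketch: ignoring lag-1 autocorrelation biases the
# naive (i.i.d.-formula) standard error of the mean.
# Positive rho -> naive SE too small; negative rho -> too large.
import numpy as np

def ar1_series(rho, n, rng, sigma=1.0):
    """Generate one stationary AR(1) series with innovation SD `sigma`."""
    x = np.empty(n)
    x[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))  # stationary start
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0, sigma)
    return x

def se_bias(rho, n=50, reps=2000, seed=0):
    """Compare the average naive SE of the mean to the empirical SE."""
    rng = np.random.default_rng(seed)
    means, naive_ses = [], []
    for _ in range(reps):
        x = ar1_series(rho, n, rng)
        means.append(x.mean())
        naive_ses.append(x.std(ddof=1) / np.sqrt(n))  # i.i.d. formula
    return np.mean(naive_ses), np.std(means)  # (naive SE, empirical SE)

for rho in (0.5, -0.5):
    naive, empirical = se_bias(rho)
    print(f"rho={rho:+.1f}: naive SE={naive:.3f}, empirical SE={empirical:.3f}")
```

With rho = +0.5 the naive estimate falls well below the empirical standard error of the mean, and with rho = −0.5 it falls well above it, matching the direction of bias reported above.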
“…Fourth, only positive autocorrelations were investigated, yet negative autocorrelations have been reported in empirical SCED studies (Harrington & Velicer, 2015; Parker et al., 2005; Shadish & Sullivan, 2011). For example, Shadish and Sullivan's review uncovered lag-1 autocorrelations ranging from −.93 to .79.…”
mentioning
confidence: 99%
“…However, note that we do not propose to replace visual analysis with the sole use of nonparametric CIs for ESs. We concur with the general consensus in the field of single-case research that visual and statistical analysis are complementary and in most cases should be used together to corroborate the conclusions and to increase the acceptability by the wider scientific community (e.g., Bulté & Onghena, 2012;Busk & Marascuilo, 1992;Harrington & Velicer, 2015;Tate et al, 2013).…”
Section: Introduction (supporting)
confidence: 56%
“…The main advantage of visual analysis is that aspects of SCE data such as level, trend, variability, immediacy of the effect, and overlap can be assessed in a flexible way (Horner et al., 2005; Kratochwill, Levin, Horner, & Swoboda, 2014; Lane & Gast, 2014). However, visual analysis has been criticized for its lack of established formal decision guidelines, which leaves the method vulnerable to subjectivity and inconsistency between researchers (e.g., DeProspero & Cohen, 1979; Fisch, 1998; Gibson & Ottenbacher, 1988; Harrington & Velicer, 2015; Ximenes, Manolov, Solanas, & Quera, 2009).…”
Section: Introduction (mentioning)
confidence: 99%
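One of the aspects listed in the statement above, overlap, is often quantified with simple non-overlap indices. A minimal sketch of one such index, PND (percentage of non-overlapping data), using made-up AB-design data; this is one common quantification, not necessarily the one any cited study used:

```python
# PND: the percentage of treatment-phase points that exceed the most
# extreme baseline point (here: exceed max(baseline), assuming the
# intervention is expected to increase the behavior). Data are hypothetical.
def pnd(baseline, treatment, increase_expected=True):
    if increase_expected:
        ref = max(baseline)
        hits = sum(1 for y in treatment if y > ref)
    else:
        ref = min(baseline)
        hits = sum(1 for y in treatment if y < ref)
    return 100.0 * hits / len(treatment)

print(pnd([3, 4, 3, 5, 4], [7, 8, 6, 9, 8]))  # all B points exceed max(A) -> 100.0
```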
“…The use of Cohen's benchmarks for single‐case designs has been put in doubt (Parker et al.), and Harrington and Velicer (2015) suggested referring to values between 1 and 2.5 as “medium effects.” However, they used the within‐case version and not the d‐statistic created to be comparable to that obtainable from between‐group designs (Hedges, Pustejovsky, & Shadish).…”
mentioning
confidence: 99%
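The within-case effect size contrasted above can be sketched as a standardized mean difference computed inside a single case. A minimal illustration using a pooled within-phase SD and hypothetical AB data (this is one common formulation; the exact denominator Harrington and Velicer used may differ):

```python
# Within-case standardized mean difference for an AB single-case design:
# (mean of phase B - mean of phase A) / pooled within-phase SD.
# Data are made up for illustration.
import numpy as np

def within_case_d(phase_a, phase_b):
    a = np.asarray(phase_a, dtype=float)
    b = np.asarray(phase_b, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1)
                   + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (b.mean() - a.mean()) / np.sqrt(pooled_var)

baseline = [3, 4, 3, 5, 4]    # phase A (baseline)
treatment = [7, 8, 6, 9, 8]   # phase B (intervention)
print(round(within_case_d(baseline, treatment), 2))  # prints 3.8
```

Within-case values of this size dwarf Cohen's between-group benchmarks, which is why a separate scale (e.g., 1 to 2.5 as "medium") has been proposed for single-case data.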