1996
DOI: 10.1080/00220973.1996.9943464

Determining the Efficacy of Intervention: The Use of Effect Sizes for Data Analysis in Single-Subject Research

Cited by 107 publications (75 citation statements). References 31 publications.

“…Applying inferential statistics that make assumptions about the distributional properties of the parent population is also problematic because single-subject data are inherently autocorrelated. In other words, repeated measures within the same subject are clearly not independent of one another, thus limiting the choice of appropriate statistical analyses (Kromrey & Foster-Johnson, 1996; Robey, Schultz, Crawford, & Sinner, 1999). …”
Section: Introduction (mentioning)
confidence: 99%
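To illustrate the autocorrelation concern this excerpt raises, here is a minimal Python sketch; the data values are hypothetical, and the estimator is the standard lag-1 autocorrelation coefficient, not a procedure from the cited papers:

def lag1_autocorrelation(series):
    """Estimate the lag-1 autocorrelation of a repeated-measures series."""
    n = len(series)
    m = sum(series) / n
    # Covariance of consecutive observations divided by the series variance.
    num = sum((series[t] - m) * (series[t + 1] - m) for t in range(n - 1))
    den = sum((x - m) ** 2 for x in series)
    return num / den

# Hypothetical observations from one subject; values near 0 suggest
# independence, values far from 0 signal serial dependence.
scores = [12, 13, 15, 14, 16, 18, 17, 19]
print(f"lag-1 autocorrelation: {lag1_autocorrelation(scores):.2f}")

An upward-trending series like this one yields a clearly positive coefficient, which is exactly the dependence that violates the independence assumption of conventional inferential tests.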
“…These data assumptions are common to simulation studies on N = 1 designs (see, e.g., Brossart et al., 2006; Huitema & McKean, 2007a, 2007b; Parker & Brossart, 2003). Thus, future studies may explore the performance of PNCD for ABAB designs with curvilinear trends, computing the percentage for each change in condition, as was suggested by Kromrey and Foster-Johnson (1996). Additionally, comparative studies such as the present one, which center on finding the technique that performs better, need to be complemented by precision studies in order to identify techniques that perform well, that is, yield accurate estimates of the simulated effect sizes.…”
Section: Discussion (mentioning)
confidence: 99%
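The PNCD index referenced above (percentage of nonoverlapping corrected data) removes an estimated baseline trend before computing a nonoverlap percentage. A minimal sketch under that reading follows; the differencing-based trend estimate, the function name, and the data values are illustrative assumptions rather than the cited authors' exact procedure:

def pncd(baseline, treatment):
    """Percentage of trend-corrected treatment points above the corrected baseline maximum."""
    # Estimate a linear baseline trend as the mean of first differences.
    diffs = [b - a for a, b in zip(baseline, baseline[1:])]
    trend = sum(diffs) / len(diffs)
    # Remove the trend from every observation; time indexing continues across phases.
    corr_base = [y - trend * t for t, y in enumerate(baseline)]
    corr_treat = [y - trend * (len(baseline) + t) for t, y in enumerate(treatment)]
    ceiling = max(corr_base)  # assumes improvement means higher scores
    return 100.0 * sum(1 for y in corr_treat if y > ceiling) / len(corr_treat)

baseline = [10, 11, 12, 12, 13]       # hypothetical A-phase observations
treatment = [15, 17, 16, 18, 19, 20]  # hypothetical B-phase observations
print(f"PNCD = {pncd(baseline, treatment):.1f}%")

For an ABAB design, as the excerpt suggests, the same percentage would be computed separately for each phase change and then summarized.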
“…Repeated probe data were analyzed in terms of effect sizes (ESs) [29], comparing mean scores in the four posttreatment probes to mean scores at baseline, relative to baseline SDs, as follows: ES = (Mean_posttreatment - Mean_baseline) / SD_baseline. In cases where the baseline SD was 0, a pooled ES was calculated using the following formula: d_2 = (Mean_posttreatment - Mean_baseline) / SD_pooled.…”
Section: Discussion (mentioning)
confidence: 99%
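A short sketch of the two computations just described; the probe values, the function name, and the simple n-weighted pooling of the two phase variances are assumptions for illustration, not details taken from the quoted study:

from statistics import mean, pstdev

def effect_size(baseline, posttreatment):
    """(Mean_post - Mean_base) / SD_base, falling back to a pooled SD when SD_base is 0."""
    diff = mean(posttreatment) - mean(baseline)
    sd_base = pstdev(baseline)
    if sd_base > 0:
        return diff / sd_base
    # Pooled SD across both phases; this n-weighted pooling is one common
    # variant, assumed here because the exact formula is not quoted.
    n_b, n_p = len(baseline), len(posttreatment)
    sd_pooled = ((n_b * sd_base**2 + n_p * pstdev(posttreatment)**2) / (n_b + n_p)) ** 0.5
    return diff / sd_pooled

baseline = [4, 4, 4, 4]          # zero baseline variability triggers the pooled fallback
posttreatment = [9, 11, 10, 12]  # four hypothetical posttreatment probes
print(f"ES = {effect_size(baseline, posttreatment):.2f}")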