2016
DOI: 10.1352/1944-7558-121.3.169
Comparing Single Case Design Overlap-Based Effect Size Metrics From Studies Examining Speech Generating Device Interventions

Abstract: Meaningfully synthesizing single case experimental data from intervention studies comprised of individuals with low incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with intellectual and developmental disabilities (IDD) with moderate to profound levels of impairment. The effect size metrics included percent of data points exceeding the median (P…

Cited by 38 publications (14 citation statements) | References 113 publications
“…In addition, effect sizes (percentage of data points exceeding the median [PEM]) were calculated for each variable to provide further information regarding the effectiveness of the VSM intervention. PEM as a method of analysis was found to be reliable with visual analyses (Ma, 2006) and has a relatively small confidence interval (Chen, Hyppa-Martin, Reichle, & Symons, 2016), which are two indicators of usability for single subject research. Scruggs, Mastropieri, Cook, and Escobar (1986) suggest that an intervention is highly effective if greater than 90% of the data exceeds the median, moderately effective when 70-90% of the data points exceed the median, and mildly effective when 50-70% of the data exceeds the median.…”
Section: Results
confidence: 98%
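The PEM computation and the effectiveness bands described in the statement above reduce to a few lines of arithmetic: take the baseline-phase median, count the share of treatment-phase points exceeding it, and interpret the percentage against the Scruggs et al. (1986) bands. The sketch below is a minimal Python illustration, not code from any of the cited studies; the sample data and the handling of band boundaries (e.g., whether exactly 90% counts as "highly" or "moderately" effective) are assumptions.

```python
from statistics import median


def percent_exceeding_median(baseline, treatment):
    """PEM: percentage of treatment-phase points that exceed the baseline median.

    Assumes higher values indicate improvement (for targets expected to
    decrease, count points falling below the median instead).
    """
    baseline_median = median(baseline)
    exceeding = sum(x > baseline_median for x in treatment)
    return 100 * exceeding / len(treatment)


def interpret_pem(pem):
    """Effectiveness bands attributed to Scruggs, Mastropieri, Cook, and
    Escobar (1986); boundary handling here is a judgment call."""
    if pem > 90:
        return "highly effective"
    if pem >= 70:
        return "moderately effective"
    if pem >= 50:
        return "mildly effective"
    return "questionable or ineffective"


# Hypothetical data, for illustration only
baseline = [2, 3, 2, 4, 3]
treatment = [6, 7, 5, 8, 9, 7]
pem = percent_exceeding_median(baseline, treatment)
print(f"PEM = {pem:.0f}% ({interpret_pem(pem)})")  # PEM = 100% (highly effective)
```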
“…Researchers assessed data consistency by examining the similar conditions across participants to assess the presence of patterns (Kratochwill et al, 2010). Researchers chose the Improvement Rate Difference (IRD) measure of effect size to further describe the data due to its ability to distinguish the scale of effect between two conditions (Chen et al, 2016). IRD was calculated using the IRD calculator from Single Case Research™ (Vannest et al, 2016).…”
Section: Discussion
confidence: 99%
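IRD, as cited above, contrasts the "improvement rates" of the two phases: the proportion of treatment-phase points judged improved minus the proportion of baseline-phase points that overlap into the treatment range, after removing the fewest data points needed to eliminate overlap between phases. The following is a rough Python sketch of that idea, assuming higher values indicate improvement; tie handling and edge cases may differ from the Single Case Research calculator mentioned in the quotation, which should be preferred for published analyses.

```python
def improvement_rate_difference(baseline, treatment):
    """Rough sketch of IRD (see Parker, Vannest, & Brown, 2009, for the full method).

    Improvement rate = proportion of "improved" points in a phase after
    removing the fewest points needed to eliminate overlap between phases;
    IRD is the treatment-phase rate minus the baseline-phase rate.
    Assumes higher values indicate improvement.
    """
    n_b, n_t = len(baseline), len(treatment)
    values = baseline + treatment
    # Candidate cut points: every observed value, plus one above the maximum
    # (covers the case where all treatment points would be removed).
    cuts = sorted(set(values)) + [max(values) + 1]
    solutions = []  # (number of points removed, resulting IRD) per cut
    for cut in cuts:
        base_improved = sum(x >= cut for x in baseline)    # overlapping baseline points (removed)
        treat_improved = sum(x >= cut for x in treatment)  # non-overlapping treatment points (kept)
        removed = base_improved + (n_t - treat_improved)
        solutions.append((removed, treat_improved / n_t - base_improved / n_b))
    fewest = min(removed for removed, _ in solutions)
    # Average across minimal-removal solutions; published calculators may
    # break ties differently.
    minimal = [ird for removed, ird in solutions if removed == fewest]
    return sum(minimal) / len(minimal)
```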
“…All the included SSED studies used descriptive statistics, with visual presentation of the data. Statistical calculation is recognized to be under used in SSED studies (Smith 2012), and as there is not yet an agreed standard effect size metric for this design (Chen et al 2016), it is unsurprising that none of the included studies provided effect size data. There are, however, some quantitative approaches for analysing SSED data that can complement visual analysis and offer significance testing.…”
Section: Included Studies
confidence: 99%