2019
DOI: 10.1002/jeab.501

Replicability and randomization test logic in behavior analysis

Abstract: Randomization tests are a class of nonparametric statistics that determine the significance of treatment effects. Unlike parametric statistics, randomization tests do not assume a random sample or make any of the distributional assumptions that often preclude statistical inferences about single‐case data. A feature that randomization tests share with parametric statistics, however, is the derivation of a p‐value. P‐values are notoriously misinterpreted and are partly responsible for the putative “replication …
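To make the abstract's logic concrete, here is a minimal, hypothetical sketch of a randomization test for a single-case alternating-treatments design. The session scores, the A/B alternation scheme, and the mean-difference statistic are invented for illustration and are not the article's own analysis.

```python
# Minimal randomization-test sketch for a single-case
# alternating-treatments design (hypothetical data).
import itertools
import numpy as np

# Hypothetical session scores under two randomly alternated conditions
scores = np.array([3, 7, 2, 8, 4, 9, 3, 8], dtype=float)
labels = np.array(["A", "B", "A", "B", "A", "B", "A", "B"])

def mean_difference(scores, labels):
    """Test statistic: mean(B) - mean(A)."""
    return scores[labels == "B"].mean() - scores[labels == "A"].mean()

observed = mean_difference(scores, labels)

# Reference distribution: every assignment of four A's and four B's
# to the eight sessions that the randomization scheme could have produced.
count = 0
total = 0
for b_sessions in itertools.combinations(range(len(scores)), 4):
    perm = np.array(["A"] * len(scores))
    perm[list(b_sessions)] = "B"
    total += 1
    if mean_difference(scores, perm) >= observed:
        count += 1

# One-tailed p: share of possible assignments at least as extreme as observed
p_value = count / total
print(f"observed = {observed:.2f}, p = {p_value:.3f}")
```

Because the p-value is simply the proportion of assignments the design could have produced that yield a statistic at least as extreme as the observed one, no distributional assumptions about the scores are required, which is the property the abstract contrasts with parametric statistics.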

Cited by 21 publications (13 citation statements); references 49 publications.

Citation statements:
“…We have reviewed current meta-analytic methods and advocate for their increased use in applied behavior analysis research and literature reviews. The perspective advanced herein is consistent with a larger movement toward incorporating newer quantitative tools and methods in behavior science and behavior analysis in general (Caron, 2019; Craig & Fisher, 2019; DeHart & Kaplan, 2019; Elliffe & Elliffe, 2019; Falligant et al., 2020; Franck et al., 2019; Friedel et al., 2019; Gilroy et al., 2020; Greene et al., 2017; Jacobs, 2019; Kaplan et al., 2019; Killeen, 2019; Kyonka, 2019; Lanovaz et al., 2020; Levin et al., 2019; Riley et al., 2019; Turgeon & Lanovaz, 2020; Villarreal et al., 2019; Young, 2019). To borrow a quote attributed to Galileo, mathematics is the language of science.…”
Section: Discussion (supporting)
confidence: 68%
“…The recommendations of Craig and Fisher (2019), Jacobs (2019), and Elliffe and Elliffe (2019) to consider randomization tests in SCI research represent a pathway to improving the scientific merit of research initiatives in the EAB. In this article we have emphasized a distinction between the use of randomization in a SCI's design and the use of randomization statistical tests in its analysis.…”
Section: Summary and Take-home Messages (mentioning)
confidence: 99%
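The design-versus-analysis distinction this statement draws can also be sketched in code. The following hypothetical AB-design example draws the intervention start point at random from a window of eligible sessions (the design step), and the test then enumerates exactly that randomization set (the analysis step); the data, window, and statistic are all invented for illustration.

```python
# Sketch of design randomization vs. randomization-test analysis
# in a hypothetical AB single-case design.
import numpy as np

data = np.array([4, 5, 3, 4, 9, 10, 8, 9, 10], dtype=float)
eligible_starts = range(3, 7)   # design: start drawn at random from sessions 3-6
actual_start = 4                # the start point the random draw produced

def phase_mean_diff(data, start):
    """Test statistic: mean(B phase) - mean(A phase) for a given start point."""
    return data[start:].mean() - data[:start].mean()

observed = phase_mean_diff(data, actual_start)

# Analysis mirrors the design: one statistic per start point the
# randomization could have selected.
stats = [phase_mean_diff(data, s) for s in eligible_starts]
p_value = sum(s >= observed for s in stats) / len(stats)
print(f"observed = {observed:.2f}, p = {p_value:.2f}")
```

The sketch also makes a practical constraint visible: with only four eligible start points, the smallest attainable p-value is .25, so the randomization set built into the design bounds the inference the test can support.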
“…The Campbell and Stanley (1966) construct of "external validity," or generalization of a study's findings beyond the present sample, is not a material consideration in a SCI research context, but rather would be reflected by how the participant sample or research context was selected (e.g., random or not, and from what population) and through the typical replication process (e.g., internally, with respect to the number of participants included; and externally, through the number of similarly structured studies conducted). In contrast to conventional "group" intervention research, in SCI research, replication within and between cases speaks more about the internal validity (i.e., the scientific credibility) of the study than it does about the study's external validity (see also Jacobs, 2019). At the same time, it should be acknowledged that even conventional "group" researchers must wrestle with justifications of external-validity generalizations in their studies, which are rarely if ever based on randomly selected samples and contexts (Shadish et al, 2002).…”
(mentioning)
confidence: 99%
“…This approach was selected because it is appealing for a properly designed experiment with a limited number of repeated measurements, and because it is relatively easy to explain and to understand. Furthermore, the approach stays close to the data and the data description, and makes minimal statistical assumptions, which is most compatible with the behavior analytical perspective from which the single-case experimental design tradition grew (Jacobs, 2019).…”
Section: An Example Of Unilevel Design-based Inference (mentioning)
confidence: 83%