Detecting changes in simulated events II: Using variations of momentary time‐sampling to measure changes in duration events (2009)
DOI: 10.1002/bin.286

Abstract: The extent to which a greater proportion of small behavior changes could be detected with momentary time-sampling (MTS) was evaluated by (a) combining various interval sizes of partial-interval recording (PIR) with 20-s, 30-s, and 1-min MTS and (b) using variable interval sizes of MTS based on means of 20 s and 1 min. For each targeted percentage, low, moderate, and high inter-response time (IRT) to event-run ratios were compared with reversal designs to determine whether sensitivity increased with eit…
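The two recording methods the abstract compares can be illustrated with a minimal simulation. This sketch is not the study's procedure: the session length, interval size, and event runs below are made-up values chosen so the bias of PIR relative to MTS is visible. MTS scores behavior only at the instant each interval ends; PIR scores an interval if behavior occurs at any point within it, which tends to overestimate duration events.

```python
def simulate_stream(duration_s, event_runs):
    """Per-second record of behavior: 1 = occurring, 0 = not.
    event_runs is a list of (start_s, length_s) tuples."""
    stream = [0] * duration_s
    for start, length in event_runs:
        for t in range(start, min(start + length, duration_s)):
            stream[t] = 1
    return stream

def mts_percent(stream, interval_s):
    """Momentary time-sampling: score only the instant at each interval's end."""
    samples = [stream[t] for t in range(interval_s - 1, len(stream), interval_s)]
    return 100 * sum(samples) / len(samples)

def pir_percent(stream, interval_s):
    """Partial-interval recording: score an interval if behavior occurs anywhere in it."""
    intervals = [stream[i:i + interval_s] for i in range(0, len(stream), interval_s)]
    return 100 * sum(1 for iv in intervals if any(iv)) / len(intervals)

# A hypothetical 10-min session: three 1-min event runs, so behavior
# truly occupies 30% of the session.
stream = simulate_stream(600, [(0, 60), (200, 60), (400, 60)])
print(100 * sum(stream) / len(stream))  # true percentage: 30.0
print(mts_percent(stream, 30))          # 30-s MTS estimate: 30.0
print(pir_percent(stream, 30))          # 30-s PIR estimate: 40.0
```

Here 30-s MTS happens to recover the true percentage exactly, while 30-s PIR inflates it, because any interval that merely touches an event run is scored in full.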

Cited by 10 publications (24 citation statements); references 4 publications.
“…They examined the line graphs generated by these alternative measurement systems and found that, although there was additional error represented in their analysis, the overall interpretations of figures by an expert panel remained unchanged relative to those collected using continuous recording in the majority of cases. Thus, from a pragmatic perspective, error is only problematic to the extent to which it changes interpretation of the data (for similar evaluations, see Carroll, Rapp, Colby-Dirksen, & Lindenberg, 2009; Devine, Rapp, Testa, Henrickson, & Schnerch, 2011; Rapp et al., 2007).…”
Section: Discussion
confidence: 99%
“…Because of the labor-intensive nature of conducting this analysis, we opted to only evaluate false positives for interval sizes of PIR and MTS that detected 80% or more of changes in duration events or frequency events in study 1. Carroll et al (2009) noted that there are currently no guidelines for determining an acceptable level of false positives with interval methods of data collection. Given the absence of formal guidelines in the literature, we arbitrarily designated methods that produced 20% or fewer false negatives and 33% or fewer false positives as being 'sensitive' to behavior change.…”
Section: Study 2: Evaluating False Positives With Duration Events And…
confidence: 99%
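The quoted sensitivity criterion is simple to operationalize. The sketch below encodes exactly the thresholds stated above (≤20% false negatives, ≤33% false positives); the three method names and their error rates are hypothetical values for illustration, not results from the study.

```python
def is_sensitive(false_neg_pct, false_pos_pct):
    """Apply the criterion quoted above: a method is 'sensitive' to behavior
    change if it produced 20% or fewer false negatives and 33% or fewer
    false positives."""
    return false_neg_pct <= 20 and false_pos_pct <= 33

# Hypothetical error rates (illustrative only):
for name, fn, fp in [("20-s MTS", 12.0, 25.0),
                     ("1-min MTS", 18.0, 30.0),
                     ("10-s PIR", 15.0, 45.0)]:
    print(name, is_sensitive(fn, fp))
# 20-s MTS True
# 1-min MTS True
# 10-s PIR False
```

Note that the criterion is asymmetric: it tolerates more false positives than false negatives, reflecting the authors' acknowledgment that the cutoffs were designated arbitrarily in the absence of formal guidelines.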
“…By contrast, statistical procedures are routinely used to determine the probability of obtaining false positives in group designs. Some recent studies have focused on the probability of obtaining false positives and false negatives with interval recording (e.g., partial-interval recording [PIR]) methods in single-subject designs (Carroll, Rapp, Colby-Dirksen, & Lindenberg, 2009; Meany-Daboul, Roscoe, Bourret, & Ahearn, 2007; Rapp, Colby-Dirksen, Michalski, Carroll, & Lindenberg, 2008; Rapp et al., 2007); however, none has evaluated false positives for continuous measures of behavior. In addition, although some studies have focused on increasing the objectiveness of visual analysis of data depicted within AB (e.g., Fisher, Kelley, & Lomas, 2003; Stewart, Carr, Brandt, & McHenry, 2007) and ABAB designs (Kahng et al., 2010), relatively few studies have focused on increasing the objectiveness of data depicted within multielement designs.…”
confidence: 99%
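The idea of estimating false-positive probabilities for an interval method can be sketched with a Monte Carlo comparison. Everything below is an assumed, simplified setup: each second of behavior is an independent Bernoulli draw (the cited studies used structured IRT/event-run streams instead), and a "detected change" is arbitrarily defined as two same-process phases whose MTS estimates differ by at least 10 percentage points.

```python
import random

def mts_percent(stream, interval_s):
    """Momentary time-sampling: score only the instant at each interval's end."""
    samples = [stream[t] for t in range(interval_s - 1, len(stream), interval_s)]
    return 100 * sum(samples) / len(samples)

def random_session(duration_s, p_occur, rng):
    # Simplified generator: each second is an independent Bernoulli draw.
    return [1 if rng.random() < p_occur else 0 for _ in range(duration_s)]

def false_positive_rate(trials=2000, duration_s=600, p=0.3,
                        interval_s=30, criterion_pts=10):
    """Fraction of A-vs-B comparisons showing an apparent change of at least
    `criterion_pts` percentage points even though both phases come from the
    same process, i.e. no true change occurred."""
    rng = random.Random(0)  # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        a = mts_percent(random_session(duration_s, p, rng), interval_s)
        b = mts_percent(random_session(duration_s, p, rng), interval_s)
        if abs(a - b) >= criterion_pts:
            hits += 1
    return hits / trials

print(false_positive_rate())  # substantial: only 20 samples per phase
```

With only twenty 30-s samples per 10-min phase, the estimates are noisy enough that a large share of no-change comparisons cross the 10-point criterion, which is the kind of risk such analyses quantify.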