2009
DOI: 10.1037/a0015248

The impact of item wording and behavioral specificity on the accuracy of direct behavior ratings (DBRs).

Abstract: Direct behavior ratings (DBRs) combine aspects of both systematic direct observation and behavior rating scales to create a feasible method for social behavior assessment within a problem-solving model. The purpose of the current study was to examine whether the accuracy of DBRs was affected by the behaviors selected to be rated. Specifically, the impact of target behavior wording (positive vs. negative) and the degree of specificity with which the behaviors were defined were investigated. Participant…

Cited by 36 publications (36 citation statements). References 25 publications.
“…Third, the wording of items on the DBR may have resulted in greater error. The behavior rated in the current study was specific (i.e., out‐of‐seat behavior), whereas previous research has found ratings of disruptive behavior to be more accurate when worded globally (e.g., disruptive behavior; Riley‐Tillman, Chafouleas, Christ, Briesch, & LeBel, 2009). Future researchers should consider evaluating the effects of completion latency when items are worded globally, as this may result in increased accuracy in ratings.…”
Section: Discussion (contrasting)
confidence: 76%
“…Given that the purpose of the study was to examine the relative accuracy of different DBR completion latencies, the primary dependent variable used in the analyses was a measure of rating accuracy. Difference scores have been used in psychological research to demonstrate discrepancies between observed scores and a criterion (Mullins & Force, ), and have recently been used within the context of school‐based behavior assessment to examine the accuracy of teacher‐completed DBRs (Riley‐Tillman, Chafouleas, Christ, Briesch, & LeBel, 2009; Schlientz, Riley‐Tillman, Briesch, Walcott, & Chafouleas, ). In this study, difference scores were calculated by subtracting each participant's DBR score from 40% (i.e., the criterion established through SDO) and using the absolute value of the difference.…”
Section: Methods (mentioning)
confidence: 99%
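To make the quoted difference-score calculation concrete, here is a minimal Python sketch, not the cited authors' code: it assumes DBR ratings expressed as percentages of intervals, takes the 40% SDO criterion from the passage above, and uses hypothetical example ratings.

```python
# Illustrative sketch of the absolute difference-score accuracy metric
# described in the quoted Methods passage. The 40% criterion comes from
# that passage; the example ratings below are hypothetical.

SDO_CRITERION = 40.0  # percentage of intervals established through SDO

def difference_score(dbr_rating: float, criterion: float = SDO_CRITERION) -> float:
    """Absolute discrepancy between one DBR rating and the SDO criterion."""
    return abs(dbr_rating - criterion)

# Hypothetical teacher-completed DBR ratings (percent scale)
ratings = [30.0, 45.0, 40.0, 55.0]
scores = [difference_score(r) for r in ratings]
print(scores)                      # [10.0, 5.0, 0.0, 15.0]
print(sum(scores) / len(scores))   # mean absolute rating error: 7.5
```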
“…A measure of perceptions of their classmates' disruptive classroom behaviors was developed from the DBR-SIS (see above). Specific behaviors that comprised the disruptive behavior category were generated from previous DBR-SIS research (Riley-Tillman et al., 2009; Christ et al., 2011), and wording of the items was simplified to be appropriate for the age of the children. The eight disruptive acts presented to children were: “gets out of his/her seat without permission,” “talks or yells about things we're not working on,” “makes sounds (like humming, laughing, whistling) that aren't allowed during class time,” “talks to other kids when we're not allowed to,” “calls out things to the teacher without permission to talk,” “does or says things that interrupt what we're doing,” “is rude or mean to the teacher,” and “plays with things at his or her desk that don't have anything to do with our work.” Children were asked to rate each of their peers on each behavior using a 3-point response scale of “never,” “sometimes,” and “a lot/always.” The scale was administered to each child individually with the help of a graduate assistant and was completed over contiguous 3-day sessions.…”
Section: Methods (mentioning)
confidence: 99%
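Purely to illustrate the structure of the peer-rating measure quoted above, the sketch below stores the eight item wordings and the 3-point response scale taken from that passage; the identifiers and the sample rating are hypothetical and not part of the cited study.

```python
# Item pool and response scale for the simplified peer-rating measure
# quoted above; variable names and the sample record are illustrative only.

DISRUPTIVE_ITEMS = [
    "gets out of his/her seat without permission",
    "talks or yells about things we're not working on",
    "makes sounds (like humming, laughing, whistling) that aren't allowed during class time",
    "talks to other kids when we're not allowed to",
    "calls out things to the teacher without permission to talk",
    "does or says things that interrupt what we're doing",
    "is rude or mean to the teacher",
    "plays with things at his or her desk that don't have anything to do with our work",
]

RESPONSE_SCALE = {0: "never", 1: "sometimes", 2: "a lot/always"}  # 3-point scale

# One hypothetical rating of a single classmate: one response per item
example_rating = {item: 0 for item in DISRUPTIVE_ITEMS}
example_rating[DISRUPTIVE_ITEMS[0]] = 2   # out-of-seat item rated "a lot/always"
print(sum(example_rating.values()))       # simple total across the eight items
```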
“…In each grade, teachers were asked to rate students' classroom behaviors using the DBR-SIS (Direct Behavior Rating—Single Item Scales; Riley-Tillman et al., 2009; Chafouleas, 2011) following four 2-h instructional sessions toward the end of the school year. At the end of each class period, teachers were asked to estimate how often each student showed disruptive behavior on a 5-point scale (1 = “never/almost never,” 3 = “sometimes,” 5 = “always/almost always”).…”
Section: Methods (mentioning)
confidence: 99%
“…Over the past few years, Chafouleas, Riley-Tillman, and colleagues have initiated a systematic evaluation of DBR-SIS that includes considerations such as behavior targets, rating procedures, scale design, and comparisons with other methods. For example, one line of investigation examined the influence of specificity (molar versus molecular) and phrasing (positive versus negative) on the accuracy of target behavior ratings (Chafouleas et al., submitted for publication; Christ, Riley-Tillman, Chafouleas, & Jaffery, 2011; Riley-Tillman, Chafouleas, Christ, Briesch, & LeBel, 2009). Generally, the results of this research suggested that target behaviors used in DBR-SIS should be worded globally rather than as specific indicators in order to enhance the reliability and accuracy of ratings.…”
Section: Assessment of Behavior Using Direct Behavior Rating (mentioning)
confidence: 99%