1981
DOI: 10.1111/j.1745-3984.1981.tb00841.x
The Role of Instructional Sensitivity in the Empirical Review of Criterion‐referenced Test Items

Cited by 36 publications (34 citation statements)
References 18 publications
“…To examine instructional sensitivity of the instrument, we applied the paired-samples t-test at α = 0.05 to Rasch ability estimates of students who took the temporal-magnitude instrument before and after the cosmic evolution course. Instructional sensitivity in this study is defined as “the tendency for an item to vary in difficulty as a function of instruction” (Haladyna & Roid, 1981, p. 40). If the instrument was sensitive to the cosmic evolution course that targeted scientific changes with extremely long durations, students would improve their temporal magnitude recognition ability after the cosmic evolution course mainly by improving their performance on the extremely long duration items.…”
Section: Methods (supporting)
confidence: 88%
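The paired-samples t-test on pre- and post-instruction Rasch ability estimates described in the excerpt above can be sketched as follows. The ability values are illustrative placeholders, not the study's data, and the critical value is hard-coded for 7 degrees of freedom:

```python
import math

# Hypothetical pre/post Rasch ability estimates (logits) for 8 students.
# These values are illustrative only, not the study's actual estimates.
pre_theta  = [-0.8, -0.3, 0.1, -0.5, 0.0, -0.9, 0.2, -0.4]
post_theta = [ 0.1,  0.4, 0.6,  0.2, 0.5, -0.1, 0.9,  0.3]

# Paired-samples t statistic on the per-student gains
diffs = [post - pre for pre, post in zip(pre_theta, post_theta)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
t_stat = mean_d / math.sqrt(var_d / n)

# Two-tailed critical value for alpha = 0.05 with n - 1 = 7 df
t_crit = 2.365
print(f"t = {t_stat:.2f}; " +
      ("significant gain" if abs(t_stat) > t_crit else "no significant gain"))
```

A significant positive mean gain is consistent with the instrument being sensitive to the instruction, though in practice one would also examine item-level difficulty shifts, as the excerpt notes for the long-duration items.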
“…Polikoff's (2010) review of instructional-sensitivity research notes pervasive differences among items' operating characteristics before and after instruction (e.g., see Muthén, Kao, & Burstein, 1991; Tatsuoka, Linn, Tatsuoka, & Yamamoto, 1988; Wilson, 1989). He also reaffirms Haladyna and Roid's (1981) finding that "sensitivity indices were not systematically related to the difficulty or discrimination indices" (Polikoff, 2010, p. 7), implying the need for a model more complex than one that is adequate for a single time point. Lord (1976) describes the relationship between instructional sensitivity and dimensionality as follows:…”
Section: Considerations for the IRT Model (mentioning)
confidence: 97%
“…Most methods required item difficulty estimates before and after students had received instruction, or difficulty estimates for equivalent groups of students who had or had not received instruction. Several empirical methods to compare pre- and post-item difficulties or item difficulties for the two groups have been proposed, including a simple item p value comparison procedure (Cox & Vargas, 1966), a phi-coefficient analysis (Popham, 1971), a t test of item response theory (IRT) item calibration differences (Wright & Stone, 1979), and Bayesian methods (see Haladyna & Roid, 1981, for a review of those methods).…”
Section: Background and Objectives: The Evolving Concept of Instructio… (mentioning)
confidence: 99%
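The simplest method listed in the excerpt above, the Cox and Vargas (1966) item p value comparison, amounts to a pretest-posttest difference in proportion correct for each item. The 0/1 response matrices below are illustrative only, not data from any cited study:

```python
# Rows = students, columns = items; 1 = correct, 0 = incorrect.
# Illustrative data only.
pre_responses = [
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
]
post_responses = [
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
]

def p_values(responses):
    """Proportion correct per item (the classical item p value)."""
    n = len(responses)
    return [sum(col) / n for col in zip(*responses)]

# Sensitivity index per item: post-instruction p minus pre-instruction p
ppdi = [post - pre
        for pre, post in zip(p_values(pre_responses),
                             p_values(post_responses))]
print(ppdi)  # items with larger positive values responded more to instruction
```

The other methods the excerpt names (phi coefficients, t tests on IRT calibrations, Bayesian approaches) refine this same pre/post comparison with more defensible statistical machinery.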