1999
DOI: 10.1177/01466219922031329
Using Response-Time Constraints to Control for Differential Speededness in Computerized Adaptive Testing

Abstract: An item-selection algorithm is proposed for neutralizing the differential effects of time limits on computerized adaptive test scores. The method is based on a statistical model for distributions of examinees’ response times on items in a bank that is updated each time an item is administered. Predictions from the model are used as constraints in a 0-1 linear programming model for constrained adaptive testing that maximizes the accuracy of the trait estimator. The method is demonstrated empirically using an it…
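The selection idea in the abstract, maximizing information subject to response-time constraints, can be illustrated with a minimal sketch. This is not the paper's 0-1 linear programming formulation; it substitutes a simple greedy rule that picks the most Fisher-informative 2PL item whose model-predicted response time fits the remaining time budget. The item bank, its parameter values, and the per-remaining-item budget rule are all illustrative assumptions.

```python
import math

# Hypothetical item bank: 2PL parameters (a, b) plus a predicted response
# time (seconds) from the bank's response-time model. Values are made up
# for illustration, not taken from the paper.
ITEMS = [
    {"id": 1, "a": 1.2, "b": -0.5, "pred_time": 45.0},
    {"id": 2, "a": 0.9, "b": 0.3, "pred_time": 80.0},
    {"id": 3, "a": 1.5, "b": 0.1, "pred_time": 120.0},
    {"id": 4, "a": 1.1, "b": 0.0, "pred_time": 30.0},
]

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta_hat, items, time_left, items_left):
    """Greedy stand-in for the 0-1 LP: choose the most informative item
    whose predicted time fits the per-remaining-item time budget."""
    budget = time_left / max(items_left, 1)
    feasible = [it for it in items if it["pred_time"] <= budget]
    pool = feasible or items  # relax the constraint if nothing fits
    return max(pool, key=lambda it: fisher_information(theta_hat, it["a"], it["b"]))

# With 200 s left for 4 items (50 s each), the slow but informative item 3
# is excluded and the fast, informative item 1 is chosen instead.
best = select_item(theta_hat=0.0, items=ITEMS, time_left=200.0, items_left=4)
print(best["id"])  # → 1
```

With a generous budget the same rule would pick item 3, so the time constraint is what drives the differential selection the paper targets.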

Cited by 103 publications (113 citation statements); references 15 publications.
“…These include assessing examinee motivation levels, particularly for low-stakes tests [132-134], evaluating strategy use, for example by differences in response times for subgroups employing different problem-solving strategies [130,135], or even the same individual employing different strategies at different points in the test [136], and evaluating cultural differences in pacing and time management during test taking [137]. Other applications include detecting cheating [97], assembling parallel forms [138], and item selection in adaptive testing [139,140], which may be particularly important for ensuring score comparability across sets of items that might be similar in difficulty but differ in their time intensity [142,143]. Finally, although the applications here focused on response time for cognitive tests, another application only briefly mentioned here (Section 6.4) is to model response time on personality and attitude assessments [31,144,145].…”
Section: Other Uses Of Response Timementioning
confidence: 99%
“…However, previous results at the between-person level are mixed, possibly depending on the analysis method and task content as well. In fact, positive RTACs were found for reasoning [6,10] and problem solving [1], null RTACs for arithmetic tasks [11], and negative RTACs for basic computer-operation skills [12] and reading tasks [1].…”
Section: Item Response Time and Item Successmentioning
confidence: 99%
“…The discrimination parameters with respect to speed (i.e., α) were generated from the U(1, 3) distribution, whereas the time-intensity parameters were generated in two different ways: (i) assuming no correlation with the IRT b parameters, the β parameters were generated from the U(3, 5) distribution (van der Linden, 2008), and (ii) assuming a 0.65 correlation with the IRT b parameters (van der Linden et al., 1999), the β parameters were generated by sampling a separate value from the conditional distribution given b; that is, N(μ_β + ρ_bβ (σ_β/σ_b)(b − μ_b), σ_β²(1 − ρ_bβ²)). Five hundred examinees were simulated at each of 25 evenly spaced true ability values from −3.0 to +3.0 with an increment of 0.25. Similar to the time-intensity parameters, examinees’ speed parameters were also generated in two different ways: (i) assuming no correlation with θ, the τ parameters were generated from the N(0, 0.24²) distribution, and (ii) assuming a 0.59 correlation with θ (van der Linden, 1999), the τ parameters were generated by sampling a separate value from the conditional distribution given θ; that is, N(μ_τ + ρ_θτ (σ_τ/σ_θ)(θ − μ_θ), σ_τ²(1 − ρ_θτ²)). For approach (ii), μ_τ = 0 and σ_τ² = 0.24² were used to give the same mean and variance of the τ parameters as in approach (i).…”
Section: Designmentioning
confidence: 99%
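The examinee-generation step in the design excerpt above can be sketched directly; the conditional draw uses the standard bivariate-normal conditional distribution that the excerpt applies. The correlation, mean, and variance values come from the excerpt; the assumption that θ has mean 0 and standard deviation 1 is mine, added to make the sketch self-contained.

```python
import math
import random

random.seed(0)

# Values from the design excerpt (approach (ii) for speed parameters).
RHO_THETA_TAU = 0.59        # correlation between ability theta and speed tau
MU_TAU, SIGMA_TAU = 0.0, 0.24
MU_THETA, SIGMA_THETA = 0.0, 1.0  # assumed: theta ~ N(0, 1)

def draw_tau_uncorrelated():
    """Approach (i): tau drawn independently of theta, from N(0, 0.24^2)."""
    return random.gauss(MU_TAU, SIGMA_TAU)

def draw_tau_given_theta(theta):
    """Approach (ii): tau | theta from the conditional normal
    N(mu_tau + rho*(s_tau/s_theta)*(theta - mu_theta), s_tau^2*(1 - rho^2))."""
    mean = MU_TAU + RHO_THETA_TAU * (SIGMA_TAU / SIGMA_THETA) * (theta - MU_THETA)
    sd = SIGMA_TAU * math.sqrt(1.0 - RHO_THETA_TAU ** 2)
    return random.gauss(mean, sd)

# 500 examinees at each of 25 true ability values from -3.0 to +3.0 (step 0.25).
thetas = [-3.0 + 0.25 * k for k in range(25)]
taus = [draw_tau_given_theta(t) for t in thetas for _ in range(500)]
print(len(taus))  # → 12500
```

Because the conditional variance is deflated by (1 − ρ²) while the conditional mean shifts with θ, the marginal mean and variance of τ under approach (ii) match approach (i), which is exactly the equating the excerpt describes.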