2012
DOI: 10.1002/j.2333-8504.2012.tb02294.x
Does Preequating Work? An Investigation Into a Preequated Testlet‐based College Placement Exam Using Postadministration Data

Abstract: In this study, we investigated whether preequating results agree with equating results that are based on observed operational data (postequating) for a college placement program. Specifically, we examined the degree to which item response theory (IRT) true score preequating results agreed with those from IRT true score postequating and from observed score equating. Three academic subjects were examined in this study: analyzing and interpreting literature, American government, and college algebra. The findings …
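The IRT true-score preequating the abstract describes can be illustrated with a minimal sketch. Under the Rasch model (which this program uses), a form's true score at ability θ is the sum of the item response probabilities; preequating maps each raw score on the new form to the θ that produces it, then to the expected score on the reference form. The function names, the bisection-based inversion, and the example difficulties below are illustrative assumptions, not details taken from the report.

```python
import math


def rasch_true_score(theta, difficulties):
    """Expected (true) raw score at ability theta under the Rasch model."""
    return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties)


def invert_true_score(score, difficulties, lo=-6.0, hi=6.0, tol=1e-8):
    """Find theta with rasch_true_score(theta) == score by bisection.

    Works because the true-score function is strictly increasing in theta.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if rasch_true_score(mid, difficulties) < score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0


def preequate(new_difficulties, ref_difficulties):
    """Preequating table: raw score on the new form -> equated score
    on the reference form. Item difficulties are assumed to be on a
    common scale already (e.g., via anchor-item linking). Scores of
    0 and all-correct are skipped since they map to theta = +/-inf.
    """
    n = len(new_difficulties)
    table = {}
    for s in range(1, n):
        theta = invert_true_score(s, new_difficulties)
        table[s] = rasch_true_score(theta, ref_difficulties)
    return table


# Hypothetical 4-item forms: the new form is uniformly harder, so each
# raw score should equate to a higher score on the reference form.
table = preequate([0.5, 1.0, 0.0, 1.5], [0.0, 0.5, -0.5, 1.0])
```

Because no response data from the new form are needed, this table can be produced before administration; postequating would instead estimate the link from observed operational data, which is exactly the agreement the report investigates.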

Cited by 5 publications (5 citation statements, 2014–2024)
References 10 publications
“…Despite these mixed results, IRT preequating continues to be one of the most sought‐after methods for preequating. For example, large‐scale testing programs such as the College‐Level Examination Program® (CLEP®) or Graduate Record Examinations® (GRE®) use IRT to preequate new test forms (Gao, He, & Ruan, 2012).…”
mentioning
confidence: 99%
“…Potential factors contributing to its uneven performance include context effects, test speededness, item position effects, and multidimensionality. Even so, IRT-based item preequating is still in demand in large-scale testing programs (Zu & Puhan, 2014; Gao et al., 2012). EICC Preequating.…”
Section: Previous Research on Item Preequating: IRT and EICC
mentioning
confidence: 99%
“…For instance, Bejar and Wingersky (1982) assessed the feasibility of IRT preequating based on the Test of Standard Written English (TSWE); Eignor and colleagues conducted a series of studies on IRT preequating based on data collected from the SAT (Eignor, 1985; Eignor & Cook, 1983; Eignor & Stocking, 1986; Stocking & Eignor, 1986); and Kolen and Harris (1990) experimented with item preequating in the American College Test’s (ACT) Mathematics Assessment Test. The most recent research on the feasibility of IRT preequating in the context of large-scale testing programs is seen in Domaleski (2006); Tong, Wu, and Xu (2008); and Gao, He, and Ruan (2012). The studies show that IRT-based preequating achieves mixed results.…”
Section: A Further Review of Item Preequating
mentioning
confidence: 99%
“…While the SAT rejected the use of preequating on the basis of this research, during the 1990s, other testing programs moved to test administration and scoring designs, such as computer-adaptive testing, that relied on even more restrictive invariance assumptions than those that did not hold in the SAT studies. Gao et al (2012) investigated whether IRT true-score preequating results based on a Rasch model agreed with equating results based on observed operational data (postequating) for CLEP. The findings varied from subject to subject.…”
Section: Item Response Theory Preequating Research
mentioning
confidence: 99%