There is increasing market demand to reduce score reporting time. For example, in response to a longstanding desire of test takers, the Praxis® teacher certification program reduced the time taken to report scores on multiple-choice tests from 22 to 17 business days (ETS News, 2009). Reducing score reporting time is considered part of a testing program's effort to enhance service to its customers. Preequating, a process for obtaining the raw-to-scale score conversion of a form before it is administered intact (Kolen & Brennan, 2004, p. 205), is one way to reduce score reporting time. To make preequating possible, all operational items in the new form must have been previously administered. Equating is then conducted on the pretest data to obtain the conversion for the new form. Two factors are important for successful preequating: one is an appropriate equating method; the other is that the statistics relevant for equating are invariant across the pretest and operational samples.

Item response theory (IRT) has been essential to preequating. Eignor (1985) and Eignor and Stocking (1986) examined the feasibility of using IRT-calibrated item pools to preequate new SAT forms. They concluded that preequating was inadequate for the SAT Mathematics test and attributed this to factors such as multidimensionality and differences between examinee groups. Bejar and Wingersky (1982), who examined the feasibility of preequating the Test of Standard Written English (TSWE), concluded that preequating did not appear to present problems beyond those inherent to IRT equating. Despite these mixed results, IRT preequating continues to be one of the most sought-after approaches to preequating.
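To make the role of the calibrated item pool concrete, consider a sketch of IRT true-score preequating under the three-parameter logistic model, in the spirit of Kolen and Brennan (2004); the notation here is illustrative and is not drawn from the studies cited above. Because pretest calibrations supply parameters $(a_j, b_j, c_j)$ for every operational item in the new form, the form's test characteristic function can be computed before the form is administered:
\[
\tau_X(\theta) \;=\; \sum_{j=1}^{n_X} P_j(\theta),
\qquad
P_j(\theta) \;=\; c_j + \frac{1 - c_j}{1 + \exp\!\left[-1.7\, a_j (\theta - b_j)\right]},
\]
and, for a raw score $x$ in the range of $\tau_X$, the equivalent raw score on a reference form $Y$ is
\[
\mathrm{eq}_Y(x) \;=\; \tau_Y\!\bigl(\tau_X^{-1}(x)\bigr),
\]
where $\tau_Y$ is the test characteristic function of Form $Y$. Passing $\mathrm{eq}_Y(x)$ through Form $Y$'s existing raw-to-scale conversion yields a scale score for every raw score $x$ on the new form. Because the item parameters come from pretest calibrations placed on the base scale, the entire conversion table can be produced in advance of the operational administration, which is what makes preequating possible.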