Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees had to guess on the final six questions of the analytical section of the Graduate Record Examination if they were to finish before time expired. At the higher-ability levels, even more guessing was required because the questions administered to higher-ability examinees were typically more time consuming. Because the scoring model is not designed to cope with extended strings of guesses, substantial errors in ability estimates can be introduced when CATs have strict time limits. Furthermore, examinees who are administered tests with a disproportionate number of time-consuming items appear to get lower scores than examinees of comparable ability who are administered tests containing items that can be answered more quickly, though the issue is very complex because of the relationship of time and difficulty, and the multidimensionality of the test.

The Graduate Record Examination General Test (GRE) is a computer-adaptive test (CAT) of verbal, quantitative, and analytical reasoning skills. Unlike some CATs, the GRE has a fixed number of questions and strict time limits on each section. According to the GRE Technical Manual (Briel, O'Neill, & Scheuneman, 1993), "GRE General and Subject Tests are not intended to be speeded" (p. 32). When the CAT version of the GRE was first introduced, "time limits were set with the intention that almost all examinees would have sufficient time to answer all items" (Schaeffer et al., 1995, p. 18). Nevertheless, fairly strict time limits were imposed in order to maintain comparability with the existing paper-and-pencil forms (and a linear computer-based test), which were somewhat speeded tests.
Research on the linear computer-administered analytical section of the GRE General Test (GRE-A) suggested that, although completion rates were reasonably high, many students had to make random guesses at the end in order to finish (Schnipke & Scrams, 1997). Because the GRE is a CAT, different examinees receive different sets of questions. The three-parameter logistic scoring model takes account of the difficulty differences in these questions. Examinees who get difficult questions are not disadvantaged relative to examinees who get easier questions, but unidimensional scoring models do not take into account differences in the amount of time it takes to respond to different questions (Hambleton & Swaminathan, 1985). A fair assessment on a speeded test would seem to require that no examinee should, by chance, receive a set of items that takes longer to answer than the items given to another examinee. Bridgeman and Cline (2000) presented evidence that some questions on the quantitative and analytical sections of the GRE CAT could be answered more quickly than others. Much of
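The three-parameter logistic (3PL) model referred to above has a standard closed form: the probability of a correct response is a lower asymptote (the guessing parameter) plus a logistic function of ability. A minimal sketch, using the conventional IRT parameter names a (discrimination), b (difficulty), and c (guessing), which are not taken from the abstracts themselves:

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: probability that an
    examinee with ability theta answers a given item correctly.

    theta -- examinee ability
    a     -- item discrimination
    b     -- item difficulty
    c     -- pseudo-guessing parameter (lower asymptote)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability sits halfway
# between the guessing floor c and 1.0.
p_at_difficulty = p_correct(0.0, 1.0, 0.0, 0.2)  # 0.6
```

Note that the model conditions only on ability and item difficulty, not on response time, which is exactly the limitation the passage raises: an examinee handed time-consuming items must guess more, and those guesses are scored as ordinary wrong (or luckily right) answers.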
Given the serious consequences of making ill-fated admissions and funding decisions for applicants to graduate and professional school, it is important to rely on sound evidence to optimize such judgments. Previous meta-analytic research has demonstrated the generalizable validity of the GRE® General Test for predicting academic achievement, but that research does not address predictive validity for specific populations and situations, nor the predictive validity of the GRE Analytical Writing section introduced in October 2002. Furthermore, much of the past GRE predictive validity research rests on approaches that are solely correlational and univariate. Stakeholders familiar with GRE predictive validity mainly in the form of zero-order correlation coefficients might automatically interpret the usefulness of the GRE through the prism of Cohen's (1988) guidelines for judging effect sizes, without regard to the larger context. By using innovative, multivariate approaches to conceptualize and measure GRE predictive validity within that larger context, our investigation reveals the substantial value of the GRE General Test, including its Analytical Writing section, for predicting graduate school grades.
This project described the characteristics and teaching behaviors of those successfully teaching AP® Calculus AB and AP English Literature and Composition to underrepresented minority students. Its purpose was to assist educators in improving the participation and performance of underrepresented minority students in AP classes. Study results showed successful teachers of minority students are good teachers for all groups. They express a high opinion of students, both majority and minority, and hold them to high standards. They make sure that students understand and can apply the fundamental concepts in the discipline. They also help students and parents understand and feel comfortable about college.
This validity study examined differential item functioning (DIF) results on large-scale state standards-based English-language arts assessments at grades 4 and 8 for students without disabilities taking the test under standard conditions and students who are blind or visually impaired taking the test with either a large print or braille form. Using the Mantel-Haenszel method, only one item at each grade was flagged as displaying large DIF, in each case favoring students without disabilities. Additional items were flagged as exhibiting intermediate DIF, with some items found to favor each group. A priori hypothesis coding and attempts to predict the effects of large print or braille accommodations on DIF were not found to have a relationship with the actual flagging of items, although some a posteriori explanations could be made. The results are seen as supporting the accessibility and validity of the current test for students who are blind or visually impaired while also identifying areas for improvement consisting mainly of attention to formatting and consistency.
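The Mantel-Haenszel method cited above pools, across ability strata, the 2x2 tables of group (reference vs. focal) by response (correct vs. incorrect) into a common odds ratio, which ETS conventionally maps onto a delta scale for flagging items. A minimal sketch under those standard definitions; the count tuples and threshold usage below are illustrative, not taken from the study:

```python
import math

def mantel_haenszel_odds_ratio(strata):
    """Common odds ratio across ability strata.

    strata -- list of (A, B, C, D) counts per stratum, where
              A/B = reference-group correct/incorrect and
              C/D = focal-group correct/incorrect.
    A value near 1.0 indicates no DIF; >1 favors the reference group.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_delta(alpha):
    """ETS MH D-DIF statistic: -2.35 * ln(alpha).

    Negative values indicate DIF against the focal group; under the
    usual ETS scheme, |delta| >= 1.5 (with significance) is classified
    as large ('C') DIF, the category the study flagged for one item
    per grade.
    """
    return -2.35 * math.log(alpha)

# Hypothetical balanced data: identical odds in both groups -> no DIF.
no_dif = mantel_haenszel_odds_ratio([(30, 10, 30, 10), (20, 20, 20, 20)])
```

Stratifying on total score before comparing groups is what makes this a DIF analysis rather than a simple impact comparison: it asks whether examinees of comparable ability, with and without visual impairments, differ in their odds of answering an item correctly.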