A two-part study was conducted to determine whether theoretical work examining gender differences in cognitive processing can be applied to quantitative items on the Graduate Record Examination (GRE®) to minimize gender differences in performance. In Part I, the magnitude of gender differences in performance on specific test items was predicted using a coding scheme. In Part II, a new test was created by using the coding scheme developed in Part I to clone items that elicited few gender-based performance differences. Results indicate that gender differences in performance on some GRE quantitative items may be influenced by cognitive factors such as item context, whether multiple solution paths lead to a correct answer, and whether spatially based shortcuts can be used.
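To make the Part I idea concrete, the sketch below compares coder-assigned predictions with observed per-item gender gaps in percent correct. Everything in it is a hypothetical illustration under simulated data; the variable names, the -1/0/+1 coding, and the correlation check are assumptions for exposition, not the study's actual coding scheme or analysis.

```python
# A minimal sketch, under simulated data, of comparing coding-scheme
# predictions with observed per-item gender differences in performance.
# All names and numbers are hypothetical, not the study's analysis.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_examinees = 30, 500

# Hypothetical codes from a coding scheme:
# -1 = predicted to favor women, 0 = neutral, +1 = predicted to favor men.
predicted_code = rng.integers(-1, 2, size=n_items)

# Simulated scored responses (1 = correct, 0 = incorrect) per group.
women = rng.integers(0, 2, size=(n_examinees, n_items))
men = rng.integers(0, 2, size=(n_examinees, n_items))

# Observed gap: difference in percent correct on each item.
observed_gap = 100 * (men.mean(axis=0) - women.mean(axis=0))

# How well do the codes track the observed gaps?
r = np.corrcoef(predicted_code, observed_gap)[0, 1]
print(f"Correlation of predicted codes with observed gaps: r = {r:.2f}")
```

A positive correlation of this kind would suggest that the cognitive codes track the direction and size of the observed item-level gaps, which is the sort of evidence a Part I prediction exercise would look for.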
In order to create fair and valid assessments, it is necessary to be clear about what is to be measured and how the resulting data should be interpreted. For a number of historical and practical reasons described in this paper, adequately detailed statements of the construct of quantitative reasoning, grounded in both theory and empirical evidence, do not currently exist for use in assessments. There is also no adequate explanation of the important differences between assessments that measure quantitative reasoning constructs and those intended to measure achievement in related mathematical content areas. The literature in psychology, psychometrics, philosophy, and education, while containing much that is relevant to the construct of quantitative reasoning, unfortunately does not provide materials that can be used in research and development to address such practical issues or to explain the basic nature of quantitative reasoning assessments. This paper briefly discusses the importance and use of constructs and the quantitative reasoning and standards literature. It then presents a statement of the construct of quantitative reasoning for assessment purposes within a construct validity framework that includes both a definition of the construct and threats to valid score interpretation. These threats arise from related but distinguishable constructs and from other sources of construct-irrelevant variance in the assessments.
In order to estimate the likely effects on item difficulty when a calculator becomes available on the quantitative section of the Graduate Record Examinations® (GRE®-Q), 168 items (in six 28-item forms) were administered either with or without access to an on-screen four-function calculator. The forms were administered as a special research section at the end of operational tests, with student volunteers randomly assigned to the calculator or no-calculator groups. Usable data were obtained from 13,159 participants. Test development specialists were asked to rate which items they thought would become easier with a calculator. In general, the specialists were successful in identifying the items with relatively large calculator effects, though even these effects were quite small. For the items identified as likely to show calculator effects, an adjustment of only about four points in percent correct should suffice; no adjustment is needed for the majority of the items. Introduction of a calculator should have little or no effect on gender and ethnic differences.
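The core comparison in this design can be illustrated with a short sketch: estimate each item's calculator effect as the difference in percent correct between the randomly assigned groups, then flag items whose effect exceeds the roughly four-point adjustment mentioned above. The data, group sizes, and threshold here are simulated assumptions, not the study's analysis code.

```python
# Minimal sketch of the item-level comparison described above, assuming
# hypothetical response matrices: rows are examinees, columns are items,
# entries are 1 (correct) / 0 (incorrect).
import numpy as np

rng = np.random.default_rng(0)
n_items = 28  # one 28-item form

# Simulated responses for the two randomly assigned groups.
calc_group = rng.integers(0, 2, size=(1000, n_items))
no_calc_group = rng.integers(0, 2, size=(1000, n_items))

# Per-item calculator effect: difference in percent correct between groups.
pct_calc = 100 * calc_group.mean(axis=0)
pct_no_calc = 100 * no_calc_group.mean(axis=0)
effect = pct_calc - pct_no_calc

# Flag items whose estimated effect exceeds the ~4-point adjustment
# mentioned in the abstract.
flagged = np.flatnonzero(effect > 4.0)
print(f"Items with calculator effect > 4 points: {flagged.tolist()}")
```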