The potential benefits of computer-based testing include the ability to present a wider variety of item formats and to tailor tests for specific purposes. In this study we examined the relationships among revised reasoning items that had more varied formats than the traditional items, a constructed-response generating-explanations task, and the current GRE General Test. In addition, we examined the relationship of these measures to other indicators of achievement. A computer-based test of the revised reasoning items and the generating-explanations task was administered to a sample of examinees who had previously taken a paper-and-pencil version of the GRE General Test. The revised reasoning item types had acceptable psychometric characteristics and correlated highly with the current logical reasoning items as well as with items on the GRE verbal measure. The generating-explanations task was only marginally related to the GRE measures and to the revised reasoning items, and was more strongly related to ideational fluency measures. Factor analytic results suggest that if the revised reasoning items were added to the analytical measure, the correlation between the verbal measure and the analytical measure would increase. A better-fitting factor model resulted when some of the current and revised reasoning items were included in the verbal measure. The relationship of the generating-explanations task and the analytical measure to other indicators of achievement varied by type of achievement and by broad undergraduate field of study. The implications of these results for understanding the nature of the skills assessed by different tasks and formats, and for long-term modifications to the GRE General Test, are discussed.
Executive Summary

In this study the potential effects of adding new variations of reasoning items and a generating-explanations task to the GRE General Test were evaluated. The impact of these item types on the internal structure of the test and their relationships to other, concurrent criteria of success and achievement were examined. An experimental, computer-based test with the new variations and the generating-explanations task was administered to a sample of 388 examinees who had previously taken a paper-and-pencil version of the GRE General Test. The two most significant outcomes of this study were that (a) adding the new variations of logical reasoning items to the analytical reasoning measure is likely to decrease its discriminant validity, and that (b) the generating-explanations task is as distinct from the new variations of reasoning items as it is from verbal and quantitative reasoning.

The variations of reasoning items that were evaluated in this study included argument evaluation, logical functions, and analysis of explanations. The formats of the logical functions and analysis of explanations items differed from traditional multiple-choice items. For logical functions items, examinees had to highlight a sentence in an argument that served a particular function. Fo...