Preinstruction SAT scores and normalized gains (G) on the Force Concept Inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N = 335) and one university (N = 292), and strong, positive correlations were found for both populations (r = 0.57 and r = 0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may result from the much shorter time interval between taking the SAT and studying mechanics: the SAT may provide a more current measure of abilities for high school students beginning mechanics than for college students, who begin mechanics years after taking the test. In prior research, a strong correlation between FCI G and scores on Lawson's Classroom Test of Scientific Reasoning was observed for students from the same two schools. Our results suggest that, when interpreting class-average normalized FCI gains and comparing different classes, it is important to take into account the variation in students' cognitive skills, as measured either by the SAT or by Lawson's test. Although Lawson's test is not commonly given to students in introductory mechanics courses, SAT scores provide a readily available alternative measure of students' reasoning abilities. Knowing students' cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve that level.
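The normalized gain referred to here is Hake's standard definition, g = (post − pre)/(100 − pre) for scores expressed as percentages. The sketch below is illustrative only; the data arrays are hypothetical, not data from the study, and it simply shows how a per-student g and its Pearson correlation with SAT scores could be computed.

```python
# Minimal sketch (not the authors' analysis code): per-student normalized gain
# and its correlation with preinstruction SAT scores. Arrays are hypothetical.
import numpy as np
from scipy.stats import pearsonr

sat      = np.array([480, 560, 610, 700, 520, 650])  # preinstruction SAT
fci_pre  = np.array([30., 40., 45., 60., 35., 55.])  # FCI pretest, percent correct
fci_post = np.array([45., 70., 72., 92., 55., 88.])  # FCI posttest, percent correct

# Hake's normalized gain for each student: g = (post - pre) / (100 - pre)
g = (fci_post - fci_pre) / (100.0 - fci_pre)

# Pearson correlation between SAT score and normalized gain
r, p = pearsonr(sat, g)
print(f"r = {r:.2f}, p = {p:.3f}")
```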
Many teachers administer a force concept test such as the Force Concept Inventory [1,2] (FCI) to their students in an effort to evaluate and improve their instructional practices. It is commonly assumed that examining class normalized gains allows teachers to compare their courses with other courses. In this paper we present evidence suggesting that class normalized gains alone may not provide a complete picture. We argue that student reasoning ability should also be assessed before between-course comparisons can be made. Assessment of reasoning ability is also useful in identifying students who are at risk. In the following we concentrate on the FCI, but we think our conclusions probably apply to physics concept tests generally.
Recently, Nissen et al. argued in this journal for the use of Cohen's d, in place of the more commonly used normalized gain, in the analysis of preinstruction and postinstruction scores on concept inventories used to measure the effectiveness of instruction. Their reason for advocating this change is their claim that normalized gains are "prescore biased." We provide five examples, including one cited by Nissen, that show no prescore bias when the data are carefully analyzed, demonstrating that the problem with their analysis is omitted-variable bias. We show that Cohen's d is less informative than normalized gain when used as a single-parameter measure of teaching effectiveness, even though, as Nissen points out, d is more widely used in other fields. We believe that physics education researchers should continue to use normalized gain to assess the educational effectiveness of pedagogy. However, because different student populations can respond very differently to the same pedagogy, any interpretation of normalized gain should take into account a measure of the students' abilities. In analyzing normalized gains on the Force Concept Inventory (FCI), average scores on either Lawson's Classroom Test of Scientific Reasoning or the SAT should be considered, because these scores are strongly correlated with normalized gain, indicating that student abilities may have a greater impact on the gains achieved in a class than the specific pedagogy used.
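For concreteness, the sketch below contrasts the two single-number measures at issue: the class-average normalized gain, <g> = (<post> − <pre>)/(100 − <pre>), and Cohen's d, taken here as the pre/post mean difference divided by a pooled standard deviation. The score arrays are hypothetical, and conventions for Cohen's d in paired pre/post designs vary, so this is an illustration rather than a reproduction of either paper's analysis.

```python
# Minimal sketch contrasting class-average normalized gain <g> with Cohen's d
# for pre/post FCI scores. Data are hypothetical; a simple pooled-SD form of
# Cohen's d is used here purely as an illustration.
import numpy as np

pre  = np.array([28., 35., 42., 50., 33., 47., 38., 55.])   # FCI pretest %
post = np.array([50., 62., 70., 85., 58., 80., 66., 90.])   # FCI posttest %

# Class-average normalized gain (Hake): <g> = (<post> - <pre>) / (100 - <pre>)
avg_gain = (post.mean() - pre.mean()) / (100.0 - pre.mean())

# Cohen's d with a pooled standard deviation of the pre and post distributions
s_pooled = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2.0)
cohens_d = (post.mean() - pre.mean()) / s_pooled

print(f"<g> = {avg_gain:.2f}, Cohen's d = {cohens_d:.2f}")
```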
In a recent article, Ates and Cataloglu (2007 Eur. J. Phys. 28 1161–71), in analysing results for a course in introductory mechanics for prospective science teachers, found no statistically significant correlation between students' pre-instruction scores on the Lawson classroom test of scientific reasoning ability (CTSR) and post-instruction scores on the force concept inventory (FCI). As a possible explanation, the authors suggest that the FCI does not probe for skills required to determine reasoning abilities. Our previously published research directly contradicts the authors' finding. We summarize our research and present a likely explanation for their observation of no correlation.