This study investigates university students' understanding of graphs in three different domains: mathematics, physics (kinematics), and contexts other than physics. Eight sets of parallel mathematics, physics, and other-context questions about graphs were developed. A test consisting of these eight sets of questions (24 questions in all) was administered to 385 first-year students at the University of Zagreb who were prospective physics or mathematics teachers or prospective physicists or mathematicians. Rasch analysis of the data was conducted and linear measures for item difficulties were obtained. Average difficulties of items in the three domains (mathematics, physics, and other contexts) and for the two concepts (graph slope, area under the graph) were computed and compared. The analysis suggests that the variation of average difficulty among the three domains is much smaller for the concept of graph slope than for the concept of area under the graph. Most of the slope items are very close in difficulty, suggesting that students who have developed sufficient understanding of graph slope in mathematics are generally able to transfer it almost equally successfully to other contexts. A large difference was found between the difficulty of the concept of area under the graph in physics and other contexts on the one hand and in mathematics on the other. Comparison of the average difficulty of the three domains suggests that mathematics without context is the easiest domain for students. Adding either physics or other context to mathematical items generally seems to increase item difficulty. No significant difference was found between the average item difficulty in physics and in contexts other than physics, suggesting that physics (kinematics) remains a difficult context for most students despite the instruction on kinematics they received in high school.
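As a brief illustration of the two graph concepts in the kinematics domain (generic textbook relations, not items taken from the study's test):

```latex
% Standard kinematics reading of the two graph concepts (illustrative only).
\[
  v(t) = \frac{dx}{dt}
  \qquad \text{velocity is the slope of the position--time graph,}
\]
\[
  \Delta x = \int_{t_1}^{t_2} v(t)\, dt
  \qquad \text{displacement is the area under the velocity--time graph.}
\]
```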
Previous studies have identified physics students' difficulties with graph slope and area under a graph in different contexts. In this study we compared physics and nonphysics (psychology) students' understanding of graphs; i.e., we evaluated the effects of concept (graph slope vs area under the graph), type of question (qualitative vs quantitative), and context (physics vs finance) on their scores, strategies, and eye-tracking data. All students solved the questions about graph slope better than the questions about area under a graph. Psychology students scored rather low on the questions about area under a graph, and physics students spent more time on area questions than on slope questions, indicating that area under a graph is quite a difficult concept whose understanding seems unlikely to develop spontaneously. Generally, physics students had comparable scores on the qualitative and quantitative questions, whereas psychology students solved the qualitative questions much better. As expected, students' scores and eye-tracking measures indicated that problems involving a physics context were easier for physics students, since they typically had higher scores and shorter total and axes viewing times for physics questions than for finance questions. Some physics students may have transferred the concepts and techniques from physics to finance, because they typically scored better than psychology students on the finance questions, which were novel for both groups. Analysis of student strategies showed that physics students mostly relied on strategies learned in physics courses, with a strong emphasis on the use of formulas, whereas psychology students mostly used common-sense strategies, as they did not know the physics formulas. The implications of the results for teaching and learning about graphs in physics courses are also discussed.
The Force Concept Inventory (FCI) is an important diagnostic instrument which is widely used in the field of physics education research. It is therefore very important to evaluate and monitor its functioning using different tools for statistical analysis. One such tool is the stochastic Rasch model, which enables construction of linear measures for persons and items from raw test scores and which can provide important insight into the structure and functioning of the test (how item difficulties are distributed within the test, how well the items fit the model, and how well the items work together to define the underlying construct). The data for the Rasch analysis come from large-scale research conducted in 2006-07, which investigated Croatian high school students' conceptual understanding of mechanics on a representative sample of 1676 students (age 17-18 years). The instrument used in the research was the FCI. The average FCI score for the whole sample was found to be (27.7 ± 0.4)%, indicating that most of the students were still non-Newtonians at the end of high school, despite the fact that physics is a compulsory subject in Croatian schools. The large set of obtained data was analyzed with the Rasch measurement computer software WINSTEPS 3.66. Since the FCI is routinely used as a pretest and post-test on two very different types of population (non-Newtonian and predominantly Newtonian), an additional predominantly Newtonian sample (N = 141, average FCI score of 64.5%) of first-year students enrolled in an introductory physics course at the University of Zagreb was also analyzed. The Rasch-model-based analysis suggests that the FCI has succeeded in defining a sufficiently unidimensional construct for each population. The analysis of fit of data to the model found no grossly misfitting items which would degrade measurement. Some items with larger misfit and items with significantly different difficulties in the two samples of students do require further examination. The analysis revealed some problems with item distribution in the FCI and suggested that the FCI may function differently in non-Newtonian and predominantly Newtonian populations. Some possible improvements of the test are suggested.
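For reference, the dichotomous Rasch model underlying such analyses has the standard form (a generic statement of the model, not a detail of the WINSTEPS implementation):

```latex
% Dichotomous Rasch model: probability of a correct response
% as a function of person ability and item difficulty (both in logits).
\[
  P(X_{ni} = 1 \mid \theta_n, \delta_i)
  = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}
\]
```

where θ_n is the ability of person n and δ_i the difficulty of item i, both placed on a common linear (logit) scale.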
[This paper is part of the Focused Collection on Quantitative Methods in PER: A Critical Examination.] The Rasch model is a probabilistic model which describes the interaction of persons (test takers or survey respondents) with test or survey items and is governed by two parameters: item difficulty and person ability. Rasch measurement parallels physical measurement processes by constructing and using linear person and item measures that are independent of the particular characteristics of the sample and the test items along a unidimensional construct. The model's properties make it especially suitable for test construction and evaluation as well as for the development and use of surveys. The evaluation of item fit with the model can pinpoint problematic items and flag idiosyncratic respondents. Because item difficulties can be determined independently of the sample, the Rasch model can also be used for linking tests and tracking students' progression. The use of the Rasch model in PER is continuously increasing. We provide an overview and examples of its use and benefits, and also outline common mistakes or misconceptions made by researchers when considering the use of the Rasch model. We focus in particular on the question of how Rasch modeling can improve some common practices in PER, such as test construction, test evaluation, and calculation of student gain on PER diagnostic instruments.
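A minimal computational sketch of the dichotomous model described above (illustrative only; actual PER analyses typically rely on dedicated Rasch software such as WINSTEPS, and the function names here are hypothetical):

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Probability that a person of the given ability answers an item
    of the given difficulty correctly; both parameters are in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def expected_raw_score(ability: float, difficulties: list[float]) -> float:
    """Expected raw test score: the sum of success probabilities over items.
    Illustrates the nonlinear relation between raw scores and linear measures."""
    return sum(rasch_probability(ability, d) for d in difficulties)

# A person 1 logit above an item's difficulty succeeds ~73% of the time,
# regardless of which particular person-item pair produced that difference.
print(round(rasch_probability(0.5, -0.5), 3))   # 0.731

# Equal ability steps of 2 logits yield shrinking raw-score gains near the
# top of the scale, which is why raw scores are not linear measures.
items = [-2.0, -1.0, 0.0, 1.0, 2.0]
for theta in (0.0, 2.0, 4.0):
    print(theta, round(expected_raw_score(theta, items), 2))  # 2.5, 4.05, 4.81
```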