In this study, we evaluate the structural validity of Q.16 and Q.7 in the Force Concept Inventory (FCI). We address whether respondents who answer Q.16 and Q.7 correctly actually understand the concepts of physics tested in the questions. To examine respondents' levels of understanding, we use subquestions that test them on concepts believed to be required to answer the actual FCI questions. Our sample comprises 111 respondents; we derive false-positive ratios for prelearners and postlearners and then statistically test the difference between them. We find a significant difference at the 0.05 level for both Q.16 and Q.7, implying that postlearners can answer both questions correctly without understanding the concepts of physics they test; therefore, the structures of Q.16 and Q.7 are invalid. In this study, we evaluate only the validity of these two FCI questions; we do not assess the validity of previous studies that have compared total FCI scores.
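The comparison described above can be sketched with a standard two-proportion z-test on the false-positive ratios of the two groups. The snippet below is only an illustration of that kind of test: the function name, the counts, and the choice of test statistic are assumptions made for demonstration, not details taken from the study itself.

```python
# Minimal sketch of a two-proportion z-test on false-positive ratios for
# prelearners vs. postlearners. All counts below are hypothetical placeholders,
# NOT the study's data, and the study may have used a different test statistic.
import math


def two_proportion_ztest(fp1, n1, fp2, n2):
    """Two-sided z-test for the difference between two proportions.

    fp1/n1: false-positive count and group size for prelearners
    fp2/n2: false-positive count and group size for postlearners
    """
    p1, p2 = fp1 / n1, fp2 / n2
    p_pool = (fp1 + fp2) / (n1 + n2)                      # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided normal p-value
    return p1, p2, z, p_value


# Hypothetical counts for one question (e.g., Q.16); replace with real data.
p_pre, p_post, z, p = two_proportion_ztest(fp1=10, n1=50, fp2=24, n2=60)
print(f"prelearner FP ratio  = {p_pre:.2f}")
print(f"postlearner FP ratio = {p_post:.2f}")
print(f"z = {z:.2f}, two-sided p = {p:.3f}  (compare against alpha = 0.05)")
```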
In this study, we analyze the systematic error from false positives in the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with that of Q.5, for which no clearly erroneous reasoning has been found. We use subquestions to determine whether or not a correct response to a given FCI question is a false positive: in addition to the 30 original questions, subquestions were introduced for Q.5, Q.6, Q.7, and Q.16. This modified version of the FCI was administered to 1145 university students in Japan from 2015 to 2017. We find that the systematic errors of Q.6, Q.7, and Q.16 are much larger than that of Q.5 for students with mid-level FCI scores. Furthermore, averaged over the data sample, the sum of the false positives from Q.5, Q.6, Q.7, and Q.16 amounts to about 10% of the FCI score of a mid-level student.
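One way to operationalize the bookkeeping described above is to flag a response as a false positive when the main FCI question is answered correctly but its paired subquestion is not, and then count those flags per respondent. The sketch below assumes exactly that rule; the data structures, field names, and example responses are hypothetical and are not the authors' code or data.

```python
# Illustrative false-positive bookkeeping, under the (assumed) rule that a
# correct answer to an FCI question counts as a false positive when the paired
# subquestion is answered incorrectly. Field names and data are hypothetical.
from dataclasses import dataclass

PAIRED = ("Q5", "Q6", "Q7", "Q16")   # questions with subquestions in the modified FCI


@dataclass
class Response:
    correct: dict[str, bool]       # main-question correctness, keyed by question id
    sub_correct: dict[str, bool]   # subquestion correctness for the paired questions


def false_positives(r: Response) -> dict[str, bool]:
    """Flag each paired question answered correctly despite a wrong subquestion."""
    return {q: r.correct[q] and not r.sub_correct[q] for q in PAIRED}


# A single hypothetical respondent.
r = Response(
    correct={"Q5": True, "Q6": True, "Q7": False, "Q16": True},
    sub_correct={"Q5": True, "Q6": False, "Q7": False, "Q16": False},
)
fp = false_positives(r)
print(fp)                                  # {'Q5': False, 'Q6': True, 'Q7': False, 'Q16': True}
print("false-positive count:", sum(fp.values()))
```

Averaging such per-respondent counts over a sample, binned by total FCI score, would give the kind of score-dependent false-positive contribution the abstract reports.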
We address the validity of the FCI, that is, whether respondents who answer FCI questions correctly have an actual understanding of the concepts of physics tested in the questions. We use subquestions that test students on concepts believed to be required to answer the actual FCI questions. Our sample comprises about five hundred respondents; we derive false-positive ratios for pre-learners and post-learners and statistically test the difference between them. Our analysis shows a significant difference at the 95% confidence level for Q.6, Q.7, and Q.16, implying that it is possible for post-learners to answer these three questions correctly without understanding the concepts of physics they test; therefore, Q.6, Q.7, and Q.16 are invalid.