We review the literature on the gender gap on concept inventories in physics. Across studies of the most commonly used mechanics concept inventories, the Force Concept Inventory and the Force and Motion Conceptual Evaluation, men's average pretest scores are always higher than women's, and in most cases men's posttest scores are higher as well. The weighted average gender difference on these tests is 13% for pretest scores, 12% for posttest scores, and 6% for normalized gain. This difference is much smaller than the average difference in normalized gain between traditional lecture and interactive engagement (25%), but it is large enough that it could affect the results of studies comparing the effectiveness of different teaching methods. There is sometimes a gender gap on the commonly used electricity and magnetism concept inventories, the Brief Electricity and Magnetism Assessment and the Conceptual Survey of Electricity and Magnetism, but it is usually much smaller and is sometimes zero or favors women. The weighted average gender difference on these tests is 3.7% for pretest scores, 8.5% for posttest scores, and 6% for normalized gain. There are far fewer studies of the gender gap on electricity and magnetism concept inventories and much more variation among the existing studies. Based on our analysis of 26 published articles comparing the impact of 30 factors that could potentially influence the gender gap, no single factor is sufficient to explain the gap. Several high-profile studies that claimed to account for or reduce the gender gap have failed to be replicated in subsequent studies, suggesting that isolated claims of explanations of the gender gap should be interpreted with caution. For example, claims that the gender gap could be eliminated through interactive engagement teaching methods or through a "values affirmation writing exercise" were not supported by subsequent studies. Suggestions that the gender gap might be reduced by changing the wording of "male-oriented" questions or by refraining from asking demographic questions before administering the test are not supported by the evidence. Other factors, such as gender differences in background preparation, scores on different kinds of assessments, and splits between how students respond to test questions when answering for themselves versus for a "scientist," do contribute to a difference between male and female responses, but these differences are smaller than the overall gender gap, suggesting that the gap is most likely due to the combination of many small factors rather than any one factor that can easily be modified.
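For readers unfamiliar with the quantities reported above, the sketch below shows how normalized gain and a weighted average difference across studies are commonly computed. The specific scores, study sizes, and the choice to weight by number of students are illustrative assumptions, not data from the reviewed studies.

```python
# Minimal sketch (not the reviewed studies' analysis code) of the two quantities
# discussed above: Hake's normalized gain and a sample-size-weighted average
# gender difference. All numbers are hypothetical.

def normalized_gain(pre_percent: float, post_percent: float) -> float:
    """Normalized gain g = (post - pre) / (100 - pre), with scores in percent."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

def weighted_average_difference(studies):
    """Average the per-study (men - women) difference, weighted by sample size."""
    total_n = sum(n for _, n in studies)
    return sum(diff * n for diff, n in studies) / total_n

# Hypothetical per-study posttest gender differences (percentage points) and sample sizes.
example_studies = [(12.0, 400), (9.0, 150), (15.0, 800)]

print(normalized_gain(45.0, 70.0))                    # ~0.45 for one illustrative class
print(weighted_average_difference(example_studies))   # weighted posttest gap, in points
```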
In this meta-analysis, we synthesize the results of 24 studies using the Colorado Learning Attitudes about Science Survey (CLASS) and the Maryland Physics Expectations Survey (MPEX) to answer several questions: (1) How does physics instruction impact students' beliefs? (2) When do physics majors develop expert-like beliefs? and (3) How do students' beliefs impact their learning of physics? We report that in typical physics classes, students' beliefs deteriorate or at best stay the same. A few types of interventions, including an explicit focus on model-building and/or on developing expert-like beliefs, lead to significant improvements in beliefs. Further, small courses and those for elementary education and non-science majors also result in improved beliefs. However, because the available data oversample certain types of classes, it is unclear whether these improvements are actually due to the interventions or to the small class sizes and student populations typical of the kinds of classes in which these interventions are most often used. Physics majors tend to enter their undergraduate education with more expert-like beliefs than non-majors, and these beliefs remain relatively stable throughout their undergraduate careers. Thus, typical physics courses appear to be selecting students who already have strong beliefs rather than supporting students in developing strong beliefs. There is a small correlation between students' incoming beliefs about physics and their gains on conceptual mechanics surveys. This suggests that students with more expert-like incoming beliefs may learn more in their physics courses, but this finding should be further explored and replicated. To answer the questions that remain open, we advocate several specific types of future studies: measuring students' beliefs in courses with a wider range of class sizes, student populations, and teaching methods, especially large classes with very innovative pedagogy and small classes with more typical pedagogy; analysis of the relationship between students' beliefs and conceptual understanding, including a wide variety of variables that might influence each; and analysis of large data sets from a variety of classes that track individual students rather than averaging over classes.
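The two kinds of calculations summarized above, pre-to-post shifts in beliefs and the correlation between incoming beliefs and conceptual gains, can be illustrated with a short sketch. The percent-favorable scoring convention is a common way CLASS and MPEX results are reported, but every number below is hypothetical and the code is not the authors' analysis.

```python
# Sketch, under the assumptions stated above, of (a) the mean pre-to-post shift in
# expert-like ("percent favorable") beliefs and (b) a Pearson correlation between
# incoming beliefs and conceptual-survey normalized gains.
from statistics import correlation, mean  # statistics.correlation requires Python 3.10+

# Hypothetical per-class pre/post percent-favorable scores.
pre_favorable = [65.0, 58.0, 70.0, 62.0]
post_favorable = [61.0, 55.0, 71.0, 60.0]
shifts = [post - pre for pre, post in zip(pre_favorable, post_favorable)]
print(mean(shifts))  # a negative mean shift means beliefs deteriorated on average

# Hypothetical per-student incoming belief scores and mechanics normalized gains.
incoming_beliefs = [55.0, 70.0, 62.0, 80.0, 48.0, 66.0]
normalized_gains = [0.30, 0.45, 0.35, 0.50, 0.25, 0.40]
print(correlation(incoming_beliefs, normalized_gains))  # Pearson r
```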
This study investigated how visual attention differed between those who correctly versus incorrectly answered introductory physics problems. We recorded eye movements of 24 individuals on six different conceptual physics problems in which the information necessary to solve the problem was contained in a diagram. The problems also contained areas consistent with a novicelike response and areas of high perceptual salience. Participants ranged from those who had taken only one high school physics course to those who had completed a physics Ph.D. We found that participants who answered correctly spent a higher percentage of time looking at the relevant areas of the diagram, while those who answered incorrectly spent a higher percentage of time looking at areas of the diagram consistent with a novicelike answer. Thus, when solving physics problems, top-down processing plays a key role in guiding visual selective attention either to thematically relevant areas or to novicelike areas, depending on the accuracy of a student's physics knowledge. This result has implications for the use of visual cues to redirect individuals' attention to relevant portions of the diagrams, which may in turn influence the way they reason about these problems.
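A minimal sketch of the kind of dwell-time analysis described above is given below. The fixation record format, the rectangular areas of interest, and all coordinates are assumptions for illustration, not the study's actual data or software.

```python
# Sketch: percentage of total fixation time spent in each area of interest (AOI),
# e.g. a "relevant" region versus a "novicelike" region of a problem diagram.
from typing import NamedTuple

class Fixation(NamedTuple):
    x: float         # gaze position in pixels
    y: float
    duration: float  # fixation duration in milliseconds

# Hypothetical rectangular AOIs: (x_min, y_min, x_max, y_max) in pixels.
aois = {
    "relevant": (100, 200, 300, 350),
    "novicelike": (400, 200, 600, 350),
}

def percent_dwell_time(fixations, box):
    """Percentage of total fixation time falling inside one rectangular AOI."""
    x0, y0, x1, y1 = box
    total = sum(f.duration for f in fixations)
    inside = sum(f.duration for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1)
    return 100.0 * inside / total if total else 0.0

fixations = [Fixation(150, 250, 300), Fixation(450, 300, 200), Fixation(120, 220, 500)]
for name, box in aois.items():
    print(name, percent_dwell_time(fixations, box))
```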
This resource letter provides a guide to research-based assessment instruments (RBAIs) of physics and astronomy content. These are standardized assessments that were rigorously developed and revised using student ideas and interviews, expert input, and statistical analyses. RBAIs have had a major impact on physics and astronomy education reform by providing a universal and convincing measure of student understanding that instructors can use to assess and improve the effectiveness of their teaching. In this resource letter, we present an overview of all content RBAIs in physics and astronomy by topic, research validation, instructional level, format, and themes, to help faculty find the best assessment for their course.

I. INTRODUCTION

The physics and astronomy education research communities have produced 60+ research-based assessment instruments (RBAIs) of physics and astronomy content, which evaluate the effectiveness of different teaching methods. We define a research-based assessment as an assessment that is developed based on research into student thinking for use by the wider physics and astronomy education community to provide a standardized assessment of teaching and learning. Conceptual RBAIs have had a major impact on physics education reform by providing a universal and convincing measure of student understanding that instructors can use to assess and improve the effectiveness of their teaching. Studies using these instruments consistently show that research-based teaching methods lead to dramatic improvements in students' conceptual understanding of physics.1,2 These instruments are already being used on a very large scale: the Force Concept Inventory3 (FCI), a test of basic concepts of forces and acceleration, has been given to thousands of students throughout the world; the use of similar instruments in nearly every subject area of physics is becoming increasingly widespread. According to a recent survey of faculty who are about to participate in the Workshop for New Faculty in Physics and Astronomy, nearly half have heard of the FCI, and nearly a quarter have used it in their classrooms.4 The use of these instruments has the potential to transform teaching practice by informing instructors about their teaching efficacy so that they can improve it. Our previous research shows that many physics faculty are aware of the existence of RBAIs for introductory physics, but want to know more about RBAIs for a wider range of topics, including upper-division physics, and about which assessments are available and how to use them.5 This resource letter addresses these needs by presenting an overview of content RBAIs by topic, research validation, instructional level, format, and themes, to help faculty find the best assessment for their course. A second resource letter will discuss the large number of RBAIs that cover non-content topics such as attitudes and beliefs about physics, epistemologies and expectations, the nature of physics, problem solving, self-efficacy, math skills, reasoning ...
There is a plethora of concept inventories available for faculty to use, but it is not always clear exactly why you would use these tests, how you should administer them, or how to interpret the results. These research-based tests about physics and astronomy concepts are valuable because they allow for standardized comparisons among institutions, among instructors, or over time. In order for these comparisons to be meaningful, you should use best practices for administering the tests. In interviews with 24 physics faculty,1 we identified common questions that faculty members have about concept inventories. This article addresses those questions and provides a summary of best practices for administering concept inventories.
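As one illustration of the standardized comparisons such tests enable, the sketch below compares average normalized gain between two hypothetical course sections and reports an effect size. The per-student gains and the use of Cohen's d are assumptions made for illustration, not recommendations drawn from the article itself.

```python
# Sketch of a standardized comparison between two sections of a course using
# per-student normalized gains (hypothetical data) and a pooled-SD effect size.
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Effect size for the difference in means of two groups, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical per-student normalized gains in two sections of the same course.
section_a = [0.55, 0.62, 0.48, 0.70, 0.51, 0.66]
section_b = [0.35, 0.42, 0.30, 0.47, 0.38, 0.33]
print(mean(section_a) - mean(section_b))  # difference in average normalized gain
print(cohens_d(section_a, section_b))     # effect size of that difference
```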