Many STEM degrees require passing an introductory physics course. Physics courses often have high failure rates that may disproportionately harm students who are marginalized by racism, sexism, and classism. We examined the associations between Learning Assistant (LA) supported courses and equity in non-passing grades (i.e., D, drop, fail, or withdrawal; DFW) in introductory physics courses. The data used in the study came from 2312 students in 41 sections of introductory physics courses at a regional Hispanic-serving institution. We developed hierarchical generalized linear models of student DFW rates that accounted for gender, race, first-generation status, and LA-supported instruction. We used a quantitative critical race theory (QuantCrit) perspective focused on the role of hegemonic power structures in perpetuating inequitable student outcomes. Our QuantCrit perspective informed our research questions, methods, and interpretations of findings. The models associated LAs with overall decreases in DFW rates and larger decreases in DFW rates for students of color than for their white peers. While the inequities in DFW rates were lower in LA-supported courses, they were still present.
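The modeling approach described here lends itself to a short sketch. The code below is not the authors' analysis; it assumes a hypothetical CSV with columns dfw, gender, race, first_gen, la_support, and section, and fits a hierarchical logistic regression (a mixed-effects GLM) with statsmodels.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical student-level table: dfw is 1 for a D, drop, fail, or
# withdrawal outcome and 0 otherwise; section identifies the course section.
df = pd.read_csv("intro_physics_students.csv")  # assumed file and columns

# Fixed effects for demographics, with a race-by-LA interaction to probe
# whether LA support is associated with larger DFW reductions for students
# of color; a random intercept per section handles the nesting of students
# within course sections.
model = BinomialBayesMixedGLM.from_formula(
    "dfw ~ C(gender) + C(first_gen) + C(race) * C(la_support)",
    {"section": "0 + C(section)"},
    df,
)
result = model.fit_vb()  # variational Bayes estimate of the mixed logit
print(result.summary())
```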
Physics education research (PER) studies commonly use complete-case analysis to address missing data. In complete-case analysis, researchers discard all data from any student who is missing any data. Despite its frequent use, no PER article we reviewed that used complete-case analysis provided evidence that the data met the assumption of missing completely at random (MCAR) that is necessary to ensure accurate results. Not meeting this assumption raises the possibility that prior studies have reported biased results with inflated gains that may obscure differences across courses. To test this possibility, we compared the accuracy of complete-case analysis and multiple imputation (MI) using simulated data. We simulated the data based on prior studies such that students who earned higher grades participated at higher rates, which made the data missing at random (MAR). PER studies seldom use MI, but MI uses all available data, has less stringent assumptions, and is more accurate and more statistically powerful than complete-case analysis. Results indicated that complete-case analysis introduced more bias than MI, and this bias was large enough to obscure differences between student populations or between courses. We recommend that the PER community adopt MI for handling missing data to improve the accuracy of research studies.
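A minimal simulation sketch of the mechanism described above, assuming Python with numpy, pandas, and statsmodels; the variable names, sample sizes, and coefficients are invented for illustration, not taken from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: posttest score tied to course grade.
grade = rng.normal(3.0, 0.7, n)
post = 50.0 + 10.0 * grade + rng.normal(0.0, 8.0, n)

# MAR mechanism: higher-grade students take the posttest more often, so
# missingness depends on an observed variable (grade), not on the missing
# posttest value itself.
p_respond = 1.0 / (1.0 + np.exp(-2.0 * (grade - 3.0)))
post_observed = np.where(rng.random(n) < p_respond, post, np.nan)
df = pd.DataFrame({"grade": grade, "post": post_observed})

# Complete-case estimate: drops every student missing a posttest.
cc_mean = df["post"].dropna().mean()

# Multiple imputation by chained equations: average over imputed data sets.
imp = MICEData(df)
mi_means = []
for _ in range(20):
    imp.update_all()
    mi_means.append(imp.data["post"].mean())

print(f"true mean          {post.mean():6.2f}")
print(f"complete-case mean {cc_mean:6.2f}  (biased upward under MAR)")
print(f"MI mean            {np.mean(mi_means):6.2f}")
```

Because the imputation model can use the fully observed grades, MI recovers a mean close to the true one, while the complete-case mean is inflated by the overrepresentation of high-grade students.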
We investigated the intersectional nature of race/racism and gender/sexism in broad-scale inequities in physics student learning using a critical quantitative intersectionality framework. To provide transparency and create a nuanced picture of learning, we problematized the measurement of equity by using two competing operationalizations of equity: Equity of Individuality and Equality of Learning. These two models led to conflicting conclusions. The analyses used hierarchical linear models to examine students' conceptual learning as measured by gains in scores on research-based assessments administered as pretests and posttests. The data came from the Learning About STEM Student Outcomes (LASSO) national database and included data from 13,857 students in 187 first-semester college physics courses. Findings showed differences in student gains across gender and race. Large gender differences existed for White and Hispanic students but not for Asian, Black, and Pacific Islander students. The models predicted larger gains for students in collaborative learning than in lecture-based courses. The Equity of Individuality operationalization indicated that collaborative instruction improved equity because all groups learned more with collaborative learning. The Equality of Learning operationalization indicated that collaborative instruction did not improve equity because differences between groups were unaffected. We discuss the implications of these mixed findings and identify areas for future research.
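As a sketch of the hierarchical linear modeling this abstract describes, the following assumes a hypothetical student-level table with columns pre, post, gender, race, pedagogy, and course_id; it is illustrative only and not the authors' specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level table with pre/post scores, demographic
# labels, course pedagogy, and a course identifier (the grouping level
# of the hierarchy: students nested within courses).
df = pd.read_csv("lasso_students.csv")  # assumed file and columns
df["gain"] = df["post"] - df["pre"]

# Random intercept per course; the gender-by-race interaction mirrors an
# intersectional specification rather than purely additive demographics.
model = smf.mixedlm(
    "gain ~ C(gender) * C(race) + C(pedagogy) + pre",
    df,
    groups=df["course_id"],
)
result = model.fit()
print(result.summary())
```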
Measuring student learning is a complicated but necessary task for understanding the effectiveness of instruction and issues of equity in college science, technology, engineering, and mathematics (STEM) courses. Our investigation focused on the implications for claims about student learning that result from choosing between two commonly used metrics for analyzing shifts in concept inventories. The metrics are normalized gain (g), the most common method used in physics education research and other discipline-based education research fields, and Cohen's d, which is broadly used in education research and many other fields. Data for the analyses came from the Learning About STEM Student Outcomes (LASSO) database and included test scores from 4551 students on physics, chemistry, biology, and math concept inventories from 89 courses at 17 institutions across the United States. We compared the two metrics across all the concept inventories. The results showed that the two metrics lead to different inferences about student learning and equity because g is biased in favor of high-pretest populations. We discuss recommendations for the analysis and reporting of findings on student learning data.
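To make the contrast between the two metrics concrete, here is a small simulation sketch in Python (not the paper's analysis): two hypothetical courses with identical raw improvement and spread differ only in pretest mean, and g rises with the pretest while d does not.

```python
import numpy as np

def normalized_gain(pre, post, max_score=100.0):
    """Class-level normalized gain g: the fraction of the available room
    for improvement (above the class pretest mean) that was realized."""
    return (post.mean() - pre.mean()) / (max_score - pre.mean())

def cohens_d(pre, post):
    """Effect size d: the mean pre-to-post shift in pooled-SD units."""
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2.0)
    return (post.mean() - pre.mean()) / pooled_sd

rng = np.random.default_rng(1)
# Two simulated courses with the same 15-point raw improvement and
# spread, differing only in pretest mean: g rewards the high-pretest
# course while d stays roughly constant, illustrating g's pretest bias.
for label, pre_mean in [("low pretest ", 30.0), ("high pretest", 60.0)]:
    pre = rng.normal(pre_mean, 12.0, 500).clip(0.0, 100.0)
    post = (pre + 15.0 + rng.normal(0.0, 6.0, 500)).clip(0.0, 100.0)
    print(f"{label}: g = {normalized_gain(pre, post):.2f}, "
          f"d = {cohens_d(pre, post):.2f}")
```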