Many STEM degrees require passing an introductory physics course. Physics courses often have high failure rates that may disproportionately harm students who are marginalized by racism, sexism, and classism. We examined the associations between Learning Assistant (LA) supported courses and equity in non-passing grades (i.e., grades of D, drop, fail, or withdrawal; DFW) in introductory physics courses. The data used in the study came from 2312 students in 41 sections of introductory physics courses at a regional Hispanic-serving institution. We developed hierarchical generalized linear models of student DFW rates that accounted for gender, race, first-generation status, and LA-supported instruction. We used a quantitative critical race theory (QuantCrit) perspective focused on the role of hegemonic power structures in perpetuating inequitable student outcomes. Our QuantCrit perspective informed our research questions, methods, and interpretations of findings. The models associated LAs with overall decreases in DFW rates and with larger decreases in DFW rates for students of color than for their White peers. While the inequities in DFW rates were lower in LA-supported courses, they were still present.
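As a rough illustration of the modeling approach described above, the sketch below fits a hierarchical (mixed-effects) logistic regression of DFW outcomes with a random intercept for course section, using the R package lme4. The data frame and variable names (students, dfw, gender, race, first_gen, la_supported, section) are hypothetical placeholders, not the study's actual data or model specification.

library(lme4)

# Hierarchical generalized linear model: log-odds of a DFW outcome, with
# demographics, LA support, and a race-by-LA interaction as fixed effects
# and a random intercept for each course section.
dfw_model <- glmer(
  dfw ~ gender + first_gen + race * la_supported + (1 | section),
  data   = students,
  family = binomial(link = "logit")
)
summary(dfw_model)  # fixed-effect estimates are on the log-odds scale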
We investigated the intersectional nature of race/racism and gender/sexism in broad-scale inequities in physics student learning using a critical quantitative intersectionality framework. To provide transparency and create a nuanced picture of learning, we problematized the measurement of equity by using two competing operationalizations of equity: Equity of Individuality and Equality of Learning. These two models led to conflicting conclusions. The analyses used hierarchical linear models to examine students' conceptual learning as measured by gains in scores on research-based assessments administered as pretests and posttests. The data came from the Learning About STEM Student Outcomes (LASSO) national database and included 13,857 students in 187 first-semester college physics courses. Findings showed differences in student gains across gender and race. Large gender differences existed for White and Hispanic students but not for Asian, Black, and Pacific Islander students. The models predicted larger gains for students in collaborative learning courses than in lecture-based courses. The Equity of Individuality operationalization indicated that collaborative instruction improved equity because all groups learned more with collaborative learning. The Equality of Learning operationalization indicated that collaborative instruction did not improve equity because differences between groups were unaffected. We discuss the implications of these mixed findings.
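The two operationalizations can point in opposite directions even when applied to the same model predictions. The toy R snippet below uses made-up predicted gains (not values from this study) to show how Equity of Individuality asks whether every group gains more under collaborative instruction, while Equality of Learning asks whether the gap between groups shrinks.

# Hypothetical model-predicted gains for two student groups
gains <- data.frame(
  group   = c("group_1", "group_2"),
  lecture = c(6, 10),
  collab  = c(12, 16)
)

# Equity of Individuality: does every group learn more with collaborative learning?
all(gains$collab > gains$lecture)                   # TRUE  -> "equity improved"

# Equality of Learning: did the between-group gap shrink?
abs(diff(gains$collab)) < abs(diff(gains$lecture))  # FALSE -> "equity unchanged"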
Physics education researchers (PER) commonly use complete-case analysis to address missing data. In complete-case analysis, researchers discard all data from any student who is missing any data. Despite its frequent use, no PER article we reviewed that used complete-case analysis provided evidence that the data met the assumption of missing completely at random (MCAR) necessary to ensure accurate results. Not meeting this assumption raises the possibility that prior studies have reported biased results with inflated gains that may obscure differences across courses. To test this possibility, we compared the accuracy of complete-case analysis and multiple imputation (MI) using simulated data. We simulated the data, based on prior studies, such that students who earned higher grades participated at higher rates, which made the data missing at random (MAR). PER studies seldom use MI, but MI uses all available data, has less stringent assumptions, and is more accurate and statistically powerful than complete-case analysis. Results indicated that complete-case analysis introduced more bias than MI, and this bias was large enough to obscure differences between student populations or between courses. We recommend that the PER community adopt MI for handling missing data to improve the accuracy of research studies.
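For readers unfamiliar with MI, a minimal sketch in R using the mice package is shown below. The data frame scores and its columns (posttest, pretest, grade) are hypothetical stand-ins, as is the analysis model; the point is the workflow of imputing, fitting, and pooling rather than any particular specification.

library(mice)

# Multiple imputation: create 20 completed datasets, fit the analysis model
# in each one, and pool the estimates with Rubin's rules.
imputed <- mice(scores, m = 20, seed = 42)
fits    <- with(imputed, lm(posttest ~ pretest + grade))
summary(pool(fits))

# Complete-case analysis, by contrast, silently drops every student with any
# missing value and is unbiased only when data are missing completely at random.
summary(lm(posttest ~ pretest + grade, data = na.omit(scores)))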
Measuring student learning is a complicated but necessary task for understanding the effectiveness of instruction and issues of equity in college science, technology, engineering, and mathematics (STEM) courses. Our investigation focused on the implications for claims about student learning that result from choosing between two commonly used metrics for analyzing shifts in concept inventories. The metrics are normalized gain (g), which is the most common method used in physics education research and other discipline-based education research fields, and Cohen's d, which is broadly used in education research and many other fields. Data for the analyses came from the Learning About STEM Student Outcomes (LASSO) database and included test scores from 4551 students on physics, chemistry, biology, and math concept inventories from 89 courses at 17 institutions across the United States. We compared the two metrics across all the concept inventories. The results showed that the two metrics lead to different inferences about student learning and equity because g is biased in favor of high-pretest populations. We discuss recommendations for the analysis and reporting of findings on student learning data.
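For reference, the two metrics are conventionally computed as in the sketch below, which assumes matched vectors of pretest and posttest percentage scores; the function names and the 0-100 scale are illustrative choices, not the paper's code.

# Class-level normalized gain: fraction of the available room for growth achieved
norm_gain <- function(pre, post, max_score = 100) {
  (mean(post) - mean(pre)) / (max_score - mean(pre))
}

# Cohen's d: pre-post difference in pooled standard deviation units
cohens_d <- function(pre, post) {
  (mean(post) - mean(pre)) / sqrt((var(pre) + var(post)) / 2)
}

Because the denominator of g shrinks as the pretest mean rises, the same raw improvement yields a larger g for a high-pretest population, which is the mechanism behind the bias noted above.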
[This paper is part of the Focused Collection on Quantitative Methods in PER: A Critical Examination.] Physics education researchers (PER) often analyze student data with single-level regression models (e.g., linear and logistic regression). However, education datasets can have hierarchical structures, such as students nested within courses, that single-level models fail to account for. The improper use of single-level models to analyze hierarchical datasets can lead to biased findings. Hierarchical models (also known as multilevel models) account for this nested structure in the data. In this publication, we outline the theoretical differences between how single-level and multilevel models handle hierarchical datasets. We then present an analysis of a dataset from 112 introductory physics courses using both multiple linear regression and hierarchical linear modeling to illustrate the potential impact of using an inappropriate analytical method on PER findings and implications. Researchers can leverage multi-institutional datasets to improve the field's understanding of how to support student success in physics. There is no post hoc fix, however, if researchers use inappropriate single-level models to analyze multilevel datasets. To continue developing reliable and generalizable knowledge, PER should use hierarchical models when analyzing hierarchical datasets. The Supplemental Material includes a sample dataset, R code for the model building and analysis presented in the paper, and an HTML output from the R code.
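As a minimal illustration of the contrast (the paper's Supplemental Material contains the actual code), the sketch below assumes a hypothetical data frame d with posttest, pretest, pedagogy, and course columns.

library(lme4)

# Single-level model: treats every student as independent and ignores courses
flat_model   <- lm(posttest ~ pretest + pedagogy, data = d)

# Hierarchical model: a random intercept lets each course have its own baseline
nested_model <- lmer(posttest ~ pretest + pedagogy + (1 | course), data = d)

summary(nested_model)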