The identification of students at risk for academic failure in undergraduate chemistry courses has been heavily addressed in the literature. Arguably one of the strongest and most well-supported predictors of undergraduate success in chemistry is the mathematics portion of the SAT (SAT-M), a college-entrance, standardized test administered by the College Board. While students scoring in the bottom quartile of the SAT-M (herein referred to as at-risk) perform significantly worse on first-semester chemistry assessments, little is known of the topics on which these students differentially struggle. The purpose of this study is to provide insight as to which first-semester chemistry topics present an incommensurate challenge to at-risk students. Students were identified as either at-risk or not at-risk via SAT-M scores. Students' assessment responses were collected across four semesters of first-semester chemistry courses at a large, public university (N = 5636). At-risk students struggled consistently across all topics but disproportionately with the mole concept and stoichiometry. Analyzing the trend in topics suggests that the struggles of at-risk students are not entirely attributable to topics that rely heavily on algorithms or algebraic math. Moreover, at-risk students who performed well on the mole concept and stoichiometry went on to perform similarly to their not-at-risk peers. The results support an instructional emphasis on these topics, with the reviewed literature offering promising, practical options to better serve at-risk students and broaden representation in the sciences.
What we emphasize and reward on assessments signals to students what matters to us. Accordingly, a great deal of scholarship in chemistry education has focused on defining the sorts of performances worth assessing. Here, we unpack observations we made while analyzing what "success" meant across three large-enrollment general chemistry environments. We observed that students enrolled in two of the three environments could succeed without ever connecting atomic/molecular behavior to how and why phenomena happen. These environments, we argue, were not really "chemistry classes" but rather opportunities for students to gain proficiency with a jumble of skills and factual recall. However, one of the three environments dedicated 14–57% of points on exams to items with the potential to engage students in using core ideas (e.g., energy, bonding interactions) to predict, explain, or model observable events. This course, we argue, is more aligned with the intellectual work of the chemical sciences than the other two. If our courses assess solely (or largely) decontextualized skills and factual recall, we risk (1) gating access to STEM careers on the basis of facility with skills most students will never use outside the classroom and (2) never allowing students to experience the tremendous predictive and explanatory power of atomic/molecular models. We implore the community to reflect on whether "what counts" in the courses we teach aligns with the performances we actually value.
Many conversations surrounding improvement of large-enrollment college science, technology, engineering, and mathematics (STEM) courses focus primarily (or solely) on changing instructional practices. By reducing dynamic, complex learning environments to collections of teaching methods, we neglect other meaningful parts of a course ecosystem (e.g., curriculum, assessments). Here, we advocate extending STEM education reform conversations beyond "active versus passive learning." We argue communities of researchers and instructors would be better served if what we teach and assess was discussed alongside how we teach. To enable nuanced conversations about the characteristics of learning environments that support students in explaining phenomena, we defined a model of college STEM learning environments which attends to the intellectual work emphasized and rewarded on exams (i.e., assessment emphasis), what is taught in whole-class meetings (i.e., instructional emphasis), and how those meetings are enacted (i.e., instructional practices). We subsequently characterized three distinct chemistry courses and qualitatively examined the characteristics of chemistry learning environments that effectively supported students in explaining why a beaker of water warms as a white solid dissolves. Furthermore, we quantitatively investigated the extent to which measures of incoming preparation explained variance in students' explanations relative to enrollment in each learning environment. Our findings demonstrate that learning environments that effectively supported learners in explaining dissolution emphasized how and why salts dissolve in class and on assessments. Changing teaching methods in an otherwise traditionally structured course (i.e., a course organized by topics that primarily assesses math and recall) did not appear to impact the sophistication of students' explanations.
Additionally, we observed that learning environment enrollment explained substantially more of the variance in students' explanations than measures of precollege math preparation did. This finding suggests that emphasizing and rewarding the construction of causal accounts for phenomena in class and on assessments may support more equitable achievement.
The affordances of a structured, timed, and proctored paper exam are not as readily applicable in a digital medium. Accordingly, the rapid shift from in-person to online enactments may have forced instructors to reconsider their assessment practices and priorities. As assessments convey strong implicit messages about "what counts" in a given learning environment, altering what is assessed may have a profound impact on what students view as important in a course. Our four-instructor team sought to examine whether we were able to maintain an emphasis on assessing how and why chemical phenomena occur online while minimizing negative impacts on students, teaching assistants, and ourselves. To support claims regarding the degree to which online assessments emphasized sensemaking relative to past exams, we characterized all summative assessments given in organic chemistry II enactments from 2016 to the present using the three-dimensional learning assessment protocol. To examine the impact of enrolling in a rapidly assembled online organic course on student outcomes, we examined the distribution of students who performed above, at, or below the final exam score predicted by their midterm performance and compared this distribution with historic norms. Results suggest that we were able to maintain an emphasis on student sensemaking as our course moved online (∼50% of points on exams administered remotely were dedicated to 3D performances). Additionally, the distribution of students enrolled this past spring who scored above, at, or below the final exam score predicted by their midterm performance was in line with historic norms. Taken in aggregate, our analyses suggest that students enrolled in organic chemistry maintained their ability to make sense of chemical phenomena after the pivot to online instruction.
Consistent emphasis on assessing 3D learning online was achieved without adding appreciably to the burden on instructors or teaching assistants due to our assessment writing practices, streamlined approach to online grading, and pre-existing course resources. Instructional implications for assessment design, enacting team grading, and tracking student trajectories are provided in addition to a suite of assessment items with the potential to engage students in sensemaking.
Students who score within the bottom quartile on cognitive measures of math aptitude have been identified as at-risk for low performance in chemistry courses, with less attention paid to why such differential performance persists. At-risk students struggle most differentially on assessment items related to the mole concept and stoichiometry. Exploring the nature of this differential performance became of great interest because the assessment of these topics rarely progresses beyond multiplication or division, and at-risk students who achieved proficiency with the mole concept and stoichiometry showed no noticeable gaps in academic chemistry performance when compared to students scoring in the top three quartiles of math aptitude. Thus, students in first-semester general chemistry were surveyed to describe their solution processes for assessment items involving the mole concept and stoichiometry. Three hundred and forty-eight students responded to all survey prompts, with 101 identified as at-risk. Findings suggest that while all students struggled to conceptualize the algorithms by which they executed solution processes, not-at-risk chemistry students were more likely to arrive at correct answers via chemically implausible solution pathways. Rather than suggesting the removal of algorithmic, multiple-choice assessment of these topics, the implications include practical suggestions and opportunities for further research toward improving the equitability of measures used to assess proficiency with stoichiometry.