Mastery‐based assignments typically provide students with multiple opportunities to improve their performance, but getting students to take advantage of these opportunities is difficult. We report on the implementation of a two‐part series of nudges designed to improve students' engagement with and performance on mastery‐based assignments in high‐enrollment, introductory undergraduate courses. Students received one or two email‐based nudges for each assignment, providing both decision information and decision assistance. The nudges were delivered via email by a digital coaching system that provides tailored support to students. We evaluated the effectiveness of viewing versus not viewing these personalized nudges across introductory courses in mathematics, computer science and engineering over five terms at our large, research‐intensive university. We used multilevel modelling with a sample of 30,693 assignment scores nested within 79 assignments and 5349 undergraduate students to explore the relationship between students' viewing the nudges and their scores, while controlling for performance in other courses and students' baseline proclivities to read email from the coaching system. We find that viewing versus not viewing nudges is significantly related to improved outcomes. Viewing the first of the two‐part nudge series is associated with a 3% increase in scores on average. Importantly, we found a significant interaction with prior academic performance such that students with the lowest average academic performance across all courses are predicted to benefit the most (up to 9%) from nudging. Positive impacts are observed in most courses where nudges were implemented, and the nudges are found to be most impactful for later versus earlier assignments within a term. We discuss the implications of these results for supporting students' decision‐making within mastery‐based assignment schemes and suggest future research on tailored nudges to support student success.

Practitioner notes

What is already known about this topic
Nudges are known to be broadly effective, but heterogeneity is observed, especially in education, where the research base is relatively limited.
Mastery‐based grading is an alternative approach to grading that, among other features, emphasizes learning through an explicit feedback cycle of struggle and revision without penalty.

What this paper adds
This study assesses the effectiveness of personalized nudges in the context of mastery‐based grading in undergraduate STEM courses.
We examined the efficacy of viewing versus not viewing nudges in the real learning settings of three STEM courses enrolling more than 5000 students.
We found the nudges to be effective overall, correlated with improvements of approximately 3% in students' assignment scores and most impactful for students with the lowest average academic performance.

Implications for practice and/or policy
Nudging by email or text message has a low overhead cost and yet may provide materially significant gains in students' assessment scores.
Nudges can be effectively targeted to specific subsets of students and tailored based on students' prior and concurrent learning analytics data.
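A minimal sketch of the kind of multilevel model the abstract above describes, with crossed random effects for students and assignments; the variable names and the exact specification are illustrative assumptions, not the authors' notation:

\[
\text{Score}_{sa} = \beta_0 + \beta_1\,\text{Viewed}_{sa} + \beta_2\,\text{GPA}_s + \beta_3\,(\text{Viewed}_{sa} \times \text{GPA}_s) + \beta_4\,\text{EmailPropensity}_s + u_s + v_a + \varepsilon_{sa}
\]

Here $u_s$ and $v_a$ are random intercepts for student $s$ and assignment $a$, $\text{GPA}_s$ stands in for performance in other courses, and $\text{EmailPropensity}_s$ for the baseline tendency to read coaching emails. The reported heterogeneity, with the lowest performers benefiting most from viewing a nudge, would appear as a negative $\beta_3$.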
Randomized controlled trials (RCTs) admit unconfounded design-based inference – randomization largely justifies the assumptions underlying statistical effect estimates – but often have limited sample sizes. However, researchers may have access to big observational data on covariates and outcomes from RCT nonparticipants. For example, data from A/B tests conducted within an educational technology platform exist alongside historical observational data drawn from student logs. We outline a design-based approach to using such observational data for variance reduction in RCTs. First, we use the observational data to train a machine learning algorithm predicting potential outcomes using covariates and then use that algorithm to generate predictions for RCT participants. Then, we use those predictions, perhaps alongside other covariates, to adjust causal effect estimates with a flexible, design-based covariate-adjustment routine. In this way, there is no danger of biases from the observational data leaking into the experimental estimates, which are guaranteed to be exactly unbiased regardless of whether the machine learning models are “correct” in any sense or whether the observational samples closely resemble RCT samples. We demonstrate the method in analyzing 33 randomized A/B tests and show that it decreases standard errors relative to other estimators, sometimes substantially.
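A minimal, self-contained sketch of the two-step procedure this abstract outlines. The synthetic data, the scikit-learn gradient-boosting predictor, and the simple difference-in-mean-residuals ("rebar"-style) adjustment are all illustrative assumptions standing in for the paper's flexible design-based covariate-adjustment routine:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# --- Synthetic stand-ins for the two data sources -------------------------
# Large observational sample (e.g., historical student logs): covariates X
# and outcomes y, with no experiment involved.
n_obs, n_rct, p = 20_000, 500, 5
beta = np.array([1.0, 0.5, -0.5, 0.2, 0.0])
X_obs = rng.normal(size=(n_obs, p))
y_obs = X_obs @ beta + rng.normal(size=n_obs)

# Small RCT sample: covariates, a random treatment assignment z, and a
# modest constant treatment effect tau.
tau = 0.3
X_rct = rng.normal(size=(n_rct, p))
z = rng.binomial(1, 0.5, size=n_rct)
y_rct = X_rct @ beta + tau * z + rng.normal(size=n_rct)

# --- Step 1: train a predictor of the outcome on observational data only --
model = GradientBoostingRegressor().fit(X_obs, y_obs)

# --- Step 2: predict outcomes for RCT participants and adjust -------------
# y_hat depends only on pre-treatment covariates, so subtracting it cannot
# bias the design-based estimate; a poor observational model costs
# precision, not unbiasedness.
y_hat = model.predict(X_rct)
resid = y_rct - y_hat
tau_adj = resid[z == 1].mean() - resid[z == 0].mean()

# Unadjusted difference in means, for comparison.
tau_raw = y_rct[z == 1].mean() - y_rct[z == 0].mean()

print(f"unadjusted estimate:       {tau_raw:.3f}")
print(f"residual-adjusted estimate: {tau_adj:.3f}  (true effect: {tau})")

The key property is that the prediction model is trained entirely outside the experiment, so randomization alone guarantees unbiasedness; the machine learning step serves purely as a variance-reduction device.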
To meet the goal of understanding students' complex learning processes and maximizing their learning outcomes, the field of learning analytics delves into the myriad data captured as students use computer-assisted learning platforms. Although many platforms associated with learning analytics focus on students' performance, performance on learning-related tasks is a limited measure of learning itself. In this chapter, the authors review research that leverages data collected in programs to understand specific learning processes and contribute to a robust vision of knowledge acquisition. In particular, they review work related to two important aspects of the learning process, students' problem-solving strategies and behavioral engagement, then provide an example of an effective math program that focuses on the learning process over correct or incorrect responses. Finally, they discuss ways in which the findings from this research can be incorporated into the development and improvement of computer-assisted learning platforms, with the goal of maximizing students' learning outcomes.