2021
DOI: 10.1177/25152459211027575

ManyClasses 1: Assessing the Generalizable Effect of Immediate Feedback Versus Delayed Feedback Across Many College Classes

Abstract: Psychology researchers have long attempted to identify educational practices that improve student learning. However, experimental research on these practices is often conducted in laboratory contexts or in a single course, which threatens the external validity of the results. In this article, we establish an experimental paradigm for evaluating the benefits of recommended practices across a variety of authentic educational contexts—a model we call ManyClasses. The core feature is that researchers examine the s…


Cited by 49 publications (39 citation statements)
References 42 publications
“…With that said, any generalizations from this small literature are challenging for many reasons: These few studies are so methodologically diverse that effect sizes might vary systematically with aspects of the study design, such as subject sample, video topic and length, number of thought probes, thought-probe format, number of interpolated tests and their format, interpolated-test difficulty, allowing or not allowing notetaking, posttest retention interval and difficulty, and extent of subjects’ prior knowledge on the lecture topic. Future research on the effect of interpolated testing on TUTs should thus take designing-for-variation and meta-analytic approaches to estimating effect size and its robustness (e.g., Baribault et al, 2018 ; Brunswik, 1955 ; Fyfe et al, 2021 ; Greenwald et al, 1986 ; Harder, 2020 ; Landy et al, 2020 ).…”
Section: Discussion
confidence: 99%
“…For instance, questions about novel approaches to feedback (section 3.1.3) could naturally be addressed with randomised controlled trials comparing student outcomes under different feedback conditions. Moreover, some questions could be suitable for a multi-site approach, similar to the recent ManyClasses study on the efficacy of delayed feedback (Fyfe et al, 2021). For instance, comparing different approaches to timing of assessments (Question 30) across many contexts would enable a better understanding of possible moderators of their effectiveness.…”
Section: Discussion
confidence: 99%
“…By uncovering the predictors, consequences, and potential underlying causes for procrastination, however, prior research has laid the necessary foundation for developing and conducting rigorous intervention studies. Like Fyfe et al's (2021) ManyClasses project, it will be demanding to design and conduct intervention research in line with the guiding principles of TORCHeS, but its ambitious goals are within reach through open, collaborative efforts to "pass the torch" from researchers to instructors and students.…”
Section: Discussion
confidence: 99%