Assessing the Impact of Testing Aids on Post-Secondary Student Performance: A Meta-Analytic Investigation
2013
DOI: 10.1007/s10648-013-9227-1

Cited by 12 publications (10 citation statements)
References 37 publications
“…Student-prepared aids (e.g. own notes or "cheat sheets") had a more positive effect than other material (Larwin et al 2013). The results of this study support these findings: students perceived a positive effect on performance, and their own notes added to texts were perceived as the most beneficial testing aid at their disposal.…”
Section: Effect On Performance (supporting)
confidence: 73%
“…The sizes of effects for this learning method are discernibly larger than those for the other course-based learning methods. Notwithstanding these differences, problem-based learning (including guided design), student self-directed learning, critical thinking instruction, and different kinds of note-taking practices [95][96][97] were all positively related to student performance. In contrast, visually-based learning and explanation-based learning were associated with smaller sizes of effect for the influence of these learning methods on student performance.…”
Section: Course-based Learning Methods (mentioning)
confidence: 99%
“…Comparing the effectiveness of interventions across these areas on the basis of relative average effect size is not a valid form of argumentation. Schneider and Preckel (2017) rank (among many other meta-analyses) averaged effect sizes from a set of studies for intelligent learning systems (d = 0.35; Steenbergen-Hu & Cooper, 2014) with that for testing aids (d = 0.34; Larwin et al., 2013). To argue, then, that these types of intervention are about equally effective because the effect sizes are close requires that the other elements which impact on effect size (test, comparison treatment and sample) are distributed in the same way in each set.…”
Section: A Simpson (mentioning)
confidence: 99%
“…They are not: take one component, the distribution of comparison treatments. Steenbergen-Hu and Cooper (2014) include studies with a wide variety of comparison activity (as noted above, from comparing to human tutoring, through to not teaching the topic at all), while Larwin et al (2013) include only studies where the comparison treatment involves no testing aids at all. These cannot be described as studies with the same distribution of comparison treatments.…”
Section: A Simpson (mentioning)
confidence: 99%
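
The two excerpts above rest on a point of statistical reasoning: near-identical pooled effect sizes do not make interventions comparable when the underlying studies use different comparison treatments. The minimal sketch below illustrates that point with invented numbers; the study lists, effect sizes, and comparison labels are hypothetical and are not data from the cited meta-analyses.

# Hypothetical illustration: two sets of studies with nearly identical pooled
# mean effect sizes, but very different distributions of comparison treatments,
# so ranking them by pooled d alone is misleading.

from statistics import mean

# Each study: (effect size d, comparison treatment used in that study)
intelligent_tutoring = [
    (0.10, "vs. human tutoring"),   # strong comparison -> small d
    (0.20, "vs. human tutoring"),
    (0.55, "vs. no instruction"),   # weak comparison -> large d
    (0.55, "vs. no instruction"),
]

testing_aids = [
    (0.30, "vs. no testing aid"),   # every study shares one comparison type
    (0.35, "vs. no testing aid"),
    (0.38, "vs. no testing aid"),
]

def pooled(studies):
    """Simple unweighted mean of the study effect sizes."""
    return mean(d for d, _ in studies)

print(f"intelligent tutoring pooled d = {pooled(intelligent_tutoring):.2f}")
print(f"testing aids pooled d         = {pooled(testing_aids):.2f}")

# Both pooled means land near d = 0.34-0.35, yet within the tutoring set the
# effect against a matched comparison (human tutoring) is much smaller than the
# effect against no instruction. Equal averages therefore do not imply that the
# two interventions are equally effective; the comparison treatments differ.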