Interactive computer simulations are commonly used as a pedagogical tool to support students’ statistical reasoning. However, whether and how these simulations enable the intended effects remain unclear. Here, we review the literature on students’ reasoning about statistical sampling through simulations. The findings suggest tentative benefits of the simulations in terms of building statistical habits of mind. However, challenges persist when more specific concepts and skills are investigated. Students have difficulty forming an aggregate view of data, interpreting sampling distributions, showing a process-based understanding of the law of large numbers, making statistical inferences, and reasoning independently of context. From a grounded cognition perspective, we discuss the roles of repeated practice, specific design elements, and guidance of visual routines in supporting students’ meaning-making from simulations. Finally, we propose testable instructional strategies for using simulations in statistics education. Overall, the paper illuminates the cognitive processes underlying reasoning about statistical sampling and offers a set of concrete pedagogical considerations that future empirical studies can test.
Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets far exceeds the available resources. Given limited resources, replication target selection should be well-justified, systematic, and transparently communicated. At present, guidance on what to consider when selecting a replication target is limited to theoretical arguments, self-reported justifications, and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who had previously selected a replication target about their considerations. Third, we incorporated the results into the preliminary list and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus regarding what to consider when selecting a replication target.
Previous research indicates that failure feedback leads people to tune out from the task, which is detrimental to their learning (Eskreis-Winkler & Fishbach, 2019; Keith et al., 2022). The current work aims to identify ways to optimize learning from failure feedback. We conducted five preregistered experiments (N = 1,061) to replicate the findings from Eskreis-Winkler and Fishbach (2019) and test boundary conditions of the tune-out effects of failure feedback. The detriments of failure feedback replicated in Studies 1a, 1b, and 1c, which altered the focus of the feedback message to be self-focused (e.g., your answer) or task-focused (e.g., the answer). The detrimental effects also replicated in Study 2a, particularly when participants expected the task to be easy. These results generally underscored the robustness of the original study's findings. However, Study 2b established boundary conditions: when the task was rule-based and brief instructions on the rule were provided after feedback, there was no evidence for a detrimental effect of failure, and failure feedback even outperformed success feedback when learning new material. We conclude that tune-out reactions to failure feedback are diminished, and may even be reversed, when learning is assessed on subsequent tasks.
Reasoning about sampling distributions is notably challenging for humans. It has been argued that understanding of the complexity involved in sampling processes can be supported by interactive computer simulations that allow learners to experiment with variables. In the current study, we compared the effects of learning sampling distributions through a simulation-based learning (SBL) versus a direct instruction (DI) method. While both conditions resulted in similar improvement in rule learning and graph identification, neither condition improved more distant transfer of concepts. Furthermore, the simulation-based learning method resulted in unintuitive and surprising misconceptions about how sample size affects the estimation of parameters, whereas the direct instruction group more often made correct intuitive judgments. We argue that perceptually similar representations of different sampling processes in the SBL condition overrode learners’ intuitions and led them to make conceptual confusions that they would not typically make. We conclude that conceptually important differences should be grounded in easily interpretable and distinguishable perceptual representations in simulation-based learning methods.
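The sample-size effect at issue in this abstract can be made concrete with a minimal simulation. The sketch below (not code from the study; all names and parameter values are illustrative assumptions) repeatedly draws samples from a fixed population and shows that sample means from larger samples cluster more tightly around the population mean, i.e., the sampling distribution narrows as sample size grows.

```python
import random
import statistics

# Illustrative sketch: repeated sampling from a hypothetical normal
# population (mean 100, SD 15) to compare sampling-distribution spread
# across sample sizes. Values and names are assumptions, not the study's.
random.seed(42)
POP_MEAN, POP_SD = 100, 15

def sampling_distribution_sd(sample_size, n_samples=2000):
    """SD of sample means across many repeated samples of a given size."""
    means = [
        statistics.mean(random.gauss(POP_MEAN, POP_SD)
                        for _ in range(sample_size))
        for _ in range(n_samples)
    ]
    return statistics.stdev(means)

# Larger samples yield less variable estimates of the population mean.
sd_small = sampling_distribution_sd(10)
sd_large = sampling_distribution_sd(100)
```

Under the law of large numbers, `sd_large` should be roughly a third of `sd_small` (proportional to 1/sqrt(n)), which is the intuition the SBL condition reportedly failed to convey.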
Computer-based interactive simulations that model the processes of sampling from a population are increasingly being used in data literacy education. However, these simulations are often summarized by graphs designed from the point of view of experts, which makes them difficult for novices to grasp. In our ongoing design-based research project, we build and test alternatives to the standard sampling simulations. Based on a grounded and embodied learning perspective, the core of our design position is that difficult and abstract sampling concepts and processes should be grounded in familiar objects that are intuitive to interpret, incorporate concrete animations that spontaneously activate learners’ gestures, and be accompanied by verbal instruction for deeply integrated learning. Here, we report the results from the initial two phases of our project. In the first iteration, through an online experiment (N=126), we show that superficial perceptual elements in a standard simulation can lead to misinterpretation of concepts. In the second iteration, we pilot test a new grounded simulation with think-aloud interviews (N=9). We reflect on the complementary affordances of visual models, verbal instruction, and learners’ gestures in fostering integrated and deep understanding of concepts.
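The kind of sampling-from-a-population simulation described here can be sketched in a few lines. The example below is a hypothetical, text-only stand-in (the project's actual simulations are visual and interactive; population makeup and all names are assumptions): it draws repeated samples of familiar objects and records the sample proportion of one category, the quantity such simulations typically visualize.

```python
import random
from collections import Counter

# Hypothetical sketch of a sampling simulation: a population of
# colored balls with a known true proportion, sampled repeatedly.
random.seed(0)
population = ["blue"] * 60 + ["red"] * 40  # true proportion of blue = 0.6

def sample_proportions(sample_size, n_samples=1000):
    """Proportion of 'blue' in each of many random samples."""
    proportions = []
    for _ in range(n_samples):
        sample = random.choices(population, k=sample_size)
        proportions.append(Counter(sample)["blue"] / sample_size)
    return proportions

props = sample_proportions(sample_size=25)
mean_prop = sum(props) / len(props)  # clusters near the true 0.6
```

A grounded visual version would animate each draw and accumulate the proportions into a dot plot, rather than presenting learners with a finished expert-style graph.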