In a previous study, DeLeeuw and Mayer (2008) found support for the triarchic model of cognitive load (Sweller, Van Merriënboer, & Paas, 1998, 2019) by showing that three different metrics could be used to independently measure the three hypothesized types of cognitive load: intrinsic, extraneous, and germane. However, two of the three metrics they used were intrusive, because learning had to be paused momentarily to complete them. The current study extends the design of DeLeeuw and Mayer (2008) by investigating whether learners' eye movement behavior can be used to measure the three proposed types of cognitive load without interrupting learning. During a 1-hr experiment, we presented a multimedia lesson explaining the mechanism of electric motors to participants with low prior knowledge of the topic. First, we replicated the main results of DeLeeuw and Mayer (2008), providing further support for the triarchic structure of cognitive load. Second, we identified eye movement measures that differentiated the three types of cognitive load. These findings were independent of participants' working memory capacity. Together, these results provide further evidence for the triarchic nature of cognitive load (Sweller et al., 1998, 2019) and are a first step toward online measures of cognitive load that could potentially be implemented in computer-assisted learning technologies.
Cognitive load theory (CLT) provides guiding principles for the design of learning materials. CLT distinguishes three kinds of cognitive load: intrinsic, extraneous, and germane. Intrinsic load is related to the learning goal; extraneous load consumes cognitive resources without contributing to learning; germane load can foster learning. Objective methods, such as eye movement measures and EEG, have been used to measure total cognitive load, but few studies, if any, have measured the three kinds of load separately and continuously with physiological methods. In the current study, we show how several eye-tracking-based parameters relate to the three kinds of load by manipulating each type of load independently. All participants had low prior knowledge of the learning material, and working memory capacity was measured with an operation span task.
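As a purely illustrative sketch (the abstract does not specify which eye-tracking parameters were analyzed or how they were computed), the Python snippet below shows how per-trial fixation data might be aggregated into common candidate indicators of cognitive load, such as fixation count, mean fixation duration, and mean pupil diameter. The data structure, field names, and the helper summarize_fixations are assumptions for illustration, not the study's actual pipeline.

# Hypothetical aggregation of per-trial fixation records into simple
# eye-tracking parameters often examined as cognitive-load indicators.
from dataclasses import dataclass
from typing import List

@dataclass
class Fixation:
    duration_ms: float        # how long gaze remained within this fixation
    pupil_diameter_mm: float  # mean pupil diameter during the fixation

def summarize_fixations(fixations: List[Fixation]) -> dict:
    """Aggregate a trial's fixations into candidate load indicators."""
    if not fixations:
        return {"fixation_count": 0, "mean_fixation_ms": 0.0, "mean_pupil_mm": 0.0}
    n = len(fixations)
    return {
        "fixation_count": n,
        "mean_fixation_ms": sum(f.duration_ms for f in fixations) / n,
        "mean_pupil_mm": sum(f.pupil_diameter_mm for f in fixations) / n,
    }

# Example with made-up fixations from a single trial.
trial = [Fixation(220, 3.1), Fixation(310, 3.4), Fixation(185, 3.2)]
print(summarize_fixations(trial))

In practice, such summaries would be computed per lesson segment and then related to the experimentally manipulated levels of intrinsic, extraneous, and germane load.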
Although retrieval practice has been shown to foster deeper learning and better long-term retention in domains such as psychology, it is rarely studied in the context of physics learning, where students must solve more complex problems. Even less attention has been given to comparing retrieval-based learning with other active-learning methods used in physics classrooms. In this study, we compared the effects of retrieval-based learning and peer-instruction-based restudying on physics problem solving and transfer. In both conditions (retrieval and peer instruction), participants first watched video lectures explaining the definition of speed and energy conservation. In the training session that immediately followed, the retrieval condition was asked to recognize, recall, and apply the relevant physics concepts to solve problems, while the peer instruction condition discussed the two video lectures in groups of three or four. After training, an immediate and a delayed (one-week) final test were administered. Both tests contained, for each topic, an initial task isomorphic to the training materials, near-transfer tasks, and far-transfer tasks. Subjective judgments of learning (JOLs) were collected immediately after the video lectures, after the training, and right before the delayed final test. We found a retention advantage of retrieval practice for the initial tasks, but not for the near- and far-transfer tasks. We also found an advantage of retrieval practice on far transfer in the immediate final test. Peer instruction training inflated participants' JOLs relative to retrieval practice. In both conditions, JOLs given after training were significantly lower than those given after the video lectures but did not differ from those given before the delayed final test.
Cognitive load theory (CLT) posits the classic view that cognitive load (CL) has three components: intrinsic, extraneous, and germane. Prior research has shown that subjective ratings are valid measures of different CL subtypes, but how the validity of these ratings depends on learner characteristics has received little study. In this research, we explored the extent to which the validity of a specific set of subjective measures depends on learners' prior knowledge. Specifically, we developed an eight-item survey to measure the three aforementioned subtypes of CL perceived by participants in a testing environment. In the first experiment (N = 45), participants categorized the eight items into groups based on thematic similarity. Most participants sorted the items in a way consistent with the threefold construct of CLT. Interviews with a subgroup (N = 13) of participants provided verbal evidence that their understanding of the items was consistent with the classic view of CLT. In the second experiment (N = 139), participants completed the survey twice after taking a conceptual test in a pre/post setting. A principal component analysis (PCA) revealed a two-component structure for the survey when participants' content knowledge was initially lower, but a three-component structure once their content knowledge had improved. These results suggest that low-prior-knowledge participants failed to differentiate the items targeting intrinsic load from those measuring extraneous load. In the third experiment (N = 40), participants completed the CL survey after taking a test consisting of problems that imposed different levels of intrinsic and extraneous load. The results reveal that participants' ratings on the CL survey were consistent with how each CL subtype was manipulated. Thus, the CL survey is reasonably effective at measuring different types of CL. We suggest that instructors use this instrument after learners have established a certain level of relevant knowledge.
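As a hedged, illustrative sketch only (the abstract does not report the analysis software or settings), the Python snippet below shows how a PCA might be run on responses to an eight-item survey to inspect whether two or three components dominate. The simulated ratings, sample size, and the helper name survey_pca are invented for illustration.

# Hypothetical PCA on standardized Likert ratings for an 8-item survey.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def survey_pca(responses, n_components=3):
    """Standardize item ratings and fit a PCA to inspect the component structure."""
    z = StandardScaler().fit_transform(responses)
    pca = PCA(n_components=n_components)
    pca.fit(z)
    return pca

# Simulated Likert ratings (1-5) from 139 participants on 8 survey items.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(139, 8)).astype(float)
pca = survey_pca(ratings)
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
print("Item loadings (components x items):")
print(np.round(pca.components_, 2))

Inspecting the explained variance and the item loadings is one common way to judge whether intrinsic- and extraneous-load items separate into distinct components for a given group of respondents.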
Background: Previous work has identified benefits of learning with videogames and of learning from simulations. However, recent meta-analytic work has also identified that little research directly compares the two. Objectives: This study examines two learning technologies and their corresponding pedagogical approaches and compares them for learning the Science, Technology, Engineering and Mathematics topic of electric charges. Methods: Participants were randomly assigned to either an intervention using a computer simulation for inquiry-based learning or a computer videogame for game-based learning. Their learning gains, self-reported emotional states, and experienced cognitive load were recorded. Results: Both learning environments improved conceptual learning, and there were no statistically significant differences between the two conditions. Participants did perceive the game-based environment to be both more engaging and more frustrating. We also found that cognitive load did not predict learning; however, different types of cognitive load correlated with different emotions. Overall, participants in both conditions were engaged and perceived that they understood the topic, yet they also experienced both confusion and task-unrelated thoughts. Takeaways: When learning with simulations and videogames, educators need to align intended learning outcomes with the pedagogical approaches enabled by the technology. In addition, a balance between applying principles of multimedia learning to reduce or prevent extraneous processing and providing scaffolding to reduce negative effects of learning with technology needs to be considered.