Parieto-occipital electroencephalogram (EEG) alpha power and subjective reports of attentional state are both associated with visual attention and awareness, but little is currently known about the relationship between these two measures. Here, we bring together these two literatures to explore the relationship between alpha activity and participants’ introspective judgments of attentional state as each varied from trial to trial during performance of a visual detection task. We collected participants’ subjective ratings of perceptual decision confidence and attentional state on continuous scales on each trial of a rapid serial visual presentation detection task while recording EEG. We found that confidence and attentional state ratings were largely uncorrelated with each other, but both were strongly associated with task performance and post-stimulus decision-related EEG activity. Crucially, attentional state ratings were also negatively associated with prestimulus EEG alpha power. Attesting to the robustness of this association, we were able to classify attentional state ratings via prestimulus alpha power on a single-trial basis. Moreover, when we repeated these analyses after smoothing the time series of attentional state ratings and alpha power with increasingly large sliding windows, both the correlations and classification performance improved considerably, with the peaks occurring at a sliding window size equivalent to approximately 7 min of trials. Our results therefore suggest that slow fluctuations in attentional state, on the order of minutes, are reflected in spontaneous alpha power. Since these subjective attentional state ratings were associated with objective measures of both behavior and neural activity, we suggest that they provide a simple and effective estimate of task engagement that could prove useful in operational settings that require human operators to maintain a sustained focus of visual attention.
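The single-trial classification and sliding-window analysis described above can be illustrated with a toy simulation. Everything here is invented for illustration (the shared slow drift, noise levels, window size, and median-split labeling are assumptions, not the authors' pipeline); the sketch only shows why smoothing two noisy series that share a slow component improves discriminability:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 600

# Hypothetical shared slow fluctuation (attentional state drifting over minutes)
slow = np.cumsum(rng.normal(0, 1, n_trials))
slow = (slow - slow.mean()) / slow.std()

# Alpha power rises as attention wanes (negative association), plus trial noise
alpha_power = -slow + rng.normal(0, 2.0, n_trials)
# Subjective rating tracks the same slow state, with its own noise
rating = slow + rng.normal(0, 2.0, n_trials)

def moving_average(x, w):
    """Centered moving average via convolution (output has same length)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def auc_score(scores, labels):
    """Rank-based AUC: probability a positive trial outranks a negative one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Median-split ratings into "high" vs "low" attentional state
labels_raw = (rating > np.median(rating)).astype(int)
auc_raw = auc_score(-alpha_power, labels_raw)

# Repeat after smoothing both series with a sliding window of w trials
w = 50  # arbitrary stand-in for "minutes' worth of trials"
alpha_s = moving_average(alpha_power, w)
rating_s = moving_average(rating, w)
labels_s = (rating_s > np.median(rating_s)).astype(int)
auc_smooth = auc_score(-alpha_s, labels_s)

print(f"single-trial AUC: {auc_raw:.2f}, smoothed AUC: {auc_smooth:.2f}")
```

With these toy parameters, smoothing averages away the trial-level noise while preserving the slow shared component, so classification from the smoothed series outperforms single-trial classification, mirroring the pattern the abstract reports.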
This article explores 2 important aspects of metacognition: (a) how students monitor their ongoing performance to detect and correct errors and (b) how students reflect on those errors to learn from them. Although many instructional theories have advocated providing students with immediate feedback on errors, some researchers have argued that immediate feedback eliminates the opportunity for students to practice monitoring for and learning from errors. Thus, they advocate delayed feedback. This article provides evidence that this line of reasoning is flawed and suggests that rather than focusing on the timing of feedback, instructional designers should focus on the "model of desired performance" with respect to which feedback is provided. Instead of delaying feedback, we suggest broadening our model of correct behavior or desired performance to include some kinds of incorrect, but reasonable behaviors. This article explores the effects of providing feedback on the basis of a so-called intelligent novice cognitive model. A system based on an intelligent novice model allows students to make certain reasonable errors, and provides guidance through the exercise of error detection and correction skills. There are two pedagogical motivations for feedback based on an intelligent novice model. First, jobs today not only require ready-made expertise for dealing with known problems, but also intelligence to address novel situations where nominal experts are thrown back to the state of a novice. Second, the opportunity to reason about the causes and consequences of errors may allow students to form a better model of the behavior of domain operators. Results show that students receiving intelligent novice feedback acquire a deeper conceptual understanding of domain principles and demonstrate better transfer and retention of skills over time.
Abstract. Personalised environments such as adaptive educational systems can be evaluated and compared using performance curves. Such summative studies are useful for determining whether new modifications enhance or degrade performance. Performance curves also have the potential to be utilised in formative studies that can shape adaptive model design at a much finer level of granularity. We describe the use of learning curves for evaluating personalised educational systems and outline some of the potential pitfalls and how they may be overcome. We then describe three studies in which we demonstrate how learning curves can be used to drive changes in the user model. First, we show how using learning curves for subsets of the domain model can yield insight into the appropriateness of the model's structure. In the second study we use this method to experiment with model granularity. Finally, we use learning curves to analyse a large volume of user data to explore the feasibility of using them as a reliable method for fine-tuning a system's model. The results of these experiments demonstrate the successful use of performance curves in formative studies of adaptive educational systems.
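Learning curves of the kind used in such formative studies are conventionally modeled as a power law, with error rate declining as a function of practice opportunities. The following is a minimal sketch of that standard fit, with fabricated data (the parameter values, noise level, and number of opportunities are all invented for illustration); the fit itself is just linear regression in log-log space:

```python
import numpy as np

# Hypothetical error rates across successive practice opportunities,
# generated from a power law error = a * n^(-b) with multiplicative noise
opportunities = np.arange(1, 11)
true_a, true_b = 0.6, 0.7
rng = np.random.default_rng(1)
error_rate = true_a * opportunities ** (-true_b) * np.exp(rng.normal(0, 0.05, 10))

# Power-law fit: log(error) = log(a) - b * log(n), linear in log-log space
slope, intercept = np.polyfit(np.log(opportunities), np.log(error_rate), 1)
a_hat, b_hat = np.exp(intercept), -slope
print(f"estimated a = {a_hat:.2f}, estimated b = {b_hat:.2f}")
```

In a formative analysis, a curve like this would be fitted separately to subsets of the domain model; a poor fit (or a flat slope) for a subset can flag knowledge components whose structure or granularity needs revising.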