EEG-based deep learning research has trended toward models designed to perform classification on any individual (cross-participant models). However, because EEG varies across participants due to non-stationarity and individual differences, data must be partitioned into training, validation, and test sets according to specific guidelines if cross-participant models are to avoid overestimating model accuracy. Despite this necessity, the majority of EEG-based cross-participant models have not adopted such guidelines. Furthermore, some data repositories may unwittingly contribute to the problem by providing pre-partitioned test and non-test datasets for purposes such as competition support. In this study, we demonstrate how improper dataset partitioning, and the resulting improper training, validation, and testing of a cross-participant model, leads to overestimated model accuracy. We demonstrate this both mathematically and empirically using five publicly available datasets. To build the cross-participant models for these datasets, we replicate published results and show that model accuracies drop significantly when proper EEG cross-participant modeling guidelines are followed. Our empirical results show that ignoring these guidelines can cause the error rates of cross-participant models to be underestimated by between 35% and 3900%. This misrepresentation of model performance on the general population potentially slows scientific progress toward truly high-performing classification models.
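The partitioning issue the abstract describes can be made concrete with a small sketch: splitting EEG epochs at random (record-wise) almost always places data from the same participant in both the training and test sets, whereas holding out whole participants keeps the sets disjoint. The participant counts, epoch counts, and held-out IDs below are hypothetical; this illustrates the general leakage principle, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, epochs_each = 10, 10
# Participant ID for each of the 100 EEG epochs.
groups = np.repeat(np.arange(n_participants), epochs_each)

# Record-wise (improper) split: shuffle epochs, ignoring participant IDs.
perm = rng.permutation(len(groups))
rec_train, rec_test = perm[:80], perm[80:]
# Participants whose epochs landed in BOTH sets -- this is the leakage.
leak = set(groups[rec_train]) & set(groups[rec_test])

# Participant-wise (proper) split: hold out whole participants.
held_out = [8, 9]
sub_test = np.flatnonzero(np.isin(groups, held_out))
sub_train = np.flatnonzero(~np.isin(groups, held_out))

# The record-wise split leaks most participants across sets; the
# participant-wise split leaks none.
assert len(leak) > 0
assert set(groups[sub_train]).isdisjoint(set(groups[sub_test]))
```

A model evaluated on the record-wise test set has seen each test participant's idiosyncratic EEG during training, which is what inflates the accuracy estimate relative to truly unseen individuals.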
Tasks that require sustained attention over a lengthy period of time, including air traffic control, watchkeeping, baggage inspection, and many others, have been a focal point of cognitive fatigue research for decades. Recent research into physiological markers of mental fatigue indicates that markers exist which extend across all individuals and all types of vigilance tasks. This suggests that it should be possible to build an EEG model that detects these markers, and the subsequent vigilance decrement, in any task (i.e., a task-generic model) and in any person (i.e., a cross-participant model). However, thus far, no task-generic EEG cross-participant model has been built or tested. In this research, we explored the creation and application of a task-generic EEG cross-participant model for detecting the vigilance decrement in an unseen task and unseen individuals. We investigated this capability with three models: a multi-layer perceptron neural network (MLPNN) that employed spectral features extracted from the five traditional EEG frequency bands, a temporal convolutional network (TCN), and a TCN autoencoder (TCN-AE); the two TCN models are time-domain based, i.e., they use raw EEG time-series voltage values. The MLPNN and TCN models both achieved accuracy greater than random chance (50%), with the MLPNN performing best: a 7-fold CV balanced accuracy of 64% (95% CI: 0.59, 0.69) and validation accuracies greater than random chance for 9 of the 14 participants. This finding demonstrates that it is possible to classify a vigilance decrement from EEG, even EEG from an unseen individual performing an unseen task.
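The headline metric here, balanced accuracy, is the mean of per-class recalls, so 50% remains the chance level for binary classification even when the classes are imbalanced. A minimal sketch of the computation (the labels below are made up for illustration, not data from the study):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls: each class contributes equally,
    # regardless of how many samples it has.
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1])
# Class 0 recall = 2/4 = 0.5; class 1 recall = 2/2 = 1.0.
print(balanced_accuracy(y_true, y_pred))  # → 0.75
```

Note that plain accuracy on the same predictions is 4/6 ≈ 0.67; balanced accuracy weights the minority class up, which is why it is the appropriate chance-level comparison when vigilance-decrement epochs are unevenly distributed.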
Intelligent agents provide simulations with a means to add lifelike behavior in place of manned entities. When implemented, typically a single intelligent agent model (or approach to defining decision making), such as rule-based systems, behavior trees, or neural networks, is selected. This choice restricts the behaviors agents can manifest and can require significant testing of edge cases. This paper presents the incorporation and application of the Unified Behavior Framework (UBF) into the Advanced Framework for Simulation, Integration, and Modeling environment. The UBF provides the flexibility to implement any behavior-based system, allowing the developer to rapidly assemble a decision-making agent that leverages multiple paradigms or approaches. The UBF achieves this through several key software engineering principles: modular design, scalability through reduced code complexity, simplified development and testing through abstraction, and the promotion of code reuse. The use of the UBF to define intelligent agents within a 2v2 Integrated Air Defense System is demonstrated.
Cognitive biases are known to affect human decision making and can have disastrous effects in the fast-paced environments of military operators. Traditionally, post-hoc behavioral analysis is used to measure the level of bias in a decision. However, such techniques can be confounded by subjective factors and cannot be applied in real time. This pilot study collects behavior patterns and physiological signals present during biased and unbiased decision-making. Supervised machine learning models are trained to find the relationship between electroencephalography (EEG) signals and behavioral evidence of cognitive bias. Once trained, the models should infer the presence of confirmation bias during decision-making using only EEG, without the interruptions or the subjective nature of traditional confirmation bias estimation techniques.