The electrocardiogram (ECG) has been widely used for emotion recognition. This paper presents a deep neural network based on convolutional layers and a transformer mechanism to detect stress from ECG signals. We perform leave-one-subject-out experiments on two publicly available datasets, WESAD and SWELL-KW, to evaluate our method. Our experiments show that the proposed model achieves strong results, comparable to or better than the state-of-the-art models for ECG-based stress detection on these two datasets. Moreover, our method is end-to-end, does not require handcrafted features, and can learn robust representations with only a few convolutional blocks and the transformer component.
CCS CONCEPTS: • Computing methodologies → Neural networks; Machine learning.
Simulation-based training has proven to be a highly effective pedagogical strategy. However, misalignment between a participant's level of expertise and the difficulty of the simulation has been shown to have a significant negative impact on learning outcomes. To ensure that learning outcomes are achieved, we propose a novel framework for adaptive simulation that identifies the learner's level of expertise and dynamically modulates the simulation complexity to match the learner's capability. To facilitate the development of this framework, we investigate the classification of expertise using biological signals monitored through wearable sensors. Trauma simulations were developed in which electrocardiogram (ECG) and galvanic skin response (GSR) signals of both novice and expert trauma responders were collected. Following feature extraction and selection, these signals were used to classify the responders' expertise with several machine learning methods. The results show the feasibility of using these bio-signals for multimodal expertise classification in adaptive simulation applications.
We propose cross-modal attentive connections, a new dynamic and effective technique for multimodal representation learning from wearable data. Our solution can be integrated into any stage of the pipeline, i.e., after any convolutional layer or block, to create intermediate connections between the individual streams responsible for processing each modality. Additionally, our method benefits from two properties. First, it can share information uni-directionally (from one modality to the other) or bi-directionally. Second, it can be integrated into multiple stages at the same time, further allowing network gradients to be exchanged at several touchpoints. We perform extensive experiments on three public multimodal wearable datasets, WESAD, SWELL-KW, and CASE, and demonstrate that our method can effectively regulate and share information between different modalities to learn better representations. Our experiments further demonstrate that once integrated into simple CNN-based multimodal solutions (with 2, 3, or 4 modalities), our method achieves performance superior or competitive to the state of the art and outperforms a variety of uni-modal baselines and classical multimodal methods.
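The core idea of a cross-modal connection between two sensor streams can be illustrated with generic scaled dot-product attention, where one modality's features form the queries and the other's form the keys and values. The sketch below is a minimal NumPy illustration of this general mechanism, not the paper's exact formulation; the feature dimensions, random projection weights, and the ECG/GSR variable names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(x_a, x_b, d_k=16, seed=0):
    """Uni-directional connection: modality A queries modality B.

    x_a: (T, d_a) features from stream A (e.g. after a conv block)
    x_b: (T, d_b) features from stream B
    Returns a (T, d_k) summary of B aligned to A's timeline.
    """
    rng = np.random.default_rng(seed)  # stand-in for learned weights
    d_a, d_b = x_a.shape[-1], x_b.shape[-1]
    W_q = rng.standard_normal((d_a, d_k)) / np.sqrt(d_a)
    W_k = rng.standard_normal((d_b, d_k)) / np.sqrt(d_b)
    W_v = rng.standard_normal((d_b, d_k)) / np.sqrt(d_b)
    Q, K, V = x_a @ W_q, x_b @ W_k, x_b @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (T, T) attention weights
    return attn @ V

# Illustrative streams: 50 time steps of 32-dim ECG and 8-dim GSR features
ecg = np.random.default_rng(1).standard_normal((50, 32))
gsr = np.random.default_rng(2).standard_normal((50, 8))
fused = cross_modal_attention(ecg, gsr)
print(fused.shape)  # (50, 16)
```

A bi-directional variant, as described in the abstract, would simply apply the same operation in both directions (A queries B, and B queries A) and merge each result back into its respective stream.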
Background: In resuscitation medicine, effectively managing cognitive load in high-stakes environments has important implications for education and expertise development. There is potential to tailor educational experiences to an individual's cognitive processes via real-time physiological measurement of cognitive load in simulation environments. Objective: The goal of this research was to test a novel simulation platform that used artificial intelligence to deliver a medical simulation adaptable to a participant's measured cognitive load. Methods: The research was conducted in 2019. Two board-certified emergency physicians and two medical students participated in a 10-minute pilot trial of a novel simulation platform. The system used artificial intelligence algorithms to measure cognitive load in real time via electrocardiography and galvanic skin response. In turn, simulation difficulty, determined by the participant's cognitive load, was modulated through changes in the symptom severity of an augmented reality (AR) patient. A post-simulation survey assessed the participants' experience. Results: Participants completed a simulation that successfully measured cognitive load in real time through physiological signals. The simulation difficulty adapted to the participant's cognitive load, which was reflected in changes in the AR patient's symptoms. Participants found the novel adaptive simulation platform valuable in supporting their learning. Conclusion: Our research team created a simulation platform that adapts to a participant's cognitive load in real time. The ability to customize a medical simulation to a participant's cognitive state has potential implications for the development of expertise in resuscitation medicine.
Aaron J. Ruberto: study concept and design, acquisition of the data, analysis and interpretation of the data, drafting of the manuscript.