Objective. Concurrent changes in physiological signals across multiple listeners (physiological synchrony, PS), driven by shared affective or cognitive processes, may be a suitable marker of selective attentional focus. We aimed to identify participants' selective attention from their PS with groups of individuals instructed to attend to different aspects of the same stimulus. Approach. We determined PS in electroencephalography (EEG), electrodermal activity (EDA), and electrocardiographic inter-beat intervals (IBI) of participants who all heard the same audio track but were instructed either to attend to the audiobook or to interspersed auditory events, such as affective sounds and beeps, that attending participants needed to keep track of. Main results. PS in all three measures reflected participants' selective attentional focus. In EEG and EDA, a participant's PS was higher with participants who had received the same attentional instructions than with participants instructed to focus on different stimulus aspects; in IBI this effect did not reach significance. Comparing a participant's PS with members of the same versus the other attentional group allowed correct identification of that participant's attentional instruction in 96%, 73%, and 73% of cases for EEG, EDA, and IBI, respectively, all well above chance level. PS with respect to the attentional groups also predicted performance on post-audio questions about each group's stimulus content. Significance. Our results show that participants' selective attention can be monitored using PS, not only in EEG but also in EDA and IBI. These results are promising for real-world applications, where wearables measuring peripheral signals such as EDA and IBI may be preferred over EEG sensors.
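The abstract does not specify how PS was quantified; a common choice in this literature is windowed inter-subject Pearson correlation. The sketch below illustrates that approach for a pair of peripheral signals. The sampling rate, window length, and function name are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def physiological_synchrony(sig_a, sig_b, fs=32, win_s=15):
    """Mean windowed Pearson correlation between two equal-length signals.

    sig_a, sig_b: 1-D arrays, e.g. two participants' EDA sampled at `fs` Hz.
    """
    win = int(win_s * fs)
    rs = []
    for start in range(0, len(sig_a) - win + 1, win):
        a, b = sig_a[start:start + win], sig_b[start:start + win]
        if a.std() > 0 and b.std() > 0:   # skip flat, uninformative windows
            rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs)) if rs else float("nan")

# A participant's attentional instruction could then be inferred by checking
# whether their average PS is higher with the "audiobook" group or with the
# "auditory events" group.
```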
Although emotion detection using electroencephalogram (EEG) data has become a highly active area of research over the last decades, little attention has been paid to stimulus familiarity, a crucial source of subjectivity. Using both our own experimental data and an established benchmark (the DEAP dataset), we investigated the effects of familiarity on brain activity as measured by EEG. To study familiarity directly, we let subjects select equal numbers of familiar and unfamiliar songs; both resulting datasets underscored the importance of self-reported emotion, on the assumption that the emotional state evoked by music is subjective. We found evidence that music familiarity influences both the power spectra of brainwaves and brain functional connectivity to a measurable degree. In an additional experiment, we used music familiarity in an attempt to recognize emotional states; our empirical results suggested that using only songs with low familiarity can improve the performance of EEG-based emotion classification systems that adopt fractal dimension or power spectral density features with a support vector machine, multilayer perceptron, or C4.5 classifier. This suggests that unfamiliar songs are the most appropriate for constructing an emotion recognition system.
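As an illustration of the kind of pipeline this abstract describes (power spectral density features fed to a support vector machine), here is a minimal sketch. The frequency bands, epoch dimensions, and synthetic data are assumptions for demonstration, not the paper's actual configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def psd_features(epoch, fs=128):
    """epoch: (n_channels, n_samples) -> mean band power per channel and band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

# Toy example: 40 random "epochs" (32 channels, 4 s at 128 Hz) with binary labels.
rng = np.random.default_rng(0)
X = np.stack([psd_features(rng.standard_normal((32, 512))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```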
Summary. Research on emotion recognition using the electroencephalogram (EEG) of subjects listening to music has become more active in the past decade. However, previous works did not consider emotional oscillations within a single musical piece. In this research, we propose a continuous music-emotion recognition approach based on brainwave signals. To account for the subject-dependent and time-varying character of emotion, our experiment included self-reporting and continuous emotion annotation in the arousal-valence space. Fractal dimension (FD) and power spectral density (PSD) approaches were adopted to extract informative features from raw EEG signals, and emotion classification algorithms were then applied to discriminate binary emotion classes. According to our experimental results, FD slightly outperformed the PSD approach in both arousal and valence classification, and FD features correlated more strongly with the emotion reports than PSD features did. In addition, continuous EEG-based emotion recognition during music listening proved effective for tracking oscillations in reported emotion and provides an opportunity to better understand human emotional processes.
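The abstract names fractal dimension (FD) features but not the estimator. Higuchi's method is a common choice for EEG and is sketched below under that assumption; `k_max` is an illustrative parameter.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the fractal dimension of a 1-D signal via Higuchi's method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    curve_lengths = []
    for k in range(1, k_max + 1):
        lm = []
        for m in range(k):
            idx = np.arange(m, n, k)        # subsampled series x[m::k]
            if len(idx) < 2:
                continue
            # curve length at scale k, offset m, with Higuchi's normalization
            norm = (n - 1) / ((len(idx) - 1) * k)
            lm.append(np.abs(np.diff(x[idx])).sum() * norm / k)
        curve_lengths.append(np.mean(lm))
    ks = np.arange(1, k_max + 1)
    # FD is the slope of log L(k) against log (1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(curve_lengths), 1)
    return slope

# Sanity check: white noise has FD near 2, a smooth sine near 1.
rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(1024)))                 # ~2
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1024))))   # ~1
```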