2022
DOI: 10.3389/fnins.2022.884475

Enhancing Emotion Recognition Using Region-Specific Electroencephalogram Data and Dynamic Functional Connectivity

Abstract: Recognizing the emotional states of humans through EEG signals is of great significance to the progress of human-computer interaction. The present study aimed to perform automatic recognition of music-evoked emotions through region-specific information and dynamic functional connectivity of EEG signals and a deep learning neural network. EEG signals of 15 healthy volunteers were collected when different emotions (high-valence-arousal vs. low-valence-arousal) were induced by a musical experimental paradigm. Th…
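
The abstract is truncated before the method's details, but its core idea, sliding-window ("dynamic") functional connectivity computed from multichannel EEG and fed to a deep network, can be sketched. The snippet below is a minimal illustration assuming Pearson-correlation connectivity over fixed windows; the window length, step size, and channel count are illustrative assumptions, not the paper's actual settings.

```python
# Hypothetical sketch of dynamic functional connectivity features from EEG.
# Window length, step, and channel count are assumptions for illustration.
import numpy as np

def dynamic_connectivity(eeg, win_len=256, step=128):
    """Sliding-window Pearson-correlation connectivity.

    eeg: array of shape (n_channels, n_samples)
    returns: array of shape (n_windows, n_channels, n_channels)
    """
    n_ch, n_samp = eeg.shape
    mats = []
    for start in range(0, n_samp - win_len + 1, step):
        window = eeg[:, start:start + win_len]
        mats.append(np.corrcoef(window))  # channel-by-channel correlation
    return np.stack(mats)

# Example: 32 channels, 10 s of synthetic data at 256 Hz
rng = np.random.default_rng(0)
fc = dynamic_connectivity(rng.standard_normal((32, 2560)))
print(fc.shape)  # (n_windows, 32, 32)
```

Each window yields one connectivity matrix, so the output is a sequence of matrices that a deep network can treat as a spatio-temporal input.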

Cited by 8 publications (5 citation statements)
References 63 publications

“…It contrasts the GC outcomes of the current study with earlier reports in which brain entropy/signal complexity [39] and Granger causality [12] within the beta and gamma bands showed the strongest effects during improvised/emotional playing. Importantly, classification of GC-based features during music performance yielded accuracies very similar to those reported by Liu et al. for classifying music-induced emotions [40], and by Guo et al. for detecting induced emotions in the DEAP and SEED datasets using neural networks on GC features [41]. A summary of the accuracies obtained in the above-mentioned works is presented as Supplementary Material, Table S1.…”
Section: Discussion (supporting)
confidence: 59%
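
As a hedged illustration of the GC-based features this statement discusses, the sketch below computes a pairwise Granger-causality matrix from multichannel EEG with statsmodels. The choice of maxlag and of the F statistic as the feature value are assumptions, not the cited papers' exact settings.

```python
# Illustrative pairwise Granger-causality (GC) feature extraction.
# maxlag and the F statistic as feature value are assumptions.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def gc_feature_matrix(eeg, maxlag=4):
    """eeg: (n_channels, n_samples); returns an (n_channels, n_channels)
    matrix of F statistics for 'channel j Granger-causes channel i'."""
    n_ch = eeg.shape[0]
    gc = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(n_ch):
            if i == j:
                continue
            # grangercausalitytests expects columns [effect, cause]
            res = grangercausalitytests(
                np.column_stack([eeg[i], eeg[j]]),
                maxlag=maxlag, verbose=False)
            gc[i, j] = res[maxlag][0]["ssr_ftest"][0]  # F statistic
    return gc

features = gc_feature_matrix(
    np.random.default_rng(1).standard_normal((4, 500)))
```

The resulting matrix (or its upper/lower triangle, flattened) can then serve as the feature vector for a downstream classifier.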
“…In their investigations, Ekman (1999) and Gilda et al. (2017) introduced six distinct and quantifiable emotional states, namely happiness, sadness, anger, fear, surprise, and disgust, as the basis for implementing emotion identification. Over time, other emotional states have been included in this collection, such as neutrality, arousal, and relaxation (Bong et al., 2012; Selvaraj et al., 2013; Goshvarpour et al., 2017; Minhad et al., 2017; Wei et al., 2018; Sheykhivand et al., 2020; Liu et al., 2022). In the context of machine learning, the establishment of distinct states for emotions serves as a significant framework for effectively addressing the challenge of emotion recognition.…”
Section: Introduction (mentioning)
confidence: 99%
“…A Convolutional Neural Network (CNN) is well suited to processing image data because of its translational-invariance bias. Thus, the power spectrogram generated from the EEG signal's frequency content is a reasonable input representation (Er et al., 2021; Liu et al., 2022). The LSTM was designed for time-series data, since it can keep track of arbitrarily long-term dependencies in the input sequences.…”
Section: EEG-based Music Emotion Recognition (mentioning)
confidence: 99%
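
This statement contrasts two model families; a minimal PyTorch sketch of both is given below. All layer sizes and input shapes are illustrative assumptions, not the architectures used in the cited works.

```python
# Illustrative sketches of the two model families the quote mentions:
# a small CNN over an EEG power spectrogram and an LSTM over the raw
# multichannel sequence. Sizes and shapes are assumptions only.
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # translation-invariant pooling
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):              # x: (batch, 1, freq, time)
        return self.head(self.features(x).flatten(1))

class SequenceLSTM(nn.Module):
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from last hidden state

logits_cnn = SpectrogramCNN()(torch.randn(2, 1, 64, 128))
logits_lstm = SequenceLSTM()(torch.randn(2, 256, 32))
```

The CNN exploits the image-like structure of the spectrogram, while the LSTM consumes the EEG samples directly as a sequence, matching the division of labor described in the quoted passage.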