2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
DOI: 10.1109/embc.2019.8857745

Combining Electrodermal Activity and Speech Analysis towards a more Accurate Emotion Recognition System

Cited by 14 publications (9 citation statements)
References 29 publications
“…In this study, we analyzed only the stress stages during which the subjects were not talking, to avoid any interaction between EDA signals and speech [66]. Indeed, speech induces physiologically irregular respiration that activates the sympathetic reflex and consequently affects the sweat-gland dynamics and the related EDA signal [66], [68]. Accordingly, we selected three stress stages, ST, SA, and VE, in addition to the basal stage (BS).…”
Section: Feature Extraction (mentioning)
confidence: 99%
“…Physiological signals provide more continuous, real-time monitoring than facial expressions. In comparable studies [28][29][30][31][32][33][34][35], the impact of using physiological signals for emotion detection and subsequent recognition is highlighted. Shukla J. et al (2021) [28] evaluated different techniques for EDA signals and determined the optimal number of features required to yield high accuracy and real-time emotion recognition.…”
Section: Related Work (mentioning)
confidence: 99%
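
The feature-selection step described in the statement above (finding how many EDA features are worth keeping) can be illustrated with a short sketch. This is not code from any of the cited papers; the synthetic feature matrix, the label scheme, and the use of scikit-learn's cross-validated recursive feature elimination (RFECV) around a linear SVM are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not from the cited papers): estimating the
# optimal number of EDA features with cross-validated recursive feature
# elimination (RFECV) around a linear SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

# Synthetic stand-in for an EDA feature matrix: 200 trials x 30 features
# (in practice these would be EDA descriptors such as phasic peak amplitude,
# rise time, or tonic-level statistics), with binary arousal labels.
X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)

# A linear SVM exposes coef_, which RFECV uses to rank and prune features.
selector = RFECV(SVC(kernel="linear", C=1.0), step=1, cv=5, scoring="accuracy")
selector.fit(X, y)

print("Optimal number of features:", selector.n_features_)
print("Selected feature mask:", selector.support_)
```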
“…A combination, more commonly known as fusion, of more than one signal for emotion recognition has also been studied, with promising results. Greco A. et al (2019) explored the fusion of both EDA signals and speech patterns to improve arousal level recognition, yielding a marginal classifier improvement of 11.64% using an SVM classifier with recursive feature elimination [32]. Du G. et al (2020) investigated the combination of facial expressions and HR for emotion recognition in gaming environments, increasing the recognition accuracy by 8.30% [33].…”
Section: Related Work (mentioning)
confidence: 99%
“…On average, RIPPER achieved 92.01% accuracy in the binary assessment of sad versus relaxed emotional states. Greco et al. (2019) investigated the possibility of combining electrodermal activity (EDA) and voice data to recognize human arousal levels during single affective word pronunciation. A support vector machine with recursive feature elimination (SVM-RFE) was trained and tested on three datasets, using the two channels (speech and EDA) independently and combined.…”
Section: Related Work (mentioning)
confidence: 99%
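
For context, the SVM-RFE fusion scheme these statements describe (training on the speech and EDA channels separately and on their concatenation) can be sketched as follows. The feature dimensions, label construction, and pipeline layout are assumptions made for illustration and are not taken from Greco et al. (2019).

```python
# Minimal sketch (illustrative assumptions only): feature-level fusion of EDA
# and speech descriptors classified with SVM-RFE, evaluating each channel
# alone and the two channels combined.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 150
X_eda = rng.normal(size=(n_trials, 12))     # e.g., phasic/tonic EDA statistics
X_speech = rng.normal(size=(n_trials, 20))  # e.g., prosodic/spectral features

# Toy labels driven by one feature from each channel, so that fusing the
# channels can actually help in this synthetic setup.
y = ((X_eda[:, 0] + X_speech[:, 0]
      + rng.normal(scale=0.5, size=n_trials)) > 0).astype(int)

def svm_rfe_accuracy(X, y, n_keep=10):
    """Cross-validated accuracy of a linear SVM after RFE feature pruning."""
    model = make_pipeline(
        StandardScaler(),
        RFE(SVC(kernel="linear"), n_features_to_select=min(n_keep, X.shape[1])),
        SVC(kernel="linear"),
    )
    return cross_val_score(model, X, y, cv=5).mean()

for name, X in [("EDA only", X_eda),
                ("Speech only", X_speech),
                ("EDA + speech (fused)", np.hstack([X_eda, X_speech]))]:
    print(f"{name}: mean CV accuracy = {svm_rfe_accuracy(X, y):.3f}")
```

Feature-level (early) fusion of this kind keeps both channels' descriptors in a single matrix so that RFE can trade them off against each other; decision-level (late) fusion would instead combine the outputs of two separately trained classifiers.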