Robust Speech Recognition and Understanding 2007
DOI: 10.5772/4754

Bimodal Emotion Recognition using Speech and Physiological Changes

Cited by 102 publications (80 citation statements)
References 21 publications
“…Speech and biosignals are par excellence suitable for personalized and ubiquitous emotion-aware computing technology. However, surprisingly, this combination has hardly been explored; except for the author's own work [97], the only work the author is acquainted with that applied this combination is that of Kim et al [41,42,44,45]. Processing both signals in parallel can, however, be done conveniently, as is illustrated by this study; see also Fig.…”
Section: The Five Issues Under Investigation (mentioning)
confidence: 93%
“…To the author's knowledge, only two groups have reported on this combination: Kim et al [41,42,44,45] and the current author and colleagues [97]. A possible explanation is the lack of knowledge of the application of this combination of measures.…”
Section: Ubiquitous Signals of Emotion (mentioning)
confidence: 99%
“…[3]), and foreign speech (e.g. [4]). Inevitably, using these methods to mask linguistic content will affect certain acoustic properties that may characterise the emotion present e.g.…”
Section: Introduction (mentioning)
confidence: 99%
“…The result for their decision-level fusion was slightly lower than the feature level. Jonghwa Kim evaluated feature-level, decision-level, and hybrid fusion performance, integrating multichannel physiological signals and the speech signal for detecting valence and arousal using a linear discriminant analysis (LDA) classifier [16]. Their fusion scheme reported results for feature, decision, and hybrid fusion, where the performance for feature-level fusion was highest.…”
Section: Introduction (mentioning)
confidence: 99%
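
The fusion schemes contrasted in the excerpt above follow a standard pattern: feature-level fusion concatenates the speech and physiological feature vectors before a single classifier, while decision-level fusion trains one classifier per modality and combines their outputs. The following is a minimal sketch of that contrast using scikit-learn's LDA, not Kim's actual pipeline; the feature dimensions, synthetic data, and variable names are illustrative assumptions.

# Minimal sketch of feature-level vs. decision-level fusion with LDA.
# Data is synthetic and dimensions are assumptions, not Kim et al.'s setup.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
speech_feats = rng.normal(size=(n, 20))   # e.g. prosodic/spectral features
physio_feats = rng.normal(size=(n, 10))   # e.g. ECG/EMG/skin-conductance stats
labels = rng.integers(0, 2, size=n)       # binary arousal (or valence) labels

# Feature-level fusion: concatenate modalities, train one classifier.
fused = np.hstack([speech_feats, physio_feats])
lda_fused = LinearDiscriminantAnalysis().fit(fused, labels)
pred_feature_level = lda_fused.predict(fused)

# Decision-level fusion: one classifier per modality, average posteriors.
lda_speech = LinearDiscriminantAnalysis().fit(speech_feats, labels)
lda_physio = LinearDiscriminantAnalysis().fit(physio_feats, labels)
avg_proba = (lda_speech.predict_proba(speech_feats)
             + lda_physio.predict_proba(physio_feats)) / 2
pred_decision_level = avg_proba.argmax(axis=1)

A hybrid scheme, as the excerpt notes, would combine the two, for example by feeding per-modality posteriors back in alongside the concatenated features; the excerpt reports that feature-level fusion performed best on Kim's data.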