2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854514
Emotions are a personal thing: Towards speaker-adaptive emotion recognition

Cited by 29 publications (18 citation statements). References 8 publications.
“…Secondly, in order to extend the advantage of using speaker-dependent, classification-based emotion recognition [24,25] to regression-based affect and depression recognition, a speaker identification approach has also been applied. The following subsections describe the application of SVMs to the tasks under consideration in more detail.…”
Section: Emotion and Depression Prediction with Multimodal Features
confidence: 99%
“…In [5] the authors achieved reasonable results in emotion recognition using the following set of audio-based features: the means, standard deviations, ranges, maxima, minima, and medians of pitch and energy. Regarding the learning stage of emotion recognition from audio cues, the most commonly used algorithms are the Multi-Layer Perceptron (MLP) [25] and the Support Vector Machine (SVM) [12], as well as their combination [30].…”
Section: Introduction
confidence: 99%
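The pitch/energy functional set described in the excerpt above can be sketched in pure Python. This is an illustrative sketch only: the function name and the toy contour values are assumptions, not taken from the cited work, and real systems would extract the frame-level contours with a dedicated audio toolkit first.

```python
import statistics

def functionals(contour):
    """Compute the six statistical functionals named in the excerpt
    (mean, standard deviation, range, max, min, median) for one
    frame-level contour, e.g. pitch (F0) or energy."""
    return {
        "mean": statistics.mean(contour),
        "std": statistics.stdev(contour),
        "range": max(contour) - min(contour),
        "max": max(contour),
        "min": min(contour),
        "median": statistics.median(contour),
    }

# Toy pitch contour in Hz (illustrative values only).
pitch = [210.0, 215.0, 230.0, 225.0, 220.0]
feats = functionals(pitch)
print(feats["mean"], feats["range"], feats["median"])  # → 220.0 20.0 220.0
```

Concatenating these functionals over both the pitch and the energy contours yields a fixed-length feature vector per utterance, which is what classifiers such as an SVM or MLP then consume.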
“…To overcome speaker variability, a novel speaker-independent emotional feature, the ratio of a spectral flatness measure to a spectral center, was suggested by Kim et al. [7]. In 2015, Maxim Sidorov [8], [9] proposed a method for speech-based adaptive emotion recognition that adds speaker-specific information, achieving a 10% accuracy improvement. Iliou and Anagnostopoulos reported around a 51% recognition rate for seven emotions using neural networks [10].…”
Section: Introduction
confidence: 99%
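The ratio feature attributed to Kim et al. [7] in the excerpt above can be sketched as follows. Two assumptions are made here: "spectral center" is interpreted as the spectral centroid (magnitude-weighted mean frequency), and spectral flatness is taken as the geometric over the arithmetic mean of the magnitude spectrum; the exact frame and band settings of the cited paper may differ.

```python
import math

def flatness_to_centroid_ratio(spectrum, freqs):
    """Ratio of spectral flatness (geometric mean / arithmetic mean of
    the magnitude spectrum) to the spectral centroid (magnitude-weighted
    mean frequency). A simplified sketch of the feature attributed to
    Kim et al. [7]; assumes all magnitudes are strictly positive."""
    n = len(spectrum)
    # Spectral flatness via log-domain geometric mean (avoids underflow).
    geo = math.exp(sum(math.log(m) for m in spectrum) / n)
    arith = sum(spectrum) / n
    flatness = geo / arith
    # Spectral centroid: magnitude-weighted average frequency.
    centroid = sum(f * m for f, m in zip(freqs, spectrum)) / sum(spectrum)
    return flatness / centroid

# A perfectly flat (noise-like) toy spectrum: flatness = 1.0,
# centroid = plain mean of the frequencies = 250 Hz.
freqs = [100.0, 200.0, 300.0, 400.0]
flat = [1.0, 1.0, 1.0, 1.0]
print(flatness_to_centroid_ratio(flat, freqs))  # → 0.004
```

A tonal (peaked) spectrum has flatness well below 1, so its ratio drops accordingly; the intent described in the excerpt is that this ratio varies less across speakers than raw spectral features do.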
“…Busso et al [3] achieved reasonable results in emotion recognition using the following set of audio-based features: the means, standard deviations, ranges, maxima, minima, and medians of pitch and energy. Regarding the learning stage of emotion recognition from audio cues, the most commonly applied algorithms are the Multi-Layer Perceptron (MLP) [22] and the Support Vector Machine (SVM) [9], as well as their combination [25].…”
Section: Introduction
confidence: 99%