2014 International Conference on Circuits, Systems, Communication and Information Technology Applications (CSCITA)
DOI: 10.1109/cscita.2014.6839284
Automatic Speech Emotion Recognition: A survey

Cited by 28 publications (11 citation statements)
References 28 publications
“…Finally, the utterance feature vector is fed to the classifier. There are many classification models that have been used [3], [4], [5], [6], with support vector machine (SVM) being one of the most popular choices.…”
Section: Introduction
confidence: 99%
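The statement above describes the standard pipeline: an utterance-level feature vector is fed to a classifier, with SVM named as the most popular choice. As a dependency-free illustration of that final step, the sketch below uses a toy nearest-centroid classifier in place of an SVM; the feature values and emotion labels are invented for illustration only.

```python
import math

# Toy stand-in for the "utterance feature vector -> classifier" step.
# The survey names SVM as a popular choice; a nearest-centroid classifier
# is used here instead so the sketch needs no external libraries.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled):
    """labelled: dict mapping emotion -> list of utterance feature vectors."""
    return {emotion: centroid(vecs) for emotion, vecs in labelled.items()}

def predict(model, vec):
    """Return the emotion whose centroid is closest in Euclidean distance."""
    return min(model, key=lambda e: math.dist(model[e], vec))

# Hypothetical 3-D utterance features (e.g. mean pitch, energy, speech rate).
data = {
    "angry":   [[0.9, 0.8, 0.9], [0.8, 0.9, 0.8]],
    "neutral": [[0.4, 0.3, 0.5], [0.5, 0.4, 0.4]],
}
model = train(data)
print(predict(model, [0.85, 0.85, 0.9]))  # nearest to the "angry" centroid
```

In a real system the per-frame features would be pooled (e.g. mean/variance statistics) into the utterance vector before classification, and an SVM with an RBF kernel would typically replace the centroid rule.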
“…For the audio modality, the speech features [49], [50], [51], [52], [53] include qualitative features, such as voice quality, harshness, tense and breathy; continuous features, such as energy, pitch, formant, zero-cross rate (ZCR), and speech rate; spectral features, such as Mel-frequency cepstral coefficients (MFCC), linear predictor coefficients (LPC), perceptual linear prediction (PLP), and linear predictive cepstral coefficients (LPCC); Teager energy operator (TEO)-based features, such as TEO-decomposed frequency modulation variation (TEO-FM-Var), normalized TEO autocorrelation envelope area (TEO-Auto-Env), and critical band based TEO autocorrelation envelope (TEO-CB-Auto-Env). Similar to the visual modality, given the raw speech signal, researchers first extracted their desired features such as above, then fed them into the classifier.…”
Section: Related Work
confidence: 99%
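Two of the "continuous" features listed above, short-time energy and zero-crossing rate (ZCR), are simple enough to sketch directly. The frame length and toy signal below are illustrative, not from the survey; real systems use windowed, overlapping frames over sampled audio.

```python
# Per-frame short-time energy and zero-crossing rate (ZCR) on a raw signal.

def frame_energy(frame):
    """Sum of squared samples in one frame."""
    return sum(s * s for s in frame)

def frame_zcr(frame):
    """Fraction of adjacent sample pairs whose sign changes."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def features(signal, frame_len=4):
    """Split the signal into non-overlapping frames; return (energy, zcr) pairs."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [(frame_energy(f), frame_zcr(f)) for f in frames]

# Toy signal: a quiet steady segment followed by a louder oscillating one.
signal = [0.1, 0.1, 0.1, 0.1, 0.8, -0.8, 0.8, -0.8]
print(features(signal))
```

The second frame shows both higher energy and maximal ZCR, the kind of contrast these features are meant to capture; spectral features such as MFCCs require an FFT and filterbank and are usually computed with a signal-processing library.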
“…The proposed approach reached an accuracy of up to 84.21% on the Berlin emotional database. References [26], [27] reviewed several common emotional databases and machine learning-based approaches, such as PCA, Naïve Bayes classifier, spectrum method, SVM, regression, etc. The top classification accuracy is up to 90%.…”
Section: A Deep Neural Network and Speech Emotion Recognition
confidence: 99%