2014 4th World Congress on Information and Communication Technologies (WICT 2014)
DOI: 10.1109/wict.2014.7076906
Emotion detection with hybrid voice quality and prosodic features using Neural Network

Cited by 4 publications (2 citation statements) | References 17 publications
“…References [1][2][3] compare the speaker-based speech emotion classification accuracy and the model construction time of Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) classifiers. The classification was performed with the WEKA toolkit, and the features were extracted with PRAAT.…”
Section: Literature Survey
confidence: 99%
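The cited comparison itself uses WEKA for classification and PRAAT for feature extraction; as a rough illustration of that protocol only, the sketch below uses scikit-learn as a stand-in for WEKA and a randomly generated feature matrix in place of PRAAT output, measuring both classification accuracy and model construction time for SVM and MLP.

```python
# Minimal sketch of an SVM vs. MLP comparison on utterance-level features.
# Assumptions (not from the cited work): scikit-learn stands in for WEKA, and
# `features`/`labels` are placeholders for PRAAT-extracted prosodic features.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 43))      # placeholder utterance-level features
labels = rng.integers(0, 4, size=200)      # placeholder emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)                    # model construction time
    build_time = time.perf_counter() - start
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: accuracy={acc:.2f}, build time={build_time:.3f}s")
```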
“…Researchers usually use two kinds of acoustic features for speech emotion recognition. One kind is global statistical features computed over the sentence, including prosodic features [3][4][5], power spectrum features [6,7] and voice quality features [8,9]. Seppänen et al. [10] used 43-dimensional global prosodic features related to fundamental frequency, energy and duration to recognize the emotion of Finnish speech, achieving a 60% recognition rate in the speaker-independent case.…”
Section: Introduction
confidence: 99%
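To make the notion of "global statistical features" concrete, the sketch below collapses frame-level pitch and energy tracks into one fixed-length utterance vector. It is a minimal illustration only: the frame-level F0 and energy tracks, the frame shift, and the particular statistics are assumptions, not the feature set of the cited works.

```python
# Minimal sketch of utterance-level ("global statistical") prosodic features:
# statistics of fundamental frequency (F0), energy, and duration.
# Assumptions (not from the cited works): frame-level F0 and energy tracks are
# already available from a pitch tracker; unvoiced frames are marked with F0 = 0.
import numpy as np

def global_prosodic_features(f0, energy, frame_shift_s=0.01):
    """Collapse frame-level F0/energy tracks into one fixed-length feature vector."""
    voiced = f0[f0 > 0]                     # F0 statistics over voiced frames only
    stats = lambda x: [x.mean(), x.std(), x.min(), x.max(), x.max() - x.min()]
    duration = len(f0) * frame_shift_s      # utterance duration in seconds
    return np.array(stats(voiced) + stats(energy) + [duration])

# Toy example with synthetic tracks (300 frames, ~3 s of speech)
rng = np.random.default_rng(0)
f0 = np.where(rng.random(300) > 0.3, rng.normal(180, 30, 300), 0.0)
energy = np.abs(rng.normal(0.1, 0.05, 300))
print(global_prosodic_features(f0, energy))
```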