Thinkquest~2010 (2011)
DOI: 10.1007/978-81-8489-989-4_40
Recognizing emotions from human speech

Cited by 7 publications
(8 citation statements)
References 5 publications
“…In 2003, O. W. Kwon analysed emotional expressions using pitch, log-energy, mel band energies, formants, and MFCCs as base features, with Gaussian mixture model and SVM classifiers; the accuracy was about 96.3% [3]. In the same year, Gobl and Chasaide argued that voice quality is responsible for conveying certain emotions.…”
Section: Literature Survey
confidence: 94%
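The Kwon-style pipeline described above pairs utterance-level acoustic features with a generative classifier. A minimal sketch of that idea, assuming a single diagonal Gaussian per emotion as a stand-in for a full GMM and random vectors in place of real MFCC features (all names and data here are illustrative, not taken from the cited paper):

```python
import numpy as np

def fit_gaussian(X):
    # Per-emotion model: mean and variance of each feature column.
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mean, var):
    # Log-density of a diagonal Gaussian at feature vector x.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify(x, models):
    # Pick the emotion whose model assigns x the highest log-likelihood.
    return max(models, key=lambda label: log_likelihood(x, *models[label]))

rng = np.random.default_rng(0)
# Synthetic stand-ins for utterance-level feature vectors (e.g. 12 MFCC means).
angry = rng.normal(loc=2.0, scale=1.0, size=(50, 12))
calm = rng.normal(loc=-2.0, scale=1.0, size=(50, 12))
models = {"angry": fit_gaussian(angry), "calm": fit_gaussian(calm)}
```

A real system would fit a multi-component mixture (e.g. via expectation-maximisation) per emotion, but the decision rule, maximum likelihood over per-class models, is the same.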
“…In 1993, Murray and Arnott [2] performed a qualitative analysis of the correlation between emotion and speech features such as pitch, intensity, and timing of utterances. During 1998–1999, Petrushin distinguished agitation-type emotions (anger, happiness, fear) from calm ones (sadness, neutral) using the RELIEF-F algorithm [3] with K-NN and ANN classifiers; from 43 extracted features (min, max, range, standard deviation, etc.) only the top 14 were selected, and the accuracy was about 77% for the normal and sadness states. In 2001, Nwe analysed six emotions exhibited by two speakers, feeding 12 MFCC features into a discrete Hidden Markov Model (HMM), with an accuracy of 70%.…”
Section: Literature Survey
confidence: 99%
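The Petrushin recipe quoted above, summarise each utterance with simple statistics, then keep only the most relevant features, can be sketched as follows. The weighting function is a simplified binary RELIEF-style score, not the full RELIEF-F algorithm, and the synthetic data stands in for real prosodic measurements:

```python
import numpy as np

def utterance_stats(contour):
    # Summary statistics over a prosodic contour (e.g. a pitch track):
    # the kind of min/max/range/sd features the survey mentions.
    return np.array([contour.min(), contour.max(),
                     contour.max() - contour.min(),
                     contour.mean(), contour.std()])

def relief_weights(X, y):
    # Simplified binary RELIEF: reward features that differ at the
    # nearest "miss" (other class) and agree at the nearest "hit"
    # (same class), averaged over all samples.
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf  # never pick the sample itself as its hit
        same = np.where(y == y[i])[0]
        other = np.where(y != y[i])[0]
        hit = same[np.argmin(dists[same])]
        miss = other[np.argmin(dists[other])]
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

rng = np.random.default_rng(1)
# Feature 0 separates the two classes; feature 1 is pure noise.
X = np.vstack([np.column_stack([rng.normal(0, 1, 40), rng.normal(0, 1, 40)]),
               np.column_stack([rng.normal(5, 1, 40), rng.normal(0, 1, 40)])])
y = np.array([0] * 40 + [1] * 40)
w = relief_weights(X, y)
top = np.argsort(w)[::-1]  # rank features; keep the top k, as in the survey
```

Selecting the 14 best of 43 features, as in the cited study, would amount to computing such weights over the full feature matrix and slicing off the highest-ranked columns.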
“…This paper explores how to combine information from various sources (e.g. facial expression [24], speech [21,22,23] and others [20]) to achieve better recognition of emotional state using a rule-based approach.…”
Section: The Problem Domain For Multimodal Emotion Recognition
confidence: 99%
“…We will use facial expressions as the running example to illustrate these stages. The framework remains the same across all modalities [20,21,22].…”
Section: Framework For Emotion Recognition
confidence: 99%
“…Currently, speech emotion recognition (SER) is a growing research area that aims to recognize the emotional state of a speaker from the speech signal. It has potential applications both for the study of human-human communication and human-computer interaction (HCI) [1].…”
Section: Introduction
confidence: 99%