2011 IEEE International Conference on Multimedia and Expo (ICME 2011)
DOI: 10.1109/icme.2011.6012003
Vowels formants analysis allows straightforward detection of high arousal emotions

Cited by 35 publications (21 citation statements) · References 10 publications
“…As one can see from Figure 1, the vowel-level mean values for the first and second formants differ between depressed and non-depressed speech. As expected, the results differ by gender; for male speakers we see a displacement of the mean values to the left (i.e., lower F1) for depressed speech, as in the case of the low-arousal emotional speech described in [17]. For low-arousal detection based on vowel-level formant features, females and males share a common tendency: average F1 values are shifted left for indicative vowels.…”
Section: Vowel-level Formant Analysis (supporting)
confidence: 58%
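The leftward F1 shift described in this excerpt amounts to comparing vowel-level mean formant values between two speaker groups. A minimal sketch of that comparison, using made-up F1 tracks (the values and group labels are illustrative assumptions, not data from the cited study):

```python
import numpy as np

def vowel_level_mean_formant(frames):
    """Average per-frame formant estimates (Hz) over one vowel segment."""
    return float(np.mean(frames))

# Hypothetical per-frame F1 tracks (Hz) for one vowel, two segments per group.
f1_non_depressed = [np.array([730, 742, 755]), np.array([748, 760, 751])]
f1_depressed     = [np.array([690, 702, 698]), np.array([705, 695, 700])]

mean_nd = np.mean([vowel_level_mean_formant(v) for v in f1_non_depressed])
mean_d  = np.mean([vowel_level_mean_formant(v) for v in f1_depressed])

# A lower group mean F1 corresponds to the "shifted left" tendency
# described in the excerpt.
print(f"mean F1 non-depressed: {mean_nd:.1f} Hz")
print(f"mean F1 depressed:     {mean_d:.1f} Hz")
print("F1 shifted left:", mean_d < mean_nd)
```

In practice the per-frame formant estimates would come from a formant tracker applied to phone-aligned vowel segments; this sketch only shows the group-level averaging step.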
“…The difference in performance of the VL-Formants between the genders is larger than expected, but not entirely surprising. Gender differences in formants have been reported for both emotional speech [17] and depression [8].…”
Section: Results (mentioning)
confidence: 99%
“…Moreover, since smart-home systems for AAL often concern distress situations, it is unclear whether distressed voices will challenge the applicability of these systems. The speech signal contains linguistic information, but it may also be influenced by health, social status, and emotional state [61] [62]. Recent studies suggest that ASR performance decreases for emotional speech [63][64]; however, this is still an under-researched area.…”
Section: Smart Home and AAL (mentioning)
confidence: 99%