2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications
DOI: 10.1109/cisda.2009.5356537
Application of voiced-speech variability descriptors to emotion recognition

Cited by 4 publications (1 citation statement)
References 15 publications
“…These actors speak ten different sentences lasting from 1 to 5 seconds in the following emotional states: anger, disgust, fear, happiness, sadness, boredom, and neutral. The second dataset (dbPL) used in experiments is called Database of Polish Emotional Speech (Slot et al., 2009). It contains 240 examples of emotional speech in the Polish language recorded as monophonic with a 44.1 kHz sampling rate.…”
Section: Speech Data
confidence: 99%