Year Published: 2018
DOI: 10.1166/jctn.2018.7447
Audio Based Emotion Recognition Using Mel Frequency Cepstral Coefficient and Support Vector Machine

Cited by 7 publications (1 citation statement)
References: 0 publications
“…The number of acoustic parameters proven to contain emotional information is still increasing. Generally, the most commonly used features can be divided into three groups: prosodic features (e.g., fundamental frequency, energy, speed of speech) [22], quality characteristics (e.g., formants, brightness) [23], and spectrum characteristics (e.g., mel-frequency cepstral coefficients) [24, 25]. The final feature vector is based on statistics of these parameters, such as mean, maximum, minimum, change rate, kurtosis, skewness, zero-crossing rate, and variance [26, 27].…”
Section: Related Work
Mentioning confidence: 99%
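As a concrete illustration of the statistics-over-MFCC pooling described in the quoted passage, here is a minimal sketch, assuming librosa and scikit-learn are available. The parameter choices (13 MFCCs, RBF-kernel SVM) and the helper name are assumptions for illustration, not the authors' reported implementation.

```python
# Hypothetical sketch: build a fixed-length feature vector from
# frame-level MFCC statistics and classify it with an SVM, in the
# spirit of the approach named in the paper title. Library choices
# and parameters are assumptions, not the authors' setup.
import numpy as np
import librosa
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC


def mfcc_statistics(path, n_mfcc=13):
    """Summarise frame-level MFCCs of one audio file into a single vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    # Pool each coefficient over time with the kinds of statistics the
    # quoted passage lists: mean, max, min, variance, skewness, kurtosis.
    stats = [
        mfcc.mean(axis=1),
        mfcc.max(axis=1),
        mfcc.min(axis=1),
        mfcc.var(axis=1),
        skew(mfcc, axis=1),
        kurtosis(mfcc, axis=1),
    ]
    return np.concatenate(stats)


# Usage (paths and labels are placeholders):
# X = np.stack([mfcc_statistics(p) for p in wav_paths])
# clf = SVC(kernel="rbf").fit(X, emotion_labels)
# predictions = clf.predict(X)
```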