2015
DOI: 10.1016/j.procs.2015.10.020
Emotion Detection Using MFCC and Cepstrum Features

Cited by 86 publications (33 citation statements)
References 5 publications
“…We use the MFCC to extract features from the audio signal. The MFCCs are the coefficients of a cosine transform of the short-term log power spectrum of the signal mapped onto the non-linear mel frequency scale [20]. The mel scale treats frequency logarithmically, inspired by the behavior of the human ear.…”
Section: Time Features (mentioning)
confidence: 99%
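The MFCC pipeline the statement describes (a mel filterbank applied to the short-term log power spectrum, followed by a cosine transform) can be sketched in NumPy. This is a minimal illustration, not the cited work's implementation; the frame length, hop, filter count, and coefficient count below are common illustrative defaults.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_filters=26, n_coeffs=13, frame_len=400, hop=160, n_fft=512):
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Short-term power spectrum.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular filterbank with centers equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filterbank energies, then a DCT-II over the filter axis
    # yields the cepstral coefficients.
    log_energy = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1) / (2 * n_filters)))
    return log_energy @ dct.T

# Illustrative input: one second of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
coeffs = mfcc(0.5 * np.sin(2 * np.pi * 440.0 * t), sr)
print(coeffs.shape)  # (frames, coefficients)
```

Each row of the result is the 13-coefficient feature vector for one 25 ms frame (10 ms hop), the kind of frame-level feature vector emotion classifiers are typically trained on.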
“…Cepstrum analysis is widely used in signal processing across many areas, for example glottal flow estimation [15], identification of damage in civil engineering structures [16], detection of voice disorders [17], detection of emotions [18], estimating heart rate from arrays of fiber Bragg grating (FBG) sensors [19], heart rate estimation (improving the clarity of photoplethysmography data) [20], and improving the detection of micro-changes in biological structures [21]. There are also a few publications in the literature on reducing the multipath effect in the hydroacoustic channel during data transmission, such as [22], where the authors use a pulse-position-modulation spread-spectrum underwater acoustic communication system based on the N-H sequence.…”
Section: Introduction (mentioning)
confidence: 99%
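Across all of these applications the common primitive is the real cepstrum, conventionally computed as the inverse FFT of the log magnitude spectrum of a frame. A minimal NumPy sketch (the signal parameters are illustrative, not from any cited work):

```python
import numpy as np

# Real cepstrum of one windowed frame: c = IFFT(log |FFT(x)|).
sr = 8000
t = np.arange(512) / sr
frame = np.sin(2 * np.pi * 200.0 * t) * np.hamming(512)

spectrum = np.fft.fft(frame)
# Small epsilon guards against log(0) in spectral nulls.
cepstrum = np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real
print(cepstrum.shape)  # one cepstral coefficient per sample of the frame
```

Peaks in the cepstrum at a given quefrency (index along this axis) correspond to periodic structure in the spectrum, which is why the same computation serves pitch, glottal-flow, and heart-rate estimation alike.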
“…Extensive research has been done on speech emotion recognition to improve human-computer interaction [6]. Most studies use the Berlin Emotional Speech database, which comprises several emotions [7]. A detailed review of 32 emotional speech databases, covering English, German, Spanish, Dutch, Russian, Swedish and Chinese, is presented in [8].…”
Section: Introduction (mentioning)
confidence: 99%