2022
DOI: 10.1016/j.jksuci.2018.11.012

Emotion recognition in speech signals using optimization based multi-SVNN classifier

Cited by 30 publications (23 citation statements)
References 22 publications
“…MFCC here mimics the characteristics of human hearing; the linear frequency spectrum is first mapped to MFCC's nonlinear spectrum based on auditory perception, and then converted to the cepstrum. The formula for converting ordinary frequency to Mel frequency is f_mel = 2595 · lg(1 + f/700) (12). Then, we pass the spectrum through a set of Mel filters to get the Mel spectrum. The formula is,…”
Section: Features Extraction For Acoustic Signals
confidence: 99%
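The Hz-to-Mel conversion quoted above can be sketched directly. This is a minimal illustration of formula (12), f_mel = 2595 · lg(1 + f/700), together with its inverse (used when placing Mel filter-bank edges); the function names are my own, not from the cited paper.

```python
import math

def hz_to_mel(f_hz):
    """Map ordinary frequency (Hz) to the Mel scale: mel = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse mapping, back from Mel to Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)
```

The logarithmic warping compresses high frequencies, reflecting the auditory perception the excerpt refers to: equal Mel steps correspond to roughly equal perceived pitch steps.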
“…The fundamental frequency contains a large number of features that characterize speech emotion, which are crucial in acoustic emotion recognition. Its range of variation is 50–500 Hz, and detection is relatively difficult [12,13]. Common fundamental frequency extraction methods are the autocorrelation function (ACF) in the time domain, the average magnitude difference function (AMDF) in the time domain, and the wavelet method (WM) in the frequency domain; C. K. Y. et al. selected a set of higher-order spectral features for affective recognition, using 28 bi-spectral features and 22 bi-coherence features [14].…”
Section: Introduction
confidence: 99%
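The ACF-based pitch detection mentioned in the excerpt can be sketched as follows. This is a minimal, assumption-laden illustration (function name and framing are mine, not from the cited work): it searches autocorrelation lags restricted to the 50–500 Hz F0 range the excerpt gives, and returns the frequency at the best lag.

```python
import math

def estimate_f0_acf(signal, sample_rate, f0_min=50.0, f0_max=500.0):
    """Estimate fundamental frequency via the autocorrelation function (ACF).

    Only lags corresponding to the stated F0 search range (50-500 Hz by
    default) are examined; the lag with maximal autocorrelation wins.
    """
    n = len(signal)
    lag_min = int(sample_rate / f0_max)              # smallest lag = highest F0
    lag_max = min(int(sample_rate / f0_min), n - 1)  # largest lag = lowest F0
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag
```

On a clean periodic signal the autocorrelation peaks at the period lag, which is why the method works well in the time domain; real speech typically needs windowing and voiced/unvoiced gating on top of this sketch.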
“…There are some other emotion recognition algorithms that use multi-SVNN (Multiple Support Vector Neural Network) classifiers [19] to identify emotions and perform sentiment analysis [20]. The importance of obtaining information about students' intellectual states unobtrusively has prompted the development of various methods for estimating head pose and analyzing facial expressions.…”
Section: Related Work
confidence: 99%
“…Many researchers use a combination of various speech-signal features. These include temporal features [10][11][12], MFCC features [13], pitch chroma, spectral flux, and tonal power ratio [14][15][16]. To classify the signals for a particular application, a classification technique is then applied [17][18][19][20][21].…”
Section: Introduction
confidence: 99%
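Of the features listed above, spectral flux is compact enough to sketch. The version below (a common definition, not necessarily the exact one used in [14][15][16]) takes the Euclidean distance between the magnitude spectra of two consecutive frames; the DFT is computed naively for self-containment.

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude spectrum of a frame via a naive DFT (first n//2 + 1 bins)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

def spectral_flux(prev_frame, cur_frame):
    """Spectral flux: Euclidean distance between consecutive magnitude spectra."""
    p = dft_magnitudes(prev_frame)
    c = dft_magnitudes(cur_frame)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, c)))
```

Large flux values indicate rapid spectral change between frames, which is why the feature is useful for capturing the dynamics that distinguish emotional speech.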