2019
DOI: 10.1007/s10772-019-09605-w
Multistage classification scheme to enhance speech emotion recognition

Cited by 28 publications (4 citation statements) | References 39 publications
“…It extracts, identifies, and classifies valuable emotional features, and builds models with corresponding parameters. The updated parameters can then be trained and matched computationally to obtain the corresponding recognition results ( Poorna and Nair, 2019 ). Unlike speech recognition, speech emotion recognition exploits the distinctive characteristics of different emotional states.…”
Section: English Flipped Classroom Teaching Mode Methods Based On Emo...
confidence: 99%
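The extract-features / build-model / match pipeline described in the excerpt above can be sketched minimally. The feature set (energy, zero-crossing rate, spectral centroid) and the nearest-centroid matcher below are illustrative assumptions for the sketch, not the method of the cited paper:

```python
import numpy as np

# Hypothetical sketch of the pipeline the excerpt describes:
# extract emotional features, build a per-emotion model, then
# match new utterances against it. Feature choice and classifier
# are illustrative assumptions, not the cited paper's design.

def extract_features(signal):
    """Toy feature vector: energy, zero-crossing rate, spectral centroid."""
    energy = float(np.mean(signal ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2)
    mag = np.abs(np.fft.rfft(signal))
    centroid = float(np.sum(np.arange(mag.size) * mag) / (np.sum(mag) + 1e-12))
    return np.array([energy, zcr, centroid])

def train(utterances, labels):
    """'Model with corresponding parameters': one feature centroid per emotion."""
    feats = np.stack([extract_features(u) for u in utterances])
    labels = np.array(labels)
    return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels.tolist())}

def recognize(model, signal):
    """Match a new utterance to the nearest emotion centroid."""
    f = extract_features(signal)
    return min(model, key=lambda lab: float(np.linalg.norm(f - model[lab])))
```

A real system would replace the toy features with MFCCs or prosodic descriptors and the centroid matcher with a trained classifier, but the structure (extract, model, match) is the same.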
“…Poorna et al [18] developed a speech emotion recognition mechanism for the Arabic population. A speech database eliciting emotions such as surprise, anger, disgust, happiness, neutrality, and sadness was developed from 14 non-native yet proficient speakers of the language.…”
Section: Related Work
confidence: 99%
“…According to [16,17], the techniques in neural style transfer [18] can be applied to spectrograms, since a spectrogram is a two-dimensional representation of audio frequencies with respect to time. In [19], Poorna et al applied a multistage learning network for classifying speech emotions in the Arabic-speaking community. Kwon [20] proposed an artificial intelligence-assisted deep stride convolutional neural network architecture using the plain nets strategy to learn discriminative and salient features from spectrograms of speech signals that are enhanced in prior steps.…”
Section: Related Work
confidence: 99%