2022
DOI: 10.1109/taslp.2021.3133196

Multi-View Speech Emotion Recognition Via Collective Relation Construction

Cited by 29 publications (8 citation statements)
References 48 publications
“…Initially, EMD algorithms were widely used in signal processing fields such as mechanical fault diagnosis. In recent years, EMD has been applied to the analysis and enhancement of acoustic features [18].…”
Section: A. The Acoustic Feature Mapping Module (AFMM)
Mentioning, confidence: 99%
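The EMD mentioned in this excerpt decomposes a signal by iteratively "sifting" out intrinsic mode functions. A minimal sketch of the sifting step, assuming a toy two-tone signal (real EMD implementations add proper stopping criteria and boundary handling; everything here is illustrative, not from the cited paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    # One simplified EMD sifting pass: subtract the mean of the
    # cubic-spline upper and lower envelopes from the signal.
    n = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] <= x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return x  # too few extrema to build envelopes
    up_knots = [0] + maxima + [len(x) - 1]   # crude endpoint anchoring
    lo_knots = [0] + minima + [len(x) - 1]
    upper = CubicSpline(up_knots, x[up_knots])(n)
    lower = CubicSpline(lo_knots, x[lo_knots])(n)
    return x - (upper + lower) / 2.0

# Toy signal: a fast 40 Hz tone riding on a slow 4 Hz tone.
t = np.linspace(0.0, 1.0, 512)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)
h = x
for _ in range(3):  # a few sifting passes toward the first (fast) IMF
    h = sift_once(h)
```

After a few passes the slow component is largely absorbed into the envelope mean, so `h` approximates the fast mode; repeating on the residue `x - h` would yield further modes.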
“…Similarly, Pan et al. [17] proposed a strategy for SER that combines an Evolutionary Algorithm (EA) with Empirical Mode Decomposition (EMD) to improve the emotion recognition rate. Hou et al. [18] investigated the role of multi-view speech spectrograms, extracting multi-view features with an attention network and a collective relation network. The results of the above methods show that it is feasible to exploit multi-view speech representations.…”
Section: A. Acoustic Features Extraction
Mentioning, confidence: 99%
“…The dimension of the features is 43. We use leave-one-speaker-out (LOSO) 10-fold cross-validation to provide an accurate assessment of the proposed IMEMD-CRNN model (Hou et al., 2022). In each LOSO fold, utterances of 8 speakers form the training set, one speaker's utterances serve as the validation set, and the left-out speaker's utterances form the test set.…”
Section: Results
Mentioning, confidence: 99%
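The speaker-independent split described in this excerpt can be sketched with scikit-learn's `LeaveOneGroupOut`, which guarantees that all utterances of the held-out speaker are absent from training. The data below are toy values (10 speakers, 43-dim features as in the excerpt; class labels and feature values are invented), and the validation speaker would be carved out of each training fold in the same way:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 43))            # 30 toy utterances, 43-dim features
y = rng.integers(0, 4, size=30)          # hypothetical emotion labels
speakers = np.repeat(np.arange(10), 3)   # 10 speakers, 3 utterances each

logo = LeaveOneGroupOut()
folds = list(logo.split(X, y, groups=speakers))
n_folds = len(folds)                     # one fold per speaker

for train_idx, test_idx in folds:
    # the test fold holds exactly one speaker, unseen during training
    assert np.unique(speakers[test_idx]).size == 1
    assert np.intersect1d(speakers[train_idx], speakers[test_idx]).size == 0
```

Speaker-grouped splitting is what makes the evaluation speaker-independent: a plain 10-fold split would leak utterances of the same speaker into both train and test sets.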
“…The unweighted accuracy of our method reaches 93.54%, exceeding the SOTA method by 1.03%. To verify that this improvement is statistically significant relative to the SOTA method (the method proposed by Hou et al. (2022)), a paired-sample t-test is used. The null hypothesis is that the pairwise difference between the UA values of the two methods has a mean equal to zero.…”
Section: Results
Mentioning, confidence: 99%
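The paired-sample t-test described in this excerpt compares the two methods' per-fold accuracies on the same folds. A sketch using `scipy.stats.ttest_rel` (the per-fold UA numbers below are invented for illustration, not taken from either paper):

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold unweighted accuracies (UA) for two methods
ua_proposed = np.array([0.945, 0.930, 0.952, 0.921, 0.944])
ua_baseline = np.array([0.930, 0.925, 0.935, 0.915, 0.930])

# H0: the pairwise UA differences have mean zero
t_stat, p_value = stats.ttest_rel(ua_proposed, ua_baseline)
reject_h0 = p_value < 0.05
```

Pairing matters here: because both accuracies come from the same fold, the test operates on the differences and removes fold-to-fold variance, giving more power than an unpaired two-sample test.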