Multimodal emotion recognition using SDA-LDA algorithm in video clips
2021
DOI: 10.1007/s12652-021-03529-7

Cited by 9 publications (9 citation statements). References 40 publications.
“…It is remarkable that our approach surpasses several supervised competitors ([42], [48], [44], [28]) by a margin of 2-35%, despite working in a more difficult (unsupervised) setting. It also performs on par with the supervised approaches [27], [52]. The results for CMU-MOSEI [43] are given in Table IV.…”
Section: Comparisons With the State-of-the-Art Methods (mentioning; confidence: 99%)
“…Indeed, it is very common in the MER literature to apply the feature extraction step separately. This is performed on each modality by using either hand-crafted formulations ([52], [53], [29], [26], [27], [43]) and/or deep learning architectures ([42], [26], [27]). Examples of acoustic features include the log-Mel spectrogram [27]; pitch and voiced/unvoiced segmenting features [26], [43], [29]; MFCCs [28], [26], [43], [29]; and features extracted from SoundNet [42].…”
Section: Related Work (mentioning; confidence: 99%)
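To make the per-modality, hand-crafted acoustic features named in that statement concrete, here is a minimal sketch of extracting a log-Mel spectrogram, MFCCs, and a pitch track for one clip with librosa. The function name, sampling rate, Mel/MFCC sizes, and mean pooling are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal sketch of per-modality, hand-crafted acoustic feature extraction
# (log-Mel spectrogram, MFCCs, pitch). All parameter values here are
# illustrative assumptions, not settings from the cited works.
import numpy as np
import librosa

def extract_acoustic_features(wav_path, sr=16000, n_mels=64, n_mfcc=13):
    # Load the audio track of a clip at a fixed sampling rate.
    y, sr = librosa.load(wav_path, sr=sr)

    # Log-Mel spectrogram, as used in e.g. [27].
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)

    # MFCCs, as used in e.g. [28], [26], [43], [29].
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

    # Frame-level pitch estimate (YIN); voiced/unvoiced decisions could be
    # derived by thresholding, which is omitted here.
    f0 = librosa.yin(y, fmin=65.0, fmax=400.0, sr=sr)

    # Pool each frame-level descriptor over time into one clip-level vector.
    return np.concatenate([
        log_mel.mean(axis=1),
        mfcc.mean(axis=1),
        [f0.mean()],
    ])
```

Each cited system pools or models these frame-level descriptors differently (for instance with recurrent or attention layers); the mean pooling above is only the simplest clip-level summary.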