2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
DOI: 10.1109/bibm.2018.8621129

Correlated Attention Networks for Multimodal Emotion Recognition

Cited by 19 publications (12 citation statements); References 9 publications.
“…That is because both MEI-PCANet and GoogLeNet model consider the tiny characteristics of the target, and the multilayer network is used to describe the target, which increases the detection accuracy. Comparing the original CAN model [54], the proposed HVG-based sparse classification scheme has improved the recognition rate effectively, which indicates the efficiency of our proposed hierarchical visual cognition model. Original SRC model achieves the lowest recognition rate.…”
Section: Results (mentioning)
confidence: 75%
“…The proposed model is compared with many classical and recent algorithms: correlated attention network (CAN) [54], motion energy image-based principal component analysis network (MEI-PCANet) [17], appearance and motion deepnet (AMDN) model [55], GoogLeNet in [56] and VGG16 in [57], as well as the model in [44].…”
Section: Results (mentioning)
confidence: 99%
“…This indicates that the three feature extraction techniques performed differently on the DEAP dataset by extracting different discriminatory features from it. The various best results obtained by each of the three modalities and features are state of the art and better than the results obtained in recent research studies [7, 33–40, 77–89] that utilized the DEAP dataset. These results are also better than those reported by [34, 36–38], despite the trending deep learning approaches applied in those studies.…”
Section: Discussion (mentioning)
confidence: 78%
“…
Method            Valence (%)     Arousal (%)
Tang-2017 [25]    83.82 ± 5.01    83.23 ± 2.61
Liu-2016 [26]     85.20 ± 4.47    80.50 ± 3.39
Liu-2019 [27]     85.62 ± 3.48    84.33 ± 2.25
Qiu-2018 [28]     86.45 ± /       84.79 ± /
Yin-2021 [29]     90.45 ± 3.09    90.60 ± 2.62
Yang-2018 [30]    90.80 ± 3.08    91.03 ± 2.99
Liao-2020 [31]    91.95 ± /       93.06 ± /
Ma-2019 [32]      92.30 ± 1.55    92.87 ± 2.11
Huang-2021 [15]   94.38 ± 2.61    94.72 ± 2.56
Cui-2020 [23]     96.65 ± 2.65    97.11 ± 2.01
MSBAM             98.89 ± 1.03    98.87 ± 0.92
…”
Section: Methods (mentioning)
confidence: 99%
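The accuracies in the table above are reported as mean ± standard deviation across subjects, the usual convention in subject-dependent DEAP experiments. The sketch below illustrates only that reporting convention; the random stand-in data, feature dimension, and RBF-SVM classifier are assumptions for illustration and do not reproduce MSBAM or any method cited in the table.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Hypothetical stand-in data: 32 subjects with 40 trials each (DEAP-like),
# random features and balanced binary valence labels. The feature size and
# the RBF-SVM classifier are illustrative assumptions only.
rng = np.random.default_rng(0)
subjects = [(rng.normal(size=(40, 160)), np.tile([0, 1], 20)) for _ in range(32)]

per_subject_acc = []
for X_s, y_s in subjects:
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    fold_acc = [
        SVC(kernel="rbf").fit(X_s[tr], y_s[tr]).score(X_s[te], y_s[te])
        for tr, te in skf.split(X_s, y_s)
    ]
    per_subject_acc.append(np.mean(fold_acc))  # this subject's mean accuracy

# Mean ± standard deviation across subjects, the format used in the table.
print(f"{100 * np.mean(per_subject_acc):.2f} ± {100 * np.std(per_subject_acc):.2f}")
```

With real features in place of the random arrays, the printed value corresponds directly to one cell of the table (e.g. a valence column entry).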