2018
DOI: 10.1007/978-3-030-04221-9_20
Multi-view Emotion Recognition Using Deep Canonical Correlation Analysis

Cited by 42 publications (23 citation statements)
References 12 publications
“…Thus, minimizing loss is equivalent to maximizing correlation. The feature fusion layer is defined as the weighted average of the two transformed features [60]. Finally, the fused multimodal feature is fed into the support vector machine (SVM) to train the affective model.…”
Section: Deep Canonical Correlation Analysis Model
confidence: 99%
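The fusion step quoted above can be sketched in a few lines. This is a minimal illustration, not the cited authors' code: the view names, dimensions, and the weight `alpha` are hypothetical, and the SVM classification stage is only noted in a comment.

```python
import numpy as np

def fuse_features(h1, h2, alpha=0.5):
    """Weighted average of two transformed feature views.

    In the cited pipeline, the fused vector would then be fed
    to an SVM to train the affective model.
    """
    return alpha * h1 + (1.0 - alpha) * h2

# Toy example: 100 samples, 8-dim transformed features per view.
rng = np.random.default_rng(0)
h_view1 = rng.normal(size=(100, 8))  # e.g. DCCA-transformed EEG features
h_view2 = rng.normal(size=(100, 8))  # e.g. DCCA-transformed eye-movement features

fused = fuse_features(h_view1, h_view2, alpha=0.5)
print(fused.shape)  # (100, 8)
```

With `alpha=0.5` the fusion reduces to a plain element-wise mean of the two views; unequal weights let one modality dominate the fused representation.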
“…In the experiment, since multi-modal features such as gaze features and visual features were used, this fusion method is considered suitable for comparison. Qiu et al. proposed an emotion-category classification method [44] that fuses bio-information based on deep canonical correlation analysis (Deep CCA) [45]. Thus, we used the above state-of-the-art method as comparative method 4 (CM4), using gaze features [42] and CNN features.…”
Section: Experimental Conditions
confidence: 99%
“…A recently developed deep canonical correlation analysis (DCCA) allows us to examine this possibility by approximating a non-linear mapping from neuronal ensemble activities to canonical variables with a DNN (Andrew et al, 2013). Previous non-invasive brain-computer interface (BCI) studies showed the effectiveness of DCCA as a means of feature extraction from electroencephalogram associated with various covariates of interest, such as eye movements and visual stimulus frequencies (Vu et al, 2016;Qiu et al, 2018;Liu et al, 2019). For example, Vu et al successfully improved the performance of the steady-state visual evoked potential-based BCI using DCCA-based feature extraction (Vu et al, 2016).…”
Section: Introduction
confidence: 99%