2019
DOI: 10.1007/s13735-019-00185-8

Multi-level context extraction and attention-based contextual inter-modal fusion for multimodal sentiment analysis and emotion classification


Cited by 29 publications (10 citation statements). References 33 publications.
“…The canonical correlation coefficient measures the strength of the connection between these two sets of variables. This maximization technique aims to map the high-dimensional relationship between the two sets of variables to some typical variables [17].…”
Section: Research Methods of Neuronal Apoptosis
confidence: 99%
“…Unlike traditional manual retrieval methods, automatic retrieval saves substantial labor costs. At the same time, matching the accuracy of manual analysis remains a difficult problem for automatic analysis [12]. As an important means of automatic music retrieval, classifying music according to the emotion it expresses is attracting the attention of researchers from different fields.…”
Section: Introduction
confidence: 99%
“…M. G. Huddar et al. [13] used the Multimodal Corpus of Sentiment Intensity dataset and the Interactive Emotional Dyadic Motion Capture dataset for emotion detection and multimodal sentiment analysis. Initially, a Z-score standardization approach was employed on the audio modality for voice-intensity thresholding and voice normalization, and then 6392 feature vectors were extracted from the audio signals: arithmetic mean, pitch, standard deviation, amplitude mean, voice intensity, root quadratic mean, etc.…”
Section: Related Work
confidence: 99%
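The Z-score standardization step mentioned above can be sketched as follows. This is a generic illustration, not the cited authors' implementation; the `intensity` values are hypothetical stand-ins for audio-modality features:

```python
import numpy as np

def z_score(x):
    """Standardize a signal to zero mean and unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical voice-intensity values (not from the cited dataset)
intensity = np.array([0.2, 0.5, 0.9, 0.4, 0.6])
z = z_score(intensity)

# After standardization the mean is ~0 and the std is ~1,
# which makes thresholding comparable across recordings.
print(z)
```

Standardizing before thresholding means a single cutoff (e.g. one standard deviation above the mean) applies uniformly across speakers and recording conditions.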