2017 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2017.134
Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis

Cited by 146 publications (63 citation statements)
References 8 publications
“…We compare HFFN with following multimodal algorithms: RMFN (Liang et al, 2018a), MFN (Zadeh et al, 2018a), MCTN (Pham et al, 2019), BC-LSTM (Poria et al, 2017b), TFN, MARN (Zadeh et al, 2018b), LMF, MFM (Tsai et al, 2019), MR-RF (Barezi et al, 2018), FAF (Gu et al, 2018b), RAVEN (Wang et al, 2019), GMFN (Zadeh et al, 2018c), Memn2n (Sukhbaatar et al, 2015), MM-B2, CHFusion (Majumder et al, 2018), SVM Trees (Rozgic et al, 2012), CMN, C-MKL (Poria et al, 2016b) and CAT-LSTM (Poria et al, 2017c).…”
Section: Comparison With Baselines (mentioning; confidence: 99%)
“…Literature consists of numerous fusion techniques for multimodal data (Atrey et al, 2010;Zadeh et al, 2017;Poria et al, 2017c). Exploring these on CMN, however, is beyond the scope of this paper and left as a future work.…”
Section: Fusion (mentioning; confidence: 99%)
“…Advancing from the existing data fusion method, various data fusion methodologies have been proposed by applying attention mechanisms rather than simply focusing on various inter-modality dynamics through various combinations of data fusion. For example, CAT-LSTM improves performance by applying an attention module when performing data fusion or utterance classification [17]. A study to improve the performance of multimodal sentiment analysis by using contextual inter-modal attention has been proposed [18].…”
Section: A Multimodal Sentiment Analysis (mentioning; confidence: 99%)
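The statement above describes attention-based data fusion: instead of simply concatenating modalities, an attention module weighs how much each modality (text, audio, video) contributes per utterance. As a rough, hypothetical sketch of such inter-modal attention (not the actual CAT-LSTM or [18] implementation; function and variable names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_modal_attention_fusion(text, audio, video):
    """Fuse per-utterance modality features with soft attention
    across modalities (illustrative sketch, not the paper's method).

    Each input has shape (num_utterances, feature_dim).
    Returns the fused features (num_utterances, feature_dim)
    and the attention weights (3, num_utterances).
    """
    # Stack modalities into one tensor: (3, n, d)
    modalities = np.stack([text, audio, video])
    # A simple shared context: the mean across modalities, (n, d)
    context = modalities.mean(axis=0)
    # Score each modality against the context per utterance: (3, n)
    scores = np.einsum('mnd,nd->mn', modalities, context)
    # Attend across the modality axis so weights sum to 1 per utterance
    weights = softmax(scores, axis=0)
    # Weighted sum of modality features: (n, d)
    fused = np.einsum('mn,mnd->nd', weights, modalities)
    return fused, weights
```

The same idea extends to contextual attention by scoring each utterance against its neighbours in the dialogue rather than against a per-utterance modality mean.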
“…The evaluation was conducted by dividing the training, validation, and test sets according to the split criteria used in previous research [17], [18]. In the experiment using the CMU-MOSI dataset, we evaluated for binary classification of the 'positive' and 'negative' polarities of sentiment.…”
Section: Dataset (mentioning; confidence: 99%)