Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413909
MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos

Cited by 24 publications (17 citation statements)
References 31 publications
“…• Zero Padding (ZP): Padding the feature representations of the missing modality with zero is a widely used way to cope with incomplete modalities [30,31,32]. For this method, we consider two forms of φ to fuse features f and g: addition and concatenation.…”
Section: Baseline Methods (mentioning)
confidence: 99%
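The Zero Padding (ZP) baseline quoted above is simple enough to sketch. The snippet below is an illustrative, minimal Python/PyTorch version, assuming both modalities have already been encoded into feature vectors of a common dimension; the function name zero_pad_fuse and the shapes used are assumptions of this sketch, not details taken from the cited works.

```python
import torch


def zero_pad_fuse(f, g, mode="concat"):
    """Zero Padding (ZP) sketch: a missing modality is replaced by an
    all-zero feature vector, then phi fuses the two features by either
    element-wise addition or concatenation.
    Names and shapes are illustrative, not from the cited works."""
    if f is None and g is None:
        raise ValueError("at least one modality must be present")
    if f is None:
        f = torch.zeros_like(g)   # pad the missing modality with zeros
    if g is None:
        g = torch.zeros_like(f)
    if mode == "add":             # phi = addition (requires equal dims)
        return f + g
    if mode == "concat":          # phi = concatenation along the feature axis
        return torch.cat([f, g], dim=-1)
    raise ValueError(f"unknown fusion mode: {mode}")


# Example: a 512-d text feature is available, the visual feature is missing.
text_feat = torch.randn(512)
print(zero_pad_fuse(text_feat, None, mode="add").shape)     # torch.Size([512])
print(zero_pad_fuse(text_feat, None, mode="concat").shape)  # torch.Size([1024])
```

With addition the fused feature keeps the per-modality dimension, while concatenation doubles it, so downstream layers have to be sized for whichever form of φ is chosen.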
“…b) Emotional Conversation Datasets: Generally, the emotional perception ability of a dialogue model is defined as the task: emotion recognition in conversations (ERC) [40] or emotion reasoning (ER) [46]. Datasets, e.g., IEMOCAP [38], Mastodon [39], MELD [40], EMOTyDA [42], EDA [50], MEmoR [46] and M³ED [43], are usually used for the ERC or ER task. These datasets generally have small sizes, with fewer than 10K dialogues, making them unsuitable for conversation generation tasks.…”
Section: B. Conversation Datasets (mentioning)
confidence: 99%
“…PELD [48] is proposed for predicting emotion for response using BF personality traits and VAD vector, in which the personality traits are averaged with personality traits of FriendsPersona [47]. MEmoR [46], a recent multimodal emotion reasoning dataset used for the task of multimodal emotion reasoning, provides a multimodal conversation context, 14 fine-grained emotions and 3 types of personalities (16PF, BF and MBTI). MEmoR is mainly used for the task of multimodal emotion reasoning, in which the personalities are used for improving the performance of emotion reasoning.…”
Section: B. Conversation Datasets (mentioning)
confidence: 99%
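The statement above spells out what a MEmoR example is built from: a multimodal conversation context, one of 14 fine-grained emotion labels, and three personality inventories (16PF, BF, MBTI). As a rough illustration only, and not the dataset's released schema, such a sample could be modeled like this (all field names, paths, and example values are hypothetical):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class MemorSample:
    """Unofficial illustration of what a MEmoR-style example carries:
    a multimodal conversation context, a target speaker, one of 14
    fine-grained emotion labels, and 16PF / Big Five / MBTI personality
    profiles. Field names and types are assumptions, not the released schema."""
    clip_id: str
    utterances: List[str]                 # textual conversation context
    video_path: str                       # visual stream of the clip
    audio_path: str                       # acoustic stream of the clip
    target_speaker: str                   # person whose emotion is reasoned about
    emotion: str                          # one of the 14 fine-grained labels
    personality_16pf: Optional[List[float]] = None  # 16PF trait scores
    personality_bf: Optional[List[float]] = None    # Big Five (BF) trait scores
    personality_mbti: Optional[str] = None           # MBTI type, e.g. "ENFP"


# Hypothetical instance; "joy" is a placeholder label, see the MEmoR paper
# for the actual 14-class emotion inventory.
example = MemorSample(
    clip_id="clip_0042",
    utterances=["A: How was the conference?", "B: It went better than I hoped!"],
    video_path="clips/clip_0042.mp4",
    audio_path="clips/clip_0042.wav",
    target_speaker="B",
    emotion="joy",
    personality_mbti="ENFP",
)
```

The optional personality fields mirror the statement's point that personality information is auxiliary input used to improve emotion reasoning rather than a prediction target.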
“…DialoGPT [2], CDialGPT [7]) can only learn personalized or emotional expressions through the dialogue context (single-modal or multi-modal) provided by the corpus. b) Emotional Conversation Datasets: Generally, the emotional perception ability of a dialogue model is defined as the task: emotion recognition in conversations (ERC) [40] or emotion reasoning (ER) [46]. Datasets, e.g., IEMOCAP [38], Mastodon [39], MELD [40], EMOTyDA [42], EDA [50], MEmoR [46] and M³ED [43], are usually used for the ERC or ER task.…”
Section: B. Conversation Datasets (mentioning)
confidence: 99%