Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.108

Beyond Emotion: A Multi-Modal Dataset for Human Desire Understanding

Abstract: Desire is a strong wish to do or have something, which involves not only a linguistic expression, but also underlying cognitive phenomena driving human feelings. As the most primitive and basic human instinct, conscious desire is often accompanied by a range of emotional responses. As a strikingly understudied task, it is difficult for machines to model and understand desire due to the unavailability of benchmarking datasets with desire and emotion labels. To bridge this gap, we present MSED, the first multi-m…

Cited by 8 publications (2 citation statements). References 23 publications (24 reference statements).
“…Representative datasets include MELD, IEMOCAP, etc. For example, Jia et al. (2022) constructed a multi-modal emotion and desire recognition dataset called MSED. It consists of 9,190 text-image pairs collected from a wide range of social media resources, e.g., Twitter, Getty Images, and Flickr.…”
Section: Multi-modal Emotion Recognition
Mentioning, confidence: 99%
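
The statement above describes MSED's basic unit of data: a text-image pair annotated with emotion and desire labels. Below is a minimal Python sketch of how such a record might be represented; the class name, field names, and example values are hypothetical illustrations under that assumption, not MSED's actual schema.

from dataclasses import dataclass

@dataclass
class DesireSample:
    """One text-image pair with emotion and desire labels.

    Field names are illustrative; the real MSED schema may differ.
    """
    text: str        # post or caption text (e.g., from Twitter or Flickr)
    image_path: str  # path to the paired image file
    emotion: str     # emotion label accompanying the expressed desire
    desire: str      # desire category label

# Hypothetical example record, for illustration only.
sample = DesireSample(
    text="Finally booked the trip I have dreamed about for years!",
    image_path="images/000123.jpg",
    emotion="happiness",
    desire="vacation",
)
print(sample.emotion, sample.desire)

A corpus of 9,190 such pairs would then simply be a list of these records, one per text-image pair.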
“…Intentions have been extensively explored in psychology tests, e.g., behavioral re-enactment (Meltzoff, 1995), action prediction (Phillips et al., 2002), intention explanation (Smiley, 2001), and intention attribution to abstract figures (Castelli, 2006). [Table: benchmark datasets in the ATOMS framework, comparing task format (inference, QA, NLG, MARL), data source, modality (text, cartoon, images, 2D grid, 3D simulation), and covered abilities; entries include EPISTEMIC REASONING (Cohen, 2021), TOMI (Nematzadeh et al., 2018), HI-TOM (He et al., 2023), MINDGAMES (Sileo and Lernould, 2023), ADV-CSFB (Shapira et al., 2023a), CONVENTAIL (Zhang and Chai, 2010), SOCIALIQA (Sap et al., 2019), BEST (Tracey et al., 2022), FAUXPAS-EAI (Shapira et al., 2023b), COKE, TOM-IN-AMC, G4C (Zhou et al., 2023b), VISUALBELIEFS (Eysenbach et al., 2016), TRIANGLE COPA (Gordon, 2016), MSED (Jia et al., 2022), BIB (Gandhi et al., 2021), AGENT (Shu et al., 2021), MTOM (Rabinowitz et al., 2018), SYMMTOM (Sclar et al., 2022), and MINDCRAFT (Bara et al., 2021; Bara et al., 2023).] Desires.…”
Section: Abilities In Theory Of Mind Space (ATOMS) Framework
Mentioning, confidence: 99%