2023
DOI: 10.1080/10447318.2023.2228983

Emo-MG Framework: LSTM-based Multi-modal Emotion Detection through Electroencephalography Signals and Micro Gestures

Cited by 4 publications (1 citation statement)
References 70 publications
“…Sarcasm detection, the task of automatically identifying sarcastic language, has received significant research attention due to its practical applications in sentiment analysis, opinion mining, and social media analysis. Compared to other forms of sentiment analysis, the complexity of sarcasm detection lies in the difficulty of identifying its appropriate contextual dependencies or distinguishing between literal meaning and the author's underlying intent. Some studies have explored the combination of cognitive and physiological features of humans, such as Electroencephalogram (EEG) signals [10,11], facial data [12,13], expression habits [14] and micro-gestures [15]. Compared to physiological feature data, multimodal data from social networks is more easily obtainable and contains richer semantic information.…”
Section: Multi-modal Sarcasm Detection (mentioning)
confidence: 99%