2021
DOI: 10.1016/j.eswa.2021.115507
Multimodal sentiment and emotion recognition in hyperbolic space

Cited by 15 publications (7 citation statements); references 27 publications.
“…Hong et al. (2023a) show that hyperbolic classification is beneficial for visual anomaly recognition tasks, such as out-of-distribution detection in image classification and segmentation. Araño et al. (2021) use hyperbolic layers to perform multimodal sentiment analysis based on the audio, video, and text modalities. Ahmad & Lecue (2022) also show the benefit of hyperbolic space for object recognition with ultra-wide field-of-view lenses.…”
Section: Sample-to-Gyroplane Learning
Mentioning confidence: 99%
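The hyperbolic layers referred to above typically replace a Euclidean classifier head with one that measures distances in the Poincaré ball. The paper's exact layer is not reproduced here; the following is a minimal, prototype-based sketch (all names and values are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def exp_map_zero(v, c=1.0, eps=1e-7):
    """Project Euclidean vectors onto the Poincare ball via the
    exponential map at the origin: exp_0(v) = tanh(sqrt(c)|v|) v / (sqrt(c)|v|)."""
    norm = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), eps)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def poincare_dist(x, y, eps=1e-7):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq = np.sum((x - y) ** 2, axis=-1)
    den = (1 - np.sum(x ** 2, axis=-1)) * (1 - np.sum(y ** 2, axis=-1))
    return np.arccosh(1 + 2 * sq / np.maximum(den, eps))

# A fused multimodal feature (audio + video + text) and per-class
# prototypes; random stand-ins for learned representations.
rng = np.random.default_rng(0)
feat = exp_map_zero(rng.normal(size=8))
protos = exp_map_zero(rng.normal(size=(3, 8)))  # 3 sentiment classes

# Classification: nearest prototype under the hyperbolic metric.
logits = -np.array([poincare_dist(feat, p) for p in protos])
pred = int(np.argmax(logits))
```

The design choice hyperbolic classifiers exploit is that distances near the ball's boundary grow exponentially, which suits hierarchical or tree-like label structure.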
“…Including the calculations of the forget gate, output gate, and input gate, the formula for calculating the candidate cell is as follows (Abdar, Nikou & Gandomi, 2021):…”
Section: MVCS Model Frame
Mentioning confidence: 99%
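The formula itself is truncated in the excerpt above. Assuming the statement refers to the standard LSTM formulation (an assumption, not the cited paper's exact notation), the candidate cell is typically written as:

```latex
\tilde{c}_t = \tanh\left( W_c x_t + U_c h_{t-1} + b_c \right),
\qquad
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
```

where $f_t$ and $i_t$ are the forget and input gates mentioned in the excerpt.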
“…Araño et al. [26] proposed a method for recognizing emotions and sentiments that integrated hyperbolic space into neural network models. They added a hyperbolic output layer to existing state-of-the-art models and found that it has the potential to improve the model's prediction accuracy.…”
Section: Literature Review
Mentioning confidence: 99%