2018 15th International Conference on Ubiquitous Robots (UR)
DOI: 10.1109/urai.2018.8441795

Decision-Level Fusion Method for Emotion Recognition using Multimodal Emotion Recognition Information

Cited by 20 publications (6 citation statements, published 2019–2024)
References 11 publications
“…Data fusion is a critical step involved in multimodal emotion recognition for producing the estimation. The literature about emotional data fusion involves three data fusion techniques, which are early fusion (feature fusion) [42,43], late fusion (decision fusion) [44,45,46] and hybrid approaches [17,47,48].…”
Section: Background and Literature Review on Multimodal Emotion Recognition
confidence: 99%
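To make the distinction drawn in the statement above concrete, the following minimal Python sketch (using scikit-learn) contrasts early fusion, which concatenates features before a single classifier, with late fusion, which averages the class probabilities of per-modality classifiers. The data, feature dimensions, and classifier choice are illustrative assumptions, not taken from any of the cited systems.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes = 600, 4
y = rng.integers(0, n_classes, size=n)
# Hypothetical per-modality features (stand-ins for face and speech descriptors).
X_face = rng.normal(size=(n, 32)) + y[:, None] * 0.4
X_speech = rng.normal(size=(n, 20)) + y[:, None] * 0.3
tr, te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Early fusion: concatenate the feature vectors and train one classifier.
X_all = np.hstack([X_face, X_speech])
early = LogisticRegression(max_iter=1000).fit(X_all[tr], y[tr])
acc_early = early.score(X_all[te], y[te])

# Late fusion: train one classifier per modality, then average class probabilities.
clf_face = LogisticRegression(max_iter=1000).fit(X_face[tr], y[tr])
clf_speech = LogisticRegression(max_iter=1000).fit(X_speech[tr], y[tr])
proba = (clf_face.predict_proba(X_face[te]) + clf_speech.predict_proba(X_speech[te])) / 2
acc_late = (proba.argmax(axis=1) == y[te]).mean()

print(f"early fusion accuracy: {acc_early:.2f}, late fusion accuracy: {acc_late:.2f}")

A hybrid approach, the third technique named in the quote, would combine both levels, for example by feeding decision scores back in alongside selected raw features.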
“…These outputs can be used separately or jointly through machine learning methods. Some studies have implemented decision level fusion as reported in [44,45,49,54,55].…”
Section: Background and Literature Review on Multimodal Emotion Recognition
confidence: 99%
“…By integrating faces, gestures, and voice at the decision level, the system could better understand the user's intentions. Song et al. [35] used K-NN as a classifier in the decision-making layer to fuse the facial expression and speech emotion recognition results, which improved the recognition rate of existing emotion recognizers on social robots. In summary, multimodal fusion uses one modality to resolve the ambiguity left by another single modality.…”
Section: Related Work
confidence: 99%
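The quoted description of Song et al. [35] suggests a two-stage pattern: unimodal recognizers emit decision scores, and a K-NN classifier in the decision layer fuses them. The sketch below is a hedged reconstruction of that general pattern only; the data, the unimodal models, and the score representation are assumptions, not the authors' actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n, n_classes = 600, 4
y = rng.integers(0, n_classes, size=n)
# Synthetic stand-ins for facial-expression and speech features.
X_face = rng.normal(size=(n, 32)) + y[:, None] * 0.4
X_speech = rng.normal(size=(n, 20)) + y[:, None] * 0.3
tr, te = np.arange(0, 400), np.arange(400, n)

# Stage 1: independent unimodal recognizers produce class-probability vectors.
face_clf = LogisticRegression(max_iter=1000).fit(X_face[tr], y[tr])
speech_clf = LogisticRegression(max_iter=1000).fit(X_speech[tr], y[tr])

def decision_scores(idx):
    # Stack both modalities' probability vectors into one decision-level input.
    return np.hstack([face_clf.predict_proba(X_face[idx]),
                      speech_clf.predict_proba(X_speech[idx])])

# Stage 2: K-NN in the decision-making layer classifies the stacked score vectors.
# (A real system would fit the fusion layer on held-out scores to avoid
# overfitting to the stage-1 training data.)
fusion = KNeighborsClassifier(n_neighbors=5).fit(decision_scores(tr), y[tr])
print("fused accuracy:", fusion.score(decision_scores(te), y[te]))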
“…Emotion recognition from features such as facial expressions, gestures, and walks has been addressed in the literature surveyed in [1,2,4,29]. Multimodal and context-aware affect recognition models are also available for that purpose [19][20][21][33].…”
Section: Multimodal Behavior Analysis in Social Robotics
confidence: 99%