2022
DOI: 10.1109/taffc.2019.2954118

Multiple Instance Learning for Emotion Recognition Using Physiological Signals

Abstract: The problem of continuous emotion recognition has been the subject of several studies. The proposed affective computing approaches employ sequential machine learning algorithms for improving the classification stage, accounting for the time ambiguity of emotional responses. Modeling and predicting the affective state over time is not a trivial problem because continuous data labeling is costly and not always feasible. This is a crucial issue in real-life applications, where data labeling is sparse and possibly…

Cited by 33 publications (38 citation statements). References: 66 publications.
“…For example, Romeo et al. [70] designed an SVM-based multiple instance learning algorithm to recognize valence and arousal for each fine-grained instance, reporting its accuracy on high arousal. These kinds of methods are also widely used for fine-grained emotion recognition with different data modalities such as facial expressions [71] and vocal features [72] (e.g., pitch and loudness).…”
Section: Related Work (mentioning, confidence: 99%)
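The SVM-based multiple instance learning setup mentioned in the statement above can be illustrated with a minimal mi-SVM-style sketch: bags correspond to stimulus-level recordings, instances to short windows of physiological features, and only bag labels (e.g., low vs. high arousal) are available. The windowing, the linear kernel, and the relabelling loop below are illustrative assumptions, not the exact algorithm of Romeo et al. [70].

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_mi_svm(bags, bag_labels, n_iter=10):
    """mi-SVM-style training (sketch).

    bags: list of (n_instances_i, n_features) arrays, one per stimulus.
    bag_labels: sequence of {0, 1} labels (e.g., low/high arousal), one per bag.
    """
    # Initialise every instance with the label of its bag.
    X = np.vstack(bags)
    y = np.concatenate([np.full(len(b), lab) for b, lab in zip(bags, bag_labels)])
    clf = LinearSVC()
    for _ in range(n_iter):
        clf.fit(X, y)
        scores = clf.decision_function(X)
        start = 0
        for b, lab in zip(bags, bag_labels):
            s = scores[start:start + len(b)]
            if lab == 1:
                # Re-label instances of positive bags with the current classifier,
                # keeping at least the top-scoring instance positive.
                inst = (s > 0).astype(int)
                inst[np.argmax(s)] = 1
                y[start:start + len(b)] = inst
            # Instances of negative bags always stay negative.
            start += len(b)
    return clf

def predict_bag(clf, bag):
    # A bag is labelled positive if any of its instances is classified positive.
    return int(np.any(clf.decision_function(bag) > 0))
```

Because the classifier ultimately scores individual instances, this style of approach yields fine-grained (per-window) predictions even though training uses only stimulus-level labels.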
“…The main challenge for this kind of method is to extract and fuse the features both within and between instances, as the information residing only within an instance may not be enough to determine which emotion it represents. Previous works [70, 73] use a joint loss [74] over instances and bags (the instances under one video stimulus) to fuse the features within and between instances. However, this can lead to temporal ambiguity of emotions, as instances are not trained directly with their own emotion labels but instead with the labels of their bags [70].…”
Section: Related Work (mentioning, confidence: 99%)
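A minimal sketch of such a joint instance/bag loss follows, assuming that instances inherit the label of their bag (one bag per video stimulus) and that the bag score is obtained by mean-pooling instance scores; the pooling operator and the weight `alpha` are illustrative assumptions, not the exact formulation of [70, 73, 74].

```python
import torch
import torch.nn.functional as F

def joint_loss(instance_logits, bag_label, alpha=0.5):
    """Joint instance/bag loss (sketch).

    instance_logits: (n_instances,) raw scores for one bag.
    bag_label: 0.0 or 1.0 for the whole bag (stimulus).
    """
    # Instance-level term: every instance is weakly supervised by its bag label,
    # which is the source of the temporal ambiguity discussed above.
    inst_targets = torch.full_like(instance_logits, bag_label)
    inst_loss = F.binary_cross_entropy_with_logits(instance_logits, inst_targets)
    # Bag-level term: pool instance scores into a single bag prediction.
    bag_logit = instance_logits.mean()
    bag_loss = F.binary_cross_entropy_with_logits(
        bag_logit, instance_logits.new_tensor(bag_label))
    return alpha * bag_loss + (1.0 - alpha) * inst_loss
```

Because the instance term is driven only by the bag label, every window of a stimulus is pushed toward the same emotion, which is precisely the ambiguity the statement above points out.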
“…In fact, there is a plethora of opportunities where emotion recognition would be beneficial [7, 8, 9], from computer interaction to mental health (e.g., monitoring emotional responses during therapy). Emotions regulate body changes through activation of the central and peripheral nervous systems, which is expressed as behavioral responses (facial expression, speech, posture) and/or physiological responses (electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG)) [6, 10]. For instance, in the presence of emotional triggers, our heart activity changes, our facial expression changes, and our muscles tense [11].…”
Section: Introduction (mentioning, confidence: 99%)
“…For instance, in the presence of emotional triggers, our heart activity changes, our facial expression changes, and our muscles tense [11]. Emotional responses may therefore be described by quantifying this bodily information [10, 11].…”
Section: Introduction (mentioning, confidence: 99%)