2020
DOI: 10.3390/s20082308

The uulmMAC Database—A Multimodal Affective Corpus for Affective Computing in Human-Computer Interaction

Abstract: In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based HCI relevant emotional and cognitive load states. It consists of six experimental sequences, inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction…
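As a rough illustration of how the six induced states described in the abstract might be consumed downstream, the sketch below maps one participant's sequence recordings to their induced-state labels. The directory layout, the seqN.* file naming, and the label order are illustrative assumptions, not the corpus's documented structure.

```python
from pathlib import Path

# The six induced states named in the abstract, in experimental order.
# NOTE: the on-disk layout assumed below is hypothetical; consult the
# uulmMAC documentation for the actual file organization.
STATES = ["Interest", "Overload", "Normal", "Easy", "Underload", "Frustration"]

def label_sequences(participant_dir: str) -> dict[str, str]:
    """Map each sequence recording of one participant to its induced state.

    Assumes files named seq1.* ... seq6.* per participant directory
    (an assumption for illustration, not the corpus's naming scheme).
    """
    labels = {}
    for i, state in enumerate(STATES, start=1):
        for f in Path(participant_dir).glob(f"seq{i}.*"):
            labels[f.name] = state
    return labels
```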

Cited by 24 publications (16 citation statements) | References 60 publications

Citation statements:
“…The study in [19] proposes a fusion model based on verbal and nonverbal information to increase the emotion recognition capability of companion robots for children. The study in [20] created a multimodal emotion database that includes the visual, audio, physiological, depth and pose data of 60 participants. Researchers in [21] collected participants' facial expressions, heart rate, pupil diameter and EEG data by showing them specific visual stimuli.…”
Section: Background and Literature Review on Multimodal Emotion Recognition
confidence: 99%
“…Building around a human-computer interaction (HCI) setting, [20] addressed the challenging issue of inducing dialog-based, HCI-relevant emotional and cognitive load states by presenting a multimodal dataset for affective computing research. The dataset used an experimental mobile and interactive scenario design implemented based on a gamified generic paradigm.…”
Section: Facial Expression Recognition
confidence: 99%
“…The biosignal data used for developing the present workflow were acquired with the bioamplifier system g.MOBIlab+ from g.tec. The data are part of our previously acquired uulmMAC dataset (A Multimodal Affective Corpus for Affective Computing in Human-Computer Interaction), which is freely available for academic use and research applications [12].…”
Section: Psychophysiological Biosignals
confidence: 99%
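The workflow quoted above starts from raw physiological channels recorded with the g.MOBIlab+ amplifier. As a minimal, hedged sketch of a common first preprocessing step for such biosignals (not the cited authors' actual pipeline), the snippet below applies a zero-phase band-pass filter with SciPy; the sampling rate, band edges, and synthetic input are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal: np.ndarray, fs: float, lo: float, hi: float,
             order: int = 4) -> np.ndarray:
    """Zero-phase band-pass filter for a single biosignal channel.

    fs, lo, and hi are assumptions for illustration (e.g. fs=256 Hz for
    an EEG channel; the true rate depends on the device configuration).
    """
    nyq = 0.5 * fs
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, signal)

# Example on synthetic data standing in for one EEG channel:
fs = 256.0
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz tone + noise
alpha_band = bandpass(raw, fs, 8.0, 13.0)  # isolate the alpha range
```

Zero-phase filtering (filtfilt) is used here so the filtered signal stays temporally aligned with event markers, which matters when segmenting recordings by experimental sequence.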