2022
DOI: 10.1007/s11760-022-02257-5
Automatic reaction emotion estimation in a human–human dyadic setting using Deep Neural Networks

Cited by 4 publications (4 citation statements) · References 22 publications
“…Regarding how sensitive humans are to noticing discrepancies in human facial expressions even in milliseconds, any delay in facial data synchronization may induce distortions to the dataset. The work presented in this paper can be seen in line with the recent studies on dyadic human–robot interaction, for instance [ 30 , 31 , 32 , 33 ].…”
Section: Introductionsupporting
confidence: 82%
“…Based on the previous study [ 33 ], we next applied the Facial Expression Analysis (FEA) model that had worked well for videos over a huge number of frames (more precisely, over 2000 frames). We used this model to compare the above two models.…”
Section: Resultsmentioning
confidence: 99%
“…Lower ISC values for pairs than for the SS group could be due to communicative turn taking (see, e.g., turn taking in a question–reply setting, Bögels, 2020). In this case, one's response to the stimuli (e.g., a spontaneous smile or a glance toward the partner) is met with a unique reaction from the other person, in turn potentially provoking further nonverbal interchange based on subtle facial expressions, i.e., dyadic reaction emotion (Sham et al, 2022).…”
Section: Dyadic Behaviors Detected In Emotional Soundsmentioning
confidence: 99%