2019 International Conference on Multimodal Interaction (ICMI 2019)
DOI: 10.1145/3340555.3353765

Engagement Modeling in Dyadic Interaction

Cited by 26 publications (28 citation statements)
References 16 publications
“…These low-level signals are processed using EyesWeb and other external tools, such as pretrained machine learning models (Dermouche and Pelachaud, 2019; Wang et al., 2019), to extract high-level features about the user, such as their level of engagement.…”
Section: Dimensions of Study
confidence: 99%
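The quoted pipeline maps per-frame low-level signals to a high-level engagement estimate via a pretrained model. The following is a minimal sketch of that idea only; the feature set, the model file, and the temporal pooling are illustrative assumptions, not the EyesWeb configuration or the models used in the cited works.

```python
# Sketch: turning per-frame low-level behavioral features into an engagement
# score with a pretrained classifier. Feature names and the model path are
# hypothetical; real pipelines (e.g. EyesWeb patches) are configured differently.
import numpy as np
import joblib  # assumes a scikit-learn classifier saved with joblib.dump


def engagement_from_frames(frames: np.ndarray,
                           model_path: str = "engagement_clf.joblib") -> float:
    """frames: (T, F) array of per-frame features such as gaze angles,
    facial action-unit intensities, or posture-shift flags."""
    clf = joblib.load(model_path)                 # hypothetical pretrained model
    window = frames.mean(axis=0, keepdims=True)   # simple temporal pooling
    return float(clf.predict_proba(window)[0, 1]) # P(engaged) for the window
```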
“…This article is organized as follows: in Section 2, we review the main theories about adaptation on which our work relies, in particular Burgoon and colleagues' work; in Section 3, we present an overview of existing models that focus on adapting the ECA's behavior according to the user's behavior; in Section 4, we specify the dimensions our adaptation models focus on; in Section 5, we present the general architecture we conceived to endow our ECA with the capability of adapting its behavior to the user's reactions in real time; in Section 6, we describe the scenario we conceived to test the different adaptation models; in Sections 7-9, we report the implementation and evaluation of each of the three models. More details about them can be found in our previous articles (Biancardi et al., 2019b; Biancardi et al., 2019a; Dermouche and Pelachaud, 2019). We finally discuss the results of our work and possible improvements in Sections 10 and 11, respectively.…”
Section: Introduction
confidence: 99%
“…The agent that adapted its behavior to maximize the user's engagement was perceived as warm by participants, but no effect of the agent's adaptation was found on users' evaluation of their experience of the interaction. As noted in Dermouche and Pelachaud (2019a), engagement was defined by user behaviors that included gaze direction, facial expressions, and posture shifts. Bickmore et al. (2011) found that the use of relational behavior led to significantly greater engagement by museum visitors.…”
Section: Adaptation Mechanisms
confidence: 99%
“…LSTMs are recurrent neural networks able to capture the different dynamics of time series, and they have been shown to be efficient in sequence prediction problems. These models have been successfully applied to engagement recognition using head movements in Hadfield et al. (2018) and Lala et al. (2017), and facial expressions in Dermouche and Pelachaud (2019a). Temporal models such as LSTM and Gated Recurrent Unit (GRU) are compared to static deep learning approaches as well as logistic regression.…”
Section: Perception
confidence: 99%
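To make the comparison in the quoted passage concrete, here is a minimal sketch of a sequence model for engagement recognition from frame-level behavioral features. The input size, hidden size, and number of classes are illustrative assumptions, not the configurations used in the cited papers.

```python
# Sketch of an LSTM-based engagement classifier over per-frame features
# (e.g. head pose or facial action-unit intensities). Hyperparameters are
# placeholders chosen for illustration only.
import torch
import torch.nn as nn


class EngagementLSTM(nn.Module):
    def __init__(self, n_features: int = 20, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)   # last hidden state: (1, batch, hidden)
        return self.head(h_n[-1])    # per-sequence class logits


# A static baseline (e.g. logistic regression over temporally pooled features)
# collapses the time axis before classification, which is the contrast the
# quoted passage describes.
```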
“…In social robotics, engagement can be defined as "the process by which two (or more) participants establish, maintain, and end their perceived connection to one another" [1]. Aside from a few attempts in human-human interaction [2] and in human-virtual agent interaction [3,4] at integrating second-party information into the analysis of the user's socio-emotional behavior, current human-robot interaction (HRI) systems rely only on user data without exploiting the contextual information offered by the robot data, despite the evident link between the robot's behavior and the user's socio-emotional state. We argue that architectures for automatic user engagement analysis can benefit from using the robot data, as well as from learning the interaction dynamics between the user and the robot.…”
Section: Introduction
confidence: 99%
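One simple way to realize the dyadic input the quoted passage argues for is to align the user and robot feature streams frame by frame and feed their concatenation to a sequence model. This is a minimal sketch under that assumption; the fusion strategy and feature dimensions are illustrative, not the architecture of the cited work.

```python
# Sketch: combining second-party (robot/agent) features with user features
# before sequence modeling. Early concatenation is one possible fusion choice.
import torch


def dyadic_sequence(user_feats: torch.Tensor, robot_feats: torch.Tensor) -> torch.Tensor:
    """user_feats: (batch, time, F_user); robot_feats: (batch, time, F_robot),
    aligned frame by frame. Returns a joint (batch, time, F_user + F_robot)
    sequence that a model such as the EngagementLSTM sketch above could consume."""
    return torch.cat([user_feats, robot_feats], dim=-1)
```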