2020
DOI: 10.1007/978-3-030-49062-1_18
Impression Detection and Management Using an Embodied Conversational Agent

Abstract: Embodied Conversational Agents (ECAs) are a promising medium for human-computer interaction, since they are capable of engaging users in real-time face-to-face interaction [1, 2]. The impressions users form of an ECA (e.g. favour or dislike) can be reflected behaviourally [3, 4]. These impressions may affect the interaction and can even persist afterwards [5, 7]. Thus, when we build an ECA to impress users, it is important to detect how users feel about the ECA. The impression the ECA leaves can then be adjust…

Cited by 2 publications (3 citation statements)
References 37 publications
“…The agent would detect the user's affective state as well, progressing with the interview when appropriate cues were detected. Another example is one by Wang et al. (2020), which featured a model that estimated users' impressions of an agent by monitoring their facial expressions and leveraging this information to change how users felt about the agent. Other agents recognize gaze behaviors through complex models derived from psychological research (Andrist et al., 2017; Huang et al., 2011) or trained using data from user studies (Morency et al., 2006).…”
Section: Understanding Phase
confidence: 99%
“…These low-level signals are processed using EyesWeb and other external tools, such as pretrained machine learning models (Dermouche and Pelachaud, 2019; Wang et al., 2019), to extract high-level features about the user, such as their level of engagement.…”
Section: Dimensions of Study
confidence: 99%
“…With the self-report annotations considered separately, the MLP model was trained twice, once for warmth and once for competence. More details about this model can be found in the study by Wang et al. (2019).…”
Section: User's Analysis: User's Impression Detection
confidence: 99%
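The two-model setup described in the excerpt above, one regressor per impression dimension, could be sketched as follows. This is a minimal illustration, not the cited study's implementation: the feature dimensionality, network size, and the randomly generated stand-ins for facial-expression features and self-reported warmth/competence scores are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data: in the cited work, X would hold features extracted
# from users' facial expressions, and the targets would come from
# self-report annotations of warmth and competence.
X = rng.normal(size=(200, 16))               # per-sample feature vectors
y_warmth = rng.uniform(0.0, 1.0, size=200)   # hypothetical warmth scores
y_competence = rng.uniform(0.0, 1.0, size=200)

# As the excerpt describes, the MLP is trained twice with the two
# annotation sets kept separate: one model for warmth, one for competence.
warmth_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                            random_state=0).fit(X, y_warmth)
competence_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                                random_state=0).fit(X, y_competence)

# Each model then yields an independent impression estimate per sample.
warmth_pred = warmth_model.predict(X[:3])
competence_pred = competence_model.predict(X[:3])
```

Keeping the two dimensions in separate models means each regressor can be evaluated and tuned against its own annotation set, at the cost of not sharing any learned representation between warmth and competence.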