Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents 2020
DOI: 10.1145/3383652.3423901

Conversational Error Analysis in Human-Agent Interaction

Abstract: Conversational Agents (CAs) present many opportunities for changing how we interact with information and computer systems in a more natural, accessible way. Building on research in machine learning and HCI, it is now possible to design and test multi-turn CAs capable of extended interactions. However, there are many ways in which these CAs can "fail" and fall short of human expectations. We systematically analyzed how five different types of conversational errors impacted perceptions of an embodied CA. …


Cited by 13 publications (8 citation statements) | References 23 publications
“…The agents we compared represent different levels of embodiment in conversational agents. Dialogue with the gaze ROBOT condition in Study 1 was longer in conversational turns than with the less anthropomorphic smart speaker (SS). It is interesting to note, however, that most participants were more familiar with smart speakers than with social robots, which could indicate a novelty effect while interacting with the agent.…”
Section: Robot Embodiment
confidence: 78%
“…Research in HRI has also investigated how robot failures impact user behaviours, including patterns in eye-gaze, head movements, and speech: social signals that exhibit either established grounding sequences or implicit behavioural responses to failures [6,31,35,76,80]. Behavioural signals have also been examined in response to unexpected events in human-robot interactions in the wild [5,30,75], using social signals ranging from low-level sensor input to high-level features that represent affect, attention, and engagement.…”
Section: Robot Failures
confidence: 99%
“…More research is needed on social cue recognition models that can account for different demographic variables (for a systematic approach using deep learning techniques, see Fan et al [44]). Finally, we mention Aneja et al [4]'s work on conversational failure with artificial agents, which provided the Agent Conversational Error (ACE) dataset with transcripts and error annotations, offering an interesting overview of how humans react in such settings.…”
Section: How Can Robots Harness Nonverbal Social Feedback From Humans?
confidence: 99%
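The quoted work points to the Agent Conversational Error (ACE) dataset of transcripts with error annotations. As a minimal sketch of how such annotations could be explored, the Python snippet below tallies error labels from a transcript file; the file name, column names, and schema are assumptions for illustration, not the actual ACE format.

```python
# Minimal sketch: tallying conversational error annotations per type.
# The CSV schema (turn_id, speaker, utterance, error_type) is hypothetical,
# not the actual ACE dataset format.
import csv
from collections import Counter

def count_error_types(path: str) -> Counter:
    """Count how often each annotated error type appears in a transcript file."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            label = (row.get("error_type") or "").strip()
            if label:  # skip turns with no annotated error
                counts[label] += 1
    return counts

if __name__ == "__main__":
    for error_type, n in count_error_types("ace_transcripts.csv").most_common():
        print(f"{error_type}: {n}")
```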
“…Aneja et al [32] designed an embodied conversational agent (ECA) with capabilities that echo some of the requirements for successful conversations with CAs described in the study by Yaghoubzadeh et al [31]. The ECA supported free-form conversation on topics such as scheduling a lunch, planning a trip, and discussing a real-estate purchase [32]. The researchers analyzed the impact of 5 conversational errors on perceptions of the ECA and found that (1) repetitions by the agent and clarifications by the human significantly decreased the perceived intelligence and anthropomorphism of the agent; (2) turn-taking errors significantly decreased the likability of the agent; and (3) coherence errors, defined as agent responses that deviate from the main topic, increased likability.…”
Section: Introduction
confidence: 99%
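The findings quoted above map each error category to the perception measure it shifted. The sketch below encodes that mapping for reference; the enum names and labels are chosen for illustration rather than taken from the paper, and it covers only the four categories named in the excerpt (the paper studied five).

```python
# Illustrative encoding of the effects reported in the excerpt above;
# names and structure are illustrative choices, not taken from the paper,
# and only the four error categories named in the excerpt are covered.
from enum import Enum

class ErrorType(Enum):
    REPETITION = "agent repeats itself"
    CLARIFICATION = "human must ask for clarification"
    TURN_TAKING = "turn-taking error by the agent"
    COHERENCE = "agent response deviates from the main topic"

# error type -> list of (perception measure, direction of reported effect)
REPORTED_EFFECTS = {
    ErrorType.REPETITION: [("perceived intelligence", "decrease"),
                           ("anthropomorphism", "decrease")],
    ErrorType.CLARIFICATION: [("perceived intelligence", "decrease"),
                              ("anthropomorphism", "decrease")],
    ErrorType.TURN_TAKING: [("likability", "decrease")],
    ErrorType.COHERENCE: [("likability", "increase")],
}

if __name__ == "__main__":
    for error, effects in REPORTED_EFFECTS.items():
        for measure, direction in effects:
            print(f"{error.name}: {measure} -> {direction}")
```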