2017
DOI: 10.3389/frobt.2017.00021
To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

Abstract: We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. Our aim was to explore whether participants rate the faulty robot differently from an error-free robot, and which reactions people show when interacting with a faulty robot. The study was based on our previous research on robot errors, where we detected typical error situations and the resulting social signals of our participants during social human-robot interaction. In contrast to our previous work, where we studie…

Cited by 155 publications (102 citation statements)
References 18 publications
“…The presence of errors has been reported to result in increased anthropomorphism and likeability, despite reduced task performance (Salem et al., 2013). Mirnig et al. (2017) found no significant impact of errors on the final perceived level of trust in a robotic assistant, but did find an increase in likeability. Guznov, Lyons, Nelson, and Woolley (2016) also found no statistically significant impact on self-reported trust levels in automation, despite manipulating both error type and severity.…”
Section: Robot Factors
confidence: 93%
“…Others conclude that the perceived intelligence of the robot increased after it made a mistake and attempted to put it right, but only when the new approach was error-free (Lemaignan, Fink, & Dillenbourg, 2014; Hamacher et al., 2016). Mirnig et al. (2017) allowed real errors to be classified into two types: social norm violations and technical errors. They also highlighted that, from a roboticist's point of view, all robotic errors could be classed as technical, in contrast with the view of a naive participant.…”
Section: Robot Factors
confidence: 99%
“…Previous research has shown that humans' patience towards a robot that performs suboptimally can be increased if the robot employs a mitigation strategy such as seeking human assistance and/or adapting its approach (Lee et al., 2010; Brooks et al., 2016; Mirnig et al., 2017), or expresses a negative emotional reaction and attempts to rectify its mistake (Hamacher et al., 2016). The present study aimed to build on this research by addressing the more general question of how to sustain human interactants' willingness to persist in interacting with a robot partner despite increasing boredom or frustration, irrespective of whether that boredom or frustration arises from errors on the part of the robot or from the nature of the interaction itself.…”
Section: Introduction
confidence: 99%
“…People make mistakes, but machines err as well [14]; there is no such thing as a perfect machine. Humans and machines should therefore recognize and communicate their “imperfectness” when they collaborate, especially in the case of robots that share our physical space.…”
Section: Introduction
confidence: 99%