2019
DOI: 10.1075/is.18067.flo
On the impact of different types of errors on trust in human-robot interaction

Abstract: Trust is a key dimension of human-robot interaction (HRI), and has often been studied in the HRI community. A common challenge arises from the difficulty of assessing trust levels in ecologically invalid environments: we present in this paper two independent laboratory studies, totalling 160 participants, where we investigate the impact of different types of errors on resulting trust, using both behavioural and subjective measures of trust. While we f…


Cited by 28 publications (21 citation statements)
References 42 publications
“…Though results from prior research agree that robot errors negatively impact task performance and human safety, the effects of robot errors on people's perceptions of the robot, especially its perceived trustworthiness, are inconclusive. While some evidence suggests that robots that did not make mistakes were rated as significantly more trustworthy than those that did [24], other studies found little or no statistically significant evidence that errors negatively impacted trust [9,21]. It was further found that participants liked the robot more when it made mistakes during interactions than when it interacted flawlessly [21], commonly referred to as the pratfall effect: an increase in likability due to errors [1].…”
Section: Background and Related Work
confidence: 99%
“…1) Subjective Trust Measurement: The first and most dominant method for measuring trust in HRI is subjective trust measurement. This technique assesses participants' answers to questionnaires designed to gauge people's trust in automated agents, or in robots specifically [57]. The main advantage of subjective trust measurement is its ease of use, since the information is obtained directly from the source.…”
Section: Trust Measurement in Human-Robot Interaction
confidence: 99%
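The questionnaire-based approach described above can be sketched in a few lines. This is a minimal illustration only: the 7-point scale, the reverse-coded items, and the function name are assumptions for the example, not taken from any specific HRI trust instrument.

```python
# Illustrative sketch: aggregating Likert-scale questionnaire items into a
# single subjective trust score in [0, 1]. Scale size and reverse-coding
# scheme are hypothetical, not from a specific validated instrument.

def subjective_trust_score(responses, scale_max=7, reverse_coded=()):
    """Average Likert responses (1..scale_max) into a 0-1 trust score.

    responses     -- list of integer ratings
    reverse_coded -- indices of negatively phrased items ("I distrust...")
                     whose scale is flipped before averaging
    """
    adjusted = [
        (scale_max + 1 - r) if i in reverse_coded else r
        for i, r in enumerate(responses)
    ]
    mean = sum(adjusted) / len(adjusted)
    # Normalise the mean rating from [1, scale_max] to [0, 1].
    return (mean - 1) / (scale_max - 1)

# Example: four items on a 7-point scale, item at index 2 reverse-coded.
score = subjective_trust_score([6, 5, 2, 7], reverse_coded={2})
```

Reverse-coded items are flipped first so that a higher adjusted rating always means more trust before the items are averaged.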
“…2) Objective trust measurement: These methods are based on analysing how participants actually interact with the robots, rather than relying on participants' speculation about themselves. Their main advantage is that they are not prone to errors arising from biased answers or from the gap between stated trust and behavioral trust in real-world scenarios [57]. However, these methods also have some drawbacks.…”
Section: Trust Measurement in Human-Robot Interaction
confidence: 99%
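A common behavioural proxy of the kind described above is compliance: how often a participant follows the robot's recommendation. The sketch below is purely illustrative; the trial structure and field names are hypothetical assumptions, not an interface from the cited work.

```python
# Illustrative sketch: an objective (behavioural) trust proxy, measured as
# the rate at which a participant follows the robot's recommendations.
# Trial dictionaries and key names are hypothetical.

def compliance_rate(trials):
    """Fraction of trials in which the participant followed the robot."""
    if not trials:
        return 0.0
    followed = sum(
        1 for t in trials if t["human_choice"] == t["robot_advice"]
    )
    return followed / len(trials)

trials = [
    {"robot_advice": "left", "human_choice": "left"},
    {"robot_advice": "right", "human_choice": "right"},
    {"robot_advice": "left", "human_choice": "right"},
    {"robot_advice": "right", "human_choice": "right"},
]
rate = compliance_rate(trials)  # followed the robot in 3 of 4 trials
```

Unlike a questionnaire score, this measure is computed from observed choices, which is exactly why it avoids the self-report biases mentioned above, at the cost of needing an interaction design where following or overriding the robot is observable.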
“…Since trust evolves in a non-linear way and is especially vulnerable during first interactions, it is crucial that these early interactions are perceived as positive. Many good experiences are necessary to compensate for a single negative experience [108,109].…”
Section: Trust in the Cobot
confidence: 99%
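The asymmetry described above, where one failure outweighs many successes, can be captured by an update rule with different rates for gains and losses. This is a toy model under assumed parameters, not the model used in the cited references.

```python
# Illustrative sketch: an asymmetric trust-update rule in which trust
# (kept in [0, 1]) rises slowly after positive experiences but drops
# sharply after a negative one. The rates gain and loss are hypothetical.

def update_trust(trust, outcome_positive, gain=0.05, loss=0.40):
    """Move trust up slowly on success, down sharply on failure."""
    if outcome_positive:
        return trust + gain * (1.0 - trust)   # slow approach toward 1
    return trust * (1.0 - loss)               # sharp multiplicative drop

t = 0.8
t = update_trust(t, outcome_positive=False)   # one error drops trust to 0.48
for _ in range(10):                           # ten successes still do not
    t = update_trust(t, outcome_positive=True)  # restore the original level
```

With these rates, ten consecutive positive experiences leave trust below its pre-error level, mirroring the cited observation that many good experiences are needed to compensate for a single negative one.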