2020
DOI: 10.1371/journal.pone.0229132
Adaptive trust calibration for human-AI collaboration

Abstract: The safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose a method of adaptive trust calibration that consists of a fra…

Cited by 115 publications (74 citation statements); References 42 publications.
“…This has led to the proliferation of qualitatively different ways to define trust. For instance, trust has been thought of as a belief [13], an attitude [13], an affective response [22], a sense of willingness [23], a form of mutual understanding [24], and as an act of reliance [25].…”
Section: A Question of Trust
confidence: 99%
“…Heuristics have been proposed to tackle two different situations: to combat overtrust and to repair trust. As an example of the former, [25] proposed to use visual prompts to nudge users to reevaluate their trust in the robotic system when the user has left the automated system running unattended for too long.…”
Section: Heuristics
confidence: 99%
“…We approach the research challenges with an emphasis on two important aspects of trust in human-AI cooperation: performance and human behavior. We previously proposed a method of adaptive trust calibration [21], using a formal definition of overtrust and under-trust, and conducted an initial evaluation with an over-trust scenario. In the current study, we extend the original method by introducing a third actor called "trust calibration AI" (TCAI) to human-AI cooperation.…”
Section: Volume 0, 2020
confidence: 99%
“…In this sense, the concept of trust is very important in the adoption of technologies to assist older adults at home [7,8]. Trust can be defined as an attitudinal judgement of the degree to which a user (the ageing adult) can rely on an agent (the social assistive robot) to achieve its goals under conditions of uncertainty [9]. People are more reluctant to engage with robots if negative consequences are more likely, and once confidence has been lost, people take longer to use this technology again [6,10].…”
Section: Introduction
confidence: 99%
“…People are more reluctant to engage with robots if negative consequences are more likely, and once confidence has been lost, people take longer to use this technology again [6,10]. Moreover, safety and efficiency of HRI collaboration often depend on appropriately calibrating trust towards the robot [9] and using a user-centred approach to realise what impacts the development of trust [11]. To date, trust regarding older adults' adoption of assistive technology has been determined in several ways, including whether the elderly feels safe and comfortable with the proposed solution [12][13][14][15].…”
Section: Introduction
confidence: 99%