2018
DOI: 10.3389/fnhum.2018.00309
Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents

Abstract: With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations and, consequently, place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge…
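The trust-calibration idea in the abstract can be sketched numerically: miscalibration is the gap between a user's subjective trust and the automation's actual reliability. The function names, values, and tolerance below are hypothetical illustrations, not from the paper.

```python
# Illustrative sketch (not the paper's method): trust miscalibration as the
# signed gap between subjective trust and an agent's actual reliability,
# both expressed on a 0-1 scale. Names and the tolerance are hypothetical.

def trust_calibration_error(subjective_trust: float, actual_reliability: float) -> float:
    """Signed gap: positive means overtrust, negative means undertrust."""
    return subjective_trust - actual_reliability

def classify(subjective_trust: float, actual_reliability: float,
             tolerance: float = 0.1) -> str:
    """Label the mismatch, treating small gaps as calibrated."""
    gap = trust_calibration_error(subjective_trust, actual_reliability)
    if gap > tolerance:
        return "overtrust"    # user relies on the agent more than warranted
    if gap < -tolerance:
        return "undertrust"   # user disuses a largely reliable agent
    return "calibrated"

print(classify(0.9, 0.60))   # overtrust
print(classify(0.3, 0.80))   # undertrust
print(classify(0.7, 0.75))   # calibrated
```

In practice, the paper's interest is in detecting this gap from neural signals rather than from self-report, since (as one citation statement below notes) asking people to reflect on their trust may itself alter it.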

Cited by 47 publications (58 citation statements)
References 80 publications (110 reference statements)
“…If users receive neurophysiological feedback over a sustained period of repeated interactions, it is possible for them to learn how to self-regulate brain activity in order to achieve a desired outcome, as would be the case with conventional neurofeedback; however, we are currently unable to assess the viability of self-regulation as an interaction mechanic beyond the mental and motor imagery protocols which characterize active BCI (Cavazza et al., 2014). Similarly, sustained exposure to a working neurotechnology permits the user to assess an appropriate degree of trust in the system, which is likely to be highly significant for those systems associated with autonomous function (Lee and See, 2004; Hoff and Bashir, 2015) and can be assessed using psychophysiological (Hu et al., 2016) and neurophysiological (de Visser et al., 2018) measures.…”
Section: Grand Challenge: Designing User Experience With Neurotechnology
Confidence: 99%
“…For that reason, there needs to be further evaluation to confirm whether this visual attentional shift changes in relation to developing trust. Future work should continue to leverage ocular and other physiological measures together with behavior to obtain a more holistic understanding of trust in human-autonomy interactions, as has been done recently (Basu & Singhal, 2016; de Visser, Beatty, et al., 2018).…”
Section: Level Of Distrust As a Potential Better Construct Of Trust I…
Confidence: 99%
“…Our present research seeks to replicate the findings of de Visser et al. (2018) while extending them to a more complex experimental paradigm and applying a different EEG-based analytical technique. Instead of the Eriksen flanker task, we use the automated mode of the Air Force Multi-Attribute Task Battery (AF-MATB).…”
Section: Introduction
Confidence: 99%
“…There are two problems associated with investigating and measuring trust calibration. First is the lack of a commonly accepted model of trust, even within the field of human-automation interaction (de Visser et al., 2018). Second, most empirical work on trust has been based on psychometric measures (Dimoka, 2018), and with those comes the metacognitive problem that the act of reflecting on trust may change one's own perception of it (de Visser et al., 2018).…”
Section: Introduction
Confidence: 99%