2021
DOI: 10.1515/itit-2020-0024

Evaluating feedback requirements for trust calibration in automated vehicles

Abstract: The inappropriate use of automation as a result of trust issues is a major barrier for a broad market penetration of automated vehicles. Studies so far have shown that providing information about the vehicle’s actions and intentions can be used to calibrate trust and promote user acceptance. However, how such feedback could be designed optimally is still an open question. This article presents the results of two user studies. In the first study, we investigated subjective trust and user experience of (N=21) pa…

Cited by 12 publications (4 citation statements)
References: 35 publications
“…The AV, as a complex AI system, also needs to be explained for better human-AV team performance, since it is important to maintain an appropriate level of trust in automation and effectively manage uncertainty. Previous studies have already confirmed the necessity of feedback in autonomous driving (Seppelt & Lee, 2019; Wiegand et al., 2020; Wintersberger, Janotta, Peintner, Löcken, & Riener, 2021). For example, Wintersberger et al. (2021) found that, regardless of their trust in the AV, people still preferred to be informed about forthcoming strategies and maneuvers.…”
Section: Introduction
Confidence: 75%
“…Previous research has examined various aspects and underlying determinants of passenger trust in automated vehicles, including the requirements and effects of feedback from the automated driving system via in-vehicle human-machine interfaces (HMIs; e.g., Colley et al., 2020; Feierle et al., 2020; Hartwich et al., 2021; Wintersberger et al., 2021; Yun & Yang, 2022; see Ekman et al., 2017 for a trust-based framework for HMI design), uncertainty displays (e.g., Beller et al., 2013; Helldin et al., 2013), automation malfunctions (e.g., Feierle et al., 2021), overtrust (e.g., Victor et al., 2018), driving behavior and style (e.g., Ekman et al., 2019; Mühl et al., 2020), and initial or introductory information (e.g., Forster et al., 2018; Körber et al., 2018) on self-reported passenger trust.…”
Section: Previous Research on Risk Perception and Trust of Passengers…
Confidence: 99%
“…The existence of uncertainty led to the development of DNN Supervisors (in short, supervisors), which aim to recognize inputs for which the DL component is likely to make incorrect predictions, allowing the DLS to take appropriate countermeasures to prevent harmful system misbehavior [2]-[8]. For instance, the supervisor of a self-driving car might safely disengage the auto-pilot when detecting a high-uncertainty driving scene [2], [9]. Other examples of application domains where supervision is crucial include medical diagnosis [10], [11] and natural hazard risk assessment [12].…”
Section: Introduction
Confidence: 99%
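
The supervision scheme described in this last statement boils down to scoring each input's uncertainty and triggering a countermeasure (such as disengaging the auto-pilot) when the score crosses a threshold. The following Python sketch only illustrates that thresholding idea; the class name UncertaintySupervisor, the predictive_entropy proxy, and the 0.8 threshold are hypothetical stand-ins, not the mechanism of the cited works.

```python
import math
from dataclasses import dataclass


@dataclass
class SupervisorDecision:
    uncertainty: float
    disengage: bool


class UncertaintySupervisor:
    """Minimal sketch of a DNN supervisor: flag inputs whose uncertainty
    exceeds a threshold so the system can fall back to a safe behavior.
    The threshold value is assumed; real supervisors calibrate it on
    held-out nominal and anomalous data."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def check(self, uncertainty: float) -> SupervisorDecision:
        # High uncertainty suggests the DL component is likely to
        # mispredict, so request a countermeasure (e.g., disengage).
        return SupervisorDecision(uncertainty, uncertainty > self.threshold)


def predictive_entropy(probs: list[float]) -> float:
    """Entropy of a softmax output, used here as a simple uncertainty proxy."""
    return -sum(p * math.log(p + 1e-12) for p in probs)


if __name__ == "__main__":
    supervisor = UncertaintySupervisor(threshold=0.8)
    # Stand-ins for per-frame softmax outputs of a driving-scene classifier.
    frames = [[0.97, 0.02, 0.01], [0.40, 0.35, 0.25]]
    for i, probs in enumerate(frames):
        decision = supervisor.check(predictive_entropy(probs))
        action = "disengage auto-pilot" if decision.disengage else "keep driving"
        print(f"frame {i}: uncertainty={decision.uncertainty:.2f} -> {action}")
```

In practice the uncertainty score could come from Monte Carlo dropout, ensembles, or an autoencoder reconstruction error; the thresholding and fallback logic sketched above stays the same.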