Effects of explanation types and perceived risk on trust in autonomous vehicles (2020)
DOI: 10.1016/j.trf.2020.06.021

Cited by 95 publications (62 citation statements)
References 18 publications
“…To this end, they developed a real-time trust measurement to explore trust levels according to different driving scenarios. Similarly, Ha et al. (2020) examined the impact of explanation types and perceived risk on trust in AVs. Simple explanations, or feedback, such as descriptions of the vehicle's tasks, led to elevated trust in AVs, while too much explanation led to potential cognitive overload and did not increase trust (Ha et al., 2020).…”
Section: Discussion
confidence: 99%
“…Merely stating the AI's performance without a detailed explanation seemed insufficient to justify its limitations and algorithmic certainty. Previous studies that focused on system uncertainty or algorithmic limitations found that such information has a detrimental effect on users' trust (Cai et al., 2019; Lim & Dey, 2011; Papenmeier et al., 2019; Robinette, Howard, & Wagner, 2017; Ha, Kim, Seo, & Lee, 2020). For example, Robinette, Howard, and Wagner (2017) showed that when people find themselves in a high-risk situation, they lose trust in advice from a computer that reveals its own limitations.…”
Section: Cost of Providing Information About the Performance of AI
confidence: 99%
“…For example, Robinette, Howard, and Wagner (2017) showed that when people find themselves in a high-risk situation, they lose trust in advice from a computer that reveals its own limitations. Furthermore, Ha, Kim, Seo, and Lee (2020) suggested that people trust AI more when it provides no information about itself than when information is provided.…”
Section: Cost of Providing Information About the Performance of AI
confidence: 99%
“…Post-hoc explanations provided on this premise significantly increased the user's level of situation awareness. Ha et al. [14] and Koo et al. [15] examined the effect of explanations on people's trust through user studies. Ha et al. [14] examined the effects of two explanation types, simple and attributional, as well as of perceived risk, on trust in AVs across four autonomous driving scenarios with different levels of risk.…”
Section: Related Work
confidence: 99%
“…Ha et al. [14] and Koo et al. [15] examined the effect of explanations on people's trust through user studies. Ha et al. [14] examined the effects of two explanation types, simple and attributional, as well as of perceived risk, on trust in AVs across four autonomous driving scenarios with different levels of risk. Their results show that the explanation type significantly affects trust in autonomous vehicles and that, under high levels of perceived risk, attributional explanations lead to the highest trust.…”
Section: Related Work
confidence: 99%