2022
DOI: 10.1007/978-981-19-1142-2_10
Explainable Artificial Intelligence (XAI): Connecting Artificial Decision-Making and Human Trust in Autonomous Vehicles

Cited by 27 publications (7 citation statements)
References 24 publications
“…XAI provides explanations for AI-based decisions. This is important to the customers to convince of the AI-based decisions on credit evaluation [154].…”
Section: Explainable AI (XAI)-based Trust Establishment (mentioning confidence: 99%)
“…They performed experimentation on it using the KDD data set. Madhav and Tyagi 6 proposed XAI techniques for increasing trust in the domain of autonomous vehicles. They provided the insights and need for transparent AI solutions to increase trust in the sector.…”
Section: Related Work (mentioning confidence: 99%)
“…How well domain experts and users can grasp and trust these models' functionality is one of the key factors that influence their successful adoption in cyber threat hunting. Stakeholders demand more transparency and explicability when these black‐box models are used to make significant predictions [6,7]. In cyber threat hunting, where specialists need significantly more information from the model than a simple binary outcome for their analysis, justifications supporting the output of AI models are essential.…”
Section: Introduction (mentioning confidence: 99%)
“…By offering explanations for the classification of a vehicle’s behavior as anomalous, XAI empowers researchers to scrutinize the safety and security of autonomous systems. Additionally, establishing trust in these XAI methods for autonomous vehicles is imperative for human operators [ 17 ], considering the life-critical decisions made by these vehicles. Moreover, shedding light on potential shortcomings in existing XAI methods assists developers in identifying and addressing issues, such as understanding the factors that led to incorrect explanations by XAI methods, thereby pinpointing the causes of erroneous outcomes by XAI models [ 18 ].…”
Section: Introduction (mentioning confidence: 99%)