2021
DOI: 10.48550/arxiv.2106.05506
Preprint

Brittle AI, Causal Confusion, and Bad Mental Models: Challenges and Successes in the XAI Program

Abstract: The advances in artificial intelligence enabled by deep learning architectures are undeniable. In several cases, deep neural network driven models have surpassed human-level performance in benchmark autonomy tasks. The underlying policies for these agents, however, are not easily interpretable. In fact, given their underlying deep models, it is impossible to directly understand the mapping from observations to actions for any reasonably complex agent. Producing this supporting technology to "open the black box…

Cited by 3 publications (2 citation statements)
References 5 publications
“…Nevertheless, it has been pointed out that explainability alone does not fully meet expectations and does not guarantee the achievement of the objectives for which it was theorized [20]. This is even clearer if we analyze the legal principles that algorithmic intelligibility would be required to pursue.…”
Section: Explainable AI
mentioning
confidence: 93%
“…Explanations could help individuals to recognize errors in their mental model, leading to a better fit between experienced traceability and performance. However, they could also erroneously increase the confidence in an incorrect mental model and thus worsen the calibration [34], which results in wrong expectations about system behavior and potentially confuses users, ultimately leading to a reduction of trust [109]. Explanations can have an ambiguous effect on the calibration between experienced traceability of a system and the user's ability to correctly predict information processing.…”
mentioning
confidence: 99%