2021
DOI: 10.1007/s43681-021-00091-y

Coarse ethics: how to ethically assess explainable artificial intelligence

Abstract: The integration of artificial intelligence (AI) into human society requires that its decision-making process be explicable to users, as exemplified in Asimov’s Three Laws of Robotics. Such human interpretability calls for explainable AI (XAI), of which this paper cites various models. However, computational accuracy and human interpretability can trade off against each other, raising questions about the conditions under which, and the degree to which, AI prediction accuracy may be sacrificed…
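The trade-off the abstract points to can be made concrete with a small experiment. The sketch below is illustrative only and is not code or data from the paper; it assumes scikit-learn and its bundled breast-cancer dataset, and compares a shallow, human-readable decision tree against a less interpretable random forest to show how much predictive accuracy the simpler model gives up.

# Minimal sketch (not from the paper) of the accuracy-vs-interpretability trade-off.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a depth-3 tree a user can read end to end.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
# Higher-capacity, harder-to-interpret model.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

acc_tree = tree.score(X_test, y_test)
acc_forest = forest.score(X_test, y_test)
# The gap is the accuracy "price" of interpretability that the paper asks
# whether, and under what conditions, users may reasonably accept.
print(f"shallow tree: {acc_tree:.3f}  forest: {acc_forest:.3f}  gap: {acc_forest - acc_tree:.3f}")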

Cited by 5 publications (2 citation statements)
References 40 publications
“…XAI should not reduce people's responsibilities or obligations to AI. XAI should not cause people to develop emotional attachment or dependence on AI [51,52]. Another limitation of this study is that it is only a computational theoretical study.…”
Section: Discussion
confidence: 99%
“…Furthermore, utilizing eight distinct classifiers allowed us to test a variety of simple (but sophisticated enough to suit a relationship between input and output well) to complex models [73]. In addition, to effectively cover the ethics of XAI, we considered sufficiently high coverage and order preservation [74]. Using multiple sizes of feature sets and varied combinations of them allowed us to cover the most available features while also studying each feature set separately.…”
Section: Discussion
confidence: 99%
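The "sufficiently high coverage and order preservation" criteria that these authors take from reference [74] (the coarse ethics paper) can be illustrated with a toy check. The sketch below is my own construction, not code from either paper: it assumes a hypothetical 0-100 fine-grained score coarsened into letter grades, and verifies that the coarsening never ranks a lower fine-grained score above a higher one (order preservation) while every score receives some grade (coverage).

# Minimal sketch (my own construction) of the order-preservation idea cited above.
def coarse_grade(score: float) -> str:
    """Hypothetical coarsening: map a 0-100 score to a letter grade (every score is covered)."""
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    return "C"

GRADE_RANK = {"A": 3, "B": 2, "C": 1}

def preserves_order(scores: list[float]) -> bool:
    """True if no pair of scores has the higher score mapped to a lower grade."""
    return all(
        GRADE_RANK[coarse_grade(higher)] >= GRADE_RANK[coarse_grade(lower)]
        for higher in scores
        for lower in scores
        if higher > lower
    )

print(preserves_order([95.0, 72.0, 40.0]))  # True: this coarsening is monotone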