IRI 2022
DOI: 10.53292/208f5901.20b0a4e7
Non-Asimov Explanations: Regulating AI Through Transparency

Abstract: An important part of law and regulation is demanding explanations for actual and potential failures. We ask questions like: What happened (or might happen) to cause this failure? And why did (or might) it happen? These are disguised normative questions: they really ask what ought to have happened, and how the humans involved ought to have behaved. If we ask the same questions about AI systems we run into two difficulties. The first is what might be described as the 'black box' problem, which lawyers have begun …

Cited by 2 publications (1 citation statement); References 0 publications.
“…Humans require a narrative form of explanation which opposes the binary nature of AI systems' outputs. As noted by Reed, Grieman, and Early (2021), most citizens would not trust any AI system if they were simply told 'We cannot explain how it works, but it is really safe'. This prompted a development of an entire field of eXplainable AI (XAI) which focuses on designing tools that can enable explanations for the decisions produced by complex autonomous systems.…”
Section: Explainability (mentioning)
Confidence: 99%