2022
DOI: 10.1007/978-3-031-04083-2_17
Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond

Abstract: The quest to explain the output of artificial intelligence systems has clearly moved from a merely technical to a highly legally and politically relevant endeavor. In this paper, we provide an overview of legal obligations to explain AI and evaluate current policy proposals. In doing so, we distinguish between different functional varieties of AI explanations - such as multiple forms of enabling, technical and protective transparency - and show how different legal areas engage with and mandate such different types o…

Cited by 24 publications (24 citation statements). References 64 publications.
“…The AI & law researchers seem to prefer the latter solution (Pasquale 2017; Selbst and Barocas 2018; Mittelstadt et al 2019) and offer its variants (Ye et al 2018; Prakken 2020; Prakken and Ratsma 2021). This line of research also includes numerous papers interpreting the existing (or, as in the case of the EU AI Act, pending) legal requirements on explainability, criticizing them, or proposing new mechanisms and provisions (Goodman and Flaxman 2017; Malgieri and Comandé 2017; Wachter et al 2017a; Selbst and Powles 2018; Casey et al 2019; Zuiderveen Borgesius 2020; Grochowski et al 2021; Kaminski 2021; Hacker and Passoth 2022; Sovrano et al 2022). It is worth noting that the de lege lata objections state that the rules are vague, too weak, or incompatible with the conceptual grid of AI, rather than unnecessary.…”
Section: Discussion
confidence: 99%
“…Furthermore, citizens may also actively contribute to the design of novel solutions, for example in the realm of explanations regarding AI systems or justification structures. Researchers have already pointed to the advantages of using co-design strategies to this end (Liegl et al, 2015), also in AI regulation (Aldewereld & Mioch, 2021; Hacker & Passoth, 2022). Such participatory strategies may then be fused with a legal system willing and able to receive such input and accommodate temporality.…”
Section: Participation, Legal Issues, and Temporality
confidence: 99%
“…9), but ignores some of the most troubling current applications. An example of this is emotion recognition systems (see McStay, 2016; Hacker, 2022). Paradoxically, there seems to be too much future and too little present in Art.…”
Section: Temporality in the AIA
confidence: 99%
“…In this vein, growing attention has been redirected toward AI policy regulation to balance the rule of law and AI-driven business innovation [11]. This attention underscores that ensuring compliance in explanation generation is not just a technical but also a legal and political endeavor [12].…”
Section: Introduction
confidence: 99%