2019
DOI: 10.2139/ssrn.3384790

Right to an Explanation Considered Harmful

Abstract: Lay and professional reasoning has it that newly introduced data protection regulation in Europe (GDPR) mandates a 'right to an explanation'. This has been read as requiring that the machine learning (ML) community build 'explainable machines' to enable legal compliance. In reviewing relevant accountability requirements of GDPR and measures developed within the ML community to enable human interpretation of ML models, we argue that this reading should be considered harmful as it creates unrealistic expectation…

Cited by 6 publications (3 citation statements, 2019–2023) · References 22 publications
“…29,30 Although the interpretation of the GDPR is still widely debated by legal experts, the regulation generally aims to protect a person’s right to understand the decision-making process and evaluate the reasonableness of AI decisions. Viewed in this way, the explainability requirement “does not equate to providing a causal account” but involves explaining “the choices about the decision-making model, how data was collected, the features that were and were not considered, and the anticipated effects of the automated decision-making process.”31 These statements suggest that explainability should consider the implications of using AI in a specific clinical context. Furthermore, it should be helpful for patients as data subjects and doctors as users of AI models.…”
Section: Considering Explainability In Healthcare
confidence: 99%
“…This led to debates about how to make automated decision-making explainable to citizens and spurred the growth of new techniques, including non-machine-based techniques such as counterfactuals [79], in a bid to help people understand how AI systems arrive at decisions. Despite an active technical agenda in XAI, legal debate has continued over whether a right to an explanation exists in GDPR at all, its breadth, on what legal basis it might exist, and whether it is even the best remedy to protect user interests and build trust in autonomous systems [78, 63, 19, 64, 16, 15]. As these discussions show, the law is interpretative and rarely provides clean rules that tell us what the 'correct' response from designers should be.…”
Section: The Human Side Of AI
confidence: 99%
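For readers unfamiliar with the counterfactual technique the excerpt mentions, the sketch below illustrates the core idea under assumptions of my own: a toy logistic-regression "credit" model and a gradient-based search, in the spirit of the counterfactual-explanation proposal cited as [79], for a nearby input that flips the model's decision. The data, model, and function names are illustrative, not drawn from the cited work.

```python
# A minimal counterfactual-explanation sketch (illustrative assumptions
# throughout; this is not the cited authors' implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "credit" data: two features, binary approve (1) / deny (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, lam=0.1, lr=0.05, steps=500):
    """Search for a nearby point the model classifies as 1 (approve),
    trading off the log-probability of approval against distance to x."""
    x_cf = x.copy()
    for _ in range(steps):
        p = model.predict_proba(x_cf.reshape(1, -1))[0, 1]
        grad_logp = model.coef_[0] * (1 - p)   # d/dx log p(approve | x)
        grad_dist = 2 * lam * (x_cf - x)       # penalty for straying from x
        x_cf = x_cf + lr * (grad_logp - grad_dist)
        if model.predict(x_cf.reshape(1, -1))[0] == 1:
            break
    return x_cf

x = np.array([-1.0, 1.0])   # an input the model denies
print("original decision:", model.predict(x.reshape(1, -1))[0])
x_cf = counterfactual(x)
print("counterfactual input:", x_cf.round(2),
      "-> decision:", model.predict(x_cf.reshape(1, -1))[0])
```

The returned point is itself the explanation ("had your inputs been x′, the decision would have been approve"), which is why counterfactuals are discussed as a route to explainability that sidesteps opening up the model's internal reasoning.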
“…Experimental studies would allow manipulations to help detect whether some types of input (or interactions) affect 'personality' and 'judgment' more than others. True experimental design (also known as randomized controlled trials, RCTs) would even allow the study of potential biases in AI systems as well as the causes of those biases (i.e., by means of manipulations), providing valid and reliable answers to the concerns raised about machine learning biases, both by academia (Baeza-Yates, 2018; Crabtree et al., 2019; Dix, 1992; Dix, 2018; Dwork, 2011; Monroe, 2018; Zemel et al., 2013) and by regulatory authorities (USACM, 2017)6,7. These concerns are explicitly raised, yet no reliable answers have been provided, which strongly signals the need for scientists to respond by exploring methodology in novel ways to arrive at the answers.…”
Section: Human Cognition and Artificial Intelligence Processes
confidence: 99%
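The RCT logic this excerpt appeals to can be made concrete with a small audit sketch. Everything here is an assumption of mine (a synthetic decision system and a made-up group attribute), not code from the cited studies: because the sensitive attribute is assigned at random across otherwise identically distributed inputs, any systematic difference in decision rates can be attributed causally to that attribute.

```python
# A minimal RCT-style bias audit sketch (illustrative assumptions only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def audited_system(features, group):
    """Stand-in for the opaque decision system under audit; it
    deliberately leaks the group attribute so the audit has
    something to detect."""
    return (features.sum(axis=1) + 0.5 * group > 0).astype(int)

n = 1000
features = rng.normal(size=(n, 3))   # identical covariate distribution
group = rng.integers(0, 2, size=n)   # the randomized manipulation
decisions = audited_system(features, group)

rates = [decisions[group == g].mean() for g in (0, 1)]
_, p_value = stats.ttest_ind(decisions[group == 0], decisions[group == 1])
print(f"approval rates: g0={rates[0]:.2f}, g1={rates[1]:.2f}, p={p_value:.4f}")
# Randomization rules out confounding, so a significant gap in approval
# rates is causal evidence that the system's decisions depend on `group`.
```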