2017
DOI: 10.2139/ssrn.2972855

Slave to the Algorithm? Why a Right to Explanation is Probably Not the Remedy You are Looking for

Cited by 112 publications (109 citation statements)
References 12 publications
“…There is significant discussion as to precisely what these provisions entail in practice regarding algorithmic decision-making, automation and profiling, and whether they are adequate to address the concerns that arise from such processes (see e.g. Edwards & Veale 2017; Wachter, Mittelstadt & Floridi 2017b). Most prominent of the EU initiatives has been the European Union High-Level Expert Group on Artificial Intelligence (a multi-stakeholder group of 52 experts from academia, civil society and industry), which finalised its Ethics Guidelines for Trustworthy AI in April 2019 (2019a). They include 7 key, but non-exhaustive, requirements that AI systems should meet in order to be 'trustworthy'.…”
Section: European Union
confidence: 99%
“…While various tools designed to help decision-makers identify and correct discrimination in data mining exist, these are aimed at data scientists implementing systems rather than providing information to the individual decision-subjects affected by them [3,59,29,23,28]. The potential for local, pedagogical explanation systems to provide justice-related information, fulfilling the policy goals of transparency, accountability and fairness, has recently been noted by computer scientists and law scholars [20,65,82]. It has been suggested that organisations might rely upon these explanation facilities to fulfil legal duties to provide meaningful information about the logic of specific system outputs to affected individuals.…”
Section: Interpreting Intelligent Systems
confidence: 99%
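The "local, pedagogical explanation" approach referred to in the statement above is typically a perturbation-based surrogate: sample points near the input, query the model, and fit a simple weighted linear model whose coefficients serve as the explanation. The statement does not specify an implementation; this is a minimal sketch of that general technique (all names and parameters here are illustrative, not from the cited works):

```python
import numpy as np

def local_explanation(model, x, n_samples=500, scale=0.1, seed=0):
    """Explain model(x) locally: perturb x, query the (black-box) model,
    and fit a proximity-weighted linear surrogate. Returns one
    coefficient per input feature as the local explanation."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = np.array([model(row) for row in X])
    # Weight each sample by its closeness to x (Gaussian kernel).
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))
    # Weighted least squares: solve (A^T W A) beta = (A^T W) y,
    # where A has an intercept column prepended.
    A = np.column_stack([np.ones(n_samples), X])
    AW = A * w[:, None]
    beta, *_ = np.linalg.lstsq(AW.T @ A, AW.T @ y, rcond=None)
    return beta[1:]  # drop the intercept

# For a model that is already linear, the surrogate recovers its weights:
coef = local_explanation(lambda v: 3 * v[0] - 2 * v[1], np.array([1.0, 2.0]))
```

Whether such per-feature attributions amount to "meaningful information about the logic involved" in the GDPR's sense is exactly what the surrounding literature contests.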
“…In these cases, it is unclear at what point the 'decision' is being made. Decisions might be seen in the design process, or adaptive interfaces may be seen as 'deciding' which information to provide or withhold [14]. Exercise of data protection rights differs in further ways in ambient environments [13], as smart cities and ambient computing may bring significant challenges if, for example, they are construed as part of decision-making environments.…”
Section: Meaningful Information About the Logic Of Processing
confidence: 99%
“…A key trigger condition for the automated decision-making provisions in the GDPR (art 22) [14] centres on the degree of automation of the process. Significant decisions "based solely on automated processing" require at least consent, a contract or a basis in member state law.…”
Section: Mitigating Automation Bias
confidence: 99%