2019
DOI: 10.2139/ssrn.3404040

Transparency As Design Publicity: Explaining and Justifying Inscrutable Algorithms

Abstract: In this paper we argue that transparency, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable. These approaches simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of transparency, which consists in explaining the artifact as an intentional product that serves a particular g…

Cited by 10 publications (18 citation statements)
References 18 publications
“…However, before determining whether a specific AI system is legal, one must consider which mechanisms are available to establish its behaviour and performance (i.e. what it is doing at…). [Footnote 14:] A parallel can be made to what Loi et al (2020) called transparency as design publicity, whereby organisations that design or deploy AI systems are expected to publicise the intentional explanation of the use of a specific system as well as the procedural justification of the decision it takes. [Footnote 15:] The Centre for Data Ethics and Innovation (CDEI) is an independent advisory body to the UK Government.…”
Section: Previous Research: Enforcement Mechanisms and AI Auditing (mentioning)
confidence: 99%
“…Finally, closer to our approach, [22] criticizes "anormative" explanations and calls for the explicit definition of algorithm "goals" that should be understandable. Once these goals are defined, a decision is justified when the "evidence" that it meets them can be provided.…”
Section: Related Work (mentioning)
confidence: 99%
“…Indeed, one of the areas of major concern has been the need to adapt the guarantees provided by the general regulation on transparency and access to public-sector information to algorithms (Cerrillo, 2018), in light of the fact that the regulatory framework is rather outdated with respect to AI's requirements. Therefore, proposals based on the preventive application of general principles in the design of applications are particularly thought-provoking (Loi et al., 2019). One of the main topics of discussion concerns the transparency of algorithms and, in particular, cases in which private entities contracted by public administrations participate in their design.…”
Section: Regulating Algorithms in the Public Sector (mentioning)
confidence: 99%