2020
DOI: 10.1007/978-3-030-58957-8_16

Modelling GDPR-Compliant Explanations for Trustworthy AI

Abstract: Through the General Data Protection Regulation (GDPR), the European Union has set out its vision for Automated Decision-Making (ADM) and AI, which must be reliable and human-centred. In particular, we are interested in the Right to Explanation, which requires industry to produce explanations of ADM. The High-Level Expert Group on Artificial Intelligence (AI-HLEG), set up to support the implementation of this vision, has produced guidelines discussing the types of explanations that are appropriate for user-centre…


Cited by 13 publications (12 citation statements)
References 21 publications
“…The proposed solution builds on the extraction and structuration of an Explanatory Space (ES), intended (as in [28]) as the set of all possible explanations (about an explanandum) reachable by a user, through an explanatory process, starting from an initial explanans, via a pre-defined set of actions. According to the model of Sovrano et al., we might see the ES as a graph of interconnected bits of explanation, and an explanation as nothing more than a path within the ES.…”
Section: Proposed Solution
confidence: 99%
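The graph reading of the Explanatory Space in the statement above can be sketched as a small data structure: nodes are bits of explanation, edges are the pre-defined explanatory actions, and an explanation is a path starting from the initial explanans. This is a minimal illustrative sketch; the node labels and the breadth-first search are placeholders, not part of the cited model.

```python
from collections import deque

# Hypothetical sketch of an Explanatory Space (ES) as a directed graph:
# nodes are bits of explanation, edges are explanatory actions,
# and an explanation is a path starting from the initial explanans.
es = {
    "initial explanans": ["what is ADM?", "what does the GDPR require?"],
    "what is ADM?": ["why must ADM be explained?"],
    "what does the GDPR require?": ["why must ADM be explained?"],
    "why must ADM be explained?": [],
}

def find_explanation(es, start, goal):
    """Breadth-first search for one explanation, i.e. one path in the ES."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in es.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_explanation(es, "initial explanans", "why must ADM be explained?"))
```

Under this reading, different users reach different explanations simply by following different paths through the same graph.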
“…Additionally, the recent White Paper on Artificial Intelligence [10] issued by the European Commission stressed the need to monitor and audit not only the algorithms of Automated Decision-Making (ADM) systems but also the data records used for training, developing, and running AI systems, in order to fight opacity and improve transparency. From a technical point of view, there is technology-specific information to consider in order to fully meet the explanation requirements of the GDPR; for a more detailed overview, refer to [35]. The qualities of explanations are listed in different works [25], but the EU Parliament [31] lists the following as a good summary of the current state of the art: intelligibility, understandability, fidelity, accuracy, precision, level of detail, completeness, consistency.…”
Section: Background: The Right to Explanation
confidence: 99%
“…Thus, in order to answer our research question, what we need is to design a process that effectively allows users to extract explanatory narratives from an Explanatory Space. In [35] we present our model of the Explanatory Narrative Process, making specific reference to the GDPR and the AI-HLEG guidelines, modelling a generic explanatory process and giving formal definitions of explanandum, explanans and Explanatory Space. Hereafter we show a plausible example of YAI in action.…”
Section: Proposed Solution
confidence: 99%
“…In fact, we might see the space of all the explanations about an explanandum (or Explanatory Space [33]) as a sort of manifold where every point is a piece of interconnected explainable information: not user-centred locally (because it is the same for every user), but user-centred globally, as an element of a sequence of information that users can choose according to their interest drifts while exploring the space.…”
Section: Proof of Concept: An Algorithm for Generating Explanations
confidence: 99%
“…Hence, we designed a novel pipeline of AI algorithms for the generation of pragmatic explanations through the extraction and structuration of an Explanatory Space (ES) [33], intended as the set of all possible explanations (about an explanandum) reachable by a user through an explanatory process, via a pre-defined set of actions, i.e. Open Question Answering and Overviewing.…”
Section: Introduction
confidence: 99%
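The two actions named in the last statement, Open Question Answering and Overviewing, can be illustrated as operations over a toy corpus of explanation snippets. This is a naive keyword-matching sketch under assumed data; the function names, the corpus, and the matching logic are hypothetical placeholders, not the authors' actual pipeline.

```python
# Hypothetical toy corpus of explanation snippets, keyed by topic.
corpus = {
    "GDPR": "The GDPR sets out a right to explanation for automated decisions.",
    "ADM": "Automated Decision-Making systems must be reliable and human-centred.",
}

def open_question_answering(corpus, question):
    """Naive retrieval: return snippets whose topic key appears in the question."""
    return [text for key, text in corpus.items() if key.lower() in question.lower()]

def overviewing(corpus, topic):
    """Naive overview: concatenate every snippet that mentions the topic."""
    return " ".join(t for t in corpus.values() if topic.lower() in t.lower())

print(open_question_answering(corpus, "What does the GDPR require?"))
print(overviewing(corpus, "explanation"))
```

In the cited pipeline these actions are the edges of the Explanatory Space: each answers-or-overviews step moves the user from one bit of explanation to the next.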