2021
DOI: 10.1007/978-3-030-89811-3_12

Making Things Explainable vs Explaining: Requirements and Challenges Under the GDPR

Cited by 9 publications (4 citation statements)
References 19 publications
“…In our qualitative evaluation, we collected feedback from sales representatives via questionnaires, interviews, and other feedback channels. Similar approaches have been proposed in [15] and [27], where the authors argued that "subjective satisfaction is the only reasonable metrics to evaluate success in explanation". We have conducted a survey within a small group of sales representatives on the helpfulness of Intellige-based sales recommendations (ratings from 1, "not helpful at all", to 5, "couldn't do my job without them").…”
Section: Evaluation Results
confidence: 86%
See 1 more Smart Citation
“…In our qualitative evaluation, we collected feedback from sales representatives via questionnaires, interviews, and other feedback channels. Similar approaches have been proposed in [15] and [27], where the authors argued that "subjective satisfaction is the only reasonable metrics to evaluate success in explanation". We have conducted a survey within a small group of sales representatives on the helpfulness of Intellige-based sales recommendations (ratings from 1 -not helpful at all to 5 -couldn't do my job without them).…”
Section: Evaluation Resultsmentioning
confidence: 86%
“…Recent work on creating narrative explanations via template-based approaches includes populating predefined narrative templates with the most important features to explain the recommendation models [22][23][24]. In [25], a Java package provides narrative justifications for logistic/linear regression models, the authors of [26] propose a way to generate narrative explanations using logical knowledge translated from a decision tree model, and [27] introduce a rule-based explainer for GDPR automated decisions that applies to explainable models. However, all these aforementioned template-based approaches are only applicable to a subset of machine learning models, and can easily fail when facing a more complex model such as a random forest.…”
Section: Related Work
confidence: 99%
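To make the template-filling idea in the statement above concrete, here is a minimal sketch, not taken from any of the cited papers: it assumes a logistic regression model and invented feature names (recent_activity, account_size, support_tickets are illustrative), and populates a fixed narrative template with the features that contribute most to one prediction.

```python
# Minimal sketch of a template-based narrative explainer.
# All feature names and template wording are illustrative assumptions,
# not drawn from the cited papers.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: 3 features, binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
feature_names = ["recent_activity", "account_size", "support_tickets"]

model = LogisticRegression().fit(X, y)

def narrative_explanation(model, x, feature_names, top_k=2):
    # Per-feature contribution to the linear score for this instance.
    contributions = model.coef_[0] * x
    # Pick the top_k features with the largest absolute contribution.
    top = np.argsort(-np.abs(contributions))[:top_k]
    parts = [
        f"{feature_names[i]} ({'raises' if contributions[i] > 0 else 'lowers'} the score)"
        for i in top
    ]
    # Fixed narrative template populated with the selected features.
    return "This recommendation is driven mainly by " + " and ".join(parts) + "."

print(narrative_explanation(model, X[0], feature_names))
```

Because the narrative is read directly off linear coefficients, the same template cannot be filled from a random forest without a separate attribution step, which is exactly the limitation the citation statement points out.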
“…Authors of paper 38 considered that an explanation is the result of an explanatory process and introduced the SAGES (Simple, Adaptable, Grounded, Expandable, Sourced) guidance model for the implementation of a tool for the exploration of the explanatory space of a good explanatory process:…”
Section: Explanations
confidence: 99%
“…Authors of paper 38 considered that an explanation is the result of an explanatory process and introduced the SAGES (Simple, Adaptable, Grounded, Expandable, Sourced) guidance model for the implementation of a tool for the exploration of the explanatory space of a good explanatory process: (i) simple: XAI systems need to consider whether an explanation is available and to understand what it actually means. Explanations should be designed on the basis of pre‐existing user knowledge to ensure the satisfaction of the specified user; (ii) adaptable: users should be allowed to navigate the explanatory space depending on their individual objectives, interests, and context, rather than being informed about all aspects of the process that they might not be interested in.…”
Section: Background and Related Concepts
confidence: 99%