2019
DOI: 10.1609/aaai.v33i01.33019780

Meaningful Explanations of Black Box AI Decision Systems

Abstract: Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML sys…
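The abstract describes a black box that maps a user's features to a score without exposing its reasons. One common way to recover a meaningful local explanation (not the specific method of this paper, just an illustrative technique in the spirit of local surrogate approaches such as LIME) is to query the opaque model on perturbations of an instance and fit an interpretable linear model to its answers. The `black_box` scorer below is a hypothetical stand-in:

```python
import numpy as np

# Hypothetical opaque scorer: we may query it, but not inspect its internals.
def black_box(x):
    return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 3.0 * x[1] + 0.5)))

def local_surrogate(f, x0, n=500, scale=0.1, seed=0):
    """Fit a local linear surrogate of f around x0 by querying perturbations.

    Returns (weights, intercept): the weights indicate which features push
    the score up or down in the neighbourhood of x0.
    """
    rng = np.random.default_rng(seed)
    X = x0 + scale * rng.standard_normal((n, x0.size))   # perturbed inputs
    y = np.array([f(x) for x in X])                      # black-box answers
    A = np.hstack([X, np.ones((n, 1))])                  # design matrix + bias
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]

x0 = np.array([1.0, 0.5])
weights, intercept = local_surrogate(black_box, x0)
# weights[0] > 0 and weights[1] < 0: locally, the first feature raises
# the score while the second lowers it -- a human-readable explanation.
```

The surrogate is faithful only near `x0`; a global linear fit would miss the non-linearity the black box may contain elsewhere, which is exactly why local explanations are studied.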

Cited by 139 publications (102 citation statements)
References 22 publications
“…In black box models, it can be challenging to determine what is coordinating the visible patterns. Such models are problematic not only for lack of transparency but also for possible biases inherited by the algorithms from clinicians mistakes [53]. This issue is caused based on the human errors and biased sampling of training data as well as the underestimation of the impact of the risk factors underlying behaviour/pattern.…”
Section: Visualisation In Deep Learning
confidence: 99%
“…The prevalence of CRL as interpretable models indicates the importance of logical rules for explainability. Logical rules are intuitive to understand, being the standard language of reasoning [20,36] and are the paradigm that we have adopted in our method.…”
Section: XAI and Interpretable Models - Current State of the Art
confidence: 99%
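The snippet above stresses that logical rules are intuitive because they read as the standard language of reasoning. A minimal sketch of what such a rule-based explanation looks like in code (purely illustrative; the class names, features, and thresholds are hypothetical, not taken from the cited methods):

```python
from dataclasses import dataclass

# A decision rule is a conjunction of simple feature conditions,
# rendered in an IF ... AND ... THEN form a human can read directly.

@dataclass
class Condition:
    feature: str
    op: str          # one of '<=', '>'
    threshold: float

    def holds(self, record):
        v = record[self.feature]
        return v <= self.threshold if self.op == '<=' else v > self.threshold

    def __str__(self):
        return f"{self.feature} {self.op} {self.threshold}"

@dataclass
class Rule:
    conditions: list
    outcome: str

    def covers(self, record):
        return all(c.holds(record) for c in self.conditions)

    def __str__(self):
        body = " AND ".join(str(c) for c in self.conditions)
        return f"IF {body} THEN {self.outcome}"

rule = Rule([Condition("age", ">", 25), Condition("income", "<=", 900)], "deny")
applicant = {"age": 30, "income": 800}
# str(rule) -> "IF age > 25 AND income <= 900 THEN deny"
```

Because the rule is both executable (`rule.covers(applicant)`) and directly verbalizable, it serves simultaneously as a classifier and as its own explanation, which is what makes rule-based proxies attractive for XAI.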
“…Yet we wish to communicate explanations to a variety of levels of domain expertise: patient, practitioner, healthcare administrators and regulators. Additionally, we set higher standards of statistical rigour before granting our trust to ML derived decisions and explanations [20,21].…”
Section: Introduction
confidence: 99%
“…The above mentioned methods are examples of globally interpretable proxy models; they allow the user to infer some understanding of the black box model's overall behaviour.…”
Section: XAI and Interpretable Models - Current State of the Art
confidence: 99%