2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00016

Enhancing Decision Tree Based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

Abstract: One obstacle that so far prevents the introduction of machine learning models, primarily in critical areas, is the lack of explainability. In this work, a practicable approach to gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the a…
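The approach outlined in the abstract, regularizing the network during training so that a decision tree surrogate can later approximate it with high fidelity, can be illustrated with a short sketch. The following is a minimal illustration only, not the authors' implementation: the helper name l1_orthogonal_penalty, the restriction to fully connected layers, the penalty weights, and the exact form of the orthogonality term ||W^T W - I||_1 are assumptions made here for readability.

    import torch
    import torch.nn as nn

    def l1_orthogonal_penalty(model, l1_weight=1e-4, ortho_weight=1e-4):
        # Assumed regularizer: an L1 sparsity term plus an orthogonality term
        # ||W^T W - I||_1, summed over all fully connected layers of the model.
        penalty = 0.0
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight                               # (out_features, in_features)
                gram = w.t() @ w                                # Gram matrix of the weight columns
                eye = torch.eye(gram.size(0), device=w.device)
                penalty = penalty + ortho_weight * (gram - eye).abs().sum()
                penalty = penalty + l1_weight * w.abs().sum()
        return penalty

    # During training, the penalty is simply added to the task loss:
    #   loss = criterion(model(x), y) + l1_orthogonal_penalty(model)
    #   loss.backward(); optimizer.step()
    #
    # After training, the surrogate is obtained by fitting a decision tree
    # (e.g. sklearn.tree.DecisionTreeClassifier) to the network's predictions
    # on the training inputs and evaluating its accuracy and fidelity.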

Cited by 23 publications (10 citation statements). References 16 publications.
“…Waltl and Vogl (2018) provide a taxonomy of different learning systems and their natural contribution to explainability. As neural networks are not transparent by nature, there are some efforts to support their explainability (see e.g. Ribeiro, Singh, and Guestrin (2016) or Schaaf, Huber, and Maucher (2019)). Moreover, trust can be further increased by using a visually represented agent to communicate the explanation rather than just displaying it (Weitz et al. 2019).…”
Section: Discussion (mentioning, confidence: 99%)
“…Some try to explain the model as a whole or completely replace it with an inherently understandable model such as a decision tree (Freitas, 2014). Other approaches try to steer the model in the learning process towards a more explainable state (Schaaf & Huber, 2019; Burkart et al., 2019) or focus on just explaining single predictions, for example by highlighting important features (Ribeiro et al., 2016b) or contrasting them to another decision (Wachter et al., 2018). In the following sections, we structure the area of explainable supervised machine learning.…”
Section: Concepts of Explainability (mentioning, confidence: 99%)
“…This has the advantage that the risk of deploying such models can be assessed more easily, that acceptance increases among the employees who are supported by these models, and that errors as well as possible biases of the ML methods can be detected. This can be enabled, for example, by approximating complex models with understandable models or through game-theoretic considerations (Schaaf et al. 2019; Lundberg and Lee 2017).…”
Section: Zuverlässigkeit, Digitale Souveränität und Regulatorische An… (unclassified)