2017
DOI: 10.48550/arxiv.1711.06178
Preprint

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability

Cited by 29 publications (20 citation statements: 0 supporting, 20 mentioning, 0 contrasting)
References 0 publications
“…However, even a small neural model may not be interpretable. Closest to our work, [7] regularize a neural model to behave like a simple decision tree.…”
Section: Related Work (mentioning)
confidence: 99%
“…Instead of interpreting a model posthoc, an alternative is to optimize a measure of interpretability alongside predictive performance. [7], [11] pose two paths forward: include input gradient explanations or decision tree explanations in the objective function. As a result, models are encouraged to find "more interpretable" minima.…”
Section: Related Work (mentioning)
confidence: 99%
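
The two Related Work excerpts above summarize the tree-regularization idea: penalize a deep model during training so that its decision function stays close to that of a small decision tree. A minimal sketch of the underlying interpretability measure is shown below, assuming a toy PyTorch classifier and scikit-learn; the network, the data, and the helper name average_path_length are illustrative assumptions, not code from the cited paper, and the sketch does not attempt the differentiable surrogate that would let this quantity sit directly inside the training objective.

# Illustrative only: measure how tree-like a model's decision function is by
# fitting a shallow decision tree to its predictions and computing the mean
# root-to-leaf path length over the data. None of the names below come from
# the cited paper.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

# A hypothetical small binary classifier over 10 features.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

def average_path_length(model, X, max_depth=5):
    """Fit a shallow tree to the model's hard predictions on X and return
    the mean number of nodes on each sample's root-to-leaf path."""
    with torch.no_grad():
        logits = model(torch.as_tensor(X, dtype=torch.float32)).squeeze(1)
        y_hat = (torch.sigmoid(logits).numpy() > 0.5).astype(int)
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y_hat)
    # decision_path gives a sparse indicator of the nodes each sample visits;
    # its row sums are the per-sample path lengths.
    return float(tree.decision_path(X).sum(axis=1).mean())

X = np.random.randn(500, 10)
print("average decision-path length:", average_path_length(net, X))

In the spirit of the excerpt from [7]/[11], a measure of this kind would be weighted and added to the task loss, so that training is nudged toward minima whose behavior a small tree can mimic.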
“…The field of Explainable AI (XAI) has emerged specifically for research on the development of interpretable machine learning algorithms that can increase the transparency of black-box models [3,10,13,47,65]. While the majority of XAI techniques focus on expert systems designed for machine learning developers [50,51,62,64], a growing body of work at the intersection of XAI and HCI has developed explainable methodologies targeting non-technical users [11,16].…”
Section: Introduction (mentioning)
confidence: 99%
“…Ismail et al (2020) demonstrate the unreliability and inaccuracy of several explanation methods designed for tabular data in identifying feature importance in temporal models. Some approaches have focused on interpreting specific temporal model architectures, such as recurrent neural networks (Karpathy et al, 2015; Suresh et al, 2017; Ismail et al, 2019) and attention-based models (Choi et al, 2016; Zhang et al, 2019), while others have explored methods to encourage temporal models during training to be more interpretable using tree regularization (Wu et al, 2017) and game-theoretic characterizations (Lee et al, 2018). However, model-agnostic explanation for temporal models has begun to be addressed only recently.…”
Section: Introduction (mentioning)
confidence: 99%