2020
DOI: 10.3389/frai.2020.00003

Interpretability With Accurate Small Models

Abstract: Models often need to be constrained to a certain size for them to be considered interpretable. For example, a decision tree of depth 5 is much easier to understand than one of depth 50. Limiting model size, however, often reduces accuracy. We suggest a practical technique that minimizes this trade-off between interpretability and classification accuracy. This enables an arbitrary learning algorithm to produce highly accurate small-sized models. Our technique identifies the training data distribution to learn f…
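
The trade-off the abstract describes can be made concrete with a small experiment. The following is a minimal sketch (not the authors' code), assuming scikit-learn and a synthetic dataset: it compares a depth-limited decision tree against a much deeper one trained on the same data.

# Sketch: accuracy given up when a decision tree is constrained to a small depth.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable, size-constrained tree vs. an effectively unconstrained one.
small = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
large = DecisionTreeClassifier(max_depth=50, random_state=0).fit(X_tr, y_tr)

print("depth-5 test accuracy :", small.score(X_te, y_te))
print("depth-50 test accuracy:", large.score(X_te, y_te))

Typically the unconstrained tree fits the data better, which is the accuracy the paper's technique aims to recover while keeping the model small (though on noisy data a very deep tree can also overfit).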

Cited by 4 publications (3 citation statements)
References 55 publications
“…While decision trees (DT) are generally considered interpretable (Letham et al, 2015), trees of arbitrarily large depths can be difficult to understand (Ghose and Ravindran, 2020) and simulate (Lipton, 2018). A sufficiently sparse DT is desirable and considered interpretable (Lakkaraju et al, 2016).…”
Section: ICCT Architecture
Mentioning confidence: 99%
“…Therefore, interpretable models should preferably be small in size as well as sufficiently high-performing. To keep explanation complexity manageable, there is a significant need for shrinkage methods for ML models [5]. For example, a decision tree of depth 5 is easier to understand than one of depth 50.…”
Section: Introduction
Mentioning confidence: 99%
“…Alternatively, there are methods applied while generating a model that aim to find a trade-off between model accuracy and complexity [10]. At this stage, optimal sampling techniques or model structures that lead to higher accuracy and lower complexity might be determined [11, 12].…”
Section: Introduction
Mentioning confidence: 99%
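
The "optimal sampling" idea in the last statement can be illustrated with a toy reweighting loop. This is a hedged sketch and not the method of Ghose and Ravindran (2020): it keeps the tree at a fixed small depth and simply upweights training points the small tree misclassifies before retraining; the round count and the 1.5 multiplier are arbitrary choices for illustration.

# Sketch: change the training distribution (via sample weights) while the model size stays fixed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

weights = np.ones(len(y_tr))
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
for _ in range(10):                       # a few reweighting rounds (illustrative schedule)
    tree.fit(X_tr, y_tr, sample_weight=weights)
    wrong = tree.predict(X_tr) != y_tr
    weights[wrong] *= 1.5                 # emphasize points the small tree gets wrong
    weights *= len(weights) / weights.sum()  # renormalize so total weight stays constant

print("reweighted depth-5 test accuracy:", tree.score(X_te, y_te))

Any accuracy gain over a plain depth-5 tree here comes entirely from changing the training distribution, not from enlarging the model, which is the kind of trade-off the cited works explore.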