2020
DOI: 10.1016/j.inffus.2020.03.013

Explainable decision forest: Transforming a decision forest into an interpretable tree

Cited by 130 publications (48 citation statements)
References 29 publications
“…The authors of [8] developed reasoning through the use of visual indicators, making the model interpretable. In [9], the authors proposed deriving an interpretable tree from a decision forest, making it understandable by humans. As proposed in [10,11], interpretable ML models help develop a reasonable, data-driven decision support system that results in personalised decisions.…”
Section: Interpretable Machine Learning
confidence: 99%
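The transformation described in [9] can be approximated in a few lines. What follows is a minimal distillation-style sketch, not the authors' conjunction-set algorithm: a single surrogate tree is fit to the forest's predictions rather than the true labels, so it imitates the ensemble while remaining human-readable. The dataset and parameters are illustrative.

```python
# Minimal sketch: approximating a decision forest with a single
# interpretable tree by fitting the tree to the forest's own predictions
# (a distillation-style surrogate; NOT the exact algorithm of [9]).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# The black-box ensemble to be explained.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate tree trained on the forest's labels rather than the true
# labels, so it imitates the ensemble's decision function.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, forest.predict(X))

# The surrogate prints as human-readable if/then rules.
print(export_text(surrogate))
print("fidelity to forest:", surrogate.score(X, forest.predict(X)))
```

The `max_depth` cap is what keeps the surrogate interpretable; raising it increases fidelity to the forest at the cost of readability.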
“…The leaf nodes can also acquire an effective fault-classification ability in the proposed tree-structured decision layer to deal with cross-severity fault diagnosis tasks, which lays the foundation for better generalization of the model. However, weak knowledge-learning ability has long limited its application [31][32][33]. Although decision trees are interpretable and simple to use, they are prone to overfitting, can be less robust to small changes in the training data, and generally rely on heuristic algorithms.…”
Section: Proposed Deep Convolutional Tree-Inspired Network
confidence: 99%
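The overfitting tendency noted in this excerpt is easy to demonstrate. Below is a minimal sketch, with an illustrative dataset of my choosing: an unconstrained tree memorizes the training set, while a depth-limited tree narrows the train/test gap.

```python
# Minimal sketch of decision-tree overfitting: compare an unconstrained
# tree against a depth-limited one on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# The deep tree typically scores ~1.0 on training data but lower on
# held-out data; the shallow tree trades training fit for generalization.
print("deep:    train %.3f  test %.3f"
      % (deep.score(X_tr, y_tr), deep.score(X_te, y_te)))
print("shallow: train %.3f  test %.3f"
      % (shallow.score(X_tr, y_tr), shallow.score(X_te, y_te)))
```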
“…To avoid overfitting and extensive parameter tuning, the algorithm optimizes the rule lists and reduces the hyperparameters, allowing the model to trade off between complexity and goodness of fit. Sagi et al. presented a method to transform decision forests into interpretable decision trees [29], which aims to maintain the predictive performance of the decision forest while enabling humans to understand its effective classification.…”
Section: Interpretable Model Extraction
confidence: 99%
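The complexity-versus-fit trade-off this excerpt mentions can be made concrete with cost-complexity pruning. This is a generic scikit-learn sketch, not the cited algorithm: a larger pruning strength `ccp_alpha` yields a smaller, simpler tree at some cost in accuracy.

```python
# Minimal sketch of the complexity-vs-goodness-of-fit trade-off using
# scikit-learn's cost-complexity pruning (not the cited method): larger
# ccp_alpha prunes harder, shrinking the tree.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate pruning strengths derived from the training data.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
for alpha in path.ccp_alphas[::5]:  # sample a few pruning strengths
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_tr, y_tr)
    print("alpha=%.4f  leaves=%3d  test acc=%.3f"
          % (alpha, tree.get_n_leaves(), tree.score(X_te, y_te)))
```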
“…Many explanations for black-box models are achieved by feature visualization, whether through explainable model-based or model-specific interpretable methods. We collect five methods, including LIME [29], Anchors [30], CAM [71], Guided Grad-CAM [70], and meaningful perturbation [28], to show the interpretability.…”
Section: Applications of Feature Visualization
confidence: 99%
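Of the five methods listed, LIME is the simplest to demonstrate. Below is a minimal sketch using the `lime` package on tabular data; the dataset, model, and parameters are illustrative choices, not taken from the cited study.

```python
# Minimal sketch: explaining one prediction of a black-box model with
# LIME (pip install lime). LIME fits a local linear surrogate around the
# instance and reports the locally most influential features.
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction as a list of (feature condition, weight) pairs.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())
```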