Visualizing surrogate decision trees of convolutional neural networks
2019. DOI: 10.1007/s12650-019-00607-z

Cited by 28 publications (17 citation statements: 0 supporting, 17 mentioning, 0 contrasting). References 27 publications.

“…Besides, a new generation of AI which has better reliability, interpretability, accountability, and transparency than black-box AI is worth investing in to overcome the "black box" dilemma. For example, Jia et al. created visualizing surrogate decision trees of convolutional neural networks with Python (161).…”
Section: Limitations and Future Considerations (mentioning; confidence: 99%)
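
The technique this statement cites reduces to a simple recipe: train an interpretable decision tree on the labels the black-box CNN itself assigns, then read the tree's rules as the explanation. A minimal sketch with scikit-learn, where black_box_predict is an illustrative stand-in for a trained CNN (none of these names come from the cited paper):

    # Fit an interpretable surrogate decision tree to a black-box model's labels.
    # `black_box_predict` is a hypothetical stand-in for a trained CNN.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))        # e.g., features the CNN consumes

    def black_box_predict(X):
        # stand-in for cnn.predict(X).argmax(axis=1)
        return (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

    y_surrogate = black_box_predict(X)    # targets are the model's own predictions
    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_surrogate)

    # The tree's rules are the human-readable explanation of the black box.
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))

Keeping max_depth small is the usual design choice: a shallow tree trades some fidelity for rules a person can actually read.
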
“…Visualizing (deep) neural networks has become a popular research focus in recent years. However, in most cases the approaches focus on explaining and debugging the models, for instance, visualizing which pixel regions are most supportive for the prediction [64], explaining predictions of convolutional neural networks with surrogate decision trees [26], or visualizing activation patterns to understand deep learning models [29]. Liu et al. [33] provide an overview of visual analytic approaches for understanding, debugging and refining machine learning models, Choo et al. [12] for explainable deep learning, and recently Sacha et al. [47] for assisting machine learning.…”
Section: Related Work (mentioning; confidence: 99%)
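
The first approach named in this statement, visualizing which pixel regions most support a prediction, is often done by occlusion: mask one patch at a time and record how much the class score drops. A self-contained NumPy sketch, with model_score as a toy stand-in for a CNN's class probability (all names illustrative, not the method of any one cited paper):

    # Occlusion-style saliency: the score drop caused by masking each patch
    # marks the regions that were supportive of the prediction.
    import numpy as np

    def occlusion_map(image, model_score, patch=8):
        h, w = image.shape
        base = model_score(image)
        saliency = np.zeros((h, w))
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
                saliency[i:i + patch, j:j + patch] = base - model_score(occluded)
        return saliency

    # Toy "model": scores the mean brightness of the image center.
    def model_score(img):
        return img[8:24, 8:24].mean()

    img = np.zeros((32, 32))
    img[12:20, 12:20] = 1.0               # a bright object in the center
    print(occlusion_map(img, model_score)[::8, ::8])  # per-patch score drops
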
“…Our technique contributes another approach to interpretable machine learning [13,31]. Previous works either open the black boxes [40,41,43,45,62,68], which requires analysts to have machine learning knowledge, or explain the black boxes using surrogate models [33,46], which adds another layer of uncertainty.…”
Section: Interactive Classification (mentioning; confidence: 99%)
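
The "layer of uncertainty" this statement attributes to surrogate models can be quantified as fidelity: how often the surrogate agrees with the black box on held-out inputs. A small sketch, again with scikit-learn and illustrative names, showing the usual depth/fidelity trade-off:

    # Held-out fidelity of a surrogate tree to a black-box model. Low fidelity
    # means the tree explains itself, not the original model.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 8))

    def black_box_predict(X):             # hypothetical stand-in for a CNN
        return (np.sin(X[:, 0]) + X[:, 1] > 0).astype(int)

    y = black_box_predict(X)
    X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

    for depth in (1, 2, 4, 8):
        tree = DecisionTreeClassifier(max_depth=depth).fit(X_fit, y_fit)
        fidelity = (tree.predict(X_val) == y_val).mean()
        print(f"depth={depth}: held-out fidelity {fidelity:.2%}")

Deeper trees track the black box more closely but stop being readable, which is exactly the trade-off behind the citing authors' caution.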