2019
DOI: 10.1109/tvcg.2019.2903943

DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation

Cited by 111 publications (62 citation statements)
References 22 publications
“…It is very hard to understand what happens in the hidden layers and why a trained NN gives a positive diagnosis for a given input sample. This ''black-box'' aspect is very restrictive in many application fields, where the interpretation of a decision can lead to serious legal consequences, especially in safety-critical applications [14] (e.g., medical diagnosis [15], [16], autonomous driving, electric power generation…”
Section: Introduction (citation type: mentioning)
confidence: 99%
See 1 more Smart Citation
“…It is very hard to understand what happens in the hidden layers and why a trained NN gives a positive diagnosis for a given input sample. This ''black-box'' aspect is very restrictive in many application fields, where the interpretation of a decision can lead to serious legal consequences especially in safetycritical applications [14] (e.g., medical diagnosis [15], [16], autonomous driving, electric power generation. .…”
Section: Introductionmentioning
confidence: 99%
“…Feature maps of the model are then obtained. The t-SNE method has been used and reported in many research publications such as [8], [14], [21]–[25]. The main constraint of this method is the lack of repeatability due to the minimization of the Kullback-Leibler divergence between the input space distribution and the embedding space distribution [22].…”
Section: Introduction (citation type: mentioning)
confidence: 99%
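The repeatability constraint quoted above can be illustrated directly: t-SNE minimizes a non-convex KL-divergence objective from a random initialization, so two runs on the same data generally produce different embeddings unless the random seed is pinned. A minimal sketch using scikit-learn's `TSNE` on synthetic feature vectors (the data and parameter choices here are illustrative assumptions, not from the cited paper):

```python
# Illustrative sketch (not from the cited work): t-SNE's embedding depends on
# its random initialization, so results are not repeatable across seeds.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 16))  # 50 synthetic "feature map" vectors

# Two runs with different seeds generally yield different embeddings.
emb_a = TSNE(n_components=2, perplexity=10, init="random",
             random_state=0).fit_transform(X)
emb_b = TSNE(n_components=2, perplexity=10, init="random",
             random_state=1).fit_transform(X)
print("identical across seeds:", np.allclose(emb_a, emb_b))  # typically False

# Fixing random_state restores repeatability for a given sklearn version.
emb_c = TSNE(n_components=2, perplexity=10, init="random",
             random_state=0).fit_transform(X)
print("identical with same seed:", np.allclose(emb_a, emb_c))
```

Note that even a fixed seed only guarantees repeatability within one library version and machine; the underlying objective remains non-convex, which is the limitation the citing authors point out.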
“…Hohman et al [29] presented a comprehensive survey to summarize the state-of-the-art visual analysis methods for explainable deep learning. Existing methods can be categorized into three classes: network-centric [30], [31], [32], instance-centric [20], [33], [34], [35], and hybrid [36], [37]. Network-centric methods.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…Liu et al [28] surveyed the recent progress on visualizations developed to understand, diagnose, and refine ML models. For example, Wang et al [50] developed an interpretation approach to review the inner mechanisms of complicated deep neural networks. Manifold [51] is a framework for visually interpreting, debugging, and comparing ML models.…”
Section: Visualizations for XAI (citation type: mentioning)
confidence: 99%