2020
DOI: 10.1007/978-3-030-58536-5_37

Training Interpretable Convolutional Neural Networks by Differentiating Class-Specific Filters

Cited by 35 publications (19 citation statements)
References 26 publications
“…It is shown that neuron units generally extract features that can be interpreted as various levels of semantic concepts, from textures and patterns to objects and scenes. Moreover, to learn interpretable neural networks, one option is to disentangle the representations learned by internal filters, which makes the filters more specialized [45,27]. Inspired by these works, we observe that in deep MDE networks, some hidden units are selective to certain ranges of depth.…”
Section: Introduction (mentioning)
confidence: 80%
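The excerpt above speaks of filters that are "more specialized", i.e. selective for particular classes or depth ranges. As an illustration only, not code from any of the cited works, the sketch below estimates how class-specific each filter is from its mean per-class activation; the function name `filter_class_selectivity` and the entropy-based score are assumptions made for this example.

```python
import torch

def filter_class_selectivity(activations: torch.Tensor,
                             labels: torch.Tensor,
                             num_classes: int) -> torch.Tensor:
    """Score in [0, 1] per filter: 1 means the filter fires for a single
    class, 0 means its response is spread evenly over all classes.
    activations: (N, K) non-negative pooled filter responses
                 (e.g. post-ReLU, global-average-pooled over a held-out set)
    labels:      (N,) ground-truth class indices
    Assumes every class appears at least once in the data.
    """
    num_filters = activations.shape[1]
    per_class = torch.zeros(num_classes, num_filters)
    for c in range(num_classes):
        per_class[c] = activations[labels == c].mean(dim=0)
    # Normalize each filter's response into a distribution over classes.
    probs = per_class / per_class.sum(dim=0, keepdim=True).clamp_min(1e-8)
    # Low entropy over classes = highly class-specific filter.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=0)
    max_entropy = torch.log(torch.tensor(float(num_classes)))
    return 1.0 - entropy / max_entropy
```

A filter scoring near 1 under this diagnostic responds to essentially one class, which is the kind of disentangled, specialized behavior the excerpt attributes to [45,27].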
“…Moreover, other methods that share a similar concept with ours learn more specialized filters. In the interpretable CNNs of [45], each filter represents a specific object part, while a more recent study [27] trains interpretable CNNs by alleviating filter-class entanglement, i.e. each filter responds to only one or a few classes.…”
Section: Interpretable Deep Network for Vision (mentioning)
confidence: 99%
“…Zhang et al. [8] designed interpretable CNNs by making each filter represent a specific object part. Liang et al. [23] trained interpretable CNNs by learning class-specific deep filters, namely, encouraging each filter to account for only a few classes. Similarly, You et al. [13] proposed to improve depth selectivity by designing specific loss functions for MDE models.…”
Section: Interpretable and Explainable Deep Neural Network (mentioning)
confidence: 99%
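Liang et al. [23] is the paper indexed by this report; as the excerpt states, its core idea is to encourage each filter to account for only a few classes. A minimal sketch of one way to realize that idea, via a learnable class-filter gate matrix with an L1 sparsity penalty, is shown below. The class name `ClassSpecificGate` and the exact loss form are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn as nn

class ClassSpecificGate(nn.Module):
    """Illustrative sketch: a learnable gate matrix G in (0, 1)^{C x K}
    that softly assigns each of K filters to a subset of the C classes.
    An L1 penalty on G pushes each filter to serve only a few classes."""

    def __init__(self, num_classes: int, num_filters: int):
        super().__init__()
        # Unconstrained logits, squashed to (0, 1) by a sigmoid in forward().
        self.logits = nn.Parameter(torch.zeros(num_classes, num_filters))

    def forward(self, features: torch.Tensor, labels: torch.Tensor):
        # features: (B, K, H, W) activations of the gated conv layer
        # labels:   (B,) ground-truth class indices
        gates = torch.sigmoid(self.logits)[labels]   # (B, K) per-sample gates
        return features * gates[:, :, None, None]    # mask each filter map

    def sparsity_loss(self) -> torch.Tensor:
        # Mean L1 norm of the gates; smaller = fewer classes per filter.
        return torch.sigmoid(self.logits).mean()
```

In training, one would pass the gated features through the remaining layers and minimize the usual cross-entropy plus a weighted `sparsity_loss()`, so that classification accuracy is retained while the filter-class assignments stay sparse.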
“…Providing human-intelligible explanations for the decisions of DCNNs is the ultimate goal of Explainable Artificial Intelligence (XAI) [9,10]. Existing work on interpretability mainly involves training interpretable machine-learning models [11,12] and explaining black-box models [13-15]. Here, we mainly focus on methods for explaining black-box models.…”
Section: Introduction (mentioning)
confidence: 99%
“…Figure 11. For different interpretation algorithms under top-3% occlusion, the statistics of the object names that users could identify.…”
mentioning
confidence: 99%