2018
DOI: 10.1007/978-3-030-01261-8_32
Choose Your Neuron: Incorporating Domain Knowledge Through Neuron-Importance

Abstract: Individual neurons in convolutional neural networks supervised for image-level classification tasks have been shown to implicitly learn semantically meaningful concepts ranging from simple textures and shapes to whole or partial objects, forming a "dictionary" of concepts acquired through the learning process. In this work we introduce a simple, efficient zero-shot learning approach based on this observation. Our approach, which we call Neuron Importance-Aware Weight Transfer (NIWT), learns to map domain knowl…

Cited by 38 publications (19 citation statements)
References 27 publications
“…The results of NIWT [23], RN [20], and DEM [17] are obtained from the released codes on the authors' GitHub page. The rest of these results are cited directly from their published papers.…”
Section: Comparative Results of GZSL
confidence: 99%
“…ZSKL [22] applies well-established kernel methods to learn a nonlinear mapping between the feature and attribute spaces, in contrast to existing approaches that learn a linear mapping function. NIWT [23] uses training instances and the corresponding semantic information to learn a mapping between class-specific semantics and the importance of individual neurons within a deep network. The learned mapping can predict neuron importance from knowledge of unseen classes and then optimize the classification weights.…”
Section: Mapping-Based Methods
confidence: 99%
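As a rough illustration of the mapping step described in this citation statement, the sketch below regresses seen-class semantic attribute vectors onto per-class neuron-importance vectors, then uses the learned map to predict neuron importance for unseen classes. All names, shapes, and the choice of a plain least-squares linear map are illustrative assumptions for exposition, not the authors' released implementation:

```python
# Hypothetical sketch of "semantics -> neuron importance" mapping (assumed
# shapes and random stand-in data; not the NIWT authors' code).
import numpy as np

rng = np.random.default_rng(0)

n_seen, n_unseen = 40, 10
attr_dim, n_neurons = 85, 512  # assumed attribute and feature-layer sizes

A_seen = rng.normal(size=(n_seen, attr_dim))      # seen-class semantics
I_seen = rng.normal(size=(n_seen, n_neurons))     # seen-class neuron importances
A_unseen = rng.normal(size=(n_unseen, attr_dim))  # unseen-class semantics

# Fit a least-squares linear map W so that A_seen @ W approximates I_seen.
W, *_ = np.linalg.lstsq(A_seen, I_seen, rcond=None)

# Predicted neuron importance for unseen classes; in NIWT such predictions
# would then drive the optimization of unseen-class classification weights.
I_unseen = A_unseen @ W
print(I_unseen.shape)  # (10, 512)
```

The linear regression here is only a placeholder for whatever mapping function the method actually learns; the point is the two-stage structure: fit on seen classes, predict importance for unseen ones.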
“…Producing coarse localisation maps in deep neural networks that highlight the input image regions essential for a task was also tackled in [21]. The authors built on [22], which presented a method for mapping unseen objects into a dictionary of known concepts in order to learn classifiers for novel classes. Building on this, the Human Importance-aware Network was introduced in [23], in which a deep neural network is encouraged to be sensitive to the same input regions as humans, and this sensitivity is visualised effectively.…”
Section: Related Work
confidence: 99%
“…Each prototype is then exploited to generate logical rules that provide natural language explanations. From a different perspective, Selvaraju et al. propose a method to learn a mapping between neuron weights and semantic domain knowledge [45]. In work more focused on unsupervised learning, Batet et al. [46] exploit WordNet [47] and its taxonomic knowledge to compute semantic similarities that lead to more interpretable clusters.…”
Section: Explanations for Non-Insiders: Three Research Challenges with Symbolic Systems
confidence: 99%