Re-training Deep Neural Networks to Facilitate Boolean Concept Extraction
2017 · DOI: 10.1007/978-3-319-67786-6_10


Cited by 8 publications (7 citation statements) · References 17 publications

“…More recently, Zilke et al. (2016) proposed an algorithm that extracts decision trees per layer, which can be merged into one rule set for the complete NN. González et al. (2017) improve on this algorithm by polarizing real-valued activations and pruning weights through retraining. Both rely on the C4.5 (Quinlan, 2014) decision tree algorithm for rule extraction.…”
Section: Related Work (mentioning, confidence: 99%)
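
The retraining idea credited to González et al. (2017) in the statement above can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the authors' published procedure: a penalty of the form a(1 - a) drives hidden sigmoid activations toward Boolean values during re-training, and small-magnitude weights are zeroed afterwards. The model size, data, loss weight, and pruning threshold are all invented for the example.

```python
# Minimal sketch (not the authors' exact procedure) of re-training a network
# so that its hidden activations become approximately Boolean, followed by
# magnitude-based weight pruning. All names and values are illustrative.
import torch
import torch.nn as nn

class SmallMLP(nn.Module):
    def __init__(self, n_in=10, n_hidden=8, n_out=2):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.fc1(x))  # hidden activations to polarize
        return self.fc2(h), h

def polarization_penalty(h):
    # a * (1 - a) is zero exactly at a in {0, 1} and maximal at a = 0.5,
    # so minimizing it pushes sigmoid activations toward Boolean values.
    return (h * (1.0 - h)).mean()

def prune_small_weights(layer, threshold=0.05):
    # Zero out small weights so each neuron depends on fewer inputs,
    # which keeps the rules extracted later short.
    with torch.no_grad():
        layer.weight.mul_(layer.weight.abs() >= threshold)

model = SmallMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x = torch.randn(256, 10)            # toy inputs
y = torch.randint(0, 2, (256,))     # toy labels

for _ in range(100):                # re-training loop
    optimizer.zero_grad()
    logits, h = model(x)
    loss = criterion(logits, y) + 0.1 * polarization_penalty(h)
    loss.backward()
    optimizer.step()

prune_small_weights(model.fc1)
prune_small_weights(model.fc2)
```
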
“…From Equation (8) and the condition of Equation (2), Lagrange's undetermined multiplier α is determined, and Equation (8) is rewritten as Equation (5). Table 1 shows the detailed settings for the experiments described in sections 3.1, 3.2, and 3.3.…”
Section: Appendix A: Proof of EM Algorithm for Community Detection (mentioning, confidence: 99%)
“…• Approach B: Training interpretable layered neural networks. This approach devises training methods so that the trained layered neural network is represented by an interpretable function ([7,8]). For instance, it has been proposed to train a network so that it performs an affine transformation, or so that it can be represented in a rule-based manner.…”
Section: Introduction (mentioning, confidence: 99%)
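
The rule-based representation this statement refers to can likewise be sketched via layer-wise, tree-based extraction in the spirit of Zilke et al. (2016). The self-contained illustration below uses scikit-learn's CART trees as a stand-in for the C4.5 algorithm the cited works actually rely on, and a toy MLPClassifier as the trained network; binarized hidden activations play the role of Boolean concepts.

```python
# Self-contained sketch of layer-wise rule extraction with decision trees.
# sklearn's CART stands in for the C4.5 algorithm named in the statements,
# and a small MLPClassifier stands in for the trained layered network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                    random_state=0).fit(X, y)

# Binarize the hidden layer: relu(X @ W + b) > 0 marks each "concept" as
# active or inactive, standing in for polarized Boolean activations.
H = (X @ net.coefs_[0] + net.intercepts_[0]) > 0.0

# One tree maps the Boolean concepts to the network's own predictions ...
tree_out = DecisionTreeClassifier(max_depth=3).fit(H, net.predict(X))

# ... and one tree per hidden unit maps the raw inputs to that concept.
trees_hidden = [DecisionTreeClassifier(max_depth=3).fit(X, H[:, j])
                for j in range(H.shape[1])]

print(export_text(tree_out,
                  feature_names=[f"h{j}" for j in range(H.shape[1])]))
```

Composing the per-unit trees with the output tree yields input-level Boolean rules for the whole network, which is the merging step the first statement describes.
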
“…Consequently, the interest in methods that make learned models more interpretable has increased with the success of deep learning. Some research has been devoted to converting such arcane models into more interpretable rule-based (Andrews et al., 1995) or tree-based models (Frosst and Hinton, 2017), which may be facilitated with appropriate neural network training techniques (González et al., 2017). Instead of making the entire model interpretable, methods like LIME are able to provide local explanations for inscrutable models, allowing a trade-off between fidelity to the original model and the interpretability and complexity of the local model.…”
Section: Neural Network and Deep Learning (mentioning, confidence: 99%)
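
The LIME-style local explanation this statement mentions can be sketched without the LIME package itself: sample perturbations around one instance, weight them by proximity, and fit a weighted linear model whose coefficients serve as the local explanation. The function below is a simplified, hypothetical rendition of that idea, and the usage line assumes the `net` and `X` objects from the previous sketch.

```python
# Simplified, hypothetical sketch of a LIME-style local explanation:
# perturb one instance, weight the perturbed samples by proximity,
# and fit a weighted linear model whose coefficients explain the
# opaque model's behavior near that instance.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(predict_proba, x0, n_samples=1000,
                      scale=0.5, kernel_width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Z = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    dist = np.linalg.norm(Z - x0, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)  # proximity kernel
    target = predict_proba(Z)[:, 1]      # opaque model's output near x0
    local = Ridge(alpha=1.0).fit(Z, target, sample_weight=weights)
    return local.coef_                   # local feature attributions

# Usage, assuming `net` and `X` from the previous sketch:
print(local_explanation(net.predict_proba, X[0]))
```
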