2016
DOI: 10.1007/978-3-319-46307-0_29
DeepRED – Rule Extraction from Deep Neural Networks

Cited by 176 publications (109 citation statements)
References 13 publications
“…Rule-based learner: [82,83,147,148,251,252,253,254,255,256]; Decision Tree: [21,56,79,81,97,135,257,258,259]; Others: [80].
Feature relevance explanation: Importance/Contribution: [60,61,110,260,261]; Sensitivity/Saliency: [260], [262].
Local explanation: Decision Tree/Sensitivity: [233], [263].
Explanation by Example: Activation clusters: [264,144].
Text explanation: Caption generation: [111], [150].
Visual explanation: Saliency/Weights: [265].
Architecture modification: Others: [264], [266], [267].
Convolutional Neural Networks: Explanation by simplification: Decision Tree: [78].
Feature relevance explanation: Activations: [72,268], [46]; Feature Extraction: [72,268].
Visual explanation: Filter/Activation: [63,136,137,…]…”
Section: Explanation by Simplification
Confidence: 99%
“…Several model-simplification techniques have been proposed for neural networks with a single hidden layer, but very few works address networks with multiple hidden layers. One of these few is the DeepRED algorithm [257], which extends the decompositional approach to rule extraction (splitting at the neuron level) presented in [259] to multi-layer neural networks by adding more decision trees and rules.…”
Section: Multi-layer Neural Network
Confidence: 99%
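The decompositional scheme this statement describes (extract a rule per layer by splitting at the neuron level, then merge the layer-wise rules into input-to-output rules) can be sketched on a toy network. Everything below is an illustrative assumption for the sketch: the fixed weights, the step activation, and the truth-table "rules" stand in for DeepRED's decision trees, and this is not the published algorithm.

```python
# Minimal sketch of decompositional (per-neuron) rule extraction in the
# spirit of CRED/DeepRED. Weights and rule representation are illustrative.
from itertools import product

def step(z):
    # Binary activation: the neuron "fires" iff its net input is positive.
    return 1 if z > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

# A tiny fixed 2-2-1 network that computes XOR.
def hidden(x):
    h1 = neuron(x, (1, 1), -0.5)     # behaves like OR(x1, x2)
    h2 = neuron(x, (1, 1), -1.5)     # behaves like AND(x1, x2)
    return (h1, h2)

def output(h):
    return neuron(h, (1, -2), -0.5)  # behaves like h1 AND NOT h2

# Step 1: extract a "rule" for each layer separately, describing the
# layer's output purely in terms of the layer directly below it
# (here a truth table stands in for a per-neuron decision tree).
hidden_rule = {x: hidden(x) for x in product((0, 1), repeat=2)}
output_rule = {h: output(h) for h in product((0, 1), repeat=2)}

# Step 2: merge the per-layer rules by substitution, yielding rules
# that map network inputs directly to the network output.
merged_rule = {x: output_rule[hidden_rule[x]] for x in hidden_rule}

# The merged rule set is a faithful surrogate of the network:
for x in product((0, 1), repeat=2):
    assert merged_rule[x] == output(hidden(x))
```

The substitution in step 2 is the key move the citing survey highlights: each intermediate variable (`h1`, `h2`) is eliminated by plugging in the rule that defines it, so the final rules mention only network inputs.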
“…Efforts to decompose neural networks into decision trees have recently extended work from the 1990s, which focused on shallow networks, to deep neural networks. One such method is DeepRED [21], which extends the CRED [22] algorithm (designed for shallow networks) to arbitrarily many hidden layers. DeepRED uses several strategies to simplify its decision trees: it applies RxREN [23] to prune unnecessary inputs, and it uses the C4.5 algorithm [24], a statistical method for building parsimonious decision trees.…”
Section: Explanations of Deep Network Processing
Confidence: 99%
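The C4.5 step mentioned above chooses decision-tree splits by gain ratio: information gain normalised by the split's own entropy, which penalises attributes that fragment the data into many small partitions. A minimal sketch of that criterion follows; the toy dataset and function names are made up for illustration and are not drawn from the DeepRED paper.

```python
# Illustrative computation of C4.5's split criterion (gain ratio)
# on a made-up binary dataset.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    # Partition the examples by the candidate attribute's value.
    parts = {}
    for row, y in zip(rows, labels):
        parts.setdefault(row[attr], []).append(y)
    n = len(labels)
    # Expected entropy after the split, weighted by partition size.
    remainder = sum(len(p) / n * entropy(p) for p in parts.values())
    info_gain = entropy(labels) - remainder
    # Split information: entropy of the partition sizes themselves.
    split_info = -sum(len(p) / n * log2(len(p) / n) for p in parts.values())
    return info_gain / split_info if split_info > 0 else 0.0

rows = [{"h1": 0, "h2": 0}, {"h1": 0, "h2": 1},
        {"h1": 1, "h2": 0}, {"h1": 1, "h2": 1}]
labels = [0, 0, 1, 1]  # the label copies h1, so h1 is the perfect split
best = max(("h1", "h2"), key=lambda a: gain_ratio(rows, labels, a))
```

Here `h1` separates the labels perfectly (gain ratio 1.0) while `h2` is uninformative (gain ratio 0.0), so C4.5 would split on `h1` first.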
“…Methods that combine elements of both approaches are called eclectic. Recent work exists for both the pedagogical (Augasta & Kathirvalavakumar, 2012) and decompositional (Zilke et al., 2016) approaches, the latter extending a decompositional approach to deep networks.…”
Section: Interpreting Embedding Models: Related Work
Confidence: 99%
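For contrast with the decompositional approach discussed above, the pedagogical family treats the trained network as a black-box oracle: it queries the network on inputs and fits rules to the observed input/output pairs, never looking at weights or neurons. A hedged sketch follows; the oracle function and the DNF-style rule representation are illustrative assumptions, not any published method.

```python
# Sketch of a pedagogical rule-extraction loop: query a black-box
# model, then fit a rule set to its input/output behaviour.
from itertools import product

def black_box(x):
    # Stand-in for a trained network; the learned function is hypothetical.
    x1, x2, x3 = x
    return int((x1 and x2) or x3)

# Sample the oracle exhaustively over the (small) input space and keep
# every input it labels positive; each positive input becomes one
# conjunctive rule in a DNF surrogate.
samples = list(product((0, 1), repeat=3))
rules = [x for x in samples if black_box(x) == 1]

def surrogate(x):
    return int(x in rules)

# On the sampled space the surrogate reproduces the oracle exactly:
assert all(surrogate(x) == black_box(x) for x in samples)
```

The design trade-off the quoted passage alludes to: pedagogical methods like this scale with the cost of querying (and generalise only as well as the sampling), whereas decompositional methods like DeepRED scale with network depth and width because they open the box layer by layer.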