2020
DOI: 10.48550/arxiv.2003.04675
Preprint
Towards Interpretable ANNs: An Exact Transformation to Multi-Class Multivariate Decision Trees

Abstract: Deep neural networks (DNNs) are commonly labelled as black boxes lacking interpretability, hindering humans' understanding of DNNs' behaviour. A need exists to generate a meaningful sequential logic for the production of a specific output. Decision trees exhibit better interpretability and expressive power due to their representation language and the existence of efficient algorithms to generate rules. Growing a decision tree based on the available data could produce larger than necessary trees or trees …
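The core idea behind such an exact transformation can be illustrated with a minimal sketch (this is an assumption-laden illustration of the general ReLU-to-linear-region principle, not the paper's specific algorithm): a one-hidden-layer ReLU network partitions input space by the hyperplanes w_i · x + b_i = 0, and each activation pattern selects one region in which the network is exactly linear. Reading the hyperplane tests as multivariate splits gives a tree path whose leaf reproduces the network's output exactly.

```python
import numpy as np

# Illustrative sketch only: a tiny random ReLU network, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal(3)   # hidden layer
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal(2)   # output layer

def network_class(x):
    """Class predicted by the ReLU network itself."""
    h = np.maximum(W1 @ x + b1, 0.0)          # ReLU activations
    return int(np.argmax(W2 @ h + b2))

def tree_class(x):
    """Class predicted via multivariate-split 'tree' traversal."""
    # Each hyperplane test w_i . x + b_i > 0 is one multivariate split;
    # the boolean pattern is the path taken through the tree.
    pattern = (W1 @ x + b1) > 0
    # Within the selected region the network is exactly linear:
    # h = pattern * (W1 x + b1), so the effective affine map is:
    W_eff = W2 @ (W1 * pattern[:, None])
    b_eff = W2 @ (b1 * pattern) + b2
    return int(np.argmax(W_eff @ x + b_eff))

x = rng.standard_normal(2)
assert network_class(x) == tree_class(x)      # exact agreement, not approximate
```

The agreement is exact because zeroing the inactive rows of W1 reproduces the ReLU's effect within that region; the leaf stores an affine classifier rather than a constant, which is what makes the resulting tree multivariate.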

Cited by 1 publication (1 citation statement)
References 42 publications
“…Early work largely focussed on multi-layer perceptrons (MLPs) with one or very few hidden layers and also on recurrent neural networks. Research has since grown into explaining 'deeper' neural networks of several to many layers, be these MLPs that are deep in this particular sense [26,27,28] or more advanced architectures such as LSTMs [29], Deep Belief Networks [30] or CNNs [16,17,18,19,20,21]. Remaining subsections only cover methods that extract explanations from CNNs.…”
Section: Rule Extraction From Neural Networks
confidence: 99%