2019
DOI: 10.1007/978-3-030-12385-7_19
Reverse Engineering Creativity into Interpretable Neural Networks

Cited by 3 publications (1 citation statement)
References 19 publications
“…This problem of interpretability and explainability, that is, how or why the DNN has made some set of connections, is not trivial, and it makes it difficult for researchers to understand what is being encoded in each neuron of each layer of the DNN (Towell and Shavlik, 1993; Liu X. et al., 2018; Kumar et al., 2020; Erasmus et al., 2021). The ability to understand some of these encodings can be important in applications such as medical decision making, law enforcement, and financial analysis (Horta et al., 2021), as well as when attempting to model and explain the cognitive system (Cichy and Kaiser, 2019; Oita, 2019; Monte-Serrat and Cattani, 2021), for example in tasks relating to background knowledge, which may further help researchers understand distributed knowledge encodings across layers and what the weight distributions actually mean in graphical form.…”
Section: Encoding Of Information To Network Layers and Graph Visual E...
Citation type: mentioning (confidence: 99%)
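The citation statement above refers to inspecting "what the weight distributions actually mean in graphical form." As a minimal sketch, not taken from the cited paper, the snippet below plots per-layer weight histograms for a toy fully connected network; the architecture, layer sizes, and plotting choices are illustrative assumptions only.

```python
# Minimal sketch (assumed example, not the paper's method): visualize
# per-layer weight distributions of a small feed-forward network as
# histograms, one simple graphical view of how weights are distributed.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Toy fully connected network; any trained model could be substituted here.
model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

# Collect the flattened weight matrices of the linear layers.
weight_layers = [
    (name, p.detach().cpu().numpy().ravel())
    for name, p in model.named_parameters()
    if name.endswith("weight")
]

# One histogram per layer.
fig, axes = plt.subplots(1, len(weight_layers),
                         figsize=(4 * len(weight_layers), 3))
for ax, (name, w) in zip(axes, weight_layers):
    ax.hist(w, bins=30)
    ax.set_title(f"layer {name}")
    ax.set_xlabel("weight value")
    ax.set_ylabel("count")
fig.tight_layout()
plt.show()
```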