2018
DOI: 10.1016/j.dsp.2017.10.011

Methods for interpreting and understanding deep neural networks

Abstract: This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. It introduces some recently proposed techniques of interpretation, along with theory, tricks and recommendations, to make most efficient use of these techniques on real data. It also discusses a number of practical applications.


Cited by 2,078 publications (1,457 citation statements)
References 42 publications

“…Activation maximization (Montavon et al.) identifies input patterns that lead to maximal activations relating to specific classes in the output layer (Berkes & Wiskott; Simonyan & Zisserman). This makes the visualization of prototypes of classes possible, and assesses which properties the model captures for classes (Erhan, Bengio, Courville, & Vincent).…”
Section: General Approaches of Explainable AI Models (mentioning)
confidence: 99%
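The activation maximization described in this citation can be illustrated as gradient ascent on the input. The following is a minimal PyTorch sketch under assumed placeholders (a generic classifier `model`, a target class index, and an image-shaped input); it is not the paper's own implementation.

```python
import torch

def activation_maximization(model, target_class, input_shape=(1, 3, 224, 224),
                            steps=200, lr=0.1, l2_weight=1e-4):
    """Gradient ascent on the input so that the logit of `target_class` becomes maximal."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)  # start from a neutral input
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # maximize the class logit; the l2 penalty keeps the prototype from exploding
        loss = -logits[0, target_class] + l2_weight * x.pow(2).sum()
        loss.backward()
        optimizer.step()
    return x.detach()  # the optimized input serves as a class prototype
```

The returned tensor can then be visualized as the class prototype the model has learned.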
“…The effect of the ℓ2-norm regularizer in the code space can instead be understood as encouraging codes that have high probability. High probability codes do not necessarily map to high density regions of the input space; for more details refer to the excellent tutorial given by Montavon et al.…”
Section: General Approaches of Explainable AI Models (mentioning)
confidence: 99%
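The code-space variant referred to above can be sketched by optimizing the latent code of a pretrained generator rather than the input itself, with the ℓ2 penalty applied to the code. This is a hedged sketch assuming hypothetical `generator` and `classifier` modules, not the cited authors' code.

```python
import torch

def activation_maximization_in_code_space(generator, classifier, target_class,
                                          code_dim=100, steps=200, lr=0.05,
                                          l2_weight=1e-3):
    """Optimize a latent code z so the generated input maximally activates a class.

    The l2 penalty on z favours codes with high probability under a standard-normal
    prior; as the quote notes, this does not guarantee high density in input space.
    """
    z = torch.randn(1, code_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)                    # decode the code into input space
        logits = classifier(x)
        loss = -logits[0, target_class] + l2_weight * z.pow(2).sum()
        loss.backward()
        optimizer.step()
    return generator(z).detach()
```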
“…To understand class-specific spectral characteristics in the EEG recordings, we analyzed band powers in five frequency ranges: delta (0-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), low beta (14-20 Hz), high beta (20-30 Hz) and low gamma.…”
Section: Visualizations of the Spectral Differences Between Normal … (mentioning)
confidence: 99%
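The band-power analysis described in this citation can be reproduced with a standard Welch spectral estimate. A minimal SciPy sketch follows, assuming a single-channel signal and a 250 Hz sampling rate; the low-gamma band edges are an assumption, since the quote does not give them.

```python
import numpy as np
from scipy.signal import welch

# Bands as quoted above; the low-gamma range (30-50 Hz) is an assumed placeholder.
BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 14),
         "low_beta": (14, 20), "high_beta": (20, 30), "low_gamma": (30, 50)}

def band_powers(eeg, fs=250.0):
    """Mean power spectral density per frequency band for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# Example: band powers of 10 s of synthetic noise sampled at 250 Hz.
powers = band_powers(np.random.randn(2500), fs=250.0)
```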
“…Deep learning models that use an attention mechanism might be more interpretable, since these models can highlight which parts of the recording were most important for the decoding decision. Other deep learning visualization methods like recent saliency map methods [27,28] to explain individual decisions or conditional generative adversarial networks [29,30] to understand what makes a recording pathological or normal might further improve the clinical benefit of deep learning methods that decode pathological EEG.…”
(mentioning)
confidence: 99%
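One of the simplest saliency-map methods alluded to in this citation is a plain input-gradient map. The sketch below is a generic PyTorch illustration (the `model`, input `x`, and class index are assumed placeholders), not the specific method of the cited works [27,28].

```python
import torch

def saliency_map(model, x, target_class):
    """Vanilla gradient saliency: |d score_c / d x| for every input element."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze(0)  # larger values = more influential input elements
```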
“…Another future direction would be to analyze the interpretability of NNS systems, specifically for recommender systems with a non-linear query mechanism, in terms of salient features that have led to the query result. This is in line with the research on “explaining learning machines”, i.e., answering the question which part of the data is responsible for specific decisions made by learning machines (Baehrens et al. 2010; Zeiler and Fergus 2014; Bach et al. 2015; Ribeiro et al. 2016; Montavon et al. 2017, 2018). This question is non-trivial when the learning machines are complex and non-linear.…”
Section: Results (mentioning)
confidence: 99%
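A very simple instance of attributing a decision to parts of the input, in the spirit of the "explaining learning machines" line of work cited above, is gradient × input. This is a hedged baseline sketch with assumed placeholders, not the specific method of any of the cited references.

```python
import torch

def gradient_times_input(model, x, target_class):
    """Per-feature relevance as (d score_c / d x) * x, a simple attribution baseline."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return (x.grad * x).detach().squeeze(0)
```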