2022
DOI: 10.1007/s10115-022-01756-8
Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond

Cited by 187 publications (69 citation statements)
References 97 publications
“…There has been work on interpreting deep CNNs to explore their internal reasoning (X. Li et al 2021). By applying available interpretation algorithms it would be possible to identify which parts of a scene are triggering the networks' decision.…”
Section: Discussion
confidence: 99%
“…Deep neural networks are considered 'black box' models so it is difficult to say how the networks were making decisions and to what degree they were recognising different potential drivers of loss. There has been work on interpreting deep CNNs to explore their internal reasoning (Li et al, 2021). By applying interpretation algorithms, it may be possible to identify which parts of a scene are triggering the networks' decision.…”
Section: Probing the Black Box
confidence: 99%
“…The field of interpretable AI is itself a major research area that is crucial to gaining a better understanding of black box ML models such as DNNs. Based on the nature of the ML model, available data, and interpretation strategy, interpretable AI methods have been categorized [ 131 ]. In future work, it is imperative to determine interpretable AI methods best suited for the medical diagnosis domain.…”
Section: Discussion
confidence: 99%
“…For interpretability, various studies have focused on the mathematical level to introduce explanations to neural networks (Li et al, 2022). Such effort is important but difficult.…”
Section: Interpretability
confidence: 99%