2022
DOI: 10.3390/stats5020026
Opening the Black Box: Bootstrapping Sensitivity Measures in Neural Networks for Interpretable Machine Learning

Abstract: Artificial neural networks are powerful tools for data analysis, particularly in the context of highly nonlinear regression models. However, their utility is critically limited due to the lack of interpretation of the model given its black-box nature. To partially address the problem, the paper focuses on the important problem of feature selection. It proposes and discusses a statistical test procedure for selecting a set of input variables that are relevant to the model while taking into account the multiple …

Cited by 5 publications (5 citation statements)
References 28 publications
“…However, significant steps still need to be taken, as the inherent “black box” nature of deep learning poses challenges in interpreting deep learning models. Anticipated efforts in enhancing the interpretability of the model are expected to refine its overall effectiveness and integration into climate research methodologies (Guidotti et al., 2018; La Rocca & Perna, 2022; Savage, 2022).…”
Section: Discussion
confidence: 99%
“…First, neural networks, as used here, have a reputation for being a black box algorithm and thus having a decision process that is hard to understand. Still, there are recent developments to make them more transparent (Guidotti et al., 2019; Rocca & Perna, 2022; Savage, 2022). Second, training a neural network while relying on a GPU creates several sources of randomness, and reproducibility depends on using the exact same settings as the authors of a framework (Feng & Hao, 2020; Scardapane & Wang, 2017).…”
Section: Discussion
confidence: 99%
“…First, neural networks, as used here, have a reputation for being a black box algorithm and thus having a decision process that is hard to understand. Still, there are recent developments to make them more transparent (Rocca & Perna, 2022; Savage, 2022).…”
Section: Limitations of Machine Learning
confidence: 99%