2022
DOI: 10.1007/978-3-658-40004-0
Transparency and Interpretability for Learned Representations of Artificial Neural Networks

Cited by 16 publications (18 citation statements)
References 0 publications

“…An ablation study, commonly applied in neuroscience research, is employed for artificial neural network performance analysis [44]. The objective is to investigate the change in classification accuracy with different numbers of neural network layers or different features.…”
Section: Ablation Study On Layer Depth (mentioning)
confidence: 99%
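
As a rough illustration of the layer-depth ablation this citing paper describes, the sketch below trains otherwise-identical MLP classifiers that differ only in their number of hidden layers and compares test accuracy. The synthetic dataset, layer width, and hyperparameters are illustrative assumptions, not details from the cited work.

```python
# Minimal layer-depth ablation sketch: train MLPs that differ only in
# depth and compare held-out accuracy. All settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for depth in (1, 2, 3, 4):
    # Same width per hidden layer; only the number of layers varies.
    clf = MLPClassifier(hidden_layer_sizes=(64,) * depth,
                        max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"{depth} hidden layer(s): accuracy = {clf.score(X_te, y_te):.3f}")
```

In a real study, each depth would typically be trained with several random seeds and the accuracies averaged, so that the depth effect is not confounded with initialization noise.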
“…Table 4 shows how the results are affected by altering different building blocks of our model [40]. We select the model with the best validation Dice score (mean of GGO and high opacity) for the final evaluation.…”
Section: Results (mentioning)
confidence: 99%
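
The selection criterion quoted above is the mean validation Dice score over two lesion classes. As a self-contained sketch (toy random masks stand in for real predictions, and the names ggo / high_opacity are assumptions for illustration), the per-class Dice coefficient and its two-class mean can be computed like this:

```python
# Minimal Dice-score sketch: per-class Dice on binary masks, then the
# mean over two classes, mirroring a "mean of GGO and high opacity".
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks standing in for per-class predictions on a validation case.
rng = np.random.default_rng(0)
pred_ggo, gt_ggo = rng.integers(0, 2, (64, 64)), rng.integers(0, 2, (64, 64))
pred_hi,  gt_hi  = rng.integers(0, 2, (64, 64)), rng.integers(0, 2, (64, 64))

mean_dice = (dice(pred_ggo, gt_ggo) + dice(pred_hi, gt_hi)) / 2
print(f"mean validation Dice (GGO, high opacity): {mean_dice:.3f}")
```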
“…In contemporary machine learning involving deep models [31,32], ablation analysis is frequently used to identify the role and/or importance of different features or neuron groups within a neural network. By selectively removing certain features or neurons and assessing how this affects the model's performance, researchers can identify which components are critical to the model's predictive power and which are less important.…”
Section: Methods (mentioning)
confidence: 99%
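
To make the neuron-group ablation concrete, here is a minimal sketch, assuming a small PyTorch MLP trained on a toy rule (nothing here comes from the cited papers), that zeroes one group of hidden units with a forward hook and reports the resulting accuracy change, the basic remove-and-re-measure loop the statement describes:

```python
# Minimal neuron-group ablation sketch: silence hidden units 0..15
# via a forward hook and compare accuracy against the intact model.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] + X[:, 1] > 0).long()   # simple learnable labelling rule

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                  # brief training pass
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def accuracy() -> float:
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

base = accuracy()

# Hook that zeroes one group of post-ReLU activations (units 0..15),
# selectively removing that neuron group from the computation.
def ablate(module, inputs, output):
    output = output.clone()
    output[:, :16] = 0.0
    return output

handle = model[1].register_forward_hook(ablate)
print(f"baseline accuracy:            {base:.3f}")
print(f"accuracy with units 0-15 off: {accuracy():.3f}")
handle.remove()
```

A sharper accuracy drop when a given group is silenced is taken as evidence that those neurons carry information the model relies on.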