2023
DOI: 10.1007/978-3-031-37731-0_19

Explainability of Image Semantic Segmentation Through SHAP Values

Abstract: The use of Deep Neural Networks in high-stakes applications is increasing significantly. However, their decisions are not straightforward for humans to understand, which may limit their adoption in critical applications. To address this issue, recent research has introduced explanation methods, typically for classification and captioning. Nevertheless, explainability methods still need to be developed for some tasks. This includes image segmentation, which is an essential component for many h…
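The paper's core idea can be illustrated with a short sketch: turn the segmentation output into a scalar score and attribute it to clusters of pixels with SHAP. The sketch below is an assumption-laden illustration, not the authors' code: `seg_model` is a hypothetical callable returning a per-pixel class map, superpixels come from scikit-image's SLIC, and the scalar being explained is the predicted coverage of `target_class`.

```python
import numpy as np
import shap
from skimage.segmentation import slic

def explain_segmentation(image, seg_model, target_class, n_segments=50):
    # Cluster pixels into superpixels; SHAP values are computed per cluster.
    segments = slic(image, n_segments=n_segments)
    seg_ids = np.unique(segments)
    baseline = image.mean(axis=(0, 1))  # mean colour used to "remove" a cluster

    def f(z):
        # z: (n_samples, n_clusters) binary matrix; 1 keeps a superpixel,
        # 0 replaces it with the baseline colour.
        scores = np.zeros(len(z))
        for i, row in enumerate(z):
            img = image.copy()
            for seg_id, keep in zip(seg_ids, row):
                if not keep:
                    img[segments == seg_id] = baseline
            pred = seg_model(img)  # hypothetical: returns an (H, W) class map
            scores[i] = (pred == target_class).mean()  # scalar to explain
        return scores

    # Background = all clusters masked out; explain the fully visible image.
    explainer = shap.KernelExplainer(f, np.zeros((1, len(seg_ids))))
    shap_values = explainer.shap_values(np.ones((1, len(seg_ids))), nsamples=200)
    return shap_values, segments
```

Reducing the mask to a scalar (here, class coverage) is what lets a classification-oriented explainer like KernelExplainer be reused for segmentation; other scalarizations (e.g., mean class probability over a region of interest) fit the same pattern.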


Cited by 9 publications (4 citation statements)
References 20 publications
“…not always improve the performance. In our case, the model reached the best similarity at class weights = [1, 20] for the non-lesion and lesion classes. This emphasizes the necessity of fine-tuning the class weights when employing the WCE loss, in order to select the best-performing hyperparameters for the problem at hand.…”
Section: B. Results and Discussion (mentioning)
confidence: 78%
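For context, the weighted cross-entropy (WCE) setup the citing authors tune can be written in a few lines. This is a hedged PyTorch sketch of the generic technique, with illustrative tensor shapes; it is not the cited study's code.

```python
import torch
import torch.nn as nn

# Class weights from the citing statement: 1 for non-lesion, 20 for lesion.
class_weights = torch.tensor([1.0, 20.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2, 128, 128)          # (batch, classes, H, W); shapes are illustrative
targets = torch.randint(0, 2, (4, 128, 128))  # per-pixel ground-truth labels
loss = criterion(logits, targets)             # errors on lesion pixels cost 20x more
```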
“…A number of these revamped models have found their place in disease-categorization networks, offering justification for the predictions made by such opaque DL models [19]. Nevertheless, the quest to augment transparency in segmentation networks, which suffer from the same "black-box" framework, is still in its infancy [20], [21]. Additionally, post-model interpretative analysis can be pivotal for researchers to discern whether the model is pinpointing relevant patterns or merely over-optimizing on irrelevant attributes of the training images.…”
Section: Lack of Explainability (mentioning)
confidence: 99%
“…SHAP can be implemented for semantic segmentation (e.g., Dardouillet et al., 2023); however, it computes relevance values for clusters of input grid points rather than for individual grid points. The same applies to LIME; moreover, to the best of our knowledge, no implementations of LIME for semantic segmentation exist.…”
Section: Introduction (mentioning)
confidence: 99%
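The granularity issue this citing statement raises, cluster-level rather than pixel-level attributions, is easy to see in code: a per-pixel heatmap is obtained only by broadcasting each cluster's single SHAP value over all of its pixels. A minimal sketch, reusing the hypothetical `segments` labelling from the earlier example:

```python
import numpy as np

def shap_to_pixel_map(shap_values, segments):
    """Broadcast one SHAP value per superpixel over that cluster's pixels."""
    values = np.asarray(shap_values).ravel()     # one relevance value per cluster
    heatmap = np.zeros(segments.shape, dtype=float)
    for j, seg_id in enumerate(np.unique(segments)):
        heatmap[segments == seg_id] = values[j]  # constant within each cluster
    return heatmap
```

The resulting map is piecewise constant, which is exactly the limitation noted: two pixels inside the same superpixel can never receive different relevance values.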