2022
DOI: 10.1007/978-3-031-12053-4_4
Revisiting the Shape-Bias of Deep Learning for Dermoscopic Skin Lesion Classification

Abstract: It is generally believed that the human visual system is biased towards the recognition of shapes rather than textures. This assumption has led to a growing body of work aiming to align deep models' decision-making processes with the fundamental properties of human vision. The reliance on shape features is primarily expected to improve the robustness of these models under covariate shift. In this paper, we revisit the significance of shape-biases for the classification of skin lesion images. Our analysis shows…
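The shape-bias the abstract refers to is commonly quantified on cue-conflict images, where an image's shape suggests one class and its texture another: shape-bias is the fraction of cue-following decisions that side with shape. A minimal sketch of that metric is below; the `shape_bias` function, the example class names, and the per-image labels are illustrative assumptions, not taken from the paper.

```python
def shape_bias(predictions, shape_labels, texture_labels):
    """Fraction of cue-following decisions that agree with the shape cue.

    Each cue-conflict image carries two labels: the class suggested by
    its shape and the class suggested by its texture. Predictions that
    match neither cue are ignored, as in the standard metric.
    """
    shape_hits = sum(p == s for p, s in zip(predictions, shape_labels))
    texture_hits = sum(p == t for p, t in zip(predictions, texture_labels))
    total = shape_hits + texture_hits
    return shape_hits / total if total else float("nan")

# Hypothetical example: five cue-conflict dermoscopic images.
preds    = ["melanoma", "nevus", "melanoma", "nevus", "melanoma"]
shapes   = ["melanoma", "nevus", "nevus",    "nevus", "melanoma"]
textures = ["nevus",    "bcc",   "melanoma", "bcc",   "bcc"]
print(shape_bias(preds, shapes, textures))  # 4 shape hits vs 1 texture hit -> 0.8
```

A score near 1.0 indicates shape-driven decisions, near 0.0 texture-driven ones; the paper's analysis revisits whether pushing this value higher actually helps for dermoscopic images.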

Cited by 3 publications (1 citation statement)
References 29 publications
“…For instance, a DL-based document classifier that categorizes applicant resumes as acceptable or otherwise could potentially learn to discriminate against women or minority groups. For these reasons, model interpretability is crucial, as it can help identify biases in the data and provide insights into the model's decision-making process, ultimately enabling their safe deployment [13], [14].…”
Section: Introduction
confidence: 99%