2022
DOI: 10.1016/j.compbiomed.2022.105939
An improved transformer network for skin cancer classification

Cited by 101 publications (47 citation statements)
References 24 publications
“…Xin et al. [176] explore the Vision Transformer (ViT) and apply it to skin lesion classification using multi-scale, overlapping sliding windows. Recent decades have seen significant advances in the early detection and treatment of melanoma, whose prevalence is rising annually worldwide and which poses a serious risk to human health.…”
Section: Deep Learning for Medical Image Analysis and CAD
confidence: 99%
“…In the last three years, researchers have begun to apply Transformer-based DL approaches to pattern recognition of tumor images and tumor classification. [121–123] One vision transformer (ViT) model, by Xin et al. [121], achieved superior performance in skin cancer classification on a benchmark dermatoscopy data set.…”
Section: Artificial Intelligence
confidence: 99%
“…After the ViT attracted researchers' attention with its strong performance, work on skin cancer classification/segmentation based on the ViT model appeared in 2022. In [30], researchers proposed a new method for the image feature embedding block of the original ViT model, combined with a contrastive learning approach. [32] conducted experiments addressing the bottleneck of the original ViT with an improved position-encoding method.…”
Section: Related Work
confidence: 99%