2022
DOI: 10.1016/j.media.2022.102357
Fully transformer network for skin lesion analysis

Cited by 75 publications (26 citation statements)
References 28 publications
“…Recent studies have proposed numerous advances to improve the classification accuracy of skin lesions. On the architectural front, advanced methods include wavelet domain CNN models [32], [33], synergic models that contain an ensemble of CNNs [34], multi-tasking models that leverage dermoscopy images along with their segmentation features [16], attention-gated CNN or self-attention transformer models [9], [35], [36]. On the algorithmic front, proposed techniques include domain transfer of pre-trained feature sets [37], augmentation via GAN-based synthetic sample generation [38], [39], and combination of multiple imaging modalities and patient metadata [40].…”
Section: Related Work (mentioning)
Confidence: 99%
“…The attention mechanism is a biomimetic cognitive method used in diverse computer vision tasks such as image classification (Woo et al., 2018; Hu et al., 2018; Dosovitskiy et al., 2020; Hou et al., 2021; Mehta and Rastegari, 2021) and image segmentation (Mehta and Rastegari, 2021; He et al., 2022). An example of an attention network is SENet, which obtains global representations by global average pooling and channel-wise feature responses via squeeze and excitation (Hu et al., 2018).…”
Section: Attention Mechanisms (mentioning)
Confidence: 99%
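The squeeze-and-excitation operation referenced in the statement above can be summarized in a few lines. The following is a minimal PyTorch sketch of an SE block in the spirit of Hu et al. (2018): global average pooling "squeezes" each channel to a scalar, two fully connected layers "excite" a per-channel gate in (0, 1), and the input feature map is reweighted channel-wise. Layer names and the reduction ratio are illustrative assumptions, not taken from any of the cited papers' code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # excitation: per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # channel-wise reweighting of the input

# usage: output keeps the input shape, only channel weights change
se = SEBlock(64)
y = se(torch.randn(2, 64, 32, 32))                    # -> (2, 64, 32, 32)
```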
“…However, the standard transformer ignores the inductive biases (e.g., translation equivariance and locality) inherent to CNNs, which leads to poor performance when training data are insufficient (Dosovitskiy et al., 2020). Recently, He et al. (2022) introduced the first purely ViT-based network to analyse skin lesions, with acceptable results on segmentation and classification. Although their pyramid pooling in the multi-head self-attention (He et al., 2022) has linear computational complexity, the model remains sizeable at 8M to 19M parameters.…”
Section: Vision Transformer (mentioning)
Confidence: 99%
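To illustrate why pooling inside self-attention yields linear complexity, here is a simplified single-head sketch: keys and values are spatially downsampled to a small fixed grid before attention, so the attention matrix grows linearly with the number of query tokens rather than quadratically. This is not the exact pyramid-pooling multi-head self-attention of He et al. (2022); the single pooling scale, pool size, and layer layout are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class PooledSelfAttention(nn.Module):
    def __init__(self, dim: int, pool_size: int = 7):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.pool = nn.AdaptiveAvgPool2d(pool_size)    # shrink keys/values to pool_size**2 tokens
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) with N == h * w tokens from an h x w feature map
        b, n, c = x.shape
        q = self.q(x)                                              # (B, N, C)
        pooled = self.pool(x.transpose(1, 2).reshape(b, c, h, w))  # (B, C, p, p)
        pooled = pooled.flatten(2).transpose(1, 2)                 # (B, p*p, C)
        k, v = self.kv(pooled).chunk(2, dim=-1)                    # each (B, p*p, C)
        attn = (q @ k.transpose(-2, -1)) * self.scale              # (B, N, p*p): linear in N
        return attn.softmax(dim=-1) @ v                            # (B, N, C)

# usage: 14*14 = 196 query tokens attend to only 7*7 = 49 pooled tokens
attn = PooledSelfAttention(dim=64)
out = attn(torch.randn(2, 196, 64), h=14, w=14)                    # -> (2, 196, 64)
```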