2021
DOI: 10.1007/s12652-021-02933-3
Serial attention network for skin lesion segmentation

Cited by 21 publications (17 citation statements, all classified as mentioning). References 38 publications.
“…Their technique produced promising results for lesion classification. Ren et al. [23] presented a fusion mechanism for the segmentation of a skin lesion. The spatial attention and channel attention modules extracted the information from channels of skin images.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
“…The attention mechanisms introduced across the whole network include channel, spatial, and LSTM adaptive attention mechanisms. First, skin lesion regions vary in color, size, and shape; some light lesions are highly similar to normal skin, and the boundaries of deeper lesion regions are complex. To segment them with sufficiently dense feature resolution, Ren et al. [83] used a serial attention network (SANet) to segment skin lesion regions, introducing channel attention followed by spatial attention: channel attention aggregates global, local, and inter-channel information by exploiting the interdependencies between channel mappings to improve the representation of semantic features, while spatial attention selects global contextual information and contextual relationships to make the semantic features more compact and consistent. SANet captures inter-pixel and inter-channel feature dependencies and achieves a 0.7692 average Jaccard index on the ISIC 2017 dataset. Second, it is difficult for image caption generation tasks to correctly extract global image features and attend to an image region for each word without ignoring some words. Deng et al. [84] used DenseNet to extract the features required for LSTM sentence generation and introduced an LSTM adaptive attention mechanism to ease the problem of forcing correspondence between text and image regions; experiments on the Flickr30k and COCO datasets showed that the flexibility of caption generation was improved, with significant gains on the BLEU and METEOR evaluation criteria.…”
Section: Development Of DenseNet (citation type: mentioning, confidence: 99%)
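The serial channel-then-spatial attention described in this excerpt can be sketched in a few lines of PyTorch. This is a minimal illustration that assumes squeeze-and-excite style channel attention and CBAM-style spatial attention; the module names, reduction ratio, and kernel size are assumptions made for illustration, not details taken from the SANet paper.

# Hedged sketch: a serial channel-then-spatial attention block in PyTorch.
# NOT the authors' exact SANet; reduction ratio and kernel size are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweights channels from globally pooled statistics (squeeze-and-excite style)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # (B, C, H, W) -> (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mlp(x)                  # channel-wise reweighting


class SpatialAttention(nn.Module):
    """Highlights informative spatial locations from channel-pooled maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)        # (B, 1, H, W)
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                # spatial reweighting


class SerialAttention(nn.Module):
    """Channel attention applied first, then spatial attention, in series."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.spatial_attn = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial_attn(self.channel_attn(x))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)         # dummy encoder feature map
    print(SerialAttention(64)(feats).shape)    # torch.Size([2, 64, 32, 32])

Applying the channel branch before the spatial branch means the spatial attention operates on already reweighted channel maps, which matches the "channel first, followed by spatial" ordering described in the excerpt above.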
“…It consists of 10,015 dermoscopic images which are released as a training set for academic machine learning purposes and are publicly available through the ISIC archive [120]. [Flattened dataset table from the citing survey; recoverable rows (dataset [ref], images, type, classes, access, used in): unnamed [103], 12,500, D, 7, P, [24], [29], [48], [58], [59], [104]-[107]; ISIC 2017 [108], ~2,000, D, 3, P, [46], [85], [104], [109]; ISBI 2016 [110], 1,279, D, 2, P, [28], [76], [77], [90], [101], [111]-[113]; ISIC Archive (2018) [114], 23,665, D, 7, P, [44], [49], [63], [115]-[119]; HAM 10000 [120], 10,015.] The interactive atlas of dermoscopy [132] (Atlas) dataset has 1,011 dermoscopic images (252 melanoma and 759 nevi cases), with 7-point checklist criteria. There are also 1,011 clinical color images corresponding to the dermoscopic images.…”
Section: Framework Year Features References (citation type: mentioning, confidence: 99%)
“…Moreover, transfer learning is more efficient at discriminating between similar lesions, making it a first choice [161]. The following papers in the literature we surveyed used transfer learning: [25], [26], [28], [30], [33]-[39], [41], [42], [46], [52], [58], [61], [62], [64], [66]-[68], [70]-[73], [75], [76], [77], [85]-[87], [92], [102], [112], [113], [122], [124], [126], [127], [129], [141], [151], [152], [158], [159], [162]-[179]. Transfer learning transfers the parameters of an already trained model (the pre-trained model) to a new model to help train it.…”
Section: B Transfer Learning (citation type: mentioning, confidence: 99%)
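The transfer-learning workflow summarized at the end of this excerpt (copying the parameters of an already trained model into a new model and fine-tuning it on the target task) can be illustrated with a short, hedged sketch. The ResNet-18 backbone, the 7-class head, and the freeze-the-backbone strategy below are illustrative assumptions, not details drawn from the cited survey.

# Hedged sketch: transfer learning for skin lesion classification in PyTorch.
# Backbone, class count, and freezing strategy are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # e.g. HAM10000-style diagnostic categories (assumption)

# 1) Start from a model pre-trained on ImageNet (the "pre-trained model").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2) Optionally freeze the transferred parameters so only the new head trains.
for param in model.parameters():
    param.requires_grad = False

# 3) Replace the final classifier with a new head for the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# 4) Fine-tune: optimize only the parameters that still require gradients.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# Minimal training step with dummy data, just to show the loop structure.
images = torch.randn(4, 3, 224, 224)          # dummy batch of dermoscopic images
labels = torch.randint(0, NUM_CLASSES, (4,))  # dummy labels
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

In practice one can also unfreeze the backbone after a few epochs and continue fine-tuning with a smaller learning rate; the sketch only shows the simplest "transfer the parameters, train a new head" variant described in the excerpt.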