2023
DOI: 10.1016/j.aej.2023.02.039

MSA-Net: Multiscale spatial attention network for medical image segmentation

Cited by 14 publications (4 citation statements)
References 37 publications
“…The detailed structure of attention calculation and MLP is shown in Figure 3. As in [12], [13], [14], attention is calculated by the following formula…”
Section: Attention Connection Module
confidence: 99%
“…As Transformers continue to perform well in computer vision, an increasing number of medical image segmentation studies are adopting self-attention mechanisms to address related issues [11–18]. These methods mainly focus on fusing self-attention mechanisms and convolutional operations to improve the accuracy and efficiency of medical image segmentation.…”
Section: Introduction
confidence: 99%
“…As Transformers continue to perform well in computer vision, an increasing number of medical image segmentation studies are adopting self-attention mechanisms to address related issues [11]–[18]. These methods mainly focus on fusing self-attention mechanisms and convolutional operations to improve the accuracy and efficiency of medical image segmentation. The self-attention mechanism is one of the most critical components in the Transformer, which can better capture long-distance dependency relationships.…”
confidence: 99%
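The citing passages above refer to the standard Transformer self-attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. As a hedged illustration of that mechanism (not the exact formulation used in MSA-Net; the function and array names are illustrative), a minimal NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k) queries; K: (n_k, d_k) keys; V: (n_k, d_v) values.
    Returns the attended output and the attention weight matrix.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights
```

Each output row is a convex combination of the value rows, which is what lets self-attention aggregate information across arbitrarily distant positions — the "long-distance dependency" property the citing work highlights.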
“…Currently, the concept of visual spatial attention mechanism is applied in medical image analysis to detect representative features from images with multiple lesions, noise, and artifacts [23], [24]. By aggregating different levels of features, the attention model shows strong capability for disease classification.…”
Section: Introduction
confidence: 99%
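The spatial-attention idea mentioned above can be sketched in a CBAM-style form: pool the feature map across channels, derive a per-pixel weight map, and rescale every channel by it. This is an illustrative approximation only (the learned 7×7 convolution of CBAM is replaced here by a simple sum of the pooled maps), not the module used in MSA-Net:

```python
import numpy as np

def spatial_attention(feature_map):
    """CBAM-style spatial attention over a (C, H, W) feature map.

    Pools across the channel axis (mean and max), then produces an
    (H, W) attention map in (0, 1) that reweights every channel.
    """
    avg_pool = feature_map.mean(axis=0)       # (H, W) average over channels
    max_pool = feature_map.max(axis=0)        # (H, W) max over channels
    combined = avg_pool + max_pool            # stand-in for the learned conv
    attn = 1.0 / (1.0 + np.exp(-combined))    # sigmoid -> weights in (0, 1)
    return feature_map * attn[None, :, :], attn
```

Because the weight map is shared across channels, salient spatial locations (e.g. lesion regions) are amplified in every feature channel while background and artifact responses are suppressed.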