2022
DOI: 10.1088/1361-6560/ac628a
Category guided attention network for brain tumor segmentation in MRI

Abstract: Objective: Magnetic resonance imaging (MRI) has been widely used for the analysis and diagnosis of brain diseases. Accurate and automatic brain tumor segmentation is of paramount importance for radiation treatment. However, low tissue contrast in tumor regions makes it a challenging task. Approach: We propose a novel segmentation network named Category Guided Attention U-Net (CGA U-Net). In this model, we design a Supervised Attention Module (SAM) based on the attention mechanism, which can capture more accur…

Cited by 11 publications
(6 citation statements)
References 47 publications
“…Attention mechanisms can help models focus on the relevant part of the input, and are widely used in tasks including machine translation, classification, semantic segmentation, recognition, and image generation (Li et al 2022). Oktay et al (2018) combined an attention gate with U-Net for CT image segmentation of pancreatic cancer, improving the prediction performance of U-Net while maintaining computational efficiency.…”
Section: Attention Mechanism
confidence: 99%
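The attention gate described above can be sketched in a few lines. This is a minimal NumPy illustration of the additive gating idea from Oktay et al (2018), not the paper's exact implementation: all weight matrices and shapes here are hypothetical placeholders.

```python
import numpy as np

def attention_gate(skip, gate, w_x, w_g, psi):
    """Additive attention gate sketch: skip-connection features are
    rescaled by coefficients computed from a coarser gating signal.
    Weights (w_x, w_g, psi) are illustrative placeholders."""
    # Project skip and gate features into a joint space and combine.
    q = np.maximum(0.0, skip @ w_x + gate @ w_g)   # ReLU, shape (N, d_int)
    # Attention coefficients in (0, 1) via a sigmoid.
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))       # shape (N, 1)
    # Rescale the skip connection element-wise.
    return skip * alpha

# Toy example: 4 spatial positions, 8-channel skip and gate features.
rng = np.random.default_rng(0)
skip = rng.standard_normal((4, 8))
gate = rng.standard_normal((4, 8))
w_x = rng.standard_normal((8, 4))
w_g = rng.standard_normal((8, 4))
psi = rng.standard_normal((4, 1))
out = attention_gate(skip, gate, w_x, w_g, psi)
print(out.shape)  # (4, 8)
```

Because the coefficients lie in (0, 1), the gate can only attenuate skip features, never amplify them, which is how irrelevant background regions are suppressed.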
“…A^{c→c} and A^{p→p} perform the … After that, we reverse the enhanced sequence back to class tokens and patch tokens according to the order of concatenation. Moreover, we introduce an auxiliary Euclidean distance loss L_EU (Li et al 2022) to maximize the distance between the enhanced class tokens, which can further improve the feature discrepancy. The enhanced patch tokens are reshaped into 2D features and upsampled to the original size of H × W × D. To retain the source information, the input X_intra, reformed by a 1×1 convolutional layer, is also combined to obtain the final output X_inter ∈ ℝ^{H×W×D}.…”
Section: Class Tokens (G)
confidence: 99%
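The auxiliary loss L_EU above pushes the enhanced class tokens apart. A minimal sketch, assuming (the exact form in the cited work may differ) that the loss is the negative mean pairwise Euclidean distance, so that minimizing it maximizes inter-token separation:

```python
import numpy as np

def euclidean_separation_loss(class_tokens):
    """Sketch of an auxiliary separation loss: the negative mean pairwise
    Euclidean distance between token embeddings. Minimizing this value
    drives the tokens apart, increasing feature discrepancy."""
    n = class_tokens.shape[0]
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(class_tokens[i] - class_tokens[j]))
    return -float(np.mean(dists))

# Well-separated tokens yield a lower (more negative) loss than clustered ones.
far = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
near = far * 0.1
print(euclidean_separation_loss(far) < euclidean_separation_loss(near))  # True
```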
“…Specifically, there are two types of attention methods: channel attention and spatial attention. Recently, integrating channel attention, spatial attention, or both into convolution blocks has attracted much interest, achieving significant performance improvements (Woo et al 2018, Goceri 2020, Wang et al 2020, Hou et al 2021, Qin et al 2021, Zeng et al 2021, Li et al 2022). One representative channel attention method is the squeeze-and-excitation (SE) module, which can effectively improve performance by selectively capturing sophisticated channel correlations.…”
Section: Attention Mechanism
confidence: 99%
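The squeeze-and-excitation idea mentioned above has three steps: squeeze (global average pooling per channel), excitation (a bottleneck MLP ending in a sigmoid), and scale (reweighting each channel). A minimal NumPy sketch, with illustrative placeholder weights rather than a trained model:

```python
import numpy as np

def se_module(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention, sketched in NumPy.
    feature_map has shape (C, H, W); w1 and w2 are the two FC layers
    of the excitation bottleneck (shapes are illustrative)."""
    # Squeeze: global average pooling collapses each channel to a scalar.
    z = feature_map.mean(axis=(1, 2))                 # (C,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid gates in (0, 1).
    s = np.maximum(0.0, z @ w1)                       # (C // r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))               # (C,)
    # Scale: reweight each channel by its learned gate.
    return feature_map * s[:, None, None]

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 2))   # reduction ratio r = 4
w2 = rng.standard_normal((2, 8))
y = se_module(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

The bottleneck (reducing C channels to C/r before expanding back) keeps the extra parameter cost small, which is why SE blocks can be dropped into existing convolution blocks cheaply.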