2022
DOI: 10.1007/978-3-031-16443-9_20
Attentive Symmetric Autoencoder for Brain MRI Segmentation

Cited by 10 publications (8 citation statements)
References 17 publications
“…In addition, sensitivity and accuracy results are also compromised when using complementary modalities, but specificity seems insensitive to MRI modality selection. Compared to the baseline results from the classic U-Net, both Attention U-Net and Cascaded U-Net models obtain higher overall Dice coefficients for all three segmentation targets, which are in line with the state-of-the-art segmentation performance reported in the recent BraTS Challenges. 37,59 The sensitivity, specificity, and accuracy results from these two models are comparable to the classic U-Net baseline results.…”
Section: Results (supporting)
confidence: 79%
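The statement above compares segmentation models on four metrics: Dice coefficient, sensitivity, specificity, and accuracy. As a reference for how these metrics relate, here is a minimal NumPy sketch (illustrative only, not code from the cited papers) that computes all four from a predicted and a ground-truth binary mask.

```python
# Minimal sketch of the four evaluation metrics compared in the statement
# above; not taken from the cited papers.
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute Dice, sensitivity, specificity, and accuracy for boolean masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)                 # true positive rate
    specificity = tn / (tn + fp + eps)                 # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    return {"dice": dice, "sensitivity": sensitivity,
            "specificity": specificity, "accuracy": accuracy}

# Example on two random 3D masks (stand-ins for a prediction and ground truth)
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
target = rng.random((64, 64, 64)) > 0.5
print(segmentation_metrics(pred, target))
```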
“…Compared to the baseline results from the classic U-Net, both Attention U-Net and Cascaded U-Net models obtain higher overall Dice coefficients for all three segmentation targets, which are in line with the state-of-the-art segmentation performance reported in the recent BraTS Challenges. 37,59 The sensitivity, specificity, and accuracy results from these two models are comparable to the classic U-Net baseline results. Figure 7(a)-(c) presents the $\mu_k^t$ as a function of t for ET, TC, and WT segmentation, respectively.…”
Section: Figure 4a Provides An Example Of Four Image Flows (mentioning)
confidence: 78%
“…Subsequently, I and W are transformed into patches represented as and , respectively. Here, n signifies the quantity of patches, and the patch size is configured at 8, a choice consistent with previous studies (35). This configuration leads to n = 16 × 16 × 16, aligning with the concept of vision transformers (12) splitting the 2D image into 16 × 16 tokens.…”
Section: Methods (mentioning)
confidence: 99%
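The patch arithmetic in this statement can be made concrete: with a patch size of 8 and n = 16 × 16 × 16, the implied input volume is 128 × 128 × 128 voxels (an assumption inferred from 8 × 16 = 128; the actual crop size is not stated here). A minimal PyTorch sketch of that partition:

```python
# Sketch of the 3D patch partition described above: a 128x128x128 volume
# (assumed size) split into non-overlapping 8x8x8 cubes gives
# 16 x 16 x 16 = 4096 patch tokens of 512 voxels each.
import torch

volume = torch.randn(1, 1, 128, 128, 128)      # (batch, channel, D, H, W)
patch = 8

# unfold each spatial dimension into (num_blocks, patch) slices
patches = (volume
           .unfold(2, patch, patch)
           .unfold(3, patch, patch)
           .unfold(4, patch, patch))             # (1, 1, 16, 16, 16, 8, 8, 8)
n = patches.shape[2] * patches.shape[3] * patches.shape[4]
tokens = patches.reshape(1, n, patch ** 3)       # (1, 4096, 512)
print(n, tokens.shape)                           # 4096 torch.Size([1, 4096, 512])
```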
“…We adopt a shifted window vision transformer, known as SW-ViT (35), as the transformer encoder in API-MAE. As shown in Figures 3C, D, the multi-head self-attention (MSA) in the original transformer block is replaced with linear window-based multi-head self-attention (LW-MSA) and shifted linear window-based multi-head self-attention (SLW-MSA) in the Swin transformer block.…”
Section: Methods (mentioning)
confidence: 99%
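This statement describes swapping global MSA for window-based and shifted-window attention in a Swin-style block. The sketch below illustrates only the window / shifted-window alternation, using ordinary softmax attention inside each window; the cited work uses linear attention variants (LW-MSA / SLW-MSA), and a full Swin block also adds attention masks for shifted windows and relative position biases, all omitted here.

```python
# Simplified 2D window / shifted-window attention blocks. Illustrative only:
# standard softmax attention per window, no shift masks, no relative
# position bias, and not the linear attention used in the cited paper.
import torch
import torch.nn as nn

class WindowAttentionBlock(nn.Module):
    def __init__(self, dim: int, window: int, heads: int, shift: bool):
        super().__init__()
        self.window, self.shift = window, shift
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, H, W, dim) feature map
        b, h, w, d = x.shape
        s = self.window // 2 if self.shift else 0
        if s:
            x = torch.roll(x, shifts=(-s, -s), dims=(1, 2))     # shift the grid
        # partition into non-overlapping (window x window) windows
        x = x.view(b, h // self.window, self.window, w // self.window, self.window, d)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, self.window * self.window, d)
        y = self.norm(x)
        y, _ = self.attn(y, y, y)        # self-attention within each window
        x = x + y                        # residual connection
        # merge windows back into the (H, W) grid
        x = x.view(b, h // self.window, w // self.window, self.window, self.window, d)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(b, h, w, d)
        if s:
            x = torch.roll(x, shifts=(s, s), dims=(1, 2))       # undo the shift
        return x

# Two consecutive blocks: regular windows, then shifted windows.
feat = torch.randn(2, 16, 16, 96)
block_w  = WindowAttentionBlock(dim=96, window=4, heads=3, shift=False)
block_sw = WindowAttentionBlock(dim=96, window=4, heads=3, shift=True)
print(block_sw(block_w(feat)).shape)     # torch.Size([2, 16, 16, 96])
```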