nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer
2023 | DOI: 10.1109/tip.2023.3293771

Cited by 140 publications (34 citation statements)
References 23 publications
“…We compared our full two-step segmentation pipeline with several standalone state-of-the-art segmentation algorithms, nnUNetv2 [54], Swin UNETR [46], and nnFormer [68]. For extensive comparison, we obtained 15 additional test cases, and compared performance of all models on a total of 30 test cases.…”
Section: Results (mentioning)
confidence: 99%
“…To demonstrate the effectiveness of the proposed method, it was compared with multiple segmentation networks, TransUNet [24], HiFormer [25], nnUNet [13], and nnFormer [26], on the same segmentation task. The segmentation results of these networks on CT images of gastric adenocarcinoma delineated by physicians are shown in Table 1.…”
Section: Results (mentioning)
confidence: 99%
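The excerpt above compares several networks on a shared task but does not name its evaluation metric. Volumetric segmentation comparisons such as Table 1 are most often scored with the Dice similarity coefficient; the sketch below is illustrative under that assumption, not the cited paper's actual evaluation code, and the function and variable names are hypothetical.

```python
# Hedged sketch: the excerpt does not name its metric, but segmentation
# comparisons are typically reported with the Dice similarity coefficient.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

# usage (hypothetical masks): score = dice_coefficient(model_mask, physician_mask)
```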
“…Li et al. [28] proposed adaptive tokens to model global context information and reduce computational complexity. Zhou et al. [29] introduced a combination of self-attention mechanisms and interleaved convolutions, exploiting local and global self-attention to learn spatial features. Lee et al. [30] set tokens of different sizes and fed them into the Transformer through multiple paths.…”
Section: Vision Transformer-Based Segmentation Methods (mentioning)
confidence: 99%
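The local/global self-attention idea described in the last excerpt (Zhou et al. [29], i.e. nnFormer) can be sketched as follows: windowed attention captures fine local structure within small 3D blocks, while attention over pooled tokens supplies global context. Everything below (the module name, window and pooling sizes, and the residual combination) is an illustrative assumption rather than the paper's exact architecture.

```python
# Minimal sketch of interleaved local (windowed) and global (pooled) 3D
# self-attention. Shapes, window/pool sizes, and the residual sum are
# illustrative assumptions, not the cited paper's actual design.
import torch
import torch.nn as nn

class LocalGlobalAttention3D(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, window: int = 4, pool: int = 4):
        super().__init__()
        self.window = window
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pool = nn.AvgPool3d(pool)  # coarse tokens for global context

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W); assumes D, H, W divisible by window and pool
        B, C, D, H, W = x.shape
        w = self.window
        # local attention: each w**3 window attends only within itself
        xl = x.reshape(B, C, D // w, w, H // w, w, W // w, w)
        xl = xl.permute(0, 2, 4, 6, 3, 5, 7, 1).reshape(-1, w ** 3, C)
        xl, _ = self.local_attn(xl, xl, xl)
        xl = xl.reshape(B, D // w, H // w, W // w, w, w, w, C)
        xl = xl.permute(0, 7, 1, 4, 2, 5, 3, 6).reshape(B, C, D, H, W)
        # global attention: every voxel queries a pooled, coarse token set
        q = x.flatten(2).transpose(1, 2)              # (B, D*H*W, C)
        kv = self.pool(x).flatten(2).transpose(1, 2)  # (B, n_coarse, C)
        xg, _ = self.global_attn(q, kv, kv)
        xg = xg.transpose(1, 2).reshape(B, C, D, H, W)
        return x + xl + xg  # residual combination of local and global context

# usage: feats = LocalGlobalAttention3D(dim=32)(torch.randn(1, 32, 16, 16, 16))
```

Pooling the key/value tokens keeps the global branch affordable: attention cost scales with the product of query and key counts, so attending to a 4x-downsampled volume is 64x cheaper than full voxel-to-voxel attention.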