2022
DOI: 10.48550/arxiv.2203.02430
Preprint

Characterizing Renal Structures with 3D Block Aggregate Transformers

Abstract: Efficiently quantifying renal structures can provide distinct spatial context and facilitate biomarker discovery for kidney morphology. However, developing and evaluating transformer models to segment the renal cortex, medulla, and collecting system remains challenging due to data inefficiency. Inspired by the hierarchical structure of vision transformers, we propose a novel method using a 3D block aggregation transformer for segmenting kidney components on contrast-enhanced CT scans. We construct the fir…

Cited by 1 publication (1 citation statement) · References 29 publications
“…Hierarchical ViT architectures introduce CNN-like properties into the Transformers as they compute local attention with shifted windows, starting from small-sized patches and gradually merging neighboring patches in the subsequent layers. To reduce the design complexity of traditional hierarchical ViT, a 3-D U-shape model inspired by nested hierarchical Transformers [122] exploited the idea of global SA within smaller nonoverlapping 3-D blocks [123]. Cross-block SA communication was achieved by hierarchically nesting these Transformers and connecting them with a specific aggregation function.…”
Section: Medical Transformers
confidence: 99%
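The mechanism described in the citation statement, global self-attention inside small non-overlapping 3D blocks, followed by aggregation of neighboring blocks into a coarser hierarchy level, can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea, not the paper's implementation: the identity Q/K/V projections and the mean-pooling aggregation function are simplifications standing in for learned layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block_partition(vol, block):
    # Split a (D, H, W, C) volume into non-overlapping 3D blocks,
    # returning (num_blocks, voxels_per_block, C) token sequences.
    D, H, W, C = vol.shape
    bd, bh, bw = block
    v = vol.reshape(D // bd, bd, H // bh, bh, W // bw, bw, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)  # grid dims first, block dims second
    return v.reshape(-1, bd * bh * bw, C)

def local_self_attention(tokens):
    # Global self-attention *within* each block, computed independently
    # per block; identity projections stand in for learned Q/K/V.
    q = k = v = tokens
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(tokens.shape[-1]))
    return attn @ v

def aggregate_blocks(blocks, grid, factor=2):
    # Toy aggregation function: merge factor^3 neighboring blocks by
    # mean pooling, producing the next (coarser) hierarchy level.
    gd, gh, gw = grid
    _, N, C = blocks.shape
    b = blocks.reshape(gd // factor, factor, gh // factor, factor,
                       gw // factor, factor, N, C)
    return b.mean(axis=(1, 3, 5)).reshape(-1, N, C)
```

Running `block_partition` on an 8x8x8x16 volume with 4x4x4 blocks yields a 2x2x2 grid of eight 64-token blocks; attention is then computed per block, and one aggregation step merges all eight neighbors into a single coarser block, which is how cross-block communication arises in a nested hierarchy.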