2023
DOI: 10.1016/j.engappai.2023.106634
DPCTN: Dual path context-aware transformer network for medical image segmentation

Cited by 14 publications (2 citation statements)
References 37 publications
“…CT-Net [45] utilized an asymmetric asynchronous branch parallel structure to efficiently extract local and global representations while reducing unnecessary computational costs. DPCTN [46] combined a CNN and a Transformer in a dual-branch fusion design. To reduce the information loss incurred during pooling, DPCTN adopted a three-branch transposed self-attention module, which significantly improves segmentation performance.…”
Section: Transformer
confidence: 99%
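The transposed self-attention mentioned in the statement above computes attention across channels rather than spatial positions, which is how such modules limit the information lost to aggressive spatial pooling. Below is a minimal PyTorch sketch of that general idea only; the module name, the three 1x1-convolution Q/K/V branches, the head count, and the learnable temperature are illustrative assumptions, not DPCTN's actual implementation from [46].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransposedSelfAttention(nn.Module):
    """Channel-wise ("transposed") self-attention over a 2D feature map.

    Query, key and value come from three separate 1x1-conv branches
    (hypothetical naming); attention is taken over the channel axis,
    giving a C x C map whose cost is linear in the number of pixels.
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        # three projection branches (assumed, for illustration)
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        # learnable per-head temperature, common in channel-attention designs
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        d = c // self.num_heads
        q = self.to_q(x).reshape(b, self.num_heads, d, h * w)
        k = self.to_k(x).reshape(b, self.num_heads, d, h * w)
        v = self.to_v(x).reshape(b, self.num_heads, d, h * w)
        # normalise along the spatial axis so the C x C attention map is well scaled
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (b, heads, d, d)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out) + x  # residual connection


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)        # e.g. a pooled encoder feature map
    module = TransposedSelfAttention(channels=64)
    print(module(feats).shape)                # torch.Size([1, 64, 32, 32])
```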
“…Instance Segmentation with Transformer. In the realm of 2D instance segmentation, the power of Transformers [49] has been harnessed in several state-of-the-art works. For instance, DETR [20] has demonstrated superior performance on various vision tasks [41,7], owing to the Transformer's inherent capability to model long-range dependencies, which is beneficial for handling complex scenes.…”
Section: Related Work
confidence: 99%