2019
DOI: 10.1007/978-3-030-32248-9_33

CompareNet: Anatomical Segmentation Network with Deep Non-local Label Fusion

Abstract: Label propagation is a popular technique for anatomical segmentation. In this work, we propose a novel deep framework for label propagation based on non-local label fusion. Our framework, named CompareNet, incorporates subnets both for extracting discriminating features and for learning the similarity measure, which lead to accurate segmentation. We also introduce the voxel-wise classification as a unary potential to the label fusion function, for alleviating the search failure issue of the existing non-local fu…
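The abstract is truncated here, but the fusion it describes can be illustrated with a small sketch: each target voxel gathers similarity-weighted label votes from a non-local search window in a registered atlas, and the votes are combined with a voxel-wise classification (unary) term. The numpy sketch below is hypothetical; the function name, the Gaussian similarity weighting, and the mixing weight lam are assumptions, not CompareNet's actual formulation (which learns both the features and the similarity measure).

```python
import numpy as np

def nonlocal_label_fusion(target_feat, atlas_feat, atlas_labels, unary,
                          num_classes, radius=2, beta=1.0, lam=0.5):
    """Hypothetical sketch of non-local label fusion with a unary term.

    target_feat : (H, W, C) feature map of the target image
    atlas_feat  : (H, W, C) feature map of a registered atlas
    atlas_labels: (H, W) integer label map of the atlas
    unary       : (H, W, num_classes) voxel-wise classification scores
    """
    H, W, _ = target_feat.shape
    fused = np.zeros((H, W, num_classes))
    for y in range(H):
        for x in range(W):
            votes = np.zeros(num_classes)
            # search a (2*radius+1)^2 non-local window around the voxel in the atlas
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ay, ax = y + dy, x + dx
                    if 0 <= ay < H and 0 <= ax < W:
                        # similarity between target voxel and candidate atlas voxel
                        d = np.sum((target_feat[y, x] - atlas_feat[ay, ax]) ** 2)
                        w = np.exp(-beta * d)
                        votes[atlas_labels[ay, ax]] += w
            if votes.sum() > 0:
                votes /= votes.sum()
            # combine the pairwise (non-local) votes with the unary potential
            fused[y, x] = lam * votes + (1 - lam) * unary[y, x]
    return fused.argmax(axis=-1)
```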

Cited by 9 publications (7 citation statements) · References 14 publications
Citation types: 0 supporting, 7 mentioning, 0 contrasting
“…The input 2D panoramic X-ray X ∈ R^{H×W} (Figure 2(a)), where H and W are the image height and width, is fed to a feature extraction subnet (Figure 2(a)) with a 2D encoder-decoder structure for capturing a deep feature map at the same resolution as the input X-ray. Given the feature map, a segmentation subnet (Figure 2(b)) formulates an anatomical segmentation task [25,26] to map it into a categorical mask Y_seg ∈ Z^{H×W×K}, where K = 32 denotes the maximum number of tooth categories. Figure 4a demonstrates the tooth numbering rule we used, following the World Dental Federation notation [20].…”
Section: 3D Reconstruction of Oral Cavity (mentioning)
confidence: 99%
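As a rough illustration of the pipeline this citing paper describes (a 2D encoder-decoder feature extractor producing a full-resolution feature map, followed by a segmentation subnet that outputs a K = 32-way categorical mask), here is a minimal PyTorch sketch. The module names, layer widths, and depths are placeholder assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Toy 2D encoder-decoder that returns a feature map at the input resolution."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):               # x: (B, 1, H, W) panoramic X-ray
        return self.dec(self.enc(x))    # (B, feat_ch, H, W)

class SegmentationHead(nn.Module):
    """Maps the feature map to K per-tooth class scores."""
    def __init__(self, feat_ch=64, num_classes=32):
        super().__init__()
        self.classifier = nn.Conv2d(feat_ch, num_classes, kernel_size=1)

    def forward(self, feats):
        return self.classifier(feats)   # (B, K, H, W) logits

x = torch.randn(1, 1, 256, 512)                      # dummy X-ray
logits = SegmentationHead()(FeatureExtractor()(x))
mask = logits.argmax(dim=1)                          # (1, H, W) categorical mask
```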
“…We use the implementation from the batchgenerators Python library, with an alpha range of (0, 900) and a sigma range of (9, 13).…”
Section: Appendix A - Data Augmentation Details (mentioning)
confidence: 99%
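For reference, the alpha and sigma ranges above parameterize an elastic deformation: a random displacement field is smoothed with a Gaussian of width sigma and scaled by alpha before the image is resampled along the displaced coordinates. The sketch below reproduces that idea with scipy rather than the batchgenerators API itself, so the function name and defaults are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform_2d(image, alpha_range=(0, 900), sigma_range=(9, 13), rng=None):
    """Elastic deformation in the style described above: a random displacement
    field is Gaussian-smoothed (sigma) and scaled (alpha), then the image is
    resampled along the displaced coordinates."""
    rng = rng or np.random.default_rng()
    alpha = rng.uniform(*alpha_range)
    sigma = rng.uniform(*sigma_range)
    h, w = image.shape
    # smoothed random displacement fields for the y and x axes
    dy = gaussian_filter(rng.uniform(-1, 1, size=(h, w)), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, size=(h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([ys + dy, xs + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")
```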
“…To improve data-efficient learning, several successful approaches have been proposed from different perspectives, such as leveraging unlabeled data for semi-supervised self-training [16,1,15] or self-supervised pre-training [19,1,12], distilling priors from data as explicit constraints for model training [10,9], generating new data from imaging of an anatomy in a different modality [8,11], or utilizing appropriate data augmentation methods to increase data diversity [5,14,2,15]. Some of them are designed for medical images.…”
Section: Introduction (mentioning)
confidence: 99%
“…A large body of literature [1,19,20] exploits anatomical correlation for medical image segmentation within the deep learning framework. The anatomical correlation also serves as the foundation of atlas-based segmentation [12,15], where one or several labeled reference images (i.e., atlases) are non-rigidly registered to a target image based on the anatomical similarity, and the labels of the atlases are propagated to the target image as the segmentation output. Different from these methods, OrganNet has the ability to learn anatomical similarity between images by employing the reasoning process.…”
Section: Related Work (mentioning)
confidence: 99%
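To contrast the learned non-local fusion sketched earlier with the classical atlas-based pipeline this passage describes, the snippet below shows only the label-propagation step: labels from several already-registered atlases are combined by an optionally similarity-weighted majority vote. The function and argument names are hypothetical, and the non-rigid registration is assumed to have been done elsewhere.

```python
import numpy as np

def majority_vote_fusion(propagated_labels, weights=None, num_classes=None):
    """Fuse labels propagated from several registered atlases.

    propagated_labels: list of (H, W) integer label maps, one per atlas,
                       already warped into the target image space.
    weights:           optional per-atlas similarity weights (list of floats).
    """
    num_classes = num_classes or (max(l.max() for l in propagated_labels) + 1)
    weights = weights or [1.0] * len(propagated_labels)
    votes = np.zeros(propagated_labels[0].shape + (num_classes,))
    for labels, w in zip(propagated_labels, weights):
        # each atlas casts a (weighted) vote for its label at every voxel
        one_hot = np.eye(num_classes)[labels]
        votes += w * one_hot
    return votes.argmax(axis=-1)
```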
“…Anatomical similarity has been widely used in medical image segmentation [7,15]. Compared to these methods, our work mainly exploits anatomical similarity within each one-shot pair of images to perform reasoning.…”
Section: Introduction (mentioning)
confidence: 99%