2022
DOI: 10.1016/j.media.2022.102530

Mutual consistency learning for semi-supervised medical image segmentation

Cited by 110 publications (54 citation statements). References 53 publications.
“…The experiments were divided into training and validation sets according to a ratio of 4:1. In addition, the latest methods that achieved excellent performance on the LASC2013 and ASC2018 datasets were selected for comparison [38, 39, 40, 41, 42]. The segmentation performance of the different models is shown in Table 2 and Figure 7.…”
Section: Experimental Results and Analysis (mentioning; confidence: 99%)
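Several of the citing works describe a 4:1 train/validation split of the atrial segmentation data, as in the statement above. A minimal sketch of such a split is given below; the directory layout, dataset path, and random seed are illustrative assumptions, not details taken from the quoted paper.

```python
# Hedged sketch: a reproducible 4:1 train/validation split over case folders.
# The data directory and seed are illustrative assumptions.
import random
from pathlib import Path

def split_cases(data_dir: str, val_fraction: float = 0.2, seed: int = 0):
    """Shuffle case folders and split them 4:1 into training and validation lists."""
    cases = sorted(p.name for p in Path(data_dir).iterdir() if p.is_dir())
    rng = random.Random(seed)
    rng.shuffle(cases)
    n_val = int(round(len(cases) * val_fraction))
    return cases[n_val:], cases[:n_val]  # (training IDs, validation IDs)

# Example usage with a hypothetical dataset path:
# train_ids, val_ids = split_cases("./LASC2013")
```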
“…In this study, two public datasets and one in-house dataset are employed to validate the proposed RCPS method: the LA dataset [9], the pancreas-CT dataset [10], and the TBI dataset [41]. On the public datasets, we compare the segmentation performance of the proposed RCPS against state-of-the-art methods, including UA-MT [23], SASSNet [26], DTC [27], URPC [29], and MC-Net+ [30], which have been discussed in Sec. II-B.…”
Section: Methods (mentioning; confidence: 99%)
“…Dataset and Preprocessing. 1) LA Dataset: The LA dataset is the benchmark dataset of the 2018 Atrial Segmentation Challenge and includes 100 gadolinium-enhanced, labeled MR scans with an isotropic resolution of 0.625 × 0.625 × 0.625 mm. Since the annotations of the LA test set are not available, we use the fixed data split adopted in previous works [23], [30], [31], where 80 samples are used for training and the remaining 20 for validation. Performance comparisons with other models are then reported on the same validation set for fairness.…”
Section: Methods (mentioning; confidence: 99%)
“…Our work is closely related to consistency-based semi-supervised learning (SSL) [46,45], where the basic idea is to leverage the unlabeled data under the smoothness assumption, i.e., deep models should output consistent results under various small perturbations or augmentations.…”
Section: Consistency-based Semi-supervised Learning (mentioning; confidence: 99%)
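The smoothness assumption quoted above is typically enforced with a consistency loss between the model's predictions on two perturbed views of the same unlabeled image. A minimal PyTorch-style sketch follows; the Gaussian-noise perturbation, noise scale, and loss weighting are illustrative assumptions rather than the exact configuration of the cited works.

```python
# Hedged sketch of perturbation-based consistency regularization for SSL.
# Two noisy views of the same unlabeled batch pass through the model and
# their softmax outputs are pulled together with an MSE penalty.
import torch
import torch.nn.functional as F

def consistency_loss(model: torch.nn.Module,
                     unlabeled: torch.Tensor,
                     noise_std: float = 0.1) -> torch.Tensor:
    view_a = unlabeled + noise_std * torch.randn_like(unlabeled)
    view_b = unlabeled + noise_std * torch.randn_like(unlabeled)
    prob_a = F.softmax(model(view_a), dim=1)
    with torch.no_grad():                     # treat one branch as the target
        prob_b = F.softmax(model(view_b), dim=1)
    return F.mse_loss(prob_a, prob_b)

# Typical use inside a training step, with lambda_u ramped up over training:
# loss = supervised_loss + lambda_u * consistency_loss(model, unlabeled_batch)
```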