2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01269
Semi-Supervised Semantic Segmentation With Cross-Consistency Training

Cited by 695 publications (476 citation statements: 3 supporting, 473 mentioning, 0 contrasting)
References 35 publications
“…We compare with two state-of-the-art consistency-related methods whose codes are publicly available [32], [46], achieving 57.22 ± 21.68 (89.53, 64.22) and 51.21 ± 24.11 (91.86, 55.24) in terms of DSC for PDAC segmentation, respectively. To run these methods, we use the same training models (backbone: DenseUNet [47] for [32] and ResNet-50 [48] for [46]) and parameters as reported in [32] and [46]. Models are trained and tested from multiple planes separately in a slice-by-slice manner.…”
Section: Comparison Between IAG-Net and Other Methods (mentioning)
confidence: 99%
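For readers unfamiliar with the metric, the figures quoted above are Dice similarity coefficients (DSC) between predicted and ground-truth masks, reported as mean ± standard deviation. A minimal per-case sketch, assuming binary NumPy masks; the function name and epsilon are illustrative, not taken from the cited work:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # DSC = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```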
“…Note that [46] uses ResNet-50 as the backbone network, which is a stronger backbone than VGG-Net, as shown in [48]. We also test our IAG-Net with ResNet-50, and achieve 56.…”
Section: Comparison Between IAG-Net and Other Methods (mentioning)
confidence: 99%
“…The first one is unsupervised or self-supervised pretraining, followed by fine-tuning on a small set of labeled data. The second paradigm is to jointly use the labeled and unlabeled data through pseudo labeling [4] or consistency regularization [5][6][7][8]. Since there is an obvious gap between the objectives of unsupervised pretraining and the downstream segmentation task, the effect of unsupervised pretraining is not always significant.…”
Section: Introduction (mentioning)
confidence: 99%
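As a rough illustration of the second paradigm, here is a minimal pseudo-labeling sketch for segmentation, assuming a PyTorch model that maps an image batch to (B, C, H, W) logits. The function name and the 0.9 confidence threshold are illustrative assumptions, not the formulation of [4]:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_images, threshold=0.9):
    # First pass (no gradients): derive hard pseudo-labels and
    # per-pixel confidences from the model's own predictions.
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_images), dim=1)  # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                        # (B, H, W) each
    # Second pass (with gradients): train against confident pixels only.
    logits = model(unlabeled_images)
    loss = F.cross_entropy(logits, pseudo, reduction="none")   # (B, H, W)
    mask = (conf >= threshold).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)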
“…Consistency regularization encourages the segmentation prediction to be consistent on the unlabeled examples under different data perturbations or among different models. We follow the studies in [6,9,10] and enforce consistency among different models' predictions. Both strong and weak perturbations are applied.…”
Section: Introduction (mentioning)
confidence: 99%
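A minimal sketch of this kind of weak/strong consistency objective, again assuming a PyTorch segmentation model. It uses a pixel-wise KL term and assumes intensity-only (non-geometric) perturbations so the two views stay spatially aligned; this simplifies the schemes of [6,9,10] rather than reproducing any of them:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_images, weak_aug, strong_aug):
    # Teacher view: weakly perturbed input, no gradients flow through it.
    with torch.no_grad():
        weak_probs = torch.softmax(model(weak_aug(unlabeled_images)), dim=1)
    # Student view: strongly perturbed input, trained to match the teacher.
    strong_log_probs = F.log_softmax(model(strong_aug(unlabeled_images)), dim=1)
    # Pixel-wise KL divergence between the two predictive distributions.
    return F.kl_div(strong_log_probs, weak_probs, reduction="batchmean")
```

Stopping gradients on the weak view is the usual design choice here: it keeps the better-behaved prediction as a fixed target and pushes only the strongly perturbed branch toward it.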