2023
DOI: 10.1109/tmm.2022.3154159
Cross-Mix Monitoring for Medical Image Segmentation With Limited Supervision

Cited by 18 publications (9 citation statements)
References 42 publications
“…3) Small data alleviation with weak supervision, semi-supervision, or self-supervision strategies: Current image segmentation algorithms mostly rely on the availability of large amounts of training data with pixel-level annotations, which are expensive, tedious, and laborious to obtain. To alleviate the labeling burden and the small-data limitation, recent years have witnessed increasing attention to building label-efficient deep segmentation algorithms [12], [70], [71]. Depending on the supervision provided by different types of labeling, deep learning methods with weak-supervision, semi-supervision, or self-supervision strategies have been explored to alleviate the problem of limited medical data in data-driven deep learning.…”
Section: Loss Function Design
confidence: 99%
“…Semi-supervised tasks utilize a small amount of well-labeled data and a large amount of unlabeled data, learning underlying patterns from the latter. One approach is based on prediction consistency on the unlabeled data, commonly using the mean-teacher framework [71], [72], [74]. Yang et al. [75] added a C-render framework to provide complementary information for the student model.…”
Section: Loss Function Design
confidence: 99%
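
To make the mean-teacher consistency idea above concrete, here is a minimal PyTorch sketch, assuming a generic segmentation network; the noise perturbation, the loss weight w, and the function names are illustrative assumptions, not the cited papers' implementations.

    import copy
    import torch
    import torch.nn.functional as F

    def ema_update(teacher, student, alpha=0.99):
        # Teacher weights track an exponential moving average of the student's.
        with torch.no_grad():
            for t_p, s_p in zip(teacher.parameters(), student.parameters()):
                t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

    def semi_supervised_step(student, teacher, x_lab, y_lab, x_unlab, w=0.1):
        # Supervised loss on the small labeled set.
        sup = F.cross_entropy(student(x_lab), y_lab)
        # Consistency: the student's prediction on perturbed unlabeled data
        # should match the (frozen) teacher's prediction on the clean data.
        noise = 0.1 * torch.randn_like(x_unlab)
        stu_probs = F.softmax(student(x_unlab + noise), dim=1)
        with torch.no_grad():
            tea_probs = F.softmax(teacher(x_unlab), dim=1)
        return sup + w * F.mse_loss(stu_probs, tea_probs)

    # Typical usage: teacher = copy.deepcopy(student); after each optimizer
    # step on the combined loss, call ema_update(teacher, student).
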
“…Huang et al. [34] add a cutout content loss and slice misalignment as input perturbations. Another common form of consistency is mix-up consistency [37], [38], [39], which encourages the segmentation of an interpolation of two inputs to be consistent with the interpolation of the segmentation results of those inputs. Apart from perturbations on the inputs, many studies also focus on perturbations at the feature-map level.…”
Section: Unsupervised Regularization With Consistency Learning
confidence: 99%
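
The mix-up consistency mentioned above can be sketched as follows; this is a minimal illustration assuming a PyTorch model and a Beta-sampled mixing coefficient, not the exact formulation of [37], [38], [39].

    import torch
    import torch.nn.functional as F

    def mixup_consistency_loss(model, x1, x2, alpha=0.75):
        # Sample a mixing coefficient and interpolate the two unlabeled inputs.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        x_mix = lam * x1 + (1.0 - lam) * x2
        # Target: the same interpolation applied to the individual predictions.
        with torch.no_grad():
            target = lam * F.softmax(model(x1), dim=1) \
                   + (1.0 - lam) * F.softmax(model(x2), dim=1)
        # The prediction on the mixed input should match the mixed predictions.
        pred = F.softmax(model(x_mix), dim=1)
        return F.mse_loss(pred, target)
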
“…Note that if the perturbations are too weak, they may cause the Lazy Student Phenomenon, whereas overly strong perturbations may confuse the teacher and student and lead to low performance. Shu et al. [38] add a transductive monitor for further knowledge distillation to narrow the semantic gap between the student model and the teacher model.…”
Section: Unsupervised Regularization With Consistency Learning
confidence: 99%
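
For reference, a generic temperature-scaled knowledge-distillation loss between per-pixel student and teacher predictions might look like the sketch below; this is a standard formulation offered only as an illustration, not Shu et al.'s transductive monitor.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Soften both per-pixel class distributions with temperature T and
        # minimize their KL divergence (scaled by T^2, the usual convention).
        log_p = F.log_softmax(student_logits / T, dim=1)
        q = F.softmax(teacher_logits / T, dim=1)
        return F.kl_div(log_p, q, reduction="batchmean") * (T * T)
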