2021
DOI: 10.48550/arxiv.2108.02559
Preprint

MS-KD: Multi-Organ Segmentation with Multiple Binary-Labeled Datasets

Shixiang Feng,
Yuhang Zhou,
Xiaoman Zhang
et al.

Abstract: Annotating multiple organs in 3D medical images is time-consuming and costly. Meanwhile, there exist many single-organ datasets with one specific organ annotated. This paper investigates how to learn a multi-organ segmentation model leveraging a set of binary-labeled datasets. A novel Multi-teacher Single-student Knowledge Distillation (MS-KD) framework is proposed, where the teacher models are pretrained single-organ segmentation networks, and the student model is a multi-organ segmentation network. Considering…

Cited by 2 publications (2 citation statements) | References 10 publications
“…Cross pseudo supervision (CPS), 39 a pseudolabel-based segmentation method, uses Siamese networks supervising each other to correct potential noises in pseudolabels. Another way in this research field is MS-KD, 40 which pretrains teacher models based on several datasets, each for one organ, to generate pseudolabels. In multi-teacher single-student knowledge distillation (MS-KD), features from all layers are distilled along with final logits to ease model training using the Kullback-Leibler (KL) loss.…”
Section: Related Work
confidence: 99%
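
The statement above summarizes MS-KD's training signal: the student's outputs are matched against each pretrained single-organ teacher with a Kullback-Leibler loss. Below is a minimal PyTorch sketch of that idea, assuming one frozen binary teacher per organ and a student whose channel k corresponds to teacher k's organ; the function names and the background/organ channel pairing are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of multi-teacher logit distillation with a KL loss.
# Assumptions (not from the paper): each teacher outputs 2 channels
# (background vs. its organ), the student outputs num_organs + 1 channels,
# and student channel k corresponds to teacher k's organ.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # "batchmean" matches the mathematical definition of KL divergence.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

def multi_teacher_kd_loss(student_logits, teachers, images):
    """Average KL distillation loss over all frozen single-organ teachers."""
    loss = 0.0
    for organ_idx, teacher in enumerate(teachers, start=1):
        with torch.no_grad():
            teacher_logits = teacher(images)  # (B, 2, D, H, W): background vs. this organ
        # Compare the teacher's binary output with the student's
        # (background, organ_idx) channel pair for the same organ.
        student_pair = student_logits[:, [0, organ_idx]]
        loss = loss + kd_loss(student_pair, teacher_logits)
    return loss / len(teachers)
```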
“…Such a design is redundant and has poor flexibility. Recently, two other types of solutions have been proposed, that is, pseudo-label-based methods, [18][19][20][21] and conditional information-based methods. 22,23 Most pseudo-label-based approaches obtain teacher networks from single-organ datasets and then teach a student network by generating pseudo-labels.…”
Section: (A) (B) (D) (C)
confidence: 99%
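
For the pseudo-label-based family described in this statement, the following is a minimal sketch of how binary masks from several single-organ teachers might be merged into one multi-organ pseudo-label for the student. Resolving voxels claimed by no teacher as background, and overlaps by the most confident teacher, is an assumption for illustration, not a detail from the cited works.

```python
# Hedged sketch of pseudo-label merging from several single-organ teachers.
# Assumption: each teacher provides a foreground probability map for its organ.
import torch

def merge_pseudo_labels(teacher_probs):
    """
    teacher_probs: list of (B, D, H, W) foreground probabilities,
                   one tensor per single-organ teacher.
    Returns an integer label map: 0 = background, k = organ of teacher k.
    """
    fg = torch.stack(teacher_probs, dim=1)            # (B, K, D, H, W)
    # Background "confidence" is high where no teacher is confident.
    bg = 1.0 - fg.max(dim=1, keepdim=True).values     # (B, 1, D, H, W)
    scores = torch.cat([bg, fg], dim=1)               # (B, K+1, D, H, W)
    return scores.argmax(dim=1)                       # (B, D, H, W)
```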