2021
DOI: 10.1016/j.media.2021.102148
Semi-supervised classification of radiology images with NoTeacher: A teacher that is not mean

Cited by 9 publications (7 citation statements) | References 14 publications
“…For the results on Chest X-Ray14 in Table 1, our method, NoTeacher [36], UPS [28], and S²MTS² [25] use the DenseNet-121 backbone, while SRC-MT [26] and GraphXNet [2] use DenseNet-169 [12]. SRC-MT [26] is a consistency-based SSL method; NoTeacher [36] extends MT by replacing the EMA process with two networks combined with a probabilistic graphical model; S²MTS² [25] combines self-supervised pre-training with MT fine-tuning; GraphXNet [2] constructs a graph from dataset samples and assigns pseudo labels to unlabelled samples through label propagation; and UPS [28] applies probability and uncertainty thresholds to enable the pseudo-labelling of unlabelled samples. All methods use the official test set [38].…”
Section: Thorax Disease Classification Results
confidence: 99%
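The UPS-style selection described above (probability and uncertainty thresholds gating which unlabelled samples receive pseudo labels) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the threshold names `tau_p` and `tau_u` and their values are assumptions for the example.

```python
import numpy as np

def select_pseudo_labels(probs, uncertainties, tau_p=0.9, tau_u=0.05):
    """Keep unlabelled samples whose top predicted probability is high
    and whose uncertainty estimate is low; return their indices and
    hard pseudo labels (argmax of the predicted distribution)."""
    confident = probs.max(axis=1) >= tau_p
    certain = uncertainties <= tau_u
    keep = np.where(confident & certain)[0]
    return keep, probs[keep].argmax(axis=1)

# Toy example: three unlabelled samples, two classes.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.02, 0.98]])
unc = np.array([0.01, 0.20, 0.03])
idx, labels = select_pseudo_labels(probs, unc)
# Samples 0 and 2 pass both thresholds; sample 1 is rejected
# for being both low-confidence and high-uncertainty.
```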
“…The main benchmarks for SSL in MIA study the multi-label classification of chest X-ray (CXR) images [13,38] and the multi-class classification of skin lesions [8,35]. For CXR SSL classification, pseudo-labelling methods have been explored [2], but SOTA results are achieved with consistency learning approaches [9,23,25,26,36]. For skin lesion SSL classification, the current SOTA is also based on consistency learning [26], with pseudo-labelling approaches [3] not being competitive.…”
Section: Related Work
confidence: 99%
“…In our study, only ACL tears are considered positive samples, and other abnormal or normal patients are considered negative samples. We train and test our presented model using the publicly available training set (1,130 scans) and test set (120 scans) with no patient overlap, following the split used in previous work (Unnikrishnan et al., 2021; Azcona et al., 2020). Meanwhile, of the 1,130 scans available for training, we randomly sample 20% as the validation set.…”
Section: Methods
confidence: 99%
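The split described above (1,130 training scans with a random 20% held out for validation) can be sketched as below. This is a generic illustration of such a split, assuming index-level sampling with a fixed seed; it is not the cited study's code, and the seed and function name are hypothetical.

```python
import random

def split_train_val(n_train=1130, val_frac=0.20, seed=0):
    """Randomly hold out a fraction of the training scans as a
    validation set; returns (train_indices, val_indices)."""
    rng = random.Random(seed)
    indices = list(range(n_train))
    rng.shuffle(indices)
    n_val = int(n_train * val_frac)
    return sorted(indices[n_val:]), sorted(indices[:n_val])

train_idx, val_idx = split_train_val()
# 1,130 scans -> 904 for training, 226 for validation, disjoint sets.
```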
“…NoTeacher (NoT) [52]: In Mean Teacher, the consistency target, which is the teacher, relies on the EMA of the student. In other words, the teacher's weights are an ensemble of the student's weights.…”
Section: Consistency-based Methods
confidence: 99%
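The EMA update that Mean Teacher uses for its consistency target, and that NoTeacher replaces with two networks and a probabilistic graphical model, can be sketched as below. This is a minimal per-parameter illustration, assuming a flat list of weights and a typical decay `alpha=0.999`; it is not NoTeacher's method, which removes this update entirely.

```python
def ema_update(teacher_params, student_params, alpha=0.999):
    """Mean Teacher update: each teacher weight is an exponential
    moving average (EMA) of the corresponding student weight, so the
    teacher is a temporal ensemble of past student states."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [0.0, 1.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, alpha=0.9)
# Each teacher weight moves a fraction (1 - alpha) toward the student:
# here, approximately [0.1, 1.0].
```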