2022
DOI: 10.48550/arxiv.2203.16983
Preprint

Self-distillation Augmented Masked Autoencoders for Histopathological Image Classification

Abstract: Self-supervised learning (SSL) has drawn increasing attention in pathological image analysis in recent years. However, the prevalent contrastive SSL is suboptimal for feature representation in this scenario due to the homogeneous visual appearance of pathological images. Alternatively, masked autoencoders (MAE) build SSL from a generative paradigm, which is better suited to pathological image modeling. In this paper, we first introduce MAE to pathological image analysis. A novel SD-MAE model is proposed to enable a self-distillation…
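The abstract outlines the overall recipe: an MAE backbone plus a self-distillation term. The paper's exact architecture is not reproduced on this page, so the following is only a minimal PyTorch sketch of that general idea; the function name, the `proj` head, and the decoder's output convention are hypothetical, not the authors' API.

```python
import torch
import torch.nn.functional as F

def sd_mae_loss(encoder, decoder, proj, imgs, patch=16, mask_ratio=0.75):
    """One training step's loss for an MAE augmented with self-distillation.

    Assumed module signatures (hypothetical):
      encoder: (B, N_vis, patch_dim) -> (B, N_vis, D) visible-patch features
      decoder: (feats, ids_masked)   -> (recon, dec_feats), where recon are the
               predicted pixels of the masked patches and dec_feats are the
               decoder's per-patch features with visible patches first
      proj:    (B, N_vis, D) -> (B, N_vis, D) projection head for distillation
    """
    B, C, H, W = imgs.shape
    # Patchify: (B, C, H, W) -> (B, N, C * patch * patch).
    p = imgs.unfold(2, patch, patch).unfold(3, patch, patch)
    patches = p.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)
    N = patches.size(1)

    # Random masking, as in MAE: keep (1 - mask_ratio) of the patches.
    n_keep = int(N * (1 - mask_ratio))
    ids = torch.rand(B, N, device=imgs.device).argsort(dim=1)
    ids_keep, ids_masked = ids[:, :n_keep], ids[:, n_keep:]
    gather = lambda x, i: torch.gather(
        x, 1, i.unsqueeze(-1).expand(-1, -1, x.size(-1)))

    enc_feats = encoder(gather(patches, ids_keep))
    recon, dec_feats = decoder(enc_feats, ids_masked)

    # (1) Generative objective: reconstruct pixels of the masked patches only.
    loss_rec = F.mse_loss(recon, gather(patches, ids_masked))

    # (2) Self-distillation: pull the encoder's visible-patch features toward
    # the (detached) decoder features of the same patches.
    loss_sd = F.mse_loss(proj(enc_feats), dec_feats[:, :n_keep].detach())
    return loss_rec + loss_sd
```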

Cited by 3 publications (7 citation statements) | References 23 publications
Citation types: 2 supporting, 5 mentioning, 0 contrasting

Citation statements (ordered by relevance):
“…Hence transferring decoder information to the encoder with self-distillation improves the outcomes of self-learning. We also observe that similar to [6,7] and unlike [10], our results show that predicting the masked area only outperforms predicting all image pixels for both SimMIM and our SD-SimMIM.…”
Section: Quantitative Results (supporting, confidence: 66%)
“…We believe that the visible patches in the decoder contain more knowledge than the ones in the encoder. Moreover, similar to [6,7] and unlike [10], we found out that predicting the masked area only outperforms predicting all image pixels.…”
Section: Introduction (supporting, confidence: 65%)
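The claim here is that the decoder's representation of the visible patches is a useful teacher for the encoder. One plausible form of such a stop-gradient distillation term is sketched below, assuming a negative-cosine distance (a common choice, not necessarily the papers' exact loss):

```python
import torch.nn.functional as F

def self_distillation_loss(enc_visible, dec_visible):
    """Distill decoder knowledge of the *visible* patches into the encoder.

    enc_visible: (B, N_vis, D) encoder features (student).
    dec_visible: (B, N_vis, D) decoder features of the same patches (teacher).
    """
    student = F.normalize(enc_visible, dim=-1)
    teacher = F.normalize(dec_visible.detach(), dim=-1)  # stop-gradient teacher
    # Negative cosine similarity, averaged over patches and the batch.
    return -(student * teacher).sum(dim=-1).mean()
```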
“…Macro F1 and ROC AUC are reported for multi-class classification tasks. It should be noted that the “background” (BACK) class of NCT-CRC-HE is considered neither for training nor for evaluation, following (23, 50, 77, 78).…”
Section: Experimental and Evaluation Setup (mentioning, confidence: 99%)
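For the evaluation protocol this statement describes (macro F1 and ROC AUC on NCT-CRC-HE with the BACK class excluded), the class filtering can be expressed as below. A scikit-learn sketch; the function name and the renormalization choice are assumptions, not the cited paper's code:

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def evaluate_without_background(y_true, y_prob, class_names, excluded=("BACK",)):
    """Macro F1 and one-vs-rest ROC AUC, dropping excluded classes
    (e.g. the NCT-CRC-HE "background" class) from evaluation.

    y_true: (n,) integer labels; y_prob: (n, C) predicted class probabilities.
    """
    keep = [i for i, c in enumerate(class_names) if c not in excluded]
    sel = np.isin(y_true, keep)
    y_true_k, y_prob_k = y_true[sel], y_prob[sel][:, keep]
    # Renormalize probabilities over the kept classes and remap labels to 0..K-1.
    y_prob_k = y_prob_k / y_prob_k.sum(axis=1, keepdims=True)
    remap = {c: i for i, c in enumerate(keep)}
    y_true_k = np.array([remap[c] for c in y_true_k])
    macro_f1 = f1_score(y_true_k, y_prob_k.argmax(axis=1), average="macro")
    auc = roc_auc_score(y_true_k, y_prob_k, multi_class="ovr", average="macro")
    return macro_f1, auc
```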