2021
DOI: 10.3390/s21051566
RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification

Abstract: As a growing number of instruments generate ever more airborne and satellite images, the bottleneck in remote sensing (RS) scene classification has shifted from data limits toward a lack of ground-truth samples. Many challenges remain when facing unknown environments, especially those with insufficient training data. Few-shot classification offers a different picture under the umbrella of meta-learning: mining rich knowledge from only a few samples is possible. In this work, we propose a method named RS-SS…

Cited by 19 publications (3 citation statements)
References 56 publications (120 reference statements)
“…Recent studies have shown that self-supervised learning effectively improves the generalization performance of transfer-learning models in FSC tasks [34]. Based on this, in this paper we use a linear combination of cross-entropy and self-supervised loss functions to pre-train the teacher network and select rotation prediction as the self-supervised learning task.…”
Section: Main Approach
confidence: 99%
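The pre-training objective quoted above — supervised cross-entropy plus a rotation-prediction self-supervised term — can be sketched minimally in NumPy. This is an illustration under standard assumptions, not the paper's implementation: the weighting factor `alpha` and the four-way rotation labels (0°, 90°, 180°, 270°) are the usual rotation-prediction setup, and the logits are assumed to come from the teacher network's two heads.

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable cross-entropy for a single example:
    # shift logits before exponentiating to avoid overflow.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def combined_loss(class_logits, class_label, rot_logits, rot_label, alpha=1.0):
    """Linear combination of supervised CE (class head) and
    self-supervised CE (4-way rotation-prediction head)."""
    supervised = cross_entropy(class_logits, class_label)
    self_supervised = cross_entropy(rot_logits, rot_label)
    return supervised + alpha * self_supervised
```

With a confident, correct class prediction and a uniform rotation head, the loss is dominated by the rotation term (ln 4 for four equiprobable rotations), which is what drives the self-supervised signal during pre-training.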
“…Knowledge distillation from a heavy network as the teacher to a light network as the student can be used to improve the performance of the student network. Zhang et al. (2021) proposed a novel two-branch network that took three pairs of original and transformed images as input and incorporated class activation maps to drive the network to mine the most relevant class-specific regions. This strategy ensured that the network generated differentiated embeddings, and a round of self-knowledge distillation was set up to prevent overfitting and improve performance.…”
Section: Related Work
confidence: 99%
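The teacher-to-student distillation described above is typically realized as a KL divergence between temperature-softened teacher and student outputs (the Hinton et al. formulation). A minimal NumPy sketch of that standard objective follows; it omits RS-SSKD's two-branch/CAM specifics, and the temperature `T = 4.0` is a common default, not a value from the paper.

```python
import numpy as np

def softmax(z):
    # Stable softmax: subtract the max before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.
    The T**2 factor keeps gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits / T)  # soft teacher targets
    q = softmax(student_logits / T)  # student predictions
    return T * T * np.sum(p * (np.log(p) - np.log(q)))
```

The loss is zero when the student exactly matches the teacher's logits and grows as their softened distributions diverge, which is what lets the light student absorb the heavy teacher's "dark knowledge" about inter-class similarity.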
“…Following recent FSRSSC studies [10,22–24], we utilize ResNet-12 as the backbone for feature extraction. We also adopt the pre-training strategy suggested in [22] to better initialize the meta-learner's feature extractor.…”
Section: Implementation Details
confidence: 99%