An Empirical Study of Remote Sensing Pretraining
2022 · Preprint
DOI: 10.48550/arxiv.2204.02825

Cited by 2 publications (8 citation statements)
References 48 publications
“…Considering the domain gap between ImageNet and RS images, a new trend is to pre-train on remote sensing data to learn in-domain representations [10, 14-17, 19, 50-53]. Supervised pre-training on RS classification samples [10, 14, 15, 50, 51] is still limited by the amount of labeled data, while self-supervised pre-training can make use of the vast amount of unlabeled RS data [16, 17, 19].…”
Section: A. Handling Label Insufficiency in CD
Mentioning, confidence: 99%
“…Considering that the domain gap between natural and RS images may induce sub-optimal representations, a new trend is to pre-train on RS data to better learn the in-domain representations, including supervised pre-training [10, 14, 15] and self-supervised learning (SSL) [16-19]. Contrastive SSL [20, 21] could learn useful representations from massive unlabeled data by pulling together representations of semantically similar samples (i.e., positive pairs) and pushing away those of dissimilar samples (i.e., negative pairs).…”
Section: Introduction
Mentioning, confidence: 99%
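The excerpt above describes contrastive SSL as pulling positive-pair representations together and pushing negative-pair representations apart. A minimal sketch of an InfoNCE-style objective with in-batch negatives is given below; it assumes PyTorch, and the function name, temperature value, and embedding sizes are illustrative, not the specific loss or settings used by any of the cited works.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss with in-batch negatives.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Row i of z1 and row i of z2 form a positive pair; every other row in the
    batch serves as a negative for row i.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Cross-entropy against the diagonal pulls positive pairs together
    # and pushes off-diagonal (negative) pairs apart.
    return F.cross_entropy(logits, targets)

# Toy usage: random embeddings standing in for two views of 8 RS patches.
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z_a, z_b)
```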
“…In addition, by their experimental results on DIOR [23], they found that knowledge transfer learning also makes the superior detectors in natural scenes obtain competitive results in the RSD without any careful model design. Moreover, Wang, D. et al [34] provided an empirical study of model pre-training in the RSD, which utilized the self-built optical remote sensing dataset M-AID [30] to pre-train models of CNNs [7,10] and ViTs [15,16] and then fine-tuned them on several downstream tasks to avoid the severe domain gap impact of transferring knowledge from the natural scenes to remote sensing scenes. From the experimental results of [34], they found that ViT-based models [15,16] are promising backbones to provide a stronger feature representation to facilitate downstream tasks in the RSD.…”
Section: Knowledge Transfer Learning
Mentioning, confidence: 99%
“…Therefore, in order to apply these trained data-hungry models to other specific domains, knowledge transfer learning should be considered; specifically, when data-hungry models are applied to the remote sensing domain (RSD), they have to utilize a large-scale dataset (e.g., ImageNet [18]) to sufficiently stimulate their potential and then better adapt to various downstream tasks (e.g., scene classification, object detection, and semantic segmentation) in the RSD. Until now, many efforts [19-34] have demonstrated that a consensus solution of supervised pre-training based knowledge transfer learning has basically formed, which pre-trains the model on a large-scale dataset with manual annotation and then directly fine-tunes the pre-trained model on downstream task datasets in the RSD.…”
Section: Introduction
Mentioning, confidence: 99%
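The excerpt above summarizes the pre-train-then-fine-tune workflow: initialize from a backbone trained on a large labeled dataset, swap the task head, and continue training on the downstream dataset. A minimal sketch follows, assuming a recent torchvision (for the weights enum) and an ImageNet-pretrained ResNet-50; the class count and the step function are placeholders for illustration, not the exact recipe of the cited studies.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SCENE_CLASSES = 30  # placeholder for an RS scene-classification label set

# 1) Start from a backbone pre-trained on a large-scale dataset (ImageNet here).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# 2) Replace the classification head for the downstream remote-sensing task.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SCENE_CLASSES)

# 3) Fine-tune: update all weights, or freeze the backbone and train only
#    the new head (linear probing) if labels are scarce.
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One optimization step on a batch of downstream RS images."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```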