2021
DOI: 10.48550/arxiv.2112.01715
Preprint

Self-Supervised Material and Texture Representation Learning for Remote Sensing Tasks

Abstract: Self-supervised learning aims to learn image feature representations without the use of manually annotated labels. It is often used as a precursor step to obtain useful initial network weights that contribute to faster convergence and superior performance on downstream tasks. While self-supervision allows one to reduce the domain gap between supervised and unsupervised learning without the use of labels, the self-supervised objective still requires a strong inductive bias to downstream tasks for effective …
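As a rough illustration of the pretrain-then-finetune workflow the abstract describes, the sketch below loads self-supervised backbone weights as the initialization for a downstream classifier. The checkpoint path, ResNet-50 backbone, and 10-class land-cover head are hypothetical placeholders for this example, not details taken from the paper.

```python
import os
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical checkpoint produced by a self-supervised pre-training run.
PRETRAINED_CKPT = "ssl_pretrained_backbone.pth"

# Backbone initialised from self-supervised weights instead of a random init.
backbone = resnet50(weights=None)
if os.path.exists(PRETRAINED_CKPT):
    state = torch.load(PRETRAINED_CKPT, map_location="cpu")
    backbone.load_state_dict(state, strict=False)  # projection-head keys may be absent

# Swap in a task-specific head, e.g. a 10-class land-cover classifier,
# and fine-tune the whole network on the labelled downstream dataset.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
```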

Cited by 3 publications (6 citation statements)
References: 78 publications
“…Very recently, many contrastive methods have been applied in RS to obtain in-domain pre-trained models that benefit downstream tasks, including land-cover classification [22, 24-28, 60, 61], semantic segmentation [29-32], and change detection [16-18]. Most existing methods apply InfoNCE loss [18, 22, 24-28, 30, 33, 62] or triplet loss [17, 60] on the constructed positive and negative pairs.…”
Section: Semantic Dissimilarity
confidence: 99%
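For context on the statement above, InfoNCE is typically computed over a batch of paired embeddings, where matching views are positives and every other pair in the batch acts as a negative. The function below is a minimal PyTorch sketch; its name, the temperature of 0.1, and the batch shapes are illustrative assumptions rather than details from the cited works.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a[i] and z_b[i] are embeddings of two augmented views of the same image;
    all non-matching pairs within the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # (N, N) scaled cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)           # diagonal entries are the positives

# Example: embeddings for 8 view pairs in 128 dimensions.
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
```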
“…Some attempts apply SSL directly on a small downstream change detection dataset to extract seasonally invariant features for unsupervised change detection [64-66]. Other more related studies that follow the normal SSL pipeline mostly evaluate the pre-trained model on the medium-resolution change detection dataset [16-18]. We instead explore pre-trained models suitable for high-resolution RS image change detection.…”
Section: Semantic Dissimilarity
confidence: 99%
“…These pretext tasks include predicting context [25], solving jigsaw puzzles [26, 27], image rotation and colorization [28-31], spatio-temporal consistency [32], and so on. Applications of these methods to RSI can be found in [33-35], which use domain knowledge and temporal prediction to supervise the training.…”
Section: Related Work
confidence: 99%
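The rotation prediction pretext task mentioned in this statement can be framed as a 4-way classification problem over 0/90/180/270 degree rotations. The snippet below is an illustrative PyTorch sketch; the ResNet-18 encoder, 64x64 patch size, and the rotate_batch helper are assumptions made for this example, not details from the cited papers.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def rotate_batch(images: torch.Tensor):
    """Rotate each image by a random multiple of 90 degrees and return the
    rotated batch together with the rotation index used as a pseudo-label."""
    rotations = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rotations)])
    return rotated, rotations

encoder = resnet18(weights=None)
encoder.fc = nn.Linear(encoder.fc.in_features, 4)   # predict one of four rotations

images = torch.randn(16, 3, 64, 64)                 # stand-in for unlabeled RS patches
rotated, labels = rotate_batch(images)
loss = nn.functional.cross_entropy(encoder(rotated), labels)
```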