2021 · Preprint
DOI: 10.48550/arxiv.2103.12718
Self-Supervised Pretraining Improves Self-Supervised Pretraining

Abstract: While self-supervised pretraining has proven beneficial for many computer vision tasks, it requires expensive and lengthy computation, large amounts of data, and is sensitive to data augmentation. Prior work demonstrates that models pretrained on datasets dissimilar to their target data, such as chest X-ray models trained on ImageNet, underperform models trained from scratch. Users that lack the resources to pretrain must use existing models with lower performance. This paper explores Hierarchical PreTraining …

Cited by 6 publications (6 citation statements) · References 37 publications
“…More recent studies in pre-training for remote sensing usually involve some pre-text tasks like the colorization of images [42], super-resolution of imagery [43], or classifying whether two patches overlap [44]. It is also possible to pre-pre-train a network on natural imagery before pre-training on aerial imagery in a second step [45].…”
Section: Pre-trained Models in Remote Sensing
confidence: 99%
“…In this sense, Foundation Models training is an approach that trades off the need for task-specific data with the need for large amounts of data at pretraining. This is leveraged in hierarchical self-supervised pretraining which consists of a sequence of self-supervised training steps on decreasing amounts of increasingly task-relevant data, so as to tune the trade-off between data quantity and quality in ways that best match the data availability [10], [11].…”
Section: Data Requirements
confidence: 99%
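The staged procedure this excerpt describes can be made concrete with a short sketch. What follows is a minimal illustration, not the cited authors' implementation: it assumes PyTorch, uses a SimCLR-style contrastive loss as a stand-in for whatever self-supervised objective a given pipeline actually uses, and the data loaders named in the usage comment (each yielding two augmented views per image) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2N, d)
    sim = z @ z.t() / temperature                                 # pairwise similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))                    # ignore self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
    return F.cross_entropy(sim, targets)                          # positive = the other view

def pretrain_stage(encoder, projector, loader, epochs, device, lr=1e-3):
    """One self-supervised stage: continue training the same encoder on one dataset."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=lr)
    encoder.train(); projector.train()
    for _ in range(epochs):
        for view1, view2 in loader:                               # two augmentations per image
            view1, view2 = view1.to(device), view2.to(device)
            loss = nt_xent(projector(encoder(view1)), projector(encoder(view2)))
            opt.zero_grad(); loss.backward(); opt.step()

def hierarchical_pretrain(encoder, projector, stages, device):
    """Run a sequence of stages on decreasing amounts of increasingly relevant data."""
    for loader, epochs in stages:
        pretrain_stage(encoder, projector, loader, epochs, device)
    return encoder

# Hypothetical usage: start from a generalist (ImageNet) checkpoint, then continue
# pretraining on progressively smaller, more task-relevant unlabeled datasets
# before the final supervised fine-tuning step.
#   encoder = torchvision.models.resnet50(weights="IMAGENET1K_V2")
#   encoder.fc = nn.Identity()                                    # keep the 2048-d backbone
#   projector = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 128))
#   stages = [(generic_loader, 5), (domain_loader, 20), (target_loader, 50)]
#   hierarchical_pretrain(encoder.to(device), projector.to(device), stages, device)
```

Each stage continues from the weights left by the previous one, so the later stages, run on smaller but more task-relevant datasets, only have to adapt a representation that is already useful rather than learn one from scratch.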
“…Self-supervised learning often involves various image restoration (e.g., inpainting [40], colorization [58], denoising [48]) and higher level prediction tasks like image orientation [19], context [14], temporal ordering [38], and cluster assignments [5]. The learned representations transfer well to image classification but the improvement is less significant for instance-level tasks, such as object detection and instance segmentation [24,41].…”
Section: Related Work
confidence: 99%
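Of the pretext tasks listed in this excerpt, the image-orientation task is the simplest to sketch. The code below is a generic illustration under assumed PyTorch conventions, not the exact formulation of any cited work: unlabeled images are rotated by 0, 90, 180, or 270 degrees, and a backbone plus linear head is trained to predict which rotation was applied; the backbone and its feature dimension are placeholders supplied by the caller.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotationPretext(nn.Module):
    """Wraps a backbone with a 4-way head that predicts the applied rotation."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone            # any feature extractor returning (N, feat_dim)
        self.head = nn.Linear(feat_dim, 4)  # classes: 0, 90, 180, 270 degrees

    def forward(self, x):
        return self.head(self.backbone(x))

def rotate_batch(images):
    """Build the four rotated copies of each (N, C, H, W) batch and matching labels."""
    views = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = [torch.full((images.size(0),), k, dtype=torch.long, device=images.device)
              for k in range(4)]
    return torch.cat(views), torch.cat(labels)

def pretext_step(model, images, optimizer):
    """One self-supervised step on a batch of unlabeled images."""
    views, labels = rotate_batch(images)
    loss = F.cross_entropy(model(views), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

After pretraining, the rotation head is discarded and the backbone's features are transferred, which is the setting where the excerpt notes the gains are larger for classification than for instance-level tasks.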