2018 International Symposium ELMAR 2018
DOI: 10.23919/elmar.2018.8534634
Evaluation of Split-Brain Autoencoders for High-Resolution Remote Sensing Scene Classification

Cited by 9 publications (10 citation statements)
References 4 publications
“…One limitation of this architecture compared to other asymmetric Siamese methods is that it requires two different encoders, even at inference time, since each encoder is trained on different data channels. Based on their experiments on the RESISC-45 and AID datasets using the RGB and LAB color spaces, the authors show that the method can yield competitive results even with few unlabeled training images [54]. They anticipate high potential for split-brain AEs on multi-spectral remote sensing images in future work.…”
Section: Generative (mentioning)
confidence: 98%
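The citation statement above describes the key property of the split-brain setup: the channels are split into two groups, each sub-network is trained to predict the opposite group, and both encoders are needed at inference, where their features are concatenated. A minimal toy sketch of that structure, with linear maps standing in for the real convolutional encoders and all dimensions chosen for illustration only:

```python
import numpy as np

# Hypothetical minimal sketch of a split-brain autoencoder as described
# above: input channels are split into two groups (e.g. L vs. ab in LAB
# space) and each sub-network predicts the opposite group. Linear layers
# stand in for real conv encoders; shapes are illustrative.

rng = np.random.default_rng(0)

# Toy "images": batch of 8 samples with 4 channels, split 2 + 2
x = rng.standard_normal((8, 4))
xa, xb = x[:, :2], x[:, 2:]              # the two channel groups

Wa = rng.standard_normal((2, 3)) * 0.1   # encoder for group A
Wb = rng.standard_normal((2, 3)) * 0.1   # encoder for group B
Pa = rng.standard_normal((3, 2)) * 0.1   # head: A's features -> predict B
Pb = rng.standard_normal((3, 2)) * 0.1   # head: B's features -> predict A

feat_a = xa @ Wa
feat_b = xb @ Wb

# Cross-channel prediction losses (mean squared error): each branch is
# trained to reconstruct the channels it never sees directly
loss = np.mean((feat_a @ Pa - xb) ** 2) + np.mean((feat_b @ Pb - xa) ** 2)

# At inference BOTH encoders are required: the downstream representation
# is the concatenation of the two sub-networks' features
features = np.concatenate([feat_a, feat_b], axis=1)
print(features.shape)   # (8, 6)
```

The concatenation step makes the limitation noted in the statement concrete: neither encoder alone sees the full input, so both must be kept for downstream classification.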
“…Another early use of generative models in SSL applied to remote sensing is proposed in [54], where the authors evaluate a split-brain autoencoder for self-supervised image representation learning. In learning to reconstruct the input image, autoencoders discover relevant information about the data distribution.…”
Section: Generative (mentioning)
confidence: 99%
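The statement above rests on the basic autoencoder idea: minimizing reconstruction error forces the latent code to capture the structure of the data distribution. A tiny linear autoencoder trained by plain gradient descent illustrates this, with all dimensions and the learning rate chosen for illustration only:

```python
import numpy as np

# Illustrative sketch (not the cited paper's model): a linear autoencoder
# trained to reconstruct toy data that lies near a 2-D subspace of a
# 6-D space, showing that reconstruction training recovers structure.

rng = np.random.default_rng(2)

basis = rng.standard_normal((2, 6))
x = rng.standard_normal((64, 2)) @ basis + 0.01 * rng.standard_normal((64, 6))

We = rng.standard_normal((6, 2)) * 0.1   # encoder: 6 -> 2
Wd = rng.standard_normal((2, 6)) * 0.1   # decoder: 2 -> 6
lr = 0.05
for _ in range(500):
    h = x @ We                 # latent code
    err = h @ Wd - x           # reconstruction error
    gWd = h.T @ err / len(x)   # gradient of MSE w.r.t. decoder
    gWe = x.T @ (err @ Wd.T) / len(x)  # gradient w.r.t. encoder
    Wd -= lr * gWd
    We -= lr * gWe

final_mse = np.mean((x @ We @ Wd - x) ** 2)
init_mse = np.mean(x ** 2)     # error of the trivial zero reconstruction
print(final_mse < init_mse)
```

The 2-D bottleneck is what makes the learned code informative: the network cannot copy the input, so it must encode the directions along which the data actually varies.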
“…Again, they do not address the few-shot problem in their work. In [34], Stojnic et al. study the Contrastive Multiview Coding (CMC) method [35] for self-supervised pre-training. They analyze how the number and domain of the images used for self-supervised pre-training influence performance on downstream tasks.…”
Section: Related Work (mentioning)
confidence: 99%
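CMC, mentioned in the statement above, trains encoders so that embeddings of different views of the same sample (e.g. different color-channel groups) agree, using a contrastive (InfoNCE-style) loss. A minimal sketch of that loss with stand-in linear encoders, all shapes and the temperature being illustrative assumptions:

```python
import numpy as np

# Illustrative InfoNCE sketch in the spirit of Contrastive Multiview
# Coding: two views of each sample should embed closer to each other
# than to other samples' embeddings. Linear maps stand in for encoders.

rng = np.random.default_rng(1)

def l2norm(z):
    # Project embeddings onto the unit sphere, as contrastive losses
    # typically assume
    return z / np.linalg.norm(z, axis=1, keepdims=True)

x1 = rng.standard_normal((4, 5))                 # view 1 of 4 samples
x2 = x1 + 0.1 * rng.standard_normal((4, 5))      # correlated view 2
W1 = rng.standard_normal((5, 3))                 # encoder for view 1
W2 = rng.standard_normal((5, 3))                 # encoder for view 2
z1, z2 = l2norm(x1 @ W1), l2norm(x2 @ W2)

# InfoNCE: row i's positive is the diagonal entry (same sample, other
# view); every other entry in the row is a negative
tau = 0.5
logits = (z1 @ z2.T) / tau
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
nce_loss = -np.mean(np.diag(log_probs))
print(np.isfinite(nce_loss))
```

Minimizing this loss pulls matching views together and pushes non-matching pairs apart, which is the mechanism whose sensitivity to pre-training data size and domain the cited study analyzes.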
“…Lately, some research has been done on applying self-supervised learning to the analysis of remote sensing images. In [31,30] the authors analyzed the possibilities of using a split-brain autoencoder [44] for the analysis of aerial images. They analyzed the influence of the number of images used for self-supervised learning, as well as of the choice of color channels, on the results obtained on the downstream task of aerial image classification.…”
Section: Self-Supervised Learning in Remote Sensing (mentioning)
confidence: 99%
“…However, the application of self-supervised learning methods in remote sensing has not been studied much. Most existing applications either used small amounts of unlabeled training data, up to 50,000 images [31,30,35], or tested the learned representations on a small number of datasets [1,19,38].…”
Section: Introduction (mentioning)
confidence: 99%