2021
DOI: 10.1609/aaai.v35i17.17728

Accelerating Ecological Sciences from Above: Spatial Contrastive Learning for Remote Sensing

Abstract: The rise of neural networks has opened the door for automatic analysis of remote sensing data. A challenge to using this machinery for computational sustainability is the necessity of massive labeled data sets, which can be cost-prohibitive for many non-profit organizations. The primary motivation for this work is one such problem: the efficient management of invasive species -- invading flora and fauna that are estimated to cause damages in the billions of dollars annually. As an ongoing collaboration with th…

Cited by 7 publications (2 citation statements) | References 34 publications
“…Contrastive learning, which learns data representations by contrasting similar and dissimilar data samples, has received growing attention in computer vision. Such techniques usually leverage a contrastive loss to guide the data encoder to pull together similar samples in an embedding space, which has been shown to facilitate downstream learning in many applications (Zhao et al 2021;Bengar et al 2021;Bjorck et al 2021), especially when data labels are unavailable or scarce. To date, most contrastive learning approaches are designed in unsupervised settings (Gutmann and Hyvärinen 2010;Sohn 2016;Oord, Li, and Vinyals 2018;Hjelm et al 2018;Wu et al 2018;Bachman, Hjelm, and Buchwalter 2019;He et al 2020;Chen et al 2020).…”
Section: Literature Review
confidence: 99%
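The statement above summarizes the standard contrastive recipe: an encoder maps two views of the same sample to nearby points in an embedding space while pushing other samples apart. As an illustration only (a minimal sketch, not the cited paper's exact formulation; the function name, temperature value, and batch layout below are assumptions), an InfoNCE-style loss in PyTorch could look like:

```python
# Minimal InfoNCE-style contrastive loss sketch (PyTorch).
# Illustrative only: names and defaults are ours, not taken from the cited works.
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (N, D) embeddings of two views of the same N samples (positive pairs)."""
    z_a = F.normalize(z_a, dim=1)                 # project embeddings onto the unit hypersphere
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) scaled cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # positives sit on the diagonal
    # Cross-entropy pulls each positive pair together and pushes apart the other N-1 in-batch negatives.
    return F.cross_entropy(logits, targets)
```

In practice the two views are produced by augmenting the same input twice and encoding both with a shared network; the loss is then averaged over the batch.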
“…In designing contrastive pretext tasks, remote sensing researchers built positive pairs by leveraging spatial [80,81,82,83,84,85,86,87,88,89,90], temporal [91,92,93],…”
Section: Remote Sensing Representations
confidence: 99%
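The statement above points to spatial (and temporal) pretext tasks: in remote sensing, two patches taken from nearby locations can be treated as a positive pair. As a hedged sketch of that idea only (not the design of any specific cited work; the tile format, patch size, and jitter radius are assumptions), pairing overlapping crops of one tile might look like:

```python
# Sketch of a spatial pretext task: patches cropped near the same location form a positive pair.
# Assumptions (not from the cited works): tiles are HxW(xC) numpy arrays; a small fixed jitter radius.
import numpy as np

def spatial_positive_pair(tile: np.ndarray, patch: int = 64, max_offset: int = 16, rng=None):
    """Return two overlapping patches from one tile; spatial proximity makes them a positive pair."""
    rng = rng or np.random.default_rng()
    h, w = tile.shape[:2]
    y = rng.integers(0, h - patch - max_offset)    # anchor crop location
    x = rng.integers(0, w - patch - max_offset)
    dy, dx = rng.integers(0, max_offset + 1, size=2)  # small spatial jitter for the second view
    view_a = tile[y:y + patch, x:x + patch]
    view_b = tile[y + dy:y + dy + patch, x + dx:x + dx + patch]
    return view_a, view_b
```

The two views would then be fed through the encoder and scored with a contrastive loss such as the one sketched earlier.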