2020
DOI: 10.1007/978-3-030-65742-0_12

Lightweight Temporal Self-attention for Classifying Satellite Images Time Series

Abstract: The increasing accessibility and precision of Earth observation satellite data offers considerable opportunities for industrial and state actors alike. This calls however for efficient methods able to process time-series on a global scale. Building on recent work employing multi-headed self-attention mechanisms to classify remote sensing time sequences, we propose a modification of the Temporal Attention Encoder of Garnot et al. [5]. In our network, the channels of the temporal inputs are distributed among sev…
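The modification the abstract sketches (splitting the input channels among several compact attention heads rather than giving each head the full feature vector) can be illustrated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the reference implementation: the hyperparameter names (d_model, n_head, d_k), the single learned query per head, and the per-head key projection follow our reading of the paper's description, and the positional encoding and output MLP used in the actual LTAE are omitted here.

```python
# Minimal sketch of channel-grouped temporal self-attention in the spirit of
# LTAE (Garnot & Landrieu, 2020). Names and details are assumptions, not the
# authors' code: each head attends over time using only its own channel slice.
import torch
import torch.nn as nn


class LightweightTemporalAttention(nn.Module):
    def __init__(self, d_model: int = 128, n_head: int = 16, d_k: int = 8):
        super().__init__()
        assert d_model % n_head == 0, "channels must split evenly across heads"
        self.n_head = n_head
        self.d_in = d_model // n_head  # channels handled by each head
        self.d_k = d_k
        # One learned "master" query vector per head (no query projection).
        self.query = nn.Parameter(torch.randn(n_head, d_k))
        # Key projection applied to each head's channel group.
        self.key = nn.Linear(self.d_in, d_k)
        self.scale = d_k ** 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, d_model) -- one feature vector per acquisition date.
        b, t, _ = x.shape
        # Distribute channels among heads: (batch, n_head, T, d_in).
        x = x.view(b, t, self.n_head, self.d_in).transpose(1, 2)
        k = self.key(x)                                    # (b, h, T, d_k)
        q = self.query.view(1, self.n_head, 1, self.d_k)   # broadcast query
        attn = (q @ k.transpose(-2, -1)) / self.scale      # (b, h, 1, T)
        attn = attn.softmax(dim=-1)                        # weights over time
        out = attn @ x                                     # (b, h, 1, d_in)
        # Concatenate the per-head temporal summaries back to d_model channels.
        return out.squeeze(2).reshape(b, -1)               # (b, d_model)


if __name__ == "__main__":
    x = torch.randn(4, 24, 128)  # 4 parcels, 24 dates, 128 channels
    print(LightweightTemporalAttention()(x).shape)  # torch.Size([4, 128])
```

Because each head sees only d_model / n_head channels, the key projection is much smaller than in a standard multi-head encoder, which is the source of the parameter savings the citing papers highlight below.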

Cited by 52 publications (32 citation statements) | References 11 publications
“…Multiple recent studies [6, 17-20] have solidified PSE+LTAE (Pixel-Set Encoder and Lightweight Temporal Attention Encoder) as the state of the art for crop-type classification. Furthermore, this network is particularly parsimonious in computation and memory usage, which makes it well suited for training on multi-year data.…”
Section: Single-Year Crop-Type Classification
confidence: 99%
“…Its architecture is inspired by set-encoding deep architectures [13, 37], and it dispenses with preprocessing parcels into image patches, saving memory and computation. The Temporal Attention Encoder (TAE) [6] and its parsimonious version, the Lightweight TAE (LTAE) [18], are temporal sequence encoders drawn from the language-processing literature [38] and adapted for processing SITS. Both networks can be used sequentially to map the sequence of observations x_i at year i to a learned yearly spatio-temporal descriptor e_i:…”
Section: Pixel-Set and Temporal Attention Encoders
confidence: 99%
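The excerpt is cut off just before the equation it introduces. As a hedged reconstruction of the composition it describes, in our own notation (x_i^(t) is the parcel's pixel set at acquisition date t of year i, and T_i is the number of dates that year; neither symbol is taken from the cited paper):

```latex
% Hedged reconstruction, not the cited paper's verbatim equation:
% PSE embeds the pixel set at each date, LTAE aggregates the sequence over time.
e_i = \mathrm{LTAE}\!\left(\mathrm{PSE}\bigl(x_i^{(1)}\bigr),\,\dots,\,\mathrm{PSE}\bigl(x_i^{(T_i)}\bigr)\right)
```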