2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01234
Satellite Image Time Series Classification With Pixel-Set Encoders and Temporal Self-Attention

Cited by 78 publications (84 citation statements)
References 26 publications
“…Deep learning has been applied using convolutional neural networks (CNNs) to handle the temporal dimension [21,35]; recurrent neural network (RNN)-like models [36,37], including long short-term memory (LSTM) [33,38] and gated recurrent unit (GRU) networks; and strategies that combine CNNs with recurrent models [39,40], or ConvLSTM [41,42]. Recently, attention-based architectures have been proposed for SITS classification in the context of crop type mapping [22,43]. The work presented in [22] shows that attention-based mechanisms outperform CNNs but are on par with LSTMs on unprocessed data, e.g., cloudy optical SITS; however, when extensive data pre-processing is applied, results are comparable to random forests.…”
Section: Satellite Image Time Series Classification
confidence: 99%
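The attention-based mechanisms this statement refers to compute affinities between acquisition dates and pool the series into a fixed-size descriptor. The following is a minimal numpy sketch of single-head scaled dot-product self-attention over the time axis; the weight matrices are random placeholders for illustration only (in a real model they are learned), and the function name and shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def temporal_self_attention(x, d_k=None):
    """Single-head scaled dot-product self-attention over the time axis.

    x: (T, d) array -- one embedded pixel/parcel time series (T dates).
    Returns a (d_k,) descriptor obtained by mean-pooling attended features.
    NOTE: Wq/Wk/Wv are random here purely for illustration; in a trained
    model they are learned parameters.
    """
    T, d = x.shape
    d_k = d_k or d
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                 # (T, T) temporal affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over dates
    attended = weights @ V                          # (T, d_k)
    return attended.mean(axis=0)                    # pooled temporal descriptor

series = np.random.default_rng(1).standard_normal((24, 16))  # 24 dates, 16 features
desc = temporal_self_attention(series)
print(desc.shape)  # (16,)
```

Unlike an RNN, every pair of dates interacts directly, which is one reason such encoders handle irregular or cloudy optical series differently from LSTMs.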
“…Temporal relations between observed values in a time series are taken into account. Time series classification models for satellite data include 1D convolutional neural networks (1D-CNN) [8,18], recurrent neural networks (RNN) [45], and attention-based deep learning [46,47]. The sits package supports a set of 1D-CNN algorithms: TempCNN [8], ResNet [48], and InceptionTime [18].…”
Section: Training Machine Learning Models
confidence: 99%
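The 1D-CNN family mentioned above (TempCNN and relatives) slides convolutional filters along the temporal axis of a single pixel's band values. As a hedged illustration of the core operation only (not the sits package API or the TempCNN architecture), here is a minimal numpy 1D convolution over a multi-band time series, with random filters standing in for learned ones:

```python
import numpy as np

def conv1d_temporal(x, kernels, stride=1):
    """Valid 1D convolution along the time axis, followed by ReLU.

    x: (T, C) -- one pixel's time series, T dates, C spectral bands.
    kernels: (F, k, C) -- F filters of temporal width k (random here,
    purely for illustration; learned in an actual model).
    Returns (T - k + 1, F) feature maps.
    """
    T, C = x.shape
    F, k, _ = kernels.shape
    out_T = (T - k) // stride + 1
    out = np.empty((out_T, F))
    for t in range(out_T):
        window = x[t * stride:t * stride + k]           # (k, C) temporal window
        out[t] = np.tensordot(kernels, window,
                              axes=([1, 2], [0, 1]))    # one value per filter
    return np.maximum(out, 0)  # ReLU

rng = np.random.default_rng(0)
ts = rng.standard_normal((24, 10))          # 24 acquisition dates, 10 bands
filters = rng.standard_normal((32, 5, 10))  # 32 filters, kernel size 5
features = conv1d_temporal(ts, filters)
print(features.shape)  # (20, 32)
```

Stacking such layers and finishing with global pooling and a classifier gives a TempCNN-style pixel classifier.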
“…Our RF-based solutions are tested against several deep-learning approaches, using our own available implementations and prediction at the parcel level on the T31FM tile: recurrent neural networks (GRU, LSTM (Ienco et al., 2017), and ConvLSTM (Rußwurm and Körner, 2018), for Gated Recurrent Unit, Long Short-Term Memory, and Convolutional LSTM, respectively) and a hybrid spatio-temporal attention-based architecture (PSE-TAE, for Pixel-Set Encoder with Temporal Attention Encoder) exhibiting state-of-the-art results (Sainte Fare Garnot et al., 2020). We selected such methods since the temporal dimension matters more than the spatial one for Sentinel-based crop mapping (Sainte Fare Garnot et al., 2019) and since recurrent and attention-based mechanisms are superior to convolutional approaches (Rußwurm and Körner, 2020).…”
Section: Performance With Respect To Existing Solutions
confidence: 99%
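The pixel-set encoder (PSE) half of PSE-TAE embeds each sampled pixel of a parcel with a shared network and then pools over the set, so the descriptor size is independent of parcel size. A minimal numpy sketch of that idea, with a single random linear layer standing in for the learned per-pixel MLP and mean/std pooling as the set aggregation (a simplification, not the paper's exact architecture):

```python
import numpy as np

def pixel_set_encode(pixels, W, b):
    """Order-invariant encoding of a parcel's pixel set.

    pixels: (S, C) -- S pixels sampled from one parcel, C spectral bands.
    W, b: shared per-pixel linear layer (random here for illustration;
    learned in the actual model).
    Returns a (2*d,) parcel descriptor regardless of S.
    """
    h = np.maximum(pixels @ W + b, 0)   # (S, d) shared per-pixel embedding
    # Mean and std over the set: permutation-invariant and size-invariant.
    return np.concatenate([h.mean(axis=0), h.std(axis=0)])

rng = np.random.default_rng(0)
W, b = rng.standard_normal((10, 32)), np.zeros(32)
small = pixel_set_encode(rng.standard_normal((17, 10)), W, b)   # small parcel
large = pixel_set_encode(rng.standard_normal((250, 10)), W, b)  # large parcel
print(small.shape, large.shape)  # (64,) (64,)
```

Because both parcels map to the same 64-dimensional descriptor, the downstream temporal attention encoder can treat parcels of any size uniformly.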
“…The vast literature dealing with crop mapping has demonstrated the relevance of satellite image time series (SITS) both at the pixel and parcel levels (Belgiu and Csillik, 2018; Sitokonstantinou et al., 2018), especially with the joint exploitation of Synthetic Aperture Radar (SAR) and optical images (Veloso et al., 2017; Neetu and Ray, 2020). In particular, deep learning techniques have recently shown their suitability for extracting temporal information from multi-modal SITS (Zhao et al., 2020; Adrian, Sagan, and Maimaitijiang, 2021) and their high discrimination power for a large range of crop types (Kussul et al., 2017; Ji et al., 2018; Sainte Fare Garnot et al., 2020; Rußwurm and Körner, 2020). However, they have not yet been proven at country-wide scale.…”
Section: Introduction
confidence: 99%