2024
DOI: 10.1109/jstars.2023.3333969
S1S2-Water: A Global Dataset for Semantic Segmentation of Water Bodies From Sentinel-1 and Sentinel-2 Satellite Images

Marc Wieland,
Florian Fichtner,
Sandro Martinis
et al.

Abstract: This study introduces the S1S2-Water dataset, a global reference dataset for training, validation, and testing of convolutional neural networks for semantic segmentation of surface water bodies in publicly available Sentinel-1 and Sentinel-2 satellite images. The dataset consists of 65 triplets of Sentinel-1 and Sentinel-2 images with quality-checked binary water masks. Samples are drawn globally on the basis of the Sentinel-2 tile grid (100 x 100 km) under consideration of predominant land cover and availability o…
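The abstract describes the dataset as image triplets (Sentinel-1, Sentinel-2, binary water mask). The sketch below illustrates how one such triplet might be read into arrays for CNN training; the file names, directory layout, and band counts are assumptions for illustration, not the dataset's actual specification.

```python
# Minimal sketch: load one hypothetical Sentinel-1 / Sentinel-2 / water-mask triplet.
# File names and band layout are illustrative assumptions, not the S1S2-Water spec.
import numpy as np
import rasterio

def load_triplet(s1_path, s2_path, mask_path):
    """Read a Sentinel-1 image, a Sentinel-2 image, and a binary water mask."""
    with rasterio.open(s1_path) as src:
        s1 = src.read().astype("float32")       # e.g. (2, H, W): VV, VH backscatter
    with rasterio.open(s2_path) as src:
        s2 = src.read().astype("float32")       # e.g. (N, H, W): multispectral bands
    with rasterio.open(mask_path) as src:
        mask = src.read(1).astype("uint8")      # (H, W): 0 = no water, 1 = water
    return s1, s2, mask

# Hypothetical usage for one sample tile:
s1, s2, mask = load_triplet("sample_s1.tif", "sample_s2.tif", "sample_mask.tif")
print(s1.shape, s2.shape, mask.shape, np.unique(mask))
```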


Cited by 10 publications (10 citation statements). References: 38 publications.
“…They are routinely adopted for assessing classification capabilities of flooding detection algorithms, regardless of the methodology adopted. Indeed, the performance of probabilistic and fuzzy methods can be assessed using such metrics once binary classification maps are derived by assigning a probability threshold, typically set at 0.5, as it is performed, e.g., in [186]. This procedure is typically referred to as defuzzification in the context of fuzzy methods.…”
Section: Validation Strategies and Map Quality Indicators (mentioning)
Confidence: 99%
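The citation statement above describes deriving binary classification maps from probabilistic or fuzzy outputs by applying a fixed probability threshold (0.5), i.e. defuzzification. A minimal sketch of that step, with illustrative array values, is shown below; it is not the specific implementation evaluated in [186].

```python
# Minimal sketch of defuzzification: per-pixel water probabilities or fuzzy
# memberships are converted to a binary map with a fixed threshold (0.5),
# after which standard binary classification metrics can be computed.
import numpy as np

def defuzzify(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert per-pixel probabilities/memberships into a binary water mask."""
    return (prob_map >= threshold).astype(np.uint8)

# Hypothetical example: a 3x3 probability map
prob_map = np.array([[0.10, 0.60, 0.90],
                     [0.40, 0.50, 0.20],
                     [0.70, 0.05, 0.55]])
binary_map = defuzzify(prob_map)
print(binary_map)  # 1 where probability >= 0.5, else 0
```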
“…Consequently, strict time requirements may take precedence over accuracy. Computational complexity can be evaluated via the algorithm throughput measured as the average number of processed pixels per second [186].…”
Section: Validation Strategies and Map Quality Indicators (mentioning)
Confidence: 99%
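This statement defines throughput as the average number of processed pixels per second. A minimal sketch of that measurement follows; the classifier timed here is a placeholder threshold, not the algorithm benchmarked in [186].

```python
# Minimal sketch of the throughput measure: pixels processed per second
# for a given classification routine.
import time
import numpy as np

def throughput_pixels_per_second(classify, image: np.ndarray) -> float:
    """Time one classification call and return pixels processed per second."""
    start = time.perf_counter()
    classify(image)
    elapsed = time.perf_counter() - start
    return image.size / elapsed

# Hypothetical example: threshold a random "probability" image
image = np.random.rand(1024, 1024)
rate = throughput_pixels_per_second(lambda img: img >= 0.5, image)
print(f"throughput: {rate:.0f} pixels/s")
```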