2022
DOI: 10.3390/rs15010151

Multimodal and Multitemporal Land Use/Land Cover Semantic Segmentation on Sentinel-1 and Sentinel-2 Imagery: An Application on a MultiSenGE Dataset

Abstract: In the context of global change, producing up-to-date land use/land cover (LULC) maps is a major challenge for assessing pressures on natural areas. These maps also make it possible to follow the evolution of land cover and to quantify changes over time (such as urban sprawl), which is essential for a precise understanding of a given territory. Few studies have combined information from Sentinel-1 and Sentinel-2 imagery, yet merging radar and optical imagery has been shown to offer several benefits for a range of study cases, …

Cited by 9 publications (13 citation statements). References: 49 publications.

“…In previous work, the reference data [14], initially in 14 classes, was reclassified into 10 classes by merging the least represented classes (Table I). Indeed, these classes represented less than 0.1% of the total area of the region and were not spatially homogeneous over the territory, which causes problems for training and classification, even when applying a weighted loss.…”
Section: A. Datasets
confidence: 99%
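The citing work does not reproduce its preprocessing code in this excerpt; the following is a minimal sketch, assuming PyTorch, integer label masks, and a purely hypothetical 14-to-10 lookup table, of how under-represented classes can be merged and how a class-weighted cross-entropy loss can be configured.

```python
import torch
import torch.nn as nn

# Hypothetical lookup table: original class id -> merged class id.
# The actual 14-to-10 grouping used by the authors is not given in the excerpt.
CLASS_REMAP = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6,
               7: 7, 8: 8, 9: 9, 10: 9, 11: 9, 12: 9, 13: 9}

def remap_labels(mask: torch.Tensor) -> torch.Tensor:
    """Apply the lookup to an integer label mask of shape (H, W)."""
    out = mask.clone()
    for old_id, new_id in CLASS_REMAP.items():
        out[mask == old_id] = new_id
    return out

def weighted_ce(pixel_counts: torch.Tensor) -> nn.CrossEntropyLoss:
    """Inverse-frequency class weights, rescaled to sum to the number of classes."""
    freq = pixel_counts.float() / pixel_counts.sum()
    w = 1.0 / (freq + 1e-6)
    w = w * (len(pixel_counts) / w.sum())
    return nn.CrossEntropyLoss(weight=w)
```

As the statement notes, such weighting only partially compensates for classes that are both rare and spatially clustered, which is why the authors merged them instead.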
“…In previous work [14], it has been shown that the contribution of multitemporal and multimodal imagery improves UF semantic segmentation. Thus, a convolutional network was developed and trained on the MultiSenGE [15] dataset, which was built for the entire Grand-Est region in France.…”
Section: Introduction
confidence: 98%
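The network of [14] trained on MultiSenGE [15] is not specified in this excerpt; the sketch below only illustrates the general idea of a two-branch convolutional encoder that fuses Sentinel-1 (here assumed to have 2 polarisation bands) and Sentinel-2 (assumed 10 spectral bands) patches before a per-pixel classification head. All channel counts and layer sizes are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DualBranchSegmenter(nn.Module):
    """Toy two-branch encoder with channel-wise feature fusion for LULC segmentation."""
    def __init__(self, s1_bands: int = 2, s2_bands: int = 10, n_classes: int = 10):
        super().__init__()
        def branch(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.s1_encoder = branch(s1_bands)
        self.s2_encoder = branch(s2_bands)
        # Fuse the two modalities by concatenating features, then classify each pixel.
        self.head = nn.Conv2d(128, n_classes, kernel_size=1)

    def forward(self, s1: torch.Tensor, s2: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.s1_encoder(s1), self.s2_encoder(s2)], dim=1)
        return self.head(fused)  # (B, n_classes, H, W) logits

# Example: one 256x256 patch per modality.
model = DualBranchSegmenter()
logits = model(torch.randn(1, 2, 256, 256), torch.randn(1, 10, 256, 256))
```

Multitemporal input could be handled in the same spirit, for instance by stacking acquisition dates along the channel axis, but the excerpt does not say which strategy the authors used.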
“…The electronics and technology of both these platforms and their onboard sensors, as well as the numerous applications where remote sensing provides valuable information for decision-making, particularly the production of more precise land cover classifications, have advanced significantly in recent years [24,25]. Satellite images can now be obtained from a variety of platforms located across the globe [25]. The development of indicators to track and understand anthropogenic and natural processes has necessitated the use of high-resolution and frequently updated land cover maps [26].…”
Section: Introduction
confidence: 99%
“…The development of novel approaches for producing LULC maps based on classification techniques, and more specifically deep learning, has been made possible by the evolution of cloud computing [33]. For LULC mapping, many studies have employed deep learning methods, particularly convolutional neural networks (CNNs), either in conjunction with pixel categorization [34] or semantic segmentation [26,35,36].…”
Section: Introduction
confidence: 99%
“…CNN-based methods have been continuously proposed, such as the mature and popular VGG [12], ResNet [13], UNet [14], SegNet [15], and the DeepLab series [16,17,18,19]. Owing to their ability to learn discriminative features automatically, CNNs have been widely used in remote sensing image processing, including change detection [20,21], scene recognition [22,23], and land-use classification [24,25]. The Transformer [26] is a more recently introduced deep learning model that was first proposed for natural language processing (NLP) tasks.…”
Section: Introduction
confidence: 99%
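None of the cited architectures are reproduced here; the deliberately small PyTorch sketch below only illustrates the encoder-decoder pattern with a skip connection that UNet popularised: resolution is reduced by pooling, restored by a transposed convolution, and the encoder feature map is concatenated back in before the per-pixel classification layer. Layer widths are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal UNet-like encoder-decoder with a single skip connection."""
    def __init__(self, in_ch: int = 3, n_classes: int = 10):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumes even H and W so the upsampled map matches the encoder output.
        e = self.enc(x)                      # (B, 32, H, W)
        b = self.bottleneck(self.down(e))    # (B, 64, H/2, W/2)
        u = self.up(b)                       # (B, 32, H, W)
        d = self.dec(torch.cat([u, e], 1))   # skip connection, as in UNet
        return self.head(d)                  # per-pixel class logits
```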