2022
DOI: 10.1016/j.isprsjprs.2021.12.008

Hierarchical fusion of optical and dual-polarized SAR on impervious surface mapping at city scale

Cited by 27 publications (12 citation statements) · References 48 publications
“…Thus, additional information is required to improve the mapping accuracy of impervious surfaces. SAR data can provide complementary information for optical imagery owing to its all-day, all-weather capability at high spatial resolution and low cost [88][89][90]. Zhang et al. [80] combined multi-source, multi-sensor remote sensing datasets (i.e., Landsat ETM+, SPOT-5, ASAR, ALOS PALSAR, and TerraSAR-X) to estimate impervious surfaces in the Pearl River Delta.…”
Section: Cloud and Snow Contaminations (mentioning)
confidence: 99%
“…Moreover, SAR provides limited information compared with the rich spectral information of optical remote sensing. Therefore, researchers have proposed various methods for fusing optical and SAR data for ULC classification, primarily at three levels: pixel level, feature level, and decision level [11]. Pixel-level fusion refers to overlaying optical and SAR data at the pixel level, without any feature extraction [12].…”
Section: Introduction (mentioning)
confidence: 99%
“…The combination of features extracted from optical and SAR data is known as feature-level fusion [10,13]. Support vector machines (SVM), random forests (RF), and deep networks are common approaches for feature-level fusion [11,14]. Decision-level fusion refers to making decisions based on the classification results from both the optical and SAR sources.…”
Section: Introduction (mentioning)
confidence: 99%
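The feature-level fusion described in these excerpts can be sketched minimally: per-pixel optical and SAR features are concatenated into one vector, which is then fed to a classifier. The toy nearest-centroid classifier below stands in for the SVM/RF classifiers named in the text, and every pixel value, band choice, and class label is a synthetic assumption, not data from the cited studies.

```python
# Hedged sketch of feature-level fusion for impervious-surface classification.
# Optical features (e.g. band reflectances) and SAR features (e.g. VV/VH
# backscatter) are concatenated per pixel, then classified. All numbers below
# are synthetic placeholders, not values from the cited papers.

def fuse_features(optical, sar):
    """Concatenate optical and SAR feature vectors per pixel (feature-level fusion)."""
    return [o + s for o, s in zip(optical, sar)]

def nearest_centroid_fit(X, y):
    """Toy stand-in for SVM/RF: store the mean fused feature vector per class."""
    centroids = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign the class whose centroid is closest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: dist2(centroids[c], x))

# Synthetic pixels: [red, NIR] optical reflectances and [VV, VH] backscatter (dB).
optical = [[0.30, 0.25], [0.32, 0.27], [0.10, 0.45], [0.12, 0.48]]
sar     = [[-6.0, -12.0], [-5.5, -11.5], [-14.0, -20.0], [-13.5, -19.5]]
labels  = ["impervious", "impervious", "vegetation", "vegetation"]

X = fuse_features(optical, sar)           # each pixel -> one 4-D fused vector
model = nearest_centroid_fit(X, labels)
print(nearest_centroid_predict(model, [0.31, 0.26, -5.8, -11.8]))  # impervious
```

Pixel-level fusion would instead stack the raw co-registered bands before any feature extraction, and decision-level fusion would classify the optical and SAR inputs separately and merge the resulting label maps.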
“…For example, Zhang et al. [13] extracted UIS by fusing texture features of optical and SAR images and achieved better results than with optical data alone. To reduce the influence of shadows in Sentinel-2 imagery on UIS extraction accuracy, Sun et al. [14] fused the polarization features of Sentinel-1 with the multispectral features of Sentinel-2 and proposed a hierarchical UIS extraction framework. With the open availability of Sentinel-1 data, SAR has gradually become another important data source for UIS extraction, which has been verified in large-scale UIS extraction [15], [16].…”
(mentioning)
confidence: 99%