Published: 2017
DOI: 10.1016/j.ecolind.2016.09.029
Comparison of object-based and pixel-based Random Forest algorithm for wetland vegetation mapping using high spatial resolution GF-1 and SAR data


Cited by 165 publications (77 citation statements)
References: 41 publications
“…Therefore, we used VV polarization of the constructed SAR image to fuse with the WV-3 image. The image fusion was conducted by wavelet principal component analysis (W-PCA), which can provide the highest accuracy in land cover classification and vegetation mapping based on the use of optical and SAR data for information extraction [18,19]. The steps of the W-PCA fusion are: (i) apply PCA to the multispectral WV-3 image data and obtain the first principal component (PC1); (ii) match the histograms of the Sentinel-1 and PC1 image data; (iii) utilize a wavelet decomposition to merge Sentinel-1 images into the PC1 image; (iv) apply the inverse PCA transform so that the embedded Sentinel-1 information carried by the PC1 image can be integrated to obtain the fused image.…”
Section: Image Fusion
confidence: 99%
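The W-PCA workflow quoted above is a four-step recipe, so a small sketch may help make it concrete. The Python below is an illustrative implementation under assumed inputs: the co-registered multispectral array `ms` and SAR band `sar` are hypothetical placeholders, and the merge rule in step (iii), keeping PC1's approximation coefficients while injecting the SAR detail coefficients, is one common choice rather than necessarily the one used in the cited work.

```python
# Minimal sketch of the W-PCA fusion steps described above (not the cited
# authors' implementation). `ms` is a (rows, cols, bands) multispectral
# array and `sar` is a co-registered single-band SAR array on the same grid.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from skimage.exposure import match_histograms

def wpca_fuse(ms, sar, wavelet="haar", level=2):
    rows, cols, bands = ms.shape

    # (i) PCA on the multispectral bands; PC1 carries most of the variance
    pca = PCA(n_components=bands)
    pcs = pca.fit_transform(ms.reshape(-1, bands))
    pc1 = pcs[:, 0].reshape(rows, cols)

    # (ii) histogram-match the SAR band to PC1
    sar_matched = match_histograms(sar, pc1)

    # (iii) wavelet decomposition of both images; inject the SAR detail
    # coefficients into PC1 (one common merge rule; others exist)
    c_pc1 = pywt.wavedec2(pc1, wavelet, level=level)
    c_sar = pywt.wavedec2(sar_matched, wavelet, level=level)
    fused_coeffs = [c_pc1[0]] + list(c_sar[1:])
    pc1_fused = pywt.waverec2(fused_coeffs, wavelet)[:rows, :cols]

    # (iv) replace PC1 with the fused component and invert the PCA transform
    pcs[:, 0] = pc1_fused.ravel()
    return pca.inverse_transform(pcs).reshape(rows, cols, bands)
```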
“…We used McNemar's test for paired-sample nominal scale data (Agresti 2002) to assess whether statistically significant differences exist between the classifications. This test is suitable for assessing the performance of multiple classifications that use the same test and training samples (Foody 2004) and has been applied widely in thematic map comparison (Duro et al 2012; Fu et al 2017).…”
Section: Classification
confidence: 99%
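As a concrete illustration of the McNemar's test setup described above, the sketch below builds the 2x2 correct/incorrect agreement table for two classifications evaluated on the same test samples and runs the test with statsmodels. The labels and predictions are hypothetical toy data, not values from any cited study.

```python
# Illustrative sketch: McNemar's test on the agreement/disagreement table of
# two classifications assessed against the same reference samples.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

y_true = np.array([0, 1, 1, 2, 2, 0, 1, 2, 0, 1])   # hypothetical reference labels
pred_a = np.array([0, 1, 1, 2, 0, 0, 1, 2, 0, 2])   # e.g. object-based classification
pred_b = np.array([0, 1, 2, 2, 2, 0, 0, 2, 0, 1])   # e.g. pixel-based classification

a_correct = pred_a == y_true
b_correct = pred_b == y_true

# 2x2 table: rows = classifier A correct/incorrect, columns = classifier B
table = np.array([
    [np.sum(a_correct & b_correct),  np.sum(a_correct & ~b_correct)],
    [np.sum(~a_correct & b_correct), np.sum(~a_correct & ~b_correct)],
])

result = mcnemar(table, exact=True)  # exact binomial form for small counts
print(f"statistic={result.statistic:.3f}, p-value={result.pvalue:.3f}")
```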
“… Component substitution techniques such as principal component analysis (PCA) (Fu et al., ; Yonghong, ) and intensity‐hue‐saturation (IHS) transformation (Chen, Hepner, & Forster, ; Leung, Liu, & Zhang, ) are among the most widely used pixel‐fusion techniques (Pohl & Yen, ). During PCA fusion, the original pixel values extracted from the radar and multispectral images are used to define new axes along which data variability is maximised; the new, fused pixel values are essentially linear combinations of their position along these new axes (Amarsaikhan et al., ).…”
Section: Overview of Multispectral-Radar SRS Data Fusion Techniques
confidence: 99%
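A minimal sketch of the pixel-level PCA fusion idea described above: optical and radar pixel values are stacked and projected onto new, variance-maximising axes, so each fused band is a linear combination of the original values. The array shapes and random data here are hypothetical stand-ins, not from the cited papers.

```python
# Sketch of PCA-based pixel fusion: stack optical and radar bands, then keep
# the leading principal components as the fused layers.
import numpy as np
from sklearn.decomposition import PCA

rows, cols = 512, 512                 # hypothetical co-registered scene size
ms = np.random.rand(rows, cols, 4)    # stand-in for 4 multispectral bands
sar = np.random.rand(rows, cols, 1)   # stand-in for a single SAR band

stack = np.concatenate([ms, sar], axis=-1).reshape(-1, 5)

pca = PCA(n_components=3)             # retain the leading components as fused bands
fused = pca.fit_transform(stack).reshape(rows, cols, 3)

# Each fused pixel value is a weighted sum (linear combination) of the
# original optical and SAR values; the weights are the PCA loadings.
print(pca.components_)                # rows = fused bands, columns = input bands
```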
“…Some insights are provided by Fu et al (2017), who found that fusing ALOS PALSAR/RADARSAT-2 and multispectral imagery from GF-1 increased mapping accuracy of wetland vegetation types beyond that achieved using these data on their own. Similarly, when multispectral and radar imagery (Landsat TM5 and ALOS PALSAR respectively) were integrated in a classification tree modelling approach, misclassification of different types of wetland vegetation and the extent of standing …”
[Figure 3 of the citing paper: Spatial resolution and launch date of freely available SRS data with global coverage from active, long-term missions.]
confidence: 99%