2018
DOI: 10.3390/rs10081235
Monitoring of an Indonesian Tropical Wetland by Machine Learning-Based Data Fusion of Passive and Active Microwave Sensors

Abstract: In this study, a novel data fusion approach was used to monitor the water-body extent in a tropical wetland (Lake Sentarum, Indonesia). Monitoring is required in the region to support the conservation of water resources and biodiversity. The developed approach, random forest database unmixing (RFDBUX), makes use of pixel-based random forest regression to overcome the limitations of the existing lookup-table-based approach (DBUX). The RFDBUX approach with passive microwave data (AMSR2) and active microwave data…
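The abstract describes pixel-based random forest regression as the core of RFDBUX. The sketch below illustrates that kind of passive/active microwave fusion step; the array names, feature counts, synthetic data, and the use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a pixel-based random-forest data-fusion step in the spirit
# of RFDBUX, as described in the abstract. Array names, shapes, and the choice
# of scikit-learn are assumptions for illustration, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Assumed toy data: low-resolution passive-microwave features (e.g., AMSR2
# brightness temperatures resampled to the target grid) and a high-resolution
# active-microwave target observed on a limited set of matched dates.
n_pixels, n_features = 5000, 6
X_train = rng.normal(size=(n_pixels, n_features))   # passive-microwave predictors
y_train = rng.normal(size=n_pixels)                 # active-microwave target

# Train the regression on matched pixels (dates when both sensors observed).
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(X_train, y_train)

# Predict a high-resolution-like target on a date when only the frequent,
# low-resolution passive sensor observed; thresholding the prediction would
# then yield a water/non-water map.
X_new = rng.normal(size=(n_pixels, n_features))
y_pred = rf.predict(X_new)
water_mask = y_pred < np.percentile(y_pred, 30)     # illustrative threshold only
```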

Cited by 9 publications (9 citation statements); references 41 publications.
“…Image fusion can be performed at pixel-level, feature-level (e.g., land-cover classes of interest), and decision-level (e.g., purpose-driven) [5,11] by considering the following image blending scenarios: (1) combining high- and low-spatial-resolution images from the same satellite system, e.g., 15 m panchromatic images with 30 m multispectral images from the Landsat satellite, or from different satellite systems, e.g., SPOT 10 m panchromatic with Landsat multispectral images at 30 m spatial resolution [22]; (2) combining optical and microwave remote sensing images [17,23–29]; (3) combining multispectral satellite imagery and Light Detection and Ranging (LiDAR) data [30]; (4) combining multispectral satellite imagery and hyperspectral data [31]; (5) combining high-resolution, low-frequency images with low-resolution, high-frequency images [32]; and (6) fusing microwave (passive) and microwave (active) sensors [33].…”
Section: Fusion Methods To Increase Spatiotemporal Resolution Of Satellite…
confidence: 99%
“…When the application requires it, calculation of indices is performed before spatiotemporal fusion [40]. A long revisit time is not suitable for monitoring seasonal vegetation phenology or rapid surface changes.…”
Section: Fusion Methods To Increase Spatiotemporal Resolution Of Satellite…
confidence: 99%
“…The trained random forest could predict MODIS-like images from the low-resolution images, even when the original MODIS was not obtained owing to cloud cover or lack of observation for other reasons. Popular ML methods such as the random forest [28] and support vector machine [54] techniques have been applied for downscaling satellite-derived data and have shown robust performance; however, the predicted maps tended to show lower variation ranges than the original datasets. This feature is often observed in data-fusion approaches, rendering the effective tracking of abrupt changes or extreme phenomena difficult [55,56].…”
Section: Machine-learning Algorithms
confidence: 99%
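The variance-compression issue noted above can be checked directly by comparing spread statistics of a predicted map against the reference data. The following sketch uses synthetic arrays and assumed variable names; it is not taken from the cited studies.

```python
# Small sketch of the "reduced variation range" check implied by the quote:
# regression-based fusion tends to compress the spread of predicted maps
# relative to the reference data. Data and variable names are assumptions.
import numpy as np

def variation_summary(arr):
    """Return spread statistics used to compare predicted vs. reference maps."""
    return {"std": float(np.std(arr)),
            "range": float(np.max(arr) - np.min(arr))}

rng = np.random.default_rng(1)
reference = rng.normal(loc=0.0, scale=1.0, size=10000)            # observed high-res values
predicted = 0.7 * reference + rng.normal(scale=0.1, size=10000)   # smoothed, RF-like output

print("reference:", variation_summary(reference))
print("predicted:", variation_summary(predicted))
# The predicted map typically shows a smaller std/range, which makes abrupt
# changes or extreme phenomena harder to track, as the citing paper notes.
```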
“…However, empirical approaches using machine learning (ML) have been used for a flexible fusion of datasets with highly different features [28,29]. The fundamental process involves ML training with matched pairs between different types of data and then using the training results to predict spatially high-resolution but temporally low-resolution data from counterpart data (temporally high-resolution but spatially low-resolution).…”
Section: Introduction
confidence: 99%
“…An appropriately tuned constraint enabled us to mitigate errors (Figure 5), thereby contributing to the robustness of the overall algorithm. The BULC-based sensor fusion also overcame the limitation of the approach by Mizuochi et al. [24,25] because it is not dependent on any past match-ups but is a best-effort update of land cover using the Bayesian framework.…”
Section: Accuracy Of The Integration Map
confidence: 99%
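The BULC-style update described here amounts to a per-pixel Bayesian revision of class probabilities with each new classification event, rather than a regression against past match-ups. The sketch below is a minimal illustration under assumed classes, priors, and confusion-matrix likelihoods; it is not the cited algorithm's code.

```python
# Hedged sketch of a Bayesian land-cover update in the spirit of the BULC-style
# fusion mentioned above: per-pixel class probabilities are updated with each
# new classification event instead of relying on past sensor match-ups. The
# class set, confusion matrix, and priors are illustrative assumptions.
import numpy as np

classes = ["water", "non-water"]

# Likelihood P(observed label | true class), e.g., from a validation confusion matrix.
likelihood = np.array([[0.85, 0.15],   # true water     -> observed water / non-water
                       [0.10, 0.90]])  # true non-water -> observed water / non-water

def bayes_update(prior, observed_idx):
    """Update per-pixel class probabilities given one observed class label."""
    posterior = prior * likelihood[:, observed_idx]
    return posterior / posterior.sum()

prior = np.array([0.5, 0.5])           # uninformative prior for one pixel
for obs in [0, 0, 1]:                  # observed labels: water, water, non-water
    prior = bayes_update(prior, obs)
    print(dict(zip(classes, np.round(prior, 3))))
```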