In this study, a classification and performance evaluation framework for the recognition of urban patterns in medium-resolution (Landsat ETM, TM and MSS) and very high resolution (WorldView-2, QuickBird, Ikonos) multi-spectral satellite images is presented. The study aims to explore the potential of machine learning algorithms in the context of object-based image analysis and to thoroughly test the algorithms' performance under varying conditions in order to optimize their usage for urban pattern recognition tasks. Four classification algorithms, Normal Bayes, K Nearest Neighbors, Random Trees and Support Vector Machines, which represent different concepts in machine learning (probabilistic, nearest-neighbor, tree-based and function-based), have been selected and implemented on a free and open-source basis. Particular focus is given to assessing the generalization ability of the machine learning algorithms and the transferability of trained learning machines between different image types and image scenes. Moreover, the influence of the number and choice of training data, the influence of the size and composition of the feature vector, and the effect of image segmentation on the classification accuracy are evaluated.
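The abstract does not reproduce the implementation; as a minimal sketch, the four algorithm families map onto scikit-learn estimators (one possible open-source choice, not necessarily the one used in the study). The feature and label files below are hypothetical placeholders for per-object feature vectors and class labels produced by the segmentation step.

```python
# Minimal sketch (not the paper's code): comparing the four classifier
# families on object-based features using scikit-learn analogues.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB          # "Normal Bayes" (probabilistic)
from sklearn.neighbors import KNeighborsClassifier  # K Nearest Neighbors
from sklearn.ensemble import RandomForestClassifier # "Random Trees" (tree-based)
from sklearn.svm import SVC                         # Support Vector Machine (function-based)

X = np.load("object_features.npy")  # hypothetical: n_objects x n_features
y = np.load("object_labels.npy")    # hypothetical: urban pattern class per object

classifiers = {
    "Normal Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Trees": RandomForestClassifier(n_estimators=100),
    "SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Cross-validated accuracy is only one of the evaluation axes the study describes; varying the training set size and the feature vector composition would wrap this loop in further experiments.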
We propose an integrated approach to estimating building inventory for seismic vulnerability assessment, which can be applied to different urban environments and efficiently scaled depending on the desired level of detail. The approach employs a novel multi-source method for evaluating structural vulnerability-related building features based on satellite remote sensing and ground-based omnidirectional imaging. It aims to provide a comparatively cost- and time-efficient way of capturing inventory data over large areas. The latest image processing algorithms and computer vision techniques are applied to multiple imaging sources within the framework of an integrated sampling scheme, where each imaging source and technique is used to infer specific, scale-dependent information. Globally available low-cost data sources are preferred, and the tools are being developed on an open-source basis to allow for a high degree of transferability and usability. An easily deployable omnidirectional camera system is introduced for ground-based data capturing. After a general description of the approach and the developed tools and techniques, preliminary results from a first application to our study area, Bishkek, Kyrgyzstan, are presented.
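As an illustration of what such a multi-source inventory might look like in code, the sketch below tags each building attribute with the imaging source it was inferred from; all field names and values are hypothetical and not taken from the paper.

```python
# Illustrative only: one way to structure a multi-source building inventory
# record in which satellite-derived and ground-derived attributes coexist.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class BuildingRecord:
    building_id: str
    footprint_area_m2: float                   # e.g. derived from satellite imagery
    height_m: Optional[float] = None           # e.g. from omnidirectional ground imaging
    n_storeys: Optional[int] = None
    lateral_load_system: Optional[str] = None  # structural vulnerability-related feature
    sources: Dict[str, str] = field(default_factory=dict)  # attribute -> imaging source

# Attributes are filled in at different stages of the sampling scheme:
record = BuildingRecord("bishkek-000123", footprint_area_m2=240.0)
record.n_storeys = 5
record.sources["n_storeys"] = "omnidirectional ground imagery"
print(record)
```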
Cloud and cloud shadow segmentation is a crucial pre-processing step for any application that uses multispectral satellite images. In particular, disaster-related applications (e.g., flood monitoring or rapid damage mapping), which are highly time- and data-critical, require methods that produce accurate cloud and cloud shadow masks in a short time while being able to adapt to large variations in the target domain (induced by atmospheric conditions, different sensors, scene properties, etc.). In this study, we propose a data-driven approach to semantic segmentation of cloud and cloud shadow in single-date images based on a modified U-Net convolutional neural network that aims to fulfil these requirements. We train the network on a global database of Landsat OLI images for the segmentation of five classes ("shadow", "cloud", "water", "land" and "snow/ice"). We compare the results to state-of-the-art methods, demonstrate the model's generalization ability across multiple satellite sensors (Landsat TM, Landsat ETM+, Landsat OLI and Sentinel-2) and show the influence of different training strategies and spectral band combinations on the performance of the segmentation. Our method consistently outperforms Fmask and a traditional Random Forest classifier on a globally distributed multi-sensor test dataset in terms of accuracy, Cohen's Kappa coefficient, Dice coefficient and inference speed. The results indicate that a reduced feature space composed solely of the red, green, blue and near-infrared bands already produces good results for all tested sensors. If available, adding shortwave-infrared bands can increase the accuracy. Contrast and brightness augmentations of the training data further improve the segmentation performance. The best performing U-Net model achieves an accuracy of 0.89, a Kappa of 0.82 and a Dice coefficient of 0.85, while running inference over 896 test image tiles at 44.8 seconds/megapixel (2.8 seconds/megapixel on GPU). The Random Forest classifier reaches an accuracy of 0.79, a Kappa of 0.65 and a Dice coefficient of 0.74 with an inference time of 3.9 seconds/megapixel (on CPU) on the same training and testing data. The rule-based Fmask method takes significantly longer (277.8 seconds/megapixel) and produces results with an accuracy of 0.75, a Kappa of 0.60 and a Dice coefficient of 0.72.
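The paper's exact architectural modifications are not given here; the following is a compact U-Net-style sketch in PyTorch whose input depth is configurable, matching the finding that a red/green/blue/NIR stack (4 bands) already works well and that SWIR bands (6 bands in total) can be added when available. Layer widths and depth are illustrative, not the paper's configuration.

```python
# Minimal U-Net-style segmentation sketch: encoder-decoder with skip
# connections, configurable input bands, 5 output classes.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # two 3x3 convolutions with batch norm and ReLU
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_bands=4, n_classes=5):
        super().__init__()
        self.enc1 = conv_block(in_bands, 32)
        self.enc2 = conv_block(32, 64)
        self.bottom = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 skip channels + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # shape: (batch, 5, H, W)

# 4-band RGB+NIR input; use in_bands=6 to append the two SWIR bands
logits = MiniUNet(in_bands=4)(torch.randn(1, 4, 256, 256))
```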
Synthetic Aperture Radar (SAR) observations are widely used in emergency response for flood mapping and monitoring. However, current operational services mainly focus on floods in rural areas, while flooded urban areas are less well covered. In practice, urban flood mapping is challenging due to the complicated backscattering mechanisms in urban environments, and information beyond SAR intensity is required. This paper introduces an unsupervised method for flood detection in urban areas that synergistically uses SAR intensity and interferometric coherence within a Bayesian network fusion framework. It leverages multi-temporal intensity and coherence conjunctively to extract flood information across varying flooded landscapes. The proposed method is tested on the 2017 Houston (US) flood event with Sentinel-1 data and the 2015 Joso (Japan) flood event with ALOS-2/PALSAR-2 data. The flood maps produced by the fusion of intensity and coherence, and by intensity alone, are validated by comparison against high-resolution aerial photographs. The results show an overall accuracy of 94.5% (93.7%) and a kappa coefficient of 0.68 (0.60) for the Houston case, and an overall accuracy of 89.6% (86.0%) and a kappa coefficient of 0.72 (0.61) for the Joso case, with the fusion of intensity and coherence (intensity only). The experiments demonstrate that coherence provides valuable information in addition to intensity in urban flood mapping and that the proposed method could be a useful tool for urban flood mapping tasks.
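As a simplified illustration of the fusion idea (not the paper's full Bayesian network), a per-pixel naive Bayes combination of intensity and coherence evidence could look like the sketch below; the likelihood maps and the flood prior are hypothetical inputs, e.g. from class-conditional densities fitted to intensity decrease and coherence loss.

```python
# Illustrative per-pixel Bayesian fusion under a conditional-independence
# assumption: P(flood | intensity, coherence) via Bayes' rule.
import numpy as np

def flood_posterior(p_int_flood, p_int_dry, p_coh_flood, p_coh_dry, prior=0.1):
    """All inputs are per-pixel likelihood maps (hypothetical placeholders)."""
    num = prior * p_int_flood * p_coh_flood
    den = num + (1.0 - prior) * p_int_dry * p_coh_dry
    return num / np.clip(den, 1e-12, None)  # posterior flood probability

# Random stand-ins for the four likelihood maps of a 512x512 tile:
shape = (512, 512)
post = flood_posterior(*(np.random.rand(*shape) for _ in range(4)))
flood_mask = post > 0.5  # unsupervised decision by thresholding the posterior
```

Dropping the coherence terms reduces this to intensity-only detection, which mirrors the intensity-alone baseline that the fusion is compared against.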