This study modeled urban growth in the Greater Cairo Region (GCR), one of the fastest-growing megacities in the world, using remote sensing and ancillary data. Three land use/land cover (LULC) maps (1984, 2003, and 2014) were produced from satellite images using Support Vector Machines (SVM). Land cover changes were then detected by applying a high-level mapping technique that combines binary (change/no-change) maps with a post-classification comparison. The spatial and temporal urban growth patterns were analyzed using selected statistical metrics computed with the FRAGSTATS software. Major transitions to urban were modeled to predict future scenarios for the year 2025 using the Land Change Modeler (LCM) embedded in the IDRISI software. The validated model results indicated that 14% of the vegetation and 4% of the desert present in 2014 will be urbanized by 2025. Urban areas within a 5-km buffer around the Great Pyramids, Islamic Cairo, and Al-Baron Palace were calculated, highlighting intense urbanization, especially around the Pyramids, rising from 28% in 2014 to a projected 40% in 2025. Knowing the current and projected urbanization in the GCR will help decision-makers adjust existing plans and develop new ones to achieve sustainable development of urban areas and to protect historical sites.
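For readers unfamiliar with the classification step, the following is a minimal illustrative sketch (not the authors' code) of per-pixel SVM classification with scikit-learn, assuming the satellite bands are already stacked into a NumPy array and labeled training pixels are available; all names and parameter values are hypothetical.

```python
# Illustrative sketch: per-pixel SVM classification of a stacked multispectral
# image into LULC classes. Assumes bands are loaded into a NumPy array and
# labeled training pixels are available; settings are not the authors' own.
import numpy as np
from sklearn.svm import SVC

def classify_lulc(image_bands, train_pixels, train_labels):
    """image_bands: (rows, cols, n_bands); train_pixels: (n, n_bands);
    train_labels: (n,) integer LULC codes (e.g., urban, vegetation, desert)."""
    rows, cols, n_bands = image_bands.shape
    svm = SVC(kernel="rbf", C=10.0, gamma="scale")      # common RBF-SVM settings
    svm.fit(train_pixels, train_labels)
    flat = image_bands.reshape(-1, n_bands)
    return svm.predict(flat).reshape(rows, cols)        # LULC map for one date

# A binary change/no-change map between two dates could then be derived
# pixel-wise, e.g. (lulc_1984 != lulc_2003).astype(np.uint8)
```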
Registering aerial images to enrich 3D Light Detection and Ranging (LiDAR) points with radiometric information can enhance object detection, scene classification, and semantic segmentation. However, airborne LiDAR data do not always come with on-board optical images collected during the same flight mission. Indirect geo-referencing can be adopted if ancillary imagery data are available. Nevertheless, automatic recognition of control primitives in the LiDAR and imagery datasets becomes challenging, especially when they are collected on different dates. This paper proposes a generic registration mechanism based on the Phase Congruency (PC) model and scene abstraction to overcome these challenges. The approach relies on a PC measure to compute the image moments that delineate the scene's edges. Potential candidate points are identified by thresholding the image moment values. A Shape Context Descriptor (SCD) is adopted to automatically pair symmetric candidate points and produce a final set of control points. Coordinate transformation parameters between the two datasets were estimated using a Least Squares (LS) adjustment for four registration models: first-order (affine), second-order, and third-order polynomials, and the Direct Linear Transform (DLT). Datasets covering different urban landscapes were used to examine the proposed workflow. The Root Mean Square Error (RMSE) of the registration is between one and two pixels. The proposed workflow is computationally efficient, especially with small datasets, and generic enough to register various imagery data and LiDAR point clouds.
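As an illustration of the least-squares step only (the PC and SCD stages are not shown), the sketch below estimates a first-order (affine) transform from paired control points and reports a checkpoint RMSE; the variable names and workflow details are assumptions, not the authors' implementation.

```python
# Minimal sketch of the LS adjustment for the affine registration model,
# assuming matched control points are already available from the PC/SCD stages.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (n, 2) arrays of matched control-point coordinates (n >= 3)."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])        # design matrix [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)         # (3, 2) affine coefficients
    return params

def checkpoint_rmse(params, src, dst):
    """RMSE over independent checkpoints, in the units of dst (e.g., pixels)."""
    pred = np.hstack([src, np.ones((src.shape[0], 1))]) @ params
    return float(np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1))))
```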
The World Health Organization has reported that urban residents are expected to account for 70% of the total world population by 2050. In the face of the challenges brought about by this demographic transition, there is an urgent need to improve the accuracy of urban land-use mapping to more efficiently inform urban planning processes. Decision-makers rely on accurate urban maps to properly assess current plans and to develop new ones. This study investigates the effect of including conventional spectral signatures acquired by different sensors on the classification of airborne LiDAR (Light Detection and Ranging) point clouds using multiple feature spaces. The proposed method applied three machine learning algorithms, ML (Maximum Likelihood), SVM (Support Vector Machines), and MLP (Multilayer Perceptron Neural Network), to classify the LiDAR point cloud of a residential urban area after geo-registering it to aerial photos. The overall classification accuracy exceeded 97% with height as the only geometric feature in the classification space. Misclassifications occurred among classes due to the independent acquisition of the aerial and LiDAR data, as well as shadow and orthorectification artifacts in the aerial images. Nevertheless, the outcomes are promising: they surpassed those achieved with large geometric feature spaces, the approach is computationally reasonable, and it integrates radiometric properties from affordable sensors.
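A minimal sketch of the MLP variant is given below, assuming each geo-registered LiDAR point carries a height value plus RGB values sampled from the aerial photo; the feature layout, network size, and split ratio are illustrative assumptions rather than the authors' settings.

```python
# Illustrative sketch: point-wise MLP classification using height as the only
# geometric feature plus imagery-derived RGB. Array shapes and hyperparameters
# are assumptions for demonstration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def classify_points(features, labels):
    """features: (n_points, 4) -> [height, red, green, blue]; labels: (n_points,)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, stratify=labels, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    mlp.fit(X_tr, y_tr)
    return mlp, accuracy_score(y_te, mlp.predict(X_te))     # overall accuracy
```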
In light of the ongoing urban sprawl reported in recent studies, accurate urban mapping is essential for assessing the current status and developing new policies to overcome various social, environmental, and economic consequences. Integrating imagery and LiDAR data enriches remotely sensed data with radiometric and geometric characteristics, respectively, for a precise segregation of different urban features. This study integrated aerial and LiDAR images using point primitives, obtained by running the Phase Congruency model as an image filter to detect edges and corners. The main objective is to study the effect of applying the filter at different spatial resolutions on the registration accuracy and processing time. The detected edge/corner points common to both datasets were identified as candidate points. The Shape Context Descriptor method paired up candidate points into final points based on a minimum correlation of 95%. Affine, second- and third-order polynomial, and Direct Linear Transformation models were applied for the image registration using the two sets of final points. The models were solved using Least Squares adjustments and validated with a set of 55 checkpoints. It was observed that, as the spatial resolution decreased, the registration accuracy did not vary significantly, while the consistency between the model development and model validation accuracies improved, especially with the third-order polynomial model. At the same time, the number of candidate points decreased and, consequently, the processing time declined significantly. The 3D LiDAR points were visualised using the Red, Green, and Blue radiometric values inherited from the aerial photo. The qualitative inspection was very satisfactory, especially when examining the scene's fine details. Despite the interactive step in determining the candidate points, the proposed procedure overcomes the dissimilarity between the datasets in acquisition technique and time, and widens the tolerance for accepting candidate points by including points that are not traditionally considered (i.e., road intersections).
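The final colorization step can be illustrated as follows: LiDAR planimetric coordinates are projected into image pixel coordinates with a fitted model (an affine transform is used here for brevity) and RGB values are inherited by nearest-pixel lookup. The coefficient matrix, image array, and axis ordering are assumptions for illustration, not details taken from the study.

```python
# Sketch of transferring RGB from a registered aerial photo to LiDAR points,
# assuming an affine transform fitted elsewhere maps LiDAR X/Y to pixel coords.
import numpy as np

def colorize_points(points_xy, affine_params, rgb_image):
    """points_xy: (n, 2) LiDAR planimetric coords; affine_params: (3, 2) from a
    least-squares fit; rgb_image: (rows, cols, 3) aerial photo."""
    pix = np.hstack([points_xy, np.ones((points_xy.shape[0], 1))]) @ affine_params
    cols = np.clip(np.rint(pix[:, 0]).astype(int), 0, rgb_image.shape[1] - 1)
    rows = np.clip(np.rint(pix[:, 1]).astype(int), 0, rgb_image.shape[0] - 1)
    return rgb_image[rows, cols]      # (n, 3) RGB inherited per LiDAR point
```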
A complex pattern of urban demographic transition has been taking shape since the onset of the COVID-19 pandemic. The long-standing rural-to-urban route of population migration that has propelled waves of massive urbanization over the decades is increasingly being juxtaposed with a reverse movement, as the pandemic drives urban dwellers to suburban communities. The changing dynamics of the flow of residents to and from urban areas underscore the necessity of comprehensive urban land-use mapping for urban planning, management, and assessment. These maps are essential for anticipating the rapidly evolving demands of the urban populace and mitigating the environmental and social consequences of uncontrolled urban expansion. The integration of light detection and ranging (LiDAR) and imagery data provides an opportunity for urban planning projects to take advantage of their complementary geometric and radiometric characteristics, respectively, with a potential increase in urban mapping accuracies. We enhance the color-based segmentation algorithm for object-based classification of multispectral LiDAR point clouds fused with very high-resolution imagery data acquired over a residential urban study area. We propose a multilevel classification using multilayer perceptron neural networks, fed with vectors of geometric and spectral features structured into different classification scenarios. After investigating all classification scenarios, the proposed method achieves an overall mapping accuracy exceeding 98% by combining the original and calculated feature vectors with their output space projected by principal components analysis. This combination also eliminates some misclassifications among classes. We used splits of training, validation, and testing subsets, together with k-fold cross-validation, to quantitatively assess the classification scenarios. The proposed work improves the color-based segmentation algorithm to fit object-based classification applications and examines multiple classification scenarios. The presented scenarios achieve superior urban mapping accuracies, and the various feature spaces suggest the best urban mapping applications given the characteristics of the available data. © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
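One of the classification scenarios can be sketched as follows, assuming per-object feature vectors from the segmentation are available: the original features are concatenated with their PCA projection and a multilayer perceptron is scored with k-fold cross-validation. The component count, network size, and scaling step are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch of one scenario: original + PCA-projected feature space
# evaluated with k-fold cross-validation. Shapes and hyperparameters are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_scenario(features, labels, n_components=5, k=5):
    """features: (n_segments, n_features) geometric + spectral vector per object."""
    projected = PCA(n_components=n_components).fit_transform(features)
    combined = np.hstack([features, projected])          # original + projected space
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=800, random_state=0))
    return cross_val_score(clf, combined, labels, cv=k)   # k-fold accuracies
```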