Landslides in urban areas have been relatively well documented in landslide inventories despite issues in accuracy and completeness, e.g., the absence of small landslides. By contrast, less attention has been paid to landslides in sparsely populated areas in terms of their occurrences and locations. This study utilizes high-resolution, LiDAR-derived digital elevation models (DEMs) from two different times for landslide detection to (1) improve the localization and detection accuracy of landslide inventories, (2) minimize human intervention in the landslide detection process, and (3) identify landslides that cannot be easily documented in the current state of the practice. To achieve these goals, multiple preprocessing steps were used to ensure the spatial alignment of the multi-temporal DEMs. Map algebra was then used to calculate the vertical displacement for each cell and create a DEM of Difference (DoD) to obtain a quantitative estimation of ground deformations. Next, the elevation changes were filtered via an appropriate Level of Detection (LoD) threshold to mark potential landslide candidates. The landslide candidates were further assessed with the aid of customized topographic maps as auxiliary data and pattern recognition to distinguish landslides (true positive changes) from construction, erosion, and deposition (false positives). The results from the proposed method were compared with existing landslide inventories and reports to evaluate its performance. The new method was also validated with temporal high-resolution Google Earth images. The results showed the successful application of the method in landslide detection and mapping. Compared with traditional methods, the proposed method provides a semi-automatic way to obtain landslide inventories with publicly available yet underutilized DEM data, which can be valuable in preliminary analysis for landslide detection.
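The core DoD step described above — differencing two co-registered DEM rasters and masking changes below the LoD threshold — can be sketched as follows. This is a minimal illustration, not the study's actual code; the toy elevation grids, the function name, and the 0.5 m threshold are assumptions for demonstration only.

```python
import numpy as np

def dem_of_difference(dem_t1, dem_t2, lod=0.5):
    """Compute a DEM of Difference (DoD) and mask sub-LoD changes.

    dem_t1, dem_t2 : 2-D arrays of cell elevations (m) at two epochs,
                     assumed already co-registered (spatially aligned).
    lod            : Level of Detection threshold (m); cells with
                     |dz| < lod are treated as noise and set to NaN.
    Returns the raw DoD and the thresholded (significant) changes.
    """
    dod = dem_t2 - dem_t1  # negative = elevation loss, positive = gain
    significant = np.where(np.abs(dod) >= lod, dod, np.nan)
    return dod, significant

# Toy 3x3 grids: one cell loses 2 m (a potential landslide scar),
# an adjacent cell gains 1 m (possible deposition); the rest is noise.
t1 = np.array([[10.0, 10.1, 10.0],
               [10.2, 12.0, 10.0],
               [10.0, 10.0,  9.9]])
t2 = np.array([[10.1, 10.0, 10.0],
               [10.2, 10.0, 11.0],
               [10.0, 10.1,  9.9]])
dod, sig = dem_of_difference(t1, t2, lod=0.5)
# Only the -2 m and +1 m cells survive the LoD filter; they become
# landslide candidates for the subsequent pattern-recognition screening.
```

In practice the rasters would be read from LiDAR-derived GeoTIFFs, and the LoD would be set from the DEMs' combined vertical uncertainty rather than a fixed constant.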
This paper presents a comparative study between deep learning methods, a new category of slope stability analysis built upon recent advances in artificial intelligence, and conventional limit equilibrium analysis methods. For this purpose, computer code was developed to calculate the factor of safety (FS) using four limit equilibrium methods: Bishop's simplified method, the Fellenius method, Janbu's simplified method, and Janbu's corrected method. The code was verified against Slide2 by Rocscience. Subsequently, the average FS from the four methods was used to approximate the "true" FS of each slope. Using this code, a comprehensive dataset of slope images with wide ranges of geometries and soil properties was created, and the average FS values were used to label the images for two deep learning models: a multiclass classification model and a regression model. After training, the deep learning models were used to predict the FS of an independent set of slope images. Finally, the performance of the models was compared to that of the conventional methods. This study found that deep learning methods can reach accuracies as high as 99.71% while improving computational efficiency by more than 18 times compared with conventional methods.
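Of the four limit equilibrium methods listed, the Fellenius (ordinary) method of slices has the simplest closed form, FS = Σ(c·l + W·cosα·tanφ) / Σ(W·sinα), and serves to illustrate the kind of FS calculation being labeled. The sketch below is a generic textbook implementation under assumed inputs (per-slice weight, base inclination, and base length), not the paper's verified code.

```python
import math

def fellenius_fs(slices, c, phi_deg):
    """Factor of safety by the Fellenius (ordinary) method of slices.

    slices  : iterable of (W, alpha_deg, l) tuples per slice, where
              W is the slice weight (kN/m), alpha_deg the inclination
              of the slice base (deg), and l the base length (m).
    c       : soil cohesion (kPa); phi_deg : friction angle (deg).
    Returns FS = sum of resisting moments / sum of driving moments
    along the assumed circular slip surface.
    """
    phi = math.radians(phi_deg)
    resisting = sum(c * l + W * math.cos(math.radians(a)) * math.tan(phi)
                    for W, a, l in slices)
    driving = sum(W * math.sin(math.radians(a)) for W, a, l in slices)
    return resisting / driving

# Hypothetical two-slice slope in a purely cohesive soil (phi = 0):
# each slice weighs 100 kN/m, base inclined 30 deg, base length 2 m.
fs = fellenius_fs([(100.0, 30.0, 2.0), (100.0, 30.0, 2.0)],
                  c=10.0, phi_deg=0.0)
```

Bishop's simplified method replaces this direct sum with an iterative scheme (FS appears on both sides of its equation), which is part of why a trained network predicting FS directly from a slope image can be substantially faster than the conventional solvers.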