2022
DOI: 10.3390/rs14010211
A GIS-Based Landslide Susceptibility Mapping and Variable Importance Analysis Using Artificial Intelligent Training-Based Methods

Abstract: Landslides often cause significant casualties and economic losses, and therefore landslide susceptibility mapping (LSM) has become increasingly urgent and important. The potential of deep learning (DL) methods such as convolutional neural networks (CNN) trained on landslide causative factors has not yet been fully explored. The main target of this study is the investigation of a GIS-based LSM in Zanjan, Iran, and the identification of the most important causative factor of landslides in the case-study area. Different machine learning …

Cited by 47 publications (19 citation statements)
References 63 publications
“…The use of hidden layers increases model flexibility, though one hidden layer is usually sufficient to model continuous functions [84]; since an excessive number of hidden neurons causes overfitting, it is preferable to use the fewest hidden neurons that give adequate network performance [85]. Because trial-and-error methods are commonly used to determine the number of neurons in the hidden layer [86], the ANN in this research was implemented through an experimental procedure: several tests were run with different numbers of hidden neurons, and a hidden layer with three neurons gave the best network performance in terms of runtime and validation parameter values while avoiding non-convergence of the ANN. Increasing the number of hidden layers, or the number of neurons within this layer, led to problems such as excessive runtime or lack of convergence.…”
Section: Implementation of the ANN-MLP Algorithm
confidence: 99%
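The trial-and-error procedure described above can be sketched as a sweep over candidate hidden-layer sizes, scoring each trained network and keeping the best. This is a minimal illustration with a hand-rolled one-hidden-layer MLP on synthetic data; the data, candidate sizes, and training settings are all assumptions for demonstration, not the cited study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic binary targets from toy "causative factors" (illustrative data only)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(float)

def train_and_score(X, y, n_hidden, epochs=300, lr=0.5):
    """Train a one-hidden-layer sigmoid MLP by batch gradient descent;
    return training accuracy."""
    rng = np.random.default_rng(1)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                      # hidden activations
        p = sig(h @ W2 + b2).ravel()              # predicted probability
        g = (p - y) / len(y)                      # dL/dlogit for cross-entropy
        gh = (g[:, None] @ W2.T) * h * (1 - h)    # backprop through hidden layer
        W2 -= lr * (h.T @ g[:, None])
        b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gh)
        b1 -= lr * gh.sum(axis=0)
    p = sig(sig(X @ W1 + b1) @ W2 + b2).ravel()
    return np.mean((p > 0.5) == y)

# Trial-and-error sweep over candidate hidden-layer sizes
scores = {n: train_and_score(X, y, n) for n in (1, 3, 5, 10)}
best = max(scores, key=scores.get)
```

In practice one would also track runtime and a held-out validation score per candidate, as the excerpt describes, and prefer the smallest size whose performance is adequate.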
“…The most traditional way to understand how machine-learning models work is to list the variable-importance ranking (e.g., Band et al, 2020; Hosseini et al, 2020; Zhao et al, 2022b). Here, we also produce the same graphics in Figure 5 but use SHAP values to sort each predictor according to the impact it may have on the final susceptibility.…”
Section: Model Interpretation, 4.2.1 Global Interpretation
confidence: 99%
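The "traditional" variable-importance ranking the excerpt contrasts with SHAP can be illustrated with permutation importance: shuffle one predictor at a time and measure the drop in accuracy. (SHAP itself needs the `shap` library; this numpy-only sketch shows the simpler ranking idea on a made-up model and data, both of which are assumptions for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Three synthetic predictors; only the first two drive the label
X = rng.normal(size=(500, 3))
y = (2 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def accuracy(X, y):
    # Stand-in "trained model": a fixed linear rule matching the label-generating process
    return np.mean(((2 * X[:, 0] + 0.5 * X[:, 1]) > 0).astype(int) == y)

base = accuracy(X, y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature-target link
    importance.append(base - accuracy(Xp, y))  # accuracy drop = importance
```

As expected, the unused third predictor gets zero importance, while the strongly weighted first predictor ranks highest; SHAP refines this global ranking with per-sample attributions.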
“…The stepwise linear regression model (LSR) can select variables closely related to the response variable through significance testing, mitigating collinearity among the explanatory variables (Zhu et al, 2017). However, because variables with an insignificant influence on the dependent variable are discarded, prediction accuracy can be lower when forest AGB and the independent variables do not have a simple linear relationship (Yadav et al, 2021; Zhao et al, 2022).…”
Section: Introduction
confidence: 99%
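A forward stepwise selection of the kind the excerpt describes can be sketched as: at each step, add the candidate variable with the largest |t|-statistic, stopping when no candidate clears a significance threshold (|t| > 2 is used here as a rough p ≈ 0.05 proxy, an assumption of this sketch; the data, including a deliberately collinear fourth predictor, are synthetic).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=n)   # nearly collinear with feature 0
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

def t_stats(Xs, y):
    """OLS fit (with intercept); return coefficients and their t-statistics."""
    A = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    s2 = resid @ resid / (len(y) - A.shape[1])   # residual variance
    cov = s2 * np.linalg.inv(A.T @ A)
    return beta, beta / np.sqrt(np.diag(cov))

selected, remaining = [], list(range(X.shape[1]))
while remaining:
    # Try each remaining variable; keep the one with the largest |t| if significant
    trials = []
    for j in remaining:
        _, t = t_stats(X[:, selected + [j]], y)
        trials.append((abs(t[-1]), j))
    best_t, best_j = max(trials)
    if best_t < 2.0:
        break
    selected.append(best_j)
    remaining.remove(best_j)
```

Note how collinearity is handled implicitly: once one of the two collinear predictors is in the model, the other contributes almost no extra explanatory power and is typically rejected by the significance test.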
“…Many non-parametric models have been explored for forest AGB estimation, such as random forest (RF) (Yadav et al, 2021), k-nearest neighbours (kNN) (Wan et al, 2021; Andras et al, 2022; Beaudoin et al, 2022), support vector machine (SVM) (Mountrakis et al, 2010; Christoffer et al, 2013), and maximum entropy (MaxEnt) (Wang et al, 2022; Zhao et al, 2022). Although non-parametric models can provide an excellent fit, it remains difficult to correct the errors introduced by overestimation and underestimation.…”
Section: Introduction
confidence: 99%
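Of the non-parametric models listed, kNN is the simplest to illustrate: predict AGB at a query point as the average response of its k nearest training samples. This numpy-only sketch uses synthetic predictors and a made-up AGB-like response (all assumptions for illustration); averaging over neighbours also shows the smoothing that drives the over/underestimation the excerpt mentions, since extreme values are pulled toward the local mean.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic predictors (e.g., two spectral bands) and a smooth AGB-like response
X = rng.uniform(size=(200, 2))
y = 50 + 100 * X[:, 0] + 20 * np.sin(6 * X[:, 1])

def knn_predict(X_train, y_train, X_query, k=5):
    """k-nearest-neighbours regression: average the responses of the k
    closest training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train[None, :, :] - X_query[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]          # indices of the k nearest neighbours
    return y_train[idx].mean(axis=1)

# Predict at a few training locations as a sanity check
pred = knn_predict(X, y, X[:20])
```

RF, SVM, and MaxEnt replace the neighbour average with an ensemble of trees, a kernel machine, or an entropy-maximising distribution, but all share this non-parametric, fit-the-data character.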