2023
DOI: 10.3390/land12020420
Application of Explainable Artificial Intelligence (XAI) in Urban Growth Modeling: A Case Study of Seoul Metropolitan Area, Korea

Abstract: Unplanned and rapid urban growth requires the reckless expansion of infrastructure including water, sewage, energy, and transportation facilities, and thus causes environmental problems such as deterioration of old towns, reduction of open spaces, and air pollution. To alleviate and prevent such problems induced by urban growth, the accurate prediction and management of urban expansion are crucial. In this context, this study aims at modeling and predicting urban expansion in the Seoul metropolitan area (SMA), Korea…

Cited by 14 publications (9 citation statements)
References 78 publications
“…Its automated workflows save computational time and resources, allowing for a more robust and comprehensive exploration of the model space. Furthermore, PyCaret's compatibility with multiple machine learning algorithms and its seamless integration with other Python libraries, such as scikit-learn, XGBoost and LightGBM, offer a high level of flexibility and customization, as demonstrated in previous research [41].…”
Section: Discussion
confidence: 95%
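As a rough illustration of the automated workflow this statement refers to, the sketch below compares candidate classifiers with PyCaret's functional API. It is a minimal example under assumptions: the input file name, the DataFrame, and the binary target column "urban_growth" are hypothetical placeholders, not taken from the cited study.

```python
# Minimal sketch of a PyCaret model-comparison workflow (classification).
# File name and target column are hypothetical placeholders.
import pandas as pd
from pycaret.classification import setup, compare_models, tune_model

df = pd.read_csv("growth_drivers.csv")

# setup() handles preprocessing (encoding, imputation, train/test split)
setup(data=df, target="urban_growth", session_id=42)

# compare_models() cross-validates the available algorithms, including
# scikit-learn, XGBoost, and LightGBM estimators, and returns the best one
best = compare_models()

# optional hyperparameter tuning of the selected estimator
best = tune_model(best)
```

The single setup/compare/tune chain is what makes the "automated workflow" point: model selection across several libraries is handled by one consistent interface rather than per-library boilerplate.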
“…Shapley additive explanations (SHAP) [71,72] is a method that provides explanations for the predictions made by ML models. It assigns a contribution value to each feature in the input data to explain the model’s predictions.…”
Section: Methods
confidence: 99%
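The per-feature attribution idea described in this statement can be sketched as follows, assuming a fitted tree-based classifier on synthetic data; the feature names ("driver_0", …) are placeholders rather than the study's actual growth drivers.

```python
# Minimal sketch of SHAP feature attributions for a tree-based classifier.
# Synthetic data and feature names are placeholders.
import pandas as pd
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"driver_{i}" for i in range(8)])

model = LGBMClassifier().fit(X, y)

# TreeExplainer assigns each feature a per-prediction contribution
# (SHAP value) relative to the model's expected output
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global summary: which features contribute most, and in which direction
shap.summary_plot(shap_values, X)
```

TreeExplainer is used here because Shapley values can be computed efficiently for tree ensembles; model classes without a dedicated explainer would need a model-agnostic alternative such as KernelExplainer.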
“…In this study, shallow algorithms not only facilitate easier explanations for research participants but also ensure that the data analysis remains accessible to researchers. As artificial intelligence becomes increasingly integrated into research and policy making, the emphasis on the explainability of these algorithms grows [78,79]. Notably, the trade-off between model accuracy and interpretability in AI has been a focal point in recent research, with a survey paper offering an in-depth analysis of explainable AI methodologies and suggesting future research avenues to optimize this balance [80].…”
Section: Methods
confidence: 99%