2022
DOI: 10.1016/j.cscm.2022.e01059
A novel approach to explain the black-box nature of machine learning in compressive strength predictions of concrete using Shapley additive explanations (SHAP)

Cited by 105 publications (53 citation statements)
References 77 publications
“…Although ML has provided important advances in diagnosing autism, considerable challenges must be addressed. Many classification methods lack interpretability, which is especially disadvantageous for understanding medical data [28,29]. Also, according to Table I [25,27], small data sets are quite common [30-33], which might cause unreliable results.…”
Section: Introduction
confidence: 99%
“…Many researchers have implemented this technique to interpret the black-box nature of ML-based models. Ekanayake et al. implemented a decision tree, adaptive boosting (AdaBoost), and extreme gradient boosting (XGBoost) within the SHAP framework. Zaki et al. used SHAP to study the optical properties of glass by applying an ML model.…”
Section: Introduction
confidence: 99%
“…Ekanayake et al. implemented a decision tree, adaptive boosting (AdaBoost), and extreme gradient boosting (XGBoost) within the SHAP framework [38]. Zaki et al. used SHAP to study the optical properties of glass by applying an ML model [39]. Onsree et al. investigated the effect of features on the accuracy of an ML model using SHAP.…”
Section: Introduction
confidence: 99%
“…Because brain data are characterized by high complexity and highly correlated brain regions, ML algorithms have been widely used as an important tool capable of detecting acute and permanent abnormalities in the brain [61-63]. On the other hand, ML's lack of interpretability and black-box nature is an especially disadvantageous limitation when it comes to understanding medical data [64,65]. In recent years, new techniques have emerged to help interpret machine learning results.…”
Section: Introduction
confidence: 99%