2022
DOI: 10.1038/s41598-022-15618-4

A comparison of explainable artificial intelligence methods in the phase classification of multi-principal element alloys

Abstract: We demonstrate the capabilities of two model-agnostic local post-hoc model interpretability methods, namely breakDown (BD) and Shapley (SHAP), to explain the predictions of a black-box classification learning model that establishes a quantitative relationship between chemical composition and multi-principal element alloy (MPEA) phase formation. We trained an ensemble of support vector machines using a dataset with 1,821 instances, 12 features with low pair-wise correlation, and seven phase labels. Feature con…
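The abstract describes SHAP-style attributions for an SVM phase classifier. A minimal sketch of the idea, using a synthetic dataset and a single small SVC in place of the paper's MPEA data and SVM ensemble (all names and settings here are illustrative assumptions, not the authors' pipeline), computes exact Shapley values for one instance by enumerating feature subsets, with "absent" features imputed from a background (mean) reference point:

```python
import numpy as np
from itertools import combinations
from math import factorial
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Illustrative stand-ins: synthetic data and a single SVC, not the
# paper's 12-feature MPEA dataset or its SVM ensemble.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = SVC(probability=True, random_state=0).fit(X, y)

background = X.mean(axis=0)   # reference values for "absent" features
x = X[0]                      # instance to explain

def f(z):
    """Model output to attribute: probability of class 1."""
    return model.predict_proba(z.reshape(1, -1))[0, 1]

def v(subset):
    """Value function: features in `subset` from x, rest from background."""
    z = background.copy()
    for j in subset:
        z[j] = x[j]
    return f(z)

n = len(x)
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            # Shapley kernel weight |S|!(n-|S|-1)!/n!
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi[i] += w * (v(S + (i,)) - v(S))

# Efficiency property: attributions sum to f(x) - f(background).
gap = phi.sum() - (f(x) - f(background))
```

Exact enumeration is exponential in the number of features; libraries such as the `shap` package approximate these values (e.g. via KernelSHAP sampling) for larger feature sets.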

Cited by 19 publications (5 citation statements); References 55 publications.
“…Insights into feature effects on model predictions from the Shapley additive explanation (SHAP) analysis. The CP plots provide a counterfactual interpretation to quantify feature effects and offer clear visualization to investigate the relationships between model responses and features (62). However, the approach is limited to displaying information for one feature at a time.…”
Section: Ceteris-paribus Plot
Mentioning confidence: 99%
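The ceteris-paribus (CP) profile quoted above varies one feature over a grid while holding the instance's other features fixed, tracing the model's response curve. A minimal sketch under assumed synthetic data and a small SVC (the dataset, feature index, and grid size are illustrative, not from the cited work):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Illustrative stand-ins for the cited setting.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = SVC(probability=True, random_state=0).fit(X, y)

x = X[0]        # instance to profile
feature = 2     # the single feature being varied (illustrative choice)

# Grid over the observed range of that feature; all other features
# stay fixed at the instance's values ("ceteris paribus").
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 25)
profiles = np.tile(x, (len(grid), 1))
profiles[:, feature] = grid

cp_curve = model.predict_proba(profiles)[:, 1]  # response vs. feature value
```

As the quote notes, each CP curve shows only one feature at a time; feature interactions require separate profiles or other tools.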
“…With the rise of explainability, ML research looks beyond simply explaining the machine learning model. Several papers in the last year have covered use cases combining machine learning explainability and clustering to find relationships between instances [3,10]. Based on a COVID-19 dataset, [3] tries to identify better clusters based on KernelSHAP values.…”
Section: Related Work
Mentioning confidence: 99%
“…This work uses machine learning methods to model cause-effect relationships in the context of decoring inorganically bound sand cores. There are various examples of gaining insights from data in many technology areas, for example, material analysis [7], laser beam welding [8], or injection molding [9]. The basic idea is to train machine learning models using annotated data.…”
Section: Machine Learning Models
Mentioning confidence: 99%