2022 IEEE International Symposium on Advanced Control of Industrial Processes (AdCONIP)
DOI: 10.1109/adconip55568.2022.9894124
Explainable Fault Diagnosis Model using Stacked Autoencoder and Kernel SHAP

Cited by 5 publications (2 citation statements) · References 15 publications
“…Bi et al [654] acknowledged the need for incorporating both human and machine intelligence to develop smart systems, emphasizing the role of interpretability in making humans trust machine intelligence. The authors identified three approaches to achieving interpretability: intrinsically interpretable models, such as Bayesian neural networks [533]; enhancing black-box models, such as deep neural networks, with interpretability capabilities [327, 328, 666–668]; and employing model-agnostic methods [390, 669, 670] such as LIME (local interpretable model-agnostic explanations) [671] and SHAP (Shapley additive explanations) [672]. Several recent publications have surveyed interpretability techniques for machine learning models [673–679].…”
Section: Model Interpretability (mentioning)
confidence: 99%
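To make the model-agnostic route named in this statement concrete, the sketch below applies LIME to a generic black-box classifier. The classifier, the synthetic data, and the sensor/class names are illustrative assumptions; they are not taken from the cited survey or from the AdCONIP paper.

```python
# Hypothetical sketch: a model-agnostic local explanation with LIME.
# The RandomForest model, synthetic data, and feature/class names are
# assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))                          # synthetic process measurements
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)    # synthetic fault label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["sensor_1", "sensor_2", "sensor_3", "sensor_4"],
    class_names=["normal", "fault"],
    mode="classification",
)

# Local explanation: which features pushed this one sample toward "fault"?
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(exp.as_list())
```

Because LIME queries the model only through `predict_proba`, the same call works for any classifier, which is what makes the approach model-agnostic.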
“…The SHAP (SHapley Additive exPlanations) value [47] is a novel feature importance calculation method suitable for both regression and classification problems. It can represent a valid alternative to the most widely used feature selection methods, because it provides a local interpretation of the model on a single prediction as well as the global average behaviour of the model, in terms of the contribution of each feature to the outcome of the ML algorithm [48]. According to the existing literature [9, 49–51], the terms "interpretability" and "explainability" are often used synonymously, referring to models in which users can understand how the inputs are mapped into outputs. This is the case for techniques that quantify the contribution of each feature to the outcome, rank feature importance, and highlight the features with the greatest impact on the prediction.…”
mentioning
confidence: 99%
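A minimal sketch of the local/global distinction described in this excerpt, using Kernel SHAP (the explanation method named in the paper's title) on a stand-in classifier. The GradientBoostingClassifier, the synthetic data, and the sample sizes are assumptions for illustration, not the paper's stacked-autoencoder pipeline.

```python
# Minimal sketch: Kernel SHAP giving a local explanation of one prediction and a
# global feature ranking via mean |SHAP| values. Model and data are assumed
# placeholders, not the fault-diagnosis model from the paper.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                 # synthetic sensor readings
y = (X[:, 1] - X[:, 3] > 0).astype(int)       # synthetic fault label

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Explain the probability of the "fault" class; a single-output function keeps
# the returned SHAP arrays two-dimensional (samples x features).
f = lambda x: model.predict_proba(x)[:, 1]
background = shap.sample(X, 50)               # background data for Kernel SHAP
explainer = shap.KernelExplainer(f, background)

local_shap = explainer.shap_values(X[:1], nsamples=200)     # local: one prediction
global_shap = explainer.shap_values(X[:50], nsamples=200)   # many predictions
global_importance = np.abs(global_shap).mean(axis=0)        # global: mean |SHAP| per feature
print(local_shap, global_importance)
```

The first array explains a single prediction, while averaging absolute SHAP values over many samples yields the global feature ranking that the excerpt contrasts with the local view.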