2024
DOI: 10.1109/tii.2023.3240601
Explainable Artificial Intelligence for Fault Diagnosis of Industrial Processes

Cited by 12 publications (3 citation statements) · References 0 publications
“…Bi et al. [654] acknowledged the need to incorporate both human and machine intelligence when developing smart systems, emphasizing the role of interpretability in making humans trust machine intelligence. The authors identified three approaches to achieving interpretability: intrinsically interpretable models, such as Bayesian neural networks [533]; enhancing black-box models, like deep neural networks, with interpretability capabilities [327, 328, 666–668]; and employing model-agnostic methods [390, 669, 670] such as LIME (local interpretable model-agnostic explanations) [671] and SHAP (Shapley additive explanations) [672]. Several recent publications have surveyed interpretability techniques for machine learning models [673–679].…”
Section: Model Interpretability (mentioning)
confidence: 99%
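
To make the model-agnostic route named above concrete, here is a minimal sketch that attributes a black-box classifier's predictions with SHAP. The synthetic process data, the four-feature setup, and the random-forest model are illustrative assumptions, not the configuration of the cited works.

```python
# Minimal SHAP sketch for a black-box classifier (illustrative setup:
# synthetic "process" data and a random-forest stand-in model).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # 4 hypothetical process variables
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic fault label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])     # per-feature attributions
print(shap_values)
```

Each Shapley value quantifies how much a feature pushed one prediction away from the model's average output, which is what makes the method model-agnostic in spirit: only predictions, not model internals, are needed for the generic (kernel-based) variants.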
“…Togo et al. 9 provided an explainable framework for toxicity prediction. Jang et al. 10 augmented fault diagnosis of industrial processes with SHAP explanations, a method that has become very popular for explaining the predictions of complex machine learning models. Nguyen et al. 12 employed LIME for prediction of Parkinson's disease depression.…”
Section: Previous Work on Explainable AI (mentioning)
confidence: 99%
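
For comparison with SHAP, below is a minimal sketch of a local LIME explanation for a single prediction. The synthetic data, feature names, and logistic-regression model are illustrative assumptions, not the Parkinson's-depression setup of the cited study.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier
# by fitting a local, interpretable surrogate around that instance.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))      # synthetic, illustrative features
y = (X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["f0", "f1", "f2"],   # hypothetical names
    class_names=["normal", "fault"],
    mode="classification",
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())  # per-feature contributions for this one instance
```

Unlike SHAP's additive attributions, LIME perturbs the instance, queries the black-box model on the perturbed samples, and reports the weights of a simple local model, so its explanations are inherently local and approximate.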
“…Togo et al. provided an explainable framework for toxicity prediction. Jang et al. augmented fault diagnosis of industrial processes modeling with SHAP explanations. Fatahi et al. …”
Section: Explainable AI Model for Drop Size Estimation in an RDC (mentioning)
confidence: 99%