2022
DOI: 10.36001/phme.2022.v7i1.3336

Case-study Led Investigation of Explainable AI (XAI) to Support Deployment of Prognostics in the Industry

Abstract: Civil nuclear generation plant must maximise its operational uptime in order to maintain its viability. With aging plant and heavily regulated operating constraints, monitoring is commonplace, but identifying health indicators to pre-empt disruptive faults is challenging owing to the volumes of data involved. Machine learning (ML) models are increasingly deployed in prognostics and health management (PHM) systems in various industrial applications; however, many of these are black box models that provide good…


Cited by 4 publications (2 citation statements). References 11 publications.
“…However, unlike traditional heuristics and statistical models, interpreting these black-box AI and ML models is challenging and represents a significant concern. Hence, incorporating XAI approaches has become paramount as a way to ensure that advances in ML models remain explainable and transparent [14][15][16]. XAI strives to encompass particular motivations, such as accountability, fairness, privacy, reliability, transparency, and trust-building [17], in order to ensure the overarching goal of applying AI and ML technologies [18].…”
Section: Related Work
confidence: 99%
“…The use of XAI in prognostics has emerged as a noteworthy subject in recent years. For instance, in (Amin, Brown, Stephen, & McArthur, 2022), the authors used SHapley Additive exPlanations (SHAP) to explain prognostic models built from nuclear power station data. Similarly, in (Nor, Pedapati, Muhammad, & Leiva, 2022), the authors built Bayesian deep learning models and applied SHAP to determine the contribution of data from sensors.…”
Section: Introduction
confidence: 99%
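
To make the SHAP workflow referenced in these citation statements concrete, the sketch below fits a black-box regressor on sensor-style data and uses SHAP values to rank each sensor's contribution to the predictions. It is a minimal illustration only: the data, sensor names, and model choice are assumptions for this example, not taken from the cited papers.

```python
# Minimal sketch of the SHAP workflow described above: train a
# black-box prognostic regressor, then rank sensor contributions.
# Data, sensor names, and model are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical sensor readings (features) and a health indicator (target).
sensor_names = ["temperature", "vibration", "pressure", "flow_rate"]
X = rng.normal(size=(500, len(sensor_names)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A tree ensemble standing in for the black-box prognostic model.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles;
# each value is one feature's contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per sensor gives a global importance
# ranking: how much each sensor drives the model's output on average.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(sensor_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Averaging absolute SHAP values across a dataset, as in the last step, is one common way to report per-sensor contributions of the kind the cited works describe; per-sample SHAP values can equally be inspected to explain individual predictions.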