2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)
DOI: 10.1109/etfa45728.2021.9613467
Interpretable Machine Learning: A brief survey from the predictive maintenance perspective

Cited by 59 publications (28 citation statements)
References 81 publications
“…Besides the challenges, characteristics, tools, solutions, adoption, and recommendations in these papers, shown in the systematic map in Figure 4, interpretable and explainable machine learning models are deficient [65], [66], which is notable in SME scenarios. For example, [67] proposed Explainable Artificial Intelligence (XAI) for remaining-useful-life prediction of turbofan engines.…”
Section: Best Practices (RQ4)
confidence: 97%
“…However, while these surveys are general-purpose, we focus on explainers for time-series classification problems. We note that some XAI surveys cover not only machine learning but also social studies [22], [23], recommendation systems [24], model-agents [25], and domain-specific applications such as health and medicine [26] or predictive maintenance [27].…”
Section: Related Work
confidence: 99%
“…Easy-to-use explainers are desirable. Some XAI methods that explain black-box models might be viewed as black boxes themselves, as pointed out in previous works [27]. Explaining complex ML models calls for sophisticated and often complicated XAI methods.…”
Section: Domain-specific Explanations For Specific Applications
confidence: 99%
“…Post-hoc explanations require that a low-dimensional and, to some extent, local representation of the learned behavior can be created without too much loss of information, for example one that can be perceived visually. Significant research gaps therefore remain in PHM on interpretable machine learning, especially for models in time-series analysis and prediction applications (Vollert et al., 2021).…”
Section: Concepts For the Integration Of Cross-application Knowledge ...
confidence: 99%
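The "low-dimensional, local representation" mentioned in the statement above is the idea behind post-hoc local surrogate explainers (LIME-style). The following is a minimal sketch, not the method of any surveyed paper: the black-box function, the perturbation scale, and all names here are illustrative assumptions. A black-box prediction is explained at one instance by fitting a linear surrogate to the model's outputs on perturbed samples around that instance; the surrogate's coefficients serve as local feature importances.

```python
import numpy as np

def black_box(X):
    # Hypothetical stand-in for an opaque model: nonlinear,
    # dominated locally by feature 0 near the origin.
    return np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2

def local_explanation(x, n_samples=500, scale=0.1, seed=0):
    """Fit a linear surrogate to black_box in a neighbourhood of x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance in a small local neighbourhood.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = black_box(Z)
    # Least-squares linear surrogate: y ≈ (Z - x) @ w + b.
    A = np.hstack([Z - x, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # per-feature local importance weights

w = local_explanation(np.array([0.0, 0.0]))
# Near the origin, the slope of sin(x0) is ~1 while 0.1*x1**2 is locally
# flat, so feature 0 dominates the local explanation.
```

The surrogate is deliberately low-dimensional and interpretable (a linear model), which is what makes the explanation human-readable; the trade-off is that it is only faithful in the sampled neighbourhood.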