2022
DOI: 10.1016/j.eswa.2021.116100
Rule extraction in unsupervised anomaly detection for model explainability: Application to OneClass SVM

Cited by 37 publications (5 citation statements)
References 21 publications
“…These requirements aim to check the complexity of interaction between end-users and the suggested explainable approaches and to capture end-user expectations. The operational aspect covers the interaction with the end-user, the trade-off between model explanation and prediction, and the user awareness needed to use the explanatory power of the methodology effectively (Barbado et al., 2022).…”
Section: Fact Sheets Analysis: Benchmarking of Two Methods
confidence: 99%
“…A second category of anomaly explanation methods groups approaches that additionally associate the responsible features with the values they take, as in [2] for instance. The latter can be identified by rules expressed as conjunctions of predicates, so that explanations take a disjunctive normal form.…”
Section: Anomaly Explanation
confidence: 99%
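The explanation format described in this snippet — a disjunction of conjunctions of feature-threshold predicates — can be sketched in plain Python. This is a minimal illustration of the DNF rule idea only: the z-score detector, the threshold value, and the rule formatting below are assumptions for demonstration, not the algorithm of the cited paper.

```python
def zscore_anomalies(data, threshold=2.0):
    """Flag rows whose z-score on any feature exceeds `threshold`.

    A deliberately simple stand-in detector; the cited work uses
    One-Class SVM, which is not reproduced here.
    """
    n = len(data)
    dims = len(data[0])
    means = [sum(row[d] for row in data) / n for d in range(dims)]
    stds = [
        (sum((row[d] - means[d]) ** 2 for row in data) / n) ** 0.5 or 1.0
        for d in range(dims)
    ]
    flagged = [
        i for i, row in enumerate(data)
        if any(abs((row[d] - means[d]) / stds[d]) > threshold for d in range(dims))
    ]
    return flagged, means, stds


def explain_as_dnf(data, flagged, means, stds, threshold=2.0):
    """Build one conjunction of predicates per anomaly; the full
    explanation is the disjunction (OR) of those conjunctions."""
    rules = []
    for i in flagged:
        conj = []
        for d, v in enumerate(data[i]):
            z = (v - means[d]) / stds[d]
            if z > threshold:
                conj.append(f"x{d} > {means[d] + threshold * stds[d]:.2f}")
            elif z < -threshold:
                conj.append(f"x{d} < {means[d] - threshold * stds[d]:.2f}")
        rules.append(" AND ".join(conj))
    return " OR ".join(f"({r})" for r in rules)
```

For a dataset with one extreme point, `explain_as_dnf` returns a single parenthesized conjunction such as `(x0 > 91.25)`; with several anomalies, the conjunctions are OR-ed together, which is exactly the disjunctive normal form the snippet refers to.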
“…Recently, a few studies have discussed interpretation methods for unsupervised learning or anomaly detection. First, some studies develop global interpretation through model distillation [49] and rule extraction [5], as well as interpretation for clustering models [25], which are beyond the scope of this study. As for local interpretation of anomaly detection, in addition to COIN [31] and CADE [58], which are used as baselines, most other works simply adapt existing supervised methods to unsupervised learning.…”
Section: Related Work
confidence: 99%