2021 IEEE International Conference on Autonomous Systems (ICAS)
DOI: 10.1109/icas49788.2021.9551129
General Frameworks for Anomaly Detection Explainability: Comparative Study

Cited by 6 publications (7 citation statements)
References 11 publications
“…The defects are annotated with manually created ground truth pixel maps, with binary indications of pixels that are part of the defect. The dataset has been previously used to evaluate feature relevance XAI approaches by Ravi et al. (2021), although their evaluations are limited to qualitative inspections of results. To instead generate quantitative results of XAI performance, we use the ground truth anomaly segmentation maps as ground truths for explanations.…”
Section: Methods
confidence: 99%
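The statement above describes scoring XAI quality quantitatively by treating the defect pixel maps as ground truth for explanations. A minimal sketch of one common way to do this — ranking pixels by attribution and computing a pixel-wise AUROC against the binary mask — is shown below; the function name and metric choice are illustrative assumptions, not the cited authors' exact protocol:

```python
import numpy as np

def explanation_auroc(heatmap, gt_mask):
    """Pixel-wise AUROC: how well attribution values rank true defect pixels.

    heatmap: 2-D float array of per-pixel attribution scores.
    gt_mask: 2-D binary array marking ground-truth defect pixels.
    """
    scores = heatmap.ravel()
    labels = gt_mask.ravel().astype(bool)
    order = np.argsort(-scores, kind="stable")   # highest attribution first
    labels = labels[order]
    # ROC curve points, prepending the (0, 0) origin.
    tpr = np.concatenate([[0.0], np.cumsum(labels) / max(labels.sum(), 1)])
    fpr = np.concatenate([[0.0], np.cumsum(~labels) / max((~labels).sum(), 1)])
    # Trapezoidal area under the ROC curve.
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2))
```

A perfect explanation (attribution identical to the mask) scores 1.0; an inverted one scores 0.0, so the metric discriminates explanation quality without any qualitative inspection.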
“…For the ERP dataset, Tritscher et al. (2022b) conduct a hyperparameter study of multiple anomaly detectors on the data, finding architectures that yield good results on the dataset. For our showcases, we select their second best performing model, the autoencoder neural network (Goodfellow et al., 2016) architecture, with their reported hyperparameters: they show that their best performing one-class support vector machine (Schölkopf et al., 2001) exhibits an erratic decision process that may influence a quantitative XAI evaluation, and autoencoder networks are commonly studied in the domain of explainable anomaly detection (Antwarg et al., 2021; Ravi et al., 2021; Müller et al., 2022).…”
Section: Methods
confidence: 99%
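The autoencoder detectors mentioned above score anomalies by reconstruction error: samples the model reconstructs poorly are flagged. A minimal numpy sketch of that scoring scheme — using a linear autoencoder (equivalent to PCA) rather than the cited deep architecture, and with function names of my own invention — might look like:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit the optimal linear autoencoder with a k-dimensional bottleneck.

    The top-k right singular vectors give the encoder; the decoder is its
    transpose (the closed-form optimum for a linear autoencoder).
    """
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]                    # encoder weights; decoder is W.T
    return mu, W

def anomaly_scores(X, mu, W):
    """Per-sample reconstruction MSE, used as the anomaly score."""
    Z = (X - mu) @ W.T            # encode into the bottleneck
    Xr = Z @ W + mu               # decode back to input space
    return np.mean((X - Xr) ** 2, axis=1)
```

Points near the training manifold reconstruct almost perfectly and score near zero, while off-manifold points incur large reconstruction error, which is the decision process the XAI methods above then try to explain.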