2020
DOI: 10.1007/978-3-030-65965-3_27
A Ranking Stability Measure for Quantifying the Robustness of Anomaly Detection Methods

Abstract: Anomaly detection attempts to learn models from data that can detect anomalous examples in the data. However, naturally occurring variations in the data impact the model that is learned and thus which examples it will predict to be anomalies. Ideally, an anomaly detection method should be robust to such small changes in the data. Hence, this paper introduces a ranking stability measure that quantifies the robustness of any anomaly detector's predictions by looking at how consistently it ranks examples in terms…
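The abstract describes the measure in terms of how consistently a detector ranks examples when the data varies slightly. The sketch below illustrates that general idea only, not the paper's exact definition: it refits an off-the-shelf detector (IsolationForest, used here purely as an example) on random subsamples and reports the average pairwise Spearman correlation between the resulting anomaly rankings. The function name, subsampling scheme, and parameters are illustrative assumptions.

# Illustrative sketch only: approximates the general idea of measuring how
# consistently a detector ranks examples under small data perturbations.
# It is not the exact stability measure defined in the paper; the detector,
# perturbation scheme, and rank-correlation choice are placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import IsolationForest

def ranking_stability(X, n_repeats=10, subsample=0.8, seed=0):
    """Average pairwise Spearman correlation between the anomaly rankings
    produced by detectors fit on random subsamples of X."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    score_lists = []
    for _ in range(n_repeats):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        det = IsolationForest(random_state=int(rng.integers(1_000_000)))
        det.fit(X[idx])
        # score_samples: higher means more normal, so negate to get anomaly scores
        score_lists.append(-det.score_samples(X))
    corrs = []
    for i in range(n_repeats):
        for j in range(i + 1, n_repeats):
            rho, _ = spearmanr(score_lists[i], score_lists[j])
            corrs.append(rho)
    return float(np.mean(corrs))  # close to 1.0 indicates a stable (robust) ranking

# Toy usage: Gaussian data with a handful of shifted points as anomalies.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[:10] += 6.0
print(f"ranking stability (mean pairwise Spearman): {ranking_stability(X):.3f}")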

Cited by 4 publications (1 citation statement)
References 18 publications
“…Although calibration usually requires either labeled examples or a known contamination factor, Kriegel et al (2011) introduce UNIFY, a method to obtain calibrated probabilities from anomaly scores without such requirements. In the absence of labeled data, Marques et al (2020) develop an internal measure to evaluate the quality of an anomaly detector, while Schubert et al (2012) and Perini et al (2020) develop rank similarity measures to compare the anomaly rankings of different detectors. However, none of these works propose a method to find an appropriate decision threshold for the anomaly scores in an (unlabeled) dataset.…”
Section: Related Work
confidence: 99%