Proceedings of the 15th ACM International Conference on Distributed and Event-Based Systems 2021
DOI: 10.1145/3465480.3468292

Explainable anomaly detection on high-dimensional time series data

Abstract: As enterprise information systems are collecting event streams from various sources, the ability of a system to automatically detect anomalous events and further provide human-readable explanations is of paramount importance. In this paper, we present an approach to integrated anomaly detection (AD) and explanation discovery (ED), which leverages state-of-the-art Deep Learning (DL) techniques for anomaly detection while recovering human-readable explanations for detected anomalies. At the …
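
The abstract describes coupling a DL-based anomaly detector with an explanation-discovery step. As a generic illustration only, and not the architecture proposed in this paper, the sketch below assumes an autoencoder over fixed-size windows of a multivariate series and uses per-feature reconstruction error as a crude, human-readable explanation signal; all class names, parameters, and the top-3 readout are hypothetical.

```python
# Minimal sketch (not the paper's architecture): an autoencoder flags
# anomalous time-series windows by reconstruction error, and the
# per-feature error is reported as a simple explanation.
import torch
import torch.nn as nn

class WindowAutoencoder(nn.Module):
    def __init__(self, n_features: int, window: int, latent: int = 8):
        super().__init__()
        d = n_features * window
        self.encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, x):                       # x: (batch, window, n_features)
        flat = x.flatten(1)
        return self.decoder(self.encoder(flat)).view_as(x)

def detect_and_explain(model, x, threshold, feature_names):
    """Return (is_anomaly, top_features) for each window in the batch."""
    with torch.no_grad():
        recon = model(x)
    err = (x - recon) ** 2                      # (batch, window, n_features)
    score = err.mean(dim=(1, 2))                # one anomaly score per window
    feat_err = err.mean(dim=1)                  # per-feature contribution
    results = []
    for s, fe in zip(score, feat_err):
        top = [feature_names[i] for i in fe.argsort(descending=True)[:3].tolist()]
        results.append((bool(s > threshold), top))
    return results
```

In this toy setup, the threshold would be calibrated on reconstruction errors of held-out normal windows, and the ranked feature list stands in for the explanation-discovery component.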

Cited by 6 publications (3 citation statements) · References 27 publications
“…There are several previous approaches to anomaly detection as listed and extracted on "Result and Discussions". However, one crucial difference between some of those approaches and the case we are interested in is that there is no further explanation of the difference between anomaly and extreme observations [38], [39].…”
Section: Evaluation (mentioning)
confidence: 99%
“…There are also DNN-specific post-hoc methods such as DeepLift [20] and IntegratedGradients [21], which leverage the gradients of DNN outputs with respect to the input features. While these post-hoc models were originally developed for supervised DNNs, recent works have extended them to the AE and its probabilistic variant, the variational autoencoder (VAE) [22,23,24,25,26,27].…”
Section: Explainable Deep Learning (mentioning)
confidence: 99%
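
The statement above refers to gradient-based post-hoc attribution (DeepLift, Integrated Gradients) being carried over from supervised DNNs to autoencoders and VAEs. As a hedged sketch of that idea, assuming a plain PyTorch autoencoder `model` and treating the mean squared reconstruction error as the score to attribute, Integrated Gradients could be approximated as follows; this is an illustrative implementation, not the code of the cited works, and the all-zero baseline is an assumption.

```python
# Sketch: Integrated Gradients on an autoencoder's reconstruction error,
# attributing an anomaly score back to the input features.
import torch

def reconstruction_error(model, x):
    """Scalar anomaly score per sample: mean squared reconstruction error."""
    return ((model(x) - x) ** 2).mean(dim=tuple(range(1, x.dim())))

def integrated_gradients(model, x, baseline=None, steps=64):
    """Approximate IG of the reconstruction error with respect to x."""
    if baseline is None:
        baseline = torch.zeros_like(x)          # assumption: all-zero baseline
    total_grad = torch.zeros_like(x)
    for k in range(1, steps + 1):
        # Point on the straight path from the baseline to the input.
        point = (baseline + (k / steps) * (x - baseline)).detach().requires_grad_(True)
        score = reconstruction_error(model, point).sum()
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # Riemann-sum approximation of the path integral, scaled by (x - baseline).
    return (x - baseline) * total_grad / steps
```

Summing the resulting attributions over time steps yields a per-feature ranking that can be reported alongside the anomaly flag.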
“…Explainability for time series can be done by using dimension reduction (see e.g. [RSJD21]) or deep learning methods (see e.g. [AGP+20]).…”
(mentioning)
confidence: 99%
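
The quoted statement names dimension reduction as one route to time-series explainability. A minimal, hypothetical sketch of that route, fitting scikit-learn PCA on normal windows and using per-feature reconstruction error as the explanation, is shown below; it illustrates the general idea only, not the method of [RSJD21].

```python
# Sketch of the dimension-reduction route: PCA fitted on normal data, with
# reconstruction error as both anomaly score and per-feature explanation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 20))             # 500 windows, 20 features (toy data)
test = rng.normal(size=(10, 20))
test[0, 3] += 8.0                               # inject a deviation into feature 3

pca = PCA(n_components=5).fit(normal)
recon = pca.inverse_transform(pca.transform(test))
per_feature = (test - recon) ** 2               # feature-wise reconstruction error
score = per_feature.sum(axis=1)                 # anomaly score per window

# Calibrate the threshold on the normal data's own reconstruction errors.
normal_err = ((normal - pca.inverse_transform(pca.transform(normal))) ** 2).sum(axis=1)
threshold = np.percentile(normal_err, 99)
for i in np.where(score > threshold)[0]:
    print(f"window {i}: anomalous, dominant feature = {per_feature[i].argmax()}")
```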