2018 IEEE International Conference on Data Mining Workshops (ICDMW)
DOI: 10.1109/icdmw.2018.00204
EXAD: A System for Explainable Anomaly Detection on Big Data Traces

Cited by 23 publications (19 citation statements)
References 13 publications
“…In [52] the authors perform anomaly detection using an LSTM neural network. They then approximate the neural network by a decision tree in order to retrieve the explanations.…”
Section: Anomaly Explanation By Feature Values
confidence: 99%
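A minimal sketch of the distillation idea in the statement above. The LSTM itself is omitted for brevity and replaced by a hypothetical stand-in black-box scorer; the point is only that a decision tree trained to mimic the black box's labels yields human-readable rules as explanations.

```python
# Hypothetical sketch: approximate a black-box anomaly detector (a stand-in
# for the LSTM) with a decision tree whose paths serve as explanations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in black-box detector: flags a point as anomalous when its norm is large.
def black_box_is_anomaly(X):
    return (np.linalg.norm(X, axis=1) > 2.0).astype(int)

# Sample trace feature vectors and label them with the black box.
X = rng.normal(size=(2000, 3))
y = black_box_is_anomaly(X)

# Surrogate decision tree trained to mimic the black box's decisions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

fidelity = tree.score(X, y)  # agreement with the black-box detector
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(tree, feature_names=["f0", "f1", "f2"]))
```

The depth limit trades fidelity for readability: a shallower tree agrees less with the black box but gives shorter rule-based explanations.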
“…Some techniques limit the search space by using a heuristic approach; however, this does not guarantee finding the optimal subspace, i.e. the subspace in which the outlier is most abnormal [47,50]. Instead of searching the subspace, some techniques [24,68] rely on the local neighborhood of each outlier to extract its outlying attributes, while others [89,90] use interpretable models to measure the contribution of each feature/attribute to the abnormality of the object.…”
Section: Challenges In Generating Outlying Attributes Of An Individua...
confidence: 99%
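A hedged illustration of the neighborhood-based idea in the statement above (not code from the cited works): one simple way to extract a single outlier's outlying attributes is to rank features by how far the outlier's value deviates from its local neighbors on each feature.

```python
# Hypothetical sketch: rank the "outlying attributes" of one outlier by its
# per-feature z-score with respect to its k nearest neighbors.
import numpy as np

def outlying_attributes(X, idx, k=10):
    """Rank features by how abnormal X[idx] is relative to its k-NN."""
    point = X[idx]
    dists = np.linalg.norm(X - point, axis=1)
    neighbors = X[np.argsort(dists)[1 : k + 1]]  # skip the point itself
    mu = neighbors.mean(axis=0)
    sigma = neighbors.std(axis=0) + 1e-9  # avoid division by zero
    z = np.abs(point - mu) / sigma
    return np.argsort(z)[::-1], z  # feature indices sorted by abnormality

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
X[0, 2] = 8.0  # inject an outlier on feature 2
order, scores = outlying_attributes(X, 0)
print("most outlying attribute:", order[0])
```

Unlike subspace search, this only scores features individually, so it can miss anomalies that are abnormal only in a combination of attributes.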
“…Methods such as Shapley values [99] and region partition trees [100] have been successfully applied to explain detected anomalies. More systematic approaches have also been developed, such as Exathlon [101], EXAD [102], and others [103]. While EXAD focuses on explanation discovery for each anomaly, Exathlon crafts the explanations, providing two pieces of information to the user: why the data point was identified as an anomaly, and the root causes of the anomaly.…”
Section: Explainable Artificial Intelligence
confidence: 99%
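To make the Shapley-value idea mentioned above concrete, here is a small self-contained sketch using a toy additive anomaly score (an assumption for illustration, not the scoring functions of the cited systems): each feature's Shapley value is its marginal contribution to the anomaly score, averaged over all subsets of the other features.

```python
# Hypothetical sketch: exact Shapley values for a toy set-valued anomaly score.
from itertools import combinations
from math import factorial

def shapley_values(score, n_features):
    """Exact Shapley values of `score`, a function over sets of feature indices."""
    n = n_features
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (score(set(S) | {i}) - score(set(S)))
    return phi

# Toy score: feature 0 adds 3.0 to the anomaly score, feature 1 adds 1.0.
score = lambda S: 3.0 * (0 in S) + 1.0 * (1 in S)
phi = shapley_values(score, 2)
print(phi)  # → [3.0, 1.0]
```

The exact computation enumerates all 2^(n-1) subsets per feature, so real systems rely on sampling-based approximations for more than a handful of features.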