2022
DOI: 10.1016/j.eswa.2021.116263
AutoLog: Anomaly detection by deep autoencoding of system logs

Cited by 52 publications (14 citation statements)
References 21 publications
“…The most common loss function in the reviewed publications is the Cross-Entropy (CE), in particular, the categorical cross-entropy for multi-class prediction [20], [57] or binary cross-entropy that only differentiates between the normal and anomalous class [61]. Other common loss functions include the Hyper-Sphere Objective Function (HS) where the distance to the center of a hyper-sphere represents the anomaly score [24], [39], [41], [62], the Mean Squared Error (MSE) that is used for regression [20], [27], [28], [47], [50], [53], [68], and the Kullback-Leibler Divergence (KL) and Marginal Likelihood (ML) that are useful to measure loss in probability distributions [49], [58].…”
Section: B. Deep Learning Techniques
confidence: 99%
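The loss functions named in the statement above can be sketched in a few lines. The following is an illustrative NumPy sketch, not the exact formulations used in the cited works: binary cross-entropy over normal/anomalous labels, reconstruction MSE, a squared distance to a hyper-sphere center as anomaly score, and KL divergence between discrete distributions.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """BCE that only differentiates the normal (0) and anomalous (1) class."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def mean_squared_error(x, x_hat):
    """Regression-style reconstruction loss."""
    return np.mean((x - x_hat) ** 2)

def hypersphere_score(z, center):
    """Squared distance to the hyper-sphere center as the anomaly score."""
    return np.sum((z - center) ** 2, axis=-1)

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete probability distributions."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q))
```

In the hyper-sphere objective, a larger distance from the learned center means a more anomalous sample; in reconstruction-based setups, MSE plays the same scoring role.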
“…While such approaches draw less semantic information from the single tokens, they have the advantage of being more flexible as they rely on generally applicable heuristics rather than pre-defined parsers and are therefore widely applicable. Some approaches make use of a combination (COM) of parsing and token-based pre-processing strategies, in particular, by generating token vectors from parsed events rather than raw log lines [28], [38].…”
Section: Log Data Preparation
confidence: 99%
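The combined strategy described above can be illustrated with a small sketch. The heuristics below (masking numbers and hex IDs) are hypothetical stand-ins for a real log parser: a "parsed" event template is produced first, and the token vector is then built from the template rather than from the raw log line.

```python
import re
from collections import Counter

def parse_event(line):
    """Heuristic parsing: mask variable fields so that lines belonging to
    the same event type collapse to one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # hex identifiers
    line = re.sub(r"\d+", "<NUM>", line)             # numeric fields
    return line

def token_vector(line):
    """Token-based step: bag-of-tokens counts over the parsed template."""
    return Counter(parse_event(line).lower().split())

vec = token_vector("Received block blk_3587 of size 67108864 from node 12")
```

Because variable fields are masked before tokenization, lines that differ only in IDs or sizes map to identical token vectors, which is the point of combining parsing with token-based pre-processing.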
“…VeLog, proposed by Qian et al [42], achieves sequential modeling of execution paths and the number of execution times using variational autoencoders (VAE). Catillo et al propose AutoLog [43], which models term-weightings with autoencoders.…”
Section: Related Work
confidence: 99%
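The idea attributed to AutoLog above can be sketched in plain NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: a one-hidden-layer autoencoder is fitted to term-weight vectors (stand-ins for TF-IDF-style weightings of normal logs), and the reconstruction error then serves as the anomaly score.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 3                      # vocabulary size and bottleneck size (toy values)
W1 = rng.normal(0, 0.1, (d, h))  # encoder weights
W2 = rng.normal(0, 0.1, (h, d))  # decoder weights

def forward(X):
    Z = np.tanh(X @ W1)          # encode into the bottleneck
    return Z, Z @ W2             # linear decode back to term weights

X = rng.random((64, d))          # stand-in for term-weight vectors of normal logs
init_loss = np.mean((forward(X)[1] - X) ** 2)

lr = 0.05
for _ in range(500):             # plain gradient descent on reconstruction MSE
    Z, X_hat = forward(X)
    err = X_hat - X
    gW2 = Z.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - Z ** 2)) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

final_loss = np.mean((forward(X)[1] - X) ** 2)

def anomaly_score(x):
    """Per-sample reconstruction error: high for logs unlike the training data."""
    _, x_hat = forward(x)
    return np.mean((x - x_hat) ** 2, axis=-1)
```

Logs resembling the normal training distribution reconstruct well and score low; unusual term-weight patterns reconstruct poorly and score high, which is the detection signal.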