Abstract: Log messages are widely used in cloud servers and other systems. Millions of logs are generated each day, which makes them valuable for anomaly detection. However, they are complex, unstructured text messages, which makes this task difficult. In this paper, a hybrid log message anomaly detection technique is proposed which employs pruning of positive and negative logs. Reliable positive log messages are first selected using a Gaussian mixture model algorithm. Then reliable negative logs are selected using the K-…
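The first pruning step described in the abstract can be illustrated with a minimal sketch, assuming log messages have already been embedded as numeric feature vectors (the embedding step, component count, and density threshold below are illustrative assumptions, not the paper's actual configuration):

```python
# Hedged sketch: selecting "reliable" samples with a Gaussian mixture model.
# Assumes log messages are already embedded as numeric vectors; the synthetic
# data and the 20th-percentile threshold are illustrative choices only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic embeddings: a dense "normal" cluster plus a few distant outliers.
normal = rng.normal(0.0, 0.5, size=(200, 4))
outliers = rng.normal(4.0, 0.5, size=(10, 4))
X = np.vstack([normal, outliers])

# Fit the mixture and score each sample by its log-density under the model.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
log_density = gmm.score_samples(X)

# Keep only samples whose density exceeds a confidence threshold; these act
# as the "reliable" high-confidence examples retained after pruning.
threshold = np.percentile(log_density, 20)
reliable = X[log_density > threshold]
```

In this sketch, low-density samples are discarded as unreliable; the paper's actual selection criterion may differ.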
“…2) Evaluation metrics: Quantitative evaluation (ER-2) of anomaly detection approaches typically revolves around counting the numbers of correctly detected anomalous samples as true positives (TP), incorrectly detected non-anomalous samples as false positives (FP), incorrectly undetected anomalous samples as false negatives (FN), and correctly undetected non-anomalous samples as true negatives (TN). In the most basic setting, where events are labeled individually and samples represent single events (e.g., as in the BGL data set), it is relatively straightforward to evaluate detected events with binary classification [34], [36]. Some of the reviewed papers additionally consider a multi-class classification problem for data sets where different types of failures have distinct labels, by computing the averages of evaluation metrics over all classes [55] or plotting confusion matrices [32].…”
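The counting scheme in the excerpt maps directly onto the standard binary-classification metrics. A minimal sketch, assuming per-event ground-truth labels and predictions encoded as 0/1 (the data and function names are illustrative):

```python
# Minimal sketch of the TP/FP/FN/TN counting described above, for the basic
# setting where each sample is a single labeled event (1 = anomalous).
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall_f1(tp, fp, fn):
    # Guard against empty denominators when a class is never predicted.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)  # → (2, 1, 1, 2)
```

For the multi-class setting the excerpt mentions, the same per-class counts would be computed for each failure type and then averaged (macro-averaging) across classes.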
Automatic log file analysis enables early detection of relevant incidents such as system failures. In particular, self-learning anomaly detection techniques capture patterns in log data and subsequently report unexpected log event occurrences to system operators, without the need to provide or manually model anomalous scenarios in advance. Recently, an increasing number of approaches leveraging deep learning neural networks for this purpose have been presented. These approaches have demonstrated superior detection performance in comparison to conventional machine learning techniques and simultaneously resolve issues with unstable data formats. However, there exist many different architectures for deep learning, and it is nontrivial to encode raw and unstructured log data to be analyzed by neural networks. We therefore carry out a systematic literature review that provides an overview of deployed models, data pre-processing mechanisms, anomaly detection techniques, and evaluations. The survey does not quantitatively compare existing approaches but instead aims to help readers understand relevant aspects of different model architectures and emphasizes open issues for future work.
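The "self-learning" idea described above, reporting unexpected log event occurrences without manually modeling anomalies, can be sketched in its simplest form: learn which event types occur during normal operation, then flag any event type never seen in training. This is a toy illustration only; the surveyed deep learning approaches use far richer models of event sequences:

```python
# Hedged toy sketch of self-learning log anomaly detection: memorize event
# types observed during normal operation, then flag unseen event types.
# Real systems model sequences and counts, not just set membership.
from collections import Counter

def train(events):
    """Build a frequency model from log events observed during normal operation."""
    return Counter(events)

def detect(model, events):
    """Report events whose type was never seen in training as anomalies."""
    return [e for e in events if e not in model]

known = train(["connect", "read", "write", "disconnect", "read"])
alerts = detect(known, ["read", "kernel_panic", "write"])  # → ["kernel_panic"]
```

Note that no anomalous examples are provided in advance: the model is trained only on normal data, matching the unsupervised setting the abstract describes.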