2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA)
DOI: 10.1109/aiccsa47632.2019.9035217
A Lightweight Deep Autoencoder-Based Approach for Unsupervised Anomaly Detection

Cited by 5 publications (3 citation statements)
References 11 publications
“…Here, AE is an unsupervised neural network whose network structure is the same as that of UTEN-IDS. We use a statistical approach from [42] to set the threshold on the AE reconstruction loss, and this threshold is used for classification. As mentioned earlier, Kitsune is an online method and KitNET is the core detection algorithm of Kitsune.…”
Section: Performance Analysis on the CSE-CIC-IDS 2018 Dataset
confidence: 99%
“…can be applied to unsupervised anomaly detection problems [2,22,26], but they often fall short when dealing with high-dimensional data with complex structures, contrary to deep learning algorithms. In a semi-supervised or unsupervised context, generative neural networks such as AEs can be adapted for anomaly detection problems [21,27,28] by training them on normal data so that they learn to reproduce or generate good behaviors with a small reconstruction error. During this training, each data point is first reduced to a lower-dimensional representation that is expanded to its original size afterwards.…”
Section: Related Work
confidence: 99%
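The compress-then-expand training described in that excerpt can be sketched in a few lines. The following is a minimal illustration, not the paper's architecture: it uses a tiny linear autoencoder trained by plain gradient descent in NumPy, on synthetic data standing in for "normal" traffic features; all dimensions and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "normal" data: 5-D points lying near a 2-D subspace.
basis = rng.normal(size=(2, 5))
X = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 5))

d, k = 5, 2                          # input dim, bottleneck dim
W1 = 0.1 * rng.normal(size=(d, k))   # encoder weights: d -> k
W2 = 0.1 * rng.normal(size=(k, d))   # decoder weights: k -> d

lr = 0.01
for _ in range(2000):
    Z = X @ W1                           # encode: reduce to k dimensions
    Xhat = Z @ W2                        # decode: expand back to d dimensions
    G = (2.0 / len(X)) * (Xhat - X)      # gradient of MSE loss w.r.t. Xhat
    W2 -= lr * (Z.T @ G)                 # backprop through decoder
    W1 -= lr * (X.T @ (G @ W2.T))        # backprop through encoder

# Reconstruction error per point; small on normal data after training.
errors = np.mean((X - X @ W1 @ W2) ** 2, axis=1)
print(float(errors.mean()))
```

Because the network is trained only on normal behavior, points off the learned subspace (e.g. attack traffic) reconstruct poorly and yield large errors, which is what the cited works exploit.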
“…The usual framework for anomaly detection with AEs is to collect the reconstruction errors for all points of the dataset and find a threshold value that will determine which errors are considered outliers. AEs have proven themselves to be efficient tools for unsupervised anomaly detection [21,27] with the caveat that the architecture and hyperparameters of such neural networks must be adequately chosen to get a competitive performance for real life applications.…”
Section: Introduction
confidence: 99%