2020
DOI: 10.7717/peerj-cs.327

Application of deep autoencoder as an one-class classifier for unsupervised network intrusion detection: a comparative evaluation

Abstract: The ever-increasing use of the internet has opened a new avenue for cybercriminals, alarming online businesses and organizations to stay ahead of the evolving threat landscape. To this end, the intrusion detection system (IDS) is deemed a promising defensive mechanism to ensure network security. Recently, deep learning has gained ground in the field of intrusion detection, but the majority of progress has been witnessed in supervised learning, which requires adequate labeled data for training. In real practice, labeling t…

Cited by 33 publications (18 citation statements)
References 19 publications
“…Regardless of the significant inherent challenge of the NSL-KDD dataset, for instance, the inadequate reflection of current low footprint attack scenarios, it is still considered the most preferred IDSs evaluation dataset because of its distinctive attribute of maximizing predictions for classifiers [15]. It consists of four attack categories with 41 attributes and a single labeled class distinguishing between malicious or regular network traffic [21]. Finally, interested readers can refer to reference [64] for the detailed theoretical and technical documentation of the NSL-KDD dataset.…”
Section: Synopsis of the UNSW-NB15 Dataset
confidence: 99%
“…Furthermore, irrespective of the phenomenal achievements of researchers and professionals in mitigating the escalating security challenges through IDSs, the challenges of improving the detection rate, accuracy, detection of novel attacks, and reducing false alarm rates are still issues yet to be addressed in the research domain of IDSs. Moreover, the mind-blowing complexity of the present cutting-edge networks has challenged the detection capability of many existing IDSs [21]. Consequently, the past years have seen many researchers and professionals exploit the novelty of machine learning techniques to address the problems mentioned earlier by designing, developing, and deploying effective and efficient IDSs [11].…”
Section: Introduction
confidence: 99%
“…In our previous work [12], we reviewed autoencoder based anomaly intrusion detection methods, whereby single layer denoising models [13], Long Short Term Memory (LSTM), Recurrent Neural Network [14], [15], ensembled stacked autoencoders [16], [17], and sparsely connected networks [18], [15] were demonstrated across a range of IDS data sets. Vaiyapuri and Binbusayyis [19] evaluated a number of autoencoder network architectures for anomaly detection, finding the use of a contractive penalty to regulate the network provided the best performance when evaluated offline using the NSL-KDD and UNSW-NB15 data sets.…”
Section: Autoencoder Anomaly Detection
confidence: 99%
“…A number of methods were proposed in the literature to determine the anomaly threshold, an important parameter in deciding whether to label a sample as a positive detection. The threshold can be set to the average RE value observed during training [19]. Naïve Anomaly Threshold (NAT) sets the threshold at the maximum observed RE during training [16].…”
Section: Autoencoder Anomaly Detection
confidence: 99%
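The two thresholding strategies quoted above — the average reconstruction error (RE) over training data [19] versus the Naïve Anomaly Threshold (NAT), i.e. the maximum training RE [16] — can be sketched as follows. This is a minimal illustration, not the cited authors' implementations: the "reconstruction" here is a toy stand-in for a trained autoencoder's output, and all names are hypothetical.

```python
# Sketch of anomaly thresholding on autoencoder reconstruction error (RE).
# Assumes training data is (mostly) benign, as in one-class anomaly detection.
import numpy as np

def reconstruction_error(x, x_hat):
    """Per-sample mean squared reconstruction error."""
    return np.mean((x - x_hat) ** 2, axis=1)

def mean_threshold(train_re):
    """Threshold = average RE observed on training data [19]."""
    return float(np.mean(train_re))

def nat_threshold(train_re):
    """Naive Anomaly Threshold: maximum RE observed during training [16]."""
    return float(np.max(train_re))

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(100, 4))
# Toy stand-in for an autoencoder's reconstruction of X_train:
X_hat = X_train + rng.normal(0.0, 0.1, size=X_train.shape)

train_re = reconstruction_error(X_train, X_hat)
t_mean = mean_threshold(train_re)
t_nat = nat_threshold(train_re)

# At test time, a sample is flagged anomalous when its RE exceeds the threshold.
x_new_re = 0.5  # hypothetical RE of an unseen sample
print(x_new_re > t_mean, x_new_re > t_nat)
```

Since the maximum RE is never below the mean RE, NAT is always the more conservative choice: it flags fewer samples and so trades detection rate for a lower false-alarm rate.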