2021
DOI: 10.2478/jaiscr-2022-0010

An Autoencoder-Enhanced Stacking Neural Network Model for Increasing the Performance of Intrusion Detection

Abstract: Security threats, among other intrusions affecting the availability, confidentiality and integrity of IT resources and services, are spreading fast and can cause serious harm to organizations. Intrusion detection plays a key role in capturing intrusions. In particular, the application of machine learning methods in this area can improve the efficiency of intrusion detection. Various methods, such as pattern recognition from event logs, can be applied to intrusion detection. The main goal of our research is to presen…

Cited by 16 publications (7 citation statements). References 40 publications.
“…The Auto-Encoder 1 for data enhancement and the Auto-Encoder 2 for feature enhancement use the same model structure, consisting of an input layer, a fully connected layer, a batch normalization layer, and an output layer [27]. The specific model structure of the AE is shown in Fig.…”
Section: Auto-Encoder Enhancement
confidence: 99%
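As a minimal sketch of the structure described in that statement, the following code builds a fully connected autoencoder with an input layer, a batch-normalized fully connected layer, and an output layer. The layer sizes and the choice of PyTorch are illustrative assumptions, not taken from the cited paper.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal AE matching the quoted structure: input -> fully connected
    layer with batch normalization -> output. Sizes are illustrative."""
    def __init__(self, n_features=41, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, latent_dim),  # fully connected layer
            nn.BatchNorm1d(latent_dim),         # batch normalization layer
            nn.ReLU(),
        )
        self.decoder = nn.Linear(latent_dim, n_features)  # output layer

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Typical reconstruction objective (MSE) on a dummy batch of 32 samples.
model = AutoEncoder()
x = torch.randn(32, 41)
loss = nn.functional.mse_loss(model(x), x)
```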
“…Gradient Boosting tries to reduce the errors of the weak learner models and combines the predictions of the weak learners into a strong learner. XGBoost extends the Gradient Boosting technique and reduces its computation time and computational complexity [26,29,48].…”
Section: Machine Learning
confidence: 99%
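A minimal sketch of gradient-boosted trees via the xgboost package (assumed to be installed alongside scikit-learn); the dataset and hyperparameters are illustrative, not those of the cited work.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic binary classification data standing in for intrusion records.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each boosting round fits a weak tree to the current errors; the ensemble
# of weak learners forms the strong learner.
clf = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```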
“…False Negative: positive samples incorrectly predicted as negative. False Positive: negative samples incorrectly predicted as positive. True Negative: samples correctly identified as actual negatives.

Precision reflects the accuracy of the positive class and measures whether the positive predictions are correct, as defined in (26):
Precision = TP / (TP + FP)

Recall is the fraction of positive classes correctly detected out of all actual positives, as defined in (27):
Recall = TP / (TP + FN)

F1 Score is the harmonic mean of recall and precision, as defined in (28):
F1 Score = 2 * (P * R) / (P + R)

Accuracy is the fraction of True Positives and True Negatives among the total of True Positives, False Positives, True Negatives, and False Negatives, as defined in (29):
Accuracy = (TP + TN) / (TP + FP + TN + FN)…”
Section: Evaluation Metrics
confidence: 99%
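The four metrics above follow directly from the confusion-matrix counts. A small self-contained sketch; the counts passed in are illustrative, not results from the paper.

```python
def metrics(tp, fp, tn, fn):
    """Compute the evaluation metrics defined in (26)-(29) from raw counts."""
    precision = tp / (tp + fp)                          # (26)
    recall = tp / (tp + fn)                             # (27)
    f1 = 2 * precision * recall / (precision + recall)  # (28)
    accuracy = (tp + tn) / (tp + fp + tn + fn)          # (29)
    return precision, recall, f1, accuracy

# Example with made-up counts.
print(metrics(tp=90, fp=10, tn=85, fn=15))
```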
“…In order to perform this operation, we used a fully connected autoencoder (AE) to encode the previously obtained MRGD. Autoencoders are used in various machine learning tasks, such as image compression, dimensionality reduction, feature extraction, and image reconstruction [4,5,6]. Because autoencoders use unsupervised learning, they are well suited to generating semantic hashes.…”
Section: Training and Hash Generation
confidence: 99%
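One common way to turn an autoencoder's latent codes into semantic hashes is to binarize each latent dimension by thresholding. The sketch below assumes this thresholding approach and a trained encoder producing the latent batch; the cited paper's actual hashing scheme may differ.

```python
import numpy as np

def semantic_hash(latent_batch):
    """Binarize latent codes: threshold each dimension at its median,
    yielding a compact binary hash per sample (assumed scheme)."""
    thresholds = np.median(latent_batch, axis=0)  # per-dimension threshold
    return (latent_batch > thresholds).astype(np.uint8)

# Example: 8 samples with 16-dimensional latent codes (dummy data).
latent = np.random.randn(8, 16)
print(semantic_hash(latent))
```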