2022
DOI: 10.3390/s22155822
Towards an Effective Intrusion Detection Model Using Focal Loss Variational Autoencoder for Internet of Things (IoT)

Abstract: As the range of security attacks increases across diverse network applications, intrusion detection systems are of central interest. Such detection systems are even more crucial for the Internet of Things (IoT) due to the voluminous and sensitive data it produces. However, real-world networks produce imbalanced traffic that includes different and unknown attack types. Due to this imbalanced nature of network traffic, traditional learning-based detection techniques suffer from lower overall detection performance…
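The focal loss named in the title down-weights easy, well-classified examples so that training concentrates on the hard minority samples that dominate detection errors. A minimal sketch of the binary form (the paper's loss is class-wise and applied inside a VAE; the function name and the `alpha`/`gamma` defaults below are the common ones, not taken from the paper):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class; y: true label (0 or 1).
    With gamma = 0 and alpha = 1 this reduces to ordinary cross-entropy.
    """
    p_t = p if y == 1 else 1.0 - p            # probability assigned to the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# The (1 - p_t)**gamma factor makes confident, easy examples contribute
# far less loss than hard, misclassified ones.
easy = focal_loss(0.95, 1)   # well-classified positive: near-zero loss
hard = focal_loss(0.10, 1)   # badly misclassified positive: large loss
```

Raising `gamma` strengthens the down-weighting of easy examples, which is why focal loss is a natural fit for the heavily imbalanced traffic the abstract describes.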

Cited by 15 publications (10 citation statements)
References 62 publications
“…Results presented in Table 7 demonstrate that our ATCN-AE model achieves approximately 98% accuracy in classifying benign and malicious data. This marks a clear improvement over existing baseline models such as the sparse autoencoder (91.2%) [15], CFLVAE (88.1%) [16], and AE-D3F (82.0%) [19], as visualized in Figure 8. Additionally, our proposed model surpasses the performance of models utilizing the SherLock dataset, as evidenced by studies [28][29][30], achieving accuracies of 82%, 83%, and 81%, respectively.…”
Section: Comparison With State Of The Art
confidence: 70%
“…In [16], Khanam et al. presented a class-wise focal loss variational autoencoder (CFLVAE), a deep generative model, to address imbalanced network traffic in intrusion detection systems for the Internet of Things (IoT). The CFLVAE generates fresh samples for minority attack types to build a well-balanced intrusion dataset, which is then used to train a deep neural network (DNN) classifier, improving intrusion detection accuracy.…”
Section: Anomaly Detection Using Deep Learning
confidence: 99%
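The generate-then-balance step the citation above describes can be sketched with the VAE's reparameterization trick. All names here (`reparameterize`, `oversample_minority`, the per-class latent Gaussians in `latent_stats`) are hypothetical illustrations, not the paper's API, and the latent space is reduced to one dimension for brevity:

```python
import math
import random

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, 1): sampling stays a deterministic
    # function of (mu, log_var), which is what makes a VAE trainable end to end.
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def oversample_minority(latent_stats, counts, target, seed=0):
    """Draw extra latent codes for every class below `target` samples.

    latent_stats: {label: (mu, log_var)} -- a learned latent Gaussian per class.
    counts:       {label: current number of real samples}.
    Returns {label: [new latent codes]}; a trained decoder would map these
    codes to traffic feature vectors to balance the training set.
    """
    rng = random.Random(seed)
    extra = {}
    for label, (mu, log_var) in latent_stats.items():
        deficit = max(0, target - counts.get(label, 0))
        extra[label] = [reparameterize(mu, log_var, rng) for _ in range(deficit)]
    return extra

# The minority class 'u2r' gets topped up; the majority class needs nothing.
stats = {"dos": (0.0, 0.0), "u2r": (1.0, -2.0)}
extra = oversample_minority(stats, {"dos": 100, "u2r": 3}, target=10)
```

The balanced set produced this way is what the DNN classifier is then trained on, which is where the reported accuracy gain over training on raw imbalanced traffic comes from.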
“…Neural networks differ from other classifiers in that they have distinctive training properties. While the AUC metric is widely used to measure classification accuracy, neural-network studies treat "training time" as a basic determinant of the efficiency of the classification process [19,36,41,46]. However, researchers prefer to use other measures to enhance the process [35,44].…”
Section: Metrics Used In Related Work
confidence: 99%
“…By deploying high-definition cameras and other devices in substations, combined with cloud platforms and Internet of Things (IoT) management platforms, broad coverage of substation safety monitoring has been achieved. Security management personnel remotely patrol all operation sites through monitoring, promptly detecting and correcting unsafe behaviors on site, providing effective means for safety control [1,2]. However, current methods based on video surveillance and manual inspection still have limitations: existing video surveillance has blind spots and cannot effectively cover all operation sites, and manual safety inspection and supervision carry a heavy workload, lowering the quality and efficiency of management [3,4].…”
Section: Introduction
confidence: 99%