2020
DOI: 10.1016/j.cose.2020.102062
Unsupervised feature selection and cluster center initialization based arbitrary shaped clusters for intrusion detection

Cited by 46 publications (26 citation statements)
References 34 publications
“…Table 10 shows the results for DR, FAR, and accuracy obtained by the enhanced systems using keys only and keys and their positions when applied to the KDDCup 99 and NSL-KDD datasets. Table 11 presents the results of the comparison between the outcomes of the proposed methods (DEM3sel and DEMdif) and those of the various IDSs mentioned in the literature (Al-Ibaisi et al [15]; Eesa et al [26]; Al-Yaseen et al [27]; Rashid et al [17]; Yuan et al [28]; Rashid et al [20]; Prasad et al [29]; Lei et al [30]; and Nancy et al [31]) in terms of DR, FAR and accuracy, where all papers that we used for comparison depend on KDDCup dataset. From the table, it is clear that the DR, FAR, and accuracy obtained by the two methods of the proposed approach are good.…”
Section: Results
confidence: 99%
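The excerpt above compares systems by DR, FAR, and accuracy without restating the formulas. The definitions conventionally used in intrusion-detection evaluation can be sketched as follows (function and parameter names are illustrative, not from the cited papers):

```python
def ids_metrics(tp, fn, fp, tn):
    """Common IDS evaluation metrics from confusion-matrix counts.

    tp: attacks correctly flagged, fn: attacks missed,
    fp: normal traffic wrongly flagged, tn: normal traffic passed.
    These are the usual definitions; the excerpt itself gives none.
    """
    dr = tp / (tp + fn)                    # detection rate (recall on attacks)
    far = fp / (fp + tn)                   # false alarm rate
    acc = (tp + tn) / (tp + fn + fp + tn)  # overall accuracy
    return dr, far, acc
```

For example, 90 detected attacks, 10 missed, 5 false alarms, and 95 correctly passed normal records give DR = 0.9, FAR = 0.05, accuracy = 0.925.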
“…In order to prevent attacks, it is necessary to quickly classify large amounts of data at little cost. In [22], an unsupervised feature selection method is proposed to avoid the cost of labeling network traffic, aiming to achieve better classification results while reducing computational complexity.…”
Section: Related Work
confidence: 99%
“…With their instantiation called CINFO in [43], the authors further developed CUFS by using unsupervised discretization methods like equal-width and equal-frequency to adapt the methods to numeric rather than categorical data. Another approach is given in [44] with Unsupervised Feature Selection and Cluster Center Initialization, denoted in this work as UFS_CCI. It derives feature scores as the difference of feature entropy from unlabeled data by computing the ratio of the maximum count of occurring values to the total number of samples.…”
Section: Feature Selection For Outlier Detection
confidence: 99%
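The excerpt above describes UFS_CCI as scoring features from unlabeled data using feature entropy and the ratio of the most frequent value's count to the sample count. A minimal sketch of those two per-feature quantities, assuming categorical rows of equal length (the function name and data layout are assumptions, and the excerpt does not specify how the quantities are combined into a final score):

```python
from collections import Counter
import math

def feature_stats(samples):
    """Per-feature (entropy, max-value ratio) over unlabeled categorical data.

    samples: list of equal-length tuples/rows of categorical values.
    Returns one (shannon_entropy, dominant_value_ratio) pair per feature.
    This is a plausible reading of the description, not the paper's
    exact UFS_CCI formulation.
    """
    n = len(samples)
    stats = []
    for j in range(len(samples[0])):
        counts = Counter(row[j] for row in samples)
        # Shannon entropy of the feature's empirical value distribution
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        # ratio of the most frequent value's count to the sample count
        max_ratio = max(counts.values()) / n
        stats.append((entropy, max_ratio))
    return stats
```

A constant feature yields entropy 0 and ratio 1, while an evenly split binary feature yields entropy 1 and ratio 0.5, so the two quantities move in opposite directions as a feature becomes more informative.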
“…Comparison of existing FS work for OD with the requirements defined in Section 3.1 (✓ and ✗ denote that the requirement is either fulfilled or not, ∅ denotes missing information to analyze the respective requirement; (++/+/−) for R-FS04 and R-FS08 denote, as objectively as possible, how well the requirement is fulfilled; R-FS02 and R-FS03 are combined since R-FS03 is a phenomenon associated with R-FS02, and none of the existing work in this table fulfills both).

UFS_CCI [44]: ∅, ++
CBRW_FS [42]: −, +
CUFS-DSFS [4]: −, +
CINFO [43]: +, ++
ODEFS [45]: +, ++
IBFS [46]: ++, ++…”
Section: Feature Selection For Outlier Detection
confidence: 99%