Proceedings of the International Joint Conference on Neural Networks, 2003.
DOI: 10.1109/ijcnn.2003.1223682
On the capability of an SOM based intrusion detection system

Cited by 104 publications (83 citation statements)
References 5 publications
“…Specific experimental results are given for different K values (4, 6, 8, 10, 14, 20), with the anomaly threshold parameter β taken as 1%. As revealed in the results of Table 2, when the clustering number K is taken within the range 4–20, there are no significant fluctuations in detection performance.…”
Section: Experiments and Results Analysis
confidence: 99%
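The excerpt above describes a clustering-based detector with K clusters and an anomaly threshold β. A minimal sketch of that idea, assuming a K-means-style detector that flags the β fraction of points farthest from their nearest centroid (the function names, data, and distance criterion here are illustrative, not taken from the cited paper):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd's algorithm: pick k initial centroids from the data,
    # then alternate assignment and centroid-update steps.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def flag_anomalies(X, centroids, beta=0.01):
    # Flag the beta fraction of points with the largest distance to
    # their nearest centroid (beta = 0.01 mirrors the 1% threshold above).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2).min(axis=1)
    cutoff = np.quantile(d, 1.0 - beta)
    return d > cutoff
```

With well-separated clusters, varying k over a moderate range (e.g. 4–20) changes the centroids but leaves the far-tail distance ranking, and hence the flagged set, largely stable — consistent with the excerpt's observation.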
“…It can be found by comparison that the detection performance achieves a better effect when the value of K is taken close to 8–10. PSO-KM [9]: 86, 2.8, –; SOM [10]: 91.5, 14.5, –; CSI-KNN [11]: 91.4, 2.6, 92.5.…”
Section: Experiments and Results Analysis
confidence: 99%
“…Compared to Toosi and Kahani's study, our ensemble technique that combines all generated models with naïve Bayes provides a better detection rate for U2R, but not for the other attack types. Agarwal and Joshi [29] introduced a rule-based framework, Kayacik et al. [30] introduced a self-organizing map (SOM)-based technique, and Pfahringer [31] described the winning technique of KDD'99, which used a bagged boosting ensemble. Our study provides better accuracy results for all types of attacks compared to [27][28][29].…”
Section: Discussion
confidence: 99%
“…All 41 features of the KDD 99 data set are used in the experiment for classification. H. G. Kayacik, A. N. Zincir-Heywood, and M. I. Heywood [3] have given a detailed description of the KDD 99 benchmark dataset. This approach of unsupervised clustering followed by supervised classification makes the proposed model a semi-supervised learning model.…”
Section: Analysis Of the Effect Of Clustering The Training Data In Na
confidence: 99%
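The excerpt above pairs unsupervised clustering with supervised classification. One minimal way to sketch that combination, assuming a K-means step followed by a majority-label rule per cluster (the function names and the per-cluster majority-vote classifier are illustrative assumptions, not the cited paper's exact method):

```python
import numpy as np

def fit_semi_supervised(X, y, k=4, iters=20, seed=0):
    # Step 1 (unsupervised): k-means clustering of the training data.
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    # Step 2 (supervised): label each cluster with its majority class.
    cluster_class = np.array([
        np.bincount(y[labels == j]).argmax() if np.any(labels == j) else 0
        for j in range(k)
    ])
    return C, cluster_class

def predict(X, C, cluster_class):
    # Assign each point to its nearest centroid and emit that
    # cluster's majority class label.
    assignments = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
    return cluster_class[assignments]
```

Only step 2 consumes the class labels, so the model uses supervision sparingly on top of an unsupervised partition — the sense in which the excerpt calls the approach semi-supervised.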