2010 International Conference on Networking and Information Technology
DOI: 10.1109/icnit.2010.5508526

Comparison of Machine Learning algorithms performance in detecting network intrusion

Cited by 21 publications (8 citation statements)
References 7 publications
“…By comparing the experiments carried out by Jalil et al. [38] and Katkar and Kulkarni [40], we observed that the SVM algorithm predicts DOS-DDOS attacks more accurately on the UNSW_NB15 dataset than on the KDD'99 dataset (Acc_SVM_UNSW_NB = 92.28% > Acc_SVM_KDD = 62.5%). According to W. Xingzhu [42], this large difference is caused by the redundant records in the KDD'99 dataset and by SVM's slower training on high-dimensional datasets.…”
Section: Use Model Evaluation Metrics
confidence: 91%
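The redundancy effect cited above can be illustrated with a toy sketch (all records and predictions below are hypothetical, not actual KDD'99 data): when a test set contains many duplicated copies of records a classifier already handles well, the measured accuracy is inflated relative to the deduplicated set.

```python
# Toy illustration of how duplicated records skew accuracy estimates.
# All labels and predictions here are hypothetical.

def accuracy(pairs):
    """pairs: list of (true_label, predicted_label) tuples."""
    return sum(1 for true, pred in pairs if true == pred) / len(pairs)

# A small deduplicated test set: 3 correct, 2 missed attacks -> 60%.
unique_results = [
    ("normal", "normal"),
    ("dos", "dos"),
    ("probe", "probe"),
    ("r2l", "normal"),   # missed attack
    ("u2r", "normal"),   # missed attack
]

# KDD'99-style redundancy: one easy "dos" record repeated 45 times.
redundant_results = unique_results + [("dos", "dos")] * 45

print(f"accuracy on unique records: {accuracy(unique_results):.1%}")    # 60.0%
print(f"accuracy with duplicates:   {accuracy(redundant_results):.1%}") # 96.0%
```

The same classifier looks far stronger on the redundant set, which is one reason results reported on raw KDD'99 and on deduplicated or newer datasets such as UNSW_NB15 can diverge so widely.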
“…The accuracy ranges from Acc = 62.5% with the SVM algorithm on the KDD'99 dataset to Acc = 99.92% with the Decision Tree J48 algorithm on the same dataset. Indeed, according to the experiment of Jalil et al. (2010) [38] on the KDD'99 dataset, the Support Vector Machine (SVM) algorithm has serious difficulty accurately detecting DOS-DDOS attacks compared to the Decision Tree J48 algorithm, whose prediction accuracy exceeds 99%. The experiments with the NB, C4.5, and RF algorithms on the UNSW_NB15 dataset by Bellouch et al. (2018) [39] showed that the prediction accuracy obtained by RF (Acc_RF = 99.94%) is better than that of C4.5 (Acc_C4.5 = 95.82%) and SVM (Acc_SVM = 92.28%).…”
Section: Use Model Evaluation Metrics
confidence: 99%
“…It also shows the resource performance of the algorithms. Jalil et al. (2010) used three ML algorithms in their research, namely decision tree (J48), support vector machine, and neural network. They used the KDD'99 dataset to analyse accuracy, detection rate, and false alarm rate, compared the three algorithms on these results, and showed that the decision tree algorithm suited their research best.…”
Section: Related Work
confidence: 99%
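The three metrics named above (accuracy, detection rate, false alarm rate) reduce to simple ratios over confusion-matrix counts; a minimal sketch, using hypothetical counts for illustration only:

```python
# Standard IDS evaluation metrics from confusion-matrix counts.
# tp: attacks flagged as attacks, fn: attacks missed,
# tn: normal traffic passed,    fp: normal traffic flagged as attack.

def ids_metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    detection_rate = tp / (tp + fn)      # a.k.a. recall / TP rate
    false_alarm_rate = fp / (fp + tn)    # FP rate on normal traffic
    return accuracy, detection_rate, false_alarm_rate

# Hypothetical counts, not figures from any cited experiment.
acc, dr, far = ids_metrics(tp=900, fn=100, tn=950, fp=50)
print(f"accuracy={acc:.3f} detection_rate={dr:.3f} false_alarm_rate={far:.3f}")
# accuracy=0.925 detection_rate=0.900 false_alarm_rate=0.050
```

Accuracy alone can mask a weak detector on imbalanced traffic, which is why IDS comparisons like the one above report detection rate and false alarm rate alongside it.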
“…From his work, SVM showed a high performance with an F-measure of about 72% (Ribeiro et al, 2015). Reported performance by algorithm:

- (algorithm label missing in source): 90% for spam (Yang et al, 2013); 77% for Twitter and Yelp (Alsudais et al, 2014); 90% for reporters and 84% for reportees (Sinha et al, 2016); 96.07% (Meda et al, 2014); 93.6% F-measure (Chen et al, 2015c)
- Decision tree, High: 96.51% (Ribeiro et al, 2015); 99% for IDS (Jalil et al, 2010); 87.6% for spam (Yang et al, 2013); 92% for spam reporters and 90% for spam reportees (Sinha et al, 2016); 92% F-measure for C4.5 (Chen et al, 2015c)
- Naïve Bayes, High: 86.63% (Ribeiro et al, 2015); 70.9% F-measure (Chen et al, 2015c)
- K-NN, Average: 84% for reporters and 89% for reportees (Sinha et al, 2016); 90.5% F-measure (Chen et al, 2015c)
- SVM, Average: 88.75% (Du and Fang, 2004); around 57% for IDS (Jalil et al, 2010); 79.9% F-measure (Chen et al, 2015c)
- Bayes network, Average: 83.3% for spam (Yang et al, 2013); 81.9% F-measure (Chen et al, 2015c)

Xu et al (2016) introduced a new point of view to efficiently detect spam in social networks. They collected two types of datasets from Twitter and Facebook using the application programming interface, containing both spam and non-spam content.…”
Section: Related Work
confidence: 99%
“…Their focus was on measuring the performance of classification algorithms based on True Positive rate and False Positive rate. Jalil and Masrek [12] evaluated the performance of the J48 classification algorithm and compared its results to two other machine learning algorithms, Neural Network and Support Vector Machine, based on detection rate, false alarm rate, and classification accuracy per attack type. This paper compares the performance of the Naïve Bayes and J48 algorithms on the KDD'99 dataset and studies the effects of removing redundancy from the dataset by applying a preprocessing filter (i.e.…”
Section: Related Work
confidence: 99%
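A redundancy-removal preprocessing step of the kind described above amounts to dropping exact duplicate records before evaluation. A minimal, order-preserving sketch (the sample records are hypothetical, and this is a stand-in for whatever filter the cited paper actually applied):

```python
# Order-preserving removal of exact duplicate records, the kind of
# preprocessing filter used to reduce redundancy in KDD'99-style data.
# The sample records below are hypothetical.

def remove_duplicates(records):
    """Return records with exact duplicates dropped, first occurrence kept."""
    seen = set()
    unique = []
    for rec in records:
        key = tuple(rec)   # feature vectors must be hashable
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

dataset = [
    ("tcp", "http", 215, "normal"),
    ("tcp", "http", 215, "normal"),   # exact duplicate
    ("udp", "dns", 44, "normal"),
    ("tcp", "http", 215, "normal"),   # exact duplicate
    ("icmp", "ecr_i", 1032, "smurf"),
]

clean = remove_duplicates(dataset)
print(f"{len(dataset)} records -> {len(clean)} unique records")
# 5 records -> 3 unique records
```

Because KDD'99 is known to contain a large share of duplicated connection records, applying such a filter before training and testing changes the measured accuracy of the compared classifiers, which is exactly the effect the cited paper studies.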