2018
DOI: 10.17671/gazibtd.368583
Intrusion Detection with Machine Learning and Feature Selection Methods

Abstract: As computers and the internet have become an indispensable part of daily life, the number of websites and web-based applications has also increased rapidly. Because many valuable assets such as information, ideas, and money are now shared through websites and applications, information security has become an important and timely issue. To date, software such as firewalls and antivirus programs has been used for computer and system security, but it has not been sufficient. For this reason, existing softw…

Citations: Cited by 21 publications (10 citation statements)
References: 23 publications
“…Kaynar et al. [19] obtained new datasets by applying attribute selection algorithms to a dataset developed for attack detection systems, and then applied and compared the k-nearest neighbor, support vector machine, and extreme learning machine algorithms on these new datasets. As a result, they observed that feature selection methods increase the success rate of all three machine learning algorithms.…”
Section: Literature Review
Mentioning confidence: 99%
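The feature-selection-then-classify comparison described in the statement above can be sketched roughly as follows. This is only a sketch of that evaluation pattern, assuming scikit-learn and numeric arrays X, y already loaded from an intrusion-detection dataset; the SelectKBest/mutual-information selector, k=20, and the classifier settings are illustrative assumptions, and the extreme learning machine is omitted because scikit-learn does not ship one.

```python
# Sketch of the "feature selection, then compare classifiers" pattern described above.
# Assumptions: X, y are numeric arrays from an intrusion-detection dataset;
# SelectKBest with mutual information stands in for the paper's unspecified selectors.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

def compare_with_feature_selection(X, y, k_features=20):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )
    classifiers = {
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf"),
    }
    results = {}
    for name, clf in classifiers.items():
        # Pipeline: scale -> keep the k most informative features -> classify
        pipe = make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_classif, k=k_features),
            clf,
        )
        pipe.fit(X_tr, y_tr)
        results[name] = accuracy_score(y_te, pipe.predict(X_te))
    return results
```

Calling compare_with_feature_selection(X, y) returns one test accuracy per classifier, which is the kind of side-by-side comparison the cited statement refers to.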
“…formula where E is entropy, N is the total sample size, n is the number of classes, and n_s(i) is the number of samples in the i-th class [54].…”
Section: Information Gain
Mentioning confidence: 99%
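The formula itself is cut off in the excerpt above. Based on the stated definitions, the class-entropy expression it most plausibly refers to is the standard one below; this is a reconstruction under that assumption, not a quotation from the source:

```latex
E = -\sum_{i=1}^{n} \frac{n_s(i)}{N}\,\log_2\!\left(\frac{n_s(i)}{N}\right)
```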
“…The SVM suggested by Cortes and Vapnik [14] uses the principle of structural risk minimization. SVM is a machine learning method that divides data into two classes with the help of a hyperplane [15]. SVM is one of the classifiers used for many different tasks, especially in recent years [16].…”
Section: SVM
Mentioning confidence: 99%
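As a small illustration of the separating-hyperplane idea in the statement above, here is a minimal sketch using scikit-learn's SVC; the toy points, the linear kernel, and C=1.0 are assumptions made only for this example.

```python
# Minimal illustration of an SVM separating two classes with a hyperplane.
# The toy data below is purely illustrative.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 0.2],   # class -1
              [3.0, 3.0], [3.5, 2.8], [4.0, 3.5]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# With a linear kernel the separating hyperplane is w . x + b = 0
w, b = clf.coef_[0], clf.intercept_[0]
print("hyperplane normal:", w, "offset:", b)
print("prediction for [2.0, 2.0]:", clf.predict([[2.0, 2.0]]))
```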
“…In this method, the aim is to find a hyperplane that maximizes the distance to the nearest samples of the two classes. The method is typically used for linearly separable data; however, it can also handle nonlinearly separable data, because kernel functions can make the data linearly separable [15]. The working principle of the classifier can be explained with an example: the data is classified by labeling the class to be detected as 1 and all data in the other class as -1.…”
Section: SVM
Mentioning confidence: 99%
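The ±1 labeling convention and the kernel remark above can be illustrated with a toy nonlinearly separable problem; the make_circles data and the RBF kernel choice are assumptions for the example, not taken from the paper.

```python
# Sketch: data that is not linearly separable becomes separable via the RBF kernel;
# the class of interest is labeled +1, everything else -1.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y01 = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
y = np.where(y01 == 1, 1, -1)  # target class -> +1, other class -> -1

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

print("linear kernel accuracy:", linear_svm.score(X, y))  # poor on concentric circles
print("RBF kernel accuracy:", rbf_svm.score(X, y))        # close to 1.0
```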