2012
DOI: 10.1093/jigpal/jzs037

Mutating network scans for the assessment of supervised classifier ensembles

Abstract: As is well known, some Intrusion Detection Systems (IDSs) suffer from high rates of false positives and negatives. A mutation technique is proposed in this study to test and evaluate the performance of a full range of classifier ensembles for Network Intrusion Detection when trying to recognize new attacks. The novel technique applies mutant operators that randomly modify the features of the captured network packets to generate situations that could not otherwise be provided to IDSs while learning. A compre…
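The abstract only sketches the idea, so here is a rough illustration of how a mutant operator of this kind might perturb one feature of a captured connection record at a time. The feature names, value ranges and the two operators (numeric rescaling, categorical substitution) are assumptions for illustration, not the paper's actual mutant operators.

```python
import random

# Hypothetical feature space for a captured connection record; the names and
# legal values below are illustrative, not taken from the paper.
NUMERIC = ("duration", "src_bytes", "dst_bytes")
CATEGORICAL = {"protocol": ["tcp", "udp", "icmp"], "flag": ["SF", "S0", "REJ"]}

def mutate(record, rng):
    """Return a copy of the record with one randomly chosen feature mutated."""
    mutant = dict(record)
    feature = rng.choice(list(NUMERIC) + list(CATEGORICAL))
    if feature in CATEGORICAL:
        # Categorical operator: swap in a different legal value.
        mutant[feature] = rng.choice(
            [v for v in CATEGORICAL[feature] if v != mutant[feature]])
    else:
        # Numeric operator: rescale the original value by a random factor.
        mutant[feature] = round(mutant[feature] * rng.uniform(0.5, 2.0), 3)
    return mutant

scan = {"duration": 0.2, "src_bytes": 450, "dst_bytes": 0,
        "protocol": "tcp", "flag": "S0"}
mutants = [mutate(scan, random.Random(seed)) for seed in range(5)]
```

Feeding such mutants to an already trained ensemble probes how it reacts to traffic that was, by construction, absent from its training data.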

Cited by 6 publications (3 citation statements)
References 38 publications
“…MLP-bagging (AUC = 0.943) had the strongest predictive capacity, followed by MLP-dagging (AUC = 0.928), MLP-RTF (AUC = 0.884) and MLP models (AUC = 0.902). MLP-bagging is more efficient in mitigating volatility and discrimination compared with other ensemble approaches (Pham et al 2017; Sedano et al 2013). The feature selection approach is widely used to test the predictive capacity of variables and to improve model performance by eliminating unwanted or unimportant factors in advance (Pham, Pradhan et al 2016).…”
Section: Discussion
confidence: 99%
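For context, a minimal sketch of the feature-selection step mentioned above: rank the candidate predictors, keep the most informative ones, and compare the model's AUC before and after. The synthetic data, the mutual-information filter and the k=8 cutoff are assumptions for illustration, not the cited authors' procedure.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real conditioning factors.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def auc(model, Xa, Xb):
    """Fit on the training split and return the test-set AUC."""
    model.fit(Xa, y_tr)
    return roc_auc_score(y_te, model.predict_proba(Xb)[:, 1])

baseline = auc(MLPClassifier(max_iter=500, random_state=0), X_tr, X_te)

# Drop the least informative factors before training.
selector = SelectKBest(mutual_info_classif, k=8).fit(X_tr, y_tr)
reduced = auc(MLPClassifier(max_iter=500, random_state=0),
              selector.transform(X_tr), selector.transform(X_te))
print(f"AUC, all features: {baseline:.3f}  AUC, top-8 features: {reduced:.3f}")
```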
“…A threshold function is a Boolean function which determines whether a certain threshold is crossed by the values of its inputs. The percentage bag size indicates the training range size (Sedano et al 2013). Likewise, 16 iterations, 1 seed, 100% bag size (training range size) and MLPnn as the base classifier were set for bagging.…”
Section: Construction of Models and DP Maps
confidence: 99%
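As a rough translation of the reported settings (16 iterations, seed 1, 100% bag size, an MLP neural network as base classifier) into scikit-learn, a hedged sketch follows; it assumes scikit-learn >= 1.2 (older releases use the base_estimator keyword) and synthetic data, and it is not the citing authors' original setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

mlp_bagging = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=1),
    n_estimators=16,   # 16 bagging iterations
    max_samples=1.0,   # 100% bag size: each bootstrap sample matches the training-set size
    random_state=1,    # single fixed seed
)
mlp_bagging.fit(X_tr, y_tr)
print("MLP-bagging AUC:",
      roc_auc_score(y_te, mlp_bagging.predict_proba(X_te)[:, 1]))
```

Each of the 16 MLPs sees a different bootstrap resample; averaging their votes is what dampens the sensitivity to small changes in the training data discussed in the next statement.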
“…We used Bagging to obtain a much improved and more accurate land subsidence model because this algorithm performs well in predicting land subsidence susceptibility, as it is sensitive to small adjustments in the training data [43,46]. Bagging ensembles more effectively reduce uncertainty and bias compared to other ensembles [69]. In addition, this algorithm is capable of reflecting complex non-linear interactions between land subsidence and related factors, although it lacks a statistical significance test, which can limit quantitative hypothesis testing [43].…”
Section: Bagging
confidence: 99%