Benchmarking full version of GureKDDCup, UNSW-NB15, and CIDDS-001 NIDS datasets using rolling-origin resampling

Year: 2021
DOI: 10.1080/19393555.2021.1985191

Cited by 5 publications (2 citation statements)
References 62 publications
“…The improved values for numerals of several attributes have an impact on the development of a suitable and fine-tuned model using ML approaches like SVM, LR, CNN, and KNN [41,42]. It also takes a lot of processing resources to train high dimensional datasets.…”
Section: Feature Normalization (mentioning)
confidence: 99%
“…Akin to the 6-percent-GureKDDCup'99, the three NIDS datasets are chosen as they contained the mandatory IP truncation features: the source IP address and destination IP address. More details of the datasets, data cleansing, and data preparation can be found in our previous work [34].…”
Section: Extended Experimental Empirical Studies (mentioning)
confidence: 99%
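
As an aside, the excerpt above notes that the datasets were selected because they contain the source and destination IP address fields required for IP truncation. A minimal illustrative sketch of what truncating such fields might look like in Python follows; the /24 prefix length and the src_ip / dst_ip field names are assumptions for illustration only, since the excerpt does not specify the authors' actual truncation scheme (see [34] for their preprocessing).

import ipaddress

def truncate_ipv4(ip: str, prefix_len: int = 24) -> str:
    # Hypothetical helper: zero out the host bits of an IPv4 address,
    # keeping only the network prefix (assumed /24 here).
    net = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return str(net.network_address)

# Toy flow record; the field names are illustrative, not the datasets' schema.
record = {"src_ip": "192.168.10.57", "dst_ip": "10.0.3.101"}
truncated = {k: truncate_ipv4(v) for k, v in record.items()}
print(truncated)  # {'src_ip': '192.168.10.0', 'dst_ip': '10.0.3.0'}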