2019 14th Asia Joint Conference on Information Security (AsiaJCIS)
DOI: 10.1109/asiajcis.2019.00-10

Malware Classification using Early Stage Behavioral Analysis

Cited by 19 publications (19 citation statements)
References 11 publications
“…More particularly, the accuracy of the model degrades slightly during the warning level and is boosted after the model is updated, when the batch window is full or an out-of-control state occurs. Figures 8 and 9 show the detailed performance of both the fixed and the adaptive batch window sizes, in terms of detection accuracy (Figure 8) and classification error (Figure 9). As can be observed, in both figures the proposed concept-drift-based adaptive incremental batch performs better than the other fixed-batch-size incremental learning strategies.…”
Section: Figure 5, Base Classifier Selection
confidence: 79%
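The excerpt above describes a concept-drift-based adaptive batch strategy: accuracy degrades during a warning level, and the model is retrained once the batch window fills or an out-of-control state fires. A minimal, stdlib-only sketch of such a scheme, assuming a DDM-style detector (the class name, thresholds, and warm-up period are illustrative assumptions, not the cited paper's code):

```python
import math

class DDMDetector:
    """DDM-style drift detector: tracks the streaming error rate and
    flags 'warning' at 2 sigma and 'out-of-control' (drift) at 3 sigma."""

    def __init__(self, warmup=30):
        self.n = 0
        self.p = 1.0                    # running error probability
        self.s = 0.0                    # its standard deviation
        self.p_min = float("inf")
        self.s_min = float("inf")
        self.warmup = warmup

    def update(self, error):
        """error: 1 if the model misclassified the sample, else 0."""
        self.n += 1
        self.p += (error - self.p) / self.n          # incremental mean
        self.s = math.sqrt(self.p * (1.0 - self.p) / self.n)
        if self.n < self.warmup:
            return "in-control"
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s  # best level seen so far
        level = self.p + self.s
        if level > self.p_min + 3 * self.s_min:
            return "out-of-control"                  # drift: retrain now
        if level > self.p_min + 2 * self.s_min:
            return "warning"                         # accuracy degrading
        return "in-control"

def adaptive_batch_update(stream, retrain, max_batch=100):
    """Retrain when the batch window is full *or* drift is signalled."""
    detector, batch = DDMDetector(), []
    for features, label, predicted in stream:
        batch.append((features, label))
        state = detector.update(int(predicted != label))
        if state == "out-of-control" or len(batch) >= max_batch:
            retrain(batch)
            batch.clear()
```

The fixed-batch baseline mentioned in the excerpt corresponds to dropping the out-of-control condition, so retraining happens only when `len(batch) >= max_batch`.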
See 4 more Smart Citations
“…Another set of experiments was conducted to select the base classifier. The classifiers commonly used in the existing related work were implemented in this study for comparison: Support Vector Machine (SVM) [44], Naïve Bayes (NB) [45], Logistic Regression [46], Random Forest (RF) [47], XGBoost [9], and Deep Learning (DL). These classifiers were trained on the dataset denoted DS1 in Table 1.…”
Section: Figure 4, Comparison of Feature Selection Techniques
confidence: 99%
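The base-classifier selection described above (train each candidate on DS1, keep the best performer on held-out data) can be sketched as a generic harness. The two toy classifiers below merely stand in for SVM, NB, RF, XGBoost, etc.; all function names are illustrative, not from the cited paper:

```python
from collections import Counter

def majority_class(train):
    """Baseline: always predict the most frequent training label."""
    label = Counter(y for _, y in train).most_common(1)[0][0]
    return lambda x: label

def one_nearest_neighbour(train):
    """1-NN over squared Euclidean distance."""
    def predict(x):
        _, y = min(train,
                   key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
        return y
    return predict

def select_base_classifier(candidates, train, heldout):
    """Train every candidate, score it on held-out data, return the winner
    as a (name, accuracy) pair."""
    def accuracy(model):
        return sum(model(x) == y for x, y in heldout) / len(heldout)
    scored = {name: accuracy(fit(train)) for name, fit in candidates.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]
```

Usage: `select_base_classifier({"majority": majority_class, "1-nn": one_nearest_neighbour}, train, heldout)` returns whichever candidate scores highest; in the cited study the same role is played by cross-comparing the six listed classifiers on DS1.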
See 3 more Smart Citations