2016 IEEE Symposium Series on Computational Intelligence (SSCI)
DOI: 10.1109/ssci.2016.7850079

Detecting malicious URLs using machine learning techniques

Cited by 100 publications (34 citation statements)
References 19 publications
“…
| Reference | Classifier | Dataset sizes | Accuracy |
|---|---|---|---|
| … | …5 | 24,520 / 138,925 | 99.78% |
| (Zhao and Hoi, 2013) | Classic Perceptron | 990,000 / 10,000 | 99.49% |
| (Patil and Patil, 2018) | Random Forest | 26,041 / 26,041 | 99.44% |
| (Zhao and Hoi, 2013) | Label Efficient Perceptron | 990,000 / 10,000 | 99.41% |
| (Chen et al., 2014) | Logistic Regression | 1,945 / 404 | 99.40% |
| (Cui et al., 2018) | SVM | 24,520 / 138,925 | 99.39% |
| (Patil and Patil, 2018) | Fast Decision Tree Learner (REPTree) | 26,041 / 26,041 | 99.19% |
| (Zhao and Hoi, 2013) | Cost-sensitive Perceptron | 990,000 / 10,000 | 99.18% |
| (Patil and Patil, 2018) | CART | 26,041 / 26,041 | 99.15% |
| (Jain and Gupta, 2018b) | Random Forest | 2,141 / 1,918 | 99.09% |
| (Patil and Patil, 2018) | J48 | 26,041 / 26,041 | 99.03% |
| (Verma and Dyer, 2015) | J48 | 11,271 / 13,274 | 99.01% |
| (Verma and Dyer, 2015) | PART | 11,271 / 13,274 | 98.98% |
| (Verma and Dyer, 2015) | Random Forest | 11,271 / 13,274 | 98.88% |
| (Shirazi et al., 2018) | Gradient Boosting | 1,000 / 1,000 | 98.78% |
| (Cui et al., 2018) | Naïve-Bayes | 24,520 / 138,925 | 98.72% |
| (Cui et al., 2018) | C4.5 | 356,215 / 2,953,700 | 98.70% |
| (Patil and Patil, 2018) | Alternating Decision Tree | 26,041 / 26,041 | 98.48% |
| (Shirazi et al., 2018) | SVM (Linear) | 1,000 / 1,000 | 98.46% |
| (Shirazi et al., 2018) | CART | 1,000 / 1,000 | 98.42% |
| (Adebowale et al., 2019) | Adaptive Neuro-Fuzzy Inference System | 6,843 / 6,157 | 98.30% |
| (Vanhoenshoven et al., 2016) | Random Forest | 1,541,000 / 759,000 | 98.26% |
| (Jain and Gupta, 2018b) | Logistic Regression | 2,141 / 1,918 | 98.25% |
| (Patil and Patil, 2018) | Random Tree | 26,041 / 26,041 | 98.18% |
| (Shirazi et al., 2018) | k-Nearest Neighbours | 1,000 / 1,000 | 98.05% |
| (Vanhoenshoven et al., 2016) | Multi-Layer Perceptron | 1,541,000 / 759,000 | 97.97% |
| (Verma and Dyer, 2015) | Logistic Regression | 11,271 / 13,274 | 97.70% |
| (Jain and Gupta, 2018b) | Naïve-Bayes | 2,141 / 1,918 | 97.59% |
| (Vanhoenshoven et al., 2016) | k-Nearest Neighbours | 1,541,000 / 759,000 | 97.54% |
| (Shirazi et al., 2018) | SVM (Gaussian) | 1,000 / 1,000 | 97.42% |
| (Vanhoenshoven et al., 2016) | C5.0 | 1,541,000 / 759,000 | 97.40% |
…”
Section: Reference (mentioning)
confidence: 99%
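The comparison above boils down to one recurring pipeline: hand-crafted URL features fed to an off-the-shelf classifier. Below is a minimal sketch of that pipeline, assuming scikit-learn, illustrative lexical features, and a tiny placeholder corpus; it does not reproduce the exact features or data of any cited paper.

```python
# Minimal sketch of the shared pipeline: lexical URL features -> classifier.
# Features and inline corpus are illustrative placeholders only.
import re
from urllib.parse import urlparse

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def lexical_features(url):
    """Common lexical features for malicious-URL detection."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.netloc
    return [
        len(url),                       # total URL length
        len(host),                      # hostname length
        host.count("."),                # subdomain depth
        url.count("-"),                 # hyphens, frequent in phishing domains
        url.count("@"),                 # '@' can hide the real host
        sum(c.isdigit() for c in url),  # digit count
        1 if re.search(r"\d{1,3}(\.\d{1,3}){3}", host) else 0,  # raw-IP host
    ]


# Placeholder corpus: (url, label) pairs, label 1 = malicious.
data = [
    ("http://example.com/index.html", 0),
    ("https://news.bbc.co.uk/sport", 0),
    ("https://en.wikipedia.org/wiki/URL", 0),
    ("https://docs.python.org/3/library", 0),
    ("http://192.168.13.7/paypal.com/login-update", 1),
    ("http://secure-paypa1.com.verify-account.ru/signin", 1),
    ("http://free-prizes-now.win/claim?id=9982117", 1),
    ("http://bankofamerica.secure-login-check.info/auth", 1),
]
X = np.array([lexical_features(u) for u, _ in data])
y = np.array([label for _, label in data])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```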
“…The presence of malicious URLs in phishing emails is a key characteristic of spam, and Vanhoenshoven et al. [185] tested the effectiveness of RF in detecting such URLs within spam emails using a publicly available database. The authors concluded that, with an accuracy of 97.69%, RF performed better than several other classification techniques, such as MLP, C4.5 decision trees, SVM, and NB.…”
Section: K: Supervised Systems Discussing Performance Of Different Algorithms (mentioning)
confidence: 99%
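A hedged sketch of that kind of head-to-head evaluation follows: the same features and folds for every model. It assumes scikit-learn and synthetic placeholder data standing in for URL features extracted from spam emails; scikit-learn ships no C4.5, so a CART decision tree stands in for it.

```python
# Side-by-side comparison of the classifier families named above,
# evaluated with a shared 5-fold cross-validation protocol.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic placeholder for real URL feature vectors and labels.
X, y = make_classification(n_samples=600, n_features=12, random_state=0)

models = {
    "RF":   RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP":  MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),  # stand-in for C4.5
    "SVM":  SVC(kernel="rbf"),
    "NB":   GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean={scores.mean():.4f} std={scores.std():.4f}")
```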
“…In this framework, traditional tools and infrastructures are not useful because we deal with big data created at high velocity, and the solutions and predictions must be faster than the threats. Artificial Intelligence and ML analytics have turned out to be among the most powerful tools against cyberattackers (see [35][36][37][38][39][40][41]), but obtaining actionable knowledge from a database of cybersecurity events by applying ML algorithms is usually a computationally expensive task, for several reasons. A cybersecurity database generally contains a huge amount of dynamic, unstructured, but highly correlated and connected data, so we must deal with costly aspects of data quality such as noise, trustworthiness, security, privacy, heterogeneity, scaling, and timeliness [42][43][44].…”
Section: Complexity (mentioning)
confidence: 99%
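One common response to the velocity constraint described in that excerpt is incremental (online) learning, where a fixed-size model is updated on mini-batches of events instead of being retrained over the full database. Here is a minimal sketch, assuming scikit-learn's SGDClassifier and a synthetic stream of placeholder feature vectors.

```python
# Online learning over a high-velocity event stream via partial_fit.
# The random stream is a placeholder for a real feed of security events.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss", random_state=0)  # logistic loss via SGD
classes = np.array([0, 1])  # all labels must be declared before streaming

for _ in range(1000):                        # 1000 mini-batches of 256 events
    X_batch = rng.normal(size=(256, 20))     # placeholder feature vectors
    y_batch = (X_batch[:, 0] > 0.5).astype(int)  # placeholder labels
    clf.partial_fit(X_batch, y_batch, classes=classes)

# The model stays a fixed size however many events have streamed past, and
# each update costs O(batch_size * n_features) -- no full-database retraining.
```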