2017
DOI: 10.1109/tc.2017.2703808

Off-the-Hook: An Efficient and Usable Client-Side Phishing Prevention Application

Abstract: Phishing is a major problem on the Web. Despite the significant attention it has received over the years, there has been no definitive solution. While state-of-the-art solutions have reasonably good performance, they suffer from several drawbacks, including the potential to compromise user privacy, difficulty detecting phishing websites whose content changes dynamically, and reliance on features that are too dependent on the training data. To address these limitations we present a new approach for de…



Cited by 86 publications (54 citation statements). References 31 publications.
“…
Reference | Classifier | # phishing | # legit. | Accuracy
(Marchal et al., 2017) | Gradient Boosting | 100,000 | 1,000 | 99.90%
(Whittaker et al., 2010) | Logistic Regression | 16,967 | 1,499,109 | 99.90%
(Xiang et al., 2011) | Bayesian Network | 8,118 | 4,780 | 99.60%
(Cui et al., 2018) | C4.5 | 24,520 | 138,925 | 99.78%
(Zhao and Hoi, 2013) | Classic Perceptron | 990,000 | 10,000 | 99.49%
(Patil and Patil, 2018) | Random Forest | 26,041 | 26,041 | 99.44%
(Zhao and Hoi, 2013) | Label Efficient Perceptron | 990,000 | 10,000 | 99.41%
(Chen et al., 2014) | Logistic Regression | 1,945 | 404 | 99.40%
(Cui et al., 2018) | SVM | 24,520 | 138,925 | 99.39%
(Patil and Patil, 2018) | Fast Decision Tree Learner (REPTree) | 26,041 | 26,041 | 99.19%
(Zhao and Hoi, 2013) | Cost-sensitive Perceptron | 990,000 | 10,000 | 99.18%
(Patil and Patil, 2018) | CART | 26,041 | 26,041 | 99.15%
(Jain and Gupta, 2018b) | Random Forest | 2,141 | 1,918 | 99.09%
(Patil and Patil, 2018) | J48 | 26,041 | 26,041 | 99.03%
(Verma and Dyer, 2015) | J48 | 11,271 | 13,274 | 99.01%
(Verma and Dyer, 2015) | PART | 11,271 | 13,274 | 98.98%
(Verma and Dyer, 2015) | Random Forest | 11,271 | 13,274 | 98.88%
(Shirazi et al., 2018) | Gradient Boosting | 1,000 | 1,000 | 98.78%
(Cui et al., 2018) | Naïve Bayes | 24,520 | 138,925 | 98.72%
(Cui et al., 2018) | C4.5 | 356,215 | 2,953,700 | 98.70%
(Patil and Patil, 2018) | Alternating Decision Tree | 26,041 | 26,041 | 98.48%
(Shirazi et al., 2018) | SVM (Linear) | 1,000 | 1,000 | 98.46%
(Shirazi et al., 2018) | CART | 1,000 | 1,000 | 98.42%
(Adebowale et al., 2019) | Adaptive Neuro-Fuzzy Inference System | 6,843 | 6,157 | 98.30%
(Vanhoenshoven et al., 2016) | Random Forest | 1,541,000 | 759,000 | 98.26%
(Jain and Gupta, 2018b) | Logistic Regression | 2,141 | 1,918 | 98.25%
(Patil and Patil, 2018) | Random Tree | 26,041 | 26,041 | 98.18%
(Shirazi et al., 2018) | k-Nearest Neighbours | 1,000 | 1,000 | 98.05%
(Vanhoenshoven et al., 2016) | Multi-Layer Perceptron | 1,541,000 | 759,000 | 97.97%
(Verma and Dyer, 2015) | Logistic Regression | 11,271 | 13,274 | 97.70%
…”
Section: Reference
Mentioning confidence: 99%
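
For readers who want to reproduce this kind of accuracy comparison, the following is a minimal sketch of how several of the listed classifier families (gradient boosting, logistic regression, random forest, linear SVM) could be benchmarked with scikit-learn. The synthetic dataset is a stand-in only; each cited paper uses its own phishing/legitimate corpus and feature set, so the numbers produced here are not comparable to the table above.

```python
# Hedged sketch: comparing classifier families on a labelled dataset,
# in the spirit of the accuracy table above. The synthetic data below is
# a placeholder for real phishing/legitimate feature vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for a feature matrix X (e.g. URL/content features) and labels y
# (1 = phishing, 0 = legitimate).
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "Gradient Boosting": GradientBoostingClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "SVM (Linear)": LinearSVC(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.2%}")
```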
“…1. State-of-the-art methods of phishing website detection report classification accuracy (the classification accuracy measure is described in Section 3.4.1) well above 99.50% and use different classification algorithms: ensembles (Gradient Boosting) (Marchal et al., 2017), statistical models (Logistic Regression) (Whittaker et al., 2010), probabilistic algorithms (Bayesian Network) (Xiang et al., 2011), and classification trees (C4.5) (Cui et al., 2018). There is no consensus on which classification algorithm is most accurate for phishing website prediction on datasets with predefined features (Chiew et al., 2019).…”
Section: Introduction
Mentioning confidence: 99%
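
For context on the 99.50% figures quoted above, the classification accuracy measure referred to (Section 3.4.1 of the citing survey) is, in its standard textbook form, the fraction of correctly classified instances; the survey may state it in an equivalent way:

\[
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
\]

where \(TP\) and \(TN\) count correctly classified phishing and legitimate pages, and \(FP\) and \(FN\) count the misclassified ones.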
“…
Content-based detection [20]
Visual and layout similarity [21-24]
Heuristics [25, 26]
URL evaluation [27, 28]
User activities [29]
Evaluation and ranking [30]
Whitelists [31-33]
Blacklists [34, 35]
Hybrid [36, 37]
Network-based (detection and prevention)…”
Section: Anti-phishing Techniques
Mentioning confidence: 99%
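
To make the whitelist/blacklist categories in the taxonomy above concrete, here is a minimal, purely illustrative sketch of the list-lookup pattern that such client-side checks build on; the domain sets and normalisation rules are hypothetical and not taken from any of the referenced systems.

```python
# Hedged sketch of the whitelist/blacklist lookup pattern mentioned above.
# The example domain sets are illustrative only.
from urllib.parse import urlparse

WHITELIST = {"example-bank.com", "example.org"}                # assumed known-good domains
BLACKLIST = {"examp1e-bank.com", "login-verify.example.net"}   # assumed known-bad domains

def classify_url(url: str) -> str:
    """Return 'phishing', 'legitimate', or 'unknown' based on list membership."""
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("www."):
        host = host[4:]
    if host in BLACKLIST:
        return "phishing"
    if host in WHITELIST:
        return "legitimate"
    return "unknown"   # unknown URLs fall through to heuristic/ML detection

print(classify_url("https://www.examp1e-bank.com/login"))  # -> phishing
```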
“…Consequently, our features are independent of any dataset, and more specifically of the data we later processed in experiments. Data-independent features and the choice of machine learning method ensure the generalizability of the fingerprinting technique [24]. Having assessed our technique on a large set of 33 IoT devices (IP cameras, sensors, a coffee machine, etc.)…”
Section: Security Analysis and Discussion
Mentioning confidence: 99%
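
The generalizability claim above rests on features that do not depend on the training data. A common sanity check for such a claim, sketched below under the assumption of two separately collected feature matrices, is to train on one dataset and evaluate on a disjoint one; the random stand-in data is purely illustrative and this is not the cited authors' evaluation protocol.

```python
# Hedged sketch: cross-dataset evaluation as a generalizability check.
# The random arrays stand in for two independently collected captures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_a, y_a = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)  # capture A (stand-in)
X_b, y_b = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)  # capture B (stand-in)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_a, y_a)                                 # train on dataset A only
acc = accuracy_score(y_b, clf.predict(X_b))       # evaluate on unseen dataset B
print(f"cross-dataset accuracy: {acc:.2%}")
```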