2014
DOI: 10.1145/2584680
An Anti-Phishing System Employing Diffused Information

Abstract: The phishing scam and its variants are estimated to cost victims billions of dollars per year. Researchers have responded with a number of anti-phishing systems, based either on blacklists or on heuristics. The former cannot cope with the churn of phishing sites, while the latter usually employ decision rules that are not congruent to human perception. We propose a novel heuristic anti-phishing system that explicitly employs gestalt and decision theory concepts to model perceptual similarity. Our system is eva…

Cited by 36 publications (23 citation statements)
References 59 publications
“…Some detection methods rely on URL lexical obfuscation characteristics [12], [33] and webpage-hosting-related features [30], [39] to decide whether a webpage is a phish. The visual similarity of a phish to its target has also been exploited to detect phishes [31], [40]. Visual similarity analysis presupposes that a potential target is known a priori, though, limiting its applicability.…”
Section: Related Work
confidence: 99%
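The URL lexical obfuscation cues this statement refers to can be sketched as a small feature extractor. This is an illustrative, hypothetical feature set, not the one used in the cited papers [12], [33]:

```python
import re
from urllib.parse import urlparse

def lexical_features(url: str) -> dict:
    """Extract a few URL lexical-obfuscation cues commonly used by
    heuristic phishing detectors (illustrative set, not the cited one)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),
        "num_dots_in_host": host.count("."),  # many subdomains is suspicious
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "has_at_symbol": "@" in url,          # classic redirection trick
        "num_hyphens_in_host": host.count("-"),
    }

# A typical obfuscated URL embeds the target's name in a foreign hostname:
features = lexical_features("http://paypal.com-secure-login.example.net/@verify")
```

A classifier would consume such features alongside the hosting-related ones ([30], [39]); the hostname and thresholds here are made up for illustration.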
“…We present the size of the testing sets used to evaluate each system and the provenance of the legitimate set, showing how representative the set is. For example, using popular websites (such as top Alexa sites) [25], [31] as the legitimate set is not representative. The ratio of training to testing instances (Train/Test) indicates the scalability of the method and the ratio of legitimate to phishing instances (Leg/Phish) shows the extent to which the experiments represent a real world distribution (≈ 100/1) [2], [27].…”
Section: Related Work
confidence: 99%
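The point about the legitimate-to-phishing ratio can be made concrete: a detector evaluated on a balanced (1/1) set looks far more precise than the same detector deployed against a realistic (≈ 100/1) traffic mix. The rates below are hypothetical, chosen only to show the effect:

```python
def precision_at_ratio(tpr: float, fpr: float, leg_per_phish: float) -> float:
    """Precision of a detector with given true/false positive rates when
    the test set contains `leg_per_phish` legitimate pages per phish."""
    tp = tpr * 1.0                # one phish per "unit" of traffic
    fp = fpr * leg_per_phish      # false alarms scale with legitimate volume
    return tp / (tp + fp)

# Same hypothetical detector (95% TPR, 1% FPR) under the two distributions:
balanced = precision_at_ratio(tpr=0.95, fpr=0.01, leg_per_phish=1)
realistic = precision_at_ratio(tpr=0.95, fpr=0.01, leg_per_phish=100)
```

Here `balanced` is about 0.99 while `realistic` drops below 0.5, which is why the statement treats the Leg/Phish ratio as a marker of experimental realism.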
“…HTML analysis has also been exploited to this end, often complemented by the use of search engines to identify phishing pages with similar text and page layout [24,28], or by the analysis of the pages linked to (or by) the suspect pages [29]. The main difference from target-independent approaches is that most target-dependent approaches have considered measures of visual similarity between webpage snapshots or embedded images, using a wide range of image analysis techniques, mostly based on computing low-level visual features, including color histograms, two-dimensional Haar wavelets, and other well-known image descriptors normally exploited in the field of computer vision [30,31,12,13]. Notably, only a few works have considered the combination of both HTML and visual characteristics [11,32].…”
Section: Phishing Webpage Detection
confidence: 99%
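The color-histogram comparison named among those low-level visual features can be sketched in a few lines. This is a minimal toy version: coarse RGB quantization plus histogram intersection, with small pixel lists standing in for real page snapshots (the cited systems operate on actual rendered images):

```python
def color_histogram(pixels, bins=4):
    """Quantize RGB pixels into a coarse color histogram (bins per channel),
    normalized so the entries sum to 1."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy "snapshots": a mostly-white page with a blue accent, and a near-clone.
page = [(250, 250, 250)] * 90 + [(0, 60, 200)] * 10
clone = [(248, 249, 250)] * 88 + [(5, 55, 200)] * 12
sim = histogram_intersection(color_histogram(page), color_histogram(clone))
```

A detector would flag a suspect page whose histogram intersection with a known target exceeds some threshold; the coarse binning is what makes slight color shifts (as between `page` and `clone`) still score high.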
“…2). Most of them are based on comparing the candidate phishing webpage against a set of known targets [10,11], or on extracting some generic features to discriminate between phishing and legitimate webpages [12,14].…”
Section: Introduction
confidence: 99%
“…One approach is to compare the content of a presumed phishing Web page with the original Web page being phished, as in [30], [31], [32], [33], [34]. The main shortcoming of such a method is that the site being phished must first be identified.…”
Section: Related Work
confidence: 99%