Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2004
DOI: 10.1145/1014052.1014066

Adversarial classification

Abstract: Essentially all data mining algorithms assume that the data-generating process is independent of the data miner's activities. However, in many domains, including spam detection, intrusion detection, fraud detection, surveillance, and counter-terrorism, this is far from the case: the data is actively manipulated by an adversary seeking to make the classifier produce false negatives. In these domains, the performance of a classifier can degrade rapidly after it is deployed, as the adversary learns to defeat it. Cu…
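The evasion dynamic the abstract describes can be illustrated with a minimal sketch (a hypothetical toy example, not the paper's method): a fixed keyword-count spam filter catches a message until the adversary obfuscates the trigger words, producing exactly the kind of false negative the abstract warns about.

```python
# Toy keyword-based spam filter; SPAM_WORDS and the messages are illustrative
# assumptions, not taken from the paper.
SPAM_WORDS = {"free", "winner", "viagra", "offer"}

def classify(message, threshold=2):
    """Flag a message as spam if it contains >= threshold trigger words."""
    tokens = message.lower().split()
    hits = sum(1 for t in tokens if t in SPAM_WORDS)
    return hits >= threshold

def evade(message):
    """Adversary obfuscates trigger words (e.g., 'free' -> 'f-r-e-e'),
    defeating the static filter without changing the message's meaning."""
    return " ".join(
        "-".join(t) if t.lower() in SPAM_WORDS else t
        for t in message.split()
    )

spam = "winner claim your free offer now"
print(classify(spam))         # True: caught before the adversary adapts
print(classify(evade(spam)))  # False: a false negative after manipulation
```

Because the filter is static while the adversary adapts, its measured accuracy at deployment time overstates its accuracy in the field, which is the degradation the abstract refers to.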


Cited by 776 publications (625 citation statements)
References 17 publications
“…However in the machine learning and pattern recognition literature the issue of the hardness of evasion in adversarial classification problems has not been deeply and formally investigated yet. Most of the works proposed countermeasures against specific kinds of attacks for spam filtering and intrusion detection tasks (see for instance [1][2][3]), and only few of them proposed formal models of adversarial classification tasks [4,5], or analysed the main issues raised by the application of machine learning techniques [6]. Therefore, from an engineering viewpoint the design of accurate and hard to evade classification systems for security applications is still an open problem.…”
Section: Introduction
confidence: 99%
“…And even if the email spam problem were to be solved, it is not obvious that the solution would apply to spam in other media. The general problem of adversarial information filtering [44] -of which spam filtering is the prime example -is likely to be of interest for some time to come.…”
Section: The Spam Ecosystem
confidence: 99%
“…For example, within the Probably Approximately Correct framework, Kearns and Li bound the classification error an adversary can cause with control over a fraction of the training set [10]. Dalvi et al apply game theory to the classification problem [6]. They model the interactions between the classifier and attacker as a game and develop an optimal counter-strategy for an optimal classifier playing against an optimal opponent.…”
Section: Related Work
confidence: 99%
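The game-theoretic view in the last excerpt, where an optimal attacker modifies instances at minimal cost and the classifier anticipates this, can be sketched as follows. This is a hedged toy version under assumed weights and costs, not Dalvi et al.'s actual algorithm: the adversary greedily removes the feature with the best score reduction per unit cost until a linear classifier's decision flips.

```python
# Toy minimal-cost evasion against a linear classifier. The weights, costs,
# and greedy strategy are illustrative assumptions for exposition only.

def score(x, w):
    """Linear classifier score: dot product of features and weights."""
    return sum(wi * xi for wi, xi in zip(w, x))

def cheapest_evasion(x, w, threshold, cost):
    """Greedily zero out the feature with the highest weight-to-cost ratio
    until the instance falls below the decision threshold."""
    x = list(x)
    total_cost = 0.0
    while score(x, w) >= threshold:
        # features the adversary can still turn off
        candidates = [i for i, xi in enumerate(x) if xi and w[i] > 0]
        if not candidates:
            return None, total_cost  # evasion impossible
        i = max(candidates, key=lambda i: w[i] / cost[i])
        x[i] = 0
        total_cost += cost[i]
    return x, total_cost

w = [2.0, 1.5, 0.5]      # classifier weights for three spam features
cost = [1.0, 3.0, 0.5]   # adversary's cost to remove each feature
x = [1, 1, 1]            # a spam instance with all features present
evaded, c = cheapest_evasion(x, w, threshold=2.0, cost=cost)
print(evaded, c)         # [0, 1, 0] 1.5
```

A classifier playing the game optimally would anticipate exactly this modification, for example by accounting for removal costs when setting its threshold, which is the counter-strategy idea the excerpt attributes to Dalvi et al.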