Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2012
DOI: 10.1145/2339530.2339697
Adversarial support vector machine learning

Abstract: Many learning tasks such as spam filtering and credit card fraud detection face an active adversary that tries to avoid detection. For learning problems that deal with an active adversary, it is important to model the adversary's attack strategy and develop robust learning models to mitigate the attack. These are the two objectives of this paper. We consider two attack models: a free-range attack model that permits arbitrary data corruption and a restrained attack model that anticipates more realistic attacks …
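The idea of hardening a learner against bounded data corruption can be illustrated with a minimal sketch. This is not the paper's own formulation; it uses a standard reduction for linear classifiers: when an adversary may perturb each feature by at most eps (an L-infinity ball), the worst-case hinge loss has the closed form hinge(y(w·x + b) − eps·‖w‖₁), so robust training amounts to penalizing the margin by eps·‖w‖₁. All function names and parameters below are illustrative.

```python
import numpy as np

def train_robust_svm(X, y, eps=0.1, lam=0.01, lr=0.1, steps=2000):
    """Linear SVM hardened against L-inf bounded test-time perturbation.

    Worst-case hinge loss under ||delta||_inf <= eps reduces, for a
    linear model, to hinge(y*(w.x + b) - eps*||w||_1), so the inner
    maximization over attacks never has to be solved explicitly.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(steps):
        # Robust margin: ordinary margin shrunk by eps * ||w||_1.
        margins = y * (X @ w + b) - eps * np.abs(w).sum()
        active = margins < 1.0  # examples violating the robust margin
        # Subgradient of mean robust hinge + (lam/2) * ||w||^2.
        gw = lam * w
        gb = 0.0
        if active.any():
            gw += (-(y[active, None] * X[active]).sum(axis=0)
                   + active.sum() * eps * np.sign(w)) / n
            gb += -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)
```

Compared with a standard SVM, the only change is the eps·‖w‖₁ term inside the hinge, which discourages the classifier from relying heavily on any single feature an adversary could manipulate.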

Cited by 104 publications (68 citation statements)
References 19 publications
“…Zhou et al [17] present two attack models for which optimal learning strategies are derived. They formulate a convex optimization problem in which the constraint is defined over the sample space based on the proposed attack models.…”
Section: Related Work
confidence: 99%
“…A version of secure SVMs and Relevance Vector Machines (RVMs) has been proposed in [74,73]. Similarly to the problem of learning with invariances, these classifiers aim to minimize the worst-case loss under a given attacker's model.…”
Section: Proactive Defenses
confidence: 99%
“…However, in many situations there exists an adversary (such as a spammer) who manipulates the training data distribution (e.g., spam emails) so as to attack the classifiers (spam detectors). This scenario challenges the assumption made in most traditional classifiers, and has thus motivated research advances in adversarial learning [1][2][3][4][5][6][7].…”
Section: Introduction
confidence: 99%
“…Recently, Zhou et al [6] introduced a model based on support vector machines that can tackle two kinds of attacks an adversary may carry out. However, the model is only evaluated on synthetically generated data rather than real-world data that evolved under adversarial influence.…”
Section: Introduction
confidence: 99%