2014
DOI: 10.1109/tkde.2013.57

Security Evaluation of Pattern Classifiers under Attack

Abstract: Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities, whose exploitation may severely affect their performance, and consequently limit their practical utility. Extending patte…
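
The abstract refers to inputs being purposely manipulated to undermine a classifier. The sketch below is a minimal, illustrative example of that kind of test-time manipulation against a simple linear model trained on synthetic binary features; the data, feature semantics, and attack strategy are assumptions made here for illustration, not the evaluation framework proposed in the paper.

```python
# Illustrative sketch only: a naive test-time evasion against a linear
# classifier, showing how purposely manipulated inputs can degrade accuracy.
# Synthetic data and the attack strategy are assumptions, NOT the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary "bag-of-words"-style features: malicious samples (y = 1)
# activate the first five features more often than legitimate ones (y = 0).
n_samples, n_features = 2000, 20
y = rng.integers(0, 2, size=n_samples)
prob = np.full((n_samples, n_features), 0.2)
prob[y == 1, :5] = 0.8
X = (rng.random((n_samples, n_features)) < prob).astype(float)

X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]
clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on clean test data:", clf.score(X_test, y_test))

# Naive evasion: the attacker zeroes the five features the classifier weights
# most strongly toward the malicious class (e.g. dropping "spammy" words),
# but only in malicious test samples.
top_features = np.argsort(clf.coef_[0])[-5:]
malicious_rows = np.where(y_test == 1)[0]
X_adv = X_test.copy()
X_adv[np.ix_(malicious_rows, top_features)] = 0.0
print("accuracy under evasion:", clf.score(X_adv, y_test))
```

Even this crude manipulation typically drops accuracy noticeably on the malicious class, which is the sort of gap between classical design assumptions and adversarial reality the paper sets out to evaluate.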

Cited by 380 publications (408 citation statements) · References 41 publications
“…Given the opacity of the learning process it is not always clear, either to the attacked or even the attacker, what the consequences might be or how to identify such an attack. Examples of attack on machine learning include spam filtering [35], malware classifiers [36][37][38] and biometric recognition systems [39]. Machine learning can be subjected to attacks during training and inference phases.…”
Section: N Tuptuk S Hailes (mentioning)
confidence: 99%
“…Research on Machine Learning has traditionally focused on improving the effectiveness of the solutions, without taking into account adversarial settings. However, a current area of research has begun to explore the reliability and security of Machine Learning algorithms under adversarial models [31,21,32,33].…”
Section: Attacks on NIDS (mentioning)
confidence: 99%
“…Adversarial learning that is conducive to cyber security can be conducted using reactive and/or proactive modes of operation [10]. In the reactive mode, the offensive side devises and engages in the attack while the defense is limited to analyzing the attack and developing countermeasures.…”
Section: Introduction (mentioning)
confidence: 99%