2018
DOI: 10.1109/access.2018.2805680

A Survey on Security Threats and Defensive Techniques of Machine Learning: A Data Driven View

Cited by 338 publications (199 citation statements)
References 62 publications
“…These algorithms are susceptible to many threats that either decrease the accuracy and performance of the classifiers or expose sensitive data used in the training process of the classifiers. Examples of the potential threats that can be utilised by attackers include poisoning, evasion, impersonation and inversion attacks [272]. Poisoning is a threat in which the attacker injects malicious samples with incorrect labels into the training dataset to modify the training data distribution, decrease the discrimination power of the classifier in distinguishing between the normal and abnormal behaviour of the system, and ultimately decrease classifier accuracy and performance.…”
Section: ) Security Of ML and Dl Methodsmentioning
confidence: 99%
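The label-flipping poisoning described in this statement can be illustrated with a small, self-contained sketch. The dataset, model, and 20% flip rate below are illustrative assumptions rather than the setup of the surveyed work; the point is only that flipping a fraction of training labels shifts the training distribution and degrades test accuracy.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline classifier trained on clean labels.
clean_acc = accuracy_score(
    y_te, LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
)

# Attacker flips the labels of 20% of the training samples, blurring the
# boundary between normal and abnormal behaviour the classifier must learn.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_acc = accuracy_score(
    y_te, LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).predict(X_te)
)
print(f"clean accuracy: {clean_acc:.3f}, poisoned accuracy: {poisoned_acc:.3f}")
```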
“…The researchers [81,83,84] proposed different techniques to detect adversarial examples in the input and to create different benign and adversarial examples. As we mentioned earlier, the target of the attacker is to add more noise to formulate effective adversarial examples. [A comparison table of attack types (integrity, availability, privacy violation, targeted, and indiscriminate attacks), their effects, and their limitations is interleaved at this point in the source.] According to [83], it is not easy to detect such adaptive attacks, and some detection techniques work effectively while others do not.…”
Section: Detecting Adversarial Examplesmentioning
confidence: 99%
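The "add more noise" crafting step this statement refers to can be sketched with an FGSM-style perturbation against a simple victim model. The logistic-regression victim, the epsilon value, and the naive distance-based detector below are illustrative assumptions, not the methods of the cited works; the detector is included only to echo the point that simple detection is easily bypassed by adaptive attacks.

```python
# Minimal sketch of an FGSM-style evasion example against logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps=0.5):
    """Add eps * sign(dLoss/dx); for logistic regression the gradient of the
    cross-entropy loss w.r.t. the input is (sigmoid(w.x + b) - label) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - label) * w
    return x + eps * np.sign(grad)

x0, y0 = X[0], y[0]
x_adv = fgsm(x0, y0)
print("clean prediction:      ", model.predict(x0.reshape(1, -1))[0])
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])

# A naive detector: flag inputs far from the training distribution. Adaptive
# attackers can keep perturbations small enough to slip under such checks.
mu, sigma = X.mean(axis=0), X.std(axis=0)
z = np.abs((x_adv - mu) / sigma).max()
print("max per-feature z-score of adversarial input:", round(float(z), 2))
```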
“…The bottom line for all defensive approaches is the need for a realistic analysis of the potential attacker's goal, knowledge, capability and strategy. Security and privacy of a machine learning based system have two aspects: safe data and safe model (Liu et al., 2018). The first focuses on the security and privacy issues of the data, which is vulnerable to different attacks, most importantly the injection of invalid/malicious input from adversaries or the leakage of sensitive information.…”
Section: Robustnessmentioning
confidence: 99%
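The "safe data" leakage concern mentioned in this statement can be illustrated with a simple confidence-thresholding membership-inference probe. The overfit random-forest model, the synthetic data, and the 0.9 threshold below are illustrative assumptions rather than the survey's method; the sketch only shows that a model which is markedly more confident on its training records than on held-out records leaks information about which records it was trained on.

```python
# Minimal sketch of a confidence-thresholding membership-inference probe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_tr, X_out, y_tr, y_out = train_test_split(X, y, test_size=0.5, random_state=2)

# An overfit model tends to be far more confident on its own training records.
model = RandomForestClassifier(n_estimators=50, random_state=2).fit(X_tr, y_tr)

def member_guess(x_batch, threshold=0.9):
    """Guess 'member of training set' when top-class confidence exceeds threshold."""
    return model.predict_proba(x_batch).max(axis=1) > threshold

in_rate = member_guess(X_tr).mean()    # fraction of true members flagged
out_rate = member_guess(X_out).mean()  # fraction of non-members flagged
print(f"flagged as member: train={in_rate:.2f}, held-out={out_rate:.2f}")
```

A large gap between the two rates indicates that the model itself discloses membership of individual records, which is exactly the kind of sensitive-information leakage the "safe data" aspect is concerned with.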