2020
DOI: 10.48550/arxiv.2001.09684
Preprint

Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning

Abstract: Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in quickly adapting to the surrounding environments. Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications (e.g., smart grids, traffic controls, and autonomous vehicles) unless its vulnerabilities are addressed and mitigated. Thus, this paper provides a comprehensive survey that discusses emerging attacks in DRL…

Cited by 13 publications (16 citation statements) | References 60 publications
“…To date, there does not exist a defense that ensures complete protection against adversarial ML attacks. In our previous works [9], [21], we have performed an extensive survey of the adversarial ML literature on robustness against adversarial examples, and showed that nearly all defensive measures proposed in the literature can be divided into:…”
Section: Discussion
confidence: 99%
“…used in validating the defense, and always look for a change in the false positive and false negative scores. • Evaluation of the defense mechanism against out-of-distribution examples and transferability-based adversarial attacks is very important. While these recommendations and many others in [9], [21]-[23] can help in designing a suitable defense against adversarial examples, this is still an open research problem in adversarial ML and ripe for investigation for ML-based 5G applications.…”
confidence: 99%
“…In multi-agent environments, the ability of an attacker to create adversarial observations increases significantly [17]. A comprehensive survey on the main challenges and potential solutions for adversarial attacks on DRL is available in [21]. The authors classify attacks in four categories: attacks targeting (i) rewards, (ii) policies, (iii) observations, and (iv) the environment.…”
Section: Related Work
confidence: 99%
“…In multi-agent environments, an attacker's ability to craft adversarial observations increases significantly [11]. Ilahi et al. review emerging adversarial attacks in DRL-based systems and the potential countermeasures to defend against these attacks [17]. The authors classify the attacks as attacks targeting (i) rewards, (ii) policies, (iii) observations, and (iv) the environment.…”
Section: Related Work
confidence: 99%
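Among the four attack categories named in these citation statements, observation-targeting attacks are perhaps the easiest to illustrate. The sketch below shows an FGSM-style perturbation against a hypothetical linear policy; the weights, function names, and epsilon budget are all illustrative assumptions, not a method taken from the surveyed paper.

```python
import numpy as np

# Toy setup (hypothetical, for intuition only): a linear "policy" that
# scores two actions from a 4-dimensional observation.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))  # action-score weight matrix

def action_scores(obs):
    """Score each action for a given observation."""
    return W @ obs

def fgsm_observation_attack(obs, epsilon=0.1):
    """Perturb the observation to lower the score of the agent's
    currently preferred action, keeping each change within +/- epsilon."""
    best = int(np.argmax(action_scores(obs)))
    # For this linear model, the gradient of the chosen action's score
    # with respect to the observation is simply W[best]; FGSM uses only
    # its sign, yielding a bounded, worst-case perturbation direction.
    return obs - epsilon * np.sign(W[best])

obs = rng.normal(size=4)
adv_obs = fgsm_observation_attack(obs)
```

The agent still acts on `adv_obs` as if it were the true state, which is exactly why observation-targeting attacks are effective: the environment is untouched, only the agent's perception is corrupted.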