2022
DOI: 10.1109/tai.2021.3111139
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning

Cited by 74 publications (33 citation statements)
References 60 publications
“…They implemented their adversarial techniques and reported performance results to show that their system is very effective, but they did not address robustness. This issue was later addressed by Ilahi et al, 117 who presented the security challenges and countermeasures for adversarial attacks on RL. They analyzed the vulnerability details in depth, which helps to prevent malicious attacks.…”
Section: AI Techniques For Security And Privacy Preservation
confidence: 99%
“…Data poisoning attacks like label flipping, backdoor attacks, and model poisoning attacks are very common adversarial attacks on DRL and are explored in the literature for IoV applications [3]. In a recent work, the authors provide a comprehensive survey of various attacks on DRL [6]. The authors of this work briefly discuss the possible attacks an adversary can carry out on a DRL model, including attacks on the state, action, reward, and model.…”
Section: Related Work
confidence: 99%
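The label-flipping poisoning mentioned in the statement above can be illustrated with a minimal sketch: a fraction of training labels is reassigned to a different (incorrect) class. The function name and parameters (`flip_fraction`, `num_classes`) are hypothetical, chosen for illustration, not taken from the cited works.

```python
import numpy as np

def flip_labels(labels, flip_fraction, num_classes, rng=None):
    """Label-flipping poisoning sketch (illustrative, not from the cited
    papers): reassign a random fraction of labels to a different class."""
    rng = rng if rng is not None else np.random.default_rng(0)
    poisoned = labels.copy()
    n_flip = int(flip_fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    for i in idx:
        # shift by 1..num_classes-1 so every flipped label is guaranteed wrong
        poisoned[i] = (poisoned[i] + rng.integers(1, num_classes)) % num_classes
    return poisoned
```

A backdoor attack would differ in that the adversary also stamps a trigger pattern onto the flipped samples, so the model misbehaves only when the trigger is present.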
“…Specifically, we propose an attack detection framework against Sybil-based data poisoning attacks in the context of DRL-based mechanisms in IoV applications. Several adversarial data poisoning attacks, such as adversarial random noise, data flipping, and backdoor attacks, are common and explored in the literature for vehicular applications [3], [6]. Different from the existing works, we use Sybil-based adversarial attacks.…”
Section: B. Attack Model
confidence: 99%
“…Adversarial attacks on deep reinforcement learning are often formulated to minimize the expected reward. Because there are multiple attack targets (such as states, actions, environments, and rewards), various attacks are possible in deep reinforcement learning [6]. A well-known adversarial attack on TV gameplay [7] is defined on the state space; it adds an adversarial perturbation δx to the input TV screen x.…”
Section: Related Work
confidence: 99%
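The state-space attack described above — adding a perturbation δx to the input screen x — can be sketched in an FGSM-like form. As a stand-in for a deep Q-network, the sketch below uses a linear Q-function (rows of `W` are per-action weights), whose gradient with respect to the input is simply the corresponding weight row; a real attack [7] would instead backpropagate through the network. All names here are illustrative assumptions.

```python
import numpy as np

def fgsm_state_perturbation(W, x, eps):
    """FGSM-style state perturbation sketch: push observation x in the
    direction that lowers the Q-value of the agent's greedy action.
    W is a stand-in linear Q-network (one row of weights per action)."""
    q = W @ x
    a = int(np.argmax(q))   # the agent's greedy action
    grad = W[a]             # d q[a] / d x for a linear Q-function
    # step against the gradient, bounded in an L_inf ball of radius eps
    return x - eps * np.sign(grad)
```

The perturbation stays within an L-infinity budget of `eps` per pixel/feature, mirroring the "small, imperceptible δx" formulation common in this attack family.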
“…Many adversarial attacks in deep reinforcement learning perturb the state observations to cause the agents to malfunction [5], [6]. The attacks on the state observations allow white-box attacks where the adversary can access the neural networks, such as policy networks and Q-networks, that take the state observations as the input [7]- [9].…”
Section: Introduction
confidence: 99%