2023
DOI: 10.1109/tdsc.2022.3143566
Attacking Deep Reinforcement Learning With Decoupled Adversarial Policy

Cited by 62 publications (55 citation statements)
References 30 publications
“…Some scholars have tried to solve this problem using reinforcement learning [25]. For the FL client selection problem under Non-IID data, a federated learning client selection control framework based on deep Q-learning [26] was proposed to offset the bias of Non-IID data by actively selecting the best subset of devices in each round of communication. However, this strategy requires training the deep Q-network in every round, which results in a massive training volume and a high training cost.…”
Section: Related Work
confidence: 99%
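The quoted passage describes a deep Q-learning driven client selection scheme for federated learning under Non-IID data: an agent scores the candidate devices each communication round and picks the best subset, at the cost of retraining the Q-network every round. The sketch below is only a minimal illustration of that idea under assumed details, not the cited framework's implementation; the class name ClientSelectorDQN, the state construction from client statistics, and the reward signal are all hypothetical.

```python
# Hypothetical sketch of DQN-based client selection for federated learning
# under Non-IID data. Names, state features, and reward design are
# illustrative assumptions, not the cited framework's implementation.
import random
import torch
import torch.nn as nn

class ClientSelectorDQN(nn.Module):
    """Scores each candidate client from a per-client state vector."""
    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # Q-value for "select this client"
        )

    def forward(self, client_states: torch.Tensor) -> torch.Tensor:
        # client_states: (num_clients, state_dim) -> (num_clients,)
        return self.net(client_states).squeeze(-1)

def select_clients(q_net, client_states, k, eps=0.1):
    """Epsilon-greedy top-k selection of clients for one FL round."""
    num_clients = client_states.shape[0]
    if random.random() < eps:
        return random.sample(range(num_clients), k)
    with torch.no_grad():
        q_values = q_net(client_states)
    return torch.topk(q_values, k).indices.tolist()

if __name__ == "__main__":
    num_clients, state_dim, k = 20, 10, 5
    q_net = ClientSelectorDQN(state_dim)
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    # Hypothetical state: e.g. each client's label distribution (Non-IID skew).
    states = torch.rand(num_clients, state_dim)
    chosen = select_clients(q_net, states, k)

    # Hypothetical reward: improvement in global validation accuracy after
    # aggregating the chosen clients' updates (simulated here).
    reward = torch.rand(1).item()

    # One DQN-style update per round -- the recurring training cost that the
    # quoted passage criticizes.
    q_chosen = q_net(states)[chosen]
    loss = ((q_chosen - reward) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"selected clients: {chosen}, loss: {loss.item():.4f}")
```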
“…[5][6][7][8][9][10] These perturbations are imperceptible to human beings but can easily fool DNNs, which poses invisible threats to vision-based automatic decision-making [11][12][13][14][15]. Consequently, the robustness of DNNs faces great challenges in real-world applications [16,17]. For example, the existence of adversarial examples (AEs) can pose severe security threats to traffic sign recognition in autonomous driving.…”
Section: Introduction
confidence: 99%
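The perturbations this passage refers to are adversarial examples. As a concrete illustration of how such an imperceptible perturbation can be crafted, the sketch below implements the well-known fast gradient sign method (FGSM), which is named here only as a representative technique; the toy model, input, and epsilon value are placeholders rather than anything from the cited works.

```python
# Minimal FGSM-style adversarial example sketch (illustrative only; the toy
# model and inputs are placeholders, not taken from the cited works).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.03) -> torch.Tensor:
    """Return x perturbed by eps * sign(grad_x loss), clipped to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small-eps step in the gradient's sign direction keeps the change
    # visually imperceptible while increasing the classification loss.
    perturbed = x_adv + eps * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier standing in for, e.g., a traffic sign recognizer.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)  # placeholder "image"
    y = torch.tensor([0])         # placeholder label
    x_adv = fgsm_attack(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())
```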
“…[15] They are also prone to different attacks, such as inversion attacks [16], mining attacks [17,18], and so forth [19][20][21][22]. In general, machine learning can be divided into supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is widely used for solving classification and regression problems.…”
Section: Introduction
confidence: 99%