Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/832

Failure-Scenario Maker for Rule-Based Agent using Multi-agent Adversarial Reinforcement Learning and its Application to Autonomous Driving

Abstract: We examine the problem of adversarial reinforcement learning for multi-agent domains that include a rule-based agent. Rule-based algorithms are required in safety-critical applications and must work properly in a wide range of situations, so every effort is made to find failure scenarios during the development phase. However, as the software becomes more complicated, finding failure cases becomes difficult. Especially in multi-agent domains, such as autonomous driving environments, it is much harder to find use…
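
The paper's code is not reproduced here, but the idea in the abstract, training adversarial agents with reinforcement learning so that a rule-based agent under test is driven into failure scenarios, can be sketched briefly. The following is a minimal, hypothetical illustration only: a single adversarial NPC car trained with REINFORCE against a toy one-dimensional rule-based ego car. The environment, dynamics, feature and action choices, and the names (rule_based_ego, rollout) are assumptions of this sketch, not the paper's multi-agent method or simulator.

```python
# Hypothetical sketch of adversarial RL against a rule-based agent (not the paper's code).
# A single adversarial NPC car learns, via REINFORCE, to cut in front of a rule-based
# ego car whose only rule is "brake hard if the gap ahead is short".
# Failure scenario = collision (gap <= 0).
import numpy as np

rng = np.random.default_rng(0)

def rule_based_ego(gap):
    # Hypothetical rule-based policy under test: brake when the gap ahead is short.
    return -3.0 if gap < 10.0 else 1.0            # acceleration command [m/s^2]

def rollout(theta, T=60, dt=0.5):
    """Run one episode; the adversarial NPC samples accelerations from a softmax policy."""
    ego_x, ego_v = 0.0, 10.0
    npc_x, npc_v = 30.0, 10.0                     # NPC starts 30 m ahead of the ego car
    accels = np.array([-3.0, 0.0, 3.0])           # NPC action set
    states, actions, collided = [], [], False
    for _ in range(T):
        gap = npc_x - ego_x
        feats = np.array([gap / 50.0, (npc_v - ego_v) / 10.0, 1.0])
        logits = theta @ feats
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        a = rng.choice(3, p=probs)
        states.append(feats); actions.append(a)
        # simple forward-Euler vehicle dynamics
        ego_v = max(0.0, ego_v + dt * rule_based_ego(gap)); ego_x += dt * ego_v
        npc_v = max(0.0, npc_v + dt * accels[a]);           npc_x += dt * npc_v
        if npc_x - ego_x <= 0.0:                  # the rule-based agent failed
            collided = True
            break
    return states, actions, (1.0 if collided else 0.0)   # adversarial reward

theta = np.zeros((3, 3))                          # 3 actions x 3 features
for episode in range(2000):
    states, actions, R = rollout(theta)
    for feats, a in zip(states, actions):         # REINFORCE update (no baseline)
        logits = theta @ feats
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        grad = -np.outer(probs, feats)
        grad[a] += feats                          # gradient of log softmax pi(a | s)
        theta += 0.1 * R * grad
```

In the paper's setting, several NPC policies would be trained jointly against the rule-based ego agent in a richer driving simulator; what the sketch is meant to convey is only the reward structure, in which the adversaries are rewarded whenever the rule-based agent under test fails.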

Cited by 40 publications (32 citation statements) | References 1 publication
“…STARS can discover diverse scenarios exhibiting various accident patterns, which is different from previous studies that conduct RL-based adversarial attacks on driving policies [11], [18], [12], [13], [14]. Most previous studies [11], [18], [12], [13] aim to get direct collision scenarios by designing direct collision rewards.…”
Section: A. The Discovered AV-Responsible Scenarios
confidence: 84%
“…There exist some studies [11], [12], [13], [14] that adopt the Attack-by-Policy strategy to attack an autonomous driving policy. However, most of these studies aim at directly causing accidents [11], [12], [13], rather than discovering the vulnerabilities of the policy under test.…”
Section: B. Attacks on Policies
confidence: 99%
“…However, this approach targets mixed-traffic driving with a single AC and multiple human-driven cars, so it does not consider complex scenarios with more than one non-communicating AC agent. Another work [17] performs adversarial RL for testing a multi-agent driving environment by training more than one rule-based driving model. While the results look promising, the approach only covers cases where the trained adversarial cars are exposed to a single AC.…”
Section: Related Work
confidence: 99%
“…There have been attempts to generate adversarial test cases, mainly either by using Generative Adversarial Networks (GANs) [14] or optimization-based methods [15], [16] targeting the input state of deep neural networks. Furthermore, RL has been used in different styles to test autonomous driving within simulations [17], [18]. However, while RL has shown great results as an adversarial agent, it is still mainly used for testing ACs in single-agent environments.…”
Section: Introduction
confidence: 99%