2015
DOI: 10.48550/arxiv.1511.08779
Preprint

Multiagent Cooperation and Competition with Deep Reinforcement Learning

Cited by 16 publications (26 citation statements)
References 0 publications
“…To empirically support our theoretical results on EMGs, this section is devoted to introducing two simple decentralized extensions of single-agent Deep Reinforcement Learning (DRL) algorithms designed to work with LTL specifications. The first approach extends a popular MARL baseline, I-DQN [30], with temporal logic specifications, while the second is a multi-agent extension of LPOPL, which we referred to in Sec. 1. The extended algorithms described below are employed in the experiments presented in Sec.…”
Section: Deep MARL with Co-safe LTL Goals
confidence: 99%
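The LTL machinery this statement relies on rests on formula progression. As a hedged illustration (not code from the cited paper or from LPOPL), here is a minimal sketch of one-step progression of a co-safe LTL formula; the tuple-based formula encoding is a hypothetical choice made purely for this sketch:

```python
# Minimal sketch of co-safe LTL progression (Bacchus-Kabanza style).
# The tuple-based formula encoding below is a hypothetical choice for
# illustration, not the representation used in LPOPL or in [30].

def prog(phi, true_props):
    """Progress formula `phi` one step, given the set of atomic
    propositions `true_props` holding in the current state."""
    if phi in ("true", "false"):
        return phi
    if isinstance(phi, str):                       # atomic proposition
        return "true" if phi in true_props else "false"
    op = phi[0]
    if op == "and":
        l, r = prog(phi[1], true_props), prog(phi[2], true_props)
        if "false" in (l, r):
            return "false"
        return r if l == "true" else l if r == "true" else ("and", l, r)
    if op == "or":
        l, r = prog(phi[1], true_props), prog(phi[2], true_props)
        if "true" in (l, r):
            return "true"
        return r if l == "false" else l if r == "false" else ("or", l, r)
    if op == "next":                               # X phi1
        return phi[1]
    if op == "eventually":                         # F phi1 == true U phi1
        r = prog(phi[1], true_props)
        return "true" if r == "true" else phi if r == "false" else ("or", r, phi)
    if op == "until":                              # phi1 U phi2
        l, r = prog(phi[1], true_props), prog(phi[2], true_props)
        if r == "true":
            return "true"
        if l == "false":
            return r
        keep = phi if l == "true" else ("and", l, phi)
        return keep if r == "false" else ("or", r, keep)
    raise ValueError(f"unknown operator: {op!r}")

# Example: F(goal) is satisfied once `goal` holds.
assert prog(("eventually", "goal"), {"goal"}) == "true"
```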
“…This allows Independent Q-learning [32] to train multiple agents in a decentralized fashion. Here we consider a deep learning variant of this algorithm (see, e.g., [30]), where each agent is trained with an independent DQN. However, in our case, we adopt a decentralized version of an algorithm that uses LTL specifications and LTL progression instead of classical reward functions (see, e.g., [20]).…”
Section: I-DQN with Co-safe LTL Goals
confidence: 99%
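Combining independent DQNs with progression-derived rewards, a hedged sketch of the decentralized loop this statement describes might look as follows. All names here (`env`, `env.labels(i)`, `act`, `store`, `learn`) are hypothetical placeholders, and the +1/-1 reward convention is a common choice rather than the cited work's exact scheme:

```python
# Hedged sketch: independent DQN agents rewarded by LTL progression
# rather than a classical reward function.

def reward_from_progression(phi, true_props):
    """Progress the goal formula and derive a reward: +1 once the
    co-safe goal is satisfied, -1 if it becomes unsatisfiable, else 0."""
    phi = prog(phi, true_props)            # prog() as sketched above
    if phi == "true":
        return 1.0, phi
    if phi == "false":
        return -1.0, phi
    return 0.0, phi

def train_episode(env, agents, goals):
    """One decentralized episode: agent i keeps its own network
    Q_i(s, a_i; theta_i) and its own progressed LTL goal formula."""
    obs = env.reset()
    phis = list(goals)                     # one LTL goal per agent
    done = False
    while not done:
        actions = [agent.act(o) for agent, o in zip(agents, obs)]
        next_obs, done = env.step(actions)
        for i, agent in enumerate(agents):
            # env.labels(i): atomic propositions currently true for agent i.
            r, phis[i] = reward_from_progression(phis[i], env.labels(i))
            agent.store(obs[i], actions[i], r, next_obs[i], done)
            agent.learn()                  # independent, per-agent DQN update
        obs = next_obs
```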
“…Cooperative CAD environments help in developing agent algorithms that can learn near-globally optimal policies for all the driving agents acting as a cooperative unit. Such environments help in developing agents that learn to communicate [9] and benefit from learning to cooperate [25]. This type of environment will enable the development of efficient fleets of vehicles that cooperate and communicate with each other to reduce congestion, eliminate collisions, and optimize traffic flow.…”
Section: Nature of Tasks
confidence: 99%
“…Independent DQN [25] extends DQN to the cooperative, fully observable multi-agent setting, applied to a two-player Pong environment, in which all agents independently learn and update their own Q-function Q_i(s, a_i; θ_i).…”
Section: Appendix A
confidence: 99%
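As a concrete reading of "each agent independently learns and updates its own Q-function Q_i(s, a_i; θ_i)", here is a minimal PyTorch sketch of one such agent's TD update; the network size and hyperparameters are illustrative assumptions, not values from [25]:

```python
# Minimal sketch of an independent-DQN update: each agent owns its own
# online and target networks for Q_i(s, a_i; theta_i) and treats the
# other agents as part of the environment.
import torch
import torch.nn as nn

class IndependentDQNAgent:
    def __init__(self, obs_dim, n_actions, gamma=0.99, lr=1e-3):
        self.gamma = gamma
        self.q = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                               nn.Linear(64, n_actions))
        self.q_target = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                      nn.Linear(64, n_actions))
        self.q_target.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)

    def update(self, s, a, r, s2, done):
        """One TD step on a batch of this agent's own transitions.
        s, s2: float tensors [B, obs_dim]; a: long tensor [B];
        r, done: float tensors [B]."""
        with torch.no_grad():
            target = r + self.gamma * (1.0 - done) * self.q_target(s2).max(1).values
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```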