2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020
DOI: 10.1109/iros45743.2020.9341303
Scaling Up Multiagent Reinforcement Learning for Robotic Systems: Learn an Adaptive Sparse Communication Graph

Abstract: The complexity of multiagent reinforcement learning (MARL) in multiagent systems increases exponentially with respect to the agent number. This scalability issue prevents MARL from being applied in large-scale multiagent systems. However, one critical feature in MARL that is often neglected is that the interactions between agents are quite sparse. Without exploiting this sparsity structure, existing works aggregate information from all of the agents and thus have a high sample complexity. To address this issue…
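The sparsity idea in the abstract can be illustrated with a minimal sketch: instead of aggregating messages from all agents (dense, O(n²) interactions), each agent keeps only its top-k attention edges, yielding a learned sparse communication graph. The dot-product attention and all names below are illustrative assumptions for this sketch, not the paper's exact architecture.

```python
import numpy as np

def sparse_comm_graph(features, k=2):
    """Toy sketch (assumed form, not the paper's method): build a sparse
    communication graph by keeping, for each agent, only its top-k
    attention weights, then aggregate neighbour features over it."""
    n = features.shape[0]
    # Dot-product attention scores between all agent pairs.
    scores = features @ features.T
    np.fill_diagonal(scores, -np.inf)  # no self-edges
    adj = np.zeros((n, n))
    for i in range(n):
        # Prune all but the k strongest edges per agent -> sparse graph.
        neighbours = np.argsort(scores[i])[-k:]
        w = np.exp(scores[i, neighbours])
        adj[i, neighbours] = w / w.sum()  # softmax over the kept edges
    # Each agent aggregates messages only from its sparse neighbourhood.
    return adj, adj @ features

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))
adj, agg = sparse_comm_graph(feats, k=2)
print((adj > 0).sum(axis=1))  # exactly k incoming edges per agent
```

With k fixed, each agent attends to a constant number of teammates, so per-agent communication cost stays O(k) rather than growing with the team size.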

Cited by 12 publications (4 citation statements)
References 24 publications
“…The utilisation of GNNs can change according to the information that are encoded in the graph. Many notable works [43], [134]- [143] focus the attention on the communication between robots, a crucial aspect for task and motion planning, and exploit graph encoding to efficiently model inter-agents relationships. Usually, graph neural networks are used to learn a communication graph, which is then exploited for example to directly support the agents in making decisions, or to foster multi-agent reinforcement learning frameworks in policy learning.…”
Section: GNNs for Multi-agent Systems
confidence: 99%
“…In section IV, we reviewed several approaches that build a communication graph for multi-robot systems coordination, which can be also applied for exploration tasks [43], [134]- [142]. Some of these works, instead, are specifically designed for the task of exploration, such as [142] where the authors tackle the task of coverage control, with the aim of predicting the distribution of a set of robots in a region such that the likelihood to spot events of interest is maximised.…”
Section: B. Exploration and Navigation
confidence: 99%
“…There has been large success in generating high-performing cooperative teams using MARL in challenging problems such as game playing [4,22,20] and robotics [27]. Recently, the use of communication in MARL has become highly prevalent as agents can share important information to greatly improve team performance [6,26,15,10,39].…”
Section: Related Work
confidence: 99%
“…Multi-agent reinforcement learning (MARL) is also an emerging method to solve the multi-agent problems that contain cooperation requirements such as simultaneous arrival and collision avoidance [9][10][11][12][13][14][15]. In [16], deep recurrent multi-agent actor-critic framework (R-MADDPG) is developed for cooperation under partial observable situations and limited communication capability such as network bandwidth.…”
Section: Introduction
confidence: 99%