2021
DOI: 10.1613/jair.1.12594
Evaluating Strategic Structures in Multi-Agent Inverse Reinforcement Learning

Abstract: A core question in multi-agent systems is understanding the motivations for an agent's actions based on their behavior. Inverse reinforcement learning provides a framework for extracting utility functions from observed agent behavior, casting the problem as finding domain parameters which induce such a behavior from rational decision makers. We show how to efficiently and scalably extend inverse reinforcement learning to multi-agent settings, by reducing the multi-agent problem to N single-agent problems whil…
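
The reduction described in the abstract can be pictured with a short sketch. The following is a hypothetical illustration, not the paper's implementation: the function names and the generic single_agent_irl argument are placeholders, and the reduction shown simply fixes every other agent to its empirically observed policy so that each agent faces an ordinary single-agent IRL problem.

```python
# Hypothetical sketch of the "N single-agent problems" reduction described in
# the abstract (placeholder names; not the paper's code).
from collections import defaultdict

def empirical_policy(trajectories, agent_id):
    """Frequency estimate of one agent's observed policy: state -> most frequent action."""
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for state, joint_action in traj:
            counts[state][joint_action[agent_id]] += 1
    return {s: max(acts, key=acts.get) for s, acts in counts.items()}

def decompose_and_solve(demonstrations, single_agent_irl):
    """demonstrations: agent_id -> list of trajectories of (state, joint_action) pairs,
    where joint_action is indexable by agent id.
    single_agent_irl: any off-the-shelf single-agent IRL solver, called as
    single_agent_irl(own_trajectories, fixed_other_policies).
    Returns one recovered reward estimate per agent."""
    rewards = {}
    for i in demonstrations:
        # Fix all other agents to their observed policies, so that agent i
        # faces an ordinary single-agent decision problem.
        fixed_others = {j: empirical_policy(demonstrations[j], j)
                        for j in demonstrations if j != i}
        # Keep only agent i's own actions from the joint demonstrations.
        own = [[(s, a[i]) for s, a in traj] for traj in demonstrations[i]]
        rewards[i] = single_agent_irl(own, fixed_others)
    return rewards
```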

Cited by 6 publications (5 citation statements)
References 32 publications
“…It utilises a sampling-based approximation by relating MaxEnt IRL to generative adversarial networks (Goodfellow et al 2014), enabling effective reward tuning for complex scenarios. Some recent work extends AIRL to the multi-agent setting (Yu, Song, and Ermon 2019;Fu et al 2021) and the mean-field setting (Yang et al 2018a;Chen et al 2023). However, they are limited to either the countable-agent or the homogeneous many-agent cases.…”
Section: Related Work
confidence: 99%
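
For background on the GAN-based approximation this excerpt refers to: in adversarial IRL (AIRL) the discriminator is typically parameterised through a learned reward estimator f_θ and the current policy π. This is standard background, not text taken from the cited work:

\[
D_\theta(s, a) \;=\; \frac{\exp\!\big(f_\theta(s, a)\big)}{\exp\!\big(f_\theta(s, a)\big) + \pi(a \mid s)}
\]
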
“…First, it aids in comprehending and predicting the objectives of interacting agents, such as determining the destinations of autonomous vehicles (You et al 2019). Second, it enables the design of agent environments with known reward signals to guide their behaviour as desired, akin to mechanism design (Fu et al 2021).…”
Section: Introduction
confidence: 99%
“…To this end, inverse reinforcement learning (IRL) seems to be a promising approach to investigate. However, a challenge is that existing IRL methods (Arora & Doshi, 2021;Fu et al, 2021) are based on the assumption that the underlying game has a unique equilibrium, but many dilemmas have multiple equilibria. A related aspect is the correctness of information pertaining to individual players.…”
Section: P2
confidence: 99%
“…It can be difficult to define a posterior approach in complex systems. The agents must therefore be capable of learning and adapting with time [4], [13], [14]. The branch of study known as "inverse reinforcement studies" looks at how an agent's aims, beliefs, or rewards are affected by how well it performs.…”
Section: Introduction
confidence: 99%