2021
DOI: 10.48550/arxiv.2104.14654
Preprint

Agent-Level Maximum Entropy Inverse Reinforcement Learning for Mean Field Games

Abstract: Mean field games (MFGs) make otherwise intractable reinforcement learning (RL) feasible in large-scale multi-agent systems (MAS) by reducing the interactions among agents to those between a representative individual agent and the mass of the population. However, RL agents are notoriously prone to unexpected behaviours due to reward mis-specification, a problem that is exacerbated as the scale of the MAS grows. Inverse reinforcement learning (IRL) provides a framework to automatically acquire proper reward functions…
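The abstract describes the core mean-field reduction: instead of modeling every pairwise interaction, a representative agent's reward depends only on its own state/action and the population's empirical distribution. The following is a minimal toy sketch of that idea; the function names, the feature map, and the weight vector `w` are our own illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def empirical_distribution(states, n_states):
    """Empirical state distribution mu of the whole population."""
    mu = np.bincount(states, minlength=n_states).astype(float)
    return mu / mu.sum()

def representative_reward(s, a, mu, w):
    """Toy reward for a representative agent.

    Depends only on the agent's own (s, a) and the population
    distribution mu; w is a hypothetical reward weight vector
    that an IRL method would recover from demonstrations.
    """
    features = np.concatenate(([float(s == a)], mu))  # toy feature map
    return float(w @ features)

rng = np.random.default_rng(0)
states = rng.integers(0, 3, size=1000)   # 1000 agents over 3 states
mu = empirical_distribution(states, 3)
w = np.ones(4)
r = representative_reward(s=1, a=1, mu=mu, w=w)
```

However large the population, the representative agent only ever sees the fixed-dimensional summary `mu`, which is what keeps the problem tractable.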

Cited by 4 publications (10 citation statements). References 17 publications.
“…In the Investment-Graphon problem - an adaptation of a problem studied by Chen et al. (2021), which was in turn adapted from Weintraub et al. (2010) - we consider many firms maximizing profits, where profits are proportional to product quality and decrease with total neighborhood product quality, i.e. the graph models overlap in e.g.…”
Section: Methods
confidence: 99%
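The profit structure described above can be sketched as a simple function: profit grows with a firm's own product quality and shrinks with the aggregate quality of its graph neighborhood. This is our own toy formulation with hypothetical coefficients `alpha` and `beta`, not the exact objective from Chen et al. (2021).

```python
import numpy as np

def firm_profit(quality, neighbor_qualities, alpha=1.0, beta=0.5):
    """Toy profit: alpha * own quality - beta * total neighborhood quality."""
    return alpha * quality - beta * float(np.sum(neighbor_qualities))

# A firm with quality 2.0 whose neighbors hold total quality 2.0.
profit = firm_profit(quality=2.0, neighbor_qualities=[1.0, 0.5, 0.5])
```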
“…ERMFNE serves as a generalization of MaxEnt IRL to MFGs. The researchers show that this method effectively recovers ground-truth rewards for MFGs [19].…”
Section: Entropy Regularized Mean Field Nash Equilibrium
confidence: 99%
“…The computational cost of traditional NE solution concepts grows exponentially as the joint state-action space expands with population size [48,15,23]. A MFNE solution concept simplifies this computation by considering the population's asymptotic limit, assuming that agents in the population are homogeneous and that the population size approaches infinity [19]. Mean-field approximations leverage the empirical distribution of aggregated population behavior, reducing interactions to a dual-view interplay between a single agent and the whole population.…”
Section: Entropy Regularized Mean Field Nash Equilibrium
confidence: 99%
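A standard ingredient of entropy-regularized solution concepts like ERMFNE is that the best-response policy becomes a softmax (Boltzmann) distribution over Q-values rather than a hard argmax. The sketch below is a generic illustration of that entropy-regularized policy in our own notation, not the paper's specific algorithm.

```python
import numpy as np

def entropy_regularized_policy(q_values, temperature=1.0):
    """pi(a) proportional to exp(Q(a) / temperature).

    As temperature -> 0 this approaches the greedy argmax policy;
    larger temperatures give higher-entropy (more uniform) policies.
    """
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

pi = entropy_regularized_policy([1.0, 2.0, 3.0], temperature=1.0)
```

The resulting policy is stochastic, which is what makes the induced equilibrium well-suited to maximum-entropy IRL: demonstrated behavior is explained as exponentially more likely where Q is higher.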
“…In this paper, we present a new result which ensures that even with non-uniform interactions, MFC is a good choice for approximating MARL when the reward of each agent is an affine function of the mean-field distributions 'seen' by that agent. We note that the behaviour of agents in a multitude of social and economic networks can be modeled via affine rewards (see the examples given in Chen et al., 2021), and thus, for many cases of practical interest, MFC can approximate MARL with non-uniform interactions.…”
Section: Introduction
confidence: 99%
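The affine-reward condition above can be made concrete with a small sketch: the reward is an inner product between a state-action-dependent coefficient vector and the mean-field distribution, plus an offset. Names and values here are our own illustration of the general form r(s, a, mu) = <c(s,a), mu> + d(s,a), not the paper's notation.

```python
import numpy as np

def affine_reward(c_sa, mu, d_sa):
    """Reward affine in the mean-field distribution mu."""
    return float(np.dot(c_sa, mu)) + d_sa

mu = np.array([0.2, 0.3, 0.5])       # mean-field distribution 'seen' by the agent
c_sa = np.array([1.0, 0.0, -1.0])    # hypothetical state-action coefficients
r = affine_reward(c_sa, mu, d_sa=0.1)
```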