Proceedings of the 10th Hellenic Conference on Artificial Intelligence 2018
DOI: 10.1145/3200947.3201010
Multiagent Reinforcement Learning Methods to Resolve Demand Capacity Balance Problems

Cited by 16 publications (9 citation statements) · References 10 publications
“…All the agents aim at maximizing the expected discounted return $\mathbb{E}[G^i_{s_t}]$. According to (5), all the agents have the same objective.…”
Section: B. POMDP (mentioning)
confidence: 99%
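For readers reconstructing the notation in this excerpt, a standard way to write the shared objective is sketched below in common cooperative-MARL conventions; the discount factor $\gamma$ and team reward $r$ are assumed symbols, not taken verbatim from the citing paper.

% Sketch of the shared objective: each agent i maximizes the expected
% discounted return from the current state s_t under a common team reward.
\[
  G^i_{s_t} \;=\; \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1},
  \qquad
  J^i \;=\; \mathbb{E}\!\left[ G^i_{s_t} \right].
\]
% With a shared reward signal, J^1 = \dots = J^N, which is the sense in which
% "all the agents have the same objective."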
“…The demand-capacity imbalance could be constructed as the interacted networks of flight trajectories, in which agents with interactions were defined as "peers" and the connection of "peers" neighbourhood promoted the information propagation. Independent reinforcement learning, edge-based multi-agent reinforcement learning and agent-based multi-agent learning were proposed according to the features of agents' coordination graph [5]. The hierarchical reinforcement learning frameworks were proposed based on the state-action abstraction and temporal action abstraction by taking advantage of the coordination of agents to handle real-world problems.…”
Section: Introduction (mentioning)
confidence: 99%
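To make the "independent reinforcement learning" variant mentioned in this excerpt concrete, here is a minimal sketch assuming a toy scenario: three flights, one sector counted in 15-minute windows, and a small set of ground-delay actions. The names, capacities and reward weights are illustrative assumptions, not the cited papers' implementation; each flight learns its own tabular values independently under a shared congestion-penalizing reward.

# Independent learners: one tabular learner per flight, each choosing a ground
# delay on its own; coordination emerges only through the shared reward.
import random
from collections import defaultdict

DELAY_OPTIONS = [0, 5, 10, 15]                      # minutes of ground delay
CAPACITY = 2                                        # assumed entries per window
FLIGHT_SECTOR_ENTRY = {"F1": 0, "F2": 0, "F3": 5}   # nominal entry times (min)

def demand_overload(delays):
    """Count entries above capacity in each 15-minute counting window."""
    windows = defaultdict(int)
    for flight, base in FLIGHT_SECTOR_ENTRY.items():
        windows[(base + delays[flight]) // 15] += 1
    return sum(max(0, n - CAPACITY) for n in windows.values())

def train(episodes=2000, alpha=0.1, eps=0.2):
    q = {f: defaultdict(float) for f in FLIGHT_SECTOR_ENTRY}
    for _ in range(episodes):
        delays = {}
        for f in FLIGHT_SECTOR_ENTRY:               # epsilon-greedy action choice
            if random.random() < eps:
                delays[f] = random.choice(DELAY_OPTIONS)
            else:
                delays[f] = max(DELAY_OPTIONS, key=lambda a: q[f][a])
        # Shared reward: penalize overloads heavily, delay minutes lightly.
        reward = -10 * demand_overload(delays) - 0.1 * sum(delays.values())
        for f in FLIGHT_SECTOR_ENTRY:                # one-step (bandit-style) update
            a = delays[f]
            q[f][a] += alpha * (reward - q[f][a])
    return {f: max(DELAY_OPTIONS, key=lambda a: q[f][a]) for f in q}

if __name__ == "__main__":
    print(train())   # one learned delay per flight; overloads push flights apart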
“…The system can be constructed with the network structure, in which agents with interactions are defined as "peers" and connected for information propagation. With this definition, the edge-based and agent-based reinforcement learning leverage the coordination graph to solve the DCB issue [13]. And to enable collaboration among multiple agents, the hierarchical reinforcement learning formulates state-action abstraction and temporal action abstraction to resolve the congestion issue [14].…”
Section: Introduction (mentioning)
confidence: 99%
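The edge-based, coordination-graph view described in this excerpt can likewise be illustrated with a toy sketch: "peer" flights are joined by edges, each edge carries a pairwise payoff over the pair's delay actions, and a joint action is selected to maximize the sum over edges. The graph, action set and payoff below are assumptions for illustration; a realistic solver would use message passing or variable elimination rather than brute force.

# Edge-based coordination graph: global value = sum of pairwise edge payoffs.
from itertools import product

ACTIONS = [0, 10]                       # candidate ground delays (minutes)
EDGES = [("F1", "F2"), ("F2", "F3")]    # "peer" links where trajectories interact

def edge_payoff(delay_i, delay_j):
    # Assumed pairwise payoff: reward separating peers, penalize total delay.
    separation_bonus = 5.0 if delay_i != delay_j else 0.0
    return separation_bonus - 0.1 * (delay_i + delay_j)

def best_joint_action():
    flights = sorted({f for e in EDGES for f in e})
    best, best_value = None, float("-inf")
    # Brute force over the joint action space (fine for a toy 3-agent graph).
    for joint in product(ACTIONS, repeat=len(flights)):
        assignment = dict(zip(flights, joint))
        value = sum(edge_payoff(assignment[i], assignment[j]) for i, j in EDGES)
        if value > best_value:
            best, best_value = assignment, value
    return best, best_value

if __name__ == "__main__":
    print(best_joint_action())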
“…For example, in the air traffic simulator FACET [11], specific location points in the two-dimensional space were taken as agents and trained to decide the safety separation between passing aircraft [12], or each aircraft was used as an agent that learns to allocate an appropriate delay based on GDP. In this mode, scholars have explored many MARL frameworks, such as edge-based, agent-based, and hierarchical MARL frameworks [13], [14].…”
Section: Introduction (mentioning)
confidence: 99%
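As background on the GDP (Ground Delay Program) mechanism referenced in this excerpt, the sketch below shows the simplest ration-by-schedule slot allocation: arrival demand is mapped onto capacity-limited slots, and the delay an aircraft agent would learn to allocate corresponds to the gap between its scheduled time and its assigned slot. The slot size and example schedule are assumed values, not taken from the cited works.

# Ration-by-schedule baseline: earliest scheduled flight gets the next free slot.
SLOT_MINUTES = 10                       # assumed one arrival slot every 10 minutes

def assign_slots(scheduled_arrivals):
    """Greedy slot allocation; returns the induced ground delay per flight."""
    delays, next_slot = {}, 0
    for flight, sched in sorted(scheduled_arrivals.items(), key=lambda kv: kv[1]):
        slot = max(next_slot, sched)    # never assign a slot before the schedule
        delays[flight] = slot - sched
        next_slot = slot + SLOT_MINUTES
    return delays

if __name__ == "__main__":
    print(assign_slots({"F1": 0, "F2": 0, "F3": 25}))  # {'F1': 0, 'F2': 10, 'F3': 0}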