2017
DOI: 10.1155/2017/4580206

A Decentralized Partially Observable Markov Decision Model with Action Duration for Goal Recognition in Real Time Strategy Games

Abstract: Multiagent goal recognition is a difficult yet important problem in many real-time strategy games and simulation systems. Traditional modeling methods either demand detailed domain knowledge of the agents and training datasets for policy estimation, or lack a clear definition of action duration. To solve these problems, we propose a novel Dec-POMDM-T model, combining the classic Dec-POMDP, an observation model for the recognizer, the joint goal with its termination indicator, and time duration variables for actio…
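As a rough illustration of how the variables named in the abstract (joint goal, termination indicator, per-action duration) could drive goal recognition, the sketch below runs a particle filter over a toy real-time-strategy domain. Everything here, including the goal set, the duration model, the observation noise, and all function names, is an assumption made for the example, not the authors' Dec-POMDM-T implementation.

```python
# Illustrative sketch only: a particle filter over joint goals with
# action-duration variables, in the spirit of the Dec-POMDM-T description
# above. The domain and all parameters are assumptions for the example.
import random
from collections import Counter

GOALS = ["attack_base", "harvest", "scout"]            # hypothetical joint goals
ACTIONS = {"attack_base": ["move", "fight"],
           "harvest": ["move", "gather"],
           "scout": ["move", "observe"]}
ALL_ACTIONS = {"move", "fight", "gather", "observe"}

def sample_duration(action):
    """Assumed per-action duration model (roughly geometric, in time steps)."""
    mean = {"move": 3, "fight": 5, "gather": 4, "observe": 2}[action]
    return max(1, int(random.expovariate(1.0 / mean)))

def transition(particle):
    """Advance one particle: finish the current action when its duration
    elapses, possibly terminate/switch the goal, then pick a new action."""
    goal, action, remaining = particle
    remaining -= 1
    if remaining <= 0:
        if random.random() < 0.1:                      # assumed goal-termination prob.
            goal = random.choice(GOALS)
        action = random.choice(ACTIONS[goal])
        remaining = sample_duration(action)
    return (goal, action, remaining)

def likelihood(observation, particle):
    """Assumed observation model: the recognizer sees a noisy action label."""
    _, action, _ = particle
    return 0.8 if observation == action else 0.2 / (len(ALL_ACTIONS) - 1)

def recognize(observations, n_particles=2000):
    """Return a count of goal hypotheses after filtering the observation stream."""
    particles = [(g, a, sample_duration(a))
                 for g in GOALS for a in ACTIONS[g]
                 for _ in range(n_particles // (len(GOALS) * 2))]
    for obs in observations:
        particles = [transition(p) for p in particles]
        weights = [likelihood(obs, p) for p in particles]
        # Resample in proportion to the observation likelihood.
        particles = random.choices(particles, weights=weights, k=len(particles))
    return Counter(p[0] for p in particles)

if __name__ == "__main__":
    print(recognize(["move", "move", "fight", "fight", "fight"]))
```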

Cited by 4 publications (2 citation statements) | References 21 publications
“…Algorithms developed for inference and learning in probabilistic graphical models such as DBNs can be modified and applied to optimize policies for SMDPs (Hoffman et al., 2012; Yin et al., 2016). Multiagent planning (MAP) and coordination tasks (e.g., multirobot cooperation in robot soccer, in search-and-rescue operations, or in engagements in military and security applications) typically require asynchronous decisions by the agents, as well as frequent updating of inferences about each other's goals and plans, and hence revisions in each agent's own goals and plans (Jiao, Xu, Yue, Wei, & Sun, 2017). They are thus naturally modeled by combining the partial observability of POMDPs with the random timing of SMDPs.…”
Section: Decentralized Multiagent Control: POMDP, Dec-POMDP, SMDP
confidence: 99%
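To make that combination concrete, the following sketch tracks a belief over hidden states with a Bayes filter (the POMDP part) while each action draws a random duration and discounting is applied per elapsed time step (the SMDP part). The two-state toy domain, the noise levels, and all function names are assumptions for illustration, not the algorithms of the works cited above.

```python
# Minimal sketch (assumptions, not the cited algorithms): belief tracking for a
# tiny partially observable process whose actions have random durations, so
# discounting accumulates per elapsed step as in an SMDP.
import random

STATES = ["s0", "s1"]
GAMMA = 0.95

def step(state, action):
    """Assumed toy dynamics: noisy observation, random sojourn time."""
    duration = random.randint(1, 4)                        # SMDP: random action duration
    next_state = "s1" if (state == "s0" and action == "go") else "s0"
    obs = next_state if random.random() < 0.85 else random.choice(STATES)
    reward = 1.0 if next_state == "s1" else 0.0
    return next_state, obs, reward, duration

def obs_prob(obs, state):
    return 0.85 if obs == state else 0.15

def trans_prob(next_state, state, action):
    intended = "s1" if (state == "s0" and action == "go") else "s0"
    return 1.0 if next_state == intended else 0.0

def belief_update(belief, action, obs):
    """POMDP-style Bayes filter over hidden states at a decision epoch."""
    new_belief = {}
    for s2 in STATES:
        p = obs_prob(obs, s2) * sum(trans_prob(s2, s1, action) * belief[s1]
                                    for s1 in STATES)
        new_belief[s2] = p
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

if __name__ == "__main__":
    belief, state = {"s0": 0.5, "s1": 0.5}, "s0"
    ret, discount = 0.0, 1.0
    for _ in range(5):
        action = "go" if belief["s0"] >= 0.5 else "stay"
        state, obs, reward, duration = step(state, action)
        ret += discount * reward
        discount *= GAMMA ** duration                      # SMDP: discount by elapsed time
        belief = belief_update(belief, action, obs)
    print("belief:", belief, "discounted return:", round(ret, 3))
```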
“…MDHMM, a statistical model for signal analysis, is widely used in pattern recognition and in financial applications; much of the research focuses on applications such as segmenting movements to control a multifunctional prosthetic hand, measurement assessment, and face recognition [15][16][17][18]. Reference [19] proposed a zero-delay MDHMM, evaluated in terms of fitting capacity and prediction power, to capture the evolution of foreign exchange rate data under trading environments of different frequencies.…”
Section: Introduction
confidence: 99%