2012
DOI: 10.1609/icaps.v22i1.13520

Improved Non-Deterministic Planning by Exploiting State Relevance

Abstract: We address the problem of computing a policy for fully observable non-deterministic (FOND) planning problems. By focusing on the relevant aspects of the state of the world, we introduce a series of improvements to the previous state of the art and extend the applicability of our planner, PRP, to work in an online setting. The use of state relevance allows our policy to be exponentially more succinct in representing a solution to a FOND problem for some domains. Through the introduction of new techniques for av…
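The abstract's central idea, a policy whose entries are conditioned only on the relevant state variables, can be illustrated with a minimal sketch. This is a toy for intuition only, not PRP's actual data structures; all names, variables, and actions below are invented for the example.

```python
# Toy illustration of a partial-state policy, per the abstract's claim:
# conditioning an entry on only the relevant facts lets it cover
# exponentially many complete states. NOT PRP's implementation.

def lookup(policy, state):
    """Return the action of the first policy entry whose partial-state
    condition is satisfied by (i.e., is a subset of) the complete state."""
    for condition, action in policy:
        if condition <= state:
            return action
    return None  # no entry applies: the policy is incomplete here

# A complete state assigns a value to every variable.
state = frozenset({("robot", "room1"), ("door", "open"), ("light", "on")})

# Each entry mentions only the fact relevant to choosing its action, so
# the first entry covers every complete state where the door is open,
# regardless of the values of the other variables.
policy = [
    (frozenset({("door", "open")}), "go-through-door"),
    (frozenset({("door", "closed")}), "open-door"),
]

print(lookup(policy, state))  # -> go-through-door
```

A flat-state policy would need one entry per reachable complete state; keying on partial states collapses all states that agree on the relevant facts into a single entry, which is the succinctness the abstract refers to.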

Cited by 63 publications (81 citation statements) · References 11 publications
“…In the meantime, the action-based planning community studied how to handle general nondeterminism quite extensively in the past years, following different approaches such as, for instance, reactive planning systems [4], deductive planning [45], model checking [20], and, especially, fully observable nondeterministic planning (FOND planning) [7,37,38,42]. However, these approaches to nondeterministic action-based planning do not support flexible plans and temporal uncertainty, and do not account for controllability issues.…”
Section: Limitations of the Current Approach
confidence: 99%
“…We have evaluated the performance of the state-of-the-art FOND planner PRP over the five benchmarks described above. We chose PRP over other FOND planners such as FIP (Fu et al 2011), since PRP has been shown to clearly outperform FIP over the International Planning Competition FOND benchmark (Muise, McIlraith, and Beck 2012). Behavior Composition problems can also be mapped into safety games (De Giacomo and Felli 2010), which are then solved with game solvers typically using model checking techniques.…”
Section: Experimental Evaluation
confidence: 99%
“…The second contribution of this paper involves an empirical evaluation of three existing state-of-the-art systems that are able to synthesize such type of non-classical plans, namely, one automatic FOND planner and two model checking based game solver systems. In particular, we evaluate our encoding proposal using state-of-the-art FOND planner PRP (Muise, McIlraith, and Beck 2012) and compare it with two competitive game solver verification frameworks, namely, McMAS (Lomuscio, Qu, and Raimondi 2009) and NuGaT (based on NuSMV (Cimatti et al 2000)), 1 on various non-trivial classes of composition instances. Interestingly, the results obtained suggest that, despite the high computational complexity of the task at hand, the existing tools can already handle realistically sized composition instances.…”
Section: Introduction
confidence: 99%
“…We call such assumptions stochastic fairness. Plans in this setting are called strong-cyclic, and their importance is evidenced by the existence of several tools for finding them, e.g., NDP (Alford et al 2014), FIP (Fu et al 2016), myND (Mattmüller et al 2010), Gamer (Kissmann and Edelkamp 2011), PRP (Muise, McIlraith, and Beck 2012), GRENADE (Ramírez and Sardiña 2014), and FOND-SAT (Geffner and Geffner 2018). Such policies ensure the goal with probability 1 (Geffner and Bonet 2013).…”
Section: Introduction
confidence: 99%